Ultimately, RAW formats aren't that complex, and camera firmware is mostly developed in countries that don't have strong open source software traditions.
It's some binary parsing, reading metadata, maybe doing some decompression: a thousand lines of C++ on average for each format. These aren't complex codecs like HEVC; the closest they get to JPEG-level complexity is embedding actual JPEGs as thumbnails!
Cameras absolutely could emit DNG instead, but that would introduce more development friction: coordination (with Adobe), potentially a language barrier, and potentially making it harder to do experimental features.
Photographers rarely care, so it doesn't appreciably impact sales. Raw processing software packages have generally good support available soon after new cameras are released.
However, Fujifilm's lossless compressed raw actually does a decent job of keeping file sizes down (about 50% to 60% of the uncompressed size) while maintaining good write speeds during burst shooting.
> Cameras absolutely could emit DNG instead, but that would introduce more development friction: coordination (with Adobe), potentially a language barrier, and potentially making it harder to do experimental features.
I think this is being too generous.
DNG is just an offshoot of TIFF. I wrote a basic DNG parser without ever having read up on TIFF before, and it really isn't that hard.
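For the skeptical, here's roughly what the parsing amounts to: a minimal sketch in Python (the "photo.dng" path is hypothetical). TIFF-family files start with a byte-order mark, the magic number 42, and an offset to the first IFD, which is just a counted list of 12-byte tag entries.

```python
import struct

def read_ifd_entries(path):
    """Minimal TIFF/DNG header walk: byte order, magic check, first IFD's tags."""
    with open(path, "rb") as f:
        data = f.read()
    # Bytes 0-1 are the byte order: "II" little-endian, "MM" big-endian.
    endian = "<" if data[:2] == b"II" else ">"
    magic, ifd_offset = struct.unpack(endian + "HI", data[2:8])
    assert magic == 42, "not a TIFF-family file"
    # An IFD is a 2-byte entry count followed by 12-byte entries.
    (count,) = struct.unpack(endian + "H", data[ifd_offset:ifd_offset + 2])
    entries = {}
    for i in range(count):
        off = ifd_offset + 2 + 12 * i
        tag, typ, n, value = struct.unpack(endian + "HHII", data[off:off + 12])
        entries[tag] = (typ, n, value)  # `value` is an offset when data is large
    return entries

# Tag 50706 is DNGVersion; its presence distinguishes a DNG from a plain TIFF.
tags = read_ifd_entries("photo.dng")
print("is DNG:", 50706 in tags)
```

Everything else (image strips, maker notes, sub-IFDs) hangs off those tag values.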
As far as experimental features go, there's room in the spec for injecting your own stuff, similar to MakerNote in EXIF, if I recall correctly.
If you are planning to do experimental stuff, I’d say what Apple pulled off with ProRAW is the most innovative thing that a camera manufacturer has done in forever. They worked with Adobe to get it into the spec. All of these camera manufacturers have similar working relationships with Adobe, so there’s really no excuse. And if you can’t wait that long, again, MakerNote it.
In my opinion, custom RAW formats are a case study in “Not Invented Here” syndrome.
It took a long time for Canon's CR3 raw format to be supported by darktable because, although the format itself had been reverse engineered, the developers feared that it was covered by a patent and that they risked a lawsuit by integrating it into DT. IIRC, they had attempted to contact Canon legal to obtain some sort of waiver, without success.
In fact I'm not sure how that saga ended, but CR3 support was finally added a few years after the release of the Canon mirrorless cameras that output CR3.
I think that might be why a lot of camera makers don't care to use DNG - it's easier to make their own format and easy enough for others to reverse engineer it.
One thing that open source libraries do tend to miss is that very important extra metadata. For example, Phase One IIQ files have an embedded sensor profile or a full-on black frame that is not yet encoded into the raw data the way it typically is for a NEF or DNG from many cameras. From a quick scan of the code, rawspeed does seem to handle this, though.
It can get more tricky - Sinar digital backs have an extra dark frame file (and flat frame!) that is not part of the RAW files, and that is not handled by any open source library to my knowledge - though I did write a basic converter myself to handle it: https://github.com/mgolub2/iatodng_rs
I'm not sure how DNG would be able to handle having both a dark and flat frame without resorting to applying them to the raw data and saving only the processed (still unbayered) data.
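For reference, baking them in would look like the standard calibration below (a rough numpy sketch of the usual dark/flat correction). The point is that after this subtraction and division, the original calibration frames are unrecoverable from the file, which is exactly the information loss being described:

```python
import numpy as np

def calibrate(raw, dark, flat):
    """Standard dark/flat-frame correction on still-mosaiced sensor data."""
    raw = raw.astype(np.float64)
    signal = raw - dark                        # remove fixed-pattern/thermal signal
    gain = flat.astype(np.float64) - dark      # per-pixel sensitivity map
    gain /= gain.mean()                        # normalize to preserve overall exposure
    return signal / np.clip(gain, 1e-6, None)  # clip guards against dead pixels
```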
> One thing that open source libraries do tend to miss is that very important extra metadata - for example, Phase One IIQ files have an embedded sensor profile or full on black frame that is not yet encoded into the raw data like it typically is for a NEF or DNG from many cameras.
In astronomy/astrophotography the FITS format[1] is commonly used, which supports all these things and is, as the name suggests, extremely flexible. I wonder why it never caught on in regular photography.
Oh interesting! This seems like it would be a good fit ;)
Especially for really old setups that had RGB color wheels and multiple exposures, exactly like a multispectral astro image might. Phase One also has a multispectral capture system for cultural heritage, which just shoots individual IIQs to my knowledge… It would also work great for multiple pixel-shift shots.
Possibly the engineers just didn't know about it when they were asked to write the firmware? It's funny; I think most RAW formats are just weird TIFFs to some degree, so why not use FITS instead?
Maybe? I'm not familiar enough with DNG to say. Possibly that wasn't a thing when Phase One first started using IIQ? I doubt it was around when Sinar started. In fact, the last two Sinar backs (esprit 65 & S30|45) do offer DNG as an option!
> Cameras absolutely could emit DNG instead, but that would introduce more development friction: coordination (with Adobe), [..]
Technically speaking, implementing DNG would be another development activity on top of a RAW export, because RAW also has a purpose in development and tuning of the camera and its firmware.
It is supposed to be raw data from the sensor with some additional metrics streamed in, just sufficiently standardized to be used in the camera-vendors' toolchain for development.
It just "happens" to also be available for the end-user to select after product launch. Supporting DNG would mean adding an extra feature and then hiding the RAW option again.
I can imagine it's hard to make this a priority in a project plan, since most of the objectives are already achieved by saving in RAW.
> It is supposed to be raw data from the sensor with some additional metrics streamed in, just sufficiently standardized to be used in the camera-vendors' toolchain for development.
This is what I was thinking, that there are potentially so many RAW formats because there are so many sensors with potentially different output data. There should be a way to standardize this though.
Yeah, but it's not standardised because its output is so close to "bare metal"; it gets wrapped into a standardised format a few steps later, when a JPG/HEIC/... is created.
Supporting DNG means that those few steps later it should be standardised into ANOTHER RAW-equivalent. A format which happens to be patented and comes with a license and legal implications.
Among them is Adobe's right to a no-cost license to every method you used to make this conversion from your proprietary bare-metal sensor data. This is not trivial, because if you're a vendor working on sensor tech, you wouldn't want to be required to share all your processing with Adobe for free...
I have no knowledge of DNG; what I was suggesting is that someone should devise some kind of extensible, self-describing format that can be used in place of RAW without losing any sensor data, as happens with JPEG/HEIC/etc.
Well, DNG ("Digital Negative") is such a format, defined and patented by Adobe, but with a license allowing free use under certain conditions.
The conditions are somewhat required to make sure that Adobe remains in control of the format, but at the same time they create a commitment and legal implications for anyone adopting it.
I disagree. Bufferoverflow frames raw formats as something that's really only there for R&D purposes, with availability to photographers being more or less an afterthought. In reality, as Narretz points out, getting access to the raw sensor data is a key feature for many photographers; it's an essential aspect of the product from a user perspective.
Since you disagree: where in this thread did anyone state the opposite of what you just wrote, who said that RAW is NOT a key feature to many photographers?
> It is supposed to be raw data from the sensor with some additional metrics streamed in, just sufficiently standardized to be used in the camera-vendors' toolchain for development. It just "happens" to be also available to select for the end-user after product-launch.
Just as I wrote. CR3 is used by Canon also during development and tuning of their sensors and cameras.
DNG would not replace CR3, because CR3 would still be needed before launch, and Canon has no incentive to change their entire internal toolchain to comply with Adobe's DNG specification.
Especially not because the DNG format is patented and allows Adobe to revoke the license in case of dispute...
First of all, it does not "just happen" to be selectable. RAW contains information that is not available in a JPG or PNG, but which is crucial to a lot of serious photographers.
Second, the native raw images do include a ton of adjustments in brightness, contrast and color correction, all of which gets lost when you open the image file with apps from companies other than the camera vendor. E.g. open a Nikon raw in NC Software and then in Lightroom: big difference. Adobe has some profiles that get near the original result, but the Nikon raw standards are often better.
So DNG would absolutely be an advantage, because then at least these color corrections could be implemented natively and not get lost in the process.
No one is disputing the advantage of RAW. I tried to provide the view from a pure development perspective, looking at a feature backlog.
It "just happens" to be selectable because it is a byproduct of the internal development: The existing RAW format is used internally during development and tuning of the product, and is implemented to work with vendor-internal processes and tools.
Supporting DNG would require a SEPARATE development, and it would still not replace a proprietary RAW-format in the internal toolchain.
(because the DNG patent-license comes with rights for Adobe as well as an option to revoke the license)
Most people who shoot RAW don't care about the in-camera picture adjustments, so they don't care whether the RAW shows up looking like it did in the camera; we apply our own edits anyway. If we need something like that, we shoot JPEG.
A patented format where Adobe standardized the exact syntax for each parameter, with mandatory and optional elements to be compliant, and (!) a patent license with some non-trivial implications which is also only valid if the implementation is compliant.
In a development environment, this format competes with an already-implemented proprietary RAW-format which already works and can be improved upon without involvement of a legal department or 3rd party.
That is the intended purpose of a patent. From WIPO [1]:
> The patent owner has the exclusive right to prevent or stop others from commercially exploiting the patented invention for a limited period within the country or region in which the patent was granted. In other words, patent protection means that the invention cannot be commercially made, used, distributed, imported or sold by others without the patent owner's consent.
This is not correct. Both the subhead of the article and the DNG format's Wikipedia page state that DNG is open and not subject to IP licensing.
While having two file formats to deal with in software development definitely "competes" with the simplicity of just having one, patents and licensing aren't the reason they're not choosing Adobe DNG.
The fact that both of your sources are NOT the actual DNG license text should be enough to humble "This is not correct" into at least a question.
--> Your information source is incomplete. Please refer to the license of DNG [0].
The patent rights are only granted:
1. When used to make compliant implementations to the specification,
2. Adobe has the right to a no-cost license from the manufacturer to every method used to create the DNG, and
3. Adobe reserves the right to revoke the rights "in the event that such licensee or its affiliates brings any patent action against Adobe or its affiliates related to the reading or writing of files that comply with the DNG Specification"
--
None of this is trivial to a large company.
First of all, it requires involvement of a legal department for clearance,
Second, you are at risk of violating the patent as soon as you are not compliant with the specification,
Third, you may have to open to Adobe, at no charge, every piece of IP required to create a DNG from your sensor (which can be a significant risk and burden if you develop your own sensor), and
Fourth, in case the aforementioned IP is repurposed by Adobe and you take legal action, your patent-license for DNG is revoked.
--
--> If you are a vendor with a working RAW implementation and all the necessary tools for it in place, it's hard to make a case on why you should go through all that just to implement another specification.
None of this is terrifying and seems overblown. I read the patent grant you linked to. It makes sense that one would not grant the right to make incompatible versions. That would confuse the user. Also, the right of revocation only applies if the DNG implementor tries to sue Adobe. Why would they do that?
Occam's razor here suggests that the camera manufacturers' answers are correct, especially since they are all the same. DNG doesn't let them store what they want to and change it at will; this is true of any standardized file format and not true of any proprietary format.
> None of this is terrifying and seems overblown. I read the patent grant [..]
Considering that you entered this discussion instantly claiming that others are wrong without having even read the license in question makes this conversation rather..."open-ended"
> Also, the right of revocation only applies if the DNG implementor tries to sue Adobe. Why would they do that?
As I wrote above, Adobe reserves the right to use, at no cost, every patent that happens to be used to create this DNG from your design, and will revoke your license if you object, e.g. to what they do with it.
> Occam's razor here suggests [..]
Or, as I suggested, it's simply hard to make a case in favor of developing and maintaining DNG with all that burden if you have to support RAW anyway.
That's fair. It's certainly not "open source" in that way that term is usually used. I still think that's not the primary issue and that the manufacturers are being honest about their preference for proprietary formats. But I see that Adobe legal concerns hanging over their heads isn't an advantage, for sure.
> granted by Adobe to individuals and organizations that desire to develop, market, and/or distribute hardware and software that reads and/or writes image files compliant with the DNG Specification.
What if I use it for something that's not images, because I want to create a file that's a DNG and a Game Boy ROM at the same time? Or if I'm a security researcher testing non-compliant files? Or if I'm not a great developer, or haven't had enough time to make my library perfectly compliant with the specification... Will I be sued for breaking the license?
The fatal scenario for a camera vendor: you transition your customers to DNG over some years, then a dispute arises that causes Adobe to revoke your patent license, and suddenly all your past products are in violation of Adobe's DNG patent.
You not only have to remove DNG support on those products, but due to warranty law in many countries you'd have to provide an equivalent feature to the customer (--> develop a converter application again, but this time for products whose development you closed years ago).
The alternative would be to settle with Adobe to spare the cost of all that. So Adobe holds all the cards in this game.
Now: Why bother transitioning your customers to DNG...?
What? Number two would make most companies run the other way. "Whatever you use to create a DNG, secret sauce or algorithm or processing from your sensor data, Adobe can license" - you act like it's no big deal, but it's often the closely guarded color science or similar things.
You can argue that maybe those things shouldn’t be considered trade secrets or whatever. But there’s just a bit more to it than that.
Neither DNG nor various vendor-specific raw formats are meant for tuning an image sensor. They can be used for that in some specific cases, but it's not what they are for. They're for taking photos and providing the user with less opinionated data so they can do the processing of their photos the way they want rather than rely on predefined processing pipeline implemented in the camera.
Despite the name, this is rarely a pure raw stream of data coming from the sensor. It's usually close enough for practical photographic purposes though.
The main reason people shoot raw is to have more creative control over the final product.
A simple example is white balance. The sensor doesn't know anything about it, but typical postprocessing makes both a 2700K incandescent and a 5700K strobe look white. A photographer might prefer to make the incandescent lights look more yellow. There's a white balance setting in the camera to do that when taking the picture, but it's a lot easier to get it perfect later in front of a large color-calibrated display than in the field.
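Mechanically, white balance on raw data is just a per-channel gain applied to the linear values, which is why redoing it later is lossless in a way it isn't on a tone-mapped JPEG. A small sketch of that idea (numpy; the neutral values are made up for illustration):

```python
import numpy as np

def apply_white_balance(rgb_linear, neutral):
    """Scale linear RGB so that `neutral` maps to grey.

    rgb_linear: HxWx3 demosaiced, still-linear image data.
    neutral:    camera RGB of a neutral patch as shot, e.g. (0.45, 1.0, 0.78).
    """
    gains = 1.0 / np.asarray(neutral)
    return rgb_linear * gains  # broadcasting applies one gain per channel

# Warmer rendering for the incandescent scene: simply lower the blue gain.
# warmer = apply_white_balance(img, (0.45, 1.0, 0.90))
```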
Another example is dealing with a scene containing a lot of dynamic range, such as direct sunlight and dark shadows. The camera's sensor can capture a greater range of brightness than a computer screen can display or a printer can represent, so a photographer might prefer to delay decisions about what's dark grey with some details and what's clipped to black.
Everything you said is supported by regular image formats. You can adjust the white balance of any photo; do you think image formats are limited to 16 bits and sRGB?
That’s not why we use RAW. It’s partly because (1) if you used Adobe RGB or Rec. 709 on a JPEG, a lot of people would screw it up, (2) you get a little extra raw data from the pre-filtering of Bayer, X-Trans, etc. data, (3) it’s less development work for camera manufacturers, and (4) partly historical.
> Everything you said is supported by regular image formats. You can adjust white balance of any photo and you think image formats are only limited to 16-bit and sRGB?
No - the non-RAW image formats offered were traditionally JPG and 8-bit TIFF. Neither of those is suitable for good-quality post-capture edits, irrespective of their colour space (in fact, too wide a colour space is likely to make the initial capture worse because of the limited 8-bit-per-colour range).
These days there is HEIF/similar formats, which may be good enough. But support in 3rd party tools (including Adobe) is no better than RAW yet, i.e., you need to go through a conversion step. So...
Also don't forget one of the promises of RAW: that RAW developers will continue to evolve, so you'll be able to generate a better conversion down the line than you can now. Granted, given the maturity of developers, the pace of innovation has slowed down a lot compared to 20 years ago, but there are still incremental improvements happening.
Another advantage of RAW is non-destructive editing, at least in developers that support it and are more than import plugins for traditional editors. I rarely have to touch Photoshop these days.
What format can I change the white balance of an image in, other than RAW, in software? In all the years I have used digital cameras, I can't think of one...
Try adjusting shadows and highlights in a JPG vs. a raw file and see what happens. There is no data there in the JPG, just black and blown-out white. With a raw file you can brighten the shadows and find Mothman standing there, with a little extra sensor noise.
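A toy illustration of that headroom difference (numpy; gamma and tone curves ignored, all numbers invented):

```python
import numpy as np

scene = np.array([0.003, 0.02, 0.5, 2.5])  # linear luminance; 1.0 = JPEG white

raw14 = np.clip(scene * 4096, 0, 16383).round()  # 14-bit raw keeps ~2 stops above white
jpeg8 = np.clip(scene * 255, 0, 255).round()     # 8-bit output-referred encoding

print(raw14)  # [12. 82. 2048. 10240.] - shadow steps and the bright highlight survive
print(jpeg8)  # [1. 5. 128. 255.]      - shadows collapse to 1 and 5, highlight clips
```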
The bottleneck is usually SD card write speed, however. Sports photographers often skip raw and use only JPG because the files are smaller and, as a result, one can take more photos in one burst.
For raw at high frame rates, high end cameras don't use SD cards but things like CFexpress which absolutely can keep up (and there are also various compressed RAW formats these days which apply a degree of lossy compression to reduce file size).
As I understand it, the reason some professional sports photographers don't shoot RAW (or it's less important) is more because they are in an environment where publishing quickly is important, so upload speeds matter and there isn't really much time to postprocess.
Canon’s “sport professional” camera has lower resolution than the “second tier” cameras. It has a higher frame rate and CFExpress and SDXC2 so bandwidth isn’t an issue. Last I checked you could burst 40 or 50 frames (at 20ish fps) before filling the internal buffer.
You can definitely do more than that these days. My Nikon Z8 can do 30fps with 1xCFExpress, and the flagship Z9 can do 120fps because it has 2xCFExpress and alternates writes. On the Sony side they have a closer differentiation to what you describe, the flagship (A1 II) does only 30fps compared to the next-best (A9 III) which does 120fps, while the prosumer (A7 RV) only does 10fps.
I don't know Canon well, but 120fps w/ dual CFExpress + 800-1200 frames buffer is fairly standard on top-end professional sports/wildlife mirrorless cameras these days.
I believe this might have been the case in the past, when (a) sensor resolutions were lower, so the raw images were less bulky, and (b) camera CPUs were slower, so you would want to take them out of the equation.
These days, the bottleneck for the continuous shooting rate is probably writing to the SD card (which is the standard for consumer/prosumer models).
It is always written into a memory buffer first, which could be something like 256 megabytes. It takes time to fill it up; once it is full, memory card speed becomes the bottleneck. So writing only JPEGs would trigger the slowdown later, meaning you could take more frames before the buffer fills up.
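A back-of-envelope version of that effect (all numbers invented for illustration):

```python
# Rough burst-depth model: the buffer absorbs the difference between what the
# sensor produces and what the card can sustain. All numbers are made up.
buffer_mb, fps, card_mb_s = 256, 20, 90

def frames_before_stall(frame_mb):
    fill_rate = fps * frame_mb - card_mb_s  # MB/s accumulating in the buffer
    if fill_rate <= 0:
        return float("inf")                 # the card keeps up indefinitely
    return buffer_mb / fill_rate * fps      # seconds to fill, times frame rate

print(frames_before_stall(30))  # ~10 frames of raw before the slowdown
print(frames_before_stall(8))   # ~73 frames of JPEG
```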
This was my guess too, get the raw bayer data from the sensor in one go + some metadata.
Then as the sensors and cameras evolve they are just accumulating different formats?
> Cameras absolutely could emit DNG instead, but that would introduce more development friction: coordination (with Adobe), potentially a language barrier, and potentially making it harder to do experimental features.
I am a weirdo and have always liked and used Pentax (now Ricoh); they do support the DNG format.
These formats aren't complex because they really are supposed to be raw (-:
But yeah, it would be preferable to have them use the digital negative (DNG) format. Then again, why bother when the community does the work for them? Reminds me of how Bethesda does things.
Traditional Nikon NEF is pretty simple. It's just a TIFF. Lossy compression is just gamma encoding with a LUT (stored in the file). I think most traditional raws are similar in principle. More complex compression schemes like TicoRAW are fairly recent.
What's complex is the metadata. All the cameras have different AF, WB and exposure systems.
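To make "gamma encoding with a LUT" concrete, here's a hypothetical sketch of the decode side. A real NEF ships its own curve in the file; the curve below is invented just to have a similar shape:

```python
import numpy as np

# Hypothetical NEF-style lossy decode: the file stores ~12-bit codes plus a
# LUT that expands them back to linear sensor values. A gamma-shaped curve
# spends more codes on shadows, where quantization would otherwise be visible.
stored_bits = 12
lut = (np.linspace(0.0, 1.0, 2**stored_bits) ** 2.2 * 16383).astype(np.uint16)

def decode(codes):
    return lut[codes]  # decompression is literally one table lookup per pixel

print(decode(np.array([0, 1000, 4095])))
```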
The contents are simple. How to interpret the contents is not simple. That is why you see internet advice advocating for keeping old raw files around: Lightroom and Photoshop sometimes get updates which can squeeze better results out of old raw files.
(Edit: I mean, if you want to get a basic debayered RGB image from a raw, that's not too hard. But if you want to squeeze out the most, there are a lot of devils in a lot of details. Things like estimating how many green pixels are not actually green, but light-spill from what should have been red pixels, is just the beginning.)
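As a reference point for "not too hard": a basic bilinear demosaic of an RGGB mosaic fits in a few lines (numpy/scipy sketch below). Everything beyond this, like the green cross-talk estimation mentioned above, is where the real complexity lives.

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(mosaic):
    """Very basic bilinear demosaic of an RGGB Bayer mosaic (2D float array)."""
    h, w = mosaic.shape
    r = np.zeros((h, w))
    g = np.zeros((h, w))
    b = np.zeros((h, w))
    r[0::2, 0::2] = mosaic[0::2, 0::2]  # R sites
    g[0::2, 1::2] = mosaic[0::2, 1::2]  # G sites, even rows
    g[1::2, 0::2] = mosaic[1::2, 0::2]  # G sites, odd rows
    b[1::2, 1::2] = mosaic[1::2, 1::2]  # B sites
    # Standard bilinear kernels fill the missing samples from neighbours.
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    return np.dstack([convolve(r, k_rb), convolve(g, k_g), convolve(b, k_rb)])
```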
It’s the best place to add “signature steps.” Things like noise reduction, chromatic aberration correction, and one-step HDR processing.
I used to work for a camera manufacturer, and our Raw decoder was an extremely intense pipeline step. It was treated as one of the biggest secrets in the company.
Third-party demosaicers could not exactly match ours, although they could get very good results.
Anecdotally, using Darktable, I could never get as good of a demosaicing result as using the straight-out-of-camera JPEGs from my Fujifilm GFX 100S. In challenging scenarios such as fine diagonal lines, Darktable's algorithms such as LMMSE would add a lot of false colour to the image.
However, modern deep learning-based joint demosaicing and denoising algorithms handily outperform Darktable's classical algorithms.
Well, it is obvious that between a RAW file and the final image there are a lot of complex processing steps. But that is independent of the file format used. DNG isn't so much different, just documented. And while the manufacturer's converter might give the best results, photographers would rather use the image processing programs from Adobe or its competitors, which use their own RAW converters anyway.
Yeah, they could do it with DNG (I suppose), but they don't really have any reason to do so (in their minds). Personally, I like open stuff, but they did not share my mindset, and I respected their posture.
Raw decoding is an algorithm, not a container format. The issue is that everyone is coming up with their own proprietary containers for identical data that just represents sensor readings.
The issue is that companies want control of the demosaicing stage, and the container format is part of that strategy.
If a file format is a corporate proprietary one, then there's no expectation that they should provide services that do not directly benefit them, or that expose internal corporate trade secrets, in service to an open format.
If they have their own format, then they don't have to lose any sleep over stuff that doesn't interest or benefit them.
By definition, a RAW container contains sensor data, and nothing more. Are you saying that Adobe is using their proprietary algorithms to render proprietary RAW formats in Lightroom?
Not publicly. It's not difficult to figure out, but I make it a point not to post stuff that would show up in their search algorithms.
But it was a pretty major one, and I ran their host image pipeline software team.
[Edited to Add] It was one of the “no comment” companies. They won’t discuss their Raw format in detail, and neither will I, even though it has been many years, since I left that company, and it’s likely that my knowledge is dated.
That was my suspicion initially. In fact, when I read about mass DNG adoption, my first thought was "but how would it work for this company?" (admittedly I don't know much about DNG, but intuitively I had my doubts).
It seems to me that long ago, camera companies thought they would charge money for their proprietary conversion software. It has been obvious for nearly as long that nobody is going to pay for it, and delayed compatibility with the software people actually want to use will only slow down sales of new models.
With that reasoning long-dead, is there some other competitive advantage they perceive to keeping details of the raw format secret?
The main reason is that image quality is the company's defining metric. They felt it was a competitive advantage, and sort of a "secret ingredient," like you will hear about from master chefs.
They feel that their images have a "corporate fingerprint," and are always concerned that images not get out, that don't demonstrate that.
This often resulted in difficulty getting sample images.
Also, for things like chromatic aberration correction, you could add metadata that describes the lens that took the picture, and use that to inform the correction algorithm.
In many cases, a lens that displays chromatic aberration is an embarrassment. It's one of those "dirty little secrets," that camera manufacturers don't want to admit exists.
As they started producing cheaper lenses, with less glass, they would get more ChrAb, and they didn't want people to see that.
Raw files are where you can compensate for that with the least impact on image quality. You can have ChrAb correction applied after the demosaic, but it will be "lossy." If you can apply it before, you can minimize data loss. Same with noise reduction.
Many folks here would absolutely freak if they saw the complexity of our deBayer filter. It was a pretty massive bit of code.
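For intuition (not this vendor's method, just one way pre-demosaic correction can work in principle): lateral chromatic aberration looks like the red and blue planes having slightly different magnifications, and those planes exist as subsampled grids inside the mosaic, so they can be resampled before any demosaic runs. The scale factors below are invented:

```python
import numpy as np
from scipy import ndimage

def scale_about_center(plane, factor):
    """Resample a channel plane by `factor` about the image centre."""
    h, w = plane.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    cy, cx = (h - 1) / 2, (w - 1) / 2
    coords = [(yy - cy) / factor + cy, (xx - cx) / factor + cx]
    return ndimage.map_coordinates(plane, coords, order=1, mode="nearest")

def correct_lateral_ca(mosaic, r_scale=1.0005, b_scale=0.9995):
    """Toy pre-demosaic CA fix on an RGGB mosaic: rescale the R and B subplanes.

    The factors are made up; a real pipeline derives them per lens and zoom.
    """
    out = mosaic.astype(np.float64).copy()
    out[0::2, 0::2] = scale_about_center(out[0::2, 0::2], r_scale)  # R sites
    out[1::2, 1::2] = scale_about_center(out[1::2, 1::2], b_scale)  # B sites
    return out
```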
I am very skeptical that chromatic aberration correction can be applied before a demosaic and the result then stored in a Bayer array again. There seems to be no advantage in storing the result of chromatic aberration correction in a raw Bayer array, which has less information than a full array with three RGB values per pixel. Perhaps I am not understanding it correctly?
Thanks for the explanation. I have to question how reality-based that thinking is. I do not, of course expect you to defend it.
It seems to me that nearly all photographers who are particularly concerned with image quality shoot raw and use third-party processing software. Perhaps that's a decision not rooted firmly in reality, but it would take a massive effort focused on software UX to get very many to switch to first-party software.
> Raw files are where you can compensate for that, with the least impact on image quality. You can have ChrAb correction, applied after the demosaic, but it will be "lossy."
Are you saying that they're baking chromatic aberration corrections into the raw files themselves so that third-party software can't detect it? I know the trend lately is to tolerate more software-correctable flaws in lenses today because it allows for gains elsewhere (often sharpness or size, not just price), but I'm used to seeing those corrections as a step in the raw development pipeline which software can toggle.
I think we're getting into that stuff that I don't want to elaborate on. They would probably get cranky I have said what I've said, but that's pretty common knowledge.
If the third-party stuff has access to the raw Bayer format, they can do pretty much anything. They may not have the actual manufacturer data on lenses, but they may be able to do a lot.
Also, 50MP lossless-compressed (or uncompressed) 16-bit-per-channel images tend to be big. It takes a lot to process them, especially if you have time constraints (like video). Remember that these devices have their own low-power processors, and they need to handle the data. If we wrote host software to provide matching processing, we needed to mimic what the device firmware did. You don't necessarily have that issue with third-party pipelines, as no one expects them to match.
Every camera has a unique RAW format, even cameras from the same company. The article briefly mentions this but doesn't go into much detail. I've got at least 10 Nikon cameras going back to 2005, and every "NEF" Nikon RAW file is different, so if you buy your camera on the first day it is released, you have to wait for your software vendor to add support or shoot in JPEG format. There have been a few times when the RAW files were so similar that you could use a hex or EXIF editor, change the camera model EXIF field to an older supported camera, and load the file. But in theory the RAW converter has been profiled for each specific camera using ICC color targets and the like.
> But in theory the RAW converter has been profiled for each specific camera using ICC color targets and stuff like that.
In practice too, if consistent results are desired. The format being identical doesn't mean the values the sensor captures under the same conditions will be identical, so a color-calibrated workflow could produce wrong results.
It would be nice to have a setting for "treat camera Y like camera X (here there be dragons)" though. I've had to do something similar with the Lensfun database to get lens corrections working on Mk. II of a lens where Mk. I was supported, but a GUI would be nice. A prompt to guess the substitution automatically would be even nicer.
One problem is that you cannot have a universal format that is both truly raw and doesn't embed camera specific information. Camera sensors from different companies (and different generations) don't have the same color (or if you prefer, spectral) responses with both their Bayer filter layer and the underlying physical sensor. If you have truly raw numbers, you need the specific spectral response information to interpret them; if you don't need spectral response information, you don't actually have truly raw numbers. People very much want raw numbers for various reasons, and also camera companies are not really enthused about disclosing the spectral response characteristics of their sensors (although people obviously reverse engineer them anyway).
> Camera sensors from different companies (and different generations) don't have the same color (or if you prefer, spectral) responses with both their Bayer filter layer and the underlying physical sensor
This is all accommodated for in the DNG spec. The camera manufacturers specify the necessary matrix transforms to get into the XYZ colorspace, along with a linearization table.
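In code terms the whole color step is small. A sketch of roughly what a raw developer does with those fields (matrix and level values invented; the spec's ColorMatrix tags, 50721/50722, actually map XYZ to camera RGB and get inverted, but the computation has this shape):

```python
import numpy as np

# Roughly the DNG color step: subtract the black level, scale to [0, 1],
# then a 3x3 matrix from camera RGB to XYZ. All values below are made up.
black_level, white_level = 512, 16383
camera_to_xyz = np.array([[0.68, 0.18, 0.10],
                          [0.28, 0.66, 0.06],
                          [0.02, 0.14, 0.90]])

def to_xyz(raw_rgb):
    """raw_rgb: HxWx3 demosaiced camera values -> HxWx3 XYZ."""
    lin = (raw_rgb.astype(np.float64) - black_level) / (white_level - black_level)
    lin = np.clip(lin, 0.0, 1.0)
    return lin @ camera_to_xyz.T  # one matrix multiply per pixel
```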
If they really think the spectral sensitivity is some valuable IP, they are delusional. It should take one Macbeth chart, a spreadsheet, and one afternoon to reverse engineer this stuff.
Given that third party libraries have figured this stuff out, seems they have failed while only making things more difficult for users.
What does RAW really mean then? Couldn't they simply redefine what RAW means to create a standard that can include proprietary technology? Like why not define it as including a spectral response?
There is no single 'RAW' format as such. In practice, 'RAW' is a jargon term for "camera-specific format with basically raw sensor readings and various additional information". Typically the various RAW formats don't embed the spectral information, just a camera model identifier, because why waste space on stuff the camera makers already know and will put in their (usually maker-specific) processing software.
(Eg Nikon's format is 'NEF', Canon's is 'CR3', and so on, named after the file extensions.)
I don't know if DNG can contain (optional) spectral response information, but camera makers were traditionally not enthused about sharing such information, or for that matter other information they put in their various raw formats. Nikon famously 'encrypted' some NEF information at one point (which was promptly broken by third party tools).
A 1920x1080 24-bit RAW image is a file of exactly 6,220,800 bytes. There are only a few possible permutations of parameters: which of the 4 corners comes first, whether row-major or column-major order is used, what order the 3 colors are in (RGB or BGR), and whether the colors are stored as planes or not. (Without planes, a pixel's R, G and B bytes are adjacent. With planes, you essentially have three parallel monochrome images, i.e. cat r.raw g.raw b.raw > rgb.raw) [1]
What the article is describing sounds like something that's not a raw file, but a full image format with a header.
[1] One may ask, how does the receiving software know the file is 1920 x 1080 and not, say, 3840 x 540? Or for that matter, a grayscale image of size 5760 x 1080?
The answer is that, with no header, you have to supply that information yourself when importing the image. (E.g. you have to manually type it into a text entry field in your image editor's file import UI.)
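A sketch of such an import with numpy (the file name is hypothetical):

```python
import numpy as np

# A headerless 1920x1080 interleaved RGB raw is exactly
# width * height * 3 = 6,220,800 bytes, so YOU must supply the geometry.
w, h = 1920, 1080
data = np.fromfile("image.raw", dtype=np.uint8)
assert data.size == w * h * 3

img = data.reshape(h, w, 3)  # guess: row-major, interleaved, RGB order
# Wrong guesses still "work" - e.g. reshape(540, 3840, 3) yields a scrambled
# image - which is exactly why the import dialog asks you to type the size in.
```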
Well, yes. You're thinking of the classic RAW format that was just a plain array of RGB pixels without a header.
When talking about digital cameras, RAW refers to a collection of vendor-specific file formats for capturing raw sensor data, together with a bunch of metadata.
I did push for all my digital images to be DNG, and they are, up to around 2018, and two out of four cameras use it natively - Pentax, Leica - while the other two use their own formats - Canon, Fuji.
The reason I’m less fussy now is because the combination of edits, metadata and image data in a single file didn’t necessarily help me when I switched from Lightroom to Capture One. I would love to be able to update the files to use newer RAW processors and better IQ, but I lose the Lightroom edit information in C1. That makes sense as they do things differently. But I hoped that with DNG there was a universal format for handling edits.
My JPEGs remain the definitive version of the images but I would love to be able to recover all those original edits again in C1, or any other editing program.
Over the past 15-20 years I've used Sonys, Canons and Nikons, and I absolutely feel that Nikon puts a lot more effort, with much better results, into the usability of their pro/prosumer cameras - and, really, even their $500-$1000 consumer range - both in terms of the on-display UI and the ergonomics and handling of the actual camera.
What always stood out most for me compared to Canon was Nikon's larger viewfinders, letting you commit to actual photography rather than being stuck with a feeling of peeping through a keyhole, and placement of buttons on the camera body allowing for maintained control of the most necessary functions (shutter speed, aperture and even ISO) without having to change your grip or move the camera away from your face.
The only Nikons I own are the 35mm film FM2 and F4. The bodies feel like tactile bliss. The FM2 has a dry-lubricated system with a crazy titanium honeycomb-etched shutter, and the F4 is the last pro SLR they made with no menu system.
On the digital front I found Fuji X-Txx series to be like tiny Nikons in their usability with all common controls on dials.
I'm (at least) a third-generation Nikon shooter, and I still have my grandfather's FTn. For its era, predating CNC and CAD, it is very comfortable to use, but the leather "eveready" case shell is welcome.
(One reason I shoot Nikon is because I can still shoot his glass on modern bodies. Indeed, that's what my D5300 spends a lot of its time wearing these days.)
True revolutions in consumer imaging excepted, I doubt I'll feel more than an occasional gadget fan's urge to replace my D850 and D500 as my primary bodies. Oh, the Z series has features, I won't disagree, even if I'm deeply suspicious of EVFs and battery life. But the D850 is a slightly stately, super-versatile full-frame body, and the D500 is a light, 20fps APS-C, that share identical UIs, lens and peripheral lineups, and (given a fast card to write to) deep enough buffers to mostly not need thinking about.
For someone like me who cares very little about technical specs, and a great deal for the ability to hang a camera over their shoulder and walk out the door and never once lose a shot due to equipment failure, there really isn't much that could matter more. I may have 350 milliseconds to get a flight shot of a spooked heron, or be holding my breath and focusing near 1:1 macro with three flash heads twelve inches away from a busily foraging and politely opinionated hornet. In those moments, eye and hand and machine and mind and body all must work as one to get the shot, and having to think at all about how to use the camera essentially guarantees a miss.
Hence the five years of work I've put into not having to think about that. I suppose I could've done more than well enough with any system, sure. But my experiences with others have left me usually quite glad Nikon's is the system I invested in.
Old school Zeiss glass is like butter for any camera body. My dad told me to stick with Nikon and spend my money on lenses first. He was not wrong. You can put 25 year old professional lenses on a mid-market Nikon body and the images will be stunning with very little effort.
I haven't tried the mirrorless cameras, but on DSLRs Canon is great UX imo. Everything you need to adjust on the fly is easy. It's usually controlled with a dial that can change the parameter it adjusts via a modifier button, saving you what might be yet another dial on something like a Fuji X-T5.
But even then, once you've metered a scene, how often do you adjust ISO on the fly? Hardly ever. Fixed ISO, aperture priority, center-dot focus and metering, and off to the races.
Most hardware companies are just terrible at software in general. Camera makers are pretty average in that regard.
Usability of the camera hardware and software ecosystem is another matter. I think the common wisdom is that most paying users don't want beginner-friendly, they want powerful and familiar. So everything emulates the paradigms of what came before. DSLRs try to provide an interface that would be familiar to someone used to a 50 year old SLR camera, and Lightroom tries to emulate a physical darkroom. Being somewhat hostile to the uninitiated might even be seen as a feature.
It's also like 4 digital dials. And you can leave most of them on auto until you realize each specific dial enables something you desire. Sony tried a "non-scary automagic" approach, and instantly went back to dials.
There's also the Sigma BF if that's what you want; Sigma actually does a pretty good job from the perspective of a minimalistic, idealistic, on-point, field-usable UI, though the return on that effort just isn't worthwhile. I have the OG DP1, and it feels as natural as a PS/2 IntelliMouse. I've tried the dp2 Quattro once, and it felt as natural as any serious right-handed trackball. They scratch so many of camera nerds' itching points.
Most people just buy an A7M4 and a 24-70 Zeiss. And then they leave it all on auto and never touch the dials. And it puts smiles on people's faces 80% of the time. And that's okay. No?
Yes, fully agreed. However, the way the companies currently approach this - catering to an ever-shrinking niche - will end up killing DSLRs over time. They just don't offer enough over phones, and the UX/SW being so crappy alienates the potential new userbase completely.
You can achieve maybe a quarter of the kinds of shots on a phone that an interchangeable-lens camera will let you make.
That's an extremely important quarter! For most people it covers everything they ever want a camera to do. But if you want to get into the other 75%, you're never going to be able to do it with the enormous constraints imposed by a phone camera's strict optical limits, arising from the tight physical constraints into which that camera has to fit.
I've had two phones with 108MP sensors, and while you can zoom in on the resulting image, the details are suggestions rather than what I would consider pixels.
Whereas a $1500 Nikon 15MP from 20 years ago is real crisp, and I can put a 300mm lens on it if I want to "zoom in".
Even my old Nikon 1 V1 with its cropped-sensor 12MP takes "better pictures" than the two 108MP phone cameras.
But there are uses for the pixel density and I enjoyed having 108MP for certain shots, otherwise not using that mode in general.
Yeah, that's the exact tradeoff. 108MP (or even whatever the real photosite count is that they're shift-capturing or otherwise trick-shooting to get that number) on a sensor that small is genuinely revolutionary. But only giving that sensor as much light to work with as a matchhead-sized lens can capture for it, there's no way to avoid relying very heavily on the ISP to yield an intelligible image. Again, that does an incredible job for what little it's given to work with - but doing so requires it be what we could fairly call "inventive," with the result that anywhere near 100% zoom, "suggestions" are exactly what you're seeing. The detail is as much computational as "real."
People make much of whatever Samsung it was a couple years back, that got caught copy-pasting a sharper image of Luna into that one shot everyone takes and then gets disappointed with the result because, unlike the real thing, our brain doesn't make the moon seem bigger in pictures. But they all do this and they have for years. I tried taking pictures of some Polistes exclamans wasps with my phone a couple years back, in good bright lighting with a decent CRI (my kitchen, they were houseguests). Now if you image search that species name, you'll see these wasps are quite colorful, with complex markings in shades ranging from bright yellow through orange, "ferruginous" rust-red, and black.
In the light I had in the kitchen, I could see all these colors clearly with my eyes, through the glass of the heated terrarium that was serving as the wasps' temporary enclosure. (They'd shown a distinct propensity for the HVAC registers, and while I find their company congenial, having a dozen fertile females exploring the ductwork might have been a bit much even for me...) But as far as I could get the cameras on this iPhone 13 mini to report, from as close as their shitty minimum working distance allows, these wasps were all solid yellow from the flat of their heart-shaped faces to the tip of their pointy butts. No matter what I did, even pulling a shot into Photoshop to sample pixels and experimentally oversaturate, I couldn't squeeze more than a hint of red out of anything without resorting to hue adjustments, i.e. there is no red there to find.
So all I can conclude is the frigging thing made up a wasp - oh, not in the computer vision, generative AI sense we would mean that now, or even in the Samsung sense that only works for the one subject anyway, but in the sense that even in the most favorable of real-world conditions, it's working from such a total approximation of the actual scene that, unless that scene corresponds closely enough to what the ISP's pipeline was "trained on" by the engineers who design phones' imaging subsystems, the poor hapless thing really can't help but screw it up.
This is why people who complain about discrete cameras' lack of brains are wrongheaded to do so. I see how they get there, but there are some aspects of physics that really can't be replaced by computation, including basically all the ones that matter, and the physical, optical singlemindedness of the discrete camera's sole design focus is what liberates it to excel in that realm. Just as with humans, all cramming a phone in there will do is give the poor thing anxiety.
I generally judge a camera by how accurately it can capture a sunset, relative to what I actually see. On a Samsung Galaxy Note 20, I can mess with the white balance a bit to get it "pretty close", but it tends to clamp color values so the colors are more uniform than they are in real life. I've seen orange dreamsicle, strawberry sherbet, and lavender at the same time, at different intensities, in the same section of sky. No phone camera seems to be able to capture that. http://projectftm.com/#noo2qor_GgyU1ofgr0B4jA captured last month. It wasn't so "pastel"; it was much richer. The lightening at the "horizon" is also common with phone cameras, and has been since the iPhone 4 and Nexus series of phones. It looks awful and I don't get why people put up with it.
Sony is famous for having the worst interfaces of all the big camera manufacturers.
Lightroom most likely has “obsolete paradigms” for the same reason Photoshop does: because professionals want to use what they know rather than what is fashionable. Reprogramming their muscle memory is not something people want to be doing. Anyway, I find Lightroom’s UI very nice to work with.
I don't know, I think the learning curve is pretty gentle. Like all complex software, it may be difficult to master, but getting started felt easy enough as far as I remember.
I found it incredibly frustrating at first, so much so that a Loupedeck became a wise and necessary investment to keep the anticipation of editing burden from beginning to depress my interest in photography.
I still have the Loupedeck, on one of the shelves behind my desk. I think I might have used it twice last year.
At the time of release of the A6000, having a mobile app to take a picture (and not having to buy an adapter like some other brands) was cool enough that you could deal with the (at the time, relatively minor) jank.
Biggest thing is they never really improved the mobile apps... and in some cases IMO they got worse.
The one for my 2016-era Fuji, Camera Remote, is crap. Usually the connection between my phone and the camera's own wifi network fails, but this failure takes a good 2 minutes to happen. So many artificial software limits with this camera, some removed in newer versions. Why can't this one tether, though? It is literally a digital camera, no?
Anyone know of any fujifilm firmware jailbreaks fwiw?
> If a manufacturer comes up with additional data that isn’t included in the DNG standard, the format is extensible enough that a camera manufacturer can throw it in there, anyway.
It sounds like DNG has so much variation that applications would still need to support different features from different manufacturers. I'm not sure it (DNG) will really solve interoperability problems. This issue smells like someone is accidentally playing politics without realizing it.
Kind of reminds me of the interoperability fallacy with XML. Just because my application and your application use XML, it doesn't mean that our applications are interoperable.
I suspect that a better approach would be a "RAW lite" format that supports a very narrow set of very common features; but otherwise let camera manufacturers keep their RAW files as they see fit.
RAW is ultimately about sensor readings. As a developer, you just want to get things from there into a linear, known color space (XYZ in the DNG spec). So from that perspective, interoperability isn’t the issue.
How you process that data is another matter. Handling a traditional bayer pattern vs a quad-bayer vs Fujifilm’s x-trans pattern obviously requires different algorithms, but that’s all moot given DNG is just a container.
Seems like DNG does behave like the RAW lite format you've just described: Everything common would be stored within the base DNG file, while everything "advanced" / more specific to a camera would be stored in additional metadata properties, which do not need to be parsed to still be able to process the base image.
You can add support for these metadata on a case-by-case basis without breaking the original format, so you're not stuck re-implementing your whole raw parsing when a new camera is released as the base subset of DNG would still work.
When building a camera, you decide once and then most parameters stay fixed. It would be trivial to just add a mostly fixed 1000-byte DNG header to each image.
But how do you test this?
While the DNG specification is open, the implementation was/is(?) not. Do I really need a copy of Photoshop to test whether my files are good? How would I find good headers to put into my files? What values are even used in processing?
Maybe the situation has changed, but in the old days when I was building cameras there was only a closed-source Adobe library for working with DNGs. That scared me off.
Things like camera intrinsics and extrinsics are not fixed. 1000 bytes seems small to me, given the amount of processing in modern cameras that goes into creating a raw image. I could easily imagine storing more information, like the focus point and other potential focus points with weights, as part of the image for easier on-device editing by the user.
This isn't really my area, so I'm probably wrong... I'd always assumed that RAW files were, well, raw data straight off the sensor (or as close as possible)? In which case, you could standardize the container format, but I wouldn't think it was possible to have a standard format for the actual image data. Would appreciate if anyone could correct me (a quick skim of wikipedia didn't clear it up)
Most image sensors are quite similar (ignoring weirdos like X-Trans and Foveon) so they could use the same format and decoding algorithm. It's a 16-bit integer (padded out from 12 or 14 bits) for each pixel with a Bayer color filter. Maybe throw in some parameters like a suggested gamma curve.
Foveon has awful FOSS support so far. Older Foveon models also require an older version of Windows to run the antiquated software that processes the raw pics; it's maddening.
The algorithms for getting a usable image from a Foveon sensor are very non-trivial from what I understand: the different layers don't separate light perfectly into red, green, and blue bands, so there is some fancy cross-layer processing you need to do.
Typically they are not to my knowledge! Though I am also not an expert. Most camera makers apply a fixed sensor profile, and possibly a dark frame to remove noise before writing out the values to whatever file. Some of them may apply lens optimizations to correct distortion or vignetting as well.
On top of that, I hear the RAW format on some smartphones is saved after the phone does its fake-HDR and computational photography bullshit, so it's even further from "raw" with those cameras.
The corrections are just metadata; the RAW data is still there. This is true for both DNG and ARW (Sony). Don't know about the other brands. The corrections can even look different based on what program you use to interpret them.
I don't think that's true in general. As a sibling comment points out, this is not true for some DNGs. For example, the output of an iPhone is DNG, but with many, many transforms already baked in. A DNG might even be debayered already.
I don’t know much about ARW, but I do know that they offer a lossy compressed format - so it’s not just straight off the sensor integer values in that case either.
At least it's only at ISO 80, where noise would be minimal anyway (: I rarely use noise reduction because I don't like the artificial cleanliness of the result.
It's all float value arrays with metadata in the end. Most camera sensors are pretty similar and follow common patterns.
DNGs have added benefits, like including compression (optional) (either lossy or lossless) and error correction bytes to prevent corruption (optional). Even if there's some concerns like unique features or performance, I'd still rather use DNGs without these features and with reduced performance.
I always archive my RAWs as lossy compressed DNGs with error correction and without thumbnails to save space and some added "safety".
Nitpicking correction: the sensors give you a fixed number of bits per pixel, like 10 or 12. These are unsigned integers, not floats.
Typically you want to pack them to avoid storing 30% zeros, so often the bytes need unscrambling.
And sometimes there is a dark offset: in a really dark area of an image, random noise around zero can also go a little negative. You don't want to clip that off, and you don't want to use signed integers, so there typically is a small offset.
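A sketch of that unpacking for one common 12-bit layout (two samples per three bytes; vendors differ in the exact bit order):

```python
import numpy as np

def unpack_12bit(packed_bytes):
    """Unpack 12-bit samples stored two-per-three-bytes (flat uint8 array)."""
    b = packed_bytes.reshape(-1, 3).astype(np.uint16)
    first = b[:, 0] | ((b[:, 1] & 0x0F) << 8)  # low byte + low nibble of middle
    second = (b[:, 1] >> 4) | (b[:, 2] << 4)   # high nibble of middle + high byte
    return np.column_stack([first, second]).ravel()

# The dark offset mentioned above would be subtracted later, in floating point,
# so that noise dipping below "true black" isn't clipped away at zero.
```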
The ironic part is that basically all the closed-source photo-editing software (and of course all the open-source software) just uses the open source LibRaw. Any special features, such as color profiles, come on top of that. So yes, the camera manufacturers could just donate to LibRaw, or just use DNG instead.
I downloaded a sample RAF file from the internet. It was 16 megapixels, and 34MB in size. I used Adobe DNG Converter, and it created a file that was 25MB in size. It was actually smaller.
Claiming that DNG takes up 4x space doesn't align with any of my own experiences, and it didn't happen on the RAF file that I just tested.
He's probably comparing a mosaiced RAF with a debayered (linear) DNG. Just... don't do that. Use a mosaiced DNG. And certainly don't embed the RAF file. Plus, with lossless JPEG-XL, the difference is trivial (under 5%).
It's a very common mistake in free software to not design a system end-to-end. Free software people will design some kind of generic container format and go "look! you can put anything in the container!", declare the job done, and then not write tools which actually do put anything in the container, or tools which can make use of anything in the container other than one specific thing.
DNG has nothing to do with free software; it's an Adobe format, similar to PDF. It is an open standard, but it is covered by patents and comes with a no-cost patent license (except that it's less open than PDF because it doesn't define the interpretation of Adobe-specific tags (which ACR/Lightroom uses), not that that matters for raw files in any way).
Of course even Adobe DNG Converter can do what the GP asked for (I just tried it [1]), not that I would recommend it for Fuji files. And not that it matters anyway, since the whole point is producing DNG files directly, not converting to them.
Edit: on my Fuji X-T5 files, using mosaiced data with lossless JPEG-XL compression (supported by MacOS 14+, iOS 17+, and the latest Lightroom/ACR):
This reminds me of a current problem facing the bio-imaging community; many microscopes, many formats.
My company specifically deals with one of the post-processing steps, and we've had to build our own "universal adapter". It's frustrating because it feels like microscope companies are just re-inventing the wheel instead of using some common standard.
There has been an effort to standardize how these TB size datasets are distributed[1]. A different but still interesting issue.
While I generally prefer compatibility and shared standards, camera RAW formats seem to be a reasonable place to lean into implementation details if needed in order to gain performance and ease/quality of implementation over interoperability.
Don't know what's confusing about it... I mean, I shoot ARW with my Sony; these work fine with Lightroom and with DxO PhotoLab [1], at least as long as my ARWs are not compressed (it's not that the compression is proprietary, it's that the compression is lossy and breaks denoising).
Proprietary formats require 3rd party developers to adapt their tools: While most mainstream software will be updated to support most/all cameras, this makes it harder for smaller projects to do.
If they used an open standard, the advanced features could still require additional work to be compatible (ex: If they store custom metadata), but you could normalize everything that's shared, ensuring the core capabilities will never break for a new camera with its updated proprietary RAW like it currently does.
'.dng's share the same file format structure as '.tiff's, but they have some extra fields.
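As a rough sketch of how little machinery that structure takes: a classic TIFF (and hence a DNG) starts with a byte-order mark, the magic number 42, and an offset to the first image file directory (IFD), which is just a counted list of 12-byte tag entries:

    import struct

    def first_ifd_tags(path):
        """Yield (tag, type, count, value/offset) from the first IFD of a
        TIFF or DNG file. Minimal sketch: no value decoding, no SubIFDs."""
        with open(path, 'rb') as f:
            header = f.read(8)
            endian = '<' if header[:2] == b'II' else '>'
            magic, ifd_offset = struct.unpack(endian + 'HI', header[2:8])
            assert magic == 42, 'not a classic TIFF/DNG'
            f.seek(ifd_offset)
            (n,) = struct.unpack(endian + 'H', f.read(2))
            for _ in range(n):
                yield struct.unpack(endian + 'HHII', f.read(12))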
For stills photography, Adobe's '.dng' format does fairly well, from 8-bit to 16-bit. It copes with any of the 4 possible standard RGGB Bayer phases and has some level of colour look-up table in the metadata. Sometimes this is not enough for a camera's special features, and The Verge's article covers those reasons quite well.
For video, things get much more complicated. You start to hit bandwidth limits of the storage media (SD cards at the low end). '.dng' files were not meant to be compressed but Blackmagic Design figured out how to do it (lightly) and still remain compatible with standard '.dng' decoding software. Other, better compressed formats were also needed to get around the limits of '.dng' compression.
Red cameras used a version of JPEG 2000 on each Bayer phase individually (4 of them), but they wrapped it in patents and litigated hard against anyone who dared to make a RAW format for any video recording over 4k. Beautifully torn apart in this video: https://www.youtube.com/watch?v=IJ_uo-x7Dc0
So, for quite a few years, video camera companies tip-toed around this aggressive patent with their own proprietary formats, and this is another reason why there's so many (not mentioned by The Verge).
There's also the headache of copying a folder of 1,000+ '.dng' stills that make up a movie clip; it takes forever compared to a single concatenated file. So there's another group of RAW video file formats that solve this by recording into a single file, which is a huge improvement.
> Cameras absolutely could emit DNG instead, but that would require more development friction: coordination (with Adobe), potentially a language barrier, and potentially making it harder to do experimental features.
Why not simply provide documentation of the camera-specific format that is used?
"Sony’s software for processing ARW RAW files is called Imaging Edge. Like most first-party software from camera manufacturers, it’s terrible and unintuitive to use — and should be saved for situations like a high-resolution multishot mode where it’s the only method to use a camera’s proprietary feature."
I think the primary reason is that they have great hardware developers and terrible software developers. So ARW files are the maximum they could provide to photographers, who can take the files and run away from Sony as soon as possible (i.e. do the rest in better software).
Pentax could save DNGs; there are zero reasons for other companies not to do the same.
DNG is easy to adopt so long as your images look like a Bayer image, and you don’t need any special treatment of the raw data.
Sigma’s cameras are notorious for their lack of support in most editors because their Foveon files require extra steps and adjustments that don’t fit the paradigm assumed by DNGs (and they claim it would release proprietary information if they used DNGs).
The bigger issue is that at the end of the day the dng format is very broad (but not broad enough) and you rely on the editor to implement it correctly (and completely). DNGs that you can open in one of the major editors will simply not open in another.
And more to the point: for their Foveon cameras that produced both X3F and DNG files, the image quality from the DNG files is objectively and substantially worse than from the X3F files.
By the way (hello, Adobe):
DNG compression compresses worse than 7zip, haha.
Too much for an open standard; too hard to add LZ4 as an option in the age of 24-core setups :D
I thought the main value of this article was that they went out and asked various vendors that question ("why proprietary formats?") and put all their answers in one place. Too bad Nikon and Fujifilm didn't respond, but I can imagine their motivations being similar to other vendors.
This is on a long list of why camera companies are dying.
There is a long list of issues like this which have prevented ecosystems from forming around cameras, in the way they have around Android or iOS. It's like the proprietary phones predating the iPhone.
The irony is that phones are gradually passing dedicated cameras in more and more respects as cameras enter a death spiral. Low volumes mean less R&D. Less R&D and no ecosystem mean low volumes. It all also translates into high prices.
The time to do this was about a decade ago. Apps, open formats, open USB protocols, open wifi / bluetooth protocols, and semi-open firmware (with a few proprietary blobs for color processing, likely) would have led things down a very different trajectory.
The price new fell by just 10% over the 7 years ($2000 -> $1800).
And in a lot of conditions, my Android phone takes better photos, by virtue of more advanced technology.
I have tens of thousands of dollars of camera equipment -- mostly more than a decade old -- and there just haven't been advancements warranting an upgrade. A modern camera will be maybe 30% better than a 2012-era one in terms of image quality, and otherwise, will have slightly more megapixels, somewhat better autofocus, and obviously be much smaller by the loss of a mirror. Video improved too.
The quote of the day is: "I wish it weren’t like this, but ultimately, it’s mostly fine. At least, for now. As long as the camera brands continue to work closely with companies like Adobe, we can likely trudge along just fine with this status quo."
No. We can't. The market has imploded. The roof is literally falling in and everyone says things are "fine."
Does anyone know how much volume there would be if cameras could be used in manufacturing processes for machine vision, on robots / drones, in self-driving cars, on buildings for security, as webcams for video conferencing, for remote education, and everywhere else imaging is exploding?
No. No one does, because they were never given the chance.
> And in a lot of conditions, my Android phone takes better photos, by virtue of more advanced technology.
> I have tens of thousands of dollars of camera equipment -- mostly more than a decade old -- and there just haven't been advancements warranting an upgrade. A modern camera will be maybe 30% better than a 2012-era one in terms of image quality, and otherwise, will have slightly more megapixels, somewhat better autofocus, and obviously be much smaller by the loss of a mirror. Video improved too.
I thought the same thing, and then I went and rented a Nikon Z8 to try out over a weekend and I was blown away by the "somewhat better autofocus". As someone who used to travel with a Pelican case full of camera gear, to just carrying an iPhone, I'm back to packing camera gear because I'm able to do things like capture tack-sharp birds in flight like I'm taking snapshots from the hip thanks to the massive increase in compute power and autofocus algorithms. "Subject Eye Detection AF" is a game-changer, and while phones do it, they don't have enough optical performance in their tiny sensors/lenses to do it at the necessary precision and speed to resolve things on fast-moving subjects.
In terms of IQ, weight, and all that, it's definitely not a huge difference. I would say it's better, but not so much that I particularly cared coming from a 12-year old DSLR. But the new AF absolutely shocked me with how good it is. It completely changed my outlook.
I say this not to take away from your overall point, however, which is that a phone is good enough for almost everyone about 90% of the time. It's good enough that even though I upgraded my gear, I only bought one body when I traded in two, because my phone can handle short focal length / landscape just fine; I don't need my Z8 for that. But a phone doesn't get anywhere close to what I can do with a 300mm or longer focal length lens on the Z8 with fast-moving subjects.
Depends. When a new camera is out, it often takes time before Lightroom, Capture One, etc. support it. While I don't care about how the raw format works, being able to keep using my usual software even with a brand new camera is something I care a lot about.
How? Support for RAW formats is reasonably complete. I can hop between different editors without much, if any, hassle at all. Since getting my first DSLR in the early 00's, I have used (in no order) Photos, Bibble, Lightroom, Aperture, Capture One, Photoshop, Pixelmator, Photomator, Darkroom, On1, Raw Power, Nitro Photo, Luminar, Darktable and RawTherapee, all without fuss. Where is the lock in?
I think the article misses the point: It's not about how complex the data structures are, it's about the result, in all its details.
Comparing different RAW converters (Lightroom, DXO), their image rendering is slightly different. If you compare the colors with the JPEG image, even more so. If the goal is to faithfully reproduce the colors as they were shown in the camera, you depend on the manufacturer's knowledge. To me, it makes no sense to have some "open" DNG format in the middle, when it's flanked by proprietary processing.
It's not about the format, it's about knowing the details, including parameters, of the image processing pipeline to get a certain look.
Ultimately, RAW formats aren't that complex, and camera firmware is mostly developed in countries that don't have strong open source software traditions.
Look at the decoders for each format that darktable supports here: https://github.com/darktable-org/rawspeed/tree/develop/src/l...
It's some binary parsing, reading metadata, maybe doing some decompression-- a thousand lines of C++ on average for each format. These aren't complex codecs like HEVC and only reach JPEG complexity by embedding them as thumbnails!
Cameras absolutely could emit DNG instead, but that would require more development friction: coordination (with Adobe), potentially a language barrier, and potentially making it harder to do experimental features.
Photographers rarely care, so it doesn't appreciably impact sales. Raw processing software packages have generally good support available soon after new cameras are released.
Fujifilm lossy compressed raw still isn't supported after many years [1].
[1] https://github.com/darktable-org/rawspeed/issues/366
And in my experience there have been lots of bugs with Fujifilm raws in darktable:
[2] https://github.com/darktable-org/rawspeed/issues/354
[3] https://github.com/darktable-org/darktable/issues/18073
However, Fujifilm lossless compressed raw actually does a decent job keeping the file sizes down (about 50% to 60% the file size of uncompressed) while maintaining decent write speed during burst shooting.
> Cameras absolutely could emit DNG instead, but that would require more development friction: coordination (with Adobe), potentially a language barrier, and potentially making it harder to do experimental features.
I think this is being too generous.
DNG is just an offshoot of TIFF. Having written a basic DNG parser having never read up on TIFFs before, it really isn’t that hard.
As far as experimental features, there’s room in the spec for injecting your own stuff, similar to MakerNote in EXIF if I recall.
If you are planning to do experimental stuff, I’d say what Apple pulled off with ProRAW is the most innovative thing that a camera manufacturer has done in forever. They worked with Adobe to get it into the spec. All of these camera manufacturers have similar working relationships with Adobe, so there’s really no excuse. And if you can’t wait that long, again, MakerNote it.
In my opinion, custom RAW formats are a case study in “Not Invented Here” syndrome.
It took a long time for Canon CR3 raw format to be supported by darktable because, although the format itself had been reverse engineered, there was a fear from the developers that it was covered by a patent and that they risked a lawsuit by integrating it in DT. IIRC, they had attempted to contact Canon legal to obtain some sort of waiver, without success.
In fact I'm not sure how that saga ended; CR3 support was finally added a few years after the release of the Canon mirrorless cameras that output CR3.
I think that might be why a lot of camera makers don't care to use DNG - it's easier to make their own format and easy enough for others to reverse engineer it.
One thing that open source libraries do tend to miss is that very important extra metadata - for example, Phase One IIQ files have an embedded sensor profile or full on black frame that is not yet encoded into the raw data like it typically is for a NEF or DNG from many cameras. It does seem rawspeed handles this from a quick scan of the code.
It can get more tricky - Sinar digital backs have an extra dark frame file (and flat frame!) that is not part of the RAW files, and that is not handled by any open source library to my knowledge - though I did write a basic converter myself to handle it: https://github.com/mgolub2/iatodng_rs
I'm not sure how DNG would be able to handle having both a dark and flat frame without resorting to applying them to the raw data and saving only the processed (still unbayered) data.
> One thing that open source libraries do tend to miss is that very important extra metadata - for example, Phase One IIQ files have an embedded sensor profile or full on black frame that is not yet encoded into the raw data like it typically is for a NEF or DNG from many cameras.
In astronomy/astrophotography the FITS format[1] is commonly used, which supports all these things and is, as the name suggests, extremely flexible. I wonder why it never caught on in regular photography.
1: https://en.wikipedia.org/wiki/FITS
Oh interesting! This seems like it would be a good fit ;)
Especially for really old setups that had RGB color wheels and multiple exposures, exactly like a multispectral astro image might. Phase One also has a multispectral capture system for cultural heritage, which just shoots individual IIQs to my knowledge... It would work great too for multiple pixel-shift shots.
Possibly, the engineers just didn’t know about it when they were asked to write the firmware? It’s funny; I think most RAW formats are just weird TIFFs to some degree, so why not use this instead?
Yes. TIFF would "fit the bill" here. It deals with multispectral satellite images. It supports 32- and 64-bit floats and 16-bit integers.
Oh nice, I didn't know that TIFF could handle that as well!
Can't you just throw such frames into additional IFDs or SubIFDs?
Maybe? I’m not familiar enough with DNG to say - possibly that wasn’t a thing when Phase One first started using IIQ? I doubt it was around when Sinar was - in fact the last two (esprit 65 & S30|45) Sinar backs do use DNG as an option!
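For what it's worth, the additional-IFD idea is easy to sketch with the tifffile Python package (the frame data here is made up; whether any given DNG/TIFF reader would preserve or honor the extra pages is another question):

    import numpy as np
    import tifffile

    raw  = np.zeros((4000, 6000), dtype=np.uint16)   # the mosaic itself
    dark = np.zeros((4000, 6000), dtype=np.uint16)   # per-pixel dark frame
    flat = np.ones((4000, 6000), dtype=np.float32)   # per-pixel flat frame

    # Each write() appends another IFD (page) to the same container.
    with tifffile.TiffWriter('capture.tif') as tif:
        tif.write(raw,  photometric='minisblack', description='raw mosaic')
        tif.write(dark, photometric='minisblack', description='dark frame')
        tif.write(flat, photometric='minisblack', description='flat frame')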
> Cameras absolutely could emit DNG instead, but that would require more development friction: coordination (with Adobe), [..]
Technically speaking, implementing DNG would be another development activity on top of a RAW export, because RAW also has a purpose in development and tuning of the camera and its firmware.
It is supposed to be raw data from the sensor with some additional metrics streamed in, just sufficiently standardized to be used in the camera-vendors' toolchain for development.
It just "happens" to be also available to select for the end-user after product-launch. Supporting DNG would mean adding an extra feature and then hiding the RAW-option again.
I can imagine it's hard to make this a priority in a project plan, since most of the objectives are already achieved by saving in RAW.
> It is supposed to be raw data from the sensor with some additional metrics streamed in, just sufficiently standardized to be used in the camera-vendors' toolchain for development.
This is what I was thinking, that there are potentially so many RAW formats because there are so many sensors with potentially different output data. There should be a way to standardize this though.
Yeah, but it's not standardised because its output is so close to "bare metal"; it's wrapped into a standardised format a few steps later when a JPG/HEIC/... is created.
Supporting DNG means that those few steps later it should be standardised into ANOTHER RAW-equivalent. A format which happens to be patented and comes with a license and legal implications.
Among them, the right for Adobe to license every method you used to make this conversion from your proprietary bare-metal sensor data. This is not trivial, because if you're a vendor working on sensor tech, you wouldn't want to be required to share all your processing with Adobe for free...
I have no knowledge of DNG; what I was suggesting is that someone should devise some kind of extensible, self-describing format that can be used in place of RAW without losing any sensor data (as happens with JPEG/HEIC/etc.).
Ah I see.
Well, DNG ("Digital Negative") is such a format, defined and patented by Adobe, but with a license allowing free use under certain conditions.
The conditions are somewhat required to make sure that Adobe remains in control of the format, but at the same time they create a commitment and legal implications for anyone adopting it.
> It just "happens" to be also available to select for the end-user after product-launch
RAW (any format) is an essential requirement for many photographers. You just can't get the same results out of a jpeg.
None of this is disputed (or relevant) in this conversation
I disagree. Bufferoverflow frames raw formats as something that's really only there for R&D purposes, and it's more or less just an afterthought that it's available to photographers. In reality, Narretz points out, getting access to the raw sensor data is a key feature to many photographers; it's an essential aspect of the product from a user perspective.
Since you disagree: where in this thread did anyone state the opposite of what you just wrote, who said that RAW is NOT a key feature to many photographers?
Here:
> It is supposed to be raw data from the sensor with some additional metrics streamed in, just sufficiently standardized to be used in the camera-vendors' toolchain for development. It just "happens" to be also available to select for the end-user after product-launch.
> Technically speaking, implementing DNG would be another development activity on top of a RAW export,
What are you talking about? Canon could implement DNG instead of CR3. It's not that hard. Both of these formats are referred to as "RAW".
Just as I wrote. CR3 is used by Canon also during development and tuning of their sensors and cameras.
DNG would not replace CR3, because CR3 would still be needed before launch, and Canon has no incentive to change their entire internal toolchain to comply to Adobes DNG specification.
Especially not because the DNG format is patented and allows Adobe to revoke the license in case of dispute...
First of all, it does not "just happen" to be selectable. RAW contains information that is not available in a JPG or PNG, but which is crucial to a lot of serious photographers.
Second, the native raw images do include a ton of adjustments in brightness, contrast and color correction. All of which gets lost when you open the image file with apps provided by companies other than the camera vendor. E.g. open a Nikon raw in NC Software and then in Lightroom. Big difference. Adobe has some profiles that get near the original result, but the Nikon raw standards are often better.
So DNG would absolutely be an advantage because then at least these color corrections could natively be implemented and not get lost in the process.
No one is disputing the advantage of RAW. I tried to provide the view from a pure development perspective, looking at a feature backlog.
It "just happens" to be selectable because it is a byproduct of the internal development: The existing RAW format is used internally during development and tuning of the product, and is implemented to work with vendor-internal processes and tools.
Supporting DNG would require a SEPARATE development, and it would still not replace a proprietary RAW-format in the internal toolchain.
(because the DNG patent-license comes with rights for Adobe as well as an option to revoke the license)
Most people who shoot RAW don't care for the in-camera picture adjustments, so they don't care if the RAW shows up looking like it did in the camera, because we apply our own edits anyway; if we need something like that, we shoot JPEG.
> It is supposed to be raw data from the sensor with some additional metrics streamed in
...and what do you think DNG is?
A patented format where Adobe standardized the exact syntax for each parameter, with mandatory and optional elements to be compliant, and (!) a patent license with some non-trivial implications which is also only valid if the implementation is compliant.
In a development environment, this format competes with an already-implemented proprietary RAW-format which already works and can be improved upon without involvement of a legal department or 3rd party.
In my personal opinion, considering a file format as something that is patentable is where you (i.e. your country) have gone wrong here.
It doesn't seem to reward innovation, it seems to reward anti-competitive practices.
> it seems to reward anti-competitive practices.
That is the intended purpose of a patent. From WIPO [1]:
> The patent owner has the exclusive right to prevent or stop others from commercially exploiting the patented invention for a limited period within the country or region in which the patent was granted. In other words, patent protection means that the invention cannot be commercially made, used, distributed, imported or sold by others without the patent owner's consent.
[1] https://www.wipo.int/en/web/patents
This is not correct. Both the subhead of the article and the DNG format's Wikipedia Page state that DNG is open and not subject to IP licensing.
While having two file formats to deal with in software development definitely "competes" with the simplicity of just having one, patents and licensing aren't the reason they're not choosing Adobe DNG.
The fact that both your sources are NOT the actual DNG license text should be sufficient to humble yourself from "This is not correct" to at least a question.
--> Your information source is incomplete. Please refer to the license of DNG [0].
The patent rights are only granted:
1. When used to make compliant implementations to the specification,
2. Adobe has the right to license at no cost every method used to create this DNG from the manufacturer, and
3. Adobe reserves the right to revoke the rights "in the event that such licensee or its affiliates brings any patent action against Adobe or its affiliates related to the reading or writing of files that comply with the DNG Specification"
--
None of this is trivial to a large company.
First of all, it requires involvement of a legal department for clearance,
Second, you are at risk of violating the patent as soon as you are not compliant with the specification,
Third, you may have to open up to Adobe, at no charge, every piece of IP required to create a DNG from your sensor (which can be a significant risk and burden if you develop your own sensor), and
Fourth, in case the aforementioned IP is repurposed by Adobe and you take legal action, your patent-license for DNG is revoked.
--
--> If you are a vendor with a working RAW implementation and all the necessary tools for it in place, it's hard to make a case on why you should go through all that just to implement another specification.
[0] https://helpx.adobe.com/camera-raw/digital-negative.html#dng
None of this is terrifying and seems overblown. I read the patent grant you linked to. It makes sense that one would not grant the right to make incompatible versions. That would confuse the user. Also, the right of revocation only applies if the DNG implementor tries to sue Adobe. Why would they do that?
Occam's razor here suggests that the camera manufacturers' answers are correct, especially since they are all the same. DNG doesn't let them store what they want to and change it at will -- and this is true of any standardized file format and not true of any proprietary format.
> None of this is terrifying and seems overblown. I read the patent grant [..]
Considering that you entered this discussion instantly claiming that others are wrong without having even read the license in question makes this conversation rather..."open-ended"
> Also, the right of revocation only applies if the DNG implementor tries to sue Adobe. Why would they do that?
As I wrote above, Adobe reserves the right to use, at no cost, every patent that happens to be used to create this DNG from your design, and will revoke your license if you object, e.g. to what they do with it.
> Occam's razor here suggests [..]
Or, as I suggested, it's simply hard to make a case in favor of developing and maintaining DNG, with all that burden, if you have to support RAW anyway.
That's fair. It's certainly not "open source" in that way that term is usually used. I still think that's not the primary issue and that the manufacturers are being honest about their preference for proprietary formats. But I see that Adobe legal concerns hanging over their heads isn't an advantage, for sure.
Also...
> granted by Adobe to individuals and organizations that desire to develop, market, and/or distribute hardware and software that reads and/or writes image files compliant with the DNG Specification.
What if I use it for something that's not images, because I want to create a file that's a DNG file and a Game Boy ROM at the same time? Or if I'm a security researcher testing non-compliant files? Or if I'm not a great developer, or haven't had enough time to make my library perfectly compliant with the specification... will I be sued for breaking the license?
The fatal scenario for a camera vendor would be to transition your customers to DNG over some years, then a dispute arises which causes Adobe to revoke your patent license, and suddenly all your past products are in violation of Adobe's DNG patent.
You not only have to remove DNG support on those products, but due to warranty law in many countries you also have to provide an equivalent feature to the customer (--> develop a converter application again, but this time for products you already closed development on years ago).
The alternative would be to settle with Adobe to spare the cost of all that. So Adobe holds all the cards in this game.
Now: Why bother transitioning your customers to DNG...?
What? Number two would make most companies run the other way. “Whatever you use to create a DNG, secret sauce or algorithm or processing from your sensor data, Adobe can license” - you act like it’s no big deal but it’s often the closely guarded color science or such things.
You can argue that maybe those things shouldn’t be considered trade secrets or whatever. But there’s just a bit more to it than that.
A file format containing a subset of the image sensor data needed for tuning an image sensor. It's user focused rather than camera developer focused.
Neither DNG nor various vendor-specific raw formats are meant for tuning an image sensor. They can be used for that in some specific cases, but it's not what they are for. They're for taking photos and providing the user with less opinionated data so they can do the processing of their photos the way they want rather than rely on predefined processing pipeline implemented in the camera.
Despite the name, this is rarely a pure raw stream of data coming from the sensor. It's usually close enough for practical photographic purposes though.
I always thought camera RAW formats were optimized for continuous shooting rates: about being able to linearly write an image as fast as possible.
I don't know the details of DNG, but even the slightest complication could be a no-go for some manufacturers.
The main reason people shoot raw is to have more creative control over the final product.
A simple example is white balance. The sensor doesn't know anything about it, but typical postprocessing makes both a 2700K incandescent and a 5700K strobe look white. A photographer might prefer to make the incandescent lights look more yellow. There's a white balance setting in the camera to do that when taking the picture, but it's a lot easier to get it perfect later in front of a large color-calibrated display than in the field.
Another example is dealing with a scene containing a lot of dynamic range, such as direct sunlight and dark shadows. The camera's sensor can capture a greater range of brightness than a computer screen can display or a printer can represent, so a photographer might prefer to delay decisions about what's dark grey with some details and what's clipped to black.
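As a sketch of why deferring this costs nothing: white balance on raw data is just per-channel gain applied before demosaicing. The gains below are hypothetical; in practice they come from the camera's as-shot metadata or a neutral patch picked later:

    import numpy as np

    def apply_wb(mosaic: np.ndarray, r_gain: float, b_gain: float) -> np.ndarray:
        """Scale the R and B sites of an RGGB mosaic; green is the reference."""
        out = mosaic.astype(np.float32).copy()
        out[0::2, 0::2] *= r_gain   # R sites
        out[1::2, 1::2] *= b_gain   # B sites
        return out

    # Incandescent light is red-heavy and blue-poor, so neutralizing it needs
    # a small red gain and a large blue gain (illustrative numbers only):
    # balanced = apply_wb(mosaic, r_gain=1.1, b_gain=2.4)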
?? This was not asked.
Everything you said is supported by regular image formats. You can adjust the white balance of any photo, and do you think image formats are limited to 16-bit and sRGB?
That’s not why we use RAW. It’s partly because (1) if you used Adobe RGB or Rec. 709 on a JPEG, a lot of people would screw it up, (2) you get a little extra raw data from the pre-filtering of Bayer, X-Trans, etc. data, (3) it’s less development work for camera manufacturers, and (4) partly historical.
> Everything you said is supported by regular image formats. You can adjust white balance of any photo and you think image formats are only limited to 16-bit and sRGB?
No - the non-RAW image formats offered were traditionally JPG and 8-bit TIFF. Neither of those are suitable for good quality post-capture edits, irrespective of their colour space (in fact, too-wide a colour space is likely to make the initial capture worse because of the limited 8-bit-per-colour range).
These days there is HEIF/similar formats, which may be good enough. But support in 3rd party tools (including Adobe) is no better than RAW yet, i.e., you need to go through a conversion step. So...
Also don't forget one of the promises of RAW: That RAW developers will continue to evolve, so that you'll be able to generate a better conversion down the line than now. Granted, given the maturity of developers the pace of innovation has slowed down a lot compared to 20 years ago, but there are still incremental improvements happening.
Another advantage of RAW is non-destructive editing, at least in developers that support it and are more than import plugins for traditional editors. I rarely have to touch Photoshop these days.
What format can I change the white balance of an image in, other than RAW? In all the years I have used digital cameras, I can't think of one...
Try to adjust shadows and highlights in a JPG vs a raw file and see what happens. There is no data there in the JPG, just black and blown-out white. With a raw file you can brighten the shadows and find Mothman standing there, with a little extra sensor noise.
Are you adjusting an 8-bit JPG (probably) or a 12-bit JPG (rare)?
Try adjusting an 8-bit RAW file and you will have the same problem.
You are conflating format and bit depth.
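The bit-depth point is easy to demonstrate numerically: quantize the same deep shadows to 8 and 14 bits, push them 3 stops, and count the surviving tonal levels (toy numbers, linear light assumed):

    import numpy as np

    shadows = np.linspace(0, 0.02, 1000)        # bottom 2% of the tonal range
    q8  = np.round(shadows * 255) / 255         # 8-bit quantization
    q14 = np.round(shadows * 16383) / 16383     # 14-bit quantization

    push = lambda x: np.clip(x * 8, 0, 1)       # +3 EV shadow lift
    print(np.unique(push(q8)).size)             # ~6 levels -> visible banding
    print(np.unique(push(q14)).size)            # hundreds of levels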
The bottleneck is usually SD card write speed, however. Sports photographers often skip raw and use only JPG because the files are smaller and, as a result, one can take more photos in one burst.
For raw at high frame rates, high end cameras don't use SD cards but things like CFexpress which absolutely can keep up (and there are also various compressed RAW formats these days which apply a degree of lossy compression to reduce file size).
As I understand it, the reason some professional sports photographers don't shoot RAW (or it's less important) is more because they are in an environment where publishing quickly is important, so upload speeds matter and there isn't really much time to postprocess.
Canon’s “sport professional” camera has lower resolution than the “second tier” cameras. It has a higher frame rate and CFExpress and SDXC2 so bandwidth isn’t an issue. Last I checked you could burst 40 or 50 frames (at 20ish fps) before filling the internal buffer.
You can definitely do more than that these days. My Nikon Z8 can do 30fps with 1xCFExpress, and the flagship Z9 can do 120fps because it has 2xCFExpress and alternates writes. On the Sony side they have a closer differentiation to what you describe, the flagship (A1 II) does only 30fps compared to the next-best (A9 III) which does 120fps, while the prosumer (A7 RV) only does 10fps.
I don't know Canon well, but 120fps w/ dual CFExpress + 800-1200 frames buffer is fairly standard on top-end professional sports/wildlife mirrorless cameras these days.
I believe this might have been the case in the past, where (a) sensor resolutions were lower, so the raw images were less bulky, and (b) camera CPUs were slower, so you would want to take them out of the equation.
These days, the bottleneck for achieving a continuous shooting rate is probably writing to the SD card (which is the standard for consumer/prosumer models).
It is always written into a memory buffer first, which could be something like 256 megabytes... it takes time to fill it up; once it is filled, memory card speed becomes the bottleneck. So, actually, writing only JPEGs would trigger the slowdown later, so you could take more frames before the buffer fills up.
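Back-of-the-envelope version of that, with made-up but plausible numbers:

    buffer_mb, fps = 256, 20          # buffer size, burst rate
    card_mb_s = 90                    # sustained card write speed
    raw_mb, jpeg_mb = 45, 12          # per-frame file sizes

    def burst_depth(frame_mb):
        """Frames before the buffer fills: the camera adds fps * frame_mb
        per second while the card drains card_mb_s per second."""
        net = fps * frame_mb - card_mb_s
        return float('inf') if net <= 0 else buffer_mb / net * fps

    print(burst_depth(raw_mb))   # ~6 frames of raw before slowdown
    print(burst_depth(jpeg_mb))  # ~34 frames of JPEG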
This was my guess too: get the raw Bayer data from the sensor in one go, plus some metadata. Then as the sensors and cameras evolve, they just accumulate different formats?
DNG is a TIFF file, just like most proprietary raw formats.
> Cameras absolutely could emit DNG instead, but that would require more development friction: coordination (with Adobe), potentially a language barrier, and potentially making it harder to do experimental features.
I am a weirdo and have always liked and used Pentax (now Ricoh); they do support the DNG format.
Pentax/Ricoh really is a hidden gem. I love my "dinosaur" K1 Mark II and my GR IIIx goes EVERYWHERE I go.
These formats aren't complex because they really are supposed to be raw (-:
But yeah, it would be preferable to have them use the digital negative (DNG) format; then again, why bother when the community does the work for them? Reminds me of how Bethesda does things.
Traditional Nikon NEF is pretty simple. It's just a tiff. Lossy compression is just gamma-encoding with a LUT (stored in the file). I think most traditional raws are principally similar. More complex compression schemes like ticoraw are fairly recent.
What's complex is the metadata. All the cameras have different AF, WB and exposure systems.
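A toy version of that scheme, with a hypothetical square-law curve standing in for Nikon's actual table: 14-bit linear values are squeezed into 12-bit codes, and the decoder inverts them with the LUT stored in the file. Shadow steps stay fine; highlight steps coarsen where shot noise already swamps them:

    import numpy as np

    BITS_IN, BITS_OUT = 14, 12
    max_in, n_codes = 2**BITS_IN - 1, 2**BITS_OUT

    # Decode table: code -> linear value (this is what the file would store).
    decode_lut = np.round(np.linspace(0, 1, n_codes)**2 * max_in).astype(np.uint16)

    def encode(linear: np.ndarray) -> np.ndarray:
        # Map each 14-bit linear value to a 12-bit code via the inverse curve.
        return np.searchsorted(decode_lut, linear).clip(0, n_codes - 1).astype(np.uint16)

    def decode(codes: np.ndarray) -> np.ndarray:
        return decode_lut[codes]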
The contents are simple. How to interpret the contents is not simple. That is why you see internet advice advocating for keeping old raw files around: Lightroom and Photoshop sometimes get updates which can squeeze better results out of old raw files.
(Edit: I mean, if you want to get a basic debayered RGB image from a raw, that's not too hard. But if you want to squeeze out the most, there are a lot of devils in a lot of details. Things like estimating how many green pixels are not actually green, but light spill from what should have been red pixels, is just the beginning.)
Yet that's processing-level stuff, not format stuff. It's even unlikely that the manufacturer gets the best possible result out of the sensor input as it is.
Why would you need coordination with Adobe? Their software already reads DNG files just fine.
Raw decoding is not as simple as you might think.
It’s the best place to add “signature steps.” Things like noise reduction, chromatic aberration correction, and one-step HDR processing.
I used to work for a camera manufacturer, and our Raw decoder was an extremely intense pipeline step. It was treated as one of the biggest secrets in the company.
Third-party demosaicers could not exactly match ours, although they could get very good results.
Anecdotally, using Darktable, I could never get as good of a demosaicing result as using the straight-out-of-camera JPEGs from my Fujifilm GFX 100S. In challenging scenarios such as fine diagonal lines, Darktable's algorithms such as LMMSE would add a lot of false colour to the image.
However, modern deep learning-based joint demosaicing and denoising algorithms handily outperform Darktable's classical algorithms.
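For readers unfamiliar with the problem: demosaicing reconstructs three color channels from the single-channel mosaic, and the simplest approach, bilinear interpolation, is just a few convolutions. Sketch for an RGGB Bayer layout; real converters use smarter algorithms (LMMSE, learned models) precisely because this naive version smears edges and produces the false colour described above:

    import numpy as np
    from scipy.ndimage import convolve

    def demosaic_bilinear(mosaic: np.ndarray) -> np.ndarray:
        """Naive bilinear demosaic of an RGGB mosaic (float HxW -> HxWx3)."""
        h, w = mosaic.shape
        r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
        b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
        g_mask = 1 - r_mask - b_mask

        k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0  # 4 axial neighbours
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0  # axial + diagonal

        r = convolve(mosaic * r_mask, k_rb)
        g = convolve(mosaic * g_mask, k_g)
        b = convolve(mosaic * b_mask, k_rb)
        return np.stack([r, g, b], axis=-1)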
Well, it is obvious that between a RAW file and the final image there are a lot of complex processing steps. But that is independent of the file format used. DNG isn't so much different, just documented. And while the manufacturers converter might give the best results, the photographers rather use the image processing programs from Adobe or their competition which use their own RAW converters anyway.
Yeah, they could do it with DNG (I suppose), but they don't really have any reason to do so (in their minds). Personally, I like open stuff, but they did not share my mindset, and I respected their posture.
Raw decoding is an algorithm, not a container format. The issue is everyone coming up with their own proprietary containers for identical data that just represents sensor readings.
It's more than just a file format.
The issue is that companies want control of the demosaicing stage, and the container format is part of that strategy.
If a file format is a corporate proprietary one, then there's no expectation that they should provide services that do not directly benefit them, or that expose internal corporate trade secrets, in service to an open format.
If they have their own format, then they don't have to lose any sleep over stuff that doesn't interest or benefit them.
By definition, a RAW container contains sensor data, and nothing more. Are you saying that Adobe is using their proprietary algorithms to render proprietary RAW formats in Lightroom?
I don’t know about Adobe. I never worked for them.
Can you share what company have you worked for?
Not publicly. It’s not difficult to figure out, but I make it a point, not to post stuff that would show up in their search algorithms.
But it was a pretty major one, and I ran their host image pipeline software team.
[Edited to Add] It was one of the “no comment” companies. They won’t discuss their Raw format in detail, and neither will I, even though it has been many years, since I left that company, and it’s likely that my knowledge is dated.
You've made it pretty clear, thank you.
That was my suspicion initially. In fact, when I read about mass DNG adoption, my first thought was "but how would it work for this company?" (admittedly I don't know much about DNG, but intuitively I had my doubts).
And then I saw your comment.
> They won’t discuss their Raw format in detail
Can you share the reason for that?
It seems to me that long ago, camera companies thought they would charge money for their proprietary conversion software. It has been obvious for nearly as long that nobody is going to pay for it, and delayed compatibility with the software people actually want to use will only slow down sales of new models.
With that reasoning long-dead, is there some other competitive advantage they perceive to keeping details of the raw format secret?
The main reason is that image quality is their corporation's main currency. They felt that it was a competitive advantage, and sort of a "secret ingredient," like you will hear about from master chefs.
They feel that their images have a "corporate fingerprint," and are always concerned that images that don't demonstrate it might get out.
This often made it difficult to get sample images.
Also, for things like chromatic aberration correction, you could add metadata that describes the lens that took the picture, and use that to inform the correction algorithm.
In many cases, a lens that displays chromatic aberration is an embarrassment. It's one of those "dirty little secrets" that camera manufacturers don't want to admit exist.
As they started producing cheaper lenses, with less glass, they would get more ChrAb, and they didn't want people to see that.
Raw files are where you can compensate for that with the least impact on image quality. You can have ChrAb correction applied after the demosaic, but it will be "lossy." If you can apply it before, you can minimize data loss. Same with noise reduction.
Many folks here would absolutely freak if they saw the complexity of our deBayer filter. It was a pretty massive bit of code.
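To make the before-vs-after-demosaic point concrete, here is one hedged guess at what pre-demosaic lateral CA correction could look like: the red and blue subplanes are resampled radially so they register with green while the data stays mosaiced. Illustrative only, not any manufacturer's actual method; a real pipeline would use a per-lens distortion model rather than a single scale factor:

    import numpy as np
    from scipy.ndimage import map_coordinates

    def scale_about_centre(plane: np.ndarray, factor: float) -> np.ndarray:
        """Radially rescale one colour subplane about its centre."""
        h, w = plane.shape
        cy, cx = (h - 1) / 2, (w - 1) / 2
        yy, xx = np.mgrid[0:h, 0:w]
        return map_coordinates(plane,
                               [cy + (yy - cy) / factor, cx + (xx - cx) / factor],
                               order=1, mode='nearest')

    def correct_ca(mosaic: np.ndarray, r_scale: float, b_scale: float) -> np.ndarray:
        out = mosaic.astype(np.float32).copy()
        out[0::2, 0::2] = scale_about_centre(out[0::2, 0::2], r_scale)  # R sites
        out[1::2, 1::2] = scale_about_centre(out[1::2, 1::2], b_scale)  # B sites
        return out

    # e.g. corrected = correct_ca(mosaic, r_scale=1.0004, b_scale=0.9996)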
I am very skeptical that chromatic aberration correction can be applied before demosaicing with the result then stored in a Bayer array again. There seems to be no advantage in storing the result of chromatic aberration correction in a raw Bayer array, which has less information, rather than in a full array with the three RGB values per pixel. Perhaps I am not understanding it correctly?
Thanks for the explanation. I have to question how reality-based that thinking is. I do not, of course expect you to defend it.
It seems to me that nearly all photographers who are particularly concerned with image quality shoot raw and use third-party processing software. Perhaps that's a decision not rooted firmly in reality, but it would take a massive effort focused on software UX to get very many to switch to first-party software.
> Raw files are where you can compensate for that, with the least impact on image quality. You can have ChrAb correction, applied after the demosaic, but it will be "lossy."
Are you saying that they're baking chromatic aberration corrections into the raw files themselves so that third-party software can't detect it? I know the trend lately is to tolerate more software-correctable flaws in lenses today because it allows for gains elsewhere (often sharpness or size, not just price), but I'm used to seeing those corrections as a step in the raw development pipeline which software can toggle.
I think we're getting into that stuff that I don't want to elaborate on. They would probably get cranky I have said what I've said, but that's pretty common knowledge.
If the third-party stuff has access to the raw Bayer format, they can do pretty much anything. They may not have the actual manufacturer data on lenses, but they may be able to do a lot.
Also, 50MP, lossless-compressed (or uncompressed) 16-bit-per-channel images tend to be big. It takes a lot to process them; especially if you have time constraints (like video). Remember that these devices have their own, low-power processors, and they need to handle the data. If we wrote host software to provide matching processing, we needed to mimic what the device firmware did. You don't necessarily have that issue, with third-party pipelines, as no one expects them to match.
It might be the non-mosaic one.
What does that have to do with storing the raw sensor values?
See my comment below.
Every camera has a unique RAW format, even cameras from the same company. The article briefly mentions this but doesn't go into much detail. I've got at least 10 Nikon cameras going back to 2005, and every "NEF" Nikon RAW file is different, so if you buy your camera on the first day it is released, you have to wait for your software vendor to add support or shoot in JPEG format. There have been a few times when the RAW files were so similar that you could use a hex or EXIF editor, change the camera model EXIF field to an older supported camera, and load the file. But in theory the RAW converter has been profiled for each specific camera using ICC color targets and stuff like that.
> But in theory the RAW converter has been profiled for each specific camera using ICC color targets and stuff like that.
In practice too, if consistent results are desired. The format being identical doesn't mean the values the sensor captures under the same conditions will be identical, so a color-calibrated workflow could produce wrong results.
It would be nice to have a setting for "treat camera Y like camera X (here there be dragons)" though. I've had to do something similar with the Lensfun database to get lens corrections working on Mk. II of a lens where Mk. I was supported, but a GUI would be nice. A prompt to guess the substitution automatically would be even nicer.
One problem is that you cannot have a universal format that is both truly raw and doesn't embed camera specific information. Camera sensors from different companies (and different generations) don't have the same color (or if you prefer, spectral) responses with both their Bayer filter layer and the underlying physical sensor. If you have truly raw numbers, you need the specific spectral response information to interpret them; if you don't need spectral response information, you don't actually have truly raw numbers. People very much want raw numbers for various reasons, and also camera companies are not really enthused about disclosing the spectral response characteristics of their sensors (although people obviously reverse engineer them anyway).
> Camera sensors from different companies (and different generations) don't have the same color (or if you prefer, spectral) responses with both their Bayer filter layer and the underlying physical sensor
This is all accommodated for in the DNG spec. The camera manufacturers specify the necessary matrix transforms to get into the XYZ colorspace, along with a linearization table.
If they really think the spectral sensitivity is some valuable IP, they are delusional. It should take one Macbeth chart, a spreadsheet, and one afternoon to reverse engineer this stuff.
Given that third party libraries have figured this stuff out, seems they have failed while only making things more difficult for users.
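A sketch of that DNG color path, with a made-up matrix (real cameras ship their own calibration, and the full DNG pipeline also involves CameraCalibration and ForwardMatrix tags):

    import numpy as np

    # DNG's ColorMatrix maps CIE XYZ to camera-native RGB for a calibration
    # illuminant; rendering goes the other way. Values here are invented.
    color_matrix = np.array([
        [ 0.9434, -0.3433, -0.0982],
        [-0.4645,  1.2327,  0.2611],
        [-0.0881,  0.1617,  0.5557],
    ])

    def camera_rgb_to_xyz(rgb: np.ndarray) -> np.ndarray:
        """Map linearized, white-balanced camera RGB (...x3) to XYZ."""
        return rgb @ np.linalg.inv(color_matrix).T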
What does RAW really mean then? Couldn't they simply redefine what RAW means to create a standard that can include proprietary technology? Like why not define it as including a spectral response?
There is no 'RAW' format as such. In practice, 'RAW' is a jargon term for "camera specific format with basically raw sensor readings and various additional information". Typically the various RAW formats don't embed the spectral information, just a camera model identifier, because why waste space on stuff the camera makers already know and will put in their (usually maker specific) processing software.
(Eg Nikon's format is 'NEF', Canon's is 'CR3', and so on, named after the file extensions.)
I don't know if DNG can contain (optional) spectral response information, but camera makers were traditionally not enthused about sharing such information, or for that matter other information they put in their various raw formats. Nikon famously 'encrypted' some NEF information at one point (which was promptly broken by third party tools).
This is confusing.
A 1920x1080 24-bit RAW image is a file of exactly 6,220,800 bytes. There are only a few possible permutations of parameters: which of the 4 corners comes first, whether it's row-major or column-major order, what order the 3 colors are in (RGB or BGR), and whether the colors are stored as planes or not. (Without planes, a pixel's R, G and B bytes are adjacent. With planes, you essentially have three parallel monochrome images, i.e. cat r.raw g.raw b.raw > rgb.raw) [1]
What the article is describing sounds like something that's not a raw file, but a full image format with a header.
[1] One may ask, how does the receiving software know the file is 1920 x 1080 and not, say, 3840 x 540? Or for that matter, a grayscale image of size 5760 x 1080?
The answer is that, with no header, you have to supply that information yourself when importing the image. (E.g. you have to manually type it into a text entry field in your image editor's file import UI.)
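In code, the ambiguity is literal: the same 6,220,800 bytes reshape any way you tell them to ('frame.raw' is a hypothetical headerless file):

    import numpy as np

    data = np.fromfile('frame.raw', dtype=np.uint8)  # 6,220,800 bytes

    rgb       = data.reshape(1080, 1920, 3)  # interleaved RGB, row-major
    planar    = data.reshape(3, 1080, 1920)  # three separate colour planes
    gray_wide = data.reshape(1080, 5760)     # or a 5760-wide grayscale image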
> what order the 3 colors are in (RGB or BGR)
Camera raw files typically come in a raw bayer mosaic so each pixel has only one colour.
> This is confusing.
Well, yes. You’re thinking of the classic RAW format that was just a plain array of RGB pixels without a header.
When talking about digital cameras, RAW refers to a collection of vendor-specific file formats for capturing raw sensor data, together with a bunch of metadata.
I did push for all my digital images to be DNG, and they are, up to around 2018, and two out of four cameras use it natively - Pentax, Leica - while the other two use their own formats - Canon, Fuji.
The reason I’m less fussy now is because the combination of edits, metadata and image data in a single file didn’t necessarily help me when I switched from Lightroom to Capture One. I would love to be able to update the files to use newer RAW processors and better IQ, but I lose the Lightroom edit information in C1. That makes sense as they do things differently. But I hoped that with DNG there was a universal format for handling edits.
My JPEGs remain the definitive version of the images but I would love to be able to recover all those original edits again in C1, or any other editing program.
DSLRs dropped off the wagon a long time ago when it comes to software, and especially meaningful UX innovation.
As an anecdote, I have a Sony a7r and operating it via its mobile app is one of the worst user experiences I have had in a while.
The same goes for the surrounding ecosystem of software. E.g. Adobe's Lightroom is full of obsolete paradigms and weird usability choices.
Over the past 15-20 years I've used both Sonys, Canons and Nikons, and I absolutely feel that Nikon puts a lot more effort, with much better results, into the usability of their pro/prosumer cameras - and, really, even their $500-$1000 consumer range - both in terms of the on-display UI and the ergonomics and handling of the actual camera.
What always stood out most for me compared to Canon was Nikon's larger viewfinders, letting you commit to actual photography rather than being stuck with a feeling of peeping through a keyhole, and placement of buttons on the camera body allowing for maintained control of the most necessary functions (shutter speed, aperture and even ISO) without having to change your grip or move the camera away from your face.
Nikon bodies are designed by photographers, and in the F-mount line, also by the same guy who did the Ferraris that made that brand's name.
Canon bodies are designed by engineers, who all had to prove they could palm a cinder block in order to get hired.
Sony bodies are designed by the cinder block.
The only Nikons I own are the 35mm film FM2 and F4. The bodies feel like tactile bliss. The FM2 has a dry-lubricated system with a crazy titanium honeycomb-etched shutter, and the F4 is the last pro SLR they made with no menu system.
On the digital front I found Fuji X-Txx series to be like tiny Nikons in their usability with all common controls on dials.
I'm (at least) a third-generation Nikon shooter, and I still have my grandfather's FTn. For its era, predating CNC and CAD, it is very comfortable to use, but the leather "eveready" case shell is welcome.
(One reason I shoot Nikon is because I can still shoot his glass on modern bodies. Indeed, that's what my D5300 spends a lot of its time wearing these days.)
True revolutions in consumer imaging excepted, I doubt I'll feel more than an occasional gadget fan's urge to replace my D850 and D500 as my primary bodies. Oh, the Z series has features, I won't disagree, even if I'm deeply suspicious of EVFs and battery life. But the D850 is a slightly stately, super-versatile full-frame body, and the D500 is a light, 20fps APS-C, that share identical UIs, lens and peripheral lineups, and (given a fast card to write to) deep enough buffers to mostly not need thinking about.
For someone like me who cares very little about technical specs, and a great deal for the ability to hang a camera over their shoulder and walk out the door and never once lose a shot due to equipment failure, there really isn't much that could matter more. I may have 350 milliseconds to get a flight shot of a spooked heron, or be holding my breath and focusing near 1:1 macro with three flash heads twelve inches away from a busily foraging and politely opinionated hornet. In those moments, eye and hand and machine and mind and body all must work as one to get the shot, and having to think at all about how to use the camera essentially guarantees a miss.
Hence the five years of work I've put into not having to think about that. I suppose I could've done more than well enough with any system, sure. But my experiences with others have left me usually quite glad Nikon's is the system I invested in.
Old school Zeiss glass is like butter for any camera body. My dad told me to stick with Nikon and spend my money on lenses first. He was not wrong. You can put 25 year old professional lenses on a mid-market Nikon body and the images will be stunning with very little effort.
I haven't tried the mirrorless cameras, but on DSLRs Canon is great UX, IMO. Everything you need to adjust on the fly is easy. It's usually controlled with a dial that can change the parameter it adjusts via a modifier button, saving you what might be yet another dial on, say, a Fuji X-T5.
But even then, once you've metered a scene, how often do you adjust ISO on the fly? Hardly ever. Fixed ISO, aperture priority, center-dot focus and metering, off to the races.
Most hardware companies are just terrible at software in general. Camera makers are pretty average in that regard.
Usability of the camera hardware and software ecosystem is another matter. I think the common wisdom is that most paying users don't want beginner-friendly, they want powerful and familiar. So everything emulates the paradigms of what came before. DSLRs try to provide an interface that would be familiar to someone used to a 50 year old SLR camera, and Lightroom tries to emulate a physical darkroom. Being somewhat hostile to the uninitiated might even be seen as a feature.
It's also like 4 digital dials. And you can leave most on Auto until you realize each specific dial enables something you desire. Sony tried the "non-scary automagic" approach, and instantly went back to dials.
There's also the Sigma BF if that's what you want; Sigma actually does a pretty good job from the perspective of a minimalistic, idealistic, on-point, field-usable UI, though the return on that effort just isn't worthwhile. I have the OG DP1; it feels as natural as a PS/2 IntelliMouse. I've tried the dp2 Quattro once and it felt as natural as any serious right-handed trackball. They scratch so many of camera nerds' itching points.
Most people just buy an A7M4 and a 24-70 Zeiss. And then they stupidly leave it all on auto and never touch the dials. And it puts smiles on people's faces 80% of the time. And that's okay. No?
Yes, fully agreed. However, the way the companies currently approach this, catering to an ever-shrinking niche, will end up killing DSLRs over time. They just don't offer enough over phones, and the UX/SW being so crappy alienates the potential new user base completely.
> They just don't offer enough over phones
You can achieve maybe a quarter of the kinds of shots on a phone that an interchangeable-lens camera will let you make.
That's an extremely important quarter! For most people it covers everything they ever want a camera to do. But if you want to get into the other 75%, you're never going to be able to do it with the enormous constraints imposed by a phone camera's strict optical limits, arising from the tight physical constraints into which that camera has to fit.
I had two phones with 108MP sensors, and while you can zoom in on the resulting image, the details are suggestions rather than what I would consider pixels.
Whereas a $1500 Nikon 15MP from 20 years ago is real crisp, and I can put a 300mm lens on it if I want to "zoom in".
Even my old Nikon 1 V1 with its 12MP cropped sensor takes "better pictures" than the two 108MP phone cameras.
But there are uses for the pixel density and I enjoyed having 108MP for certain shots, otherwise not using that mode in general.
Yeah, that's the exact tradeoff. 108MP (or even whatever the real photosite count is that they're shift-capturing or otherwise trick-shooting to get that number) on a sensor that small is genuinely revolutionary. But only giving that sensor as much light to work with as a matchhead-sized lens can capture for it, there's no way to avoid relying very heavily on the ISP to yield an intelligible image. Again, that does an incredible job for what little it's given to work with - but doing so requires it be what we could fairly call "inventive," with the result that anywhere near 100% zoom, "suggestions" are exactly what you're seeing. The detail is as much computational as "real."
People make much of whatever Samsung it was a couple years back, that got caught copy-pasting a sharper image of Luna into that one shot everyone takes and then gets disappointed with the result because, unlike the real thing, our brain doesn't make the moon seem bigger in pictures. But they all do this and they have for years. I tried taking pictures of some Polistes exclamans wasps with my phone a couple years back, in good bright lighting with a decent CRI (my kitchen, they were houseguests). Now if you image search that species name, you'll see these wasps are quite colorful, with complex markings in shades ranging from bright yellow through orange, "ferruginous" rust-red, and black.
In the light I had in the kitchen, I could see all these colors clearly with my eyes, through the glass of the heated terrarium that was serving as the wasps' temporary enclosure. (They'd shown a distinct propensity for the HVAC registers, and while I find their company congenial, having a dozen fertile females exploring the ductwork might have been a bit much even for me...) But as far as I could get the cameras on this iPhone 13 mini to report, from as close as their shitty minimum working distance allows, these wasps were all solid yellow from the flat of their heart-shaped faces to the tip of their pointy butts. No matter what I did, even pulling a shot into Photoshop to sample pixels and experimentally oversaturate, I couldn't squeeze more than a hint of red out of anything without resorting to hue adjustments, i.e. there is no red there to find.
So all I can conclude is the frigging thing made up a wasp - oh, not in the computer vision, generative AI sense we would mean that now, or even in the Samsung sense that only works for the one subject anyway, but in the sense that even in the most favorable of real-world conditions, it's working from such a total approximation of the actual scene that, unless that scene corresponds closely enough to what the ISP's pipeline was "trained on" by the engineers who design phones' imaging subsystems, the poor hapless thing really can't help but screw it up.
This is why people who complain about discrete cameras' lack of brains are wrongheaded to do so. I see how they get there, but there are some aspects of physics that really can't be replaced by computation, including basically all the ones that matter, and the physical, optical singlemindedness of the discrete camera's sole design focus is what liberates it to excel in that realm. Just as with humans, all cramming a phone in there will do is give the poor thing anxiety.
I generally judge a camera by how accurately it can capture a sunset, relative to what I actually see. On a Samsung Galaxy Note 20, I can mess with the white balance a bit to get it "pretty close", but it tends to clamp color values so the colors are more uniform than they are in real life. I've seen orange dreamsicle, strawberry sherbet, and lavender at the same time, at different intensities, in the same section of sky. No phone camera seems to be able to capture that. http://projectftm.com/#noo2qor_GgyU1ofgr0B4jA was captured last month; it wasn't so "pastel", it was much more rich. The lightening at the "horizon" is also common with phone cameras, and has been since the iPhone 4 and the Nexus series of phones. It looks awful and I don't get why people put up with it.
Sony is famous for having the worst interfaces of all the big camera manufacturers.
Lightroom most likely has “obsolete paradigms” for the same reason Photoshop does: because professionals want to use what they know rather than what is fashionable. Reprogramming their muscle memory is not something people want to be doing. Anyway, I find Lightroom’s UI very nice to work with.
Lightroom is very intuitive, once you've spent a few years learning how everything works.
I don't know, I think the learning curve is pretty gentle. Like all complex software, it may be difficult to master, but getting started felt easy enough as far as I remember.
I found it incredibly frustrating at first, so much so that a Loupedeck became a wise and necessary investment to keep the anticipation of editing burden from beginning to depress my interest in photography.
I still have the Loupedeck, on one of the shelves behind my desk. I think I might have used it twice last year.
S̶o̶,̶ ̶i̶t̶'̶s̶ ̶n̶o̶t̶ ̶i̶n̶t̶u̶i̶t̶i̶v̶e̶?̶ ̶I̶f̶ ̶i̶t̶ ̶t̶a̶k̶e̶s̶ ̶y̶e̶a̶r̶s̶ ̶t̶o̶ ̶l̶e̶a̶r̶n̶,̶ ̶t̶h̶a̶t̶'̶s̶ ̶n̶o̶t̶ ̶"̶s̶i̶m̶p̶l̶e̶"̶.̶
That's the joke, yes.
Yes, same for my Sony A6000 and A6400. I just wanted to take some selfies and it's exhausting to use the remote app.
Yes, it's mind-bogglingly difficult and complicated.
At the time of release of the A6000, having a mobile app to take a picture (and not having to buy an adapter like some other brands) was cool enough that you could deal with the (at the time, relatively minor) jank.
Biggest thing is they never really improved the mobile apps... and in some cases IMO they got worse.
Fujifilm is no better, haha. They're on their third (I think) mobile app, this time for X-Series camera control, and it's still terrible.
Amusingly, their Instax control app is actually pretty good!
There are 8+ Instax apps made by Fujifilm in the app store. Which one is pretty good for controls?
The one for my 2016-era Fuji, Camera Remote, is crap. Usually the connection between my phone and the camera's own wifi network fails, but that failure takes a good 2 minutes to happen. So many artificial software limits with this camera, some removed in newer versions. Why can't this one tether, though? It is literally a digital camera, no?
Anyone know of any fujifilm firmware jailbreaks fwiw?
> If a manufacturer comes up with additional data that isn’t included in the DNG standard, the format is extensible enough that a camera manufacturer can throw it in there, anyway.
It sounds like DNG has so much variation that applications would still need to support different features from different manufacturers. I'm not sure it (DNG) will really solve interoperability problems. This issue smells like someone is accidentally playing politics without realizing it.
Kind of reminds me of the interoperability fallacy with XML. Just because my application and your application use XML, it doesn't mean that our applications are interoperable.
I suspect that a better approach would be a "RAW lite" format that supports a very narrow set of very common features; but otherwise let camera manufacturers keep their RAW files as they see fit.
Hey. I’m the guy quoted.
RAW is ultimately about sensor readings. As a developer, you just want to get things from there into a linear, known color space (XYZ in the DNG spec). So from that perspective, interoperability isn’t the issue.
How you process that data is another matter. Handling a traditional bayer pattern vs a quad-bayer vs Fujifilm’s x-trans pattern obviously requires different algorithms, but that’s all moot given DNG is just a container.
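To make the color-space step concrete, here's a minimal Python/numpy sketch. The matrix values are made up for illustration; in a real file, ColorMatrix1 comes from the DNG metadata and maps CIE XYZ to camera-native RGB, so you invert it to go the other way:

```python
import numpy as np

# Hypothetical ColorMatrix1, for illustration only. The real one is
# read from the DNG tags and maps CIE XYZ -> camera-native RGB.
COLOR_MATRIX_1 = np.array([
    [ 0.7034, -0.0804, -0.1014],
    [-0.4420,  1.2564,  0.2058],
    [-0.0851,  0.1986,  0.6621],
])

def camera_rgb_to_xyz(rgb: np.ndarray) -> np.ndarray:
    """Map linear, white-balanced camera RGB (H x W x 3) into XYZ
    by inverting the XYZ -> camera matrix from the metadata."""
    xyz_from_cam = np.linalg.inv(COLOR_MATRIX_1)
    return rgb @ xyz_from_cam.T
```

(The full DNG pipeline also involves CameraCalibration, AnalogBalance, and interpolating between ColorMatrix1/2 by illuminant; this is just the gist.)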
Seems like DNG does behave like the "RAW lite" format you've just described: everything common is stored within the base DNG file, while everything "advanced" / more specific to a camera is stored in additional metadata properties, which don't need to be parsed to still process the base image. You can add support for that metadata on a case-by-case basis without breaking the original format, so you're not stuck re-implementing your whole raw parser every time a new camera is released, since the base subset of DNG would still work.
When building a camera, you decide once and then most parameters stay fixed. It would be trivial to just prepend a mostly fixed ~1000-byte DNG header to each image.
But how do you test this? While the DNG specification is openly published, the reference implementation was/is(?) not. Do I really need a copy of Photoshop to test if my files are good? How would I find good headers to put into my files? What values are even used in processing?
Maybe the situation has changed, but in the old days when I was building cameras there was only a closed-source Adobe library for working with DNGs. That scared me off.
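For a sense of scale, here's a rough Python sketch of such a "mostly fixed" header: a little-endian TIFF header plus one IFD describing uncompressed 16-bit CFA data. The tag IDs are from the TIFF/DNG specs, but this is a toy; a real camera would also need CFAPattern, black/white levels, color matrices, and so on:

```python
import struct

def ifd_entry(tag, type_, count, value):
    # One 12-byte IFD entry; TIFF type 1 = BYTE, 3 = SHORT, 4 = LONG.
    return struct.pack("<HHII", tag, type_, count, value)

def minimal_dng(width, height, cfa_data: bytes) -> bytes:
    """Toy DNG skeleton: TIFF header, one IFD, then the raw mosaic."""
    n = 9                                  # number of IFD entries below
    data_offset = 8 + 2 + n * 12 + 4       # header + count + entries + next-IFD
    entries = b"".join([
        ifd_entry(256, 4, 1, width),           # ImageWidth
        ifd_entry(257, 4, 1, height),          # ImageLength
        ifd_entry(258, 3, 1, 16),              # BitsPerSample
        ifd_entry(259, 3, 1, 1),               # Compression = none
        ifd_entry(262, 3, 1, 32803),           # PhotometricInterpretation = CFA
        ifd_entry(273, 4, 1, data_offset),     # StripOffsets
        ifd_entry(278, 4, 1, height),          # RowsPerStrip
        ifd_entry(279, 4, 1, len(cfa_data)),   # StripByteCounts
        ifd_entry(50706, 1, 4, 0x00000401),    # DNGVersion = 1.4.0.0
    ])
    header = struct.pack("<2sHI", b"II", 42, 8)  # little-endian, IFD at offset 8
    ifd = struct.pack("<H", n) + entries + struct.pack("<I", 0)
    return header + ifd + cfa_data
```

As for testing, the dng_validate tool that ships with the Adobe DNG SDK (linked in a reply below) can sanity-check output like this without touching Photoshop.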
Things like camera intrinsics and extrinsics are not fixed, and 1000 bytes seems small to me given the amount of processing in modern cameras that goes into creating a raw image. I could easily imagine storing more information, like the chosen focus point and other candidate focus points with weights, as part of the image for easier on-device editing.
For testing, there's the Adobe DNG SDK: https://helpx.adobe.com/camera-raw/digital-negative.html
You'll find the whole spec there, too. I think the source is also available somewhere.
The camera app on my GNU/Linux phone stores DNGs with no troubles using FLOSS only.
This isn't really my area, so I'm probably wrong... I'd always assumed that RAW files were, well, raw data straight off the sensor (or as close as possible)? In which case, you could standardize the container format, but I wouldn't think it was possible to have a standard format for the actual image data. Would appreciate if anyone could correct me (a quick skim of wikipedia didn't clear it up)
Most image sensors are quite similar (ignoring weirdos like X-Trans and Foveon) so they could use the same format and decoding algorithm. It's a 16-bit integer (padded out from 12 or 14 bits) for each pixel with a Bayer color filter. Maybe throw in some parameters like a suggested gamma curve.
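As a sketch of how uniform that decoding is: once the 16-bit mosaic is in memory, separating the color planes is plain strided indexing. Python/numpy, assuming an RGGB phase:

```python
import numpy as np

def split_bayer_planes(mosaic: np.ndarray):
    """Split a Bayer mosaic (RGGB phase assumed) into its four
    half-resolution color planes; the same indexing works for any
    Bayer sensor once you know which of the four phases it uses."""
    r  = mosaic[0::2, 0::2]
    g1 = mosaic[0::2, 1::2]
    g2 = mosaic[1::2, 0::2]
    b  = mosaic[1::2, 1::2]
    return r, g1, g2, b
```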
Foveon has awful FOSS support so far. Older Foveon models also require an older version of Windows to run the antiquated software that processes the raw pics; it's maddening.
The algorithms for getting a usable image from a Foveon sensor are very nontrivial from what I understand - the different layers don't separate light perfectly into red, green, and blue bands, so there is some fancy cross-layer processing you need to do.
Typically they are not to my knowledge! Though I am also not an expert. Most camera makers apply a fixed sensor profile, and possibly a dark frame to remove noise before writing out the values to whatever file. Some of them may apply lens optimizations to correct distortion or vignetting as well.
On top of that, I hear the RAW format on some smartphones is saved after the phone does its fake-HDR and computational photography bullshit, so it's even further from "raw" with those cameras.
The corrections are just metadata; the RAW data is still there. This is true for both DNG and ARW (Sony). Don't know about the other brands. The corrections can even look different depending on which program interprets them.
I don’t think that’s true in general. As a sibling comment points out, this doesn't hold for some DNGs - for example, the output of an iPhone is DNG, but with many, many transforms already baked in. A DNG might even be debayered already.
GFX 100 IIs apply a transform to RAW data at ISO 80, see: https://blog.kasson.com/gfx-100-ii/the-reason-for-the-gfz-10...
I don’t know much about ARW, but I do know that they offer a lossy compressed format - so it’s not just straight off the sensor integer values in that case either.
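Since a DNG can legitimately carry either the mosaic or already-demosaiced data, the PhotometricInterpretation tag is what tells you which you got. A quick check, assuming the third-party tifffile library (a DNG is a TIFF, so generic TIFF tools can read its tags):

```python
import tifffile  # third-party; pip install tifffile

CFA = 32803         # PhotometricInterpretation: raw mosaic, not yet debayered
LINEAR_RAW = 34892  # PhotometricInterpretation: already-demosaiced "linear" DNG

def dng_is_mosaic(path: str) -> bool:
    """True if any IFD in the file still carries undemosaiced CFA data."""
    with tifffile.TiffFile(path) as tif:
        for page in tif.pages:
            tag = page.tags.get("PhotometricInterpretation")
            if tag is not None and int(tag.value) == CFA:
                return True
    return False
```

(In real DNGs the raw image often sits in a SubIFD, so a robust version would walk those too; this is just the idea.)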
Okay true, but that's not the format's fault (:
The GFX 100 II thing is very interesting. Totally not what I would expect from such a "high end" camera.
Damn, that is a quirk that would've had me pulling my hair out if I worked with those.
At least it's only at ISO 80, where noise would be minimal anyway (: I rarely use noise reduction because I don't like the artificial cleanliness of the result.
It's all float value arrays with metadata in the end. Most camera sensors are pretty similar and follow common patterns.
DNGs have added benefits, like optional compression (either lossy or lossless) and optional error-correction bytes to guard against corruption. Even if there are some concerns, like unique features or performance, I'd still rather use DNGs without those features and with reduced performance.
I always archive my RAWs as lossy compressed DNGs with error correction and without thumbnails to save space and some added "safety".
Nitpicking correction: the sensors give you a fixed number of bits per pixel, like 10 or 12. These are unsigned integers, not floats.
Typically you want to pack them to avoid storing 30% zero bits, so the bytes often need unscrambling.
And sometimes there is a dark offset: in a really dark area of an image, random noise around zero can also dip a little negative. You don't want to clip that off, and you don't want to use signed integers, so there typically is a small offset.
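A sketch of what the unscrambling plus dark offset looks like in practice, in Python/numpy. The two-pixels-per-three-bytes layout and the black level of 64 are made-up examples, since the real values vary per vendor:

```python
import numpy as np

def unpack_12bit(packed: bytes, width: int, height: int,
                 black_level: int = 64) -> np.ndarray:
    """Unpack 12-bit pixels stored two-per-three-bytes, then remove
    the dark offset in a signed type so near-black noise survives."""
    b = np.frombuffer(packed, dtype=np.uint8).reshape(-1, 3).astype(np.uint16)
    # Assumed layout (vendors differ): pixel0 = byte0 + low nibble of
    # byte1; pixel1 = high nibble of byte1 + byte2. Width must be even.
    p0 = b[:, 0] | ((b[:, 1] & 0x0F) << 8)
    p1 = (b[:, 1] >> 4) | (b[:, 2] << 4)
    pixels = np.stack([p0, p1], axis=1).reshape(height, width)
    # int32 keeps values slightly below black representable instead of
    # wrapping around to 65535 in unsigned arithmetic.
    return pixels.astype(np.int32) - black_level
```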
> Photo editing software needs to specifically support not just each manufacturer’s file type but also make changes for each new camera that shoots it
You mean, your proprietary, closed-source photo-editing software?
Why can't the vendors of that shit just make a library that they all share ...
The ironic part is that basically all the closed-source photo-editing software (and of course all the open-source packages) just uses the open-source LibRaw. Any special features, such as color profiles, come on top of that. So yes, the camera manufacturers could just donate to LibRaw, or just use DNG instead.
Compressed RAF (Fujifilm, 24MP) ~20 MB
DNG (24MP) ~90 MB
It costs about 4 times more to store RAW files in DNG format.
I downloaded a sample RAF file from the internet. It was 16 megapixels, and 34MB in size. I used Adobe DNG Converter, and it created a file that was 25MB in size. It was actually smaller.
Claiming that DNG takes up 4x space doesn't align with any of my own experiences, and it didn't happen on the RAF file that I just tested.
To test this, I downloaded a random RAF image from this gallery https://mirrorlesscomparison.com/galleries/fuji-xt2-sample-i...
Maybe your method of converting to DNG is embedding the original RAF image and ... something else?

He probably compares a mosaiced RAF with a debayered (linear) DNG. Just... don't do that. Use a mosaiced DNG. And certainly don't embed the RAF file. Plus with lossless JPEG-XL, the difference is trivial (under 5%):
On my own files:
Lightroom cannot do that. I'm not sure if Iridient X-Transformer has an option for this. I always ended up with massive files.
Also, I'm not confident enough to replace my entire RAF collection with converted DNGs and delete the originals.
No it doesn't. A DNG is just a container that can hold anything, including compressed data, just like the RAF.
Coincidentally, most proprietary RAW formats are just bastardized TIFFs, and DNG is also a TIFF derivative...
There is zero technical reason not to use DNG. Leica and Pentax use it just fine.
Please explain how I can convert a compressed RAF to a DNG of the same size.
It's a very common mistake in free software to not design a system end-to-end. Free software people will design some kind of generic container format and go "look! you can put anything in the container!", declare the job done, and then not write tools which actually do put anything in the container, or tools which can make use of anything in the container other than one specific thing.
(See: ActivityPub)
DNG has nothing to do with free software; it's an Adobe format, similar to PDF. It is an open standard, but it is covered by patents and comes with a no-cost patent license (except that it's less open than PDF because it doesn't define the interpretation of Adobe-specific tags (which ACR/Lightroom use), not that that matters for raw files in any way).
Of course, even Adobe DNG Converter can do what the GP asked for (I just tried it [1]), not that I would recommend it for Fuji files. And it doesn't matter anyway, since the whole point is producing DNG files directly, not converting to them.
Edit: on my Fuji X-T5 files, using mosaiced data with lossless JPEG-XL compression (supported by MacOS 14+, iOS 17+, and the latest Lightroom/ACR):
[1] https://llum.chat/?sl=3MCDl4

This reminds me of a current problem facing the bio-imaging community; many microscopes, many formats.
My company specifically deals with one of the post-processing steps, and we've had to build our own 'universal adapter'. It's frustrating, because it feels like microscope companies are just reinventing the wheel instead of adopting some common standard.
There has been an effort to standardize how these TB size datasets are distributed[1]. A different but still interesting issue.
[1] https://ngff.openmicroscopy.org/
While I generally prefer compatibility and shared standards, camera RAW formats seem to be a reasonable place to lean into implementation details if needed in order to gain performance and ease/quality of implementation over interoperability.
Don't know what's confusing about it... I mean, I shoot ARW with my Sony, these work fine with Lightroom and work fine with DxO PhotoLab [1] at least as long as my ARWs are not compressed (it's not that the compression is proprietary, it's that the compression is lossy and breaks denoising)
[1] Shoot ISO 12,800, process with DxO, people will think you shot at ISO 200; makes shooting sports indoor look easy, see https://bsky.app/profile/up-8.bsky.social/post/3lkc45d3xcs2x so I got zero nostalgia for film.
Proprietary formats require third-party developers to adapt their tools. While most mainstream software will be updated to support most/all cameras, this makes it harder for smaller projects. If manufacturers used an open standard, the advanced features could still require additional work (ex: if they store custom metadata), but you could normalize everything that's shared, ensuring the core capabilities never break for a new camera with its updated proprietary RAW like they currently do.
'.dng's share the same file format structure as '.tiff's, but they have some extra fields.
For stills photography, Adobe's '.dng' format does fairly well, from 8-bit to 16-bit. It copes with any of the 4 possible standard RGGB Bayer phases and has some level of colour look-up table in the metadata. Sometimes this is not enough for a camera's special features, and The Verge's article covers those reasons quite well.
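For reference, the 4 phases are just the four ways the 2x2 repeat can land on the sensor, which DNG records in its CFAPattern tag (0 = red, 1 = green, 2 = blue). A sketch of the lookup:

```python
# CFAPattern byte values for the four standard Bayer phases,
# row-major over the 2x2 repeat (0 = R, 1 = G, 2 = B):
BAYER_PHASES = {
    "RGGB": bytes([0, 1, 1, 2]),
    "GRBG": bytes([1, 0, 2, 1]),
    "GBRG": bytes([1, 2, 0, 1]),
    "BGGR": bytes([2, 1, 1, 0]),
}
```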
For video, things get much more complicated. You start to hit bandwidth limits of the storage media (SD cards at the low end). '.dng' files were not meant to be compressed but Blackmagic Design figured out how to do it (lightly) and still remain compatible with standard '.dng' decoding software. Other, better compressed formats were also needed to get around the limits of '.dng' compression.
RED cameras used a version of JPEG 2000 on each Bayer phase individually (4 of them), but they wrapped it in patents and litigated hard against anyone who dared to make a RAW format for any video recording over 4K. Beautifully torn apart in this video: https://www.youtube.com/watch?v=IJ_uo-x7Dc0
So, for quite a few years, video camera companies tip-toed around this aggressive patent with their own proprietary formats, and this is another reason why there's so many (not mentioned by The Verge).
There's also the headache of copying a folder of 1,000+ '.dng' stills that make up a movie clip; it takes forever, compared to a single concatenated file. So, there's another group of RAW video file formats that solve this by recording into a single file which is a huge improvement.
> Cameras absolutely could emit DNG instead, but that would require more development friction: coordination (with Adobe), potentially a language barrier, and potentially making it harder to do experimental features.
Why not simply provide documentation of the camera-specific format that is used?
EDIT: This should have been an answer to https://news.ycombinator.com/item?id=43584261
"Sony’s software for processing ARW RAW files is called Imaging Edge. Like most first-party software from camera manufacturers, it’s terrible and unintuitive to use — and should be saved for situations like a high-resolution multishot mode where it’s the only method to use a camera’s proprietary feature."
I think the primary reason is that they have great hardware developers and terrible software developers. So ARW files are the most they can offer photographers, who then take the files and run away from Sony as soon as possible (i.e., do the rest in better software).
Pentax can save DNGs; there's zero reason for other companies not to do the same.
Reminds me of all the weird and confusing object file formats. The specs for them are inevitably confusing, wrong and incomplete.
DNG is easy to adopt so long as your images look like a Bayer image, and you don’t need any special treatment of the raw data.
Sigma’s cameras are notorious for their lack of support in most editors because their Foveon files require extra steps and adjustments that don’t fit the paradigm assumed by DNG (and they claim it would expose proprietary information if they used DNGs).
The bigger issue is that at the end of the day the dng format is very broad (but not broad enough) and you rely on the editor to implement it correctly (and completely). DNGs that you can open in one of the major editors will simply not open in another.
Sigma currently doesn't produce any Foveon cameras - the fp, fp L and BF all use DNG.
Sure, and? My statement stands.
And more to the point, for the Foveon cameras that produced both X3F and DNG files, the image quality from the DNG files is objectively and substantially worse than from the X3F files.
By the way (hello, Adobe): DNG compression compresses worse than 7-Zip, haha. Too much for an open standard; too hard to add LZ4 as an option in the era of 24-core setups :D
This article is just as useless as asking "why are all your file formats different and confusing?"
I mean, it's just binary data, right? Why can't they just write all their ones and zeros the same way?
I thought the main value of this article was that they went out and asked various vendors that question ("why proprietary formats?") and put all their answers in one place. Too bad Nikon and Fujifilm didn't respond, but I can imagine their motivations being similar to other vendors.
This is on a long list of why camera companies are dying.
There is a long list of issues like this which have prevented ecosystems from forming around cameras, in the way they have around Android or iOS. It's like the proprietary phones predating the iPhone.
The irony is that phones are gradually surpassing dedicated cameras in an increasing number of respects, while cameras are in a death spiral. Low volumes mean less R&D. Less R&D and no ecosystem mean low volumes. It all translates into high prices, too.
The time to do this was about a decade ago. Apps, open formats, open USB protocols, open wifi / bluetooth protocols, and semi-open firmware (with a few proprietary blobs for color processing, likely) would have led things down a very different trajectory.
Sony is still selling cameras from 2018:
https://electronics.sony.com/imaging/interchangeable-lens-ca...
The new price fell by just 10% over those 7 years ($2000 -> $1800).
And in a lot of conditions, my Android phone takes better photos, by virtue of more advanced technology.
I have tens of thousands of dollars of camera equipment -- mostly more than a decade old -- and there just haven't been advancements warranting an upgrade. A modern camera will be maybe 30% better than a 2012-era one in terms of image quality, and otherwise, will have slightly more megapixels, somewhat better autofocus, and obviously be much smaller by the loss of a mirror. Video improved too.
The quote of the day is: "I wish it weren’t like this, but ultimately, it’s mostly fine. At least, for now. As long as the camera brands continue to work closely with companies like Adobe, we can likely trudge along just fine with this status quo."
No. We can't. The market has imploded. The roof is literally falling in and everyone says things are "fine."
Does anyone know how much volume there would be if cameras could be used in manufacturing for machine vision, on robots and drones, in self-driving cars, on buildings for security, as webcams for video conferencing, for remote education, and everywhere else imaging is exploding?
No. No one does, because they were never given the chance.
> And in a lot of conditions, my Android phone takes better photos, by virtue of more advanced technology.
> I have tens of thousands of dollars of camera equipment -- mostly more than a decade old -- and there just haven't been advancements warranting an upgrade. A modern camera will be maybe 30% better than a 2012-era one in terms of image quality, and otherwise, will have slightly more megapixels, somewhat better autofocus, and obviously be much smaller by the loss of a mirror. Video improved too.
I thought the same thing, and then I went and rented a Nikon Z8 to try out over a weekend and I was blown away by the "somewhat better autofocus". As someone who used to travel with a Pelican case full of camera gear, to just carrying an iPhone, I'm back to packing camera gear because I'm able to do things like capture tack-sharp birds in flight like I'm taking snapshots from the hip thanks to the massive increase in compute power and autofocus algorithms. "Subject Eye Detection AF" is a game-changer, and while phones do it, they don't have enough optical performance in their tiny sensors/lenses to do it at the necessary precision and speed to resolve things on fast-moving subjects.
In terms of IQ, weight, and all that, it's definitely not a huge difference. I would say it's better, but not so much that I particularly cared coming from a 12-year old DSLR. But the new AF absolutely shocked me with how good it is. It completely changed my outlook.
I say this, not to take away from your overall point, however, which is that a phone is good enough for almost everyone about 90% of the time. It's good enough that even though I upgraded my gear, I only bought one body when I traded in two, because my phone can handle short-focal length / landscape just fine, I don't need my Z8 for that. But a phone doesn't get anywhere close to what I can do with a 300mm or longer focal length lens on the Z8 with fast moving subjects.
Because most users of the raw format are photographers and they don't care about it?
Depends. When a new camera is out, it often takes time before Lightroom, Capture One, etc. support it. While I don't care about how the raw format works, being able to keep using my usual software even with a brand new camera is something I care a lot about.
What? They did not say it was Capitalism and greed? I am shocked!
They are just trying to lock people in to their format and make them dependent on the company instead of an open source and universal format.
How? Support for RAW formats is reasonably complete. I can hop between different editors without much, if any, hassle at all. Since getting my first DSLR in the early 00's, I have used (in no order) Photos, Bibble, Lightroom, Aperture, Capture One, Photoshop, Pixelmator, Photomator, Darkroom, On1, Raw Power, Nitro Photo, Luminar, Darktable and RawTherapee, all without fuss. Where is the lock in?
I think the article misses the point: It's not about how complex the data structures are, it's about the result, in all its details.
Comparing different RAW converters (Lightroom, DxO), their image renderings are slightly different. If you compare the colors against the JPEG, even more so. If the goal is to faithfully reproduce the colors as they were shown in the camera, you depend on the manufacturer's knowledge. To me, it makes no sense to have some "open" DNG format in the middle when it's flanked by proprietary processing.
It's not about the format, it's about knowing the details, including parameters, of the image processing pipeline to get a certain look.