
Color Science Means Nothing With Raw... Really?


DBounce


So I came across this test comparing the Blackmagic Design P4K, Z CAM E2, and Panasonic GH5S. In the video the reviewer tries to match all three cameras in post. I think you will find the result interesting. I also believe that this test and others I have seen, along with my own personal experience, would seem to suggest that color science is in fact a thing. What are your thoughts and experiences regarding matching cameras? Do you call BS on color science, or are those who claim it doesn't matter when dealing with raw misguided?

 



There are pro shooters who can match Canon stills raw to Nikon raw, to the level that nobody can recognize which is which.

The main reason it's a bit difficult is that these companies don't share any data about the spectral response of the CFA used on the sensor.


30 minutes ago, thebrothersthre3 said:

I think the issue with the BMP4K was some weird error in Resolve and not the camera itself. 

I do think color science matters, though I've seen people really work magic in post. The GHAlex LUT, for example.

I have seen this "red shift" in virtually all the footage from the P4K.

15 minutes ago, Eric Calabros said:

There are pro shooters who can match Canon stills raw to Nikon raw, to the level that nobody can recognize which is which.

The main reason it's a bit difficult is that these companies don't share any data about the spectral response of the CFA used on the sensor.

It's just that the popular view today seems to be that color science is a moot point. However, I am not so sure that this is actually the case.


26 minutes ago, Eric Calabros said:

There are pro shooters who can match Canon stills raw to Nikon raw, to the level that nobody can recognize which is which.

The main reason it's a bit difficult is that these companies don't share any data about the spectral response of the CFA used on the sensor.

They don't, but I did.

[Attached image: measured spectral response curves for Sony IMX sensors]


39 minutes ago, androidlad said:

They don't, but I did.

[Attached image: measured spectral response curves for Sony IMX sensors]

You can get this data from most manufacturers. Much of it is freely available online in white papers. I worked with someone who helped develop the ACES profiles, and every major camera manufacturer (except Red, I believe) sent gamut (chromaticities) information for their sensors to help create the ACES profiles. They varied significantly.

And just compare the Sony spectral response curves above to Velvia:

http://www.fujifilm.com/products/professional_films/pdf/velvia_50_datasheet.pdf

(Note where 0 relative response is.)

I love the look of Velvia, and assume the extremely narrow response curves are responsible for it (and for its low ISO). I even prefer it substantially to Velvia 100. Back in the film days, spectral response curves were obviously a big thing. You'd shoot different stocks even if you were doing a DI.

So it's no surprise I've always believed a portion of the look is baked into raw images, even if the difference is much smaller than with film stocks. There's been a big change in image characteristics over the course of Canon's DSLR line, even comparing raw files, and I read an article showing much better color (richer yellow/orange) from the earlier CCD models. Even the original 5D has its own look compared with the newer cameras (imo). Canon has changed its CFA dyes many times. If there were no functional difference among CFAs, Phase One would never have released the Trichromatic back (slower, but designed for better color), nor would Sony tout the wider gamut of the F55 compared with the F5. At the design stage it seems to be a trade-off between color and low light...

On the other hand, Red (apparently) has green and red chromaticities closer together than most manufacturers, and that's said to account for the ruddy look you'd see in earlier Red footage, but the DXL seems to tune that out well with its color profiling, despite having the same sensor. I've been very impressed with the DXL's images, even though I think Arri still has the edge with foliage. So a lot can be done in post to account for different filter arrays. That's the whole point of ACES...

Despite all the evidence that CFAs result in materially different gamuts, and thus materially different looks, I can't conclude anything with confidence because I'm not an engineer. I suppose in theory, if they all cover rec709, the rest can be done with transforms? The story the guy on the ACES board told me was complicated, and way over my head. Those online who claim raw is raw and it's all the same seem very confident in their belief, so, since it doesn't really matter that I'm misinformed if I keep my incorrect beliefs to myself, I would generally tend to side with those who believe all raw is the same; they seem to have done the research and are very, very confident in their beliefs. Maybe the Phase One back is for suckers, or I misunderstand its appeal.

I do think it's an interesting topic, and I'm a bit ashamed to admit that my belief that different sensors and manufacturers have different imaging and color characteristics has informed my purchasing decisions. There's a sucker born every minute... 


I am sure every manufacturer's sensors are unique color-wise. It is maybe not built into them on purpose, but different configurations are going to have different results, I don't care what you do. Then you get into lossless and non-lossless and it all gets goofier. Then you get stuff like Fuji with the X-Trans mask on a Sony sensor. That has to affect raw, like it or not. Heck, even lens recipes affect output. I also believe each "brand" tries to add a bit of their own secret sauce to the mix, as they say, like it or not.


In science there is a distinct answer to every question, but there are countless ways to make raw data visible! So it's an anecdotal process, not science.

I'll just put a better description from two DPR forum members here:

"The term 'colour science' is somewhat meaningless, and has become a marketing and fandom term whereby companies and people can say 'my colours are exceptional due to my special magic'. It's nonsense. So, let's look at the science. Colour vision is a tri-stimulus phenomenon. Your eye has three different kinds of receptor, which have three different spectral responses, generally called S,M and L (a lot of confusion is cause by people incorrectly thinking that they are 'Red', 'Green' and 'Blue'). Your brain decodes the mix of stimuli received in these three types of cone into a colour. Any light stimulus that causes the same mix of S,M and L will cause the same colour to be perceived even if the spectrum is completely different. However, it's even a bit more complex that that. How the brain decodes the colour is not constant, it depends on the prevailing conditions. So, under sunlight and incandescent lighting, the same object will reflect different spectra, and thus produce different S,M and L stimuli, yet the brain will decode the colours the same, because it applies a 'colour temperature' correction.Your camera also has a tri-stimulus system, but its responses are not the same as S,M and L, so your camera doesn't 'see' the same colours as you do. Worse, the spectrum of the three camera stimuli is different camera to camera. Thus, you can't 'see' the colours in a raw file, and your idea that you can 'see' some differences in a raw file is just wrong. To 'see' a raw file it needs to be 'developed'. This means taking the camera-specific stimuli and calculating what human eye responses they should elicit, taking into account the lighting conditions. The human eye responses used are based on a body of work done by the by the International Commission on Illumination known as the Commission Internationale de l’Elcairage (CIE) back in the 1930s. This defined a 'colour space' based on a set of standardised stimuli called XY and Z, which provides a possible description of most human visible colours. From this XYZ space, other spaces for useful devices, such as RGB (if you have a display device based on red, green and blue emission) might be derived. Until your raw file has been transformed into such a standardised and meaningful space you can't 'see' it. When you're 'viewing' a raw file, what you're 'seeing' is a display of a file which is the result if such a development process. This might be a JPEG file embedded in the raw, which the camera generated when you took the photo, or it might be other results of a 'CODEC' operating in the background in your operating system. In either case, what you're seeing is a developed file, and any differences are likely to be as a result of the development process."

 

"Decisions regarding colour made in different converters and cameras, however, do differ. Big part of the difference is that different assumptions of typical colour perception and pleasant colour are made. Another part is that some designers are either rather illiterate in colour science, or employ obsolete tools and notions. Also shortcuts and trade-offs made to balance the system differ. Ultimately, from marketing point of view, cameras are JPEG-producing tools, and everything, including sensor spectral response, is nearly inevitably designed to produce nice-looking JPEGs faster. Sensors in different cameras may have intentionally different spectral responses, different compensations and calibrations may be applied before the raw data is written to what we call a raw file, and after all that a raw converter takes on, adding its own technologies (that may favor some spectral responses over others), trade-offs, and idiosyncrasies to the mix. Because raw file formats are not publicly documented, third-party converters may have additional troubles interpreting colour. Custom profiling of raw converters using modern software, custom-measured targets, and good target shooting discipline makes the difference to mostly go away."

 


Quote

 

Mark has been working on the comparison for a while, so that's also with a slightly older firmware version. I'm glad he finished it. The E2 footage is also with a slightly older firmware and H.265. When he has some time, I also expect that he'll do some ProRes tests, since I gave him the trick for enabling it.


 

I also read this comment elsewhere about the Z CAM E2 and this particular test video.


8 hours ago, Shirozina said:

Why?

I assume because, so long as the chromaticities defined by a given sensor's filter array encompass at least rec709 or sRGB (same chromaticities/gamut, different decoding gammas), transforms exist to map an image acquired with that sensor to an image acquired with a different sensor that also covers sRGB (assuming the final output is sRGB or rec709). And all color sensors should cover at least sRGB. This is sort of the idea behind ACES, if I'm not mistaken.
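As a rough illustration of what "transforms exist" means in practice, here's a sketch of matching one camera to another through a common space. To be clear, the two 3x3 matrices below are placeholders I invented; real per-sensor characterization data is exactly what manufacturers don't publish:

```python
import numpy as np

# Hypothetical characterization matrices taking each camera's linear
# native RGB to CIE XYZ. The numbers are made up for illustration,
# not measured data for any real sensor.
CAM_A_TO_XYZ = np.array([[0.52, 0.33, 0.10],
                         [0.26, 0.68, 0.06],
                         [0.02, 0.12, 0.95]])
CAM_B_TO_XYZ = np.array([[0.49, 0.31, 0.15],
                         [0.22, 0.72, 0.06],
                         [0.01, 0.09, 0.99]])

# If both sensors can be characterized into a common space, "matching"
# camera A to camera B is a single 3x3 transform: A -> XYZ -> B.
A_TO_B = np.linalg.inv(CAM_B_TO_XYZ) @ CAM_A_TO_XYZ

pixels_a = np.random.rand(4, 3)    # stand-in linear raw RGB from camera A
pixels_as_b = pixels_a @ A_TO_B.T  # the same pixels in camera B's space
```

This is the plumbing; what a 3x3 can't fix is metamerism error, where two spectra that camera A records as identical produce different responses in camera B (or in the eye).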

The flip side of the above argument would be that things like dynamic range, noise performance, and metamerism error can result in an image clipping, being overly noisy, or having poor tonality, colors that bleed into each other, or simply false or indistinct colors. And those are inherent to the raw file. And metamerism error cannot be accounted for in software, so far as I know. As I wrote above, Phase One's Trichromatic back and their standard backs should both more than cover rec709, and yet the images taken with one appear more saturated and with better-looking (imo) color than those taken with the other, even on an sRGB monitor. And, for instance, Red's too-close red and green chromaticities make it more difficult to get punchy green foliage in post (though the DXL proves it's possible, and I've also seen work from Company 3 that looks amazing with the Red).

So for an expert colorist, I could see making the argument that raw files might as well all be the same... to a point. With raster, however, each company is baking things in quite differently, and I think that really does matter, just maybe not where one expects. And that's where Red applies their "color science" label: in the debayer process. So personally I don't think it's entirely BS, even if I think it's mostly a marketing term. (No offense to @Mako Sports, I don't mean to speak for him or disagree with someone more experienced, just hoping to contribute to the discussion even though I'm a real neophyte with this kind of thing. I suspect you have better reasons for claiming it's entirely BS than I do for thinking it's somewhere in between BS and material.) Also, a lot of raw isn't really raw. Canon Raw Light has a lot baked in. I suspect ARRIRAW does, too.

As regards color being more important in raster than raw, a friend worked with Stefan Sonnenfeld (founder of Company 3) on a project he cut, and even he wasn't able to fully account for chroma clipping on cheaper cameras, though I'm sure what he did still looked absolutely amazing. But it's specific bugaboos like chroma clipping that ended up being the hardest things to address, and with raw that's not so much an issue. Again, this is just my uninformed opinion and I don't mean to speak for anyone or insult their abilities. I'm just a fan of Sonnenfeld's work, so that's my bias, to agree with what he says, but it's totally possible someone more technical has figured this out better than he could. Back when I was shooting with the F5 I couldn't grade out clipping color channels in SLOG2; then I worked with it again with a different LUT (Sony now has Arri-emulating and Kodak-emulating LUTs) that addressed that, and the image was much easier to work with. So I think for raster images the pipeline makes a big difference, but I'm not much of a colorist.


12 hours ago, Shirozina said:

Why?

Because there is no science involved. It involves making a bunch of subjective modifications, usually by people who have zero understanding of what the underlying data actually is. If it is not measurable and definable, it is not science; it is guessing. You might be good at guessing, but it is still guessing.

It is sort of like a group of sheepherders from 2000 years ago talking about DNA modification, lol. They know that by selective breeding, through trial and error, they can get different properties in their animals, but they know nothing about the DNA basis underlying that (although that lack of knowledge won't stop them from explaining how they are doing DNA "science").

As soon as you see people throwing terms like "magic", "special sauce" and "undefinable quality" about, you know they have no idea what they are doing. It is pure subjectivity; there is no science involved, at least with what they are doing. They are the modern-day versions of those sheepherders ;)

4 hours ago, HockeyFan12 said:

I assume because, so long as the chromaticities defined by a given sensor's filter array encompass at least rec709 or sRGB (same chromaticities/gamut, different decoding gammas), transforms exist to map an image acquired with that sensor to an image acquired with a different sensor that also covers sRGB (assuming the final output is sRGB or rec709). And all color sensors should cover at least sRGB. This is sort of the idea behind ACES, if I'm not mistaken.

The flip side of the above argument would be that things like dynamic range, noise performance, and metamerism error can result in an image clipping, being overly noisy, or having poor tonality, colors that bleed into each other, or simply false or indistinct colors. And those are inherent to the raw file. And metamerism error cannot be accounted for in software, so far as I know. As I wrote above, Phase One's Trichromatic back and their standard backs should both more than cover rec709, and yet the images taken with one appear more saturated and with better-looking (imo) color than those taken with the other, even on an sRGB monitor. And, for instance, Red's too-close red and green chromaticities make it more difficult to get punchy green foliage in post (though the DXL proves it's possible, and I've also seen work from Company 3 that looks amazing with the Red).

So for an expert colorist, I could see making the argument that raw files might as well all be the same... to a point. With raster, however, each company is baking things in quite differently, and I think that really does matter, just maybe not where one expects. And that's where Red applies their "color science" label: in the debayer process. So personally I don't think it's entirely BS, even if I think it's mostly a marketing term. (No offense to @Mako Sports, I don't mean to speak for him or disagree with someone more experienced, just hoping to contribute to the discussion even though I'm a real neophyte with this kind of thing. I suspect you have better reasons for claiming it's entirely BS than I do for thinking it's somewhere in between BS and material.) Also, a lot of raw isn't really raw. Canon Raw Light has a lot baked in. I suspect ARRIRAW does, too.

As regards color being more important in raster than raw, a friend worked with Stefan Sonnenfeld (founder of Company 3) on a project he cut, and even he wasn't able to fully account for chroma clipping on cheaper cameras, though I'm sure what he did still looked absolutely amazing. But it's specific bugaboos like chroma clipping that ended up being the hardest things to address, and with raw that's not so much an issue. Again, this is just my uninformed opinion and I don't mean to speak for anyone or insult their abilities. I'm just a fan of Sonnenfeld's work, so that's my bias, to agree with what he says, but it's totally possible someone more technical has figured this out better than he could. Back when I was shooting with the F5 I couldn't grade out clipping color channels in SLOG2; then I worked with it again with a different LUT (Sony now has Arri-emulating and Kodak-emulating LUTs) that addressed that, and the image was much easier to work with. So I think for raster images the pipeline makes a big difference, but I'm not much of a colorist.

Well, actually, if you know the chromatic profiles of each dye element used for a particular camera, you should be able to produce a correction table that will largely adjust the responses from different cameras' RAW output. There will still be some differences, since computation might be required to figure out more or less where in that response curve an individual cell sits, but it is possible. Not by anyone reading this board, though. It is the sort of thing the NLE producers would have to do (those that work with RAW footage, that is). If they don't (or do a crappy job of it), I suppose a user could do it themselves in post manually, but it would require exceptional skill and a lot of trial and error to get it right, something most users are not prepared to do.
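In its simplest form, that correction is a 3x3 matrix fit by least squares to matching color-chart patches from the two cameras. A toy sketch with synthetic data standing in for real, linearized patch readings:

```python
import numpy as np

# Fake chart readings: 24 patch means from camera A, and camera B
# simulated as an (unknown, to-be-recovered) 3x3 mix of camera A.
rng = np.random.default_rng(0)
chart_a = rng.random((24, 3))
true_m = np.array([[1.10, -0.05, 0.00],
                   [0.02,  0.95, 0.03],
                   [0.00, -0.02, 1.05]])
chart_b = chart_a @ true_m.T

# Least squares: find M minimizing ||chart_a @ M.T - chart_b||.
x, *_ = np.linalg.lstsq(chart_a, chart_b, rcond=None)
M = x.T

print(np.round(M, 3))    # recovers true_m in this toy setup
matched = chart_a @ M.T  # camera A patches expressed as camera B
```

A fit like this nails the patches it was given; the residual error on spectra the chart doesn't contain is essentially the metamerism problem raised elsewhere in the thread.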


8 minutes ago, Mokara said:

Because there is no science involved. It involves making a bunch of subjective modifications, usually by people who have zero understanding of what the underlying data actually is. If it is not measurable and definable, it is not science; it is guessing. You might be good at guessing, but it is still guessing.

It is sort of like a group of sheepherders from 2000 years ago talking about DNA modification, lol. They know that by selective breeding, through trial and error, they can get different properties in their animals, but they know nothing about the DNA basis underlying that (although that lack of knowledge won't stop them from explaining how they are doing DNA "science").

As soon as you see people throwing terms like "magic", "special sauce" and "undefinable quality" about, you know they have no idea what they are doing. It is pure subjectivity; there is no science involved, at least with what they are doing. They are the modern-day versions of those sheepherders ;)

Ehh, been hitting the punch bowl at work, have you?


3 hours ago, Mokara said:

Well, actually, if you know the chromatic profiles of each dye element used for a particular camera, you should be able to produce a correction table that will largely adjust the responses from different cameras' RAW output. There will still be some differences, since computation might be required to figure out more or less where in that response curve an individual cell sits, but it is possible. Not by anyone reading this board, though. It is the sort of thing the NLE producers would have to do (those that work with RAW footage, that is). If they don't (or do a crappy job of it), I suppose a user could do it themselves in post manually, but it would require exceptional skill and a lot of trial and error to get it right, something most users are not prepared to do.

I think this is basically the idea behind ACES. 

But it still doesn't account for metamerism error, which I believe increases as the spectral response curves of a given dye become thinner and as the sensor's inherent gamut becomes smaller. So that's where I think it's not BS. I suspect the Phase One Trichromatic back does render better color than the standard Phase One back, even in sRGB images, even though both sensors cover more than rec709, just as Velvia 50 renders better color than faster slide films. But there's a lot of debate about it, and the promotional videos for that back are full of demonstrable half-truths, lies, and marketing talk. Do you think it makes a difference?

Unfortunately I'm not very technical, so I leave it to experts like you and @Mako Sports for the final word. I agree most raw files of decent quality can be transformed to match if you have the software. I just think it's more true for stills than for video, because ACR exists for stills but not for most video raw formats. I haven't compared Canon and Sony side by side in ACR myself, but DPReview's test charts I suspect are mostly processed through ACR (does anyone know otherwise?), and the color rendering between systems seems nearly identical. I do suspect there are some subtle differences that show up more in faces and foliage than in color charts, though in theory they're measurable in both. That might just be me being a sucker for marketing, though.

What I'm curious about, and what I wonder if either of you could help me understand, is this:

The F5 and F55 have different CFAs; one covers an exceptionally wide gamut, the other is designed for rec709. Assuming a rec709 deliverable, is there any difference in color between the two? I'm guessing there isn't, unless the colorist wants to bring colors from outside the rec709 gamut into it for the graded image; but for most work, there's none.

Does a camera that covers rec709 inherently have low enough metamerism error that it can be transformed losslessly to match any other such camera? Does the sensor's gamut encompass all color values that can be derived without substantial error, or simply all color values that can be derived at all (with some error or without)? This is where I'm still confused. It's my understanding that metamerism error is a significant problem for fine hue differentiation (and the reason thinner CFAs can be problematic), which is difficult to see in color charts but easier to see in fine skin and foliage details, but I could be tripping up on advertising terminology. As I mentioned, I'm not very technical and do get drawn into ad speak. I have also read that even where it's a problem, it's very overrated, and that the only sensors with material metamerism problems are Foveon sensors.
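One piece of this is at least mechanically checkable: whether one set of xy chromaticities encloses another is just a point-in-triangle test. A sketch below, with invented sensor primaries; note that passing this test says nothing about metamerism error, which is exactly the part a coverage check can't capture:

```python
import numpy as np

# Published Rec.709 primaries in CIE xy.
REC709 = np.array([[0.640, 0.330],   # red
                   [0.300, 0.600],   # green
                   [0.150, 0.060]])  # blue

def contains(triangle, points):
    """Barycentric test: True for each point inside the triangle."""
    a, b, c = triangle
    v0, v1 = c - a, b - a
    v2 = points - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    denom = d00 * d11 - d01 * d01
    u = (d11 * (v2 @ v0) - d01 * (v2 @ v1)) / denom
    v = (d00 * (v2 @ v1) - d01 * (v2 @ v0)) / denom
    return (u >= 0) & (v >= 0) & (u + v <= 1)

# Made-up "sensor" primaries, wider than Rec.709 on all sides.
sensor = np.array([[0.70, 0.29], [0.22, 0.69], [0.09, 0.02]])
print(contains(sensor, REC709).all())  # True -> gamut covers Rec.709
```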

Someone here did some simple math to derive the chromaticities of the Red sensor under different development settings and found that they all cover >rec709/sRGB, but that the green chromaticity is pushed toward red, possibly accounting for the ruddy "red look." I can tell from the URL that you probably think this is BS, but it seems like their methodology makes sense:

https://web.archive.org/web/20160310022131/http://colour-science.org/posts/red-colourspaces-derivation/

An author cited that article when he later claimed that the DXL largely solves this problem with its custom software, but that the DXL doesn't fully provide color that looks as good as the Alexa, for instance:

https://www.provideocoalition.com/panavision-millenium-dxl-light-iron-red-create-winning-new-look-large-format-cinematography/

Was he monitoring in rec2020 (or another wide gamut), or is it possible that part of the "red look" is inherent to its sensor? Maybe some of the values we like in certain memory colors come from outside the rec709 gamut? Rec709 is VERY lacking in greens relative to human vision, after all.

I'd like to understand more about this, because the F55, F65, C300 Mk II, etc. all have incredibly wide-gamut sensors. Does this matter for rec709 viewing? I suppose one could map out-of-gamut colors into rec709, which though mathematically wrong might be perceptually pleasing, and only certain sensors would have gamuts wide enough for that... but in real-world use, is there any difference or not?

Any insight would be much appreciated. I've long been confused about this but was afraid it was a dumb question. I'll just embrace being dumb and ask it.


9 hours ago, HockeyFan12 said:

Back when I was shooting with the F5 I couldn't grade out clipping color channels in SLOG2; then I worked with it again with a different LUT (Sony now has Arri-emulating and Kodak-emulating LUTs) that addressed that, and the image was much easier to work with. So I think for raster images the pipeline makes a big difference, but I'm not much of a colorist.

Yes, some of the negativity around Sony's "color science" is now outdated as they've got better at it.

 


The title "Color Science Means Nothing With Raw... Really?" is misleading, at least in relation to the test you are using as a showcase. Three different cameras were used, and while the sensor is the same, four different codecs/formats were used and only one of them is RAW. As the author of the test states below the video:

All clips from the E2 were shot in 10-bit 4:2:0 H.265 ZLog. The GH5s was shot in 10-bit 4:2:2 H.264 VLog. The BMPCC 4K was shot in both 12-bit CinemaDNG RAW 3:1 and 10-bit ProRes HQ (which means 4:2:2 color).

So this video test proves nothing in terms of RAW video, simply because only one of the cameras, in only some of the clips, shot RAW.

Cameras use different processors and electronics and different codecs, so differences in the final image are normal and expected, even in cases where the sensor is the same. Even if those cameras shot RAW, it's reasonable to expect the final video NOT to look identical.

If the goal was to color match them, the easiest and most correct way would be to use a color chart. I'm sure that after color correction with a color chart the images will be pretty close, if not identical. A reference point (color chart) is needed when matching is the goal, as this more or less guarantees you have correct colors in post no matter what lens, codec, camera or sensor is used.

I did some tests with RAW photos, and with a color chart I can almost perfectly match images from different cameras in different lighting conditions. I'm sure nobody could tell which camera was used for each image. Anyone can easily do this test and repeat the results. For photos, when using RAW, the so-called color science of the camera really means nothing. That's a proven point for me.

Video cameras shooting RAW aren't using fully raw data. There is always some alteration of the data (image), like applying a LOG gamma curve, using a particular codec, color space, etc. We discussed this in the other topic around the Tony Northrup tests. So IMHO, whether you call it color science or not, it's logical to expect cameras using the same sensor to yield slightly different images, and cameras using different sensors, processors, codecs, etc. to give different images. The difference is additionally complicated by the fact that color correction for video is more difficult and complex, due to different codecs, color spaces, LOG gammas, etc. Not many people outside professional editors and colorists put in the effort to master the process. Additionally, differences in dynamic range and other sensor parameters also play a role in the perception of the image.
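To make "some alteration" concrete, here's a generic log encode/decode pair. This is not any vendor's actual curve (S-Log, V-Log, etc. each have their own published formulas); it's a made-up toy showing the kind of invertible transform that sits between the sensor and the recorded "raw" file:

```python
import numpy as np

GAIN = 16.0  # made-up constant controlling how much the shadows expand

def log_encode(linear):
    """Compress linear scene light into a log-ish signal in [0, 1]."""
    return np.log2(linear * GAIN + 1.0) / np.log2(GAIN + 1.0)

def log_decode(signal):
    """Invert log_encode, recovering linear light."""
    return (2.0 ** (signal * np.log2(GAIN + 1.0)) - 1.0) / GAIN

x = np.linspace(0.0, 1.0, 5)
assert np.allclose(log_decode(log_encode(x)), x)  # round-trips cleanly
```

The curve itself is harmless because it's invertible; the damage comes from quantizing and compressing the encoded signal afterward, which is why codec choice matters.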

But with some more effort and skill it can be done the same way as with photos: using color charts, correct color space transformations, etc. And matching could be extended even to cameras that don't use RAW codecs, but much more limiting H.264 8-bit or 10-bit 4:2:2 ones. I did some initial tests and plan to do more. I'm confident it can be done, and there are plenty of clues around:

    - Zacuto did a test years ago, and on a big screen highly skilled cinema and video professionals in a blind test were not able to tell the difference between a Panasonic GH2 shooting 1080p 8-bit 4:2:0 H.264 and an Arri Alexa shooting 2.5K RAW.
    - In the movie The Man from U.N.C.L.E. various cameras were used. Here is the full list:
    https://www.imdb.com/title/tt1638355/technical?ref_=tt_dt_spec

    Among them the Canon 5D Mark II and GoPro 3. Can you guess which scenes were shot with the Canon 5D Mark II, or even the Canon EOS C500? I can't. The GoPro 3, yes, using logic about where such a camera would be used, but for the rest I'm clueless.

    - Look at this guy's test: Arri Alexa vs Canon T3i. Yes, there is a difference in colors, but they can be made quite similar, quite close.
  

 

So the bottom line is, it all boils down to how easily, and with how little effort, we are able to get the colors we like. And how much we can afford to pay. Our preferences for a camera and a certain "color science" are purely subjective. Which kind of contradicts the science part :)
 


On 12/24/2018 at 12:37 AM, Mako Sports said:

Color science IS bs

 

18 hours ago, Mokara said:

Because there is no science involved.

To anyone who says "color science is BS": I'm curious what your definition of color science is.

From the CFA, to the amplifier, to the ADC, to the gamma curve and the mathematical algorithm behind it, to the digital denoising and sharpening, to the codec: someone has to design each of those with the end goal of making colors appear on a screen. Some of those components could be the same between cameras or manufacturers. Some are not. Some could be different and produce the same colors.

Even if Canon and Nikon RAW files were bit-for-bit identical, that doesn't negate the fact that science and engineering went into designing exactly how those components work together to produce colors. As it turns out, there usually are differences. The very fact that you have to put effort into matching them shows that they weren't identical to begin with.

And if color science is negated by being able to "match in post" with color correction, how about this: you can draw a movie in Microsoft Paint, pixel by pixel. There is no technical reason why you can't draw The Avengers by yourself, pixel for pixel, and come up with the exact same final product that was shot on an Arri Alexa. You can even draw it without compression artifacts! Compression is BS! Did you also know that if you give a million monkeys typewriters, they will eventually make Shakespeare? He wasn't a genius at all!

The fact that it's technically possible to match in post does not imply equality, whether it's a two minute adjustment or a lifetime of pixel art. Color science is the process of using objective tools to create colors, usually with the goal of making the color subjectively "good." If you do color correction in post, then you are using the software's color science in tandem with the camera's.

Of course, saying one camera's color science produces better results is a subjective claim...

8 hours ago, stephen said:

Our preferences for a camera and a certain "color science" are purely subjective. Which kind of contradicts the science part

...but subjectivity in evaluating results doesn't contradict science at all. If I subjectively want my image to be black and white, I can use a monochrome camera that objectively has no CFA, or apply a desaturation filter that objectively reduces saturation. If you subjectively want an image to look different, you objectively modify components to achieve that goal. The same applies to other scientific topics: If I subjectively want larger tomatoes, I can objectively use my knowledge of genetics to breed larger tomatoes.
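To put one concrete operation behind that last paragraph, here is what an "objective" desaturation can look like: mix each pixel toward its luma. The Rec.709 luma weights are the published ones; the blend-toward-luma definition is one common choice, not the only one:

```python
import numpy as np

REC709_LUMA = np.array([0.2126, 0.7152, 0.0722])  # published Rec.709 weights

def desaturate(rgb, amount=1.0):
    """Blend linear RGB toward its luma; amount=1.0 is full grayscale."""
    luma = rgb @ REC709_LUMA
    return rgb * (1.0 - amount) + luma[..., None] * amount

pixels = np.random.rand(4, 3)
print(desaturate(pixels, 1.0))  # three equal channels per pixel
```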


1 hour ago, KnightsFan said:

 

To anyone who says "color science is BS": I'm curious what your definition of color science is.

From the CFA, to the amplifier, to the ADC, to the gamma curve and the mathematical algorithm behind it, to the digital denoising and sharpening, to the codec: someone has to design each of those with the end goal of making colors appear on a screen. Some of those components could be the same between cameras or manufacturers. Some are not. Some could be different and produce the same colors.

Even if Canon and Nikon RAW files were bit-for-bit identical, that doesn't negate the fact that science and engineering went into designing exactly how those components work together to produce colors. As it turns out, there usually are differences. The very fact that you have to put effort into matching them shows that they weren't identical to begin with.

And if color science is negated by being able to "match in post" with color correction, how about this: you can draw a movie in Microsoft Paint, pixel by pixel. There is no technical reason why you can't draw The Avengers by yourself, pixel for pixel, and come up with the exact same final product that was shot on an Arri Alexa. You can even draw it without compression artifacts! Compression is BS! Did you also know that if you give a million monkeys typewriters, they will eventually make Shakespeare? He wasn't a genius at all!

The fact that it's technically possible to match in post does not imply equality, whether it's a two minute adjustment or a lifetime of pixel art. Color science is the process of using objective tools to create colors, usually with the goal of making the color subjectively "good." If you do color correction in post, then you are using the software's color science in tandem with the camera's.

Of course, saying one camera's color science produces better results is a subjective claim...

...but subjectivity in evaluating results doesn't contradict science at all. If I subjectively want my image to be black and white, I can use a monochrome camera that objectively has no CFA, or apply a desaturation filter that objectively reduces saturation. If you subjectively want an image to look different, you objectively modify components to achieve that goal. The same applies to other scientific topics: If I subjectively want larger tomatoes, I can objectively use my knowledge of genetics to breed larger tomatoes.

I think it's mostly the age-old canard (meme?) about using the word science when it's really about engineering. 

Anyhow, this stuff is above my head too, and I think we should leave it to the experts. That is to say, the engineers. (Not scientists. Or marketing departments.) There's a lot of contradictory information online: Sony claims the F65 and F55 have the widest color gamut of any sensor; Canon's Cinema Gamut is far wider than that, though; and Arri claims that sensors don't have inherent gamuts because they can all "see" any visible color, which makes sense to me. But this is all marketing, so it's difficult to get to the truth.

So I'd just defer to @Mako Sports and @Mokara here. But it makes sense that they're right (and so is Arri): any Bayer color sensor in use today can see the entire visible spectrum, which is a vastly larger gamut than anything is mastered in. I'm still hoping someone can clarify whether metamerism error at the sensor level matters, to what extent it matters, where it comes from, and how it can be addressed. It seems in theory you could make an extremely thin CFA that's barely red or green or blue at all and improve noise performance dramatically. In theory the Phase One Trichromatic back is totally bunk, and I'm wondering if I'm a total sucker for wanting one. Engineers I've spoken with claim even the thinnest CFAs we see today are still really excellent, but why are the dyes as thick or thin as they are? If they could be virtually infinitely thin so long as they still have some color, why aren't they? Thinner CFAs would, if anything, see a wider gamut, but I'm guessing they would differentiate hues worse? I'm really curious about that still, and it would be nice if someone with an engineering background could explain it to us.

15 hours ago, IronFilm said:

Yes, some of the negativity around Sony's "color science" is now outdated as they've got better at it.

 

The Venice footage I've seen so far looks a lot better to me than C700 footage. And I really liked the F35, even the F3. Something about SLOG2 and the way color channels clipped just looked very "video" to me. But I might just be crazy or biased against those cameras; I didn't particularly like the F55 raw footage I worked with, and yet it's supposed to have the same CFA as the F65, which produces really nice images. Who knows... it was still pretty good! I'm not wild about the new Canons; I preferred the original C300's colors. Blues, greens, and skin tones were darker, and the look was closer to color negative film, whereas the new ones seem very accurate to me, but they have more of a "video" feel, with a weird magenta tint. I think it's an engineering/market choice to try to emulate the Alexa better, but the old one had charm and wasn't just a "poor man's Alexa with a strange magenta cast."

I think we forget how primitive most colorists are (myself included). I loved the C100 because the look was great out of the box. Same with the Alexa. Loved the 5D Mark II. I'm not claiming one can't get a much better look from plenty of technically superior cameras (superior to the C100, at least), and F55 raw trounces most of those cameras. I'm just saying 99% of colorists can't get that super-flat image to look as good as settings that look good out of the box. I include in that some of the top post houses, even. I think people here forget how incredibly skilled they are compared with the general public, and even with journeyman professional colorists. For most people, the out-of-camera look matters. I'm no engineer and I'm not a great colorist. Neither are most consumers.

My big issue is that the hue-vs-luminance curve introduces a lot of noise into the image, so trying to use it to turn a video look into more of a color negative look is difficult. But I know there are those who can do it. Still, I think this piece was graded expertly by Company 3:

And you can see some parts that are clearly video trying to look like film. At 1:51 the grass and picnic blanket have the color and saturation of film, but the luma values still look more like an additive model than a subtractive one, and the grass still looks more green than blue, partly as a result. Art Adams wrote about how the DXL darkened saturated green values and stretched out the colors between yellow/green and blue/green to get rid of the "Red look," and to me there are some scenes here that still have the "Red look." The American flag during the Zapruder portion looks more like a subtractive model to me, though; the red there really pops. I've been told parts of this are shot on film and parts on video, but I lack the eye to differentiate all of it. It's probably all video lol. I bet it's 99% Red MX.


18 minutes ago, HockeyFan12 said:

I think it's mostly the age-old canard (meme?) about using the word science when it's really about engineering.

Engineering is just applied science. To use my tomatoes example, you can genetically engineer bigger tomatoes by applying the scientific theory behind it.

