pixelpreaching Posted February 16, 2021

2 hours ago, deezid said: "If the FX3 has the same image processing, autofocus and eND as any of the other FX cameras it'll be a no-brainer I think, especially given its price tag. If it comes with the Sony Alpha processing (strong sharpening and noise reduction) they can keep it. 😉"

It'll have all that except the eND (it will have IBIS instead).
deezid Posted February 16, 2021

Welp. Seems like a rehoused A7s3, so very likely the same processing, with maybe a lower noise floor due to active cooling.
Llaasseerr Posted February 16, 2021

1 hour ago, deezid said: "Welp. Seems like a rehoused A7s3, so very likely the same processing, with maybe a lower noise floor due to active cooling."

Definitely. Lower noise from the improved cooling could eke out a little extra usable dynamic range before resorting to Neat Video, which will be nice. This latest video continues to back up my theory of cool future "cinema only" updates via firmware. Although I'd love to see it, I'm doubtful of internal raw due to the Red bullshit patent, unless Sony did it via X-OCN and that was somehow permissible. I'm still stanning for the paid WDR firmware update, which would in theory turn this and the FX6 into DGO-style dynamic range beasts.
TomTheDP Posted February 16, 2021

3 hours ago, Llaasseerr said: "Definitely. Lower noise from the improved cooling could eke out a little extra usable dynamic range before resorting to Neat Video, which will be nice..."

Hmm, is it usable WDR or just the type that only works on scenes with little motion? Honestly though, the A7S3 and Panasonic S1 dynamic range are amazing. The Alexa is only really capturing a stop more, though the latitude is better. Of course the Alexa is not using noise reduction, so it's probably more like two stops if you're just looking at the highlights. The FX6 is my favorite camera on the market right now. The eND is more valuable than a little more dynamic range, plus being able to get a clean ISO 6400.
Llaasseerr Posted February 16, 2021

1 hour ago, TomTheDP said: "Hmm, is it usable WDR or just the type that only works on scenes with little motion? Honestly though, the A7S3 and Panasonic S1 dynamic range are amazing..."

We'll have to see if it materializes. But based on the white papers on this method, there is no temporal aliasing, unlike Red's shitty kludge from a few years ago. As an aside, I checked out the DGO implementation in the Canon cameras and that is also a bit below expectations, so this Sony sensor is to me preferable even in its current incarnation, particularly because of the fast rolling shutter. It's the best DR and rolling shutter in an "affordable" camera right now. But in the end it's only an incremental improvement: a welcome but modest half a stop more than the previous a7S models.

The Alexa is capturing two stops more above middle grey than any of the Sony CineAlta cameras, and the difference is huge. Sony is actually gimping their CineAlta camera sensors by lowering the voltage (I think in order to manage power draw), so they are only capturing a subset of what the S-Log3 curve spec is capable of. This WDR upgrade could address that, but of course it probably requires a higher power draw. In linear light values, two more stops means capturing 4x the light information in the scene. You can't quite appreciate that when looking at a log-encoded curve.

After working with Alexa footage for years, anything else is a bit of a disappointment because it needs to be babysat, or it needs to be shot underexposed with an ND to get something approaching Alexa/film levels for bright sunlight, fire, explosions or hot specular highlights on cars, etc. And then you possibly need to denoise the shadows. It's not ideal, but it's workable. Look at how Deakins shoots: he just exposes for a grey card, like film, and doesn't worry about clipping highlights because there's tons of range.

I love the eND in the FX6 but it's just a convenience compared to actually capturing way more brightness information. Of course everyone has their own priorities, though. ✌️👍 There is the option to just use the FX6 and get both features, if the WDR gets rolled out.
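A quick back-of-the-envelope illustration of the stops-to-linear-light arithmetic referenced above (a sketch only; the stop figures are the ones quoted in this thread, not measurements):

```python
# Each stop doubles the linear scene light, so n extra stops of headroom
# means 2**n times more light captured before clipping.
for stops in (0.5, 1, 2):
    print(f"{stops} extra stop(s) = {2 ** stops:.2f}x the linear light")
# 0.5 stop -> ~1.41x (roughly the modest gain over the previous a7S models)
# 2 stops  -> 4.00x  (the Alexa-vs-CineAlta highlight gap described above)
```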
TomTheDP Posted February 16, 2021

2 hours ago, Llaasseerr said: "We'll have to see if it materializes. But based on the white papers on this method, there is no temporal aliasing, unlike Red's shitty kludge from a few years ago..."

I kind of doubt the WDR will be of much use, but it would be awesome if my hunch is wrong. More so, I just doubt Sony would allow their lower-tier camera to have a big advantage over the FX9 or even the Venice. For me even the A73 has excellent dynamic range; it's just gimped by the 10-bit and awful processing. I know the Alexa is the bee's knees, but I've seen it pitted against the Pocket 6K, and the 6K actually recovered more highlights with the highlight recovery tool in RAW. Yes, the highlight recovery tool is cheating, but the point is these new sensors aren't that far off. The Panasonic S1 can recover more information in the highlights and shadows than the A7S3. I am really happy with it, to the point where I no longer feel left out like I did with a GH5 or XT3 or whatever other camera. Worst case on the S1 is I need a little bit of fill light for crazy dynamic range scenes.

Being able to shoot ISO 4000 and get clean images is a huge luxury. I can light a scene with a 100-watt light bulb even after pushing it through diffusion. I mean, yeah, it would be nice if I could swing like Deakins and never use fill light, which he's mentioned he never uses. That's the luxury of having an Alexa. For me, having a 1-pound camera that can be powered for hours off a 50-gram battery is an almost unbeatable luxury. For me the three things missing are 12-bit internal, an eND, and less rolling shutter.
Llaasseerr Posted February 17, 2021

2 hours ago, TomTheDP said: "I kind of doubt the WDR will be of much use, but it would be awesome if my hunch is wrong. More so, I just doubt Sony would allow their lower-tier camera to have a big advantage over the FX9 or even the Venice..."

Yeah, at this stage it makes more sense to put the WDR feature in the FX6 as a paid update, which is what the original rumor suggested. But it's fun to speculate that it may be in the FX3, because it's the same sensor and it has the active cooling system. You never know.

Agreed, the DR of the A7S3 is still great. Have you tried shooting raw? No noise then. Plus an extra 2 bits of S-Log3, and importantly NO chroma compression.

Re: the Pocket 6K, there's no way it can get more DR than the Alexa with highlight recovery if they are both exposed the same. As you say, highlight recovery is just a synthesis, copying information from one channel into the other two clipped channels. BMD cameras just don't hold up. You can link me to the files if you want, and I'll take a look.

I haven't checked the S1, but from the CineD Xyla test it's 12.1 stops, which is almost a stop less than the a7sIII, unless it's maybe more skewed to clean shadows. Personally, my interest is in the values above middle grey, though. I have checked a lot of cameras to see if anything is close to an Alexa, and consistently nothing is, which really surprises me considering how old the sensor tech is at this point. Cameras are just getting worse because they are pursuing more pixels, for the most part. But with the a7sIII, Sony actually started to cross the Rubicon, which is pretty interesting to me. Hence my interest in where they go with this sensor in the FX6 and the FX3.

The way I check DR: if the footage is not raw, I linearise the log plates to linear floating point based on the technical transform provided by the vendor, or I use the vendor's ACES IDT, then I check the max value where the sensor clips. This is easiest to do in Nuke or Scratch, and is a completely unambiguous method.
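A rough sketch of that linearise-and-check method, assuming Sony's published S-Log3 decode formula; the 0.92 clip code value below is purely hypothetical, just to show the idea (in practice you'd take the max code value actually present in the plate):

```python
import numpy as np

def slog3_to_linear(t):
    """Sony S-Log3 code values (0-1) back to scene-linear reflectance,
    per Sony's published S-Log3 formula."""
    t = np.asarray(t, dtype=np.float64)
    cut = 171.2102946929 / 1023.0
    log_seg = (10.0 ** ((t * 1023.0 - 420.0) / 261.5)) * (0.18 + 0.01) - 0.01
    lin_seg = (t * 1023.0 - 95.0) * 0.01125 / (171.2102946929 - 95.0)
    return np.where(t >= cut, log_seg, lin_seg)

def stops_above_grey(code_value, grey=0.18):
    """Express a code value as stops above middle grey in linear light."""
    return float(np.log2(slog3_to_linear(code_value) / grey))

print(stops_above_grey(1.0))   # ~7.7: theoretical ceiling of the S-Log3 curve
print(stops_above_grey(0.92))  # ~6.7: a hypothetical sensor clip point
```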
Caleb Genheimer Posted February 17, 2021

8 hours ago, Llaasseerr said: "We'll have to see if it materializes. But based on the white papers on this method, there is no temporal aliasing, unlike Red's shitty kludge from a few years ago..."

I'm curious whether the next implementation of the Ursa 12K sensor will implement DGO... there's already some really different sauce in that thing with the RGBW Bayer, and I'd think the Bayer groupings have enough individual photosites (4 total, 2 white and 2 colored) that half could operate at an entirely different gain for a serious DR increase. Even if it pans out to a little more noise, there's just so much resolution to work with that even a moderate amount of noise reduction wouldn't cripple it below DCI.
HockeyFan12 Posted February 17, 2021

3 hours ago, Llaasseerr said: "Re: the Pocket 6K, there's no way it can get more DR than the Alexa with highlight recovery if they are both exposed the same. As you say, highlight recovery is just a synthesis, copying information from one channel into the other two clipped channels. BMD cameras just don't hold up. You can link me to the files if you want, and I'll take a look."
TomTheDP Posted February 17, 2021

4 hours ago, Llaasseerr said: "Yeah, at this stage it makes more sense to put the WDR feature in the FX6 as a paid update, which is what the original rumor suggested. But it's fun to speculate that it may be in the FX3, because it's the same sensor and it has the active cooling system..."

HockeyFan12 linked it above. I am talking about latitude demonstrated in over/under tests. I've seen the S1 against the A7S3 with internal 10-bit, and the S1, or maybe it was the S1H, holds highlights better along with less noisy shadows. Yes, it was the S1H, though with the latest firmware update I don't think there'll be a difference. It even beats the A7S3 RAW. There is a difference between latitude and dynamic range, of course. The Pocket 6K can recover highlights amazingly well. I'd get the P6K but I don't want 6K files. I'd rather shoot internal 4K 10-bit on my S1 or record S35 4K raw to a Ninja, something I can't do on the P6K unless I want a small sensor FOV.
TomTheDP Posted February 17, 2021

Ran out of time to edit my comment above. Rewatching the video, the S1H has an edge in the highlights and a pretty noticeable edge in the shadows. It would be easier to see without YouTube compression of course, but it's noticeable. I think the rolling shutter benefit on the A7S3 can't be downplayed. The price of the S1 is hard to beat though. I was debating using it as my main camera with a Ninja V if the job calls for it. However, a RED Scarlet recently fell into my lap, so I've been toying with that. I'd probably get a Pocket 6K, despite my hate for the body, but I really don't want to deal with 6K files. I can shoot to the Ninja V in S35 4K, which is something I can't do on the Pocket 6K. The S35 crop on the S1 is also a wider FOV than the Pocket's.
deezid Posted February 17, 2021

9 hours ago, Llaasseerr said: "I haven't checked the S1, but from the CineD Xyla test it's 12.1 stops, which is almost a stop less than the a7sIII, unless it's maybe more skewed to clean shadows. Personally, my interest is in the values above middle grey, though."

Same with the S5. But keep in mind IMATEST can be tricked by strong noise reduction, which, quite frankly, the S5 doesn't apply at all; basically some spatial chroma noise reduction only. Comparisons like these are probably more relevant, starting from ETTR at the top:

src: https://www.slashcam.de/artikel/Test/Canon-EOS-C70---Dynamik-wie-bei-Vollformat-Sensoren-.html

The S5 has around 1-2 more usable stops of latitude thanks to better NR, downsampling and the IMX410, which seems to have a lower noise floor than the A7sIII/FX6 sensor in general.
Llaasseerr Posted February 17, 2021

8 hours ago, HockeyFan12 said: (the test linked above)

Thanks, I took a look at this and it was a fun test. Okay, I can see that with highlight recovery the BRAW has some more detail in the window reflections than the ARRIRAW. But this was shot +7 stops overexposed, so at best it's only useful as an extreme latitude test, where really you would expose correctly when filming and then, if needed, push it +/-2, maybe 3 stops at most. Both of the images are insanely clipped; there's no sky detail. The issue with using highlight recovery is that it's extrapolating from data that's not really captured and making a best-guess synthesis. Sometimes it works, sometimes it doesn't. But it's not a strategy for capturing more dynamic range, which is probably why it's disabled in Resolve by default, even for BM's own cameras. It's more of a neat party trick when combined with extreme overexposure like this.

Screengrabs uploaded. The image with the most highlight detail is the BRAW with highlight recovery. Without highlight recovery, it has 0.7 stops less in the highlights than the Alexa. If it was even remotely correctly exposed, the Alexa would be holding two stops more detail, maybe 1.5 if factoring in successful highlight recovery on the BRAW. This guy's original comparison method in Resolve is not apples to apples, in the way he's treating the footage differently.

All I did was import the BRAW into Resolve and export it as a linear ACES EXR, both with and without HR, so I'm using BM's own ACES color science for this camera. I then took them into Nuke, where I had already imported the ARRIRAW through ACES. This ingest process provides baseline consistency as far as the scene-referred light values captured by both cameras. I then lowered the exposure on all three by -7 stops. As you can see from the screengrabs, that brings the exposure into line if you look at the trees and the helicopter as a stand-in for shooting a grey card. The sensor clipping point is the sky and the reflections, to varying degrees.

EDIT: Forgot to mention these screengrabs are captured showing "luminance only" in Nuke, to emphasize the consistent exposure and the highlight differences in each image.
Llaasseerr Posted February 17, 2021

9 minutes ago, Llaasseerr said: "Thanks, I took a look at this and it was a fun test..."

I wasn't able to edit this in time, but the "BRAW no HR" image was displaying the red channel, not luminance. So I'm uploading it again, showing luminance.
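For what it's worth, the -7 stop normalisation described above is just a gain in linear light, so the relative highlight headroom falls out as a simple log2 ratio of the clip values. A minimal sketch of the idea, assuming the plates are already decoded to scene-linear float arrays (the arrays and clip values below are stand-ins, not the actual data from this test):

```python
import numpy as np

def apply_stops(linear_img, stops):
    """Shift exposure of a scene-linear image by a number of stops (2x per stop)."""
    return linear_img * (2.0 ** stops)

def clip_point_in_stops(linear_img, grey=0.18):
    """The sensor clip point, expressed as stops above middle grey."""
    return float(np.log2(linear_img.max() / grey))

# stand-in data: pretend these came from linear ACES EXRs of the two cameras
alexa = np.random.rand(256, 256) * 44.0
braw  = np.random.rand(256, 256) * 27.0

# bring the +7-stop overexposed test back into line, as described above
alexa_n = apply_stops(alexa, -7)
braw_n  = apply_stops(braw, -7)

# the exposure shift cancels in a relative comparison; only the ratio matters
print(clip_point_in_stops(alexa_n) - clip_point_in_stops(braw_n))  # ~0.7 stops
```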
Llaasseerr Posted February 17, 2021

9 hours ago, TomTheDP said: "HockeyFan12 linked it above. I am talking about latitude demonstrated in over/under tests..."

With the caveat that I haven't explored the S1 that much, that this is the S1H, and that I don't know the specifics of this test: broadly, both cameras are capable, except the latitude is clearly better on the -2 underexposed ProRes RAW from the S1H. But if you shift to the higher base ISO on both cameras instead of underexposing to such a degree, would that still be the case? In the end, for similar DR I prefer the much better rolling shutter of the a7sIII sensor.

My take on latitude, for my purposes, is that it's how much you can recover up or down within the bucket of the dynamic range captured. Generally speaking, shooting ISO-invariant at one of the base ISOs with the max dynamic range is probably the way you are going to do it, whether it's RAW or, say, S-Log3 in CineEI mode. So for me, latitude is dependent on the total DR as shot at a base ISO. But assuming I've done my job and exposed roughly correctly when shooting (say within half a stop of my intent, unless I'm deliberately pushing or underexposing), I will basically only want about +/-2 stops, maybe 3, of latitude for any given exposed image. So the overall DR is the bigger factor.

These kinds of tests do not really take wide DR into account for a correctly exposed scene, because they are looking at a grey card/colour chart and typically Caucasian skin tones, but generally not at extreme light sources or highlights that would clip the sensor at a given exposure. The daylight Alexa tests shot through the exposure range hosted on cinematography.net are a better example. Shadows, though, are probably okay to gauge. But I don't trust that they are moving the exposure up or down in post correctly. I think they are just eyeballing it to match the base ISO image. They really need to do a mathematically correct exposure change in a linear floating point space in order to simulate an ISO/f-stop/ND filter exposure change.

As far as this specific test goes, he has not published his colour pipeline, so it's hard to do comparisons. The only thing I know for sure is that at one point he shot ProRes RAW. But how was it interpreted? What did he capture the non-raw footage as? If log, did he import it correctly for each camera vendor's log type? What LUT did he use as his final look? Did he use it for both cameras? All the variables need to be removed.
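To make that last point concrete, here's a minimal sketch (my own illustration, not anyone's published tool) of turning an ISO change, an aperture change or an ND density into the exact linear gain you'd apply in floating point, rather than eyeballing it:

```python
import math

def gain_from_stops(stops):
    """A change of n stops is an exact 2**n gain in scene-linear light."""
    return 2.0 ** stops

def stops_from_iso(iso_from, iso_to):
    """ISO is linear in gain: 800 -> 1600 is +1 stop."""
    return math.log2(iso_to / iso_from)

def stops_from_fstop(n_from, n_to):
    """Exposure scales with the inverse square of the f-number: f/4 -> f/2.8 is about +1 stop."""
    return 2.0 * math.log2(n_from / n_to)

def stops_from_nd(optical_density):
    """An ND's optical density in stops: ND 0.9 transmits 10**-0.9, i.e. cuts ~3 stops."""
    return optical_density * math.log2(10.0)

print(gain_from_stops(stops_from_iso(800, 1600)))   # 2.0    -> multiply linear pixels by 2
print(gain_from_stops(-stops_from_nd(0.9)))         # ~0.125 -> ND 0.9 is roughly a 1/8 gain
print(gain_from_stops(stops_from_fstop(4.0, 2.8)))  # ~2.0   -> opening up one stop
```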
Llaasseerr Posted February 17, 2021

3 hours ago, deezid said: "Same with the S5. But keep in mind IMATEST can be tricked by strong noise reduction, which, quite frankly, the S5 doesn't apply at all..."

For me personally, any ETTR method is not relevant; I'm only interested in exposing for middle grey. Each to their own, though. Also, is there any attempt to bring these images into a common baseline space to compare them, via camera-vendor-supplied colour transforms? There is no mention in the translated article. And yes, NR is a factor; that is why I would advocate for DR tests without it, such as ProRes RAW, or the FX6 with NR off, compared to an Alexa that does not have NR anyway, and sticking to the Arri method for noise floor calculation with the Xyla/Imatest tests for consistency. In the end, I can do NR in post with Neat Video or a deep learning method and get a better result than an in-camera method. I never shoot with NR enabled, though.
Llaasseerr Posted February 17, 2021

To bring this roughly back to the FX3: I just wanted to mention that, for me, Assimilate Scratch is a great piece of software for ingesting ProRes RAW from the Sony cams and getting the maximum dynamic range and image quality out of them. It then allows going into Resolve or Nuke, of course. For my tests, I've exported from Scratch as ACES OpenEXR, which is the biggest bucket possible for dynamic range and wide colour gamut, so I'm confident I'm getting the most the camera has to offer. I accept that other people are probably looking for more convenience, though.
TomTheDP Posted February 17, 2021

Forgive my ignorance on this subject; dynamic range and latitude have always confused me. My understanding is that dynamic range measures the brightest and darkest regions that retain detail in an image, and that this doesn't take into account shadows and highlights that can be recovered in post. If the highlights/shadows can be recovered, then what is the disadvantage? Is it just that it requires more post work to get the image where you want it?
Llaasseerr Posted February 17, 2021

54 minutes ago, TomTheDP said: "Forgive my ignorance on this subject; dynamic range and latitude have always confused me..."

No worries. Yes, DR is "measuring the brightest and darkest regions that retain detail in an image". However, it does also "take into account shadows and highlights that can be recovered in post." This is especially true since we are recording images more and more often as ISO-invariant. That's the case with raw, but also if you look at what CineEI log recording is on the Sony FX cams, it's just saying "we record at the base ISO where sensor DR is maximized, but we will display on the camera monitor with the ISO change you want, and also put it in the metadata". So effectively it's the same as raw or log at a base ISO. You don't get any more shadows or highlights than what is recorded in that raw or log image, but it's how the data is displayed that has led to the term "highlight recovery". Unless we're specifically talking about highlight reconstruction, which is where Resolve will synthesize new data from partially clipped channels in the upper range.

Basically there's no highlight recovery; it's just a case of bringing certain detail into the normalised 0-1 range that would otherwise not be visible once a raw or log image is converted for display on a Rec709 or P3 monitor. More of this data can be displayed on an HDR monitor. If you look at the linear values a log curve can hold in a normalized 0-1 space, it goes up into double digits: with, say, S-Log3 it's about 38.2, and with Arri LogC about 55.0. This represents a way of compressing the raw sensor values into a 0-1 container. When you invert the log curve, you get the full range of raw values again. But you'll only ever be able to see 0-1 on the display. So in order to see extreme highlight information, an aggressive filmic S-curve with highlight rolloff is applied after the fact. That mimics the behaviour of film, which can capture very high brightness values and compress them aggressively in a very aesthetically pleasing way. A gamut and gamma transform is also applied to comply with the viewing device.

That is how "highlight recovery" can bring superbright values back into range on the viewing device, compared to a baked image where all the above transforms have already been applied in-camera (S-Cinetone, etc). If you have set up a proper colour pipeline for your project then you don't need to think about the colour transforms moving forward, so there's no extra work. The highlights fall where you expect them, but assuming you have enough DR you can still adjust the exposure to bring certain values into range as if you were doing it in camera, and the highlight rolloff will dynamically display correctly based on the final film-look LUT that you're applying.
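To put numbers on those "about 38.2" and "about 55.0" figures, here's a sketch that decodes LogC code value 1.0 back to scene-linear, using curve constants from Arri's published LogC (v3, EI 800) definition; treat the constants as assumptions and double-check them against the white paper before relying on this:

```python
import math

# ALEXA LogC (v3) constants at EI 800, as published by Arri
A, B, C, D = 5.555556, 0.052272, 0.247190, 0.385537
E, F, CUT  = 5.367655, 0.092809, 0.010591

def logc_to_linear(t):
    """LogC (0-1 code value) back to scene-linear, relative to 0.18 middle grey."""
    if t > E * CUT + F:
        return (10.0 ** ((t - D) / C) - B) / A
    return (t - F) / E

peak = logc_to_linear(1.0)
print(peak)                    # ~55: the max linear value the curve can hold
print(math.log2(peak / 0.18))  # ~8.3 stops above middle grey
print(logc_to_linear(0.391))   # ~0.18: middle grey sits near 39% of the signal
```

The same exercise with the S-Log3 decode sketched earlier in the thread lands at roughly 38 for code value 1.0, which is where the figures quoted above come from.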
TomTheDP Posted February 17, 2021

19 minutes ago, Llaasseerr said: "No worries. Yes, DR is 'measuring the brightest and darkest regions that retain detail in an image'. However, it does also 'take into account shadows and highlights that can be recovered in post'..."

Thank you for the explanation. That was my previous understanding, though less technically minded. What led to my confusion is dynamic range measurements. The Pocket 6K has considerably more shadow and highlight information than the URSA 4.6K, even though the URSA is measured as having more dynamic range by CineD and BM themselves. I know because I've tested both cameras, and the difference is not even debatable. The P6K killed the URSA. I've found the highlight recovery tool in Resolve to be very useful unless you are recovering skin tones or really critical stuff.

In regards to the FX3, I am definitely interested if they get rid of the nasty internal processing.