CaptainHook Posted October 28, 2019

3 hours ago, Llaasseerr said:
Are you talking about Sigma adding DNG metadata? Obviously that's great, and I like Resolve's ability to create an IDT on the fly using that data. If it's something else you're talking about, I'd be interested to know.

What I was describing is a higher-precision IDT based on spectral sensitivity measurements - not "just" a matrix and a 1D delog LUT. If you look at the rawtoaces GitHub page, the intermediate method is the way you describe Resolve working; it only falls back to that method when the aforementioned spectral sensitivity data is not available.

I can't speak for Sigma with certainty, but the matrices used in DNGs are generally calculated from spectral sensitivity measurements - rawtoaces just provides a tool to calculate the matrices from the spectral response data (the Ceres solver it uses is one method of doing a regression fit) and then converts the file the same way as with an IDT. They may prefer to keep the calculated matrices in float without rounding, but you don't need many decimal places of precision to reduce the error delta to "insignificant", so rounding is of no real concern here.

Having rawtoaces do the calculation also removes any preferences the manufacturer may have about regression-fit techniques, weighting certain colours for higher accuracy over others, the training data used (skin patches etc.), how to handle chromatic adaptation, and so on. This is really the only area where a manufacturer can impart their 'taste' into their "colour science" (apart from maybe picking primaries, which is irrelevant in an ACES workflow unless you want to start grading from a particular manufacturer's gamut). Noise and other attributes are, IMHO, not "colour science" but calibration and other image processing decisions.

The ideal goal for ACES is to remove the manufacturer's colour science preferences, which leaves the rest down to the metamerism of the sensor and its overall dynamic range - the elements that survive ACES trying to make all sensors look as similar as possible once transformed into the same space. It's also why ACES can't fully succeed at that goal, though to be fair it does allow preferences in colour science to remain somewhat intact, since manufacturers can provide their own IDTs. The Academy would prefer all IDTs be created the same way, as that would get them closer to their goal.

The Academy document they link describes the basic principles of calculating matrices from spectral sensitivity data, but also offers an alternative in the appendix based on capturing known targets (colour charts) under various colour temperatures/source illuminants. I also mentioned both of these on the previous page. So an actual IDT generated from spectral response data just contains a matrix to convert from sensor RGB/space to ACES, plus a way to transform to linear if needed (it can be an equation or a LUT).
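As a rough illustration of the regression-fit step (not rawtoaces' actual implementation - it uses the Ceres solver and its own training data, so the camera responses and target values below are made-up placeholders), a least-squares fit of a 3x3 sensor-RGB-to-ACES matrix looks something like this:

```python
import numpy as np

# Hypothetical training data: each row is one training patch (e.g. a skin or chart colour).
# camera_rgb = what the sensor records for that patch (derived from its spectral sensitivities),
# aces_rgb   = the "ground truth" ACES values for the same patch under the same illuminant.
camera_rgb = np.array([
    [0.42, 0.31, 0.20],
    [0.18, 0.22, 0.35],
    [0.55, 0.48, 0.30],
    [0.10, 0.09, 0.08],
])  # placeholder numbers
aces_rgb = np.array([
    [0.40, 0.30, 0.18],
    [0.15, 0.21, 0.37],
    [0.52, 0.47, 0.28],
    [0.10, 0.09, 0.08],
])  # placeholder numbers

# Solve camera_rgb @ M.T ~= aces_rgb for the 3x3 matrix M in a least-squares sense.
M_T, residuals, rank, _ = np.linalg.lstsq(camera_rgb, aces_rgb, rcond=None)
M = M_T.T
print("fitted 3x3 IDT matrix:\n", M)
```

A manufacturer's 'taste' enters exactly here: which patches are in the training set, how the error on each patch is weighted, and what chromatic adaptation is applied before the fit.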
Take a look at an IDT from Sony for S-Log3 and S-Gamut3 - it just has the 3x3 matrix and a log-to-linear equation: https://github.com/ampas/aces-dev/blob/master/transforms/ctl/idt/vendorSupplied/sony/IDT.Sony.SLog3_SGamut3.ctl

Or look at an Arri one for LogC - a 3x3 matrix (at the bottom of the long file) and a log-to-linear LUT: https://raw.githubusercontent.com/ampas/aces-dev/master/transforms/ctl/idt/vendorSupplied/arri/alexa/v3/EI800/IDT.ARRI.Alexa-v3-logC-EI800.ctl

Also notice that with Arri, not only is there a folder for each ISO, there are multiple IDTs for the raw files at each colour temperature (CCT = correlated colour temperature) - going back to what I described earlier about needing different transforms per colour temperature (DNG processing pipelines handle this automatically if the author uses two matrices in combination with the AsShotNeutral tag). Arri also has different matrices for when you use their internal NDs, as they deemed it necessary to compensate for the colour shift the NDs introduce: https://github.com/ampas/aces-dev/tree/master/transforms/ctl/idt/vendorSupplied/arri/alexa/v3/EI800

If you're really curious, Arri even provides the Python script they use to calculate the IDTs (it uses pre-calculated matrices for each CCT, likely generated from the spectral response data): https://github.com/ampas/aces-dev/blob/master/transforms/ctl/idt/vendorSupplied/arri/alexa/v3_IDT_maker.py

So a DNG already contains the ingredients of an IDT: a way to convert to linear (if the data isn't already linear) and the matrix (or matrices) required to transform from sensor space to ACES - most likely calculated from spectral sensitivity/response data (in the DNG case you get to ACES primaries via a standard transform through XYZ). If you have a DNG, you don't need an IDT. The information is there.

Hope that clears up what I was trying to say.
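For a concrete sense of how small these IDTs are, here is the S-Log3-to-linear decode step from the Sony CTL linked above, transcribed to Python (constants as published in Sony's S-Log3 definition - verify against the CTL file itself before relying on them):

```python
def slog3_to_linear(code: float) -> float:
    """Convert a normalized (0-1) S-Log3 code value to scene-linear reflectance.
    Constants follow Sony's published S-Log3 curve / the ACES IDT CTL."""
    if code >= 171.2102946929 / 1023.0:
        return (10.0 ** ((code * 1023.0 - 420.0) / 261.5)) * (0.18 + 0.01) - 0.01
    else:
        return (code * 1023.0 - 95.0) * 0.01125000 / (171.2102946929 - 95.0)

# Sanity check: S-Log3 puts middle grey around 41% code value.
print(slog3_to_linear(0.41))   # ~0.18

# The rest of the IDT is just a 3x3 matrix multiply from S-Gamut3 to ACES AP0;
# the matrix values live in the CTL file and are omitted here rather than quoted from memory.
```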
CaptainHook Posted October 28, 2019

On 10/27/2019 at 3:40 PM, Llaasseerr said:
In a film post production pipeline, raw controls are really for ingest at the start of post, not grading at the end.

I would offer that for matching shots (the majority of most grading work), adjusting white balance in sensor space (or even XYZ as a fallback) and exposure in linear makes a huge difference to how well shots match and flow. I see many colourists claim they can white balance just as well with the normal primaries controls, but I think if they actually spent considerable time with both approaches instead of just one, they would develop a sensitivity that would make them rethink just how 'good' the results with primaries are. It's one area where photographers experienced with dialling in white balance on RAW files develop that sensitivity and eye for how it looks when white balance is transformed more accurately - more so than those in the motion image world, who still aren't used to it.

I've been a fan of Ian Vertovec from Light Iron for quite a few years, and I was not surprised to learn recently that he likes to do basic adjustments in linear, because there was something in his work that stood out to me (alongside his eye/talent/skill/experience, of course).
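A minimal sketch of what "white balance in sensor space and exposure in linear" means mathematically (the general idea only, not Resolve's or any vendor's actual implementation; the sample values are placeholders):

```python
import numpy as np

def white_balance_sensor(raw_rgb: np.ndarray, neutral_rgb: np.ndarray) -> np.ndarray:
    """Per-channel gains applied to linear sensor RGB so a neutral patch comes out
    equal in R, G and B. This happens before any matrix into a working/display space."""
    gains = neutral_rgb[1] / neutral_rgb   # normalise gains to the green channel
    return raw_rgb * gains

def exposure_linear(rgb_linear: np.ndarray, stops: float) -> np.ndarray:
    """Exposure in scene-linear is a simple multiply: +1 stop doubles every value,
    which is what physically happens when twice the light hits the sensor."""
    return rgb_linear * (2.0 ** stops)

# Example: a grey card recorded as (0.30, 0.22, 0.15) in linear sensor RGB (placeholder numbers).
patch = np.array([0.30, 0.22, 0.15])
balanced = white_balance_sensor(patch, neutral_rgb=patch)   # -> roughly equal channels
brighter = exposure_linear(balanced, stops=+1.0)            # one stop up
print(balanced, brighter)
```

Doing the "same" corrections with lift/gamma/gain on display-referred primaries applies them after a non-linear transfer curve, so the adjustment no longer corresponds to a physical change of illuminant or exposure - which is one reason shots balanced that way tend not to track as cleanly across a scene.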
androidlad Posted October 28, 2019

Based on my analysis, the 4K DNG RAW is oversampled from the full 6K readout. This is possible because this so-called "RAW" is already debayered in-camera to an RGB format.
cpc Posted October 28, 2019

3 hours ago, paulinventome said:
+1 on working with frames, especially EXR. One benefit is if a render crashes halfway through, you still have the frames from before and can start where you left off. That's a real world thing!

You should be able to do the same with any intraframe codec in a container, no? In any case, whether your intermediate is a sequence (DPX or EXR or whatever) has little to do with whether your source media is a sequence.

On 10/27/2019 at 6:40 AM, Llaasseerr said:
Per frame metadata is a big part of feature film production and is not going anywhere. But at the lower end of the market it is probably not that relevant.

Are you talking about source metadata or intermediate (post) metadata? The latter shouldn't be tied to what your source is.

On 10/27/2019 at 6:40 AM, Llaasseerr said:
Have you actually tried it? There are many reasons that VFX and DI facilities work with frames. A single frame is much smaller to read across the network than a large movie file and the software will display a random frame much faster when scrubbing around - not factoring in CPU overhead for things like compression - or debayering. As for my example of upload to cloud, a multithreaded command line upload of a frame sequence is much faster than a movie file, and I'm able to stream frames from cloud to timeline with a fast enough internet connection. But in a small setup where you are just making your own movies at home then this all may be a moot point.

I see. If you are actually streaming individual frames from a network, that makes sense.

On 10/27/2019 at 6:40 AM, Llaasseerr said:
In a film post production pipeline, raw controls are really for ingest at the start of post, not grading at the end. But I agree that if you are a one person band who doesn't need to share files with anyone, then ingesting raw, editing and finishing in Resolve would be possible. Our workflows are very different because of different working environments. And for what you are doing, your way of working may be best for you.

It is not only for one-man bands, though. I've done it on a couple of indie productions where I shared DNG proxies with the editor (they did edit in Resolve), and I know of at least two production houses that work this way. Yes, bigger productions will likely stick to a more traditional workflow, but I think film post can gain as much from using raw processing controls for correction and grading as any other production, as it is in some ways more intuitive and more mathematically/physically correct than the common alternative. Edit: I see CaptainHook has expressed a similar and more detailed opinion on this point above.

1 hour ago, androidlad said:
Based on my analysis, 4K DNG RAW is oversampled from full 6K readout, this is possible because this so-called "RAW" is already in-camera debayered RGB format.

It isn't necessary to debayer in order to create a downscaled Bayer mosaic; you can simply resample per channel.
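A rough sketch of what "resample per channel" could mean (one of several possible schemes, and not Sigma's actual pipeline): split the mosaic into its four CFA planes, scale each plane on its own, then re-interleave into a smaller mosaic.

```python
import numpy as np

def downscale_bayer(mosaic: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Downscale an RGGB Bayer mosaic without debayering.
    Each of the four CFA planes is resampled independently (nearest-neighbour here
    for brevity; a real pipeline would use a proper filter), then re-interleaved."""
    out = np.zeros((out_h, out_w), dtype=mosaic.dtype)
    for dy in range(2):                       # the four CFA positions: R, G1, G2, B
        for dx in range(2):
            plane = mosaic[dy::2, dx::2]
            ph, pw = out_h // 2, out_w // 2
            ys = (np.arange(ph) * plane.shape[0] / ph).astype(int)
            xs = (np.arange(pw) * plane.shape[1] / pw).astype(int)
            out[dy::2, dx::2] = plane[np.ix_(ys, xs)]
    return out

# e.g. a 6K-ish mosaic down to a 4K-ish one (placeholder dimensions, random data)
big = np.random.randint(0, 4095, (3120, 6048), dtype=np.uint16)
small = downscale_bayer(big, 2160, 3840)
print(small.shape)  # (2160, 3840) - still a Bayer mosaic, never debayered
```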
Lars Steenhoff Posted October 28, 2019

Pretty sure it's not debayered. It's most likely downsampled per channel.
Lars Steenhoff Posted October 28, 2019

[embedded videos]
Andrew Reid Posted October 28, 2019

Nice. I find that 12bit is quite a big step up from the internal 8bit if you raise the shadows - more dynamic range. But for the uncompressed 4:4:4 look, with no big exposure changes in post (exposed and lit right in camera), the internal 4K 8bit looks amazing and as good as 12bit.
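The shadow difference is easy to see with some quick arithmetic (a back-of-envelope sketch assuming linear raw encoding, which is how these CDNGs are stored; it ignores noise and black level):

```python
def codes_in_stop(bits: int, stop: int) -> int:
    """Number of distinct code values inside a given stop, counting down from clipping
    (stop 1 = the brightest stop), for a linear encoding."""
    return (2 ** bits) // (2 ** stop)

for bits in (8, 12):
    print(f"{bits}-bit:", [codes_in_stop(bits, s) for s in range(1, 7)])
# 8-bit:  [128, 64, 32, 16, 8, 4]
# 12-bit: [2048, 1024, 512, 256, 128, 64]
```

Six stops down from clipping, 8-bit linear has only a handful of code values left - exactly where lifted shadows start to band and fall apart - while 12-bit still has dozens.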
Lars Steenhoff Posted October 28, 2019

I think the 8bit internal raw looks better than the compressed files from most cameras, whether they're 8 or 10 bit.
Andrew Reid Posted October 28, 2019

Yeah, sure does.... It has the look of RAW. It grades like RAW. It quacks like RAW. It is RAW!

Not needing to shoot LOG or use a View Assist is a bonus, although I would like to see those in a later firmware update. Something else that would be nice is full frame 2.8K RAW, 10bit or 12bit internal, with 3:1 compression - bring the file sizes down some more.
Lars Steenhoff Posted October 28, 2019

I hope they can make 10 bit internal to SD work with a firmware upgrade, like they originally planned.
Andrew Reid Posted October 28, 2019

That would be great, it's just that the file sizes are absolutely enormous. Uncompressed RAW vs compressed RAW... that is the discussion I want happening at the next Sigma engineers' meeting.
Lars Steenhoff Posted October 28, 2019

Yes, for sure - my preference would be to go for lossless compression first (like I had on the 5D Mark III with Magic Lantern). No artefacts, and you still save half the storage.
tomastancredi Posted October 28, 2019

Slimraw works amazingly well for lossless DNG compression with the D16.
paulinventome Posted October 28, 2019

1 hour ago, Andrew Reid said:
Nice. I find that the 12bit is quite a big step up from the internal 8bit if you raise the shadows. More dynamic range. But for the uncompressed 4:4:4 look, with no big exposure changes in post (got it right in cam and lit it right), the internal 4K 8bit looks amazing and as good as 12bit.

To be expected, but have you had a chance to compare 10 bit? With a decent curve it could be as good as 12, and it has more frame rate options.

Those new videos above show some horrific highlight clipping - I wonder what the workflow was with them...

cheers
Paul
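Paul's point about a curve is worth spelling out (a rough sketch using a generic log2-style curve over an assumed 12-stop range, not any curve Sigma actually implements): a log encode spends its 10-bit code values roughly evenly per stop, whereas linear spends most of them on the top stops.

```python
import math

STOPS = 12                       # assumed usable range below clipping

def log_encode(x: float) -> float:
    """Generic log2-style curve mapping [2**-STOPS, 1.0] linear to [0, 1] code (illustrative only)."""
    return max(0.0, (math.log2(x) + STOPS) / STOPS)

def codes_per_stop_log(bits: int, stop: int) -> int:
    # code values spent between the top of this stop and the stop below it
    hi = log_encode(1.0 / 2 ** (stop - 1))
    lo = log_encode(1.0 / 2 ** stop)
    return round((hi - lo) * (2 ** bits - 1))

def codes_per_stop_linear(bits: int, stop: int) -> int:
    return (2 ** bits) // (2 ** stop)

for s in range(1, 9):
    print(f"stop {s}: 10-bit log ~{codes_per_stop_log(10, s)}, "
          f"12-bit linear {codes_per_stop_linear(12, s)}")
```

The log encode gives every stop roughly 85 code values, so from about the sixth stop down it actually carries more precision than 12-bit linear - at the cost of precision in the brightest stops.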
deezid Posted October 28, 2019

7 minutes ago, paulinventome said:
Those new videos above show some horrific highlight clipping - i wonder what the workflow was with them...

Noticed that as well. But at least there's no weird red channel (BMD) or blue channel (S1) clipping.
Lars Steenhoff Posted October 28, 2019

I think the clipping is because most of those videos used a high-contrast develop setting. I can pull down the highlights of the example DNGs a lot in Adobe Camera Raw.

@Andrew Reid Do you have some 8 bit and 12 bit DNGs from the same test shot with the same exposure settings for us to have a look at?
Brian Williams Posted October 28, 2019

1 hour ago, tomastancredi said:
Slimraw works amazing for lossless dng compression with D16.

Next Sigma engineers meeting: "Can someone go get the company card? Let's buy out Slimraw and get this thing inside the camera for the next firmware."
Rob6 Posted October 28, 2019

On 10/26/2019 at 6:32 AM, Andrew Reid said:
The only thing I wish the Fp had was phase-detect on-chip autofocus. And maybe an articulated screen like the RX1R II. It is an amazing, unique camera as it is, and an absolute blast. Last time I had this much fun was when I first picked up a GH2 all those years ago. 60p in 12bit 1080p RAW. 4K 60p RAW Cinema DNG file sizes would be astronomical. Like 6 min per 128GB or something crazy. Yes, I wish it had 4K/60p full frame 10bit but we are not in year 2022 yet. Not even S1H can do it. Super 35mm mode.

Thanks for the comments! How good is the 60p 12bit 1080 RAW in terms of sharpness, and are there any artifacts?

Thanks!
Rob
Andrew Reid Posted October 28, 2019

2 hours ago, tomastancredi said:
Slimraw works amazing for lossless dng compression with D16.

It downscales as well: https://www.slimraw.com

So 7:1 compressed Cinema DNG at, say, 3.5K is just a transcode away with the Sigma Fp. Going to give it a go.

1 hour ago, Rob6 said:
Thanks for the comments! How good is the 60p in 12bit 1080 RAW in terms of sharpness and any artifacts? Thanks! Rob

Perhaps I can give you some Cinema DNG frames out of my camera. Here is a 12bit 1080/60p frame off the SD card, and the same shot in 8bit 1080/60p to give you an idea of how that holds up. I've also included a 4K 8bit shot. I recommend opening them in Photoshop and pushing them around a lot - you will see the difference, especially in the shadows.

I think this camera is also unique in that it does 120fps 1080p in ALL-I; usually Long-GOP compression is used for high frame rates. By the way, the MOV 4K quality is VERY good indeed - they did not compromise on the codec. It's like a pocket S1. I don't think the MOV is 10bit H.264, but it looks damn good.

PS:
CDNG 12bit 1080p at 24p = 633Mbit/s
CDNG 8bit 1080p at 24p = 422Mbit/s
Still quite large. Get the big SDs ready.

4K_8bit_A001_037_20191028_000005.DNG
2K_12bit_A001_035_20191028_000005.DNG
2K_8bit_A001_036_20191028_000005.DNG
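Those measured bitrates line up with what you'd expect from uncompressed frames plus some container overhead (a quick sanity check, assuming one tightly packed raw sample per photosite and ignoring DNG headers and audio):

```python
def cdng_mbit_per_sec(width: int, height: int, bits: int, fps: float) -> float:
    """Rough uncompressed CinemaDNG video bitrate: samples x bit depth x frame rate.
    Real files come out a little higher due to headers, audio and padding."""
    return width * height * bits * fps / 1e6

print(cdng_mbit_per_sec(1920, 1080, 12, 24))  # ~597 Mbit/s (measured: 633)
print(cdng_mbit_per_sec(1920, 1080, 8, 24))   # ~398 Mbit/s (measured: 422)
print(cdng_mbit_per_sec(3840, 2160, 12, 24))  # ~2389 Mbit/s for 4K 12bit - hence the SSD
```

The ~6% gap between the estimates and the measured figures is consistent with per-frame headers and filesystem overhead rather than any padding of the 12-bit samples.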
Brian Williams Posted October 28, 2019

40 minutes ago, Andrew Reid said:
Perhaps I can give you some Cinema DNG frames out of my camera. Here is a 12bit 1080/60p frame off the SD card, and same shot but 8bit 1080/60p to give you an idea of how that holds up. Also included a 4K 8bit shot.

Thanks for this - I think I will stick to the SSD recording. That 8-bit noise is no joke compared to the 12-bit.