Andrew Reid (Administrator, thread author) Posted May 23, 2021

On 5/22/2021 at 1:06 AM, Thomas S said:

A lot of this is due to the 32-bit float color space in NLEs. As long as the 8-bit file has enough information to avoid posterization, the 32-bit float pipeline will likely be able to fill in any gaps as the image is pushed hard. It is much easier for the math to fill in gaps while grading than while, say, upscaling an image. In upscaling, new pixels can be averaged, but averaging doesn't work for fine detail like a single hair. In grading we are trying to prevent posterization, which comes down to keeping gradients smooth, and there averaging the surrounding values works perfectly. For example, if you have one value of 200 and another of 250, it's easy in grading to average an in-between value of 225, which still creates a nice smooth gradient.

Where 10-bit matters is making sure the shot is captured well the first time. Once you have posterization it will always be there, and no 32-bit float processing can magically make it go away. If the shot shows no posterization visually, then no matter how hard it is pushed it likely never will, or pushing the 10-bit version would show just as much. That's why 32-bit float was created. 10-bit is a lot like 32-bit audio, or 16 stops of DR graded down to 10 stops: we record more so we have it and can manipulate it better. Most of the shots above would likely have still looked good with 6 bits.

You need a very long and complex gradient to break 8-bit. It can and does happen. The more noise the camera has, the less it will happen, because of dithering; I think this is partly why Sony always had such a high base ISO for log. Finally, 10-bit never promised better color, more dynamic range or fewer compression artifacts. That's not what bit depth does. It is only about how many different color samples can be used across the image, and the single real side effect of too few is posterization. Many computer monitors at one point were only 6-bit panels even if they claimed 8-bit, and most people never noticed unless they did something like span a full 1920-pixel-wide image with the gradient tool in Photoshop.

The clear blue sky image in the article wasn't even a difficult gradient; most of the sky was a similar shade of blue. To break 8-bit you need a gradient going from 0 blue to 255 blue across the full 3840 pixels of UHD 4K video. That means a unique blue sample every 15 pixels. So your sky needs to go from black on one end of the screen to bright blue on the other. Not always realistic, but skies around dusk and dawn spread the values out a lot more than midday. By comparison, 10-bit has a unique blue sample every 3.75 pixels at UHD. It doesn't even have to cover the full screen: a gradient over 100 pixels from 200 blue to 205 blue still means a new blue sample only every 20 pixels, even though the screen area is small. I develop mobile apps, and when I add a subtle gradient across something like a button I run into the same problem: the gradient needs enough range to cover the area of pixels or it will look steppy.

10-bit and higher is a safety net, a near guarantee of never having any kind of posterization. In the professional world that's important, and nobody wants surprises after the shoot is done.

Very interesting. Thanks for the insight, Thomas!
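To make the step-size arithmetic above concrete, here is a minimal back-of-the-envelope sketch in Python. The frame widths and ramp spans are illustrative, not measurements from any camera:

```python
# Back-of-the-envelope arithmetic: how many pixels share one code value when
# a smooth ramp is spread across a frame. ramp_span is the fraction of full
# scale the gradient covers (1.0 = black at one edge, clipping at the other).
def pixels_per_step(frame_width, bit_depth, ramp_span=1.0):
    levels_used = (2 ** bit_depth) * ramp_span
    return frame_width / levels_used

for bits in (6, 8, 10):
    print(f"{bits:2d}-bit, full-range ramp across UHD (3840 px): "
          f"one step every {pixels_per_step(3840, bits):.2f} px")

# The subtler case from the post: only 5 levels (blue 200 -> 205) over 100 px
print("5 levels over 100 px: one step every", 100 / 5, "px")
```

The wider each flat step, the more visible the band edges become; sensor noise dithers them back out, which is the same effect Thomas mentions.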
Bandido Posted May 25, 2021

10-bit is better for green screen (chroma key) work.
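A rough illustration of why finer quantization helps a keyer. This is a toy sketch with made-up values, not how any particular keyer works: the subtler the edge, the fewer distinct matte levels survive coarse quantization, which is where a pulled key starts to look steppy.

```python
import numpy as np

# Toy example: a very soft green-spill edge, quantized to 8 and 10 bits, then
# keyed with a naive color-distance matte. All values are invented for
# illustration only.
def quantize(x, bits):
    levels = 2 ** bits - 1
    return np.round(np.clip(x, 0, 1) * levels) / levels

t = np.linspace(0.0, 1.0, 64)                 # position across a 64 px edge
edge = np.stack([0.20 + 0.02 * t,             # R drifts very slightly
                 0.80 - 0.03 * t,             # G fades slightly off the key color
                 0.20 + 0.02 * t], axis=-1)   # B drifts very slightly

key = np.array([0.20, 0.80, 0.20])
for bits in (8, 10):
    matte = np.linalg.norm(quantize(edge, bits) - key, axis=-1)
    print(f"{bits}-bit source: {len(np.unique(matte))} distinct matte values across the edge")
```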
Rinad Amir Posted May 25, 2021

33 minutes ago, Bandido said: 10-bit is better for green screen (chroma key) work.

Don't forget it's also useful for correcting white balance if you accidentally missed it on set.
kye Posted May 25, 2021

On 5/22/2021 at 8:06 AM, Thomas S said: A lot of this is due to the 32-bit float color space in NLEs. ...

Good post.
I wrote a DCPX plugin that reduces the bit depth of images and found that you can reduce it significantly before it becomes visible on real-world images. I think 6 bits was fine for many test images of skin tones, for example. Probably the biggest enemy of the 8-bit codec is the gradients that occur when filming flat-coloured surfaces like any interior wall. Not only is the light going to be applied unevenly to the wall, and not only is the lens going to vignette a little, but the closer these are to perfect, the wider the posterisation banding is going to be. Still, you have to try pretty hard to get visible banding from 8-bit if the image is exposed well and isn't in a crazy-flat log profile.
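A minimal sketch of that kind of bit-depth-reduction experiment (illustrative only, not kye's actual plugin), using a gentle ramp as a stand-in for an evenly lit wall. The exposure range of roughly 0.40 to 0.55 of full scale is an assumption chosen to mimic a flat-lit interior surface:

```python
import numpy as np

# Re-quantize a smooth ramp to fewer bits and see how wide the bands get.
def reduce_bit_depth(img, bits):
    levels = 2 ** bits - 1
    return np.round(img * levels) / levels

wall = np.tile(np.linspace(0.40, 0.55, 1920), (1080, 1))

for bits in (10, 8, 6):
    banded = reduce_bit_depth(wall, bits)
    steps = len(np.unique(banded))
    print(f"{bits:2d}-bit wall: {steps:3d} distinct tones, ~{1920 // steps} px per band")
</br>```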
Thomas S Posted May 25, 2021

7 hours ago, kye said: Good post. I wrote a DCPX plugin that reduces the bit depth of images ...

Agreed. All the photos we look at online are 8-bit and rarely is 8-bit an issue. For normal Rec.709 video profiles 10-bit is a tad overkill, although there can be some extremely rare cases where it helps. The bigger plague of 8-bit is some H.264 encoders that are too aggressive in assuming that color samples will not be noticed as different and merging them into a macroblock. You can have a frame from an 8-bit H.264 video and an uncompressed 8-bit PNG of the same image and get more banding from the H.264. Encoders try to figure out what is safe to compress together: the stuff we can't see with the naked eye. If we can't see it, there is no point wasting bits on it. So an 8x8 pixel block with very subtle green variations may be turned into an 8x8 block of a single green color. This can cause banding where one would normally not have any in 8-bit. Panasonic suffered from this on the GH4 when they added log. The log was so flat that the encoder assumed it could compress areas of color into big macroblocks because the differences would not be noticeable. If the shot stayed as log they were right, but because the log was flatter than other log profiles it really struggled with areas of flat color like walls when graded. Sony's encoder did better at not grouping similar colors, at least up to S-Log2. S-Log3 could suffer from the same issues as Panasonic V-Log on the GH4 on older Sony cameras. The GH5 had an improved encoder that wasn't as aggressive with 8-bit and areas of similar colors.
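A toy sketch of that block-merging decision (nothing like a real H.264 encoder; the threshold simply stands in for the encoder's "close enough that nobody will notice" call). The 8-bit source has enough noise that it shows no flat runs, but the merged output does, and that is what a grade later digs out as banding:

```python
import numpy as np

def flatten_blocks(img, block=8, threshold=6):
    out = img.copy()
    h, w = img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = img[y:y+block, x:x+block]
            if tile.max() - tile.min() <= threshold:   # "visually identical" -> merge
                out[y:y+block, x:x+block] = int(round(tile.mean()))
    return out

def longest_flat_run(row):
    best = run = 1
    for a, b in zip(row[:-1], row[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

rng = np.random.default_rng(0)
ramp = np.linspace(110, 118, 256)                       # subtle gradient, like a flat-lit wall
noisy = np.clip(np.round(ramp + rng.normal(0, 0.7, (64, 256))), 0, 255).astype(int)
merged = flatten_blocks(noisy)

print("longest flat run in the 8-bit source :", longest_flat_run(noisy[32]), "px")
print("longest flat run after block merging :", longest_flat_run(merged[32]), "px")
```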
kye Posted May 26, 2021

13 hours ago, Thomas S said: Agreed. All the photos we look at online are 8-bit and rarely is 8-bit an issue. ...

I agree with your outline of why similar colours are grouped together, but I wonder if it's a function of bitrate. What I mean is that maybe the encoder would like to leave the colours separated, but due to bitrate limitations it has to sacrifice where it can do the least damage, and due to the 8-bit + log combination these places are the least-worst; doing it any other way would simply have been worse again. One of the things I find fascinating about modern cameras is the atrocious bitrates they employ. 1080p Prores HQ was ~180Mbps, but the "standard" for 4K was only 100Mbps, even on multi-thousand-dollar flagship cameras like the A7S2. 100Mbps for 4K is laughable but seemed to go unquestioned. The bitrate for Prores is constant per pixel, so the 4K bitrate is 4x the 1080p bitrate at about 700Mbps. The H26x series are designed for broadcast, not acquisition, so unfortunately they don't really keep improving in quality at bitrates that high, but they certainly do a nice job if you push them to give the highest quality they can muster.
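The bitrate arithmetic is easy to sanity-check. The 184 Mbps figure is the commonly quoted nominal rate for 1080p25 ProRes 422 HQ; real rates vary a little with frame rate and content:

```python
# ProRes rates scale with pixel count, so UHD lands around 4x the 1080p rate.
prores_hq_1080p = 184                              # Mbps, nominal 1080p25 figure
scale = (3840 * 2160) / (1920 * 1080)
print(f"ProRes HQ at UHD: ~{prores_hq_1080p * scale:.0f} Mbps")
print(f"Typical camera H.264 at UHD: 100 Mbps "
      f"({prores_hq_1080p * scale / 100:.1f}x less)")
```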
Benjamin Hilton Posted May 26, 2021

So I get the whole bit depth argument, but have some questions for you Sony users on the practical side. I come from Panasonic cameras, where there is a massive visual difference between 8-bit and 10-bit when shooting log. The 8-bit has splotchy blocks of magenta in places and just seems way less clean. Do the 8-bit vs 10-bit comparisons on the A7S III mean that:

A) The A7S III's 10-bit codec isn't a big improvement over, say, the A7 III's 8-bit codec in log?

B) The 8-bit and 10-bit codecs on the A7S III are both improved quite a bit compared to the A7 III and such?
PannySVHS Posted May 26, 2021

49 minutes ago, Benjamin Hilton said: A) The A7S III's 10-bit codec isn't a big improvement over, say, the A7 III's 8-bit codec in log? B) The 8-bit and 10-bit codecs on the A7S III are both improved quite a bit compared to the A7 III and such?

I would say the latter, looking at Andrew's samples. If the A7 III has the same 100Mbit codec as the A7S II, then it is rather underwhelming. Andrew's examples look very solid.
Thomas S Posted May 26, 2021

2 hours ago, kye said: I agree with your outline of why similar colours are grouped together, but I wonder if it's a function of bitrate. ...

Kind of depends. ProRes is, in my opinion, one of the best formats the industry has ever had. With that said, it's not a very smart format: it just throws the same amount of bits at a frame no matter what the content. The beauty of formats like H.264 is that they are smarter; they look at each frame and try to figure out how much is really needed. Yes, bitrate is important for that, but it's such an efficient format that it can get away with far fewer bits than a dumb format like ProRes can. When ProRes drops down to LT or Proxy it sacrifices quality across the whole frame, no matter what the visual impact might be.

H.264 breaks the image into blocks. MPEG-2 did the same thing, but the image was all 8x8 pixel blocks. H.264 is more sophisticated and can partition its blocks into smaller pieces, down to 4x4 pixels, so the blocking can be much finer in detailed areas than the fixed 8x8 blocks MPEG-2 would use. This means we don't see macroblocking as much and visually we get a very solid picture. The encoder then saves bits in the flat areas so they can be spent on the more detailed areas. H.264 also spreads those bits across many frames: it uses the differences within a group of frames to determine what has actually changed. So if a camera is locked down and only a small red ball rolls across the screen, then each of the following frames only has to spend bits on the macroblocks that cover that red ball. The more the frames change, the more bits each following frame needs.

The problem is that yes, sometimes some H.264 encoders do not get enough bits. Most of the time 100Mbps is enough. If you get a shot with a lot of random moving fine detail, like tree leaves blowing in heavy wind, then that 100Mbps may fall apart. ProRes HQ is dumb, but it has the advantage of looking good no matter what the situation. A locked-down blue sky will compress as well as those complex moving tree leaves; it's just that both will take the same 700Mbps no matter how simple or complex the content. H.264, on the other hand, can get by with much less. It would be a complete, epic waste to give H.264 the same 700Mbps; it would not need it at all. In the example above, that small red ball just does not need that much data to be stored 100% perfectly. I'm not sure there is a magic number for what bitrate should be used.
It really depends on the scene, but for the most part 100Mbps has been pretty solid on many 4K cameras, and 150Mbps for 10-bit on some cameras has been even more solid. That's another thing to factor in: ProRes HQ is 10-bit 4:2:2, so it's not really fair to compare it directly, in terms of bitrate, to 8-bit 4:2:0 H.264 formats. Again, it's a dumb format, and even if you send ProRes an 8-bit 4:2:0 camera source it still encodes it as if it were 10-bit 4:2:2. So the 150Mbps 10-bit 4:2:2 H.264 formats are a better comparison, and visually they hold up very well against ProRes HQ.
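A toy illustration of the locked-off red-ball example above (nothing like a real H.264 implementation, which adds motion search, residuals and entropy coding; this just counts which blocks changed between two synthetic frames):

```python
import numpy as np

def changed_blocks(prev, cur, block=8, tol=2):
    changed = 0
    h, w = cur.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            diff = np.abs(cur[y:y+block, x:x+block].astype(int) -
                          prev[y:y+block, x:x+block].astype(int))
            if diff.max() > tol:
                changed += 1
    return changed, (h // block) * (w // block)

frame0 = np.full((720, 1280), 120, dtype=np.uint8)     # locked-off grey scene
frame1 = frame0.copy()
frame1[350:370, 600:620] = 200                          # small ball enters the frame

moved, total = changed_blocks(frame0, frame1)
print(f"blocks needing fresh data: {moved} of {total} ({100 * moved / total:.2f}%)")
```

Only a fraction of a percent of the blocks need new data, which is why a mostly static shot costs an inter-frame codec so little.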
kye Posted May 26, 2021

8 hours ago, Thomas S said: Kind of depends. ProRes is, in my opinion, one of the best formats the industry has ever had. ...

Good points. I did a bunch of testing on the overall image quality of various codecs and bitrates in a separate thread some time ago. Here are the results I got for 1080p, and graphed:

[table and graph of codec quality vs bitrate from the 1080p tests]

Prores and h264 from Resolve are in the key but not in the graph because their quality is poor and falls below the graphed area. Resolve is known for not having great encoders, unfortunately. The rest of the thread is here:

Results were similar for UHD, so not worth repeating. I do wonder about the perceptual / aesthetic aspects of the errors that each generates.
My perception is that Prores always looks so much better than the h26x codecs, and I wonder if that comes down to the nature of the error. For example, if you take an image and blur it, you will generate a rather significant error from the source image, but it would probably be quite benign aesthetically; whereas if you took vertical stripes 10px wide and increased their brightness enough to produce the same overall total error across the image, it would be almost unwatchable. A similar thing happens when comparing random noise with fixed-pattern noise, so the nature of the error matters a great deal aesthetically. Of course, it could also be that Prores tends to look nicer due to its higher bitrates, its tendency to have less sharpening added prior to compression, 422 instead of 420, etc.

I also agree that 100Mbps is sufficient most of the time, but the problem is that when you buy a camera costing multiple thousands of dollars, you are likely to hit the situation where it's not sufficient at some point during your ownership of that camera. For every person who buys it and doesn't hit that situation (for example, always shooting in controlled conditions), there will be another person who buys it for outdoor adventure use and films trees in the wind, rain and falling snow, all hand-held, and basically experiences those problems on a weekly basis. It's like putting the tyres from a Corolla on a Ferrari and not being able to change them: a huge bottleneck for the output of the camera.
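The blur-versus-stripes point can be shown numerically: two degradations with a similar mean-squared error can look completely different. A small sketch with synthetic data; the stripe amplitude is chosen only to roughly match the blur's error:

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((256, 256))

# Error type 1: average each pixel with its right-hand neighbour (mild blur)
blurred = img.copy()
blurred[:, :-1] = 0.5 * (img[:, :-1] + img[:, 1:])
mse_blur = np.mean((img - blurred) ** 2)

# Error type 2: brighten one column in twenty, scaled to match the blur's MSE
stripes = img.copy()
stripes[:, ::20] += np.sqrt(mse_blur * 20)
mse_stripes = np.mean((img - stripes) ** 2)

print(f"MSE, blur   : {mse_blur:.5f}")
print(f"MSE, stripes: {mse_stripes:.5f}  (similar number, very different look)")
```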
Thomas S Posted May 26, 2021

18 minutes ago, kye said: Good points. I did a bunch of testing on the overall image quality of various codecs and bitrates in a separate thread some time ago. ...

That's why I got a Pocket 4K. I got tired of farting around with compressed formats. External recorders are one way around that, but they're also a hassle. Now I can shoot 3:1 BRAW on a cheap SSD directly on the camera, or at least 5:1 on an internal SD card. Now that some DSLRs are getting external raw support I still don't really care: it's an added cost to get an external recorder and it's a lot more hassle than what I have now.

I'm also kind of a freak for 4:4:4. I know it doesn't actually make a huge visual difference; I studied VFX in college and can pull damn good keys from 4:2:0. To me it's more a question of why 4:2:2 at all. It's a leftover from the analogue days and we don't really need it anymore. We have fast and cheap enough media now not to worry about 4:2:2. It's an old broadcast video standard and really has no place in our digital world today. H.264 and H.265 are also very capable of 4:4:4, but we are barely getting cameras to add 4:2:2 and 10-bit, let alone 4:4:4. So BRAW on the P4K represents something I have been trying to achieve ever since I started with S-VHS: something better than video standards. It's not just because it's raw. To me it's because it's RGB, 4:4:4 and color-space agnostic. No more butchered Rec.709, no more unnecessary 4:2:2. I know I could probably do the same visually with a lesser format, but to me it's about starting clean and going from there.
It represents what I always dreamed of being able to do with video. Oh yeah, and it's 12-bit, which will be even harder to make an argument for than 10-bit vs 8-bit. But hey, it's there and it doesn't hurt, so why not. Fun fact: 12-bit has 4096 samples, and DCI 4K resolution is 4096 pixels wide. That's exactly one sample per pixel for a full-width subtle gradient; in other words, the perfect bit depth for 4K. Not sure anyone could ever tell the difference versus one sample every 4 pixels like 10-bit has, but there it is. Basically, posterization should be physically impossible on the P4K shooting BRAW.
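The fun fact checks out as simple arithmetic (nominal DCI 4K width, ignoring that real scenes rarely contain a single full-width ramp):

```python
# Code values per bit depth versus the 4096 px width of a DCI 4K frame.
for bits in (8, 10, 12):
    codes = 2 ** bits
    print(f"{bits:2d}-bit: {codes:5d} code values -> one value per "
          f"{4096 / codes:g} px on a full-width DCI 4K ramp")
```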
kye Posted May 27, 2021

23 hours ago, Thomas S said: That's why I got a Pocket 4K. I got tired of farting around with compressed formats. ...

I totally agree that it's about the quality and not some arbitrary technical specification. It's a bit sad that most manufacturers make us choose between heavily sharpened, low-bitrate, low-colour-sampling h264/5 and RAW, without many options in between. Imagine if cameras let you customise a profile to independently specify the codec, bitrate, bit depth, colour subsampling, Intra/ALL-I, resolution and frame rate. Then imagine if they gave you a rocker switch to smoothly zoom between a downsampled full-sensor readout and a 1:1 crop at your selected resolution; that would be pretty sweet.

I ended up with a P2K (OG BMPCC) and M2K (BMMCC) for cine-camera tasks, mostly shooting Prores HQ, and a GH5 for run-n-gun handheld travel videos using the 200Mbps 10-bit ALL-I 1080p mode. That mode on the GH5 isn't my dream spec, but I'm prioritising ease of editing, so I'm choosing between the ALL-I modes. I did a comparison between all the modes, and when you put them onto a 4K timeline and upload to YouTube there aren't any visible differences that I could tell, perhaps because they're all downsampled from 5K in-camera. I'd get a P4K or P6K if the cameras weren't so large, considering I want to blend into the "just another tourist with a camera, nothing to worry about" category for the projects I shoot.
I also seriously looked at the Sigma FP, but couldn't get past the impossible choice between the enormous data rates of the RAW files and the poor quality of the internal compressed files, whereas the P2K and Prores HQ really nail the size and codec I'm looking for.
Toke Lahti Posted December 23, 2022

I tried to find a list of 10-bit-capable FF'ish cameras and this thread came up. I'm surprised. 10-bit color is not for (1) increasing dynamic range or (2) making underexposed footage look better. The idea is to put more colors (hues) into the same color space. The advantage comes in color grading. A simple test would be to shoot with a tungsten white balance under daylight lights; then you should see the difference between 8-bit and 10-bit. Remember that grading is not about correcting. It's about creating.
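A rough sketch of that tungsten/daylight test in numbers. The gain and ramp values are made up for illustration, and a real white balance correction adjusts more than one channel; the idea is simply to apply a heavy blue-channel gain to 8-bit and 10-bit sources and count how many distinct levels reach an 8-bit display:

```python
import numpy as np

def corrected_levels(bits, gain=2.2):
    levels = 2 ** bits - 1
    blue = np.round(np.linspace(0.05, 0.25, 1920) * levels) / levels   # dim blue ramp
    corrected = np.clip(blue * gain, 0, 1)
    return len(np.unique(np.round(corrected * 255)))

for bits in (8, 10):
    print(f"{bits:2d}-bit source, blue gain x2.2: "
          f"{corrected_levels(bits)} distinct levels on an 8-bit display")
```

The 8-bit source leaves gaps between output levels after the big gain, which is exactly where banding appears; the 10-bit source fills the stretched range.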