TheRenaissanceMan Posted August 3, 2015

"10-bit is good when working with log; shoot JPEGs with a log profile and you can see the problems. Personally I want good compressed raw, because it's actually a more efficient way of compressing files (14 bits per pixel vs 30 bits per pixel for 10-bit 4:4:4)."

See, now that's a valid argument. I was really excited when BM gave the BMCC compressed CinemaDNG in firmware, because the increase in quality is huge, the file size isn't that different, and the workflow is already built into DaVinci Resolve. It makes me wonder why Cineform isn't more widespread.
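The bits-per-pixel arithmetic in the quoted argument can be checked in a few lines. This is just a sketch of the comparison being made; the 4096x2160 frame size is an illustrative example, not a spec from the post:

```python
# Raw stores one 14-bit Bayer sample per photosite, while 10-bit 4:4:4
# stores three full 10-bit channels per pixel, as the quoted post notes.
raw_bits = 14            # bits per pixel, Bayer raw
yuv444_10_bits = 3 * 10  # Y + Cb + Cr at 10 bits each

# Uncompressed frame sizes for a 4096x2160 frame, in megabytes:
pixels = 4096 * 2160
raw_mb = pixels * raw_bits / 8 / 1e6
yuv_mb = pixels * yuv444_10_bits / 8 / 1e6
print(raw_bits, yuv444_10_bits)            # 14 vs 30 bits per pixel
print(round(raw_mb, 1), round(yuv_mb, 1))  # the raw frame is less than half the size
```

Before any debayering or compression cleverness, raw simply starts from fewer stored bits per pixel, which is the size argument being made.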
Don Kotlos Posted August 3, 2015

8-bit is mostly a delivery bit depth. It is true that you can replicate most colors, but try to mess with the gamma and push the image around and you start seeing the limitations. One of the reasons Canon's C-log performs much better is that it does not push the image as much as S-log: it uses a normal gamma for most of the range and compresses the highlights more. That was a wise decision from Canon, because they understand what an 8-bit image can and cannot do. There's more information here: http://www.xdcam-user.com/2011/12/canon-c-log-on-the-c300-compared-to-s-log/
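The point above about how a log curve spends its code values can be sketched numerically. The log curve below is a generic placeholder, not Canon's actual C-log or Sony's S-log formula; it only illustrates that a log transfer assigns more 8-bit codes to the shadows than a conventional power gamma does:

```python
import numpy as np

# Count how many 8-bit code values each transfer curve assigns to the
# darkest quarter of the linear range (roughly the bottom two stops).
lin = np.linspace(1e-4, 1.0, 100_000)

gamma = lin ** (1 / 2.2)                       # Rec.709-style power curve
log = np.log2(lin * 1000 + 1) / np.log2(1001)  # generic log curve (placeholder)

dark = lin < 0.25
codes_gamma = np.unique(np.round(gamma[dark] * 255)).size
codes_log = np.unique(np.round(log[dark] * 255)).size
print(codes_gamma, codes_log)  # the log curve spends many more codes on shadows
```

Because the 8-bit budget is fixed, those extra shadow codes come out of the midtones and highlights, which is exactly where an aggressive log profile in 8 bits starts to band. Keeping a near-normal gamma for most of the range, as C-log does, avoids stretching the 8-bit budget so thin.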
Guest Ebrahim Saadawi Posted August 4, 2015

What Andrew and I are saying is that 8-bit compression vs 10-bit compression, everything else being equal, is not a visible difference for 90% of viewers and even editors/colourists. We are not saying there is NO difference, just that it's not enough to justify the size penalty that comes with it. 10-bit doesn't make the image look better; it doesn't transform it into a visibly higher quality image. Higher resolution, more DR, better low-light performance, highlight roll-off, less rolling shutter: all are far more important than the jump to 10-bit, and they are visible to everybody. 8-bit, when implemented well, is absolutely gorgeous: C300, C100, 1D C, GH4, NX1, A7S images. For example, if you test it rigorously, the jump from 4:2:0 to 4:2:2 chroma subsampling makes a more visible difference. Bottom line: bit depth is not the holy ticket to higher IQ; compression is complicated, and there's more to it than bit depth. What makes images look better in terms of compression starts with downscaling the image, then chroma subsampling, bit rate, gamma curve, codec efficiency, and the actual algorithm of the codec: whether it's All-I or Long GOP, whether it compresses motion more or colour more, how much it prioritizes certain colours or frequencies, etc. A great sensor with a great downscale and a great 8-bit codec (C300) gives way better images than a poor sensor with a 12-bit codec (BM4K). An identical sensor with an 8-bit codec and a 10-bit codec shows practically identical images for all needs, just with slightly less banding in your skies, yet with a bigger file. I would hate to see companies start pushing 10-bit to consumers as the holy grail; I want them to keep pushing better sensors, better noise, better DR, better downscaling, better colour science, better gamma curves, better ergonomics, audio.

Numbers and theory mean absolutely nothing. Yes, reading on the internet that 10-bit has four times the colour values of 8-bit, you'd think it must be a four times better image in colours. It's not; it just reduces banding slightly, that's it. And a side note: the overestimation of 10-bit comes from the theoretical numbers, but also from certain cameras that shoot 10-bit in a vastly superior codec to their 8-bit mode, so people assumed it was bit depth that made the difference. For example, shooting 8-bit AVCHD on the FS700 vs 10-bit out, or 8-bit internal on the F3 vs 10-bit out, and so on. The fact is, the improvement from going to 10-bit is extremely minimal and only visible in a sky shot when grading it; the visible increase in video quality comes from jumping from poor AVCHD with heavy NR and Long GOP compression to pristine ProRes, All-I, 4:2:2 (4:4:4 on the F3, plus the added S-Log gamma). Remember this too: most monitors show only 8-bit, so anything extra is solely for grading purposes. JPEG is 8-bit 4:2:2. I am still looking for a camera that shoots 550D-JPEG-quality video under $20K; the closest is the 1D C, but at 8MP. JPEG is a wonderfully robust 8-bit codec when implemented as Canon/Nikon do on their DSLRs. Long story short: I believe 10-bit is overestimated in the video world, and I would hate it if companies started marketing it to consumers instead of other features.
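The "slightly less banding in your skies" claim is easy to quantify with a toy gradient. This is a sketch with synthetic values, not real camera data:

```python
import numpy as np

# A subtle sky-like gradient spanning 10% of the signal range, quantized
# to 8 and 10 bits; count the distinct levels available across it.
ramp = np.linspace(0.40, 0.50, 1920)   # one row of a 1920-wide frame

steps_8 = np.unique(np.round(ramp * 255)).size
steps_10 = np.unique(np.round(ramp * 1023)).size
print(steps_8, steps_10)  # 10-bit offers roughly 4x the levels across the ramp
```

Across a full-range ramp both depths look smooth; it is narrow, subtle gradients like skies and skin where the coarser 8-bit steps can surface, and even then mostly after grading, which is consistent with the post's argument.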
TheRenaissanceMan Posted August 4, 2015

[quoting Ebrahim Saadawi's post above in full]

I disagree with almost all of that, but you're also getting off topic. This is about which manufacturer we think will implement internal 10-bit, not whether we think 10-bit footage is a meaningful gain over 8-bit. If you want to discuss that, I suggest making a new topic.
vaga Posted August 4, 2015

[quoting an earlier post by Andrew] "Rumours were true, they did reach out, but I'm not sure much came of it. They also reached out to me over a book, but nothing came of that. I think they are just scouting / testing the waters really. They did a great job on the NX1; let's hope they can now do a great job on a raw-shooting full-frame camera."

How come you haven't written a book for it, Andrew? I would be quite ready to buy it!
Jimmy Posted August 4, 2015

I guess when people actually start using 10-bit every day, they might get it. To suggest a colourist wouldn't know the difference is scarily off the mark.
hmcindie Posted August 4, 2015

"I'd happily shoot 14-bit if the Canon ML cams had a slow-motion option. 5D3 raw is still my fave image of all the cameras I have owned, including the FS700 + Odyssey."

They do. I've shot 60p many times on the 5D Mark III in raw. The image has to be desqueezed and the aspect ratio is a bit funky (and the resolution has to be lowered), but it still looks great, better than the 60p H.264. On another note, most people think 8-bit is really bad because of compression, not 8 bits itself. Remember that H.264 is basically 6-7 bits of precision.
TheRenaissanceMan Posted August 4, 2015

[quoting hmcindie's post above] Can you post any article or white paper about that last bit? I had no idea H.264 affected tonal precision that much.
Guest Ebrahim Saadawi Posted August 4, 2015

"you're also getting off topic." No I am not.
Jimmy Posted August 4, 2015

8-bit can be implemented very well, as Canon has shown. But for more challenging shots, you have to get it spot on in camera. Banding aside, one of the biggest flaws of 8-bit is saturation: if you want to push it hard in post, 8-bit is often not enough and starts to show colour steps. This is vital for skin tones, gradients, etc. As has been noted though, the only people using words like "holy grail" are those saying 10-bit is not that great. The rest of us just see it as another piece of the puzzle, and one we would like to see hit the mainstream.
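The saturation point above can be sketched the same way: a near-neutral chroma ramp stored in 8 bits, then a saturation push applied in post. The specific values and the 4x push factor are illustrative choices, not from the thread:

```python
import numpy as np

# A subtle chroma gradient (e.g. across a face) quantized to 8 bits,
# then saturation boosted 4x in post. The push cannot invent the
# in-between values lost at capture, so the step size grows 4x too.
cb = np.linspace(0.48, 0.52, 1000)   # near-neutral chroma ramp
cb8 = np.round(cb * 255) / 255       # as stored in an 8-bit file

pushed_8 = (cb8 - 0.5) * 4 + 0.5     # graded from the 8-bit file
pushed_hi = (cb - 0.5) * 4 + 0.5     # graded from full precision

steps_8 = np.unique(np.round(pushed_8 * 255)).size
steps_hi = np.unique(np.round(pushed_hi * 255)).size
print(steps_8, steps_hi)  # far fewer distinct output levels from the 8-bit source
```

The harder the push, the wider the gaps between surviving code values, which is why heavily saturated grades expose colour steps that a gentle grade never reveals.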
hmcindie Posted August 4, 2015

"Can you post any article or white paper about that last bit? I had no idea H.264 affected tonal precision that much."

Just take a nice shot of the sky (uncompressed, but 8-bit) and encode it with H.264 at medium bitrate settings. You can see how it starts banding, and a lot. Higher bitrates help, but H.264 still destroys a fair amount of tonal precision. One reason 10-bit is better with H.264 is that the codec degrades it less: http://x264.nl/x264/10bit_02-ateme-why_does_10bit_save_bandwidth.pdf "For example, a 1 error bit on a 8-bit signal provide the same relative error than 3 bits of error in a 10-bit signal: 7 bits only are actually meaningful in both cases." That's why uncompressed 8-bit is still relatively good (or very low compression). High-bitrate MPEG-2 encoders can be even better; that's why the C300 looks so good even though it's 8-bit.
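The quoted passage boils down to simple arithmetic about relative error. This is a back-of-envelope reading of the point, not the paper's full analysis:

```python
# One least-significant-bit of error, as a fraction of full scale, is four
# times larger at 8 bits than at 10 bits, so the same absolute amount of
# encoder noise wipes out proportionally more of an 8-bit signal.
lsb_8 = 1 / 2**8
lsb_10 = 1 / 2**10
print(lsb_8 / lsb_10)  # 4.0
```

In other words, a 10-bit pipeline gives the lossy encoder two extra bits of headroom to be noisy in before the damage reaches the tonal levels an 8-bit signal actually depends on.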
Administrators Andrew Reid Posted August 4, 2015

Still too many words and not enough pictures on this thread! Take some 10-bit ProRes and convert it to a high quality 8-bit codec at the same bitrate (around 250Mbit/s would be good). Then post screenshots of the image quality (hint: they will look identical) and also the grading issues you find (hint: you will never push it that far to breaking point in a real-world grade). LOG is more important to image quality & dynamic range than 10-bit vs 8-bit.
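The "they will look identical" hint can be sanity-checked numerically, at least for the bit-depth half of it. This simulates only the quantization step, not the actual codecs, and the synthetic noisy ramp is a stand-in for real footage:

```python
import numpy as np

# Quantize the same scene to 8 and 10 bits and measure how far apart
# the two results are. PSNR above roughly 50 dB is generally taken to
# be invisible on normal playback.
rng = np.random.default_rng(0)
n = 1 << 20
scene = np.clip(np.linspace(0, 1, n) + rng.normal(0, 0.01, n), 0, 1)

q8 = np.round(scene * 255) / 255
q10 = np.round(scene * 1023) / 1023

mse = np.mean((q8 - q10) ** 2)
psnr = 10 * np.log10(1.0 / mse)
print(round(psnr, 1))  # comfortably above 50 dB before any grading
```

The gap only becomes visible once a grade stretches those quantization steps apart, which is the other half of the challenge posed above.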
Jimmy Posted August 4, 2015

I'll just use my 15+ years in the industry to feel comfortable with my point of view.
sunyata Posted August 4, 2015

I get the feeling you want to see some sexy slo-mo footage as a test, but that really wouldn't tell you much empirically about bit depth, chroma sub-sampling and compression as they affect grading. This test was done with a linear radial gradient that simply changes color; the color part tests certain colors that don't do so well with compression. It also tests resizing 4K to HD converted to float, to see what that gets you, in addition to various color spaces and codecs. It goes fast, so you need to pause frequently and watch fullscreen (the preview window lets you see more closely what the artifacts look like).
Gregormannschaft Posted August 4, 2015

[quoting sunyata's post above] This is amazing, great tune. So at 10-bit there's still banding, but it's not quite as visible; at 16-bit there's still banding (?) but it's almost invisible.
sunyata Posted August 4, 2015

Yes. The thing with banding is that it does help to have noise or grain to promote dithering, but this was done with 32-bit float radial gradients, so oddly the 8-bit RGB uncompressed holds up nicely because of its inherent dithering.
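The dithering observation can be sketched too: add a little noise before quantizing and the banding turns into grain that averages out spatially. The noise level and box-filter size here are arbitrary illustrative choices:

```python
import numpy as np

# An 8-bit quantized ramp with and without pre-quantization noise.
# A small box filter stands in for the eye's spatial averaging.
rng = np.random.default_rng(1)
ramp = np.linspace(0.40, 0.45, 10_000)

plain = np.round(ramp * 255) / 255
dithered = np.round((ramp + rng.normal(0, 1 / 512, ramp.size)) * 255) / 255

kernel = np.ones(64) / 64
sm_plain = np.convolve(plain, kernel, mode="same")
sm_dith = np.convolve(dithered, kernel, mode="same")

# Compare each smoothed signal to the true ramp, away from the edges:
e_plain = np.abs(sm_plain - ramp)[64:-64].mean()
e_dith = np.abs(sm_dith - ramp)[64:-64].mean()
print(e_dith < e_plain)  # dithered output tracks the true ramp more closely
```

This is why grain-free synthetic gradients band so much worse than real camera footage: sensor noise acts as free dither before the codec quantizes.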
Jimmy Posted August 4, 2015

Something that video also challenges is the idea that if you are on an 8-bit monitor, you cannot see the benefits of shooting at a higher bit depth.
sudopera Posted August 4, 2015

[embedded video: a bit depth and codec grading comparison test]
Don Kotlos Posted August 4, 2015

[quoting Andrew Reid's post above] The test should be grading a RAW still of a face against an 8-bit log frame. If someone has an A7S and is willing to spend a few minutes getting and posting the raw data here, that would be great, so we can all try to match the two. Then, after we all agree that it is impossible to match them, we can move on to 10-bit. Sudopera's test is great (thanks!), but a V-Log profile would make the differences even larger, especially when shooting fine tonalities such as skin. So I hope he repeats the test when V-Log gets released. I am attaching some crops from the video which I don't even have to identify; the differences are obvious.
sudopera Posted August 4, 2015

[quoting Don Kotlos's post above] Sorry, that is not my test; it is just a video I saw a while ago. If the test was done properly, and I don't have any reason to believe otherwise, I think the difference is very clear. If I'm not mistaken, it was already posted in some topic here before, I believe around when the Shogun came out.