Zach Posted January 31, 2013

I'm a little bit lost when it comes to the technical details of certain camera parameters. I understand 4:2:0, 4:2:2, etc., but what is the difference between 8-bit, 10-bit, and 12-bit color? I hear a lot of people complain about it, but what does it mean practically? Does it give me more room in post? Just more accurate colors? Does anyone have any examples of this? Also, what exactly do more Mbps get you? Do they just offer more detail? Thanks folks. Feel free to insert your own stupid questions on this thread to have them answered :)
richg101 Posted January 31, 2013

All of these parameters are very important to two types of people:

1. The ones who shoot big-budget films, cinematic automotive commercials and other high-budget stuff.

2. The people who need a reason not to go out and shoot anything, blaming the limitations of 8-bit colour etc. for their lack of ideas or personal drive.

I used to hanker after 4:4:4, 10-bit, uncompressed, blah blah blah. Now it bores me. IMO you'll probably be happier not knowing about it.
Zach Posted February 1, 2013

I found this quite amusing, and you're probably right to an extent. However, I shoot daily and am not in a place where I'm wondering which camera to buy over this :) I was asking more for educational reasons. Today I told my friend, "My birthday is in a month, I want a C300." He was like, "You have a 5D, which has a bigger sensor and a higher bitrate, isn't it better?" After my initial laughter I told him there's a reason it costs five times as much, and that there is more to the image than the bitrate. I didn't know exactly why that was, though. So please, someone inform me :)
HurtinMinorKey Posted February 1, 2013

To make a long story short: the C300 uses a proprietary sensor readout that makes it relatively immune to certain visual artifacts like moire and rolling shutter.

As far as bit depth is concerned, a higher bit depth gives you a lot more leeway in post. Cameras like the C300 compress information so that you get a great image straight out of camera. However, with 8-bit you lose the sensor information that describes what the scene would look like under a different exposure or white balance (which the camera treats as extraneous). For example, if your out-of-camera image records shadow detail as 100% black, you will only get greys when you brighten it. At a higher bit depth, those shadows will contain detail that can be brought out by brightening. Does this make sense?
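A toy sketch of what "crushed to black" means, with made-up pixel values rather than any real camera's encoding:

```python
# Toy numbers, not any camera's real encoding: five shadow pixels
# with subtle detail, and the same pixels after an aggressive 8-bit
# encode crushed them all to pure black.
scene   = [2, 5, 9, 14, 20]   # low-light detail the sensor actually saw
crushed = [0, 0, 0, 0, 0]     # what the 8-bit file stored

# Brightening in post can't invent the lost detail back:
lifted = [v + 40 for v in crushed]
print(lifted)  # [40, 40, 40, 40, 40] -- a flat grey, exactly as described
```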
Zach Posted February 1, 2013

Yes, thank you! So it's basically similar to the difference between RAW and JPEG images? Simple enough.
GravitateMediaGroup Posted February 1, 2013

Best thing I've seen in a long time. For Upstream Color, Act of Valor, Red Tails, episodes of CSI and tons of others, it didn't seem to matter. And nowadays, with external recorders, it almost doesn't matter what bit depth the camera records internally; what matters more is a clean HDMI out.

The only limits a camera has are the excuses people give it. I've seen pretty impressive videos shot on an iPhone 4. There are things like rolling shutter and moire that can make a video less appealing, but you have to work around them instead of making excuses of "can't".

If you plan on making a movie with a lot of REAL mixed with CGI, it may matter a "bit"... no pun intended. I could go on for another hour about this, but you get the point.

And Zach, your friend is wrong. Your 5D doesn't have a higher bit rate, unless of course you have Magic Lantern installed.
jgharding Posted February 1, 2013

Simply put: it's the number of different colour shades the camera can record. It's called bit *depth* because, unlike a bit rate, it isn't measured over time.

Essentially (as technical as we need to be), 8-bit allows 256 different levels for each channel, while 10-bit allows 1024. Each channel is R, G and B for 4:4:4 or uncompressed, or Y (luma), Cb (blue-difference) and Cr (red-difference) if it's sub-sampled (like 4:2:2 or 4:2:0). The result is that 10-bit codecs record a greater number of subtle shades, which is great for skin, the sky, and other subjects with subtle gradients.

I say codecs rather than cameras, because most of these sensors convert analogue light to digital data at higher depths, like 14-bit, which has 16384 possible levels per channel. The rather crippled codecs then quantise each sample to the nearest of their 256 shades, so you can see how much data you're losing. That's why people are always complaining: a lot of companies make a conscious decision to cripple their hardware for commercial reasons. There are genuine considerations, like processing, heat dissipation, and buffer speed and size, but the main concern of certain companies is profit margin. Which is why those companies are so huge.

Now, in practice the division of these bits across the dynamic range is very complex, but it doesn't matter here. More is nice. But you can make a movie that gets Oscar-nominated, or a promo that makes you a fortune, with 8 bits, so don't worry too much, even though it's nice to know and nice to have more.
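To put rough numbers on it, here's a little Python sketch. It's illustrative maths only: real cameras apply gamma curves and other mapping before quantising, so the exact counts will differ.

```python
# Levels per channel at each bit depth: 2 ** bits.
for bits in (8, 10, 14):
    print(f"{bits}-bit: {2 ** bits} levels per channel")

def quantise(sample_14bit: int, target_bits: int) -> int:
    """Map a 14-bit sensor sample (0..16383) to the nearest level of a
    smaller bit depth, as a codec does when it throws shades away."""
    step = 16384 // (2 ** target_bits)
    return sample_14bit // step

# A subtle gradient spanning just 1% of the sensor's range, e.g. a sky:
gradient = range(8000, 8164)
print(len({quantise(v, 8) for v in gradient}), "distinct 8-bit shades")    # 3
print(len({quantise(v, 10) for v in gradient}), "distinct 10-bit shades")  # 11
```

Three shades across a whole sky is where banding comes from; eleven is much kinder to a gradient.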
jgharding Posted February 1, 2013

As an addendum, you can get a lot from an 8-bit source in post. Most people complaining about footage "falling apart" are in fact complaining about the 4:2:0 spatial compression, which destroys red-channel resolution, not about the limited 8-bit colour palette or the temporal compression of the bit rate. For this reason I tend to avoid capturing 4:2:0 footage with a warm balance. I always head towards green and blue, where there's more resolution, then bring out the red in post.

When you work in a programme like After Effects, work in a 32-bit space and you eliminate the compression's effects as much as is possible: your footage is treated as individual 32-bit RGB frames, instead of 4:2:0 sub-sampled, 8-bit, heavily compressed GOPs. You can't gain back information you lost in capture, because compression is destructive unless it's lossless, but you can get the best from your footage this way, since you aren't limited to 8-bit colour values and there's no need to transcode.

If you don't clip your highlights, and keep the lows away from the zero point where the codec compression is worst (Cinestyle does this, and you can do it on Sony cameras too), you have a greater chance of achieving a film look. Not clipping, and lifting the blacks, reduces the range of bits used for colour shades, BUT I find it far easier to bring in more shades in post than to recover clipped highlights and shadows, which is of course impossible.

Of course, if you know exactly the look you want and it can be achieved in camera, doing it that way will actually yield a nicer result when using heavily compressed footage. Decisiveness and consistency on set pay off...
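A quick sketch of the 32-bit point, using a toy gain operation rather than After Effects' actual processing: darken a frame and brighten it back, once with 8-bit integer rounding at every step and once in float.

```python
# Darken by 70%, then restore. The 8-bit pipeline quantises at the
# intermediate step and loses shades for good; the float pipeline
# carries exact values between operations.
def roundtrip_8bit(v: int) -> int:
    darkened = round(v * 0.3)               # quantised back to 8-bit here
    return min(255, round(darkened / 0.3))

def roundtrip_float(v: int) -> float:
    return (v * 0.3) / 0.3                  # no intermediate rounding

print(len({roundtrip_8bit(v) for v in range(256)}), "shades survive in 8-bit")
print(len({round(roundtrip_float(v)) for v in range(256)}), "shades survive in float")
```

The integer pipeline comes back with roughly 77 of the original 256 shades; the float pipeline keeps all of them.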
Bruno Posted February 1, 2013

Something else that matters, beyond the bitrate itself, is the codec used and the settings it's running at. Some codecs can be more efficient at 30 Mbps than similar ones at 70 Mbps. Actually, even the same codec can be more efficient at 30 Mbps than at twice that rate, depending on its settings. According to the Magic Lantern guys, Canon's H.264 compression on DSLRs is not very efficient at all, and that might be the reason the Sony FS100's compression is so much cleaner at half the bit rate!
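An easy way to see this for yourself, assuming you have ffmpeg with libx264 on your PATH and some test clip called in.mov (the file names here are just examples):

```python
# Same codec (x264), same 30 Mbps target; only the amount of work the
# encoder is allowed to do per frame changes between the two presets.
import subprocess

for preset in ("ultrafast", "veryslow"):
    subprocess.run(
        ["ffmpeg", "-y", "-i", "in.mov",
         "-c:v", "libx264", "-preset", preset,
         "-b:v", "30M", f"out_{preset}.mp4"],
        check=True,
    )
# out_veryslow.mp4 will generally look visibly cleaner than
# out_ultrafast.mp4, despite the identical bitrate.
```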
jgharding Posted February 1, 2013

Yeah, that's true! Also, each sensor and its supporting ADC and circuitry has very different noise characteristics. So although the Canon codec implementation isn't hugely efficient, the noise pattern (especially after using Magic Lantern hacked bit rates) is an order of magnitude better for denoising than Sony AVCHD, I've found when mixing footage in timelines under the same lighting. While the Canon EOS footage denoised cleanly, the Sony was horribly damaged in the lows, with very bright blue noise everywhere. The noise pattern and heavy compression of Sony AVCHD (I used the RX100 and FS700) look clean as-is, but at the higher ISOs where you have to denoise, it's a mess...

Also, there's the profile level to consider. H.264, for instance, has the profile standards Baseline, Main and High (among others), each requiring greater processing power for playback and encoding than the last, but also providing far higher quality at the same bitrate and file size. So although 120 Mbps All-I from a hacked 600D may sound awesome, it is still Baseline H.264 at 8-bit 4:2:0, and isn't really directly comparable to Avid DNxHD 120, for example.

Furthermore, newer and more efficient codecs are quite amazing in how they use bit rate. The H.265 standard (approved just recently) shows around a 70% decrease in bit rate compared to MPEG-2 at the same quality, for example. This means that with powerful in-camera processing, 50 Mbps H.265 may not be so bad.
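If you want to check what profile your own camera actually writes, ffprobe will tell you (again assuming ffmpeg/ffprobe is installed; clip.mp4 is a placeholder name):

```python
# Print the codec, profile and bit rate of the first video stream.
import subprocess

out = subprocess.run(
    ["ffprobe", "-v", "error", "-select_streams", "v:0",
     "-show_entries", "stream=codec_name,profile,bit_rate",
     "-of", "default=noprint_wrappers=1", "clip.mp4"],
    capture_output=True, text=True, check=True,
)
print(out.stdout)
# e.g. codec_name=h264 / profile=Baseline / bit_rate=45000000
```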
Leang Posted February 1, 2013

Speaking like a champ! Get ready for XAVC.
galenb Posted February 1, 2013

In simple terms: the more bits you have, the smoother your gradients will be. :-)