Guest Ebrahim Saadawi Posted February 16, 2014 We've heard some pretty reasonable statements from all over the internet claiming that 4K, when downscaled, yields 4:4:4 1080p, and even that 8-bit downscales to 10-bit. The rumor (information) out there now is that 4K 4:2:0 8-bit equals 1080p 4:4:4 10-bit. So, how true is that? I want to hear some of you experts discuss this. Are there articles, proofs, or tests you can direct me to (I haven't found any), or is this just theorizing? If it is true, what would 4K 4:4:4 footage downscaled to 1080p give us? Will shooting 4K finally solve the nasty 8-bit banding and limited color range we've been suffering from?
Appleidiot Posted February 16, 2014 I'm certainly no expert, but maybe: when you export out of, say, FCPX to ProRes, ProRes is 4:2:2 10-bit. But that would not be a true 4:2:2 10-bit file. There's no way of manufacturing a higher color space. You could downsample a recorded 4:2:2 10-bit file, which I can't imagine anyone doing, but the opposite? I don't see how.
Administrators Andrew Reid Posted February 16, 2014 The idea is to oversample from the 4K, so you build better quality pixel data from 4x the data available in a normal 1080p stream. I think it needs special software, and yes, some transcoding to ProRes 4444 10-bit. But even with a basic downscale in Premiere or FCPX you are smoothing out any 4:2:0 artefacts in the 4K, like aliasing on brightly coloured red, blue or green edges, for example. 4K is like a big pipe from the sensor to the card.
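A plain box downscale of the kind an NLE might perform can be sketched in a few lines. This is only an illustration on a synthetic frame, not the actual resampling filter Premiere or FCPX uses: each 1080p pixel is the average of four 4K samples, which is where the extra precision comes from.

```python
import numpy as np

# Synthetic 8-bit 4K UHD luma plane (3840x2160), values 0-255
rng = np.random.default_rng(0)
y4k = rng.integers(0, 256, size=(2160, 3840)).astype(np.float64)

# 2x2 box average: each 1080p pixel is built from four 4K samples,
# so the result carries quarter-step (10-bit-like) precision
y1080 = (y4k[0::2, 0::2] + y4k[0::2, 1::2] +
         y4k[1::2, 0::2] + y4k[1::2, 1::2]) / 4.0

print(y1080.shape)  # (1080, 1920)
```

The averaged values land on quarter-steps between the original 8-bit levels, which is exactly the kind of in-between information a 10-bit container can hold.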
Joachim Heden Posted February 16, 2014 Using error diffusion dithering, it is mathematically true that 8 bits oversampled by a factor of 2 (4K) can contain the same information as 10 bits at the intended resolution (2K/1080). http://en.wikipedia.org/wiki/Error_diffusion But the codec must do just that: employ error diffusion dithering, and do it correctly. Somebody needs to dig up a whitepaper on the codec. Joachim
HurtinMinorKey Posted February 16, 2014 Here's a proof by counterexample for why it is not always possible to interpolate extra bit depth from more pixels of information. Imagine 4 pixels of digital information, where the actual analog information (let's just use an arbitrary measure of brightness) is: (1.1) (1.2) (1.3) (1.4), a smooth gradient of increasing brightness. Now assume that 8-bit maps everything less than 1.5 to 1, but 10-bit maps more precisely, so that anything less than 1.25 becomes 1 and everything above 1.25 becomes 1.5. So after analog-to-digital conversion, our 8-bit info is stored as (1) (1) (1) (1) and our 10-bit info is stored as (1) (1) (1.5) (1.5). Now let's say we only keep half the pixels from the 10-bit version: (1) (1.5). In this simple example, you can see how no amount of downsampling will resurrect the contrast information that was lost in the 8-bit conversion. Does this make sense?
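That four-pixel counterexample runs directly as code, with the same numbers and thresholds and nothing added. Without dithering, the coarse quantizer flattens the gradient before the downscale ever sees it:

```python
analog = [1.1, 1.2, 1.3, 1.4]  # smooth analog gradient

# Quantizers from the example: the coarse ("8-bit") one maps
# everything below 1.5 to 1; the finer ("10-bit") one splits at 1.25
q8  = [1.0 if v < 1.5 else 1.5 for v in analog]   # [1.0, 1.0, 1.0, 1.0]
q10 = [1.0 if v < 1.25 else 1.5 for v in analog]  # [1.0, 1.0, 1.5, 1.5]

# Downscale by averaging pairs: the gradient survives only via 10-bit
down8  = [(q8[0] + q8[1]) / 2,   (q8[2] + q8[3]) / 2]    # [1.0, 1.0]
down10 = [(q10[0] + q10[1]) / 2, (q10[2] + q10[3]) / 2]  # [1.0, 1.5]
```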
Administrators Andrew Reid Posted February 16, 2014 So bit depth stays around 8-bit but sampling is indeed 4:4:4 after conversion?
HurtinMinorKey Posted February 16, 2014 So bit depth stays around 8-bit but sampling is indeed 4:4:4 after conversion? 3rd edit: I think 4:2:0 carries only a quarter of the chroma samples of 4:4:4. So you should be able to reconstruct 1080p 4:4:4 from 4K 4:2:0, because 4K has 4 times the resolution of 1080p. I'm much more certain about the bit depth than I am about the chroma, though.
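The resolution arithmetic behind this chroma claim can be checked in a few lines (assuming UHD frame sizes and the standard 4:2:0 layout, where chroma is stored at half resolution in each direction):

```python
# 4K UHD 4:2:0: chroma planes are half resolution in each direction
luma_4k   = (3840, 2160)
chroma_4k = (3840 // 2, 2160 // 2)   # (1920, 1080)

# A 1080p frame has exactly that many pixels, so the untouched 4K
# chroma plane already supplies one chroma sample per 1080p pixel,
# i.e. effectively 4:4:4 at 1080p
luma_1080 = (1920, 1080)
print(chroma_4k == luma_1080)  # True
```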
JohnBarlow Posted February 16, 2014 This is how to do it....
1. In the NLE, place the clip on the timeline and overlay 3 copies.
2. Normalise the result to 10 bit to prevent overexposure.
3. Shift copy1 vertically by 1 pixel.
4. Shift copy2 vertically by 1 pixel and horizontally by 1 pixel.
5. Shift copy3 horizontally by 1 pixel.
Output to 1080p 4:4:4 10-bit (the sum of four 8-bit samples spans 0-1020, which just fits in 10 bits). Ta da
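A rough NumPy sketch of this overlay-and-shift recipe, under assumptions: the shifts are implemented with `np.roll`, normalisation is left as a plain sum, and the frame is synthetic. Summing a pixel with its three neighbours and decimating is equivalent to a 2x2 box sum, giving values that span 0-1020, i.e. 10 bits of range built from four 8-bit samples.

```python
import numpy as np

rng = np.random.default_rng(1)
clip = rng.integers(0, 256, size=(2160, 3840)).astype(np.int64)  # 8-bit 4K

# Overlay the clip with three shifted copies (steps 1-5 above):
# each pixel is summed with its lower, lower-right and right neighbours
total = (clip +
         np.roll(clip, -1, axis=0) +             # shifted vertically
         np.roll(clip, (-1, -1), axis=(0, 1)) +  # shifted both ways
         np.roll(clip, -1, axis=1))              # shifted horizontally

# Every second sample of the sum is one 1080p pixel; values span 0-1020
out = total[0::2, 0::2]
print(out.shape)  # (1080, 1920)
```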
HurtinMinorKey Posted February 16, 2014 This is how to do it.... 1. In the NLE, place the clip on the timeline and overlay 3 copies. 2. Normalise the result to 10 bit to prevent overexposure. 3. Shift copy1 vertically by 1 pixel. 4. Shift copy2 vertically by 1 pixel and horizontally by 1 pixel. 5. Shift copy3 horizontally by 1 pixel. Output to 1080p 4:4:4 10-bit. Ta da If I understand you correctly, you're showing why you can resurrect the chroma, but not the bit depth. Here is my "proof by color" of why I think you should be able to get 4:4:4 1080p from 4:2:0 4K. The bottom left is 1080p 4:4:4, and the top right is 4K 4:2:0.
Joachim Heden Posted February 16, 2014 Here's a proof by counterexample for why it is not always possible to interpolate extra bit depth from more pixels of information. Imagine 4 pixels whose actual analog brightness is (1.1) (1.2) (1.3) (1.4), a smooth gradient. Assume 8-bit maps everything less than 1.5 to 1, while 10-bit maps anything less than 1.25 to 1 and everything above 1.25 to 1.5. So the 8-bit info is stored as (1) (1) (1) (1) and the 10-bit info as (1) (1) (1.5) (1.5). Keeping half the pixels from the 10-bit version gives (1) (1.5). No amount of downsampling will resurrect the contrast information lost in the 8-bit conversion. Does this make sense? Yes HurtinMinorKey, but if error diffusion is employed in the 8-bit codec, the accumulated error (0.1 + 0.2 + 0.3 + 0.4 = 1.0) will be applied to the last 8-bit encoded value, resulting in an encoded sequence of (1) (1) (1) (2), which downsamples (averaging each pair) to (1) (1.5), the same result you got from the 10-bit example. Of course, I'm assuming that the downsampled data is stored at 10 bits, but that's the whole point... J
Administrators Andrew Reid Posted February 16, 2014 Seems possible guys http://www.eoshd.com/content/12140/discovery-4k-8bit-420-panasonic-gh4-converts-1080p-10bit-444
HurtinMinorKey Posted February 16, 2014 Yes HurtinMinorKey, but if error diffusion is employed in the 8-bit codec, the accumulated error (0.1 + 0.2 + 0.3 + 0.4 = 1.0) will be applied to the last 8-bit encoded value, resulting in an encoded sequence of (1) (1) (1) (2), which downsamples (averaging each pair) to (1) (1.5), the same result you got from the 10-bit example. Of course, I'm assuming that the downsampled data is stored at 10 bits, but that's the whole point... J Error diffusion? How does it know where to place the error? Is this standard? But I don't think it matters: 10-bit can capture more DR than 8-bit, so error diffusion or not, there have to be some 10-bit values of the brightest white or the darkest dark that cannot be represented in 8-bit (while maintaining the contrast in the rest of the picture), regardless of error diffusion.
Joachim Heden Posted February 16, 2014 Error diffusion? How does it know where to place the error? Is this standard? But I don't think it matters: 10-bit can capture more DR than 8-bit, so there have to be some 10-bit values of the brightest white or the darkest dark that cannot be represented in 8-bit. I simplified a little; in your example the accumulated error would probably drop in on the 3rd value, making the sequence (1) (1) (2) (1), but that would still give you the (1) (1.5) end result. If you want more, start here: http://en.wikipedia....Error_diffusion Again, this all hinges on the codec employing error diffusion / dithering. Since this discussion seems to be centered around the GH4, somebody needs to dig up a white paper on the codec used in that camera. Joachim
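The (1) (1) (2) (1) sequence can be reproduced with a few lines of error-diffusion arithmetic. This is a 1-D simplification of what a real dithering codec would do, using the same four analog values as before:

```python
analog = [1.1, 1.2, 1.3, 1.4]

# 1-D error diffusion: quantize to integer levels, carrying the
# leftover quantization error forward into the next sample
err, coded = 0.0, []
for v in analog:
    q = float(round(v + err))
    err = (v + err) - q
    coded.append(q)
# coded == [1.0, 1.0, 2.0, 1.0] -- the error lands on the 3rd value

# Downscaling by averaging pairs recovers the half-step gradient
down = [(coded[0] + coded[1]) / 2, (coded[2] + coded[3]) / 2]
print(down)  # [1.0, 1.5]
```

The dithered 8-bit stream averages down to exactly the (1) (1.5) result of the 10-bit quantizer, which is the whole argument in miniature.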
Administrators Andrew Reid Posted February 16, 2014 Another thread merging moment. Please continue here as before, very welcome discussion.