User Posted March 19, 2018

I'm on a long haul documentary with material shot on 4 different cameras. When, for example, the 5DM2 material collides with the C100MkII, one can often see the difference between the two under certain conditions. I'm wondering, with the current crop of reasonably affordable 4k cameras today, if the difference between cameras will be less noticeable in 10 years? Have we more or less arrived at a place where digital image quality will hold up the way motion picture film did? 4k seems adequate enough, and we have that. Dynamic range will probably see incremental bumps, but how much further can that go? Any thoughts?
HockeyFan12 Posted March 19, 2018

I've yet to see 4k that, per pixel, is as sharp as 1080p downscaled from a 4k source. But stills I've seen from TOTL dSLRs certainly show that potential... the 8K Reds might provide meaningfully more resolution; I haven't seen raw footage from one. Early Red footage was SUPER soft. Now we're seeing 4k that's meaningfully sharper to the eye than 2k/1080p... on what display is the question. Even on IMAX, 2k Alexa looks good to me. And the C100MkII is just as sharp (worse in other respects, of course). Both are way sharper than 35mm film (as projected, but even look at Blu-ray stills from film and see how surprisingly soft they are). But on a 5k iMac, I notice a bigger difference since I sit so close to it. Even there, the difference isn't huge between 4k and 5k, though. It is with UI elements, not so much with video. And the difference between 4k cameras will be even less significant.

For me, 2k is enough for video. For most, I think 4k will be. For now... I think the only substantive shift (beyond the already significant aesthetic differences between cameras and lenses) will be when HDR takes off. And HDR imo is going to evolve in a different direction entirely, even more naturalistic, maybe HFR, etc., maybe even integrating with VR. The C300 Mk II and Alexa (and in practice the F65 and new Reds and Venice, probably) all meet that standard of 15+ stops, 10-bit, Rec. 2020, etc. But HDR is changing fast, so I wouldn't even sweat it unless HDR delivery is important to you.

Of course, I don't know if by "reasonably affordable" you mean an A7S or an F55. I think there's already a rather big difference there, though the two can be seamlessly intercut if you're careful to mitigate the A7S' weaknesses.
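[A minimal sketch of the 4K-to-1080p downscale described above, using OpenCV. The file names are placeholders, and area averaging is one common scaler choice, not necessarily what any particular camera or NLE uses.]

```python
import cv2

# Load a 4K (3840x2160) frame exported as a still; the path is hypothetical.
frame_4k = cv2.imread("frame_4k.png")

# Downscale to 1080p. INTER_AREA averages blocks of source pixels, which is
# why the result can look cleaner per pixel than native 1080p: each output
# pixel is built from roughly 4 measured samples instead of 1.
frame_1080 = cv2.resize(frame_4k, (1920, 1080), interpolation=cv2.INTER_AREA)

cv2.imwrite("frame_1080.png", frame_1080)
```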
Aussie Ash Posted March 19, 2018

3 hours ago, User said:
I'm wondering, with the current crop of reasonably affordable 4k cameras today, if the difference between cameras will be less noticeable in 10 years? Have we more or less arrived at a place where digital image quality will hold up the way motion picture film did?

Look at how robust ARRI's workflow choices are: http://www.arri.com/camera/alexa/workflow/
As far as affordable 4k cameras under Aus $4,000 go, Canon and Sony are giving us the thinnest codecs they can get away with.
kye Posted March 19, 2018

History suggests we will plateau when IQ gets close to the limits of our perception, HOWEVER we are nowhere near those limits yet. Think 4k is enough because movie theatres were fine in 2K? What about VR capture: if your field of view is 120 degrees (just to pick a nice number), then you need 6K to spread 2K across your entire vision. But that's not what we're talking about here. We're talking about the angle of view of a TV, which is something more like 45 degrees, which means you need 8 x 2K horizontally just to properly render sitting in a room watching a 2K TV (see the sketch below). What about a VR simulation of a 4K TV? Forgetaboutit!!

But why just talk about resolution; let's talk about pixel depth. What is the DR of the human eye? When they get "retina DR" devices, what will the bit depth of that video signal look like across that amount of dynamic range? I'm thinking that 10-bit isn't going to cut it at that point.

And we're only talking 2D here... what happens when we want 3D environments to be simulated from AI-processed video feeds? Just mount a grid of cameras in the ceiling, perhaps 12 inches apart, and feed all those to an AI that creates a 3D VR environment, and UH OH! The AI doesn't have enough information to go by, so the virtual attendees of the latest Hollywood whatever event can't see the expressions on the stars' faces (or tell the difference between Elijah Wood and Daniel Radcliffe!), so we need to up the resolution. Now deaf people can't lip-read, up it again. Now the FBI gets interested and decides the tech is ready for AI facial microexpression monitoring at all airports and other critical locations: more resolution needed!

To many of you this will sound like science fiction, but you will be proven wrong. If you want evidence of this, pick up your smartphone and start talking to your virtual assistant, and then cast your mind back 35 years, when many people still had black and white television. This is an array of audio microphones with computer processing... 8 years ago. https://www.wired.com/2010/10/super-microphone-picks-out-single-voice-in-a-crowded-stadium/
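[A minimal sketch of the back-of-the-envelope maths above. The 120-degree and 45-degree angles and the 2K/4K pixel counts come from the post; the assumption (added here) is a full 360-degree capture with uniform pixels per degree.]

```python
def capture_width_needed(display_px: int, display_fov_deg: float,
                         capture_fov_deg: float = 360.0) -> int:
    """Horizontal capture resolution needed so a display occupying
    display_fov_deg of the viewer's vision still receives display_px
    pixels, assuming uniform angular resolution over capture_fov_deg."""
    return round(display_px * capture_fov_deg / display_fov_deg)

print(capture_width_needed(2048, 120))  # 2K over a 120-degree view -> 6144 (~6K)
print(capture_width_needed(2048, 45))   # a 2K TV at 45 degrees -> 16384 (8 x 2K)
print(capture_width_needed(4096, 45))   # a 4K TV at 45 degrees -> 32768 (forgetaboutit)
```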
User (Author) Posted March 19, 2018

10 hours ago, Aussie Ash said:
As far as affordable 4k cameras under Aus $4,000 go, Canon and Sony are giving us the thinnest codecs they can get away with.

Ain't that the truth. And towards this, and the idea of combining (documentary) material captured on older/lesser cameras with footage from newer cameras in the future, part of me wonders if, in ten years, there will be software that can bite into these 'thin' codecs and 'fatten' them up. Now I know you can't exactly pull something out of an image that isn't there, but it feels like this is, somehow, about to change.
tomekk Posted March 19, 2018

5 hours ago, kye said:
History suggests we will plateau when IQ gets close to the limits of our perception, HOWEVER we are nowhere near those limits yet. [...]

Good points. That's why I'd rather focus on making something great today with today's tech. What's the point of having my movies future-proofed if they suck and I'm forgotten tomorrow? (Although, I think, good 4k 60fps raw with good DR is my minimum for future-proofing before VR takes off ;)).
kye Posted March 20, 2018

7 hours ago, User said:
[...] part of me wonders if, in ten years, there will be software that can bite into these 'thin' codecs and 'fatten' them up. Now I know you can't exactly pull something out of an image that isn't there, but it feels like this is, somehow, about to change.

You can't pull something out of an image that isn't there, but you sure can have a good look at what is there and take a guess at what is missing. Humans do this kind of interpolation all the time. There's a loud noise and we miss a few words in a sentence, but we can guess from the context what they were. It's raining really hard and the windscreen wipers on your car aren't keeping up, so half the time the view of the road is blurry and the other half it's covered in big drops; yet we keep the clear moments in our heads while the rain obscures everything, we somehow correlate what we saw clearly with what is now blurred, and if something moves suddenly during the blurry part, our brain has kept track of what it was and even of what it might look like if we could see it.

This is how computers will do it. The first phase is being able to recognise objects (which Google etc. now do pretty well from photographs). The next phase will be to understand an image in multiple layers, knowing that the part of a person behind a pole still exists but is obscured. The third will be to have a huge database of objects and be able to stretch and manipulate them in 4D space, with all the physics applied, then match them to what is being seen and fill in the detail that isn't in the image. If you and I see a blurry image of a person wearing a scarf, we know the scarf is probably knitted (a coarse pattern of woven thread) with an array of fine textured fur around that coarse structure. AI will soon know this and substitute it in for us. (A crude classical cousin of that guessing is sketched below.)

5 hours ago, tomekk said:
What's the point of having my movies future-proofed if they suck and I'm forgotten tomorrow?

Absolutely!!
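[Not the AI described above, but a crude illustration that "guessing from what is there" is already mechanical: classical inpainting in OpenCV fills masked-out pixels by propagating the surrounding image inward. The file names and the mask rectangle are hypothetical.]

```python
import cv2
import numpy as np

# Hypothetical input: a frame, plus a mask marking the pixels we "lost"
# (non-zero = missing). Here we simply blank out a rectangle to demonstrate.
frame = cv2.imread("frame.png")
mask = np.zeros(frame.shape[:2], dtype=np.uint8)
mask[100:140, 200:300] = 255
frame[mask == 255] = 0

# Telea's method marches inward from the hole boundary, estimating each
# missing pixel from its known neighbours: a guess based purely on what
# IS in the image, with no semantic knowledge at all.
restored = cv2.inpaint(frame, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("restored.png", restored)
```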
mercer Posted March 20, 2018

10 hours ago, User said:
[...] part of me wonders if, in ten years, there will be software that can bite into these 'thin' codecs and 'fatten' them up. Now I know you can't exactly pull something out of an image that isn't there, but it feels like this is, somehow, about to change.

You could always do the old MiniDV trick of stacking the same shot on two different video tracks. A B&W video track of the same clip at a lower opacity can beef up the image a touch.
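[A rough numeric sketch of what that two-track stack works out to. The 20% opacity is an arbitrary assumption, the file names are placeholders, and a real NLE's blend modes may differ.]

```python
import cv2
import numpy as np

frame = cv2.imread("clip_frame.png").astype(np.float32)  # hypothetical still

# The "B&W track": the same frame desaturated.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray3 = cv2.merge([gray, gray, gray])

# A normal-mode layer at 20% opacity over the colour track is just a
# weighted average; it lifts apparent density at the cost of some saturation.
opacity = 0.2
stacked = (1.0 - opacity) * frame + opacity * gray3
cv2.imwrite("stacked.png", np.clip(stacked, 0, 255).astype(np.uint8))
```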
User (Author) Posted March 20, 2018

1 hour ago, mercer said:
You could always do the old MiniDV trick of stacking the same shot on two different video tracks. A B&W video track of the same clip at a lower opacity can beef up the image a touch.

I'll try and remember you said this when it's 2028.
Kisaha Posted March 20, 2018

In the recent C200 vs EVA shootout, Logan stated it clearly, and everyone agreed: "Maths do not lie". Raw > 10-bit > 8-bit. If one wants an archive of videos, raw is their best bet. I would never base my workflow on future AI decisions. This is the religious equivalent of Science being our Lord and Savior! You cannot plan around technology that hasn't been invented yet.

VR will be great. I can't wait for a worthy small-ish camera to play with (which doesn't exist at the moment), and I am planning to buy sound equipment for pro VR sound (a Sennheiser Ambeo together with a Sound Devices recorder) this year or the next, but I do not see 2D video going extinct for at least another hundred years, if ever; there is a lot of convenience in the 2D world.
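[The maths behind "Raw > 10-bit > 8-bit" is just code values per channel; a quick sketch. The 14-stop figure is illustrative, and the per-stop number is a linear average, not how real transfer curves distribute values.]

```python
# Code values available per channel at each bit depth.
for bits in (8, 10, 12, 14):
    print(f"{bits}-bit: {2**bits} levels")
# 8-bit: 256, 10-bit: 1024, 12-bit: 4096, 14-bit: 16384

# Spread across, say, 14 stops of dynamic range, 8-bit leaves ~18 values
# per stop on average versus ~73 for 10-bit, which is why thin codecs
# fall apart when pushed in the grade.
stops = 14
for bits in (8, 10):
    print(f"{bits}-bit: ~{2**bits / stops:.0f} values per stop (linear average)")
```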
kye Posted March 20, 2018

2 hours ago, Kisaha said:
I would never base my workflow on future AI decisions. This is the religious equivalent of Science being our Lord and Savior! You cannot plan around technology that hasn't been invented yet.

I'm assuming you're referencing my post, as I was the one talking about AI. It seems there was a misunderstanding. I wasn't saying to rely on AI to somehow future-proof my footage. I was saying the opposite: that although AI will do fascinating things, in the not-too-distant future things will have changed so much that, looking back from there, our 4K will be potato vision. It will be unusable except for nostalgia.

And to directly answer the OP's questions:

On 19/03/2018 at 10:46 AM, User said:
I'm wondering, with the current crop of reasonably affordable 4k cameras today, if the difference between cameras will be less noticeable in 10 years?

Yes. They will look back and see no difference. To them both will look like completely unusable, mindblowingly bad poop, or it will be cute and retro and trendy, with no seriousness about the IQ applied at all. How much do you look at Charlie Chaplin films and distinguish between the ones made before he could buy a sharper lens and those made after? Do you care? Could you even tell if you wanted to? If you want to future-proof your films, make them good enough that people in 10 years will still watch them even though the quality was bad. Charlie Chaplin isn't still a famous name because of the image quality of his films!

On 19/03/2018 at 10:46 AM, User said:
Have we more or less arrived at a place where digital image quality will hold up the way motion picture film did? 4k seems adequate enough, and we have that. Dynamic range will probably see incremental bumps, but how much further can that go?

We can go further than anyone in this forum can possibly imagine. IIRC we've created sensors that can detect individual photons of light, so once they're miniaturised sufficiently we'll have sensors with the bit depth of the universe and billions and billions of pixels. The Apple A11 CPU inside my iPhone has 4.3 billion (BILLION!) transistors in it, and is probably smaller than full-frame. If those transistors were photosites (a reasonable comparison, as a photosite is a simple circuit that requires power, data-in and data-out connections), that would be about 90,000 pixels wide by 50,000 pixels high; just for fun, I'm assuming the future is still 16:9. (The arithmetic is sketched below.)
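[A minimal check of the transistor arithmetic above, assuming one transistor per photosite and a 16:9 grid.]

```python
transistors = 4.3e9  # Apple A11 transistor count from the post
aspect = 16 / 9

# width * height = transistors and width / height = 16/9, so:
height = (transistors / aspect) ** 0.5
width = height * aspect
print(f"{width:,.0f} x {height:,.0f}")
# ~87,000 x ~49,000: roughly the "90,000 by 50,000" in the post,
# i.e. a ~4.3 gigapixel sensor.
```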
BTM_Pix Posted March 20, 2018

Whatever fantastical AI-driven gazillion-megapixel light field camera recording onto a grain of rice the manufacturers come up with, you can bet your life they still won't put a microphone input on the entry-level ones.
IronFilm Posted March 21, 2018

On 3/20/2018 at 2:27 AM, kye said:
This is an array of audio microphones with computer processing... 8 years ago. https://www.wired.com/2010/10/super-microphone-picks-out-single-voice-in-a-crowded-stadium/

I saw a demo once of two people talking into a mic, both talking over the top of each other, but the computer managed to process it out into two separate signals! :-o

On 3/20/2018 at 6:22 PM, Kisaha said:
VR will be great. I can't wait for a worthy small-ish camera to play with (which doesn't exist at the moment), and I am planning to buy sound equipment for pro VR sound (a Sennheiser Ambeo together with a Sound Devices recorder) this year or the next, but I do not see 2D video going extinct for at least another hundred years, if ever; there is a lot of convenience in the 2D world.

Zoom has a firmware update for the F8/F4 to support ambisonic decoding of the Sennheiser Ambeo.
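[A toy flavour of that cocktail-party separation, using independent component analysis on synthetic signals. Note the assumption: classic ICA needs at least as many mixtures as sources (two "microphones" here), so whatever single-mic demo that was would have used a different technique.]

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two synthetic "voices": a sine and a sawtooth.
t = np.linspace(0, 1, 8000)
s1 = np.sin(2 * np.pi * 5 * t)
s2 = 2 * (t * 13 % 1) - 1
sources = np.c_[s1, s2]

# Each "microphone" hears a different blend of the two voices.
mixing = np.array([[1.0, 0.6],
                   [0.4, 1.0]])
mixed = sources @ mixing.T

# FastICA recovers the independent sources (up to order, scale, and sign).
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(mixed)
```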
Kisaha Posted March 21, 2018

1 hour ago, IronFilm said:
I saw a demo once of two people talking into a mic, both talking over the top of each other, but the computer managed to process it out into two separate signals! :-o [...] Zoom has a firmware update for the F8/F4 to support ambisonic decoding of the Sennheiser Ambeo.

I know, and Sonosax has a whole hardware module!