
Camera resolution myths debunked


cantsin


Required viewing for anyone interested in the subject: Steve Yedlin, DoP of "Brick", "Looper" and "Star Wars: The Last Jedi", among others, debunks the myth that more camera pixels equal higher perceivable resolution, with test footage he shot on 35mm & 65mm Alexa, RED, and 35mm & IMAX film. Using Nuke, he also demonstrates how the imaging pipeline is a comparatively underrated factor in perceived resolution:
http://yedlin.net/ResDemo/

(Best to download the two videos and watch them, without scaling, at 1:1 pixel resolution in an external video player - from beginning to end, the full 1 hour and 15 minutes with all the slow-paced, detailed explanations.)



You're watching it and have the exciting revelation of "Right, they're all the same in the final analysis, so all I'll ever need is something that shoots 2K RAW, so fuck all you manufacturers trying to brainwash me into lusting after higher specs than that; it's clearly a waste, I'll never need it, and I'll now never even think twice about anything with a higher resolution than that."

It's truly liberating, and you feel you've broken free of the shackles of this ludicrous pursuit of something you don't need and will never be able to justify having financially, technically and especially creatively.

Hurrah!!

And then the Alexa 65 came on.

 

disappointed dog.jpg


6 hours ago, BTM_Pix said:

You're watching it and have the exciting revelation of "Right, they're all the same in the final analysis, so all I'll ever need is something that shoots 2K RAW, so fuck all you manufacturers trying to brainwash me into lusting after higher specs than that; it's clearly a waste, I'll never need it, and I'll now never even think twice about anything with a higher resolution than that."

It's truly liberating, and you feel you've broken free of the shackles of this ludicrous pursuit of something you don't need and will never be able to justify having financially, technically and especially creatively.

Hurrah!!

And then the Alexa 65 came on.

 

disappointed dog.jpg

Sure, but at a normal resolution they still look pretty similar. Remember, he's punched in 4X or something by that point, and you're watching on a laptop screen or iMac with a FOV equal to a 200" TV at standard viewing distance. So yeah, on an 800" TV I would definitely want 6K; heck, even on a 200" TV I would. But the biggest screen I've got is a 100" short-throw projector or something, so the only place I can see pixels with it is graphics and text.* I've also been watching a lot of Alexa 65-shot content in dual-4K 3D IMAX at my local theater, and tbh I can never tell when they cut to the B cam, unless it's like a GoPro, and then I can REALLY tell. :/

Furthermore, I don't think you need RAW video. That's another marketing myth. Anything high end (C300 Mk II, Varicam, Alexa, even arguably the shitty F55 etc.) is a total wash in terms of RAW vs ProRes.

Honestly I'm most impressed that he got the F55 to look that good! And I want that Nuke script!

I think the bigger lesson is that you can't apply logic against marketing. Red staked its reputation on a 4K camera that in practice had a much softer image than the C300 (the original pre-MX Red was softer per-pixel than the 5D Mark III, by a lot! I think the Genesis/F35 was sharper despite being 1080p)... and it worked! Netflix and YouTube are pushing hard with heavily compressed 4K... although at least YouTube lets you use the Alexa. 4K LCD panels are gaining some traction in the low-end market, where the image suffers from a million other problems and the screens are pointlessly small. It's like the megapixel wars. Most digital photographers aren't even printing, let alone printing wall-sized (and those I know who do print wall-sized were satisfied with 12MP). But the big number works well for marketing departments, and that trickles up to everyone. No one who matters to marketing will watch this video anyway.

*I thought my projector's LCDs were misaligned until I noticed that what appeared to be chromatic aberration or misalignment was really just sub-pixel antialiasing! I think 4K would benefit cartoons, text, etc. a lot - anything with really sharp lines. And for VR I think ultra-high resolutions will matter a lot. I love my Rift, but the image is 1990s-level pixelated. I'm not big on VR video, though.


2 hours ago, HockeyFan12 said:

Furthermore, I don't think you need RAW video. That's another marketing myth. Anything high end (C300 Mk II, Varicam, Alexa, even arguably the shitty F55 etc.) is a total wash in terms of RAW vs ProRes.

 

It was just an excuse to use the Vladimir Putin dog picture to be honest. 

RAW would be wasted on me anyway.


As embedded computing power increases, the use of RAW will rapidly diminish. There are still breakthroughs ahead based on generative mathematics - fractals, DNA, etc. - that will encode information without a fixed resolution, allowing output at any desired resolution to be rendered.
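A toy illustration of what "encoding without a fixed resolution" means - here the "decoder" is just a hand-written continuous function standing in for a learned generative model, so this is a sketch of the idea, not of any real codec:

import numpy as np

def decode(u, v):
    # Hypothetical stand-in for a generative decoder: maps continuous
    # coordinates in [0,1]^2 to a pixel value. A real system would
    # evaluate a trained model here instead of a fixed formula.
    return 0.5 + 0.5 * np.sin(40 * u * v) * np.cos(12 * (u - v))

def render(width, height):
    # Sample the continuous image function on any pixel grid:
    # the "encoding" (the function itself) has no native resolution.
    v, u = np.mgrid[0:1:height * 1j, 0:1:width * 1j]
    return decode(u, v)

thumb = render(320, 180)      # preview render
master = render(3840, 2160)   # same encoding, 4K output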

Here's one example based on machine learning (it's not clear if they can generate arbitrary output resolutions, though the compression quality is very high):

 


Like Yedlin rightly says, it won't be long before all the tested cameras are obsolete. More interesting to me than the resolution tests themselves was how well they were able to match the footage from all these cameras, and the discussion of how the entire pipeline - from acquisition to post to delivery to how viewers consume the finished product - affects perceived image quality. Yedlin's test has already sparked wars on social media, with 90% completely misunderstanding what he's done, for example:

What the fuck do I care what Steve Yedlin prefers, these are all within subjectivity and what you personally prefer. 

But at least one commenter was astute enough to observe that crafting stories is the most important part of the pipeline.

Here's an interview with Yedlin from American Cinematographer.


5 hours ago, tihon84 said:

I heard about Yedlin's Nuke script last year, but what is it? How does it work?

It's basically just a film emulation LUT, except that it's incredibly good and comprehensive. It has transforms to make the Alexa look like 5219, procedural grain that's very realistic, and halation that accurately emulates film. Just a great-looking film look script.
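Nobody outside his pipeline has the actual script, but to give a rough idea of how the grain and halation pieces could work, here's a toy numpy/scipy sketch (all parameters invented; the 5219 color transform itself is the hard part and is omitted):

import numpy as np
from scipy.ndimage import gaussian_filter

def film_look(img, grain_strength=0.03, halation_radius=8, halation_gain=0.2):
    # Toy film-emulation pass (emphatically not Yedlin's script).
    # img is float RGB in [0,1].
    # Halation: isolate highlights, blur them, and bleed them back in,
    # tinted warm as on print film.
    highlights = np.clip(img - 0.8, 0, None)
    glow = gaussian_filter(highlights, sigma=(halation_radius, halation_radius, 0))
    tint = np.array([1.0, 0.4, 0.2])
    out = img + halation_gain * glow * tint
    # Procedural grain: random noise weighted toward the midtones,
    # a crude stand-in for film's density-dependent grain.
    luma = out.mean(axis=2, keepdims=True)
    midtone_weight = 4 * luma * (1 - luma)   # peaks at 0.5, fades at extremes
    grain = np.random.normal(0, grain_strength, out.shape)
    return np.clip(out + grain * midtone_weight, 0, 1)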


On 7/24/2017 at 5:52 PM, cantsin said:

I can see a noticeable resolution difference between his 4K and 2K examples at 1:1 on Dell 4K desktop monitors. Even more so, I can see a huge difference between 4K and 2K in the theater. For HDTVs, I'm still using 1080p, mostly because there's so little 4K content to view. When I do finally upgrade, I'll get larger panels to take advantage of 4K (my current HDTVs are 52" and smaller).

I did a similar test a while ago to show how the 1DX II's "soft" 1080p can be scaled up to 4K, sharpened in 4K (Lumetri), then fine grain noise added in 4K, to produce an image perceptually similar to a native 4K shot:
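(Roughly that pipeline as a Python/OpenCV sketch - Lanczos upscale, unsharp mask at 4K, fine grain at 4K. The settings are illustrative guesses, not the exact Lumetri values:)

import cv2
import numpy as np

def fake_4k(frame_1080, sharpen_amount=0.6, grain_sigma=2.5):
    # Upscale a 1080p frame (uint8 BGR) to UHD with a high-quality resampler.
    up = cv2.resize(frame_1080, (3840, 2160), interpolation=cv2.INTER_LANCZOS4)
    # Unsharp mask: sharpened = original + amount * (original - blurred).
    blurred = cv2.GaussianBlur(up, (0, 0), 3)
    sharp = cv2.addWeighted(up, 1 + sharpen_amount, blurred, -sharpen_amount, 0)
    # Fine grain added at the 4K raster, so the noise itself reads as 4K detail.
    grain = np.random.normal(0, grain_sigma, sharp.shape).astype(np.float32)
    return np.clip(sharp.astype(np.float32) + grain, 0, 255).astype(np.uint8)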

Watching true 4K (or close to it) on a desktop computer looks massively better than 2K. A very large 4K HDTV watched at a similar FOV (which is pretty close for most people) is similarly more impressive than a smaller 1080p HDTV.

I think most people will agree that the native 4K clip looks nicer and more organic than the up-rez'd 2K clip, especially if viewing on a 4K monitor.

So while I agree with Steve that we can do tricks with 2K upsampled to 4K and sharpened+grained to come closer to 4K, the truth is 4K is still better than 2K, so I'm not sure what he's debunked?

Some day, Super Resolution and technology based on Generative Compression will be able to provide massively better upscaling algorithms in NLEs and even in future HDTVs (real-time hardware). http://www.infognition.com/articles/what_is_super_resolution.html (still no plugins or NLE support on the commercial market?). Generative Compression algorithms will break an image (or series of frames) into features which can be re-rendered at any desired resolution, similar to rendering vector art / spline curves.


28 minutes ago, jcs said:

I can see a noticeable resolution difference between his 4K and 2K examples at 1:1 on Dell 4K desktop monitors...

So while I agree with Steve that we can do tricks with 2K upsampled to 4K and sharpened+grained to come closer to 4K, the truth is 4K is still better than 2K, so I'm not sure what he's debunked?

His overall point wasn't "pixels don't matter"; it was that people pay far more attention to K count than to all the other aspects that make up an image. You have to admit, the difference between the 4K master downscaled from 6K and the one upscaled from 2K was not very big at all! Noticeable? Yes. But would the 2K master really get in the way of great storytelling? Hell no!

That last statement of yours ("the truth is 4K is still better than 2K") is kinda the whole reason he put this video together: "We shot with cameras that range from 3K to 11K, and the highest actual resolving power is in the middle of that range at 6K. The 3K camera is better than some of the 6Ks, and one of the 6Ks is better than the 11K. You just can't tell what you're going to get based on counting Ks."

 


20 minutes ago, EthanAlexander said:

His overall point wasn't "pixels don't matter"; it was that people pay far more attention to K count than to all the other aspects that make up an image. You have to admit, the difference between the 4K master downscaled from 6K and the one upscaled from 2K was not very big at all! Noticeable? Yes. But would the 2K master really get in the way of great storytelling? Hell no!

That last statement of yours ("the truth is 4K is still better than 2K") is kinda the whole reason he put this video together: "We shot with cameras that range from 3K to 11K, and the highest actual resolving power is in the middle of that range at 6K. The 3K camera is better than some of the 6Ks, and one of the 6Ks is better than the 11K. You just can't tell what you're going to get based on counting Ks."

 

Agreed, storytelling (and even sound!) is more important - if either of those is off, it really hurts the production regardless of other factors. But those are orthogonal to the image quality debate, right?

If everything else is equal - color accuracy, skin tones, noise structure, low light, motion cadence, dynamic range, no sampling artifacts - then more resolution is always useful, unless storage space is a major factor. For example, more resolution means you can shoot wider and punch in in post. Super time saver! Image stabilization also benefits from extra resolution.
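A quick back-of-the-envelope for that punch-in headroom (the Alexa 65 width below is from memory, so treat it as approximate):

def punch_in_headroom(acq_width, delivery_width=1920):
    # How far you can crop in before delivery drops below 1:1 pixels.
    return acq_width / delivery_width

print(punch_in_headroom(3840))  # 2.0x from UHD to a 1080p delivery
print(punch_in_headroom(6560))  # ~3.4x from the Alexa 65's ~6.5K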

Would I pick a Red or Sony over an Alexa if given the option at the same price? Almost certainly the lower-resolution Alexa (ProRes), unless I could first test an F65 to make sure I could easily get great skin tones (and that XAVC was good enough quality - F65 RAW is too data-heavy unless someone else is paying for storage :)).

I shoot mostly "soft" 1080p on the 1DX II, which works fine for medium to close-up shots (the aliasing hasn't been a problem so far; if I encounter it, I'll just shoot in 4K and lose a little shallow DOF, and get massively bigger files). Similarly, I shoot mostly 1080p (pretty sharp) on the C300 II for medium and tight shots. For full-body (green screen) wides, I shoot 4K.

On my 4K desktop displays, even on YouTube, there's a huge difference in quality watching 4K vs. 1080p:

Don't you agree?

 


3 minutes ago, jcs said:

If everything else is equal - color accuracy, skin tones, noise structure, low light, motion cadence, dynamic range, no sampling artifacts - then more resolution is always useful, unless storage space is a major factor. For example, more resolution means you can shoot wider and punch in in post. Super time saver! Image stabilization also benefits from extra resolution.

I doubt that Yedlin would disagree with you. I certainly don't. All else equal, bring on the pixels!

In his words he made the video because "I’ve seen multiple situations where filmmakers or studio decision makers are shown something that’s meant to be a comparison and they are being shown this not by a technology expert from their own company but by a vendor who stands to gain by whatever decision is made based on the demo they are giving. That’s not really a fair comparison situation. So these decisions are not only being made with entrenched presuppositions about what makes an image look the way it does, but these false comparisons that are only nominally scientific and actually more of a marketing manipulation."


1 hour ago, jcs said:

On my 4K desktop displays, even on YouTube, there's a huge difference in quality watching 4K vs. 1080p:

Don't you agree?

You're taking the compression factor out of the equation (very poor on YouTube, which is why up-rez'd HD looks noticeably worse there). If YT used better or less compression and the overall quality improved, even at the cost of dropping 4K support due to limited bandwidth, I would prefer that. My monitor upscales pixels that are nothing but compression artifacts just as faithfully as real detail.

And you're ignoring what Yedlin demonstrated about perceived sharpness. I watched Yedlin's part two on a 5K display in full screen, at a viewing angle/FOV of around 60° - not measured exactly, but the width of my display and my distance from it form roughly an equilateral triangle. The clip was 1080p only, compressed in a quality-preserving way, and I couldn't see individual pixels, neither in the 4K part one nor in part two. (Edit: as part 2 shows, you never actually see the pixels if your viewing device's resolution is dense enough.)

What Yedlin is saying is not that we should all shoot in 1080; he's saying we're all barking up the wrong tree. Would I prefer to watch Dunkirk in a 2K cinema? No. I was one of the few who could compare Inception (4K mastered and 4K delivered) on both a 2K and a 4K screen back then. I could sit even closer to the screen than usual. But there is a limit, because if you sit before, say, row 6, the image is too perspectively distorted.

I only later learned that The Revenant (4K mastered and 4K delivered, watched by me from row 6) was 87% shot in 2.5K. A 4K projection that upscales a 2K image, or shows a native 4K image, will look somewhat more brilliant than anything projected in 2K. So 4K TVs and monitors do have their benefits. It's just that we all grotesquely overestimate this factor.
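You can sanity-check the "can't see individual pixels" observation against the standard ~1 arcminute figure for 20/20 acuity. Rough math (it treats every pixel as subtending the same angle, which is close enough at these FOVs):

def pixel_arcmin(h_pixels, fov_degrees=60):
    # Approximate angle one pixel subtends, in arcminutes, when the
    # screen spans fov_degrees of your horizontal field of view.
    return fov_degrees * 60.0 / h_pixels

# ~1 arcminute is the usual threshold for 20/20 vision.
print(pixel_arcmin(5120))  # 5K-wide display at 60 degrees: ~0.7' -> grid invisible
print(pixel_arcmin(1920))  # a 1080p grid at the same FOV: ~1.9' -> resolvable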

 


China is investing huge money in the TV panel industry. They want to make 65-inch panels as cheap as today's mainstream 42-inch ones. The next step will be bringing 85-inch panels down from the luxury segment to mid-range. Our content should be ready for that.


3 hours ago, jcs said:

I can see a noticeable resolution difference between his 4K and 2K examples at 1:1 on Dell 4K desktop monitors.

Of course you can.
The content was created to be watched on a 1080p screen, and the term used was "noticeable difference". The difference is not noticeable watched at 100%; it starts to become noticeable at 200%, and even more so at 400%. At 100% you will not be able to see the differences - and on a 4K screen, 100% means using only a quarter of your screen.
So if you saw a difference, either you were looking fullscreen (in which case your screen creates four new pixels for every one pixel of the content, for two already-different pictures), or your screen has pixels that are somehow better than the pixels of a 1080p screen - and in that case I for one would really like to see a screenshot of the differences you saw on the pixels of your screen, as I don't think that is even possible.

 


46 minutes ago, Axel said:

You're taking the compression factor out of the equation (very poor on YouTube, which is why up-rez'd HD looks noticeably worse there). If YT used better or less compression and the overall quality improved, even at the cost of dropping 4K support due to limited bandwidth, I would prefer that.

And you're ignoring what Yedlin demonstrated about perceived sharpness. I watched Yedlin's part two on a 5K display in full screen, at a viewing angle/FOV of around 60° - not measured exactly, but the width of my display and my distance from it form roughly an equilateral triangle. The clip was 1080p only, compressed in a quality-preserving way, and I couldn't see individual pixels, neither in the 4K part one nor in part two.

Upscaling 1080p to 4K and sharpening, or sharpening while upscaling (with a more advanced resampler), then adding fine noise/grain, even using local contrast enhancement (a form of unsharp masking), and especially using Super Resolution (which uses aliasing information across frames of video to generate actual new detail), will produce visibly more perceived detail/quality than a plain bilinear/bicubic upscale of 1080p to 4K. I can clearly see detail differences in Yedlin's tests, especially in the eyes and skin detail. I didn't see anything in Yedlin's tests that showed anything new - really much ado about nothing! As for super-high resolutions not showing further improvement, that makes total sense based on Nyquist sampling theory. Past Nyquist, the extra resolution still provides additional post crop/zoom options, so it's still useful if one has the storage space. The Sony F65 clearly shows an advantage at "8K" for 4K delivery vs. lower-resolution cameras. More info here:
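For anyone curious how Super Resolution turns aliasing across frames into real detail, here's the core shift-and-add idea as a toy numpy/scipy/OpenCV sketch (grayscale frames, and the per-frame subpixel shifts are assumed known; real SR has to estimate them and usually deconvolves afterwards):

import cv2
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def shift_and_add(frames, shifts, scale=2):
    # frames: list of 2D float32 arrays; shifts: per-frame (dy, dx) in
    # input pixels. Upsample each frame, undo its subpixel shift on the
    # finer grid, and average: detail that aliased differently in each
    # frame reinforces, while the aliasing itself averages out.
    acc = None
    for frame, (dy, dx) in zip(frames, shifts):
        up = cv2.resize(frame, None, fx=scale, fy=scale,
                        interpolation=cv2.INTER_CUBIC)
        aligned = subpixel_shift(up, (-dy * scale, -dx * scale), order=3)
        acc = aligned if acc is None else acc + aligned
    return acc / len(frames)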

"I only later learned that The Revenant  (4k mastered and 4k delivered, watched by me sitting in row 6) was 87% shot in 2,5k. A 4k projection that upscales a 2k image or shows a native 4k image will look somewhat more brilliant than anything projected in 2k. So 4k TVs and monitors do have their benefits. It's just that we all grotesquely overestimate this factor."

Who is "grotesquely overestimating this factor" in 2017? It appears most people today agree about story/sound/color/DR/noise etc., with more resolution being useful too. A few years ago EOSHD was all about resolution (it still panned the 6D II for lacking 4K; "Canon is protecting the 5D4"...), especially around the time of the GH2 hacks (with lots of slamming of the 5D3 for being soft); even I made detailed posts comparing various camera resolutions. In 2017 it's more about color, DR, motion, manageable file sizes, etc.

 

