Everything posted by kye
-
I was being a bit provocative, mostly to challenge the blind pursuit of specifications over anything actually meaningful, which unfortunately is a bit like trying to hold back the ocean. I have seen a lot of footage from the UMP12K and the image truly is lovely, that's for sure. Especially in the videos Matteo Bertoli shoots with the UMP12K and the BMPCC6K - because it's the same person shooting and grading both, the comparison has far fewer external factors, and the 12K has a certain look that the P6K doesn't quite reach. The P6K is a great image too, so that's a high standard to beat.

The idea of massively oversampling is a valid one, and I guess it depends on the aesthetic you're attempting to create. In a professional situation, having a RAW 12K image is a very, very neutral starting position in today's context. I say "in today's context" because since we went digital, the fundamental nature of resolution has changed. With film, if you exposed it to a series of lines of decreasing size (and therefore increasing frequency), at a certain point the contrast would start to decrease as the frequency rose, until the lines became indistinguishable. The MTF curve of film slopes down as frequency goes up. In digital, the MTF curve is flat until aliasing kicks in, where it might dip up and down a bit, and then it falls off a cliff when the frequency reaches one cycle per two pixels - in audio this would be the Nyquist frequency - and OLPFs are designed to make this a gentler transition from full contrast to zero contrast. While there is no right and wrong, this type of response is decidedly unnatural - virtually nothing in the physical world behaves like this, which I believe is one of the reasons the digital look is so distinctive. 
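To make that film-vs-digital MTF difference concrete, here's a toy numerical sketch in Python/numpy. The roll-off constant, pixel pitch, and frequencies are all made-up illustrative numbers, not measured data - it just shows the shape of the two curves:

```python
import numpy as np

def film_mtf(f, f0=30.0):
    """Film-like MTF: contrast decays smoothly as spatial frequency rises.
    Modelled here as a simple exponential decay (assumed, not measured)."""
    return np.exp(-f / f0)

def digital_mtf(f, pitch=0.01):
    """Sensor MTF from the pixel aperture alone: near-flat at low frequencies,
    then a cliff at the Nyquist frequency (half the sampling frequency)."""
    nyquist = 0.5 / pitch              # 0.01 mm pixel pitch -> 50 cycles/mm
    response = np.abs(np.sinc(f * pitch))
    return np.where(f <= nyquist, response, 0.0)

# Film has already lost most of its contrast at 40 cycles/mm;
# the sensor is still delivering high contrast right up to the cliff.
print(round(float(film_mtf(40.0)), 2))     # well down the slope
print(round(float(digital_mtf(40.0)), 2))  # still mostly flat
print(float(digital_mtf(60.0)))            # past Nyquist: gone entirely
```

The point is the shape: film eases off gradually, while the digital response holds near-full contrast and then vanishes, which is the "unnatural" frequency response described above.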
The resolution at which contrast starts to decrease on Kodak 500T is somewhere around 500-1000 pixels across, so the difference in contrast on fine detail (otherwise called 'sharpness') is significant by the time you get to 4K and up. To have a 12K RAW image is to have pixels that are significantly smaller than human perception can resolve (by a looong way), so in a sense it takes the OLPF, moire, and the associated effects of "the grid", as you say, out of the equation - but it also creates an unnatural MTF / frequency response curve. In professional circles, this flat MTF curve would be softened by filters, by the lens, and then by the colourist. If you look at how cinematographers choose lenses, resolution-limiting characteristics are often a significantly desirable trait guiding those decisions.

Going in the opposite direction - away from the very high resolutions with deliberately limited MTF that Hollywood normally chooses - we have the low resolutions, which limit MTF in their own ways. For example, a native 1080p sensor won't appear as sharp as a 1080p image downsampled from a higher resolution source. 1080p is around the limit of human vision in normal viewing conditions (cinemas, TVs, phones). In a practical sense, when people these days are filming at resolutions around 1080p, MTF control from filters and un-sharpening in post is normally absent, and even most budget lenses are sharper than 1080p (2MP!), so the image needs some active treatment to knock the digital edge off. The other challenge is that these images are likely to be sharpened and compressed in-camera, so they'll have digital-looking artefacts to deal with; these are often best un-sharpened too, as they tend to be related to the pixel size.

4K is perhaps the worst of all worlds. It isn't enough resolution to be significantly beyond human vision and free of grid effects, but it also has a flat MTF curve that extends waaay further than appears natural. 
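The "smaller than human perception" claim is easy to sanity-check with back-of-envelope arithmetic. The 1-arcminute acuity figure is a common rule of thumb, and the 45-degree field of view is an assumed cinema seat, so treat this as a rough sketch rather than vision science:

```python
# Rough check: how much horizontal detail can a viewer actually resolve?
acuity_arcmin = 1.0       # ~1 arcminute is the usual rule-of-thumb visual acuity
fov_degrees = 45.0        # assumed horizontal field of view from a good cinema seat

# Distinguishable columns across the field of view.
resolvable_points = fov_degrees * 60.0 / acuity_arcmin
print(int(resolvable_points))   # lands between 1080p (1920) and 4K (3840) wide

for name, width in [("1080p", 1920), ("4K UHD", 3840), ("12K", 12288)]:
    print(f"{name}: {width / resolvable_points:.1f}x the resolvable detail")
```

Under these assumptions, 1080p sits just below what the eye can resolve, 4K modestly above it, and 12K several times beyond it - which is why the grid effectively disappears at 12K.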
Folks who are obsessed with the resolution of their camera are also more likely to resist softening the MTF curve, so they're essentially pushing everything into the digital realm and making the image resemble the physical world the least. I find that "cinematic" videos on YT shot in 4K are the most digital / least analog / least cinematic images, with those shot in 1080p normally being better, and those shot in 6K or greater being the best (because until recently those cameras were limited to people who understand that sharpness and sharpening aren't infinitely desirable). The advantage 4K has over 1080p is that the compression artefacts from poor codecs tend to be smaller, so they're less visually offensive and more easily obscured by un-sharpening in post. Ironically, a flat MTF curve is just what you'd get if you filmed with ultra-low-noise film and then performed a massive sharpening operation on it - the resulting MTF curve is the same.

I'm happy to provide more info if you're curious - I've written lots of posts around various aspects of this subject. Yep, massively overenthusiastic amateur here. I mostly limit myself to speaking about things I have personal experience with, but I work really hard behind the scenes: shooting my own tests, reading books, doing paid courses, asking questions and listening to professionals. I challenge myself regularly, fact-check myself before making statements in posts, and have done dozens if not hundreds of camera and lens tests to try to isolate various aspects of the image and how it works. I have qualifications and deep experience in some relevant fields. I also have a pretty good testing setup, do blind tests on myself using it, and (sadly!) rank cameras in blind tests in increasing order of cost! 😂😂😂 I'm happy to be questioned, as normally I have either tested something myself, or can provide references, or both. 
Sadly, most people don't have the interest, attention span, or capacity to go deep on these things, so I try to make things as brief as possible, and they end up sounding like wild statements unless you already understand the topic. Unlike many professionals, I manage the whole production pipeline from beginning to end, and have developed an understanding of things that span departments - things that often fall through the cracks, or that involve changing something in one part of the pipeline and compensating for it at a later point. Anything that spans several departments would rarely be tested, except on large-budget productions where the cinematographer is able to shoot tests and then work with a large post house, which unfortunately is the exception rather than the norm. Ironically, because I shoot with massively compromised equipment in very challenging situations, I work harder than most to extract everything from my gear, pushing it to breaking point and beyond and trying to salvage things. Professional colourists are, unfortunately, used to dealing with very compromised footage from lower-end productions, but they are rarely consulted before production to give tips on how to maximise things and prevent issues.
-
Additionally, it's easy to look at residual noise on the timeline and turn up our noses, but by the time you have exported the video from your NLE, then it's been uploaded to the streaming platform, then they have done goodness-knows-what to it before recompressing it to razor-thin bitrates, much of what we were seeing on the timeline is gone. The "what is acceptable and what is visible" discussion needs to be shifted to what is visible in the final stream. Anything visible upstream that isn't visible to the viewer is a distraction IMHO.
-
I'm actually not that fussed by 8-bit video anymore, assuming you know how to use colour management in your grades. If you're shooting 8-bit in a 709 profile, you can transform from Rec.709 to a decent working space, grade in there, then convert back to 709. Assuming the 709 profile is relatively neutral, this gives almost perfect ability to make significant exposure / WB changes in post, and by grading in a decent working space (RCM, ACES) all the grading tools work the same as with any other footage. Because you're going from 8-bit 709 capture to 8-bit 709 delivery, the bit depth mostly stays in the same place and doesn't get stretched too much.

The challenge is when you're capturing 8-bit in a LOG space, or a very flat space. This is what I faced when recording my XC10 in 4K 300Mbps 8-bit C-Log - I have spoken about it in great detail in another thread. It was a real challenge and forced me to explore and learn proper texture management. Texture management isn't spoken about much online, but it includes things like NR (temporal, spatial), glow effects, halation effects, sharpening / un-sharpening, grain, etc. I found with the low-contrast 8-bit C-Log shots from the XC10 that by the time I applied very modest amounts of temporal NR, spatial NR, glow, and un-sharpening, not only was I left with a far more organic and pleasing image, but the noise was mostly gone. It's easy for uninformed folks to look at 8-bit LOG images like the XC10's and think they're vastly inferior to cameras where NR isn't required, but this isn't true - real high-end cinema cameras are noisy as hell compared to even the current mid-range offerings, and professional colourists are expected to know about NR. A recent post in a professional colour grading group I'm in was about NeatVideo, and it mentioned that NR is essential on almost every professional colour grading job. 
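Here's a stripped-down sketch of why the 709-in / 709-out round trip is so forgiving. It uses a pure 2.4 gamma as a stand-in for the full Rec.709 transfer function and plain linear light as the "working space", so it's a big simplification of what RCM / ACES actually do - but the bit-depth behaviour it shows is the point:

```python
import numpy as np

def decode_709(code):
    """8-bit code values -> linear light (pure 2.4 gamma as a simplification)."""
    return (code / 255.0) ** 2.4

def encode_709(linear):
    """Linear light -> 8-bit code values."""
    return np.round(np.clip(linear, 0.0, 1.0) ** (1.0 / 2.4) * 255.0)

codes = np.arange(0, 256, dtype=np.float64)

# Round trip with no grade applied: capture and delivery sit in the same
# 709-ish space, so essentially no 8-bit values are lost.
round_trip_error = np.max(np.abs(encode_709(decode_709(codes)) - codes))
print(round_trip_error)

# A one-stop exposure push done in linear is still well behaved: values map
# smoothly upward rather than tearing apart the way stretched 8-bit LOG does.
pushed = encode_709(decode_709(codes) * 2.0)
print(pushed[128])   # mid-range value lifted by one stop
```

Capturing in 8-bit LOG breaks the symmetry: the LOG curve spreads the scene across the code values differently from the delivery gamma, so the transform back to 709 has to stretch some regions, and that's where the banding and macro-blocking come from.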
I'd almost go so far as to say that if you can't get a half-decent image from 8-bit LOG footage then you can't grade high-end cinema camera footage either. There are limits though, and things like green/magenta macro-blocking in the shadows were evident on shots where I had under-exposed significantly, but on cameras with a larger sensor than the XC10's 1" sensor, and if exposed properly, these things are far less likely to be real issues.
-
I've been talking about OIS and IBIS having this advantage over EIS for years..... sadly, most people don't understand the differences enough to even know what I'm talking about.
-
Interesting stuff, and it makes me think about the future of VFX integration. My understanding of the current VFX workflow is this:
- Movie is shot
- Movie is partly edited, with VFX shots identified
- Footage for VFX shots is sent to the VFX department with the show LUT
- VFX department does the VFX work, applying the elements required to match the rest of the film (e.g. lens emulations, etc) but not elements absent from the real-life footage (e.g. colour grade)
- VFX department delivers footage and it is integrated into the edit
- Picture lock
- Colour grading occurs on the real-life footage as well as the VFX shots

In this workflow, things like the colour grade get applied by the colourist to the whole film. Things like film grain would only be applied by the VFX department if the movie was shot on film (and would only be applied to the VFX elements in the frame, rather than the whole finished VFX frames). I wonder if, in future, VFX will become so prevalent that the colour grade becomes integrated within the VFX framework, rather than VFX being considered a step that occurs prior to the colour grade. The discussions I have seen imply that advanced things like lens emulations / film emulations / etc (which are more image science / colour science than colour grading) are beyond the scope and ability of most colourists.
-
What do you mean? It looks flawless to me!!! 😂😂😂 Doing it in post in Resolve looks like it might be a mature enough solution now, but I can't imagine that devices will be good enough to do it real-time for quite a few years. I started a separate thread about this semi-recently: https://www.eoshd.com/comments/topic/78618-motion-blur-in-post-looks-like-it-might-be-feasible-now/
-
The more you look at something, the more you notice. When I first started doing video, I couldn't tell the difference between 60p and 24p, now I can tell the difference between 30p and 24p! BUT, having said that, my wife is pretty good at telling very subtle differences in skin tones and has spent exactly zero time looking at colour grading etc, so we all start off seeing things differently as well.
-
Just un-sharpen more. Unless you like the digital look? TBH, if something hasn't been un-sharpened - even if it was shot RAW and not processed at all - I find it's trying to shout at me over the content of the video. I suspect the culprit is the post-sharpening done by YT / the streaming service. Comparing your upload to the YT version shows quite a substantial difference.
-
Yeah, high resolution is great if you want high resolution. Not so good if you are interested in making a meaningful final film.
-
The image quality side-by-sides aren't even close..
-
170Mbps... not terrible. Let's see what the implementation is like, especially the sharpening, NR, and auto-awesome AI. I like that there are two of them, with the "Pro" one having a larger sensor and more resolution etc. I feel like most action products end up being just the generic model, and somehow the higher-end models never get released.
-
Yes, I think the appropriate reaction is the 😂 face... we're happy for you, we are thankful for the idea, but we also know ourselves and thus, the tears!
-
In addition to the above, one of the earlier wipes they apply is often the show LUT, which also reveals a lot about the colour grade. Things like:
- cooling shadows / warming highlights
- highlight rolloff
- subtractive saturation effects
- shifts to skin tones
- etc

Normally the VFX department will only have the show LUT, and the "final" shot in the VFX breakdown isn't the same as the final shot in the film, because the final VFX shot will be exported without the show LUT and coloured by the colourist in the final pass. But it'll be in the same overall direction, and colour breakdowns are often not available for those movies, so it's good info to have.
-
I've had some contact with ILM, discussing how they emulate lenses in post; they do very detailed and meticulous work and overall it's very impressive. So much so that I actually find VFX breakdowns a very instructive tool for understanding the look-development process. They show the scene starting as a mesh, and then wipe after wipe reveals each stage of the VFX process, often with the final wipe being the one that just adds that cinematic magic. In analysing what that final wipe does, I normally see:
- vignetting
- un-sharpening overall, and often more in the corners
- bokeh and defocusing
- glow / halation / mist

If you take the average nice-but-digital-looking shot from a modern camera and apply similar effects, the flavour of the image quickly goes from video to cinema.
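For fun, here's a rough numpy-only sketch of that "final wipe" recipe - un-sharpen, glow, vignette - applied to a synthetic frame. All the amounts, the cheap box blur, and the glow threshold are arbitrary stand-ins for the real tools, not anyone's actual pipeline:

```python
import numpy as np

def box_blur(img, k=5):
    """Cheap separable box blur (stand-in for a proper Gaussian), odd k."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, out)

def final_wipe(img, blur_mix=0.3, glow=0.2, vignette=0.25):
    """Un-sharpen, add glow, and darken the corners. Amounts are guesses."""
    soft = box_blur(img)
    out = (1 - blur_mix) * img + blur_mix * soft       # un-sharpen: blend toward blur
    out = out + glow * np.clip(soft - 0.7, 0, None)    # glow: lift blurred highlights
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2)) / np.sqrt(2)
    out = out * (1 - vignette * r ** 2)                # vignette: darken toward corners
    return np.clip(out, 0.0, 1.0)

img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0          # a hard-edged, "digital" looking patch
out = final_wipe(img)
print(out[16, 11])               # the previously razor-sharp edge now has a soft skirt
```

Obviously the real versions of these operations are far more sophisticated, but even this crude chain softens the hard edge and pulls the frame away from the clinical look.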
-
Yeah, this is one of the other hidden costs of higher resolutions - the inability to keep bitrates reasonable. Obviously RAW scales with the resolution of the image, but so do the bitrates of the ProRes codecs... and here's the issue - the screens don't get bigger! And your vision doesn't get better either!
You: I'd like to buy a higher resolution camera please.
Shopkeeper: Sure. Here is a stack of larger hard drives, here is a huge new computer, here are the blazing media cards, here is ........
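The scaling is easy to put numbers on. The ~220 Mb/s figure is the commonly quoted ProRes 422 HQ rate at 1080p30, and the sketch simply assumes bitrate scales linearly with pixel count (roughly true for RAW and intraframe codecs); the 12K frame size is the Blackmagic 12K's:

```python
BASE_PIXELS = 1920 * 1080   # 1080p reference frame
BASE_MBPS = 220.0           # commonly quoted ProRes 422 HQ rate at 1080p30

def scaled_mbps(width, height):
    """Estimate an intraframe bitrate by scaling linearly with pixel count."""
    return BASE_MBPS * (width * height) / BASE_PIXELS

def hours_per_terabyte(mbps):
    """How many hours of footage fit on a 1 TB drive at this bitrate."""
    return 1e12 * 8 / (mbps * 1e6) / 3600

for name, w, h in [("1080p", 1920, 1080), ("4K UHD", 3840, 2160), ("12K", 12288, 6480)]:
    mbps = scaled_mbps(w, h)
    print(f"{name}: ~{mbps:,.0f} Mb/s, ~{hours_per_terabyte(mbps):.1f} h per TB")
```

Going from 1080p to 12K multiplies the pixel count (and hence the estimated data rate) by nearly 40x, turning ten-plus hours per terabyte into well under half an hour - while the screen you watch it on stays exactly the same size.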
-
Quick - you've got just over a week to stock up!!! Seriously though, this is good. Anything that helps you focus on your goals more and be less distracted is a good move 🙂 While there are always things I'm curious about, sadly, there isn't a huge list of things I'd like to buy - they just don't make the things I really want!
-
Just un-sharpen in post. I remember a great thread on a colourist forum some months ago asking about sharpening, and the responses were that most people don't sharpen, and many reduce the sharpness of the image to avoid a digital look. I think un-sharpening might be one of those hidden things that camera fondlers would consider heresy but that is widely done by the pros for high-end work. And they were talking about cameras that shoot RAW, not compressed codecs from consumer cameras.
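If you want to see why un-sharpening works, here's a 1D sketch: sharpening adds a high-pass component (image minus blur) and creates the overshoot halos you see at edges, while blending back toward a blur subtracts that same high-pass and tames them. The kernel size and amounts are arbitrary illustrative choices:

```python
import numpy as np

def blur(x, k=5):
    """Simple 1D box blur with edge padding (stand-in for a Gaussian)."""
    return np.convolve(np.pad(x, k // 2, mode="edge"), np.ones(k) / k, mode="valid")

edge = np.repeat([0.2, 0.8], 20)          # a step edge in a 1D "scan line"
high_pass = edge - blur(edge)             # the detail that sharpening amplifies

sharpened = edge + 1.0 * high_pass        # typical in-camera style sharpening
unsharpened = sharpened - 0.5 * (sharpened - blur(sharpened))   # blend toward blur

print(sharpened.max())    # overshoots past 0.8: the classic digital edge halo
print(unsharpened.max())  # halo substantially reduced
```

The overshoot above the original 0.8 level is exactly the hard, ringing edge that reads as "video"; pulling the image partway back toward its own blur removes most of it without destroying the underlying detail.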
-
This is the internet... such things are irrelevant when arguing about technical matters!
-
Yep.. it's the darnedest thing - on a film set they're thimble-sized and in danger of being lost, but if you pick one up and walk off the film set it starts to grow... as you walk through crowded tourist hotspots it becomes quite large, perhaps the size of a toddler's head, but as you walk away from the crowds it rapidly inflates to the size of a watermelon, with passers-by stopping and staring at you.. by the time you leave the areas with moderate foot traffic it has become the size of a dozen adult-themed helium balloons and gathers about the same amount of attention.
-
Hot damn! You mean we can shrink the camera bodies by just lopping bits off? Where's my hacksaw!!
-
Sony a9 III global shutter high ISO / dynamic range tests
kye replied to Andrew - EOSHD's topic in Cameras
Most sensors do a full readout at the highest bit depth at 24/25/30p, but at higher frame rates they typically reduce the bit depth of the readout. Assuming this was done to save on data rates and processing, it means they have been making progress - they just spent it all on resolution instead of bit depth. Yet another hidden cost of the preposterous resolution pissing contest the entire industry is running, with consumers cheering all the way down.
-
If only they weren't so large!
-
Availableism - nice! It reminds me of approaches like Dogme95, which integrate that sort of element and go a lot further with it as well.

Sadly, I'm not surprised about the grant decision. I've been to enough film festivals to know that the thinking is often enormously traditional / blinkered, and also motivated by who-you-know and all that crap too. One student film festival I went to had a film in the documentary category that was old people talking about their sex lives - it was very entertaining, the old folks were all very cute, and it definitely deserved to win an award for concept / direction / producing, which it did. However, it was shot terribly - there were booms in shot on half a dozen occasions, and the camera wasn't held steady and was bumped significantly and obviously a couple of times - yet it also won best cinematography and best sound, which was completely ridiculous.

One of the things that dominates the overall architecture of how traditional films are made is that many of the people involved are not critical thinkers; they learned how to perform their role, but they don't understand the other roles, the overall process, or even how to make a film. There are often territorial disputes as people defend their patch, etc. The film-making process is a factory production line, and most workers in a factory don't understand that it's possible to redesign the factory and make it work better, let alone accept it when someone suggests it.

Yeah, that's a big drawcard about the FX3 that makes it stand out. I don't understand why there aren't more cameras with their native ISOs further apart. Most cameras have lower base ISOs and a much smaller interval between the lower native ISO and the higher native ISO, which combine to give the FX3 a huge advantage in low light.
-
Never mind that colourists working on high-end material still regard noise reduction as a critical tool for every shoot, including the ones where all the material was exposed properly in-camera and recorded at native ISO! It's fascinating to download cinema camera footage for the first time and see that it has more noise in it than a mirrorless low-light test. I got a bit of a shock when I saw that for the first time.
