Everything posted by kye
-
What are you hoping to achieve with these personal comments? How is this helping anyone? This whole thread is about the perception of resolution under various real-world conditions, to which you've added nothing except endless criticism of the tests in the first post. This thread has had 5.5K views, and I doubt those people clicked on the title to read about how one person wouldn't watch the test, didn't understand it, used technical concepts out of context to try and invalidate it, and then got personal when their arguments weren't convincing. This is a thread about the perception of resolution in the real world - how about focusing on that?
-
I understand the logic, but the Alexa 65 suffers from the same problem. If we compare the width of Super35 at 24.89mm and divide it by the width of the Alexa 65 at 54.12mm, it is 46% as wide. If we take 46% of the 6.5K width of the Alexa 65 then we get 3K, so the Alexa 65 can only shoot 3K in S35 crop mode. I won't pretend to know if 3K (from the Alexa 65) vs 2.7K (from a 1.5 crop into a FF 4K sensor) is a meaningful difference, but it's interesting that the Alexa 65 is a large-format flagship model, and I would imagine that a C700 with 20 stops of DR would also be chasing that premium market. Comparing them properly would also need a pretty rigorous testing process in terms of managing the image pipeline. Not an easy thing to test consistently even if you are paying attention to all the details and don't have a vested interest!
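As a quick sanity check on the arithmetic above, here's a minimal sketch of the crop calculation in Python. The sensor widths and pixel counts are just the figures quoted in this post (24.89mm S35, 54.12mm and ~6.5K for the Alexa 65, 4K full frame with a 1.5x crop), so treat them as illustrative rather than official specs:

```python
# Rough crop-resolution arithmetic using the figures quoted above
# (approximations only, not official sensor specs).

S35_WIDTH_MM = 24.89        # Super35 sensor width
ALEXA65_WIDTH_MM = 54.12    # Alexa 65 sensor width
ALEXA65_WIDTH_PX = 6560     # ~6.5K horizontal photosites (assumed)

# Fraction of the Alexa 65 sensor that a Super35 crop covers
crop_fraction = S35_WIDTH_MM / ALEXA65_WIDTH_MM          # ~0.46
alexa65_s35_px = ALEXA65_WIDTH_PX * crop_fraction        # ~3000 px, i.e. ~3K

# Compare with a 1.5x crop into a full-frame 4K sensor
FF_4K_WIDTH_PX = 4096
ff_crop_px = FF_4K_WIDTH_PX / 1.5                        # ~2730 px, i.e. ~2.7K

print(f"Alexa 65 in S35 crop: ~{alexa65_s35_px:.0f} px wide")
print(f"FF 4K with 1.5x crop: ~{ff_crop_px:.0f} px wide")
```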
-
I second the Fujifilm cameras; they seem to put the most effort into their built-in colour profiles. That said, for home use most cameras these days create modern video-looking images, so if that suits your tastes then you could get away with using the built-in profiles.
-
Do I?
-
When I read this sentence I thought you were going to suggest that 4K wasn't enough. Haven't you heard that Hollywood is finally catching up to the amateur market and going bigger=better with sensor size? Go read the interviews of people who shot with the Alexa 65 and see how they talk about the sensor size. The scale of the image from the larger sensor was completely critical to their vision - they say it over and over again!
-
You criticised Yedlin for using a 6K camera on a 4K timeline and then linked to a GH5s test (a ~5K camera) as an alternative... what about the evil interpolation that you hold to be most foul? Have you had a change of heart about your own criteria? Have you seen the light? You even acknowledge that the test "is not perfect" - I fear that COVID has driven you to desperation and you are abandoning your previous 'zero-tolerance, even for things that don't matter or don't exist' criteria! Yedlin's test remains the most thorough available on the subject, so until I see your test I will refer to Yedlin's as the analysis of reference. Performing your own test should be an absolute breeze considering how elevated you claim your intellect to be in comparison to Yedlin, who published exactly such a test.
-
Cool that you're getting use out of yours! I'd imagine that people using it for drone work are also quite satisfied, as that's what it's really designed for. To each their own 🙂
-
Well?
-
You left out drinking. Anyone who goes out and drinks at a bar / pub and gets drunk, even only once per week, is easily spending the price of a cinema camera every year... at Australian prices anyway.
-
The comments that I read were mostly about the sensor, but the rest of the camera may have different chipsets and other stuff going on under the hood, so who knows. I highly doubt you'll be able to get firmware from one and load it on the other - obviously things like buttons and UI are probably very different. Like with anything else, we're at the mercy of however much capability the manufacturers want to give us.
-
The stream was in the middle of the night for me, but I just watched the recording of the session earlier today and I have to agree - it was spectacular. For those that couldn't attend, here are some of the things I learned from the class:
- The challenge isn't to grade a shot with 40 nodes, it's delivering the same look over a 3000-shot feature or (even worse) a series that is shot over time, and doing it within the time/budget that the production has allocated.
- On top of that, what if you're going in the wrong direction and the director doesn't like it? How long did you just waste, and how can you effectively change the grade without breaking all the qualifiers, etc.?
- Start with an overall adjustment across the whole project. This should be based on the look that the DP wants for the project, and if you've handled the colour spaces correctly it should correctly render every shot that the DP exposed correctly on capture - for many projects that are shot very well this might be all that is required.
- From there you can apply scene-specific adjustments, such as warming or cooling scenes so they're coherent with the emotional arc of the story.
- From there you should only be adjusting tiny things on a per-shot basis, such as small changes in exposure (eg due to changing lighting conditions) or tweaking distracting elements.
- This should mean you can show the complete film to the director with that rough grade and get feedback, with room to change the overall look and the look of various scenes, and still have time left over for incorporating VFX, troubleshooting any difficult sections that need more attention, and mastering for SDR/HDR.
- Interestingly, if you build the global look starting with a CST to transform from the camera colour space/gamma into something standard (such as ARRI LogC or Cineon) and build your look on that, then you can take that same node tree, apply it to a project shot on a different camera, and by simply adjusting the CST for the new colour space you can quickly reuse a look between projects (there's a rough sketch of this idea below). Walter did this live several times and in literally a couple of minutes had a very solid-looking grade on a completely different project.
He also took great joy in roasting various YT colourists, LUT peddlers, and those that shout at the internet from the very shallow end of the pool, so I found the class hugely entertaining. For members of Lowepost, he also did a short workflow explanation here: https://lowepost.com/insider/colorgrading/hollywood-colorist-walter-csi-about-his-color-grading-process-r48/
Did anyone else watch his masterclass?
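On the CST point above, here's a minimal sketch in Python, purely to illustrate the structure: the look is built once on a standard working space (LogC here), and the only thing that changes between projects is the input transform. The function names and transforms are hypothetical placeholders, not Resolve's API or Walter's actual node tree:

```python
# Minimal sketch of a camera-agnostic look built on a standard working space.
# All transforms are placeholders, not real colour science.

def sony_slog3_to_logc(pixels):
    """Input CST for project A: Sony S-Log3 -> ARRI LogC (placeholder)."""
    return pixels  # real conversion maths would go here

def red_log3g10_to_logc(pixels):
    """Input CST for project B: RED Log3G10 -> ARRI LogC (placeholder)."""
    return pixels  # real conversion maths would go here

def show_look(pixels_logc):
    """The global look, built once on LogC: contrast, colour, etc."""
    return pixels_logc  # creative adjustments would go here

def logc_to_rec709(pixels_logc):
    """Output CST: LogC -> Rec.709 display (placeholder)."""
    return pixels_logc

def grade(pixels, input_cst):
    # Only the input CST changes between projects; the look itself is reused.
    return logc_to_rec709(show_look(input_cst(pixels)))

# Project A (Sony footage) and project B (RED footage) share the same look:
# graded_a = grade(sony_frame, sony_slog3_to_logc)
# graded_b = grade(red_frame, red_log3g10_to_logc)
```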
-
A video from the OG BMPCC (the P2K!): https://youtu.be/aKyG5JdSUNc (for some reason YT won't embed the video)
I agree. Saying something is technically possible is one thing, but having the skill required and then the time to do so is another thing altogether. Steve Yedlin has done great work matching the Alexa to film, but he built his own software to do so, and he says this about the capability of colour grading software, even as advanced as Resolve and Baselight:
My attempts to replicate the look of the P2K and M2K were never focused on achieving a perfect replication, or even a passable one, but were more of a "shoot for the stars and potentially hit the moon" scenario where the worst case is that I'll learn more about colour grading, and I have definitely done that. I have learned a bunch of stuff that I think really bridges the gap between, say, the GH5 and the BMPCC. There are other things I'm still working on though - shadow contrast and levels are a huge thing I'm experimenting with now, for example.
And the 14mm and 7.5mm both have the same filter thread size, so if you use them as a pair you can just swap the whole filter stack between them when you change lenses - making the setup much simpler and more streamlined.
-
I wasn't saying that it was aimed at the same user, or that the overlap would be 100%, but it's a lot closer than other parts of the market. My point was really that cine cameras have a bunch of things that make sense for cinema, but are a royal PITA for other things, and on this point the FP is quite well aligned. I shoot travel content with a GH5, and if I list every reason that a P4K wouldn't be suitable for me then almost all of them apply to the FP. If I then listed all the things I would be looking for if I shot a narrative piece, the P4K and FP share most of them.
-
I guess I really haven't succeeded then. The imaging pipeline is complex enough that it's difficult to understand the whole thing (which is one of the reasons why Yedlin's video is an hour long), so when you get two people both talking at length using technical language it's hard to tell which one of them is correct. This challenge happens in any topic that is complex and where people have vested interests (for example, carbon dating and the implications it has for the age of the earth and for Bible literalists). Is there something I can do to better explain why Yedlin's test is valid and Tupp's criticisms aren't? The reason I haven't backed down is that I don't want people to come away from this thread thinking the test doesn't hold up, but unfortunately the more mud gets thrown at something, the harder it is to tell where the truth lies.
-
This is from a resolution test of the ARRI Alexa: Source is here: https://tech.ebu.ch/docs/tech/tech3335_s11.pdf (top of page 10) It's pretty obvious that the red has significantly less resolution than the green, which comes down to the number of green vs red photosites on the sensor. But you're totally right - this has no impact on a test about resolution at all!
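To put a rough number on the photosite point, here's a quick sketch of how per-channel sampling works out on a standard Bayer sensor (the RGGB layout and the 4K grid are general assumptions for illustration, not figures from the EBU document):

```python
# Per-channel photosite counts on a standard RGGB Bayer sensor.
# Rough arithmetic only; real debayering is far more sophisticated.

width, height = 3840, 2160          # example 4K UHD photosite grid
total = width * height

green = total // 2                  # 2 of every 4 photosites are green
red = total // 4                    # 1 of every 4 is red
blue = total // 4                   # 1 of every 4 is blue

# Linear sampling density (per axis) is the square root of the area ratio,
# so red/blue are sampled at ~71% of the green channel's linear resolution
# before interpolation fills in the gaps.
ratio = (red / green) ** 0.5        # ~0.707

print(f"green: {green}, red: {red}, blue: {blue}")
print(f"red linear sampling vs green: ~{ratio:.0%}")
```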
-
I wish I lived in your world of no colour subsampling and uncompressed image pipelines, I really do. But I don't. Neither does almost anyone else. Yedlin's test is for the world we live in, not the one that you hallucinate.
-
If history has taught us anything, it's that Canon will take forever to release the camera we want, and when they do it will be a huge disappointment. If you have work to do and can do it with the equipment you already have, then use that. If you need a new camera to do it, then buy a used good condition copy of the cheapest camera model that can get the job done, and if the stars align and Canon releases a camera that doesn't overheat, combine LOG with 8-bit, or have the DR of the iPhone 4, then you should be able to sell what you have for close to what you paid for it and buy the Canon unicorn.
-
Actually, the curve on the right is closer to what I would typically do:
-
I think there's a real art to blacks in colour grading. I've learned that getting the right levels in the dark parts of the image has a huge impact on image pop and the overall look.
I'd suggest putting in a pretty aggressive knee, so that anything lower than a certain value gets compressed but doesn't go completely to black and get clipped. You could put that knee quite close to 0 IRE so you don't end up with washed-out looking images, but it would mean you keep whatever information is in the shadows while still squashing the noise so it's not too obvious, and it would also make the image look a bit higher end, since a significant part of the look of high-end cine cameras is how they handle the shadows.
I often set up a curve that compresses the shadows more than the highlights and grade under that. This is a random image I found online that shows what such a curve might look like: My curve is often more aggressive than this, and the more aggressive you make the curve, the more filmic the final image will look.
When you first apply such a curve everything will look over-contrasty, and you will need to manually grade every shot underneath it. Often the Lift, Gamma and Gain (LGG) controls are great for this: the Lift places how far down the curve your blacks go (and also defines overall perceived contrast and adjusts saturation), the Gain places the highlights and gives a nice rolloff (making the edges of any clipping much less obvious), and then you can adjust the overall brightness of the shot with the Gamma. Often you have to go back and forth with these controls - you pull the Lift down to get the shadows right, then pull the Gamma up to adjust the mids, but that also pulls the shadows up a bit, so you pull the Lift down more, etc, until you've pushed/pulled the exposure to a point that looks good.
I've graded many projects by just applying such a curve, then on each shot tweaking WB, using the LGG controls to get levels, then Saturation, and often that's all the project needs. If you have a control surface then the LGG adjustments take very little time and you can rip through an edit very quickly. Happy to elaborate further, just ask.
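For anyone who wants to experiment outside a grading app, here's a minimal Python sketch of the kind of shadow-compressing knee described above. The knee position, floor and power value are arbitrary numbers picked for illustration, not the settings I actually use:

```python
import numpy as np

def shadow_knee(x, knee=0.10, floor=0.02):
    """Compress everything below `knee` so it approaches `floor`
    instead of clipping to black. `x` is a float image in 0..1.

    Above the knee the curve is left untouched; below it, values are
    remapped smoothly so shadow detail (and noise) is squashed but
    never fully crushed. Knee/floor values are illustrative only.
    """
    x = np.asarray(x, dtype=np.float32)
    below = x < knee
    # Normalise the shadow region to 0..1, apply a gentle power curve,
    # then rescale it into the floor..knee range.
    t = np.clip(x / knee, 0.0, 1.0)
    compressed = floor + (knee - floor) * t ** 1.5
    return np.where(below, compressed, x)

# Example: near-black values get lifted to the floor and compressed,
# midtones and highlights pass through unchanged.
print(shadow_knee(np.array([0.0, 0.02, 0.05, 0.10, 0.5, 1.0])))
```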
-
You're really not getting this... You rejected the test because it involves interpolation, which is common to almost every camera, as most cameras have fewer photosites than their output resolution has colour values. You also rejected the test because the Alexa is a 6K camera and not a 4K camera and therefore involves interpolation. The Alexa isn't a common camera, sure, but it shares the colour subsampling properties of most cameras, shares the same 'over-capture' aspect as many other workflows, and is a high quality imaging device, so if you can't tell 2K from 4K from an Alexa 65 then it's a good test and it's applicable to most other situations. A camera with a Foveon sensor does not share the colour subsampling properties of most cameras and therefore isn't a good test, which is why it's a red herring and not applicable to any sensible conversation about perception.
-
I ordered my Tiffen Black Promist 1/8 filter in 58mm as that fits all my lenses, but I didn't want to have to use step-up rings on my tiniest setups (GF3 + 15/8 or 14/2.5), so I looked around for the cheapest 52mm diffusion filter I could find. Enter the "52mm Softfilter Spezialeffekt Diffusor Weich Filter für DSLR kameraobjektiv", which cost €5.49 with free shipping. Hey big spender! It took quite some time to arrive here in AU. It came with the normal hard crystal plastic case in a padded envelope - of course I'd have preferred the filter to be IN the case rather than WITH the case in the envelope, but that just made me clean it and carefully check it for any damage, and it seems unharmed. It is a little strange though. The surface is a bit bumpy, and this is what it looks like: and here is a test with and without the filter (GF3, 14/2.5, RAW stills): It seems to be quite a strong effect, although I find it a bit deceptive because if you zoom in there isn't that much blurring. Given that I bought it for use on 1080p cameras, which will lessen the impact on fine detail, I'll have to put it through its paces on some real footage.
-
If 2K and 4K+ are only perceptually different with cameras that are very uncommon then who cares. You might care about this as a theoretical exercise for its own sake, but I'd suggest that not many other people do. If this was a thread comparing perceptual differences between 600x400 and 640x480 then no-one would have cared because it doesn't apply to the real world or to our lives in any way. This thread is only useful because of its applicability. See my above point.
-
I actually think that it's a pretty killer camera for cine uses. If you're a P4K / P6K potential customer then I think you're also an FP / FP-L potential customer because you're willing to rig the camera up for external storage, you're willing to deal with large file sizes, you're willing to deal with external power solutions (although that's way better on the FP than the BMs), you're willing to sacrifice RS performance for image, etc. The compromises that prevented me from buying an FP were to do with the lower bitrate codecs and the elements that are more 'video' than 'cinema', but even those could be fixed in firmware updates down the track potentially. And yeah, alternatives? None. Most cameras have competitors that overlap with all or almost all features, but even the closest alternatives to the FP or FP-L have quite a number of significant differences, so really there's nothing even close.
-
I've heard zoom lenses are super tricky to service, so you wouldn't be alone. Also, unless it's an Angenieux or something then it doesn't really matter - most of the zoom lenses ever made are cheap now because they were common in the past and undesirable now. I've recently started exploring the FB film-making and equipment groups and vintage lenses with dented filter rings are common, people posting "I bought this lens from a seller who said 'no fungus' but look at these pictures - should I return it?" is common, and I've seen more than one post of "I'm abandoning my attempt to convert my collection of <famous vintage lens name> lenses, lens repairer X quoted me $Y and all the parts are here - offers above $Z please".
-
The whole point of the test was to compare the perceptibility of 2K vs higher resolutions. This is the point you keep missing. Determining if there is a difference between 2K and some other resolution on a camera that no-one ever uses is a useless test. Once again, missing the point.