Everything posted by kye
-
Right, now I get it. Throughout this thread I thought that people were saying "Look how huge it is..... by giving it the body of the S5 they made it huge" when actually they were saying "Look at how the mk2 is the same size as the mk1 - that's outrageous - every camera update should be drastically smaller!" TBH, if we're going to judge everyone that didn't give us what they could have given us, we'd all better be saving up for an Alexa, because the people who worked on every other camera released in the history of the world are going in front of a firing squad tomorrow at dawn.
-
-
I think I understand where @IronFilm is coming from - the advantage of a larger body is that you get dedicated buttons and other things that are useful on set. Think about it, if there was no use for something then they wouldn't add it to the camera, regardless of how large they were allowed to make it. On a controlled set you'd imagine that they'd have a proper cinema lens with remote follow-focus etc attached, matte box, v-mount power, monitor, and the whole thing would be rigged appropriately. By the time you add all that then the difference between an FX3 and FX6 is maybe only an extra 25% to the size of the whole rig.
-
Comparison with S5iiX... Connor shows that sometimes it looks over-sharpened: But that's still not as bad as I remember previous models being.
-
It might be time to upgrade I think! That is a far cry from previous models.
-
Just found this video outlining a bunch of new features in Baselight (the other world-class colour grading platform) and it confirms exactly what my thoughts were about sharpening and blurring, starting at 31m: He set up a frequency generator that starts at low frequencies on the left and goes higher towards the right, then uses the waveform monitor to show the amplitude, essentially showing a frequency response plot. The plot looks like this before any transformations: Adding sharpening to the image increases the amplitude of the higher frequencies (the finer detail): and a simple Gaussian blur reduces them: which is what we want in order to counteract the effects of in-camera sharpening, as well as to replicate the decreasing response of film: I'm HUGELY encouraged now that I've worked out that a blur is a reasonable approximation of a more analog look, and that it also reverses sharpening.
Another thing he demonstrates is that by increasing contrast on the image, you are amplifying the contrast of both the fine detail and the lower frequencies, so you're changing the texture of the image in ways that may not have occurred had the scene been lit with a greater contrast ratio. At 46m he shows an example (from Mindhunter, which was graded heavily in Baselight, fun fact!) with this being how it was shot: then you might add an effect in post as if there was an additional light source: which has the effect of amplifying the contrast on everything, including fine details (ie, what sharpening does), but then he uses a special tool they've developed that only adds contrast to the lower frequencies and doesn't amplify the finer details, and voila: Now it looks natural, like that could have been how it was on set.
I suppose you might be able to build this operation using frequency separation, but in Baselight it's just a slider. I guess that's one of the many reasons why, instead of buying Resolve for $299, you'd buy Baselight, which is more of a "mortgage your house" sort of thing.
The rest of the talk is excellent and worth watching, and this one that is more recent has some new stuff that is also fascinating:
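The low-frequency-only contrast trick can be approximated with frequency separation. Here's a minimal sketch of the idea (this is my guess at the approach, not Baselight's actual tool - all names and parameter values are illustrative, and it uses scipy's Gaussian blur as the band split):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lowfreq_contrast(img, gain=1.5, sigma=5.0, pivot=0.5):
    """Boost contrast only in the low-frequency band of an image.

    Split the image into a blurred (low-frequency) layer and a
    detail (high-frequency) residual, scale contrast on the blurred
    layer around `pivot`, then add the untouched detail back in.
    """
    low = gaussian_filter(img, sigma=sigma)
    high = img - low                      # fine detail, left untouched
    low = (low - pivot) * gain + pivot    # contrast only on coarse structure
    return low + high
```

With `gain=1.0` the split reconstructs the original exactly, and with `gain > 1` the large-scale contrast increases while the fine texture keeps its original amplitude - which is the difference he demonstrates between a plain contrast boost and the "relight" that still looks natural.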
-
Ah, indeed I was! My bad. I re-read the article and it doesn't look like there's any concrete information in it at all about where they shot and how many people were present. A lot like most other journalism then - you read it and feel like you've been told a lot of stuff but when you look again there's just a stream of vague statements 😕 So, who knows why, I guess.
-
Canon mirrorless market lead. What went wrong for Panasonic and Sony?
kye replied to Andrew Reid's topic in Cameras
Wait until the autonomous vehicles and autonomous other-things market really starts ramping up - those things will have dozens of cameras, each needing an entire camera module of its own. It is decades away, but the future is likely to have many times more autonomous devices than there are human beings, and they'll all need an array of various sensors.
-
I suspect it could have been because it was small and they didn't want to get too much attention. "so his crew consisted of just his actors (Scoot McNairy and Whitney Able), a sound tech, a line producer, a translator and a driver. Edwards operated the camera, grabbing footage guerrilla style whenever they came upon a compelling location while traveling through Central America." I'd imagine that security might have been a concern in that region (maybe I'm wrong on that) but combine that with not wanting to draw attention from local officials who might hassle you for permits etc. Also, if you're bringing equipment into a country and are going to leave with it again you will need to declare it at the border (so you don't have to pay import taxes - I can't remember what this is called) and this process is a huge PITA for documentary crews etc, so getting around that would be a huge time advantage (I've seen docos where it takes the better part of a day for each border crossing because of this).
-
I would tend to agree with your assessment that it's deliberate and they'll continue with it. If it was accidental then it's a pretty amazing thing to screw up. Also, while it seems to do enormous "damage" to the video from a technical point of view, the images all looked absolutely fine to me from an aesthetic point of view, so the probability they accidentally created a problem that large that didn't look worse is basically zero. It's yet another example of how you can make huge changes to the video signal and they can be invisible or even desirable - being "accurate" is the starting point of the creative process rather than the goal itself.
-
Thanks! It is a tricky thing that basically every piece of technology faces - how do you cater to entry-level, intermediate, and expert users all with the same product:
- If you make it too simple then the experts complain about lack of customisation (which is what Apple has done here - there aren't enough options).
- If you make it too complex then the entry-level people can't work it out or end up setting things wrong, then blame you for the bad results.
- You can't even make a "pro" mode because half of the entry-level people who don't know anything think they're "pro" and will screw it up and blame you for the results out of embarrassment alone.
-
I got quite excited when they originally released ProRes support in their previous model, as every camera I've seen with ProRes had great images with minimal processing, but unfortunately they just implemented it as a high-bitrate version of the same heavily processed mess that they sent to h265 compression. However, I'd be extraordinarily happy to be proven wrong! I use my phone as the second camera on my travels and it would be great if it could live up to its potential 🙂
-
*shrug* It only looked marginally better than my own footage from my 12 Mini. Until they stop pummelling the footage basically to death internally it won't produce anything other than a brittle, digital-looking image. I've just spent about 90 minutes trying every blur technique I can imagine on some sample footage I have from my 12 Mini, then jumped over to some similar shots from the GX85, and it was night-and-day different because the GX85 footage wasn't over-sharpened before the compression. The GX85 is 8-bit vs the iPhone's 10-bit, but the GX85 was still outright superior, despite both cameras being used outside during the day, so there was absolutely tonnes of light and both sensors would have been at base ISO. The RAW footage from the Android phones looked great - even after being put through the YT compression, which is far more severe than the compression applied by any smartphone manufacturer. I don't know why Apple are so intent on ruining the video signal they're pulling off the sensor.
-
Sorry - just realised I screwed up the math so cut it out of the above posts while the edit window was still open, then re-thought it again. Here's the corrected version....
I'm confused about the math, depending on how you define "sine waves". Here's what I think is the correct interpretation: at 10 cycles/mm, which is where there is no softening, on Super35 3-perf (~25mm x 14mm) that would be a resolution of 500 x 280 at two pixels per cycle. The curves end at about 75 cycles/mm, which is ~3750 x 2100, where the contrast has dropped to between 40% and 15% of the contrast in the scene. There is a chance they mean it in a different way, which doubles the resolutions to 1000 x 560 up to ~7500 x 4200, but I think the first interpretation is more likely to be correct.
These curves are obviously dropping as the level of detail gets finer, but if you were to extend them then they might have a 1% contrast response at over 600 cycles/mm, which would be approaching 30K resolution. This is an extreme extrapolation and should be taken with an entire bag of salt that you buy at the hardware store to put in your pool, but still..... this is why specifying the resolution of film in pixels is an inappropriate measure - it simply doesn't work like that.
All this is just the resolution of the negative. I've heard knowledgeable people say that the resolution of the negative is very high, but by the time you print the negative, cut it up in the edit (in a purely analog pipeline), duplicate that as a master, then take several prints off that to duplicate again into the release prints that go out to each megaplex, the resolution has been reduced to approximately 2K. This is why DI systems were at 2K for so long during the days of film acquisition and distribution. During that period people didn't suddenly start saying that movies were blurrier and asking for their money back - a 2K DI was sufficient.
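The cycles/mm-to-pixels arithmetic above is just the Nyquist rate: representing one cycle takes at least two pixels. A quick sketch of the conversion (gate dimensions are the approximate ~25mm x 14mm figures used above):

```python
# One cycle needs at least two pixels to represent (Nyquist), so:
#   pixels = 2 * (cycles per mm) * (gate dimension in mm)
def cycles_to_pixels(cycles_per_mm, size_mm):
    return round(2 * cycles_per_mm * size_mm)

# Super35 3-perf gate, approximately 25mm x 14mm
print(cycles_to_pixels(10, 25), cycles_to_pixels(10, 14))   # 500 280
print(cycles_to_pixels(75, 25), cycles_to_pixels(75, 14))   # 3750 2100
print(cycles_to_pixels(600, 25))                            # 30000
```

The "different way" of counting doubles these figures by treating one sample per half-cycle as sufficient, which gives the 1000 x 560 up to ~7500 x 4200 alternative.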
-
I've been looking at film, not as the ideal reference to replicate, but as a data-point to help understand what we are seeing. What I have been noticing is that on film, finer detail is present but at a lower contrast ratio, whereas digital keeps the same level of contrast regardless of the size of the detail (up to the sensor resolution limits, and subject to the lens characteristics of course).
For example, here's an image from Game of Thrones, shot on Alexa/RED: and here's a closeup: Notice that the individual strands are basically as bright as the larger areas of his hair, right up to the resolution of the file. Same with this closeup of an image I posted earlier: There is a little bit of contrast loss in these examples, but lenses and compression are also in the mix.
Contrast that with this shot from the original Blade Runner, which was obviously shot on film: Here's a closeup from the above: Note how the individual strands have far less contrast with the background than the more solid areas with many strands of hair. Here is the MTF chart of Kodak Vision3 500T, which confirms these observations. So, what is the actual resolution? <snip> see below.....
Ironically, digital sharpening methods have a completely inverse response - they increase the contrast on fine detail rather than decreasing it. So, digital cameras take a readout from the sensor that has an essentially flat MTF curve, and then apply a transfer function that does the exact opposite of what film did. Awesome..... screw you too! Food for thought.
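That inverse response is easy to verify numerically: a basic unsharp mask's gain rises with spatial frequency, the mirror image of film's falling MTF. Here's a minimal 1D sketch (my own illustration, with arbitrary parameter values - not any camera's actual sharpening pipeline):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def unsharp_gain(cycles, n=4096, amount=1.0, sigma=3.0):
    """Amplitude gain of a basic unsharp mask, measured by feeding it
    a pure sine test signal of `cycles` periods across `n` samples.

    Unsharp mask: signal + amount * (signal - gaussian_blur(signal))
    """
    x = np.arange(n)
    sig = np.sin(2 * np.pi * cycles * x / n)
    sharpened = sig + amount * (sig - gaussian_filter1d(sig, sigma))
    return sharpened.std() / sig.std()

# Gain sits near 1.0 at low frequencies and climbs for fine detail --
# the opposite shape to a film stock's MTF curve, which starts near
# 100% and rolls off as the detail gets finer.
```

Sweeping `cycles` from low to high reproduces the rising curve the Baselight frequency-generator demo shows for sharpening.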
-
In the description he also says: So overall, I don't know what to make of the video. Plus, when you eliminate all the fast cutting by watching it at 0.25x speed, it looks pretty sub-standard.
-
-
Canon mirrorless market lead. What went wrong for Panasonic and Sony?
kye replied to Andrew Reid's topic in Cameras
The domestic segment might be the part of the picture you're missing... I work corporate, and the people who wear business attire to the office and are "into photography" tend to buy "the best", like the 5D and 24-70 F2.8. When I was filming my kids' sports games the parents either used their phones or had enormous DSLRs like the 5D or D800. When I used to go to photography meetups maybe 10-15 years ago, about two thirds had pocket cameras and one third had 5D / D800 level cameras. One woman, a stay-at-home mum whose husband had a fancy job, had a 1DX and all three of the F2.8 zooms, plus a seriously long telephoto that looked ridiculous TBH. Looking at the lenses now, I think it might not have been the 100-400 but the 200-400mm F4. It was so long it made the 1DX look small. Obviously a low percentage of ordinary people are buying Canon DSLRs / MILCs, but there are so many more affluent amateurs than there are professionals that it would be easy for the consumers to dominate the sales figures. They're also a hidden segment - unless you go to kids' sports carnivals you'd never know they existed. You could say that this market segment is driven by reputation, but let's be honest, for stills Canon cameras are absolutely spectacular.
-
I recently got organised and pulled a selection of sample footage from each of my cameras to develop colour grading looks and do experiments. This includes the OG BMPCC and the BMMCC, the Canon 500D with Magic Lantern, as well as much more modern 4K cameras like the iPhone, XC10, GH5, GX85, GoPro and Sony X3000. The footage from the modern 4K cameras looks like video, and whatever I do to it yields only small incremental improvements, but the BM footage looks like cinema camera footage straight out of the RAW tab with literally no grades at all. Something is seriously wrong when you can't grade GH5 or XC10 footage to look even remotely as good as ungraded footage from 2012.
-
That would have required work instead of waving it around, reading the spec sheet, and pontificating to camera!
-
Canon mirrorless market lead. What went wrong for Panasonic and Sony?
kye replied to Andrew Reid's topic in Cameras
Canon are also one of the few that don't use Sony sensors. The colour science of the other brands has been gradually improving, but Canon had it right from the start, in addition to their lenses, which were a huge advantage of course.
-
Yeah, just a little frustrated about the whole world seemingly taking digital cameras and giving them a bunch of crap that is increasingly fringe and specialist, at the expense of simply having nice looking images. Everyone loves how the Alexa looks, even the original, but when it comes to what features we demand in a camera, somehow image quality comes last, with all the BS somehow being more important. Non-cinema cameras have worse image quality now than they did in 2013. No-one stopped to ask: if the first 2 million pixels look like crap, why would I want 62 million more?
-
The original quote that I disagreed with was: So, @gt3rs, you're saying that because you, as a professional, are frequently asked for a mix of horizontal and vertical content, the sensors in my phone, in my laptop camera, in my car's reversing camera, in every GoPro, in the GX85 update, in the Sony A7C, in the A6800, etc, should all be square? Really? I've said recently that the primary failure of folks is not understanding that others aren't the same as them, and the second failure is that when they do learn of differences they automatically think that everyone else is less valid than they are, less deserving, less relevant. To think that every camera on earth should be changed just because it would be nicer for you.... I mean, that's the complete peak of ignorance and arrogance, right?