Everything posted by kye
-
@Django @PannySVHS You guys are hilarious. I wonder if you can tell me what the colour science of the next Canon camera will be like? I mean, I know you haven't seen it, but you also haven't seen the Mini adjusted in post to match the 65, and you're confidently speaking about that! Maybe you can compare the GH6 to the next Canon camera? If I promise to take a video of the lottery numbers next week, can you tell me what that video contains? That'd be great, thanks!
-
@Django @PannySVHS I really truly would encourage you to compare the footage yourself. There are SO MANY tiny differences between them, and they're often too subtle to see unless you're flipping between them in an NLE. However, once you equalise them, or even get close, you'd be amazed at how much the gap closes.

Some differences aren't that subtle though. Like you said - let's look at 13:50. The 65 is quite obviously brighter. Guess what - making an image brighter makes it nicer! A common trick in colour grading is to track a window over the main subject in a shot, add a bit of contrast, and raise the brightness. Even a subtle touch of this changes the entire composition of an image - it's that powerful an effect. Here, you get it for free from the camera/lens/grade.

There are a dozen other things different between the shots, mostly too subtle to see, but they add up. Doing the comparisons yourself is a masterclass in aesthetics, and shows why this isn't a good comparison of sensor size.
-
That's a fascinating video, but probably for all the wrong reasons. I spent over a dozen hours when it was released with a range of shots from it, adjusting them in post to be a more equal match. By the time you take the Alexa Mini shots and adjust the vignette, barrel distortion, contrast, colour, diffusion, sharpening, and cropping to make the frames much more comparable, the differences between the S35 and LF sensors are greatly diminished. I HIGHLY HIGHLY HIGHLY recommend that everyone downloads the video, pulls it into Resolve, and tries to adjust the images to be more similar. I learned an incredible amount about imaging from this exercise.

It really gave me an appreciation for how much better the colour science is on the 65 than the Mini - despite the Mini being spectacular in its own right to begin with. ARRI truly did an incredible job in its development. Matching them is difficult because the vignetting is very non-linear, so all I could do was approximate it. I went the whole way and spent hours adjusting one image while it was on top of the other with a "difference" blending mode, so I was only seeing the mathematical difference between the images, trying to line everything up perfectly to minimise the errors (see the sketch below for the basic idea). I performed individual colour grades on different parts of the image in this mode, eliminating tints and shifts in the colour science, and so on. Most of the adjustments were non-linear in this sense.

Unfortunately I couldn't get them to match closely enough to "see through" the various optical differences and answer the question of whether there was a difference between the sensors. It's a great test for demonstrating the difference between the two cameras, but a terrible test for showing the effects of sensor size alone. As great as an Alexa Mini is, the 65 is a far superior image, so in a way it's like comparing a GH1 kit-lens combo with a Sony Venice Masterprime combo and saying the difference is due to the sensor sizes.
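For anyone who wants to score this kind of matching numerically rather than by eye, here's a minimal sketch of the "difference" idea in Python - the file names are hypothetical stills exported from each grade, not anything from the actual video:

# Score how closely two graded frames match via their mean absolute
# difference - the numeric equivalent of stacking them in a "difference"
# blend mode and trying to make the screen go black.
import numpy as np
from PIL import Image

def match_error(path_a, path_b):
    # Convert to float so the subtraction can't wrap around at 0/255
    a = np.asarray(Image.open(path_a), dtype=np.float32) / 255.0
    b = np.asarray(Image.open(path_b), dtype=np.float32) / 255.0
    assert a.shape == b.shape, "frames must be the same resolution"
    return float(np.mean(np.abs(a - b)))  # 0.0 = identical

# Hypothetical exports of the same frame from each camera's grade
print(match_error("alexa65_frame.png", "mini_graded_frame.png"))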
-
They're around, but were always quite expensive... good luck!
-
That makes sense, as there are definitely more classic FF stills lenses around than other formats. Your "no math involved" logic isn't so straightforward though, in cinema at least. For years I consumed content designed for consumers where FF was the reference. I learned that a 50mm lens was a 'normal' lens because its focal length is similar to the diagonal of a 35mm frame and compares to how the mind 'sees', and I also learned that the 50mm is a very common lens for shooting films. I learned that S35 was the standard format for film and all about 3-perf and 4-perf and Vistavision and... umm, hang on! When we learn that The Godfather was shot purely on a 40mm lens, that's on film, so it's a 58mm FF FOV - right? And Psycho and its 50mm is a 72mm FF FOV - right? (The quick sanity-check after this post runs the numbers.) I genuinely don't know how many times I heard about a 50mm lens and thought "that's a 50mm FF FOV". https://www.premiumbeat.com/blog/7-reasons-why-a-nifty-50mm-lens-should-be-your-go-to-lens/ Maybe your comment about "no math involved" is right, but in place of the math we get confusion rather than clarity!

This is another one that I'm not sure about. If this were true then a 10mm lens on an S16mm camera should be unwatchable, yet it looks similar to a 28mm lens on FF. In fact a 10mm lens was a pretty common lens back in the day - so common that almost anyone can afford to buy an Angenieux! https://www.ebay.com/itm/154807066202?hash=item240b385e5a:g:-BAAAOSwehBh4xlM
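Here's a quick sanity-check of that crop-factor arithmetic. The crop factors are approximate - they vary with the exact gate or sensor - so treat these as ballpark figures rather than authoritative numbers:

# FF-equivalent focal length = actual focal length x crop factor
CROP_FACTOR = {
    "FF": 1.0,    # 36x24mm reference
    "S35": 1.45,  # roughly - varies by gate (3-perf, 4-perf, etc)
    "S16": 2.9,   # roughly
    "MFT": 2.0,
}

def ff_equivalent(focal_mm, fmt):
    """Focal length giving the same FOV on full frame."""
    return focal_mm * CROP_FACTOR[fmt]

print(ff_equivalent(40, "S35"))  # The Godfather's 40mm -> ~58mm FF FOV
print(ff_equivalent(50, "S35"))  # Psycho's 50mm -> ~72mm FF FOV
print(ff_equivalent(10, "S16"))  # a 10mm on S16 -> ~29mm FF FOV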
-
The quote from Ozark presents a common view - "the FF look is having shallower DoF", or a fast wide image. I don't want to say that it's wrong, but it's not actually about sensor size so much as about lenses. The myth is that you can't get shallow DoF on crop sensors, but the reality is that nothing is stopping this from happening except that they don't make the lenses required (eg, fast wides), or they're so expensive that the people who can afford them just shoot FF anyway (eg, Voigtlanders) - the equivalence arithmetic below shows why a fast wide closes the gap. It's like if I bought a set of paint-brushes and only bought black and white paints, painted in black and white for ages, and then said "these paint brushes have a 'look' - they lack a certain lifelike quality". Obviously this would be ridiculous, as the lifeless quality would come from the paint I used with the brushes, not the brushes themselves. Of course, there's not much point in saying that an iPhone sensor can do anything a FF sensor can do if all it needs is a lens that no-one has ever designed or built or sold, because that ignores the fact you need a lens to make an image. I imagine much of this reasoning (about shallow DoF) dates from before fast glass was everywhere, so it's less relevant now, I think. The same type of logic could be argued for other factors as well, like DR, noise, etc, which differed between sensors due to limits of the technology rather than the physical size of the sensor having some fundamental limitation. Having said all that, I'm still wondering if there are other factors that sensor size contributes to that might be causing an aesthetic difference. I'm curious to hear any other thoughts you might have.
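A hedged sketch of the equivalence arithmetic behind the "fast wides" point: multiplying both the focal length and the f-number by the crop factor gives (approximately) the same FOV and DoF on full frame. The Voigtlander numbers below are just an illustrative example:

# Equivalence: same FOV and same DoF when focal length and f-number
# are both scaled by the crop factor (approximation, but a good one)
def ff_equivalent_lens(focal_mm, f_number, crop):
    return focal_mm * crop, f_number * crop

# eg a Voigtlander 17.5mm f/0.95 on MFT (crop ~2.0)
focal, stop = ff_equivalent_lens(17.5, 0.95, 2.0)
print(f"~{focal:.0f}mm f/{stop:.1f} FF equivalent")  # ~35mm f/1.9

So a fast enough wide on a crop sensor lands in the same DoF territory as a classic FF combination - the limitation is the lens catalogue, not the sensor.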
-
I'd be curious to see which tests you're talking about that get different results. I'm not questioning that you've seen it, but curious as to what other factor is probably causing what you saw. The quality of camera comparison tests is very, very poor, even with professional cinematographers testing out high-end cameras. One of the things these people love to do is dial in 5600K on each camera and just assume that's the same WB, and when you look at the footage it's clearly nowhere near the same. They then talk about what they're seeing, completely ignoring the fact they've stuffed up the whole test. Other people do that and then WB in post, but this ignores the colour science that might be applied if the footage wasn't shot in RAW. I see basic errors that invalidate tests time and time again. What you saw was probably someone messing up the test like that. There's a reason that the R&D departments of huge companies have enormous budgets - doing proper testing where everything is controlled is extremely difficult. It's called "science", and it's an entire field of study that even professional scientists get wrong much of the time, let alone cinematographers who may well never have studied science past high school, let alone the fundamentals of experiment design and the scientific method.
-
This page at Adobe says "You can still edit JPEG and TIFF images in Camera Raw, but you will be editing pixels that were already processed by the camera. Camera raw files always contain the original, unprocessed pixels from the camera." That's why I suggested it might be an option. IIRC TIFF can also be uncompressed and can contain high bit depths, so it seems a good container to keep the benefits of RAW files while getting the pixels into a format that works with ACR. It would still have to be de-bayered in Resolve of course, but I doubt that the aesthetic difference the OP is seeing is some revolutionary approach to de-bayering. Of course, maybe it is and this approach may not work, but if it didn't then Resolve would be ruled out anyway, and it's a convenient tool considering the OP already has it.
-
(I would imagine that shooting in 240p would make sensor sizes very difficult to discern, so it makes me wonder if increased resolutions have the opposite effect. Once again, I'm trying to gather people's impressions and see if we can learn anything about what might be causing these impressions for people.)
-
What I was asking was if higher resolutions made sensor size more 'noticeable'. ie, back in the day we had S16, MFT, S35, and FF cameras that all shot a maximum of 1080p. At that time there would have been people shooting video on all of these, and so there would have been a certain aesthetic impression of what sensor size contributed. Now we have MFT, S35, FF, and MF cameras that are shooting 4K or above. I'm wondering if there is now more of an aesthetic difference between these sensor sizes. ie, "When we shot 1080p it wasn't that noticeable, but now with 4K I can tell a FF sensor from a mile away!" or the opposite. The reason I ask is that you said your impressions of MF/FF vs smaller formats were only about stills images. Stills are likely to be higher resolution than video, so I was wondering if it was the resolution bump from the stills that made format differences visible to you.
-
I bought a pair of Manfrotto Xume adapters to add a fixed ND and found them to be super convenient, so definitely a useful purchase for working quickly. One thing that comes to mind is that I don't know how well they would work with a vND, because if you try to rotate the filter it might slide on the magnetic interface rather than actually change the vND strength - the same goes for any other filters that are adjustable like this. You don't seem to be using any, but I thought it was worth a mention. The other thing is that you have to mount the magnetic rings onto the filters themselves (two of them if you want to be able to add/remove each filter independently), so be aware that it will make your filters thicker, and that might cause challenges for whatever you're storing the filters in when they're not on the lens.
-
The level of work to integrate two completely separate BIOSes would be staggering. Assuming that this move from them is indicative of a future direction where hybrid cameras aren't in a parallel universe from their cine cameras, it would make sense for them to start again and build a new one from the ground up. They might have been doing this in the background for years. Certainly, other corporates I've been in often have a little 'pie-in-the-sky' project for concept development, which then moves into feasibility and high-level scoping after a few years, then into a preliminary design phase, before having its tyres kicked by the bean-counters and potentially being green-lit for development into an actual prototype etc.

My impression of some of their previous limitations (eg, why Canon 1080p was really just soft 720p) was that they were related to legacy hardware, and even legacy hardware architecture. If they were sufficiently pushed into a corner by the market to do something about it, then I'd suggest that developing a unified hardware architecture and a unified BIOS architecture would make sense. They could structure it to be configurable, so that the BIOS could be configured for the presence/absence of various interfaces, processing chips, etc. It could easily be configured to cripple-hammer too, so they could arbitrarily disable various functions in software on a particular model even if the hardware would support them. It could even enable paid upgrades to unlock features. Time will tell I guess.
-
@ChrisInColour @Andrew Reid of course - that makes sense. Bummer. Could you use another format from Resolve that Adobe could import? TIFF perhaps?
-
I'm not really familiar with the difference between motion tracking and motion capture (I am now - I just looked it up!) but obviously I'm at the edge of my knowledge 🙂 One thing that stands out about the difference is that the accuracy of motion tracking the camera/lens/focus ring would have to be spectacularly higher than for motion capture of an object in the frame - unless the sensors for these things were placed very close to the camera, which would limit movement considerably. I guess we'll see though - @BTM_Pix usually delivers, so I wouldn't be surprised to see a working prototype in a few days, and footage a few days after that!
-
Happy for them to have all the opinions in the world about aesthetics. It's questionable if you'd want a YT channel to have opinions about the usability of something like a cinema camera when all they understand is making YT. Opinions are absolutely NOT welcome when it comes to the engineering - that puts them in the land of "alternative FACTS", which I like to just call "lies". Sadly, there's a lot of the middle one from camera YouTubers, and a smattering of the latter, mostly concentrated in a minority.

Fair enough that you were talking about stills. Do you find that with the higher resolution cameras (BMPCC 6K, R5 8K, etc) sensor size makes more of a difference than it did in 1080p? If so, it could be a resolution thing?

I've subscribed to his channel for years, but don't watch many videos. IIRC it's the channel for a retail store and has a relatively similar style to CVP - ie, based on specs and practicalities and designed to assist people in understanding equipment prior to purchase or rental.

Yeah, like I said - lots of variables change, and so there's no comparison that's even remotely straightforward.
-
I disagree about them being camera related. The whole idea of a vND is that it acts as a colour-neutral filter that simply lets through a proportion of the light. Any NEUTRAL density filter will attenuate all frequencies of light in equal proportion (the sketch below shows the basic arithmetic). We know they're not perfect and this results in colour shifts; however, all cameras 'see' in the same basic RGB colours. There can be very slight differences in which frequencies of light different manufacturers' sensors are sensitive to, but these will be very, very small differences, and if a filter is so crazily built that it behaves very differently on one camera than another, then you'd want to avoid it at basically all costs, as its colour response would be spectacularly non-neutral. Here's a plot comparing the spectral sensitivity of a range of digital cameras - not much difference: Source: https://www.researchgate.net/publication/342113086_Introducing_the_Dark_Sky_Unit_for_multi-spectral_measurement_of_the_night_sky_quality_with_commercial_digital_cameras
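As a side note, the arithmetic of "neutral" density is simple enough to sketch - a filter of optical density d cuts transmission to 10^-d of the incident light, uniformly across wavelengths if it's truly neutral:

# Convert ND optical density to stops and transmitted fraction
import math

def nd_stops(density):
    """Stops of attenuation for a given ND optical density."""
    return density / math.log10(2)  # log10(2) ~ 0.301

for d in (0.3, 0.9, 1.8):
    print(f"ND{d}: {nd_stops(d):.1f} stops, 1/{10**d:.0f} of the light")
# ND0.3: 1 stop (1/2), ND0.9: 3 stops (1/8), ND1.8: 6 stops (1/63)

Any colour shift a real vND shows is a departure from this ideal flat attenuation - a property of the filter, not of whichever camera happens to be behind it.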
-
Any chance of sharing the test, and original files? I'd be VERY keen to play with them and see how they compare!
-
For those people there are only two kinds of delays: ones that take less than the 0.2s that their ADHD will tolerate, and larger ones that mandate Instagram usage. My experience of living with ADHD teenagers is that normal life provides 100+ moments when social media can be utilised, therefore the delay in mode switching is basically invisible!

Thinking more about their choice to make it a dual-boot OS reveals something very interesting. By going dual-boot they're essentially showing that it's easier to take an existing Cinema BIOS and port it to the R5C hardware than to integrate the new features into the stills BIOS. This implies that:
- the hardware in the R5C must be relatively similar to a cinema camera, otherwise it would have been too difficult to port and they would have gone the other way (like they normally do)
- the divide between the stills and cinema divisions must be far less now, as the development would have required someone from the cinema department to work with the stills department to develop the hardware and get the software to work

Obviously it's not clear what the politics are, but it implies a cultural change within the company, either to let it happen (if you're the cinema department) or to order it to happen (if you're in charge and can make people do things they don't want to do).
-
The big drive for resolution in Hollywood comes from the VFX teams, who require it for getting clean keys but also for tracking purposes. I've heard VFX guys talk about sub-pixel accuracy being required for good tracks, because by the time you use that information to composite in 3D elements, which could be quite far into the background, the errors can add up (the sketch below gives a feel for the numbers). Obviously each technical discipline wants to do its job as well as it can, and people do over-engineer things to get margins of safety, but I got the impression that sub-pixel accuracy was in the realm of what was required for things to look natural. The human visual system and its spatial capability is highly refined and not to be underestimated, though of course this will be context-dependent. If you were doing a background replacement on a hand-held closeup with relatively in-focus mountains behind, then a tiny amount of rotation would cause a large offset in the background and be quite visible. Altering the background of a shot with moderately shallow DoF where the camera only moves on a slider would be a far less critical task.
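A rough sketch of why tiny tracking errors matter for compositing - a small pan error maps to a pixel offset in a distant background. The numbers are illustrative assumptions, not figures from any particular pipeline:

# Small-angle approximation: the background offset scales linearly
# with the angular solve error as a fraction of the horizontal FOV
def pan_error_pixels(error_deg, hfov_deg, width_px):
    return error_deg / hfov_deg * width_px

# Hypothetical numbers: a 0.01 degree solve error on a 20 degree HFOV
# lens at 4K width shifts a far background by ~2 pixels
print(pan_error_pixels(0.01, 20.0, 4096))  # ~2.0 px

So even a hundredth of a degree of solve error is already a visible multi-pixel slide in a locked-off background, which is why the trackers want sub-pixel accuracy and as much resolution as they can get.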
-
I've watched a number of tests comparing vNDs over the years and agree - the quality is limited regardless of budget. Cost isn't a predictor of performance either, with some mid-priced options out-performing higher-priced options, often quite considerably.
-
I'm not questioning if there is a look, I'm trying to work out what technical aspect might be causing it. Anything you can understand you can work with, and potentially accentuate or minimise for creative effect.

One of the main challenges with trying to compare sensor sizes is that you can't change single variables - every change affects so many variables simultaneously that it's almost impossible to do any kind of aesthetic test. What I mean is that you can get two cameras with different sensor sizes and two lenses and set them up to have an identical FOV and POV. If you organise your test well you can also perfectly match one or two other variables at the same time, but you'll have probably 5+ other variables all different. I think that's why there are a lot of very well done technical comparisons that only focus on one or two variables (for example Steve Yedlin's excellent comparison of lens blur on different formats) but only very subjective comparisons of the overall 'look' between formats. I've read a lot of these accounts of subjective comparisons and tried to discern what technical aspects might be behind them, and I'm yet to find anything in particular that is fundamentally different, but there are still many factors I haven't ruled out, and I've definitely learned a lot along the way.

One thing that I thought was especially interesting was the effect that background defocus has on 3D 'pop'. In my lens tests I have consistently found that even a small difference in background defocus (ie, shallower DoF) had a large impact on depth. One test I did involved comparing lenses all at the same aperture and looking with one eye through a roll of cardboard so that I couldn't see the edges of the image, and comparing how much depth I perceived from the image. The interesting thing was that there was a surprisingly strong perceived difference between a 55mm lens and a 58mm lens at the same aperture. Obviously the 58mm lens had slightly shallower DoF, but it was so slight that I had to actually measure the bokeh balls in the background to confirm it was different; subjectively it made a much bigger difference than you'd imagine (the numbers after this post show how small the optical difference really is).

My current thinking is that it's likely to be a combination of a range of factors that accumulate to form the aesthetic impression. Of course, I still have much more to learn, so this isn't a conclusion but rather a working theory. I do find that it's been a very good question to ponder, as it has led me down quite a number of paths of enquiry that have taught me a lot about the technical aspects of a digital imaging system, as well as the aesthetic implications of those aspects. Like all things, the value is in the questioning...

Interesting about the Panasonic vs Sony EOSHD Pro Colour sales, but not entirely surprising. Sony used to have terrible colour! My impression of Panasonic colour is that it was OK with the GH5 but has gotten nicer with subsequent releases. If the GH6 has Panasonic S1H level colour then that would be a huge draw-card for me in upgrading, I think. There's a rumour that something will happen in their live-stream this week, so I guess we'll see about that 🙂

I quite like Kai, but with a few caveats. Firstly, he doesn't make the mistake of stepping out of his expertise (or doesn't do it like others anyway). He doesn't pretend to know the tech, doesn't try to explain it, and doesn't pretend that his testing is anything other than waving a camera around in a relatively haphazard way. I've delved into the world of professional DPs and seen their camera tests (which are very difficult to find, BTW, as they're normally on Vimeo with cryptic titles), and these typically only test one variable at a time and aren't meant in any way to be a review, just an exploration of the tech. There is also a world of semi-professional DPs who do commercial work but also do YT and non-DP revenue streams (like Tom Antos, Matteo Bertoli, Humcrush Productions, etc), but even these guys often have elements of their testing processes that aren't controlled for. Of course, they're not normally claiming that a test is pristine, and not claiming to know the tech or trying to explain it, but sometimes I'll watch a test and think "I wish they'd manually WB'd the cameras beforehand rather than just setting the colour temperature", or similar things like that.

This leaves the poor YouTubers with basically no hope. They don't actually shoot things professionally like the "hybrid" DPs, so they can't talk about the concerns or working methods of real sets, and they also don't have the discipline that sets often involve, like a DP requesting a particular T-stop and lighting ratio and doing things by the numbers. They also don't have the technical discipline to review things, because they are in the business of producing, hosting, filming, editing, and selling advertising on a show, rather than in the tech itself. Some of these people understand that and keep within the lines; others just don't, and make fools of themselves in the process. Of course, the sad thing is how many people don't know enough to know the difference, which is why these people can have lots of followers and yet fumble most of their content.

What are your thoughts on the OG BMPCC and BMMCC in this regard? I thought they were well known for their magic / mojo. If so, they are an interesting example because they're doing it despite their sensor size rather than because of it. They do raise an interesting element though, which @Andrew Reid touched on earlier when talking about how much of the lens image circle falls onto the sensor. Despite the BMPCC / BMMCC having smaller sensors, they are often used with c-mount lenses that were designed for this sensor size, or potentially even smaller, and thus they are seeing almost all of the image circle of many of the lenses they are used with. I must admit that I find them less magical when used with glass designed for larger sensors like MFT or FF.
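On the 55mm vs 58mm observation, here's a sketch using the standard thin-lens blur-disc approximation: for a subject focused at distance s, a background at infinity renders as a disc of diameter f²/(N·(s−f)) on the sensor. The subject distance and aperture below are illustrative assumptions, not my actual test setup:

# Blur-disc diameter on the sensor for a background at infinity,
# given focal length f (mm), f-number N, and subject distance s
def blur_disc_mm(focal_mm, f_number, subject_m):
    s = subject_m * 1000.0  # metres to mm
    return focal_mm**2 / (f_number * (s - focal_mm))

b55 = blur_disc_mm(55, 2.0, 2.0)
b58 = blur_disc_mm(58, 2.0, 2.0)
print(b55, b58, b58 / b55)  # ~0.78mm vs ~0.87mm: ~11% bigger discs

An ~11% difference in bokeh-ball diameter is small enough to need measuring, which matches how subtle it looked objectively versus how strongly it read subjectively.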
-
I'm guessing that if you have the Studio version of Resolve you could do it? I'd imagine that the files from the R5/R3 will be readable in Resolve, and it can definitely export CinemaDNG files. You'll need the paid (Studio) version though, as the free version has a resolution cap (UHD, I believe), and I'd assume you're dealing with larger resolutions from those cameras.
-
Why am I now having visions of a BMPCC6KPro with a 15mm F8.0 Body Cap Lens 🙂 🙂 The other combination, well, I'll leave some mystery to that one until I do it.
-
I tried the 3x bitrate on my 700D and found it did very little to improve the standard compressed file, as both were low resolution (the 700D was ~1.7K) and still over-sharpened. YMMV of course, and the EOS-M is a different camera too. This is a great feature, as 2.8K is a sweet spot in resolution I think. It's hard to make a true 1920 sensor readout look sharp, because it's not 1920 4:4:4, so the debayering adds a blur, but 2.5-2.8K adds just the right amount of oversampling without the 6K or 8K ridiculous-o-vision and the file sizes to match (the rough numbers below show why).
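A rough sketch of why ~2.8K is a sweet spot for a 1080p finish. Debayering recovers only a fraction of the photosite count as real luma detail - the ~0.7 factor below is a common rule of thumb, an assumption rather than a measured figure:

# Estimate real resolved detail after debayering a Bayer sensor
def effective_px(photosite_width, debayer_factor=0.7):
    return photosite_width * debayer_factor

for w in (1920, 2800, 4096):
    eff = effective_px(w)
    print(f"{w}px readout -> ~{eff:.0f}px of real detail "
          f"({eff / 1920:.2f}x a 1080p timeline)")
# 1920 -> ~1344px (under-resolves 1080p); 2800 -> ~1960px (just right)

So a 1920 readout genuinely can't fill a 1080p frame with real detail, while ~2.8K lands almost exactly on it without the storage penalty of 6K+.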
-
OMFG! Famous for good reason!!

I've been trying to work out what might cause different sensor sizes to have a different look, and one aspect that I haven't ruled out is the percentage of the sensor that is able to absorb light (ie, not the area between the pixels). This plays into how the threshold between in-focus and out-of-focus is rendered - the "roll-off". Now that manufacturers have managed to make the gaps between the pixels smaller, have you noticed if this changes the look of the format?

An upgrade at last! In the case of the GH5 (which I own and appreciate) it should be noted that it has quite average colour science. Compared to the superior colour science of the OG BMPCC or BMMCC, the GH5 pales, and so do its images, to me at least, despite those cameras having smaller sensors than the GH5. The problem with assessing the aesthetics of such things is that it's very difficult to create a direct comparison where everything else is equal. A slight difference in brightness or contrast or saturation or WB or DoF can overrule the more subtle aspects like what we're talking about here.

I would suggest that almost all camera reviewers understand less than half of what is going on inside the camera - most less than a third, and probably the majority closer to 10%. Chris and Jordan are particularly bad because it's obvious that, along with the tech, they also don't understand how people significantly different to them use cameras - people who are probably responsible for the majority of images created.
-
Ah yes, I had forgotten you'd shared info early on the first page. I guess I got distracted by the subsequent posts where it seemed nothing had been established except an increasing urgency!