Everything posted by kye
-
Everyone wants a new camera all the time. Except when they're asleep, and when they're thinking about what they're shooting and what is in the final video.
-
No, I wasn't. This is from CineD. Source is: https://www.cined.com/content/uploads/2018/10/recent-results.jpg What's the article that shows the chart you showed? I don't recall having seen that before.
-
The better I get at grading, the more I can see aspects of the OG BMPCC / BMMCC images that aren't there in modern sensors. I made that thread about every camera looking like a Sony some time ago and it's still the case - almost every camera looks like a Sony sensor in a box. The theory is that a perfect camera can emulate any other camera, but there are two caveats:

1) The perfect camera has to surpass the "target" camera in every way, and there are certain aspects of image quality where the OG BMPCC / BMMCC still surpass every modern camera except the Alexa and RED, and perhaps the Venice and other random examples.

2) The colourist has to be up to the challenge. Steve Yedlin famously emulated film with an Alexa, but this was pretty controversial because most colourists are simply not capable of that level of image processing - he had to write his own custom software to do it - let alone taking a Sony image and making it look like an Alexa. To put it bluntly, even if Sony made a sensor good enough (and you could afford it), you're not good enough to grade it, and neither is almost anyone else, perhaps on the planet.

In the camera sensor landscape there are essentially four players that I know about: ON Semiconductor, who make the Alexa sensor; Fairchild, who made the OG BMPCC and BMMCC sensors; Canon, who make their own sensors; and Sony, who make basically everything else. The Alexa and Fairchild sensors were developed at a time when the noise and colour reproduction of film were the most crucial aspects of image quality for their customers. The primary driving factors for Sony are resolution and dynamic range.

You're right that smartphones will replace most cameras, but not because they will be better - it's obvious that they aren't - but because people will change their tastes to align with what smartphones can do. If all you see are "Sony" images then you'll be happy with them because you don't know any different.
Almost everyone on these forums has now adjusted to the look of Sony sensors, which is an increase in technical specification and a huge decrease in emotional performance. This emotional performance is why companies like Leica stay in business making cameras that are technically worse but still sell for many times the price.
-
You're thinking about this wrong. The P4K / P6K / P6K Pro have a very strong feature-set and this addition gives them PDAF. The only other ways to add good AF to such a setup are to either hire a focus puller, or to upgrade to a camera that has everything the BM cameras have but also has PDAF. Either of those options easily runs into the tens of thousands of dollars. The problem with cameras is that people don't compare the full package, only isolated features. When you're buying a camera you can get one feature very cheaply, but each extra feature you want multiplies the cost - by the time you require internal RAW and 13 stops of DR, the addition of PDAF leaves a pretty small and expensive list of candidates.
-
You're thinking about this wrong - when you shoot LOG you create the highlight rolloff in post using the methods I've mentioned. For all practical purposes, all cameras are just large arrays of linear light measurement - they don't have any highlight rolloff at all. The "look" of each camera is defined almost exclusively by the colour processing that happens after the image is captured (and a tiny bit by the sensor), and when you shoot log you're in control of the vast majority of that processing.

I couldn't find reliable DR tests for the Z6, but the GH5 doesn't compare well to the X-T3: the GH5 v1 has 10.8 stops, the GH5 v2 has 11.5, and the X-T3 has 13 stops. DR can make more of a difference than many people think when shooting in uncontrolled conditions, and when it's not just art but needs to be informational as well. Shooting the GH5 v1, I am often forced to choose between clipping the whole sky and being able to identify the person sitting in the shade. A choice between "here's a photo of Susan and the sky is digital white" and "here's a nice photo of .... someone? who is that?" isn't a choice I enjoy having to make.

Of course, this is actually a function of how usable the image is in latitude tests rather than DR as a single number. Having a photo of Susan where she's bright purple is better than her not being recognisable, but it still leaves a huge amount to be desired.
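To make the "rolloff is created in post" point concrete, here's a toy sketch (my own illustration, not any camera's actual pipeline) of how a shoulder curve applied in the grade turns hard linear clipping into a gradual rolloff. The curve here is the extended-Reinhard formula; the `white` point is an arbitrary choice.

```python
# Sketch: highlight rolloff is a post-process mapping, not a sensor property.
# A linear sensor clips hard at 1.0; a shoulder curve eases into white instead.

def hard_clip(x):
    """Linear capture: values above 1.0 are simply gone."""
    return min(x, 1.0)

def reinhard_rolloff(x, white=4.0):
    """Extended-Reinhard shoulder: approaches 1.0 gradually,
    only reaching full white at the chosen 'white' level (here 2 stops over)."""
    return (x * (1 + x / (white * white))) / (1 + x)

for stops_over in [0, 1, 2]:
    x = 2 ** stops_over            # scene value in linear light, 1.0 = clip point
    print(stops_over, round(hard_clip(x), 3), round(reinhard_rolloff(x), 3))
```

Everything at or above the clip point is identical after `hard_clip`, while the rolloff curve keeps those values distinct all the way up - which is exactly the gradation you're choosing to build when you grade log footage.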
-
The ultimate statement about DR is perhaps that the people shooting with an Alexa or RED, which have the highest DR around, still light their scenes and control their contrast ratios. Yes, they can push and pull the RAW / ProRes files from those cameras in ways that we can only dream of, but they still take the time and effort to capture it right in-camera. There's no substitute for making your scene look how you want it, even if you have 15+ stops of DR and 14-bit RAW.
-
There's no substitute for dynamic range. You can emulate the nice rolloff (highlights, shadows, or both) of high-DR cameras by using curves, but that will increase the contrast of the image. If there's already lots of contrast in the image then you can emulate the lower-contrast look by reducing contrast so the mid-tones are softer, but that will ruthlessly reveal your white and black clipping levels.

Another trick is to raise the levels of your image, which gives you some extra space to lower contrast or roll off the highlights nicely, but that will raise your noise and black levels, so the trick is to apply noise to mask this. The cost of this look is elevated black levels and lots of noise in the image. It's not really for modern work, but it can work for vintage / film looks.

High-DR cameras are desirable because they give you the flexibility in post to choose whichever look you want, but lower-DR captures (like 709 profiles that don't include the full DR of the camera) simply can't do everything - you have to choose the trade-offs. Higher-DR cameras are also a challenge in post because you're trying to pack all that DR into the lower-DR 709 profile to publish, so in a way a lower-DR capture is just doing in-camera what you would have to do somehow in post anyway, except that you don't get to fine-tune or apply curves like you can in post. I shoot higher-DR scenes with moderate-DR cameras (GH5 HLG, which is almost 11 stops, and the OG BMPCC and BMMCC, which are 12.5 stops) and even these are difficult in the grade when you're capturing their full DR.
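As a toy illustration of the "raise the levels, soften contrast, mask with noise" trick - all the numbers here are arbitrary starting points, not a recommendation:

```python
import random

def vintage_levels(x, lift=0.06, gain=0.85, noise=0.01, rng=random.Random(0)):
    """Raise the black level and compress contrast, then add grain so the
    lifted, flattened floor reads as 'film' rather than 'washed out'.
    x is a pixel value in 0..1; lift/gain/noise are arbitrary choices."""
    y = lift + gain * x              # elevated blacks, lowered white point
    y += rng.gauss(0.0, noise)       # grain masks the raised noise floor
    return min(max(y, 0.0), 1.0)
```

With `noise=0` you can see the trade-off directly: black maps to 0.06 instead of 0.0 and white to 0.91 instead of 1.0, which is exactly the elevated-blacks, rolled-white look described above - the grain is what stops the lift from looking like a mistake.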
-
Thanks for sharing - always interesting to see people's work. I noticed that the stills are very different to the video (subject matter, composition, etc) - every other wedding shooter I've seen includes the same in both, so the wedding films tell the story with the getting ready / dress / rings / first-look / venue / guests / waiting / bride entrance / ceremony / kiss / rice confetti / reception venue / buffet / entrance / toasts / speeches / first dance / party mayhem.... Curious to hear your thoughts on taking a different approach?
-
My tips for highlight rolloff are to use a curve that makes the transition more gradual, so things ease into clipping rather than hitting it abruptly. That requires you to either add contrast or lower the white-point, so it's a creative choice, and either can help if things are clipped. It's also nice to desaturate the whites, which hides the point where the channels clip at different levels. Ultimately, if you want to 'fake' a log look, the best way is to shoot subjects that are low-contrast to begin with - that way the files SOOC will be quite flat because the subject was flat.
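The desaturate-the-whites idea can be sketched like this (a rough illustration of the principle, not any grading tool's actual math - the Rec.709 luma weights are standard, the knee value is arbitrary):

```python
def desat_highlights(r, g, b, knee=0.8):
    """Blend toward luma as the brightest channel approaches clip, hiding
    the hue shift that appears when R, G and B clip at different levels."""
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b   # Rec.709 luma weights
    peak = max(r, g, b)
    # 0 below the knee, ramping to full desaturation at the clip point
    t = 0.0 if peak <= knee else min((peak - knee) / (1 - knee), 1.0)
    return tuple(luma * t + c * (1 - t) for c in (r, g, b))
```

Mid-tones pass through untouched; a near-clipped warm highlight collapses toward neutral white instead of going yellow as blue clips first.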
-
Have you tried playing with Saturation vs Colour Boost? Sat expands all saturation evenly, whereas CB only boosts the low-sat stuff and doesn't increase the high-sat stuff, so with various combinations of them you might find a setting that doesn't need any masks. Maybe a combo of Sat with negative CB to boost the stronger colours without over-doing the skintones? An overall adjustment is always safer than pulling a key. You could also try the spider-web tool (Colour Warper) to boost saturation for all hues except the skintones?
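Here's a rough numerical sketch of the difference between the two controls - my own model of the behaviour, not Resolve's actual implementation - treating "chroma" as a 0-1 saturation value:

```python
def saturation(chroma, amount):
    """'Sat'-style control: scale all chroma equally, vivid or not."""
    return chroma * amount

def colour_boost(chroma, amount):
    """Colour-Boost-style control (rough model): the gain fades out as
    saturation rises, so already-vivid colours are largely left alone."""
    weight = 1.0 - chroma            # strong colours get little extra push
    return chroma * (1.0 + (amount - 1.0) * weight)
```

Under this model, a muted tone (chroma 0.2) gets lifted by both controls, but a fully saturated tone is pushed hard by Sat and left untouched by CB - which is why a Sat-up / CB-down combination can act a bit like a mask without pulling a key.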
-
Yeah, they seem to over-blur the things close behind the subject and under-blur the things a long way behind the subject. I would have thought they'd apply a progressive blur? The depth camera can surely give zones where things are further away, although the more zones you have, the more edges you have to navigate, so I guess that's a potential issue. For me it's the lack of progression into the blur that makes them look odd. I saw a video the other day where the subject was filmed at a wide aperture in front of a landscape, with everything a long way behind them, and it created this two-levels kind of look that immediately read as fake - but I think it was real and just how the scene was. It's like when 3D graphics first got good and you learned the tell-tale signs of how they looked fake, but every now and then you'd see one of those signs in real life and do a double-take.
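The "progressive blur" idea could be as simple as mapping the depth map to a blur radius that grows smoothly away from the focal plane, rather than jumping between a sharp zone and a blurred zone. A toy sketch with made-up numbers, nothing like the phones' actual pipelines:

```python
def blur_radius(depth_m, focus_m=2.0, scale=24.0, max_radius=12.0):
    """Toy progressive-blur mapping: radius grows in proportion to
    |1/focus - 1/depth|, a rough thin-lens-style defocus measure, so it
    is zero at the focal plane and increases smoothly behind it.
    All constants here are arbitrary illustration values."""
    defocus = abs(1.0 / focus_m - 1.0 / max(depth_m, 0.05))
    return min(scale * defocus, max_radius)
```

Something just behind a subject at 2m gets a tiny radius, the mid-ground gets more, and the far landscape approaches the cap - no hard edge between "sharp" and "blurred" layers.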
-
The P2K is under 70mm tall. The Crane M is 346mm tall, and the rig looks like it would be taller still with the camera mounted on it... so literally five times taller! I'm not sure it would even fit into my bag 🙂 Maybe you don't get odd looks when you're using it in public, but I've used a phone gimbal before and people looked at me like I was carrying a sword rather than a gimbal. I'm also not convinced it would hold up in the howling wind, which is basically the point - if the wind weren't so strong I wouldn't have had such a rotation problem to begin with, and OIS would have been fine.
-
In a sense, yes, but the devil is in the details. One example is the GH5: if you turn on AF it enables face-detect, and the auto-exposure exposes the skintones in the face correctly; but if you turn off AF (or use a manual lens) it disables face-detect and only exposes the frame as if it were a landscape, letting the face / skin-tones (which are clearly visible) fall where they may, as if they don't matter in the slightest. So the automatic metering is fine, but it's about as reliable as AF - meaning it can do the job but reserves the right to screw up for random reasons at random times.
-
Hahaha... the flat background is unmistakable. I like how, in an attempt to give the photo more depth, it just makes the background look like a green-screen or a poster mounted a few feet behind the subject.
-
That looks really good - great use of tech, and I'm sure it would be really useful to lots of people. You need an electronically controlled lens, of course, but I wonder if a similar setup might be possible using something like a wireless follow-focus.
-
I love the touch of fake-DoF blur on the brim of the woman's hat!
-
The challenge is that I want to keep the rig as small as possible - the beauty of the camera is that it's so small. I also own the BMMCC, but with a monitor and cables you get strange looks, which I'd like to avoid. A gimbal would make the vertical size of the camera 10x larger - hardly a compact solution! Realistically, the smallest addition would be a side-handle, but even that doubles the width of the camera and makes it look far more odd to the public.

I was using it in wind so strong that I couldn't keep the camera still enough with two hands. A steadicam would probably break in the kind of conditions I was in - if it didn't make the shaking worse by acting as a large sail.

Yeah, the handle / viewfinder is the typical solution, but it doesn't work well for what I want. Firstly, the viewfinder is great if you want your cinematography to be "I shoot from eye-level", but that's not a great way to make interesting images. The handle underneath the camera also doesn't work in strong winds: the wind hits the camera from the side and the camera rotates around where you're holding it - which in that case would be below the camera - creating roll.

People don't seem to understand my original post - I was cradling the camera with my left hand under the body and holding the lens, with my right hand firmly on the hand-grip on the right-hand side of the camera (which is actually quite a good grip, as it's quite deep and covered in grippy rubber), but the camera was still being severely shaken by the wind.
-
Sure - I'll revise my statement to "They're essentially really good, really small, really affordable camcorders." Were you comparing the "Smell of Village" video to the OG BMPCC? If so, the video looked good and was obviously shot in RAW, but beyond it being RAW with deep DoF, it doesn't look anything like an OG BMPCC to me. It looks more like a smartphone that can record RAW. I looked at OG BMPCC footage only days ago, and even SOOC the footage just screamed "film" to me. The grain and texture and resolution and sharpness are all on-point for film, and nothing like the high-quality, high-resolution, very modern presentation of that "Smell of Village" film. I've said it in other threads but it's worth repeating - I think people have forgotten what film actually used to look like. If someone posted stills of non-recognisable moments from big blockbuster films and TV shows shot on film, the response would be akin to "your lens is broken" rather than "looks cinematic".
-
Interesting stuff, but still a way to go. The Xiaomi 12S Ultra has a crop factor of 2.7 (calculated from sensor width), with an f1.9 fixed wide (which equates to a 23mm f5 lens), an f4.1 5x zoom (which equates to an f11 lens), and an f2.2 ultra-wide (which equates to a 13mm f6 lens). This means that even wide open it's got deeper depth of field than decent lenses on an S16 camera: https://www.vintagelensesforvideo.com/category/super16/ The codecs are getting better though - I noticed that my iPhone 12 does 10-bit video, which makes a huge improvement to the subtlety of colour and also to the DR, which seems to just get chopped off in 8-bit mode compared to 10-bit. ProRes and RAW are better still. They're essentially really good, really small camcorders. Great if you want that look, but without the ability to change lenses they're useless if you want any other look, which these days is what most people want. They're also a pain to put an ND on if you want them to still fit in your pocket, so most users will just expose with shutter speed, resulting in terrible motion cadence.
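For anyone wanting to check the equivalence maths: it's just multiplying both the focal length and the f-number by the crop factor. (The 8.5mm native focal length below is back-calculated from the 23mm equivalent quoted above, so treat it as approximate.)

```python
def full_frame_equivalent(focal_mm, f_number, crop_factor):
    """Full-frame lens with the same field of view and depth of field:
    multiply both the focal length and the f-number by the crop factor."""
    return focal_mm * crop_factor, f_number * crop_factor

# Xiaomi 12S Ultra main camera: ~8.5mm f/1.9 behind a 2.7x crop
eq_focal, eq_fstop = full_frame_equivalent(8.5, 1.9, 2.7)
print(round(eq_focal), round(eq_fstop, 1))   # roughly a 23mm f/5.1
```

The same function gives the f11 figure for the 5x zoom (4.1 x 2.7 ≈ 11.1) and roughly f6 for the f2.2 ultra-wide.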
-
Yeah, you can (sort-of) dial in however much stabilisation you want. I haven't tried on this footage yet but will try to hit a balance where the shake doesn't look overwhelming and the blur isn't either, but I'm worried there won't be a middle ground where neither looks bad.
-
I shot a bunch of clips hand-held in terrible weather with my OG BMPCC and 12-35/2.8, and looking at the footage now I realise the OIS did a great job with pan/tilt but gave zero reduction in roll / rotation (which OIS can't stabilise). I've had a few goes at stabilising it in post, but much of it is blurry due to the 180° shutter. I don't mind a hand-held look, but the mismatch between the pan and tilt being almost perfectly stable and the roll being jittery as hell is really not a good look. I realise my options are some combination of stabilising the roll in post and de-stabilising the pan and tilt (adding shake back) so the aesthetic works. I've been using IBIS on the GH5 and GX85 for years, and I have OIS in my X3000 action camera, but I haven't used OIS-only in a "cinema" setup before and hadn't realised this would be a major issue. The OIS in the 12-35/2.8 is far too good to match the complete lack of roll stabilisation. How do the OIS-only hand-held shooters out there deal with this? Build a huge rig so you don't get roll jitter? (In which case OIS isn't really needed that much - right?) Stabilise in post and deal with the shutter-speed blur? This seems to be an incredible advantage of IBIS over OIS.
-
That's interesting and makes sense - although it's something that EIS could also compensate for.
-
I thought gyro data would allow the camera to eliminate the warping, since it knows the focal length and the camera's direction, but it seems not to be so. That was literally the only advantage of gyro stabilisation over EIS. I'd conclude that gyro is worth trying in post to see if it does better than EIS, but it doesn't seem to have any real advantage. Both are last resorts compared to IBIS / OIS, or physically controlling the camera with a gimbal, tripod, monopod, slider, crane, jib, etc etc.

(Disclaimer... the below might sound harsh, but it's directed at the theories you're presenting, not you! Hopefully my comments are useful and informative and correct some of the staggering misinformation floating around.)

That's nonsense. The data coming off the sensor is RAW - it's the resolution x the bit-depth (one value per photosite). I think the Osmos and GoPros have the best stabilisation for two reasons: 1) they have fixed lenses and can tune their algorithms for them, and 2) the entire success or failure of those products rests on how well they implement this one feature.

That's also partly nonsense. Man, the internet really doesn't understand WTF is going on with stabilisation. I realise that manufacturers measure stabilisation in stops. This is complete marketing crap - it's correct but irrelevant.

Think of the suspension system on a car. The tyre follows every tiny bump in the road, and the body of the car doesn't want to feel any of those bumps. The goal of the suspension is to connect the two without transmitting the shake from the road to the car. It's not a perfect parallel, but it's good enough for our purposes. The suspension can be viewed in two scenarios: 1) how well it smooths small bumps, and 2) the maximum bump size it can handle.

In the first scenario, you're driving down a road and there's a small pothole. You drive over it, you hear a thump from the tyres, but feel almost nothing in your seat. This is a reduction in vibration, and in cameras this is what gets measured in stops: the ratio of how much vibration goes in vs how much gets through the mechanism.

In the second scenario, you drive up a large kerb. In a small city car, the tyre flexes, the shock compresses all the way, the wheel hits the end of its travel and sends an enormous thump up into the car, sending you and the contents of your car flying. In a huge off-road 4WD, you would hear a thump, but the tyres and shocks have enough vertical travel to absorb it - you'd still feel it to some extent, but it wouldn't be a disaster.

The second scenario is what you're seeing when OIS/IBIS footage still has shake. This is what separates small sensors from larger ones - it's the distance the mechanism can travel, not the "stops" of IS. The sensor simply runs out of travel and can't move far enough.

The maths is very clear. Take 5 stops: that's a reduction in vibration by a factor of 32x. You move the camera by 32 pixels and the image only moves by 1 pixel. You move the camera 50% of the frame to one side, and the image moves by 1/64 of the frame - in 4K that's 60 pixels of residual movement from moving the camera almost 2000 pixels... and that's only 5 stops. This is why the stops don't matter; the issue is how far the mechanism can move the sensor. Larger sensors probably don't have as much room to move as smaller ones, and this is one reason for making an MFT camera the size of a FF MILC - to accommodate the mechanism.

The other big challenge is the wobble of IS (both OIS and IBIS) on wide-angle lenses. This is a problem because spherical lenses are, well, spherical, and sensors are flat. In terms of EIS, the warping is essentially a complete failure by the manufacturers to compensate for the lens, as GoPro and DJI have shown by doing it properly.

I looked at a package that corrects lens distortion in post (and also does things like RS correction and flicker elimination) but didn't buy it, as it was closer to $1000 than I would have liked - but it's possible. I could even tell you the maths, but I haven't worked out how to implement it yet, unfortunately. Resolve's EIS pipeline isn't designed in a way that can do it.
-
Or in summer of any country that doesn't snow in winter.
-
My hand-held shooting is mostly static shots now, so of course they stabilise quite well with IS or EIS, but I do the odd follow-shot when walking with family, which is where the Sony X3000 comes in handy - OIS in an action camera. I just did a quick search for samples of walking shots with it and found a few that seemed nothing special, but when I do those shots myself I'm always surprised by how gimbal-like the footage looks almost every time. I try to do the ninja walk and hold my hand floating in space, but I haven't practiced it and wouldn't say I'm particularly talented at it, so who knows. Perhaps the best EIS is from the 360 cameras, where the EIS can crop anywhere into the full sphere and the lens distortions are all cancelled out completely. Of course, if you're filming a normal shot then you'll crop into the image so much that the IQ will be unusable - that's the downside. I'm not sure what the sensor size and stops of light have to do with IBIS?