Everything posted by kye
-
Thanks for sharing - always interesting to see people's work. I noticed that the stills are very different to the video (subject matter, composition, etc) - every other wedding shooter I've seen covers the same material in both, so the wedding films tell the story with the getting ready / dress / rings / first-look / venue / guests / waiting / bride entrance / ceremony / kiss / rice confetti / reception venue / buffet / entrance / toasts / speeches / first dance / party mayhem... Curious about your thoughts on taking a different approach?
-
My tips for highlight rolloff: use a curve that makes the transition into clipping more gradual. That requires you to either add contrast or lower the white point, so it's a creative choice, and either can help if things are clipped. It's also nice to desaturate the whites, which hides the artefacts when the channels clip at different levels. Ultimately, if you want to shoot and 'fake' log, the best approach is to shoot subjects that are low-contrast to begin with - that way the files SOOC will be quite flat because the subject was flat.
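To make the curve idea concrete, here's a minimal numpy sketch (my own, not from any particular grading tool) of a soft shoulder into clipping plus highlight desaturation, assuming float RGB in the 0-1 range:

```python
import numpy as np

def rolloff(x, knee=0.75, white=1.0):
    """Soft highlight rolloff: linear below the knee, then an exponential
    shoulder that approaches `white` asymptotically instead of clipping."""
    x = np.asarray(x, dtype=float)
    shoulder = white - (white - knee) * np.exp(-(x - knee) / (white - knee))
    return np.where(x < knee, x, shoulder)

def desaturate_highlights(rgb, start=0.8):
    """Blend towards the per-pixel luma as values approach clipping,
    hiding the point where channels clip at different levels."""
    rgb = np.asarray(rgb, dtype=float)
    w = np.array([0.2126, 0.7152, 0.0722])            # Rec.709 luma weights
    luma = (rgb * w).sum(axis=-1, keepdims=True)
    t = np.clip((luma - start) / (1.0 - start), 0.0, 1.0)
    return rgb * (1.0 - t) + luma * t
```

The shoulder is continuous in value and slope at the knee, which is what makes the transition "really gradual" rather than a hard corner.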
-
Have you tried playing with Saturation vs Colour Boost? Sat expands all saturation evenly, whereas CB only boosts the low-sat areas and leaves the high-sat areas alone, so with various combinations of the two you might find a setting that doesn't need any masks. Maybe Sat with negative CB, to boost the stronger colours without over-doing the skintones? An overall adjustment is always safer than pulling a key. You could also try the spider-web tool (the Colour Warper) to boost saturation for all hues except the skintones.
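For intuition, here's a toy model (my own sketch, not Resolve's actual implementation) of the difference: a flat gain on saturation vs a boost weighted towards low-saturation pixels:

```python
import colorsys

def adjust_sat(rgb, sat_gain=1.0, boost_gain=0.0):
    """Toy model: `sat_gain` scales saturation uniformly (like Sat), while
    `boost_gain` adds saturation weighted by (1 - s), so already-vivid
    pixels barely move (like Colour Boost)."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    s = s * sat_gain + boost_gain * s * (1.0 - s)  # (1 - s): low-sat weighting
    s = min(max(s, 0.0), 1.0)
    return colorsys.hls_to_rgb(h, l, s)

# A muted colour moves much more under boost_gain than a vivid one:
print(adjust_sat((0.6, 0.5, 0.45), boost_gain=0.5))  # low-sat: clearly boosted
print(adjust_sat((0.9, 0.2, 0.1), boost_gain=0.5))   # high-sat: barely changes
```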
-
Yeah, they seem to over-blur the things close behind the subject and under-blur the things a long way behind the subject. I would have thought they'd apply a progressive blur? The depth camera is surely able to give zones where things are further away, although the more zones you have the more edges you have to navigate, so I guess that's a potential issue. For me it's the lack of progression into being blurred that makes them look odd. I saw a video the other day where the subject was filmed at a wide aperture in front of a landscape, with everything a long way from the subject, and it created this two-levels kind of look which immediately looked fake - but I think it was real and just how the scene was. It's kind of like when 3D graphics first got good and you'd get used to the tell-tale signs of how they looked fake, but every now and then you'd see one of those signs in real life and do a double-take.
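For reference, the "progressive" behaviour real optics give you falls out of the standard thin-lens circle-of-confusion formula - a quick sketch (textbook optics; the lens and distances are my own example numbers):

```python
def coc_mm(f_mm, n_stop, focus_mm, subject_mm):
    """Thin-lens blur (circle of confusion) diameter on the sensor, in mm,
    for a point at subject_mm when the lens is focused at focus_mm.
    The blur grows smoothly with distance behind the focus plane -
    the progression the phone composites are missing."""
    return (f_mm**2 / n_stop) * abs(subject_mm - focus_mm) / (
        subject_mm * (focus_mm - f_mm))

# 50mm f/1.8 focused at 2m: blur keeps growing with distance,
# rather than jumping to one fixed level just behind the subject.
for d in (2.5, 4.0, 8.0, 50.0):
    print(f"{d:5.1f} m -> {coc_mm(50, 1.8, 2000, d * 1000):.3f} mm")
```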
-
The P2K is under 70mm tall. The Crane M is 346mm tall, and the rig looks like it would be taller with the camera mounted on it... so literally 5 times taller! I'm not even sure it would fit into my bag either 🙂 Maybe you don't get odd looks when you're using it in public, but I've used a phone gimbal before and people looked at me like I was carrying a sword rather than a gimbal. I'm not entirely convinced that it would hold up in the howling wind either, which is basically the point - if the wind wasn't so strong I wouldn't have had so much of a rotation problem to begin with and OIS would have been fine.
-
In a sense, yes, but the devil is in the details. One example is the GH5: if you turn on AF it enables face-detect, and the auto-exposure automatically exposes the skintones in the face correctly, but if you turn off AF (or use a manual lens) it disables face-detect and meters the frame as if it were a landscape, letting the clearly-visible face / skintones fall where they may, as if they don't matter in the slightest. So the automatic metering is fine, but it's about as reliable as AF - meaning it can do the job but reserves the right to screw up for random reasons at random times.
-
Hahaha... the flat background is unmistakable. I like how, in an attempt to make the photo have more depth, it just makes the background look like a green-screen or a poster mounted a few feet behind the subject.
-
That looks really good - great use of tech, and I'm sure it would be really useful to lots of people. You need an electronically controlled lens of course, but I wonder if a similar setup might be possible with something like a wireless follow-focus.
-
I love the touch of fake-DoF blur on the brim of the woman's hat!
-
The challenge is that I want to keep the rig as small as possible - the beauty of the camera is that it's so small. I also own the BMMCC, but with a monitor and cables you get strange looks, which I'd like to avoid. A gimbal would make the vertical size of the camera 10x larger - hardly a compact solution! Realistically, the smallest addition would be a side-handle, but even that doubles the width of the camera and makes it look far more odd to the public.

I was using it in wind so strong that I couldn't keep the camera still with both hands. A steadicam would probably break in those conditions - or make the shaking worse by acting as a large sail.

Yeah, the handle / viewfinder is the typical solution, but it doesn't work well for what I want. The viewfinder is great if you want all your cinematography shot from eye-level, but that's not a great way to make interesting images. The handle underneath the camera also doesn't work in strong winds: the wind hits the camera from the side and the camera rotates around where you're holding it - which in that case would be below the camera - creating roll.

People don't seem to understand my original post - I was cradling the camera with my left hand under the body and holding the lens, and my right hand was firmly on the hand-grip on the right-hand side of the camera (which is actually quite a good grip, as it's quite deep and covered in grippy rubber), but the camera was still being severely shaken by the wind.
-
Sure - I'll revise my statement to "They're essentially really good, really small, really affordable camcorders." Were you comparing the "Smell of Village" video to the OG BMPCC? If so, the video looked good and was obviously shot RAW, but beyond being RAW with deep DoF, it doesn't look anything like an OG BMPCC to me - it looks more like a smartphone that can record RAW. I looked at OG BMPCC footage only days ago, and even SOOC the footage just screamed "film" to me. The grain, texture, resolution and sharpness are all on-point for film, and nothing like the high-quality, high-resolution, very modern presentation of that "Smell of Village" film. I've said it in other threads but it's worth repeating - I think people have forgotten what film actually used to look like. If someone posted stills of non-recognisable moments from big blockbuster films and TV shows shot on film, the response would be akin to "your lens is broken" rather than "looks cinematic".
-
Interesting stuff, but still a way to go. The Xiaomi 12S Ultra has a crop factor of 2.7 (calculated from sensor width) with an f1.9 fixed wide (which equates to a 23mm f5 lens), an f4.1 zoom (which equates to an f11 5x zoom lens), and an f2.2 ultra-wide (which equates to a 13mm f6 lens). This means that even wide open it's got deeper depth of field than decent lenses on a S16 camera: https://www.vintagelensesforvideo.com/category/super16/

The codecs are getting better though - I noticed that my iPhone 12 does 10-bit video, which makes a huge improvement to the subtlety of colour and also to the DR, which seems to just get chopped off in 8-bit mode. ProRes and RAW are better still.

They're essentially really good, really small camcorders. Great if you want that look, but without the ability to change lenses they're useless if you want any other look, which these days is what most people want. They're also a pain to put an ND on if you want them to still fit in your pocket, so most users will just expose via shutter speed, making for terrible motion cadence.
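The equivalence arithmetic is just multiplication by the crop factor; a quick sketch using the post's numbers (the native focal length is back-computed from the quoted 23mm equivalent):

```python
def ff_equivalent(focal_mm, f_number, crop=2.7):
    """Full-frame equivalents: focal length and f-number both scale by
    the crop factor (matching field of view and depth of field)."""
    return focal_mm * crop, f_number * crop

# Back-compute the native focal length of the 23mm-equivalent f/1.9 wide:
native = 23 / 2.7
eq_f, eq_n = ff_equivalent(native, 1.9)
print(f"{eq_f:.0f}mm f/{eq_n:.1f} equivalent")  # 23mm f/5.1, i.e. the 'f5' above
```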
-
Yeah, you can (sort of) dial in however much stabilisation you want. I haven't tried it on this footage yet, but I'll try to hit a balance where the shake isn't overwhelming and the blur isn't either - though I'm worried there won't be a middle ground where neither looks bad.
-
I shot a bunch of clips hand-held in terrible weather with my OG BMPCC and 12-35/2.8, and looking at the footage now I realise the OIS did a great job with pan/tilt but gave zero reduction in roll / rotation (which OIS can't stabilise). I've had a few goes at stabilising it in post, but much of it is blurry due to the 180-degree shutter. I don't mind a hand-held look, but the mismatch between pan and tilt being almost perfectly stable and the roll being jittery as hell is really not a good look. I realise my options are some combination of stabilising the roll in post and de-stabilising the pan and tilt (adding shake) so the aesthetic works.

I've been using IBIS on the GH5 and GX85 for years, and I have OIS in my X3000 action camera, but I haven't used OIS-only in a "cinema" setup before and hadn't realised this would be a major issue. The OIS in the 12-35/2.8 is far too good to match with the lack of roll stabilisation.

How do the OIS-only hand-held shooters out there deal with this? Build a huge rig so you don't get roll jitter? (In which case OIS isn't really needed that much, right?) Stabilise in post and deal with the shutter-speed blur? This seems like an incredible advantage of IBIS over OIS.
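For what it's worth, roll-only stabilisation in post is mechanically simple if you have a per-frame roll estimate - a minimal sketch assuming OpenCV is available, where roll_deg is a hypothetical per-frame angle you'd get from gyro metadata or a tracker:

```python
import cv2

def counter_roll(frame, roll_deg):
    """Rotate a frame about its centre to cancel an estimated roll of
    roll_deg degrees, leaving pan/tilt (already handled by OIS) untouched.
    A slight crop is then needed to hide the rotated edges."""
    h, w = frame.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), -roll_deg, 1.0)
    return cv2.warpAffine(frame, m, (w, h))
```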
-
That's interesting and makes sense - although it's something that EIS could also compensate for.
-
I thought that gyro data would allow the camera to eliminate the warping, since it would know the focal length and the camera's direction, but it seems not to be so - and that was literally the only advantage of gyro stabilisation over EIS. I'd conclude that gyro is worth trying in post to see if it does better than EIS, but it doesn't seem to have any real advantage. Both are last resorts compared to IBIS / OIS, or physically controlling the camera with a gimbal, tripod, monopod, slider, crane, jib, etc.

(Disclaimer... the below might sound harsh, but it's directed at the theories you're presenting, not you! Hopefully my comments are useful and informative and correct some of the staggering misinformation floating around.)

That's nonsense. The data coming off the sensor is RAW - it's whatever the resolution is x the bit-depth x 3 (RGB channels). I think Osmos and GoPros have the best stabilisation for two reasons: 1) they have fixed lenses and can tune their algorithms accordingly, and 2) the entire success or failure of those products rests on how well they implement this feature.

That's also partly nonsense. Man, the internet really doesn't understand WTF is going on with stabilisation. I realise that manufacturers measure stabilisation in stops. This is complete marketing crap - correct but irrelevant.

Think of the suspension system on a car. The tyre follows every tiny bump in the road, and the body of the car doesn't want to feel any of them. The goal of the suspension is to connect the two without transmitting the shake from the road to the car. It's not a perfect parallel, but it's good enough for our purposes. The suspension can be viewed in two scenarios: 1) how well it smooths small bumps, and 2) the maximum bump size it can handle.

In the first scenario, you're driving down a road and there's a small pothole. You drive over it, you hear a thump from the tyres, but feel almost nothing in your seat. This is a reduction in vibration, and in cameras this is what's measured in stops: the ratio of how much vibration goes in vs how much gets through the mechanism.

In the second scenario, you drive up a large kerb. In a small city car, the tyre flexes, the shock compresses all the way, the wheel hits the end of its travel and sends an enormous thump up into the car, sending you and the contents of your car flying. In a huge off-road 4WD, you'd hear a thump, but the tyres and shocks have enough vertical travel to absorb it - you'd still feel it to some extent, but it wouldn't be a disaster.

The second scenario is what you're seeing in your OIS/IBIS mechanism when the footage still has shake. This is what separates small sensors from larger ones - it's how far the sensor can travel, not the "stops" of IS. The sensor simply runs out of travel and can't move far enough.

The maths is very clear. Take 5 stops, for example - that's a reduction in vibration by a factor of 32x. You move the camera by 32 pixels and the image only moves by 1 pixel. Move the camera 50% of the frame to one side and the image moves by 1/64 of the frame - in 4K that's 60 pixels of residual movement from moving the camera almost 2000 pixels... and that's only 5 stops. This is why the stops don't matter: the issue is how far the mechanism can move the sensor. Larger sensors probably don't have as much room to move as smaller ones. This is one reason to have an MFT camera the size of a FF MILC - to accommodate this mechanism.

The other big challenge is the wobble of IS (both OIS and IBIS) on wide-angle lenses. This is a problem because spherical lenses are, well, spherical, and sensors are flat. In terms of EIS, it's essentially a complete failure by the manufacturers to compensate, as GoPro and DJI have shown by doing it properly. I looked at a package that corrects lens distortion in post (and also does things like RS correction and flicker elimination) but didn't buy it, as it was closer to $1000 than I would have liked - but it's possible. I could even tell you the maths, but I haven't worked out how to implement it yet, unfortunately. Resolve's EIS pipeline isn't designed in a way that can do it.
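A quick check of the arithmetic above (my own sketch, assuming each "stop" halves the motion that gets through, and a 3840-pixel-wide UHD frame):

```python
def residual_pixels(camera_move_px, stops):
    """Residual image movement after IS rated at `stops`:
    each stop halves the motion that gets through the mechanism."""
    return camera_move_px / (2 ** stops)

frame_w = 3840                      # 4K UHD width
move = frame_w / 2                  # camera moves half the frame: 1920 px
print(residual_pixels(move, 5))     # 60.0 px, i.e. 1/64 of the frame
```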
-
Or in summer in any country that doesn't get snow in winter.
-
My hand-held shooting is mostly static shots now, so of course they stabilise quite well with IS or EIS, but I do the odd follow-shot when walking with family, which is where the Sony X3000 comes in handy - OIS in an action camera. I did a quick search for sample walking shots with it and found a few that seemed nothing special, but when I do those shots I'm always surprised by the footage, as it looks gimbal-like almost every time. I try to do the ninja walk and hold my hand floating in space, but I haven't practised it and wouldn't say I'm particularly talented at it, so who knows.

Perhaps the best EIS is from the 360 cameras, where the EIS can crop anywhere into the full spherical image and the lens distortions are cancelled out completely. Of course, if you're filming a normal shot you'll be cropping into the image so much that the IQ will be unusable, so that's the downside.

I'm not sure what sensor size and stops of light have to do with IBIS?
-
How to make RAW-like corrections to 10bit log in silly old Premiere
kye replied to hyalinejim's topic in Cameras
I'm no expert, as I only know Resolve, but from my understanding it's normally worth investing in things like shortcuts, as they pay off in the long term. In terms of learning Premiere's shortcuts vs customising them, I'd suggest reviewing the Premiere ones and seeing if they suit your workflow. The problem in editing that no-one talks about is workflow, and how some people's workflows are very, very different. Some examples:

Do you add all your clips into a timeline and then make selects by removing the material that is no good (bad takes etc), or do you review the material from the bins and add the good stuff? Fundamentally different shortcuts, and even mindset.

For larger non-linear projects, do you sort your selects into timelines (eg, b-roll by location, interviews, events, etc) or do you do this filtering via tagging and metadata and not via timelines at all? My understanding is that Premiere and FCPX have much more power in their media management (tagging etc) than Resolve, which is why editors think Resolve isn't really "ready" for big projects yet.

Do you build your final edit subtractively or additively? ie, do you pick all the good bits, pull them into a timeline, then cull, re-arrange and tighten in passes until you're done, or do you pull in only the absolute best bits and add and re-arrange until you're done? With the former you'll spend a lot of time looking at clips that eventually get cut (if a clip survives many passes and gets cut at the last minute), but once you've cut something you're unlikely to ever look at it again. With the latter, you'll potentially be reviewing your entire collection of clips every time you want to find the next clip to add. If your media is well curated (metadata, tagging, etc) this can be an efficient process, but if not you could get lost forever, as there's no guarantee you're making progress.

When you're making an edit, do you want to move the timing of the edit point to suit the content of the clips, or change the content of the clips around a fixed edit point? Editing dialogue, you'd do the first; editing to music, the second - fundamentally different shortcuts and thinking.

When you're editing, are you concentrating on the edit point, or the clip? If you're looking at the edit point, then Clip End belongs to one clip and Clip Start to the next - the editor is focusing on two clips at once. In a clip-focused approach, Clip Start and Clip End belong to the same clip. I hit this issue in Resolve, as I edit in a clip-centric way and edit to music, but some of the Speed Editor controls (very handy ones, I might add) work in an edit-point-centric way, and there's no way to change this.

etc...

The other thing to realise is that there are many small tasks that must be done in an NLE, and different editors may have different ways of doing them. It's easy to compare shortcuts, but accomplishing the same outcome in one NLE might take a fraction of the time using one mindset over another. This is the hidden aspect of NLEs - they are designed to edit in a particular way and may be less efficient when used another way (or may not really support that other approach at all).
Obviously this is very personal to what you edit, how you edit, how large the projects are, etc, so the answer can ultimately only be determined by you - but don't only learn how Premiere can do what Resolve can do; try to learn what Premiere can do that Resolve can't.
-
Yeah, I think you're right. I have a small 1080p action camera that would be pretty good, so that's my current best plan. I could either mount it to the scooter or maybe get a mount for my helmet, which would be stable and more likely to be remotely level too. One thing I really like about the HyperSmooth stabilisation on the latest action cameras is that when you mount them to a vehicle they aren't locked to it, so you see the vehicle moving around in response to the terrain (smoothly, of course) - a nice effect, I think. Can you turn the stabilisation on your cameras down to a lower setting, perhaps? Not sure if that's available.
-
How to make RAW-like corrections to 10bit log in silly old Premiere
kye replied to hyalinejim's topic in Cameras
Just kidding... 🙂 My understanding is that Premiere is probably still a better editor than Resolve (although Resolve is closing the gap) and that the main issue with Premiere is that it crashes all the time. Save early, save often. In terms of colour though, we all love to think that the 5,984 controls Resolve has are required for good colour, but it's not true - if you get the basics dialled in, you can get great-looking images with the tools that basically any NLE has.
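To illustrate what "the basics" amount to, here's a minimal numpy sketch (my own toy model, not any NLE's internals) of lift / gamma / gain plus saturation, assuming float RGB in 0-1:

```python
import numpy as np

def basic_grade(rgb, lift=0.0, gamma=1.0, gain=1.0, sat=1.0):
    """The controls present in essentially every NLE: lift raises the
    shadows, gamma bends the midtones, gain scales the highlights, and
    saturation scales each pixel's distance from its luma."""
    rgb = np.asarray(rgb, dtype=float)
    x = np.clip(rgb * gain + lift, 0.0, 1.0) ** (1.0 / gamma)
    w = np.array([0.2126, 0.7152, 0.0722])            # Rec.709 luma weights
    luma = (x * w).sum(axis=-1, keepdims=True)
    return np.clip(luma + (x - luma) * sat, 0.0, 1.0)
```
-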
Yeah, I suspect that it's often under the threshold of what is perceptible. I also have a theory that this threshold is getting higher over time as people slowly get used to cameras that expose with shutter speed. Your comment about compression from online platforms is an interesting one: YT in 4K has more resolving power than basically any affordable camera had a decade ago, so that's actually gone through the roof, but people's perception has dulled more than enough to compensate. I've actually gone the other way in my work - I used to shoot quite dynamic shots and stabilise in post a lot, whereas now my shots are much more static and I basically don't stabilise in post at all. This forum used to be full of people talking about motion cadence, which, despite never really getting a good definition, was a pretty subtle effect at the best of times - and yet now people seem comfortable with the blur not matching the camera's movement, which I'd imagine is an effect at least one or two orders of magnitude more significant than motion cadence. I also find it amazing that people have adjusted to 4K being cinematic, when even now many cinemas are 2K, and every movie (apart from those on 70mm) basically had 2K resolution by the time you saw it in a theatre. How perception changes over time!
-
I'd be a bit careful about interpreting this type of information - I remember statements from the launch of the Alexa 35 that indicated otherwise. Most likely everyone was telling the truth (there's unlikely to be a scandal here!) but everyone was using carefully chosen words.
-
If you have a 180-degree shutter and shake the camera, your images will have shake and motion blur. This looks normal because the blur matches the shake: if you shake / move left, the blur will be horizontal, and the size of the blur will match the shake / motion in the shot. If you stabilise in post, you remove the shake but not the blur. If you stabilised completely, so the shot had no shake, it would look like a tripod shot because the camera movement would be gone - but all the blur would remain, so a stationary shot would blur in random directions at random times for no conceivable reason.

This is a test I did some time ago comparing OIS / IBIS vs EIS (stabilisation in post is a form of EIS). The shot at 25s on the right, "Digital Stabilisation Only", shows this motion blur without the associated camera shake. The IBIS + Digital Stabilisation combo was much better, and is essentially the same as OIS + Digital Stabilisation.

The issue here is that people using IBIS or OIS often have all the stabilisation they need from that, so gyro stabilisation is aimed at people who have neither. This "blur doesn't match shake" effect also happens in all action and 360 cameras when they shoot in low light and their auto-SS drops to shutter speeds that include blur (which is why I bought an action camera with OIS rather than EIS).
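The size of that baked-in blur scales directly with shutter angle - a small sketch of the arithmetic (mine, not from the post), assuming a constant pan:

```python
def blur_px(pan_px_per_frame, shutter_angle_deg=180.0):
    """Motion blur streak length: the shutter is open for
    (shutter_angle / 360) of the frame interval, so a pan that moves the
    image N pixels per frame leaves an N * angle/360 pixel streak.
    Post-stabilisation removes the pan, but this streak stays baked in."""
    return pan_px_per_frame * shutter_angle_deg / 360.0

print(blur_px(40))        # 20.0 px of blur per frame at 180 degrees
print(blur_px(40, 90))    # 10.0 px at a 90-degree shutter
```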
-
With Sony and BM now offering gyro stabilisation, what are your thoughts on stabilisation in post as it relates to the 180-degree shutter rule?