Everything posted by kye

  1. Definitely agree that lots of operations are better done in colour spaces other than the ones Resolve supports natively. Are you using OKLab for your Lab conversion? I plan on integrating that into my tool once I get back to developing it. I'd keep all the secondary adjustments like vignetting etc as power windows in Resolve, as getting a single vignette slider that looks good across many/all scenarios probably isn't possible. I haven't played with spatial adjustments in DCTLs yet, so I'm not sure what the performance hit is compared to OpenFX tools. I'll probably investigate this at some point though, as there are a few operations I do that might benefit from being integrated into a single DCTL.

     I played with the Color Slice tool and found it genuinely disappointing: it broke images incredibly quickly while simultaneously being too broad for lots of adjustments I'd like to make. I've had much more success doing more targeted adjustments in Lab that didn't go anywhere near breaking the image. I find Lab a far cleaner space to work in for lots of operations, as the colour-slice-esque things you might want to do are often far simpler and far more universal there. There are tonnes of things you can do in Lab that target certain ranges but are applied globally, so they won't break the image, much like how (most) adjustments with the Channel Mixer can't break the image.
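Since DCTLs are C-like, the OKLab conversion mentioned above is straightforward to port. As a reference point, here is a minimal sketch of the published OKLab forward transform (Björn Ottosson's matrices) written in Python for clarity; it assumes linear (not gamma-encoded) sRGB input and is not taken from any particular DCTL:

```python
import math

def srgb_linear_to_oklab(r, g, b):
    """Convert linear-light sRGB to OKLab using Bjorn Ottosson's
    published matrices. Inputs are linear RGB in [0, 1]."""
    # Linear RGB -> approximate cone (LMS) response
    l = 0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b
    m = 0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b
    s = 0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b
    # Non-linearity: signed cube root
    l_, m_, s_ = (math.copysign(abs(v) ** (1 / 3), v) for v in (l, m, s))
    # LMS' -> OKLab (L = lightness, a/b = opponent chroma axes)
    L = 0.2104542553 * l_ + 0.7936177850 * m_ - 0.0040720468 * s_
    a = 1.9779984951 * l_ - 2.4285922050 * m_ + 0.4505937099 * s_
    b_out = 0.0259040371 * l_ + 0.7827717662 * m_ - 0.8086757660 * s_
    return L, a, b_out
```

A quick sanity check is that reference white (1, 1, 1) should map to approximately (1, 0, 0), which these matrices satisfy.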
  2. I've been working on a similar one, but in L*a*b colour space and offering a number of more advanced tools to quickly do the things I do all the time. One advantage of doing things in a DCTL rather than using the GUI controls is that when grading in Resolve while travelling etc, where you just have a small monitor and no control surfaces, you can make the viewer larger (IIRC using Shift-F) and it essentially gives you the viewer plus the DCTL control panel on the right-hand side of the screen, so it's a really efficient layout for grading using only the keyboard and mouse.

     @D Verco if you're looking for ideas on how to expand the tool, I'd suggest thinking about it for use with power windows as well as over the whole image. For example, my standard node graph has about 6 nodes with power windows already defined that are ready to enable if I want them. I have ones for a vignette, gradients for the sky and the left and right sides, and four large soft power windows for people, where I will typically do things like brighten / add contrast / sharpen, and do basic skin operations like hue rotations / hue compressions / etc. Most of the operations I'd do with those windows are covered by your tool, but not all of them, and if a tool can be used for a range of tasks beyond basic image processing then all the better.

     If you're taking a leaf from how Lightroom works, one of the most powerful features I used to use all the time (and wedding photogs would absolutely swear by) was the preset brushes. I had brushes for skin smoothing, skin brightening, under-eye, redness, etc, and of course they all used the standard Lightroom controls, but in specific combinations they really worked well. Something to think about. I'm all for people being able to charge money for their efforts, but in today's climate, the more value you can provide the easier it will be to get people to part with their (often hard-earned) money.
  3. kye

    The Aesthetic (part 2)

     I've been musing on how to proceed next, and figured out I have two challenges:

     - work out which lens characteristics I can emulate in post vs those that can't be emulated easily / practically
     - work out which lens characteristics I actually care about

     The reason the first one is important is that there's no point testing how much vignetting various lenses have when I can simply apply a power window or plugin in post and dial in what I want. This leaves the lens testing to compare the things I can't do in post, like shallow DOF etc. I figure there's no point choosing a lens with strengths I can emulate in post over a lens with strengths I can't. In my setup the real question is why wouldn't I just use the Voigtlander and Sirui combination, because it is relatively sharp anamorphic with reasonably shallow DOF, and then just tastefully degrade it in post. So, naturally, I asked ChatGPT which things were which..

     ====

     Characteristics You Can Emulate in Post (2D effects). These are primarily image-level artifacts that don't depend on the actual 3D geometry of the scene:

     - Vignetting (darkening at frame edges) – trivial to emulate.
     - Chromatic aberration – lateral CA is easy to add/remove; longitudinal CA (colour fringing in front of / behind focus) is less convincing.
     - Distortion (barrel, pincushion, mustache) – geometric warping filters can mimic this accurately.
     - Flare & ghosting – lens flares, veiling glare and rainbow ghosts can be faked, though authentic multi-bounce flare with complex aperture geometry is tricky.
     - Bloom / halation – glow around highlights is very doable with diffusion filters or glow nodes.
     - Contrast "roll-off" from flare veiling – can be mimicked with tone curves and haze overlays.
     - Field curvature (apparent focus falloff toward edges) – can be approximated with selective blur masks.
     - Edge softness – can be faked with a blurred vignette.
     - Focus breathing (slight FoV change with focus) – can be keyframed with a digital crop/scale.
     - Bokeh shape overlays – oval bokeh, cat's-eye bokeh and onion rings can be faked to some extent with blur kernels, though it looks "pasted on" if pushed.

     Characteristics That Are Hard or Impossible to Emulate in Post. These are fundamentally tied to how the lens bends and gathers light in 3D:

     - Depth of field & true bokeh rendering – you can blur in post, but real optical bokeh depends on entrance pupil geometry, field position, occlusion, and interaction with 3D scene depth. You can approximate with depth maps, but never get the organic overlap of layers of out-of-focus light.
     - True lens breathing with parallax – cropping simulates the magnification change, but real breathing also subtly shifts perspective/parallax in the image.
     - Perspective rendering (due to focal length + entrance pupil position) – can't be changed in post without full 3D scene reconstruction.
     - Aberrations that depend on 3D ray paths – e.g. coma (stretched points off-axis), astigmatism (different focus for tangential/sagittal rays). You can fake streaky stars, but not with the depth-dependent accuracy of real glass.
     - Micro-contrast / "lens pop" – the way fine details render due to MTF, stray light control, and glass polishing isn't really capturable in 2D.
     - Polarization effects – coatings reduce or enhance reflections differently based on the polarization of incoming light; impossible to recreate after capture.
     - True specular highlight interaction with aperture blades – starbursts, iris shape, and diffraction spikes are fundamentally optical.
     - Subtle 3D flare interactions – when a bright light causes ghosts that float relative to the scene geometry, which post plugins often fail to mimic convincingly.
     - Sensor–lens interaction effects – e.g. colour shifts or vignetting from oblique ray angles on digital sensors, microlens interference.

     ====

     It's not a bad list, although some things are a bit mixed up.
     The things above that stand out to me for further investigation are:

     - Longitudinal CA – I wonder how much of the Tokina + wide-angle adapter look was this; it seems to have subtle CA throughout everything
     - Experimenting with oval inserts to get a taller bokeh shape, maybe between the taking lens and the Sirui adapter
     - ...and just setting up some tests in post for barrel distortion, edge softness, and resolution (blur and halation).

     The rest of the stuff above I either know about or don't care about.
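For the "trivial to emulate" end of the list above, a post vignette really is just a radial gain falloff. A minimal NumPy sketch for illustration; the `strength` and `falloff` knobs are hypothetical parameters of my own, not any plugin's actual controls:

```python
import numpy as np

def apply_vignette(img, strength=0.35, falloff=2.5):
    """Darken toward the frame edges with a smooth radial gain.
    img: float array of shape (H, W, 3) in [0, 1].
    strength: how dark the corners get; falloff: how fast it ramps."""
    h, w = img.shape[:2]
    y, x = np.mgrid[0:h, 0:w]
    # Normalised distance from the frame centre, ~1.0 at the corners
    nx = (x - (w - 1) / 2) / (w / 2)
    ny = (y - (h - 1) / 2) / (h / 2)
    r = np.sqrt(nx**2 + ny**2) / np.sqrt(2)
    gain = 1.0 - strength * r**falloff
    return img * gain[..., None]
```

A real grading tool would expose the centre position and aspect as well, but the principle is the same: multiply by a gain map that is 1.0 in the centre and falls toward the edges.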
  4. I don't really know much about what their highlight recovery actually does, but if you're talking about super-whites (levels above 100) then they might get decoded and be available in the nodes. My GX85 records super-whites, and in my standard node tree for it I just pull down the Gain slightly and they get pulled back into the working range.

     If you're talking about the smart behaviour of using some colour channels to recreate a clipped channel, then I think reducing the saturation of the highlights might do broadly the same thing. There are also other tricks, like combining masks with the Channel Mixer to effectively copy one (or both) of the other channels into the one that's clipped. That preserves the saturation of the recovered area, which is useful for recovering things like tail-lights or flames, whereas reducing the saturation obviously won't.
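The mask-plus-Channel-Mixer trick described above can be sketched as follows. This is a crude stand-in for illustration only (replacing a clipped red channel with the mean of green and blue), not what Resolve's highlight recovery actually does:

```python
import numpy as np

def rebuild_clipped_red(img, clip=0.999):
    """Where the red channel has clipped, replace it with the mean of
    green and blue. A rough equivalent of a masked Channel Mixer node;
    real tools blend at the mask edges rather than hard-switching."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mask = r >= clip                       # pixels where red is clipped
    out = img.copy()
    out[..., 0] = np.where(mask, (g + b) / 2, r)
    return out
```

Because the replacement value tracks the surviving channels, hue and saturation in the recovered area are roughly preserved, which is the point of the trick versus simply desaturating the highlights.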
  5. and also confirmed here (linked to timestamp):
  6. Also, the new iPhones all use a new "Apple Log 2" profile, so that will require Resolve to be updated as well to support the new colour space:
  7. They mention it at 1:06:21 - below is linked to the timestamp: I must admit that if a BM camera supports it then it's a step towards it being supported in Resolve. It's not currently listed on the BM website, but that makes sense as (IIRC) no iPhone can record it yet: I'll be ordering a new phone and my GH7 already records ProRes RAW internally, so I'll be in a strange place where both my main cameras support internal ProRes RAW but my NLE doesn't! 😆 😆 😆
  8. Sad to hear. Living and working with someone that long is truly remarkable, even if it didn't end up lasting. Gotta be careful about the wedding ring thing though, I've noticed couples that both simultaneously stopped wearing them and it's turned out to be them preparing to renew their vows and getting their existing rings polished / embellished / etc before the celebration! It was even a surprise so their close friends didn't even know it was happening!
  9. With great interest and care! As an enthusiast with complex requirements myself I sometimes come upon a new set of requirements and try to find something that will suit my situation but it's hard to find something that meets all the specifics I have. This is typically when I would post as people who know more than me might know of a particular thing that meets all my needs. This is why it frustrates me when people just give random thoughts and riff on my carefully set-out requirements. TBH, now I just ask AI.
  10. kye

    The Aesthetic (part 2)

     Yes, it has an emotional component doesn't it - that's a good way of putting it. There are lots of parallels between cinema and dreaming or memories, so in some ways the more realistic the image is, the less aligned it is with all the other things about it that are dreamlike or memory-like (e.g. we can teleport, jump time, speed things up and slow them down, we can see something happening without being there, we can fly, etc). Considering that memories are formed more readily at points of heightened emotion, I think there's a link between a slightly surreal image and something feeling emotional.

     Unfortunately the front element rotates with focus on this lens. I had it in my head that this means I can't use it with the anamorphic adapter, but I just realised that's not true, as the taking lens should stay at infinity... I'll have to try that next!

     I'm not sure about hard-edged bokeh.. maybe I'll come around, but not sure. Our attention is directed by a number of things, one of which is sharpness, so focusing the lens is the act of deliberately directing the viewer's attention by adjusting the focal plane to be on the subject of the image. When the out-of-focus areas have hard edges, they take the viewer's attention and pull it towards things that have not been chosen as important in that image, so it sort of undermines the creative direction of the image in a way that you can't really counter.
  11. kye

    The Aesthetic (part 2)

    Today I tested the Voigtlander 42.5mm and Helios 58mm F2.0 lenses with the Sirui 1.25x adapter. GH7 >> Voigtlander 42.5mm >> Sirui 1.25x adapter.. Wide open at F0.95: ...and more stopped down (I didn't have enough vND for this much light): GH7 >> M42-M43 Speedbooster >> Helios 58mm F2.0 >> Sirui 1.25x adapter There's a certain look to the Helios, but it's hard to separate it from the changing lighting conditions, and I wonder how much of the differences could be simulated in post too.. reducing contrast by applying halation / bloom, softening the edges, etc. In pure mechanical terms the Voigts go much better with the Sirui, and the extra speed is really handy, so that will probably be my preference over the Helios.
  12. kye

    The Aesthetic (part 2)

    Thanks. If you like contrasty light, my next batch of images should please you! What is it about them you like? Is it how imperfect / degraded they are? They certainly have an incredible amount of feel, that's for sure. This zoom is the Tokina RMC 28-70mm F3.5-4.5 and I understand these were held in reasonably high regard back in the day, so it's potentially not a disaster, although I'm not sure that it's the sharpest / cleanest vintage zoom in the world. Looking back at the initial set of images I posted in my first post, this seems to have gotten the closest with the softness of the OOF areas and the CA / colour fringing etc.
  13. kye

    The Aesthetic (part 2)

    Shot a quick test with the Tokina RMC 28-70mm F3.5-4.5 zoom wide open with the wide-angle adapter. Combined with Resolve Film Look Creator it's super super analog, and maybe a bit too analog for what I would find uses for. Still, interesting reference. GH7 shooting C4K Prores 422 >> M42-M43 SB >> Tokina 28-70mm >> cheap wide-angle adapter >> cheap vND Resolve with 1080p timeline: CST to DWG >> exposure / WB >> slight sharpening >> FLC >> export Personally I am not really a fan of the hard-edged bubble-bokeh and the flaring with strong light-sources in frame is too much for most of what I shoot (mostly exterior locations and uncontrolled lighting) but it's great to know that looks with this level of texture are possible. I'm keen to compare it to the setup but without the wide-angle adapter. That would be less degraded but maybe in the right kind of way.
  14. Surprisingly, it seems that Minolta didn't make that many slower lenses? This list might not be complete, but it barely includes any slower ones: https://www.rokkorfiles.com/Lens Reviews.html and the page on the lens history doesn't include many extras either: https://www.rokkorfiles.com/Lens History.html Is Flickr still a thing? Maybe some searching on there might reveal some other options, with bonus sample images too.
  15. OP asks for lenses that focus the Canon way.... Cue a long discussion of lenses from Nikon, Pentax etc. OP asks for lenses that are slow to use wide-open.... responses include stopping down faster ones! 😆 😆 😆 Lots of people are really just waiting for you to finish talking so they can go back to their stream of consciousness without any thinking required!
  16. Canon FD might do it, although price might be an issue. There are plenty of F3.5 or F4 lenses in the lineup, they focus the way you want, and you even get a choice of coatings (normal, S.C., or S.S.C.). Some of the slower ones are macro lenses too! https://cameraville.co/blog/list-every-canon-fd-lens-ever-made Zooms are also an excellent idea. An out-of-the-box idea is to use faster lenses but keep the aperture wide open, cut a round hole in a lens cap, and get the aperture you want that way. I'm not sure whether this would deepen the DOF in quite the same way as stopping down the internal iris, but it would definitely lower the exposure and would definitely keep the bokeh the shape you want.
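The lens-cap idea above amounts to creating a new aperture stop, so a first-order estimate of the resulting speed is just focal length divided by hole diameter. A front-mounted hole isn't at the true stop position inside the lens, so treat this as a rough guide rather than an exact equivalent of the internal iris:

```python
def effective_f_number(focal_length_mm, hole_diameter_mm):
    """First-order estimate: f-number = focal length / aperture diameter.
    Ignores the fact that a hole in the lens cap sits in front of the
    optics rather than at the designed stop position, so vignetting and
    bokeh geometry will differ somewhat from stopping down the iris."""
    return focal_length_mm / hole_diameter_mm
```

For example, a 12.5mm hole in front of a 50mm lens behaves roughly like f/4, and a 29mm hole in front of a 58mm lens like f/2.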
  17. I don't really use the audio for dialogue so can't really comment on it specifically. To zoom out and think more holistically about sound, and also a bit about getting in front of the camera, there are a few approaches.

     High-quality sound on location. This is great but you pay for it in extra equipment and the extra faff of charging it, setting it up, using it, cleaning and maintaining it, etc. High-quality in-camera audio is the most convenient and the most expensive to get. External audio makes it easier / cheaper but creates extra work to keep the audio files managed and to sync them in post.

     Acceptable sound on location. This could be a combination of average in-camera audio and average external audio. The in-camera audio could potentially use on-camera mics like the Rode VideoMicro or a plug-in lav mic that don't require any power and are plug-and-forget, but are dependent on the quality of the preamps in the camera. The external audio could be as simple as using a smartphone right next to your mouth, or using the integrated mic in your headphones as a short lav mic. I've seen lots of vloggers do this in very noisy environments and it works fine. First example, second example.

     Record in post. ADR (Automated Dialogue Replacement) is where actors re-record their dialogue in post-production to match their lips in the footage. It is so common that many films simply didn't bother to get good sound and created the audio (dialogue, sound effects, sound design, the lot) in post after the fact. I've done this before on short films, and if you take a bit of time over it you can get results indistinguishable from recording on location.

     Recording in post also comes into the idea of appearing on screen, or not. By taking lots of notes and recording your thoughts during the trip (potentially just using voice memos on your phone at the end of each day) you can then narrate the finished film and have a significant presence in the final edit, have high-quality audio, and also take away the burden of getting great audio for everything that happens throughout the whole trip. Narrating the film will also let you communicate ideas, feelings and information concisely with carefully chosen words, rather than trying to piece together a coherent summary from fragmented snippets of footage. Narration also takes a huge burden off the footage, because anything you didn't capture can be explained in V/O, so the film doesn't rest solely on the footage being self-explanatory, which is a high bar to achieve.

     There's a hidden mindset that you're at the cutting edge of (and being cut by), which is that the entire film-making industry assumes that anyone wanting high-quality equipment doesn't mind it being large, and that if you care about size then you don't care about quality. I've gone round and round with people online and it's like "small and good" is a combination that somehow doesn't exist in their world-view. This has lessened over recent years, but is still the elephant in the room.

     Speaking of the elephant in the room, be careful not to lose sight of the final prize, which is an engaging final product. I don't know how much editing experience you have, but making a doc is like making a bunch of Lego pieces without knowing what you're eventually going to build, then designing the building, and then trying to assemble it out of the blocks you've made. Obviously those with a lot of skill will anticipate the final result better and make better pieces, but to a certain extent the more pieces you make and the more variety you include, the easier it will be to assemble the finished product you want.

     This is why I advocate for a setup that:

     1) you will use (giving you more footage)
     2) is fast to use (giving you more footage of things that happen quickly)
     3) is flexible (giving you more variety of footage)
     4) is enjoyable to use (making you more likely to use it, and making the whole thing more enjoyable and more likely to succeed overall)

     Remember, this person is ready for anything, but misses almost everything switching equipment, and has a hernia by the end... You should start with the idea of just using your phone and only add equipment that will make the end result better, not worse.
  18. 100% agree about the size, and when compared to the GH5 the difference in the hand is a lot more than what it looks like in pictures, so it's sort of deceptively chunky. By the time you're looking at a GH7, "small camera" territory is so far off you can't even see it in the rear-view mirror! Perhaps the compromise is that the GH7 has an integrated cooling fan whereas neither the R5 nor the Z6 III has one, and they will be larger again by the time you add on additional accessories etc.
  19. I can vouch for the GH7 as a workhorse. In terms of low light, I'd say it's fine. Here are a couple of stills from the GH7 with the Voigt 17.5mm F0.95 lens. I can't remember if the lens was fully wide-open or not, but I think the GH7 was at ISO 1600? These have a film grain applied, so the grain is deliberate. GH7 ISO tests are available online if you want to see the grain at various settings. Also remember that NR exists in post, and compression does a pretty good job of NR as well. The first shot is lit by the candle and the light of the fridge: and this is just the candle: Here are some shots from the OG BMMCC from 2014 at its base ISO of 800, with the 12-35mm F2.8 lens, shot at a 360 shutter to cheat an extra stop. These locations looked about this bright to the naked eye, and I have excellent night vision. You actually need far less low-light performance than most people think. Thanks! The issue is that you're either showing a very wide FOV, which will have significant distortions, or you're cropped in to the point where the quality is low because you're cropping out most of the data. IIRC, if you have a 100Mbps 360 image then by the time you crop to the FOV of a 24mm lens you're down to something like only a few Mbps. This is why I said the bitrates are what matters most.
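The back-of-envelope maths behind that crop/bitrate point can be sketched like this, assuming bits are spread roughly evenly across the equirectangular frame (only approximately true in practice, since codecs allocate bits by detail and motion):

```python
import math

def cropped_bitrate_mbps(total_mbps, focal_mm, sensor_w=36.0, sensor_h=24.0):
    """Rough estimate of the bitrate left after cropping a 360
    (equirectangular) stream down to the field of view of a given
    full-frame focal length. Fraction of pixels kept is approximated
    as (horizontal FOV / 360) * (vertical FOV / 180)."""
    h_fov = math.degrees(2 * math.atan(sensor_w / (2 * focal_mm)))
    v_fov = math.degrees(2 * math.atan(sensor_h / (2 * focal_mm)))
    fraction = (h_fov / 360.0) * (v_fov / 180.0)
    return total_mbps * fraction
```

For a 100Mbps stream cropped to a 24mm full-frame FOV this lands around 6Mbps, which is consistent with "only a few Mbps".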
  20. I agree. The ability to reframe in post is incredible. It even goes beyond that, because you are essentially recording every camera angle at all times, so if something happened around you, you could cut between multiple angles of the same event. Even if you were psychic and always pointed your normal camera in the best direction, you couldn't record multiple angles at the same time with one camera, so it goes even beyond the mythical psychic camera person.

     I saw a great example of this many years ago.. it was a guy recording his family walking through a fairground, with mum and the kids walking behind him. The sequence was something like:

     - his kids calmly looking around
     - someone in a scary costume approaching from ahead
     - his kids not seeing them
     - the scary monster seeing the kids, having the idea to scare them, and starting to approach
     - mum seeing him and smiling, knowing what is about to happen
     - the kids suddenly seeing him and reacting very suddenly / loudly
     - the monster reacting to their reaction
     - the kids laughing
     - the monster laughing
     - mum laughing
     - the monster walking away

     It was essentially a three-camera shoot, and like all good reality TV I'm pretty sure he overlapped the shots to extend the event, which probably only took about 5 seconds. The killer thing is that just by having a 360 camera you're recording all the camera angles all the time, so when the thing happens you've probably got all or most of the angles to show it happening.

     Just get the one with the highest resolution and highest bitrates. When you crop in you're drastically reducing the quality of the footage.
  21. kye

    The Aesthetic (part 2)

     Now we're getting into it! Terminology isn't our friend in these discussions, but that doesn't mean we can't make some headway.

     What I want to achieve is a look that I have only seen in moving images made for cinema and high-end moving images made for TV (e.g. GoT etc). I call this "cinematic" or "cinema" because everything I have ever seen in the cinema had this look (except for that one 3D movie I watched - yuck). What I don't want is something that looks different to what I've seen in the cinema. I call this "video" for want of a better word. I've seen plenty of things made for cinema / high-end TV and plenty of home video, and in my perception there is a very clear distinction between them; everything that doesn't look like cinema looks like video to me.

     My logic is that some things are required (you can't get the look without them) and others are deal-breakers (you can't get the look with them). So far my testing has shown that:

     - 24p/25p is required; 30p/60p are deal-breakers
     - 180-degree shutter, acknowledging that this can vary if there's a reason for it (e.g. Saving Private Ryan)
     - Harsh clipping is a deal-breaker

     Also, some things aren't absolutely required (because I've seen the look without them):

     - Actors and dialogue, because Koyaanisqatsi didn't have them and still had the look
     - Artificial lighting or modifiers, because I've seen examples where they weren't present (e.g. LINK)
     - Anamorphic lenses
     - Shallow DOF

     But those aren't enough. So, some other things I think might be in the mix:

     - Film or authentic film emulation isn't required, because not all films have it (most do but not all). However, all films either have film or a professional colourist who has done lots of stuff, so there might be something they're doing that is required.
     - Black levels are on my radar too.
     - Camera shake / drift may be a deal-breaker, despite being used for some parts of movies.
     - Widescreen. I suspect this isn't required, but maybe it helps. Not sure.
     - Resolution probably doesn't matter unless it's above a certain threshold.
     - Sharpness probably matters a lot, as having too much breaks the illusion.
     - Grain isn't required, as modern films often won't have any, but it might impact sharpness and it is probably required to manage banding in low-bitrate streams.

     I've also seen people like Gawx pull off the look shooting the kinds of things I shoot, so I know it's possible. My challenge is that I can follow all the rules I have identified and it's still not enough, so I know there are other ingredients that matter that I haven't identified.

     Perhaps the biggest challenge is that we see the world differently. You say that filmmakers like Lynch or Soderbergh have used DV or even iPhones and still delivered pure cinema, but it just didn't look right to me. Maybe you're referencing something I haven't seen, but the things I have seen looked like smartphone videos, so I suspect it's a case of them either not having something that is required for me, or having something that's a deal-breaker for me (e.g. 30p). Unsane by Soderbergh looked absolutely terrible, Tangerine looked like it was shot on an early smartphone (because it was) and Behold by Ridley Scott looked absolutely terrible at 30p. Even when I slow it down to 24p (which gives it the advantage of being in slow motion) some shots still look like smartphone video, which is a type of video look that is a looooong way from looking like cinema.

     When I first started shooting video I couldn't tell the difference between 24p and 60p; now I can't stand 60p or 30p. When TV was introduced to Britain people looked at their tiny little B&W TVs and said "It's like they're here in the room with us". Maybe no-one else here is seeing what I am seeing, but that doesn't mean it doesn't exist, doesn't mean I don't want it, and doesn't mean I don't know it when I see it.
I don't really care about what "cinema" is, but there's a look that high-end productions consistently have that I am trying to achieve, and so far I haven't worked out what all the required pieces are.
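For reference, the 180-degree shutter in the checklist above is simple arithmetic: exposure time is the shutter angle as a fraction of 360, divided by the frame rate.

```python
def shutter_speed_for_angle(fps, shutter_angle=180.0):
    """Shutter speed in seconds for a given frame rate and shutter
    angle: exposure time = (angle / 360) / fps."""
    return (shutter_angle / 360.0) / fps
```

So 24p at a 180-degree shutter gives 1/48s, and a 360-degree shutter (as used on the BMMCC shots mentioned elsewhere in this thread to cheat an extra stop) doubles that to 1/24s.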
  22. I also doubt that "weather resistant" is sufficient for the random deluges that are likely over a trip of that duration, although it's absolutely worth reading the manufacturer's description of what "weather resistant" means, just so you know what they are thinking of when they use the phrase. It might be a lot more (or less) than what you might be thinking. This is something I have pondered for some time but haven't gotten around to. Better to just get something completely waterproof and be done with it. Then you can record in monsoon rains and get good footage of waist-deep water, which would be a highlight of the doco in itself. I would also suggest that the "bad weather low-light" situation isn't really that important. Realistically, if it's bad weather due to rain or dust at night then you can't see that much anyway. Just turn on your bike lights or headlamps and film the chaos. My setup doesn't cover the "long-zoom low-light" combination because it's not something you need to shoot normally, and while it would be great to have, I have only ever wanted this combination for taking shots out of the hotel window at night in Seoul, and that's hardly a situation to design my whole setup around. I'm also surprised at how compact the 28-200mm lens is on the S9, it seems quite manageable.
  23. kye

    The Aesthetic (part 2)

     Yesterday I played around with FLC, grading some clips from Korea shot on the GH7 and 14-140mm. I plan to do a range of tests around settings for softening / sharpening / adding grain / other texture treatments in post, but YT compression is pretty diabolical, so I'll need to do quite a number of upload tests to see what settings in Resolve get you what result on YT. I also went much bolder with the colours, thinking of Gawx. This was a first attempt just to work out the ballpark of where to start. Probably the immediate takeaway is how different the grain looks per shot, despite it being applied evenly to all shots.

     Here is a comparison between the timeline in Resolve, the 42Mbps h264 4K export, and the 12.6Mbps h265 4K YT download. Shot 1 - Resolve: Shot 1 - Export: Shot 1 - YT stream: Shot 2 - Resolve: Shot 2 - Export: Shot 2 - YT stream: Shot 3 - Resolve: Shot 3 - Export: Shot 3 - YT stream:

     Impressions:

     - I'm told that film grain is most noticeable in the mids and shadows, so the distribution is consistent with film, which partly explains why the first image has less noticeable grain, as most of the image is quite bright or quite dark.
     - The sky shot seems to have lots of grain as it's a flat surface in the right luma range, and more of the grain is retained in the YT stream because there is less movement in the frame for the compression to cover. Whereas in the street scene the grain is considerably reduced despite having similar darker flat surfaces.
     - I didn't apply any softening to this video, so the sharpness is direct from the 14-140mm -> GH7 5.7K to C4K downsample -> C4K 500Mbps Prores 422 -> 1080p timeline image path. The 14-140mm isn't tack sharp but it's not too bad.
     - I've noticed in the past that grain can make images look sharper than they really are, but in this example the grain combined with the compression probably softens detail as a net result. I will definitely be exploring this relationship more.
     - Film is known to have an MTF response above 1.0 at some spatial frequencies, so having some sharpening applied seems appropriate.

     Overall it seems to do a pretty good job capturing the grain; here's the next frame from the YT download so you can compare what is grain and what is detail and texture in the scene.
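The "grain is most noticeable in the mids and shadows" observation can be mimicked with a luma-weighted grain overlay. A simplified NumPy sketch; the weighting curve here is my own guess for illustration, not Film Look Creator's actual grain model:

```python
import numpy as np

def add_luma_weighted_grain(img, amount=0.03, seed=0):
    """Add gaussian grain scaled to be strongest around the mids,
    weaker in the shadows, and fading to nothing in the brightest
    areas. img: float array (H, W, 3) in [0, 1]."""
    rng = np.random.default_rng(seed)
    luma = img.mean(axis=-1, keepdims=True)
    # Weight peaks near mid-grey (~0.4) and falls toward 0 and 1
    weight = np.clip(1.0 - ((luma - 0.4) / 0.45) ** 2, 0.0, 1.0)
    noise = rng.normal(0.0, 1.0, img.shape)
    return np.clip(img + amount * weight * noise, 0.0, 1.0)
```

This reproduces the behaviour noted above: a mostly bright or mostly dark frame shows far less visible grain than a mid-toned flat surface like the sky shot.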
  24. Great info from John, and I definitely agree on the zoom + fast prime combo. I have the 14-140mm and love it. I was tossing up between it and the 12-60mm F2.8-4.0 because I wasn't sure how often I'd use the 60-140mm part of the range, but since getting it I've been really surprised at how often it comes in handy. Essentially, it means you can shoot whatever you can see, which really helps when you're trying to give a sense of a place. It's slower than the 12-60mm, but neither is a low-light lens and the DOF differences aren't relevant in a doco situation. Here's a video I did showing the stabilisation, but it should also show you the versatility of the lens.

     I shoot travel videos and have found that AF zooms best allow you to document the places and experiences you're in, as they support the approach I've developed to shooting:

     - Shoot a good variety of shots so you have lots of options in the edit
     - Shoot the wide so you have an establishing shot; shoot the people, the buildings, the motion, the colour
     - Shoot the space (especially if it's large), shoot the details, look down at the ground and up at the buildings / trees and the sky
     - Think about what makes this place special and shoot that
     - Think about what makes this place feel the way it does and shoot that

     In general, the faster you can shoot the more you will capture and the more authentic it will be, because it will be more spontaneous and based around your initial impressions, rather than shooting slowly and having too much time to think about it. Plus, sometimes things happen very quickly, and often they're the most important things to capture.

     I'd also second the idea from @tbonnes of combining the action cam and the mirrorless. The action cam can be mounted on the bike ready to grab footage at a moment's notice and doesn't need to be put away even in torrential rain or a dust storm. Then, once you've stopped, you can pull out the mirrorless and get some shots. If you're a masochist you can even go ahead, set up the mirrorless and hit record, go back again and ride through the frame, then go back and retrieve the camera. It seems like a great way to shoot a film and a spectacular way to remove as much pleasure from the experience as possible.

     This raises the other option - a drone. It's the fastest way to get shots of you without having to ride the same section of road three times. The laws for flying drones seem to have stabilised in a lot of places, allowing drones under a certain weight, but it's something that would require an incredible amount of research beforehand to make sure it wouldn't get confiscated or get you into hot water just for having it.