kye


  1. I heard this recently and think it's pretty interesting. I'm not sure it's the best definition I've read, but it's more practical than most, so it's useful from that perspective. “He who works with his hands is a laborer. He who works with his hands and his head is a craftsman. He who works with his hands and his head and his heart is an artist.” - Saint Francis of Assisi I'm 100% for not gatekeeping. Even from a practical perspective, declaring that someone/something is or isn't 'art' doesn't mean anything, and people who like to be critical are really just telling us about themselves, not the thing they're talking about.
  2. Luc Forsyth likes it, or what he saw at NAB anyway. This is a setup they had with a broadcast servo-zoom lens on it. His comments (link with timestamp):
  • He's worked on the survival show Alone for a few seasons and they use dozens of GoPros, but the footage always looks like it came from a GoPro
  • This new model with a proper lens attached looked like footage from a real camera
  • The broadcast zoom setup (with a phone as a monitor) handled like a proper camera
  • He doesn't use AF when rigging cameras to vehicles etc most of the time, so the lack of AF doesn't bother him in that context
  3. I think it's a loss for us, as he was doing independent testing that no-one else was, like DR, and commenting on various combinations of modes and features, especially which combinations couldn't work together. This is all information the others don't bother with because their 'reviews' are really just product showcases or first-looks. As much as the camera journalism and independent review ecosystem is in a sorry state, it just got worse.
  4. Great write up and thanks for making the effort. I can see that shooting the Alexa with a neutral / clean lens with deeper DOF, crushing the whites / blacks, and not having carefully sculpted lighting etc would make it an easier act to follow for MFT. As I see it, the limitations of the GH5 compared to the Alexa would be the colour science on skin tones etc, DR, and shallower DOF with character lenses - most of which weren't significant in how it was shot.

I have no experience with an Alexa but I've heard that it's a two/three person camera and that operating it solo is difficult. When I think about things like that, combined with the weight and form factor, I can really understand how limiting it would be to operate compared to how fast the GH5 etc are.

I do have some idea about coverage and how incredibly demanding actual "real" productions are. When I analysed Parts Unknown and saw the quantity and quality of shots required for a 40 minute episode I was blown away. Most shots were professional but not incredible, but there were something like 1000 of them in each finished episode, which they manage to get in something like 5 days on location.

I suspect the speed and flexibility difference between the Alexa and GH5 is really a microcosm of the DSLR Revolution. Sure, some of that would be shooting style from the operators and some would be camera choice (ARRI made the Amira to be much more portable/faster), but even between an Amira and a GH5, if the goal is getting as much acceptable-quality coverage as quickly as possible then the lighter camera has the edge for sure. Pair it with one of those tripods where a single mechanism releases all the joints simultaneously and you'd be able to cover a scene very quickly.
I remember doing a graphic design course back in the day and they said that you can use whatever stock images you like for your projects, and as long as they don't actually clash with the theme then no-one will notice. Since hearing this I have paid attention to such things and it's definitely true - the graphics really don't have to be related at all. I suspect b-roll is partly like this too: as long as you have someone talking and include things that are vaguely related to the subject, it'll work like forgettable eye candy to keep the viewer's attention. The Kuleshov Effect is working in your favour for sure. I love the quote "kids love colour and motion", which I think was from a movie and used very sarcastically, but I suspect some of the purpose of b-roll is just to keep that part of our brain from getting bored while we're listening to the person say the thing. Of course, there is an art to it, and talented people will be shooting and making edits that create magic by being a lot more than the sum of their parts.

Great to hear you were able to navigate the politics and that the end result was a success in the eyes of the boss.

Going back to the ARRI takeover and strategy, the fact that ARRI created the Alexa Mini as a 'special use' camera and then everyone switched to it for whole productions says (to me at least) that there's demand for smaller camera packages. It would be amazing if the new management didn't realise this; hopefully they see what they can do with smaller bodies still. I'm sure ARRI would have a good idea about sales figures for cameras like the RED Komodo and Komodo-X, which are very small, and that must further emphasise the demand for smaller packages. I understand that cinema cameras potentially do things like heat/cool the sensor so it's at the optimum temperature, and this requires size/weight for the mechanisms and significant battery power too, so maybe making things smaller is more difficult than we'd imagine.
I like to point out to people that the GH7 has a lot more stuff in it than the smaller cameras people compare it to (IBIS, cooling, internal RAW, etc), but in this case we're comparing mirrorless cameras with cameras that literally have heaters in them, so it's not a straight comparison by a long shot.
  5. Fascinating, and reassuring too. This is why I concentrated on colour grading - the hardware was good enough and the gap was squarely with me. Can you shed any light on what colour grading / image processing was done to get an acceptable match? Was there any particular way you shot with the GH5, or lenses etc you used in order to get it to match? I would think (if it was me) that going out shooting with a GH5 knowing it would have to be intercut with Alexa footage would trigger lots of thinking about how to best go about it so it would be good enough.
  6. Resolve 21

    It's an interesting update for sure. While I find the upgrading process too much of a PITA to bother with unless there's a killer feature I really want to use, there are a few things in there that are interesting from an AI perspective.

The first is the AI Face tools, with AI Face Reshaper & AI Face Age Transformer. This is interesting because it shows their ability to track and understand faces is vastly improved from the previous generation Face Refinement tool, which was obviously designed to have very soft masks because their tracking wasn't that great. I did an excellent course in Beauty Retouching which used Resolve, where you apply different treatments to each area of the face, as each has a different tone/colour/texture, and you had to mask each one manually yourself. The ultimate would be for the AI face tools to detect the face and output a mask for each area, automating the masking/tracking.

The second is Adjust Focus with AI CineFocus, which simulates a shallower DOF and is a combination of a blur plugin with their depth map plugin. When the depth map plugin came out I tried it on some deep-DOF shots to see how it did, and the results were worse than the iPhone 'cinematic mode': the edges were a very obvious blurry transition, and you couldn't apply anything more than a barely perceptible blur before the edges ruined the shot. The fact this is now an integrated plugin means it's gotten good enough that they're willing to put it forward for this application. It's probably still a long way from blurring the background while keeping each hair on the subject in focus, but it shows increased confidence. I know they are also doing tonnes of little things in the background too.
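This isn't how Resolve implements CineFocus internally - just a toy Python sketch of the general idea of combining a blur with a depth map (both function names are made up). The soft falloff on the mix is what avoids exactly the hard blurry edge transition described above:

```python
def box_blur(img, radius):
    """Naive box blur with edge clamping; img is a list of rows of floats."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, n = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy = min(max(y + dy, 0), h - 1)  # clamp at the edges
                    xx = min(max(x + dx, 0), w - 1)
                    total += img[yy][xx]
                    n += 1
            out[y][x] = total / n
    return out

def cine_blur(img, depth, focus_depth, tolerance, radius):
    """Blend sharp and blurred copies per pixel using a depth map:
    mix=0 keeps the pixel sharp, mix=1 uses the fully blurred copy,
    with a soft ramp in between instead of a hard in/out-of-focus edge."""
    blurred = box_blur(img, radius)
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            mix = min(abs(depth[y][x] - focus_depth) / tolerance, 1.0)
            out[y][x] = img[y][x] * (1.0 - mix) + blurred[y][x] * mix
    return out
```

A real implementation would use a lens-shaped bokeh kernel and a much better depth estimate, but the structure (blur, depth mask, soft blend) is the same.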
I went through a phase of posting to the BM forums and suggesting features as I came upon things that annoyed me, and to my amusement professional colourists (including from Company3) replied to say they'd been suggesting the same improvements to BM for years, and I notice that a number of these have been fixed in the last few versions. Still, there are gaps in the things I'd really like.

One is the stabilisation, which can't handle any kind of shot that isn't perfectly rectilinear, and has no support for removing rolling shutter etc. This is possible - I went down a deep dive some years ago looking for a solution and there was a product that did it flawlessly, but it was in the thousands-of-dollars price range so it wasn't worth it for me. The stabilisation also lacks the ability to stabilise the tilt/pan/roll/zoom by different amounts. If I shoot with the BMMCC and an OIS lens, for example, the lens stabilises the tilt/pan quite well but has zero roll stabilisation. I'd like to stabilise the roll almost to 100%, keeping it nearly perfectly fixed, while stabilising the tilt/pan maybe 40% just to smooth off the rough edges. This isn't possible, except by building something in Fusion, which apart from forcing me to learn Fusion also requires going into the Fusion window to track the shots - I can't build a custom OFX and then apply it in the Edit or Colour page. I don't know why BM didn't just make the stabilisation occur in a node; that way you could apply it several times however you wanted, but instead it's a 'special' thing that happens once in the image pipeline, and once only.

My biggest wish for Resolve 22 is lens emulation. Like the Face and CineFocus tools, the lens emulation ingredients are all there if you combine them manually, but integrating them into one plugin would be pretty sweet!
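Resolve doesn't expose anything like per-axis strengths, but the idea is easy to sketch: low-pass each rotation axis separately and blend the smoothed path back in by its own strength. A minimal made-up Python illustration (not Resolve's algorithm; `smooth_paths` and its parameters are hypothetical), e.g. roll at 1.0 and tilt/pan at 0.4:

```python
def smooth_paths(samples, strengths, alpha=0.5):
    """Per-axis stabilisation sketch.
    samples:   list of (pan, tilt, roll) camera angles, one tuple per frame
    strengths: per-axis blend, 0.0 = untouched, 1.0 = fully smoothed
    alpha:     low-pass responsiveness of the smoothed path (EMA factor)
    """
    ema = list(samples[0])  # smoothed camera path, one value per axis
    out = []
    for frame in samples:
        corrected = []
        for axis, angle in enumerate(frame):
            ema[axis] += alpha * (angle - ema[axis])  # low-pass this axis
            s = strengths[axis]
            # blend raw angle toward the smoothed path by this axis's strength
            corrected.append(angle + s * (ema[axis] - angle))
        out.append(tuple(corrected))
    return out

# roll fully smoothed, tilt/pan only 40% smoothed
stabilised = smooth_paths([(0, 0, 0), (10, 0, 4), (0, 0, -4)],
                          strengths=(0.4, 0.4, 1.0))
```

A real stabiliser would then re-project each frame by the correction angles; the point here is only that nothing stops the smoothing strength being a per-axis parameter.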
  7. CineD posted this interview. I haven't watched it yet, as I'm not exactly in the market for an ARRI anything, but I'd be curious to know if there are any plans in there to go for smaller cameras. With MFT bodies getting larger and larger and ARRI bodies getting smaller, maybe we're at the point where they will meet!
  8. Yeah, agree with you, and you should have fun with your lens choices - some cool stuff in there. One thing you might have fun with is matching the 9mm PanaLeica to your less clinical lenses; I have the 9mm and it's incredibly sharp - my sharpest lens by a long shot I think. Just remember, amateurs ask "what is the best way to sharpen your footage?" and the pros reply "actually, I soften the image on most projects I grade".

I find the GH7 to be almost invisible, which seems bizarre when you consider the size and weight of it, but it's true. With my AF zoom lenses I find a composition by eye, raise the camera, adjust the screen if required, adjust the zoom, hit AF-ON for it to focus, check the histogram to make sure exposure is good and adjust vND or ISO if required, then hit record. With manual lenses I do the same thing but normally hit record then manually focus the lens. In situations where the light isn't that variable, it's just a matter of vND -> hit record -> focus. It's like the camera is just a screen with a record start/stop button on it, and the rest of the action happens on the lens and in front of the camera. The experience is more like I'm operating a lens rather than operating a camera.

People talk about how complicated the menus are in this camera or that camera and TBH I don't really know what they're talking about. You buy a camera, work out what modes you might be interested in, test them, then choose which mode you'll use and save that to a profile, and from that point on the camera is essentially a box where you only adjust something once in a blue moon. The reviewers act like you're taking one shot in 48p 5.7K Prores HQ V-LOG and the next in 23.976p 1080p h264 HLG and the next in 30p C4K h265 in Sepia or some BS. Quick - the bride is about to walk down the aisle - change settings! Why can't I assign the shadow slider in the picture profile menu to a hot button?!? It would save me so much grief!!
  9. What are you shooting and what lenses are you contemplating? I can't recall what your other normal equipment is, but MFT opens up a whole Pandora's Box of possibilities with adapters etc too (if you don't need AF), so that can be fun as well. I've been shooting street video at night with two combos that work incredibly well. The first is my 42.5mm F0.95 with the Sirui 1.25x anamorphic adapter on the front, making it an equivalent of 68mm F1.5 and creating beautiful rendering wide open, but the combo is 1.3kg / 46oz, so I also got a Takumar 50mm F1.4 on a speed booster, which is equivalent to a 71mm F2.0 - much more vintage, but tiny and almost a full 1kg lighter. I'm contemplating a native 35mm F0.95 or F1.4 to replace both and have a less vintage but still lightweight option for travel shooting. I also rock the 14-140mm for day shooting while travelling, and in brighter places the 12-35mm F2.8 is pretty hard to beat. So many great lenses. If you don't need crazy DOF then MFT is a great option, even with the larger body sizes.
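For anyone who wants to check the equivalents above, they fall out of a one-line calculation. This is a sketch that treats the anamorphic squeeze as a uniform horizontal focal-length reduction, which is how the numbers in the post work out; the function name is made up:

```python
def ff_equivalent(focal_mm, f_stop, crop=2.0, booster=1.0, squeeze=1.0):
    """Full-frame equivalent focal length and aperture for an MFT lens.
    crop:    sensor crop factor (2.0 for MFT)
    booster: focal reducer factor, e.g. 0.71 for a typical speed booster
    squeeze: anamorphic squeeze, e.g. 1.25, which widens the horizontal FOV
    """
    factor = booster * crop / squeeze
    return focal_mm * factor, f_stop * factor

# 42.5mm F0.95 + 1.25x anamorphic adapter -> roughly 68mm F1.5 equivalent
print(ff_equivalent(42.5, 0.95, squeeze=1.25))
# Takumar 50mm F1.4 on a 0.71x speed booster -> roughly 71mm F2.0 equivalent
print(ff_equivalent(50, 1.4, booster=0.71))
```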
  10. I have used several c-mount lenses on my GX85/GH5/GH7 cameras without incident, along with many others on these forums. I have the right adapter that goes in a bit further and gives infinity focus, so there should be enough room. I can't speak for other adapters or OM1 cameras, but it's definitely possible with the MFT flange distances.
  11. Getting good affordable 960p would be cool for lots of people. I see the science explainer channels showing bad quality 960p and the richer channels with Chronos setups. Don't get me wrong about them not being cameras that appeal to a large number of people. They're very good for getting the new "EVERYTHING IS AWESOME AND WIDE AND SMOOTH AND DEFINITELY SHARP SHARP SHARP!!!!" style of video that looks more like video than anything ever made before, but as soon as they say it's a cinema camera, there are 27 things they have to change from every other model ever made, and to bet they'll get every single one of them right is a very long shot indeed.
  12. Well WTF - turns out "1 inch sensor" is marketing BS and it's actually 13.2mm wide, making the crop factor 2.73x - only just a touch larger than Super 16, which is 2.88x. Source Lots of MFT glass and also lots of c-mount options too, as already suggested. Ironically, with the lack of electronic contacts you can't control the aperture on most modern MFT lenses, so much of the sharpest glass will be unavailable (making the 8K sensor spec rather redundant!). Getting shallow DOF will probably be more difficult too, especially as we have no idea what kind of focus assists it will have, so stopping down might be the best move (focus wide open then stop down to eliminate any slight errors), and that will sharpen up older / lesser lenses. This might end up being the mythical tiny S16 cinema beast that people have wanted (or said they wanted!). It's much smaller than even the BMMCC, and that's before you realise the BMMCC doesn't have a screen so you have to rig it up to use it. I am still skeptical though - there's lots of stuff we still don't know, and given GoPro's history I have complete faith that they'll include at least one fundamental fatal flaw that will prevent it.
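The crop factors above are just the full-frame width divided by the sensor width - a quick sketch, assuming the usual 36mm full-frame width and a 12.52mm Super 16 gate width:

```python
FULL_FRAME_WIDTH_MM = 36.0

def crop_factor(sensor_width_mm):
    """Horizontal crop factor relative to a 36mm-wide full-frame sensor."""
    return FULL_FRAME_WIDTH_MM / sensor_width_mm

print(round(crop_factor(13.2), 2))   # the "1 inch" sensor's 13.2mm width -> 2.73
print(round(crop_factor(12.52), 2))  # Super 16 -> 2.88
```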
  13. Looks interesting, with an MFT mount and the claim that HyperSmooth works with any rectilinear prime (their stabilisation is a pretty important feature, especially with how small they are). Of course, if they don't have animal eye-detect PDAF then the internet will skin them alive!!
  14. Even better than that, I have the camera on a wrist strap and shoot with it at chest height like you describe, which means that when I'm walking / standing around the camera is barely visible, unlike a shoulder strap where the strap and camera are front-and-centre all the time. Lots of other things come to mind:

If there are people standing around in clumps, stand right next to one of them. This way you'll sort-of become part of the group, so people walking by will just identify there's a group of people there, 'see' all of you as one thing, and walk around you, and people looking around won't be drawn to you as much as if you're on your own against a clean backdrop - this is sort of like camo clothing, where you are trying to obscure your silhouette.

Pause a few seconds before showing the camera. If you walk up near someone and stop, they'll probably glance at you to see who you are, what you want, etc. If all they see is someone doing nothing (ie, not a threat or opportunity) they'll go back to what they're doing.

Shoot people who are distracted and doing things. Most people who are distracted are just on their phones, but contrary to internet hype people do still do other things, and unless you're working on your doco series "People on their phones - Episode 27" it's good to seek out those moments.

Shoot through people / things.

Be careful how you move and approach shots. I try to be very focused on things that are just becoming visible. As soon as you can see them, they can see you, so it's best not to get closer than you need to. The further away you are, the more likely there are to be layers to shoot through too, so that's a bonus.

People also have a sixth sense that someone is looking at them, even if you're looking "at them" on your camera screen, so even though you can approach someone from the side or even from behind, they'll often just turn and look right at you.
I'm not sure how to navigate this, but I'm sure there's some way to influence it that I haven't worked out yet. This lady was facing directly away from me when I started filming and then turned suddenly a few seconds into the shot: The guy nearest me suddenly turned around to look at me, despite none of his friends noticing me beforehand: I know people do look around sometimes, but the timing is uncanny, so it's definitely a thing.

The old trick of finding the backdrop and waiting for someone to come into shot is a good one too, which is what this shot was. It has the benefit that you're not coming into their environment, they're moving through yours. Any situation where you're shooting through layers has the potential for someone to come into shot too. I was shooting compositions using the bike's mirrors and then a lady came and parked her bike right in front of me. I'm pretty sure she knew I was there, but as I was already standing there when she arrived I wouldn't have triggered that 'a new person just arrived' reaction, and as she came to the situation from somewhere else she was probably quite distracted - everything was new to her and she was trying to park her bike too - so it's possible she was completely oblivious to my presence.

Anyway, that's some further thoughts. There's a lot online about how to stealthily take street photos (e.g. Garry Winogrand pretending to fumble with his camera, etc), but much less about street videography, where you have to essentially remain motionless for many seconds while rolling, and you can't 'drive by' people and freeze them with a short shutter speed either. For one reason or another, most of the street photography tricks don't really work. I'd imagine that @BTM_Pix would be deep down this rabbit hole..