Everything posted by kye

  1. I love it when people claim to speak on behalf of an entire world-wide industry, but let's introduce some science... Here is the table of required sample sizes for a given margin of error. Ignore the blue box, someone else put that there. We want to look at the bottom left corner - to get 95% confidence that we're within 5% of what we say, we need 384 samples. So, for those saying easyrigs are common, please provide photographic evidence of 384 films using an easyrig. For those saying they're not common, please provide a time-lapse video of 384 productions showing the DoP over the full shoot schedule not using an easyrig. I look forward to your responses.
On a separate note, I'm thinking of buying a new Canon cinema camera to replace my GH5 for my family's home videos, which I shoot handheld, and I was wondering if it will have IBIS or if I will need to buy an easyrig. Thanks.
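For anyone who wants to check the 384 figure, it comes from the standard sample-size formula for estimating a proportion, n = z²·p(1-p)/e², taking the worst case p = 0.5. A minimal sketch:
```python
import math

def sample_size(z=1.96, margin=0.05, p=0.5):
    # n = z^2 * p * (1 - p) / e^2; p = 0.5 is the worst (largest) case
    return z**2 * p * (1 - p) / margin**2

print(f"{sample_size():.2f}")  # 384.16 - the 384 quoted in these tables (385 if you round up)
```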
  2. Yeah, but can it shoot 8K without overheating? Getting the shot is overrated.... it's all about that sweet sweet megapixel count, everyone knows that.
  3. Yeah, I suppose. I guess from the outside it just looks like they're all fawning over whatever looks cool at any given moment, with the guiding principles of 1) MORE=more, 2) MOAAR!!!!, 3) See #1, and 4) How high do you think I can count? It seems odd to think that the hype would go against the more-is-more principle, but I guess the golden rule is the golden rule. Interesting, and makes sense. I wonder what a setup with the 2X anamorphic adapter looks like, assuming the optical path would work? I kind of liked the look of the ground glass... it had way too many issues across the whole frame, but the texture in the centre was lovely.
  4. I agree with your summary from the perspective of a stills-first shooter. You're right that FF lenses and the speed booster lend huge support to a new system, which is hugely important commercially. It will be interesting to see if FF ends up being the sweet spot. When something is bigger-is-better, it tends to only be better because everything you've seen so far sits underneath the 'best' size, or else the thing you're looking at keeps getting better with size but other factors take over in importance. No-one is going to carry around an 8x10 sensor camera even if the image was beyond belief, except on crazier projects, like that movie where they hand-held the IMAX film camera(!). You're right that it will depend on the other manufacturers getting in on the act and collectively PR-ing people to death and making them upgrade. I can also understand Fuji using this as a strategy. My question is really "if I have to change from MFT and bigger is better, then why would I settle for FF if there are better options?" If there was a MF camera with the features of the A7S3 and lenses available, then I think the camera trendies would be plastering it all over YT.
  5. Cool technique and I can see why you are able to use aperture to set exposure. Not a lot of folks on this board seem to understand that you can stop a lens down. My overall philosophy is 1) get the shot, and 2) make it as 3D as possible. Making it as 3D as possible explains many / most of the things that are done on controlled big-budget sets, and as I don't do some of those things because of (1), I overdo the rest, but just by a little.
I prefer to shoot with a DOF effect similar to my eyes, but just a little shallower in order to help with depth. Even on bright sunny days we still see blur in backgrounds, so that's how I use the aperture. In low light I will open up completely, and although that's a radical thing with fast lenses, it looks more natural because our eyes naturally open up in low-light conditions, so it doesn't seem unnatural. I shoot in public in uncontrolled situations, so it's nice to be able to separate out the subject from the environment to create focus. For example: Excuse the colour grading, they are ungraded or only a quick job.
You can definitely overdo the shallow DoF thing. Philip Bloom's film shot in Greece with the GFX100 had DoF that was too shallow for my tastes, so I'm not a bokeh fiend or anything. Sometimes you want to take everything in (once again, basically ungraded): My thoughts about DoF are that it's about the story you want to create. It's about controlling what the focus of the shot is. Sometimes the situations are so crowded that frames would be chaos and it's difficult to tell who the subject is or know where you're meant to be looking. Obviously that's the main job of composition, and to an extent DoF is about how 'deep' your composition is. Very shallow DoF says "that other stuff back there isn't important" and deep DoF says "everything here is important". It's a creative tool, which is why I don't want to be forced to use it to control exposure.
Exposing with SS isn't my preferred aesthetic, but the number of times that I only just get a shot of the kids doing something funny means that shaving every second off the reaction time really counts. I've even taken the first two frames of a shot (or the first two in focus) and used Optical Flow to create a kind of moving snapshot of the moment, because frame #3 of the clip showed the smiles fading or the kids noticing the camera or something else coming into play, and the moment was clearly gone. Literally, I got the shot but had I been 0.08s later, I'd have missed it. This has happened more times than I can count and often they're the best moments. I don't know about you, but I can't set a variable ND to correct exposure in under 0.04s.
When a camera uses auto-ISO and auto-eND to set exposure I will set the shutter angle to 270 and leave it there forever. I say 270 instead of 180 because I slightly overdo the things I can control in order to compensate for the things I have no control over, like lighting conditions, or sometimes even my filming location (like if I'm stuck in my seat on a tour bus / boat / etc).
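As an aside on that 270 figure: shutter angle converts to shutter speed as speed = angle / (360 x fps), so 270 degrees at 24fps is 1/32s versus 1/48s for a 180-degree shutter. A quick sketch:
```python
def shutter_speed(angle_deg, fps):
    # exposure time in seconds = angle / (360 * fps)
    return angle_deg / (360.0 * fps)

for angle in (180, 270, 360):
    t = shutter_speed(angle, 24)
    print(f"{angle} degrees at 24fps -> 1/{round(1 / t)}s")
# 180 -> 1/48s, 270 -> 1/32s, 360 -> 1/24s
```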
  6. You're right that photographers were all about FF, but I think you have to remember that photographers lusted after Medium Format, but didn't buy it because it was far, far too expensive, and was too slow, with slow AF and slow burst rates (or no burst rates at all) etc. The fact that we have a MF camera coming out that is HALF the price of previous MF models, and is usable in real-world conditions instead of just being a 'studio camera', well, that changes things. I think photographers have more lust for megapixels and sharper lenses than they do for FF, so if they had to choose between FF and MF I think they'd choose MF in a heartbeat. Of course, that involves changing systems, so it's not going to happen quickly, and the price may have to come down for the majority of the stills market to start thinking of it as an accessible option. So yeah, I think people didn't talk about MF because it was viewed as unattainable, but now that's changing, we might see the lust start to emerge.
They are slow to change, and rightly so. If the image was the only thing that mattered then they'd change quickly, but as you know, it's not the most important thing on a film set, and it takes time to understand the new lenses and the new colour science and all that stuff. Plus, it's not like shooting with an Alexa is a terrible place to be as your default option!
I agree that it will be quantifiable, but we haven't quantified it yet, and I've dug pretty deep into this stuff. I think people don't know. It's probably some specific combination of things, which of course makes it that much more difficult to zero in on. In the meantime we only have our aesthetic impressions to go from. There are quite a lot of subjective accounts from highly respected people that larger sensor sizes and certain lenses or lens designs have a certain X-factor, which is really one of the main attractions of MF and the purpose of this thread.
My entire philosophy of image has changed over the last few years as I've gone on this journey. I started out with the philosophy of getting a neutral high-resolution high-bitrate image shot with sharp lenses and then degrading it in post to give the desired aesthetic. What I learned along the way is that:
- resolution does matter and I want less of it, not more, so now I shoot 1080p
- DoF has a much larger impact on aesthetic than I thought, so now I shoot with wider-aperture lenses (sometimes but not always used at larger apertures)
- bit-depth and bit-rate are highly important, so now I shoot 200Mbps 1080 at 10-bit in a 709 profile (not log)
- halation, flares, and contrast can't be simulated very well in post, so now I try to have lenses with a less clinical presentation in these areas
In short, I worked out what is more important and what is less important, and I worked out what I can and cannot do convincingly in post, and moved those things to be done right in-camera. If medium format becomes accessible for video (it's not in my price range yet!) then that's another thing I'll be able to shift from trying to do in post (but not knowing how, as your comment about not knowing what is going on outlines) to doing in-camera and having baked-in to begin with.
  7. Absolutely, and that's why I am even tangentially interested in the format. My expectations of a camera are that the 'rig' is the camera body, an SD card, a lens, an on-camera mic and a wrist strap, and then I put a couple of spare batteries and a couple of other lenses in my bag and I'm off for an 18-hour day, during which I shoot anything and everything that piques my interest. In those circumstances my phone would take better images than a Phase One, because my phone would suit the conditions and the Phase One would be a PITA.
I'm looking at it from the perspective of video-only and also from the perspective of getting good-enough quality with a portable package. For me, shooting with an external monitor / recorder is a downside as it means the rig is larger, heavier, requires more complexity in power solutions, has messy cables, and creates unwieldy file sizes. The A7S3 internal codecs are good enough for me (actually they're radically more than what I'd need, but luckily they have high-quality 1080p and ALL-I codecs). I just saw that the GFX100 can do 1080 at 400Mbps, so I guess that's fine for my purposes. In a sense, the more these cameras are built around external integration, the less usable they are on their own with things like focus peaking and exposure tools etc, so that's not a good thing. Anyway, it's good to have the option of external RAW, but keeping good internal quality should remain a high priority.
I meant that with a 100MP sensor it's odd that it can only do 4K. Considering the hype has moved to 6K and 8K, which are now settled as standards, you'd think that offering these would be a 'home turf' advantage of MF. If you think about MFT, doing 6K or 8K means having to work on new sensors and dealing with all kinds of new issues, but MF was already the king of high resolution, so you'd think these things would be playing to the strengths it already has.
Ok, that makes sense. I guess I see the crop factor as being both a strength and a weakness. It's great if it can use FF lenses, but that also means the sensor isn't so much larger than FF. Given a hypothetical 6x4.5 camera as a competitor, it wouldn't be able to use FF lenses, but would have a huge sensor-size advantage over FF, so would be easily worth the trouble. I guess that brings us to....
For me, I see MFT as having the advantage of being what I already have lenses for, FF as being the thing that is now good enough, has a larger sensor, and has heaps of lenses and overall support, and MF as going away from what I already have, and where all the lenses are, but you'd do it for the mojo. Considering that the GFX is only a little bit larger than FF, but gives the best rendering of that lens you've ever seen and is almost good enough to use the M-word, maybe a 645 sensor would be crazy good and worth all the 645 lens shenanigans that would be required. To me, a format that is only a little bit better than FF seems to be skimping on the thing it really has going for it. Now, of course, there are limits - I'm not going to be lining up at the camera store to buy an 8x10 camera for shooting my travel films, but MF needs to offer something significant over FF to really make it worth the hassle of going through that transition.
@mercer is talking about a certain X-factor that can occur with larger sensor sizes. I've been trying to chase down what this might be, and you're right that it's not FOV or DoF, but it's important to know that the math doesn't explain everything that's going on with sensors and lenses. 
I've tested a lot of lenses in controlled conditions, and when you do these tests you start to see differences that there are no readily available explanations for. An example of this is the Takumar lenses, which render images that are noticeably flatter and less 3D-looking than other lenses, and this is under controlled conditions with everything else being equal. Same focal lengths, apertures, same lighting, camera position, etc etc. It's something that the Takumars are known for. The question is, if the background is the same level of blurriness, then how is the perception of 3D space different? I've been looking at this question for years and haven't come up with anything, except that I've seen it myself enough times to know that something is going on.
Sensor size can have a similar effect: some things look more 3D than other things. Not sure why, it just does. This is one of the attractions of larger sensors. See @BTM_Pix comments above about the Contax lens looking better on it than on any other camera he's seen it on. Why would this be the case? Who knows. I've played with things like this and these effects hold up even if you decrease the resolution, bitrate, and even colour depth, and even if you make the images B&W, so I can't readily find an explanation for it.
FF only took a few years to 'catch up' to where MFT was, and the MF cameras we're talking about aren't that far away from FF in terms of sensor size. Certainly they're a lot closer to FF than FF was to MFT. If there is market demand, which is debatable considering FF still has a lot of hype and many haven't moved from MFT or S35 to it yet, it could be that MF 'arrives' in a few years.
  8. I don't see a difference between 24p and 30p. Or at least, 30p doesn't have the same look to me that 60p has. I figure the only way to get the SS you want is by having an ND. You can have a variable ND, or you can use fixed NDs and then vary your aperture (assuming it's declicked) to fine-tune it. The reason I say that is because there is very little tolerance for variations in SS if you want some motion blur in the frame.
To put it in context, let's say you're aiming for a 180-degree shutter. Obviously a 170-degree shutter would still be fine, but where are the boundaries of the aesthetic? Some say that a 360 shutter gives too much blur, but let's say we're ok with it. Steven Spielberg famously used a 45-degree shutter on Saving Private Ryan because he wanted the aesthetic to be jarring and he wanted the audience to be able to see the bits of people's bodies splattering everywhere when things exploded. Let's say you don't want to go this far, so a 90-degree shutter is our limit. That gives us a working range of 90-360 degrees.
Take the sunny-16 rule. For outdoor exposures at ISO 100 you'd typically have a 1/100s exposure when set to f16. No-one contemplating not using an ND will be wanting to shoot at f16, so let's say they're going to be shooting more like f4. That's 4 extra stops of light we have to get rid of with the SS, so that's a SS of 1/1600. That's a 180-degree shutter if we're shooting 800fps. This is absolutely nowhere near what we need for 24fps, 30fps, 60fps, or even 120fps, so changing from 24p to 30p because you don't want to use an ND doesn't work. Using a graduated ND won't take 4 stops off the brightest part of the image, so that doesn't work. Polarisers won't work either. Maybe you never shoot in bright sunlight, sure.
For me, I realised that I would need NDs that went from something like 6 stops to zero, and even then I'd still need to dial them in every time and miss a bunch of my shots. So I just abandoned using one. I used to use a fixed one on my XC10 because it couldn't do a short enough SS to expose correctly during the day with the aperture wide open, as it was a cinema camera and not a hybrid. I will happily go back to using an ND when they implement a built-in eND that is controlled by the camera automatically, like the Sony cinema cameras have. That way I control the aperture because it's a creative tool, I control the SS because it's a creative tool, and the camera controls the ND and the ISO to cover the full range of exposure values, because neither ND nor ISO is a creative tool.
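To sanity-check that sunny-16 arithmetic, here's a minimal sketch (assuming the base exposure of 1/ISO at f/16):
```python
import math

iso, base_f, target_f = 100, 16.0, 4.0
base_ss = 1.0 / iso                        # sunny 16: 1/100s at f/16, ISO 100

stops = 2 * math.log2(base_f / target_f)   # f/16 -> f/4 = 4 stops
new_ss = base_ss / 2**stops                # 1/1600s
fps_at_180 = 1 / (2 * new_ss)              # fps at which 1/1600s is a 180-degree shutter
print(f"{stops:.0f} stops, SS = 1/{round(1 / new_ss)}s, 180 degrees at {fps_at_180:.0f}fps")
# 4 stops, SS = 1/1600s, 180 degrees at 800fps
```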
  9. I shoot handheld and don't have problems with pans without a subject, but they're not really a big part of what I shoot, especially now I have a 15mm-equivalent lens, so landscapes etc don't require that much panning. The issue with 24p panning is that the 180 shutter obscures the detail, but that doesn't bother me as I'm shooting with auto-SS for exposure, so normally very short shutter speeds. I'm aware it makes the video less cinematic, but not having to use an ND means I get about 20-30% more usable shots, considering that much of the time I see something happening and only just get the camera going and in focus, and in the edit I end up using the first 2-3 seconds of the clip, so if I had to manually expose with an ND then I would have missed the moment.
I chose the GH5 because of the 10-bit internal and IBIS. Even now, if I was re-buying my whole setup I would still consider the GH5 as the best option due to the 200Mbps 422 ALL-I 1080p 24p and 60p modes downscaled from 5K, the IBIS, the fact it's much lighter than things like the S1H, the fact I manually focus so don't care about AF, and the MFT crop factor giving me a 2X zoom on long lenses, which I use for filming sports in the 120p mode. Is it the best camera available? No. Is 4K60 its only defining factor? No way. It is still considered a workhorse today, and the GH5 FB group I'm in has 37k+ members and a steady stream of people buying a GH5 for the first time and asking questions as they familiarise themselves with the camera, or asking advice about buying a GH5/GH5s as a second or third camera for their setups. The group is interesting in that it seems to be full of people who are shooting things, posting their work, and are still very excited about the benefits that it provides.
  10. Good info. Interesting to hear it's great for both stills and video. Are you saying that it's better than the Phase One and Hasselblad MF cameras? Or are they in a different category in your eyes? The addition of Prores RAW certainly gives it a serious shot in the arm for video, although that puts it more in cine-camera territory in terms of form-factor, compared to cameras like the P6K or A7S3 that can record at ferocious bitrates internally or to nicely compact USB-C drives. It also gives a camera a big price shift in the wrong direction. I'm also a bit surprised that, with a 100MP sensor, it's only 4K30 12-bit. A P6K can do 6K60 internally. Considering that MF is all about huge resolution it's a bit odd... I did appreciate the composition in the video at 1:44 though!
How often are you utilising the full resolving power of the sensor in your still images? I stopped taking stills when I got into video, but I recall the old "no-one needs more than X megapixels because the larger the print, the further you view it from" adage, and although I disagreed with people asserting that the magic number was 2MP, 3MP or 5MP, my gut suggests that it's probably less than 100MP unless you're the FBI picking out terrorists from a crowd or whatever.
It's interesting that these are so close to FF: it means you can use FF lenses, but it also means that any "Tom the DP" size advantage that it gives you (or @TomTheDP ) would be relatively minimal. It's interesting you mention that, because in one of the other Prores RAW videos I saw, I noticed a Canon TS lens pop in there... Do they all cover larger sensors?
  11. I found this an interesting video: There's a few things about this that struck me. First, it looks like an ad, which is odd. The other things are that they shot the whole thing on the camera hand-held, that the lenses seemed to cover the basics (but weren't especially fast), and that it didn't look fundamentally different to a well-shot video from <insert nice cine camera here>.
@elgabogomez I agree that you have to consider the whole package in terms of codecs, tech features, battery life, ergos, etc etc. I also agree about the relative newness of the format with lenses and other supporting factors, as @noone says, but think about where FF was 3 or 5 years ago, without the frame rates, stabilisation, etc that FF now offers and that are now considered requirements rather than notable features. Your Polaroid 600SE might appreciate in value proportionately to the newer MF systems, and you might be able to swap the Polaroid for a complete MF setup at some point in the future, as more players enter the MF space and lens systems are built out etc.
I think it will be a very, very long time before video can do fake DOF like you're suggesting @Video Hummus - as you say, the effect has to be consistent across many frames, and considering the push for ever-higher resolution, the bar is continually being raised on what level of quality the fake effect has to achieve. I suspect that if you programmed the latest iPhones to take 24 Portrait-mode photos per second and downscaled the result to 640x480 the effect would be perfect, and maybe it would even hold up at somewhat higher resolutions, but 4K? Not a hope in hell. Sensor technology might have to have an oversampling factor too - you might need an optical and depth camera pair that operate at 5x the resolution of the output image for the result to be video-perfect - but as sensor resolution increases so do expectations about distribution resolutions, so it might be an equation that won't be answered for some time.
  12. No offence taken! I've played with shutter angle while waving my hand in front of my face, and I've gotten a sense of how 24p is different to reality. The subjective experience for me is that 24p has a 'look' which is made to look the least unnatural by having a shutter angle somewhere in the 120-240 degree range, depending on your mood and whether it's dark etc. But the thing is that 60p doesn't look more neutral to me; it looks like it has about the same amount of a 'look' in comparison to reality that 24p does, but the aesthetic of that look is very different.
24p seems to have a kind of 'heightened sense' aesthetic, like reality can have in moments of strong emotion. Kind of like the visual component of "time slowed down", and in a sense it's an effect that increases the romance and emotion and depth and pain and very texture of experiencing the world as an emotional animal. 60p has an aesthetic that makes reality seem like every atom has been lubricated and everything is kind of slipping all over itself, kind of like everything is falling in slow motion except that it's doing it at the speed of reality, and perhaps a little too fast for comfort. It has an aesthetic like the love child of slipping over in the bath, being scammed by a con artist so good that the only warning you got was that everything was happening slightly too easily, and what I imagine it would be like taking pills that make you smarter and give you superhero reflexes. In my mind, 24p has a more relatable aesthetic: it fits with things that I occasionally experience in my sober real life, and it's also familiar from watching movies and TV, so that's an advantage too. 60p has an aesthetic that I have never experienced in sober real life. 24p disappears, but 60p never seems to fade away into the background; it's like I've had my brain downloaded into a robot body and somehow they got the code wrong.
My answer to your question about what to film for a simulation ride was 60p, not because it mimics reality, but because in motion simulations it's been shown that lower frame rates make people nauseous, even though 60p doesn't look like reality or like 24p. So people would come out of the ride having kept their lunch and having had an experience they'd describe as "wow, it really was an experience" rather than "I watched a movie and the seat moved". Talking about frame rate and shutter angle to mimic reality is like talking about drawing with crayons to mimic a moving sculpture - there's enough similarity to make it seem reasonable to ask the question, but the best you can do is choose between fundamental compromises that cannot all be met.
  13. My first answer would be "find someone else to do this". My second answer would be "no really, I'm not the person for this task". My third answer would be 60p 360 shutter. But that's only because you said that the displays are limited to 60p. If I had access to a 24-240fps display that could do any frame rate in-between then I would test the 1080p VFR mode at 24/30/60/90/120 and 180fps. I would then downscale the whole lot to SD in order to eliminate the fact that the slower frame rates have more data per pixel than the higher frame rates. Based on that I would then look at the resolution/framerate/bitrate combinations that the camera could offer, and work out how to choose between them, shoot identical test materials, and do a blind test with an audience to determine which mode to use. Of course, it's a ridiculous question, along the lines of "if you needed to paint the ceiling but only had a drawer of cutlery to apply the paint, which utensil would you choose, and by the way you're limited to only a fork or spoon".
  14. Yeah, creativity comes first, not second, or ninth, or last... but a DSLR form-factor operating at a high SS and frame rate would be great for sniping shots from very high-risk angles, like holding it up above the people in front of you, holding it out a window from a moving vehicle, or trying to track an athlete or vehicle as they go screaming past you on a track. The upgrade from 20MP 10fps stills to 50MP 60fps slightly compressed stills would be an enormous upgrade in terms of getting the shot. I took my modified action camera out today to a nearby national park and filmed some stuff, and the shooting experience was completely different holding a camera the size of a matchbox compared to holding a full-size camera - people don't notice you in the same way, and you can throw it around in a different way too. My GH5 captures lovely images, but those images are different because the camera affects where you are, what you shoot, and how the people you're pointing it at react to it being there. No IQ upgrade can make those changes, which is why your RED got put back in its box.
  15. Shoulda-woulda-coulda. I remember hearing about this funny thing called "bitcoin" that was a "virtual currency" that people were generating with their home computers. I contemplated setting up my home PC to generate some. I was also contemplating SETI@home, but never got around to either of them. My thought was that if I did, I'd just run it all night and all day while I was sleeping and at work. I genuinely have no idea when that was, but I remember that when it hit the news that you could buy a pizza with bitcoin, I had already heard of it, so it was even before the pizza-buying days. Had I done it, not lost the digital wallet, not got robbed, and not had an ego-centric-rockstar-complex-related breakdown from getting rich, I probably would have been a billionaire by now.
  16. This idea comes up every so often; the first time was when mere mortals could get their hands on 4K cameras. The results at the time were "sure", and (IIRC) Popular Electronics magazine featured a cover photo that was a still from a 4K camera - and with the 8K and hyper-datarates of today's cameras, the IQ would obviously be far higher now. One of the interesting counter-arguments came from Peter Hurley, a headshot photographer who did a trial with a RAW-shooting cine camera: the benefit of always "getting the shot" by never missing the moment was more than offset by the work in post of having to find the frames you wanted, and all the media management that comes with the huge files. Obviously that was the perspective of a stills photographer, so if you're shooting video and stills, then pulling the stills out of the video would be a no-brainer: even though it will take work to go through the footage and find the right frames, you're partly reviewing the footage during video editing anyway, you're colour correcting the footage anyway, you're managing your media anyway, plus you don't have to pay attention during the filming process to stills and video separately, and can capture both simultaneously using the one set of equipment. Plus it gives you the ability to reframe in post for the 4K(TM) that every client seems to want, despite not knowing WTF it even is, let alone knowing that resolution is the last thing that makes a video great. I'd suggest that you'd still want the old three-camera setup for the ceremony, reception speeches, and reception dancing, although the ability to shoot wider and reframe in post might mean you can get away with not having a second shooter but still "simulating" one in post, with things like cutting from a close-up to a wide, and maybe even a pan/tilt if your editing style warrants it.
  17. Yeah, FF to MF is one "step"... just like MFT to APSC/S35 is 1.33, APSC/S35 to FF is 1.5, and FF to MF is either 1.27 or 1.56 depending on the sensor. Adding to @BTM_Pix's comments below, MFT -> FF is very similar to APSC -> MF, with 2x versus 1.89x or 2.3x. I just looked up 645 lenses and wow, I didn't realise that the 6x4.5 frame has a crop factor of 0.58, which is even more extreme than the Phase One etc MF cameras. You're right about anamorphics, essentially increasing the sensor size through optical compression/decompression techniques. Although, with the 'lesser' size increases in the MF cameras on offer, maybe we should be using those open-gate with anamorphic adapters on 645 lenses... Why did I think of the Titanic when I read that? Anyway, moving on! You're right about the wow factor and feeling like you should be getting something impressive after re-buying all your equipment. Do we know what the new GFX100S image is likely to look like? If MFT was abandoned and I switched systems, then a compact MF camera with a decent 1080p mode would certainly tempt me to just skip FF. Do you have any idea why MF should be fundamentally a nicer image than FF?
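For reference, those "step" numbers fall out of the ratio of sensor diagonals. A quick sketch using commonly quoted frame dimensions (illustrative only - the exact crop factor depends on which dimensions you assume, which is presumably why different figures get thrown around):
```python
import math

# (width, height) in mm - commonly quoted figures; actual sensors vary
formats = {
    "MFT":        (17.3, 13.0),
    "APS-C":      (23.6, 15.7),
    "FF":         (36.0, 24.0),
    "GFX 44x33":  (43.8, 32.9),
    "MF 53.4x40": (53.4, 40.0),
    "645 film":   (56.0, 41.5),
}

ff_diag = math.hypot(36.0, 24.0)  # ~43.3mm
for name, (w, h) in formats.items():
    d = math.hypot(w, h)
    print(f"{name:10s} diag {d:5.1f}mm, crop vs FF {ff_diag / d:.2f}, step from FF {d / ff_diag:.2f}")
# The GFX works out to a ~1.27x step up from FF and the 53.4x40 backs to ~1.54x,
# close to the 1.27/1.56 above; 645 film comes out around 0.62 crop by this math
```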
  18. From where I sit, as a GH5 owner noting the year-on-year absence of a GH6, combined with Hollywood types having their own FF fanboi crush on the new "LF" cinema cameras, the future looks potentially like larger sensor sizes. In anticipating a FF future, and thinking of my love of vintage lenses, I had a period of a few days where I contemplated buying some FF lenses just to keep in case I'm forced to abandon MFT some years in the future. During the tumble down that particular rabbit hole, I wondered if I should jump straight to Medium Format lenses, as potentially they've escaped the FF-fad inflation factor, and it would also hedge me against a take-over by Medium Format cameras. I have since worked out that if that happens I can just factor the lenses into my system swap, and that with the plethora of new cheap fast glass coming from China, there will be acceptable options available when I make the change. I also realised that the 'look' of vintage primes is mostly due to some combination of simpler optical recipes, lower manufacturing tolerances, and less sophisticated coatings. The fact that cheaper lenses typically have simpler optical recipes and lower manufacturing tolerances covers off those angles, and the less sophisticated coatings can be emulated with filters, which are widely available.
This all made me curious... What is the current state of MF video? Resolutions, bitrates, bit-depths, codecs? Do you think that MF will overtake FF? Before you say it's not possible because of sensor read speeds and sensor size for IBIS motors, consider that the GH5 was miles ahead of the FF video offerings when it was released, but the current crop of FF cameras have made up most/all of that ground in the subsequent years. I expect this to continue, so having 6K120 and 5.5-stop IBIS on a Medium Format camera is simply a matter of time and market demand, not a problem with the laws of physics.
I have absolutely LOVED the video I have seen from medium format cameras in the past, and I'm not sure if that's the lenses (which are ridiculously high-quality) or the larger sensor capturing more light or the shallower depth of field or the fact that the colour science of a 5-figure camera is potentially better than what CaNikon is famous for, but the images were moving in a way seldom seen from other cameras, and almost all MF video had that glorious feel. Thoughts?
  19. Human vision is fundamentally different to the way that video works, so there is no frame-rate and shutter-angle combination that makes sense. To expand on this, imagine you have a fan with only one fan blade, and imagine that it's spinning quite quickly. We would see the fan blade as a blur between (let's say) the 12 o'clock position and the 6 o'clock position. Then a tiny bit of time passes, and now we see the fan blade as a blur between 1 o'clock and 7 o'clock, etc. To put it into traditional video terms, the shutter angle is much, much more than a 360-degree shutter.
There have been attempts to actually simulate this. They filmed scenes at a very high frame rate with a 360 shutter, and then combined many frames together: let's say output frame #1 is built from capture frames 1-100, then output frame #2 from capture frames 11-110, etc (see the sketch at the end of this post). In this way, you can have a shutter angle that is larger than 360 degrees. You could also do things like have the motion blur fade out rather than be uniform across the whole trail.
I think this might be what we're running into when we talk about 24p vs 60p. Maybe 24p has the right motion blur, but 60p has the right refresh rate, yet can't have a shutter angle of more than 360 degrees. I believe the computer games world has worked out that the human eye can't detect anything beyond a certain frame rate, i.e. 120fps or 240fps or something, so there's no point rendering a game faster than that. So what we need is a frame rate at that pace, but with motion blur around 1/50th of a second (corresponding to 24p 180 shutter), which with current technology isn't possible. Thus, the 24p vs 60p debate will never be resolved, because the technology isn't the right kind of design. Actually, the deeper issue is that 24p is a problem because people who do video use equipment designed for computer gaming, but don't know that that's what they're doing.
Do you recall my earlier post where I said that film-making is deceptively simple and that people don't know what they don't know? This is one of the things I was talking about. There's no real effort required to get great 24p - just buy equipment designed for film-making and not for computer games. There are a huge number of external display adapters available for purchase, and they're very affordable too. BlackMagic sells a bunch of them here: https://www.blackmagicdesign.com/products including the Decklink, which is $145 for 1080p and $195 for the 4K version. I suspect these only work with Resolve, but there would be others that work with other NLEs. These will also give you support for 10-bit, HDR, and SDI if you have SDI monitoring equipment, and perhaps best of all, they are a completely managed colour pipeline, so the operating system and display drivers and all that crap can't stuff up your colour calibration, giving you a completely calibrated display to work from. Most monitors will happily display a 1080 or 4K signal at 24p if that's what the hardware is giving them, so all you need is one of these interfaces and all the problems you're facing will go away.
You could make the argument that this gives you a great 24p pipeline but doesn't solve it for everyone viewing your videos, and that's true. For them, it will be a mixture of watching on computers designed for gaming, phones, and smart TVs. People watching on computers probably aren't going to have good 24p playback, but as has already been mentioned, will they even notice? 
I'm not sure about phones, but smart TVs may do this happily, considering they're designed for media consumption, not gaming, though it might well be patchy. I remember setting up my media boxes to be PAL and not NTSC (before I had a completely smart TV), so they were definitely broadcast-focused rather than having a PC/gaming mentality. Also, you'd be surprised at how many people can spot the 50p "soap opera effect". I doubt that many would spot the difference between 24p and 30p, but you never know. If you ever sort out your equipment to give you proper 24p (or 25p) playback, you could test your friends and family and see if they can tell. You might be surprised.
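Here's a minimal sketch of that sliding-window averaging idea, assuming the clip is held as a numpy array. The numbers are hypothetical: a 2400fps capture with a 100-frame window and a step of 10 frames yields 240fps output with 1/24s of blur per frame, i.e. an effective shutter angle of 360 x 100/10 = 3600 degrees:
```python
import numpy as np

def simulate_long_shutter(frames, window=100, step=10):
    """Average overlapping windows of high-fps capture to mimic a shutter
    angle beyond 360 degrees: output k = mean(frames[k*step : k*step + window])."""
    outputs = []
    for start in range(0, len(frames) - window + 1, step):
        outputs.append(frames[start:start + window].mean(axis=0))
    return np.stack(outputs)

# Dummy 1-second clip at 2400fps; tiny frames to keep the example cheap
capture = np.random.rand(2400, 90, 160).astype(np.float32)
out = simulate_long_shutter(capture)   # 231 output frames, played back at 240fps
# Blur per output frame = window / capture_fps = 100/2400 = 1/24s
```
Replacing the plain mean with a decaying weighted average (e.g. np.average with a weights argument) would give the "fading" motion blur mentioned above.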
  20. The D-Mount project

    As promised... Shot on the mighty SJ4000 with a replacement 8mm M12 lens. I brought the footage into Resolve and pounded it with a hammer until it no longer looked like a cheap modern camera, but instead reminded me of an expensive older one. I might have been a little heavy-handed with the film grain though; I thought YT would compress it a bit more. Anyway, enjoy.
  21. Sounds familiar, except the part about 8mm footage being available. I think there's a VHS tape with me on it from when I was about 10, but I lack a VCR, so it'll sit in a box for a while, I'd imagine. I seem to remember the tape wasn't that interesting lol. I do wonder if the footage would be interesting to our future selves and descendants, but if I ask the question of myself, the answer is yes, I'd be very interested in seeing clips of my grandparents or great-grandparents, even if they were shot on a potato. But if that's not the case and one day my storage goes belly-up, I'll have had enough fun along the way for it all to have been worthwhile!
It makes sense that higher framerates might be more revealing and therefore make CGI imperfections more visible. Certainly HD and FHD had that effect - I heard that productions had to spend more money on makeup because little imperfections that didn't use to be visible became a problem, and they had to work slower.
Lots of stuff in here. I understand about BRAW, but wasn't explicitly aware that you could get lower-resolution BRAW from the whole sensor. I'd be interested to know if it's downsampling or simply line-skipping / pixel-binning. If it's downsampling then that's a cool thing. The comparisons between RAW / BRAW / Prores and h265 / AVCHD performance in post are really about IPB vs ALL-I. Another example of something that the GH5 does right but no-one else is doing, because they can sell you their own RAW flavour and make you buy their external recorder or their NLE. Tech companies are assholes sometimes.
I understand that extra resolution in post has advantages, and the more sophisticated your workflows, the more resolution is useful to you. I've kind of gone the other way with my workflow development. I started out wanting the highest-quality capture (which meant 4K with sharp lenses) and was aiming to do all the hard work in post. Now I've worked out what look / aesthetic I want, I am aiming to get it right in camera as much as I can. I have moved away from shooting to crop, post-stabilisation, and shooting log to colour later. I've worked out that I have much more in common with film-makers than videographers. My client is me, and I can make technical decisions to please myself and fulfil my vision and objectives, instead of having clients who change their minds after the shoot and don't know much but demand the most impractical and least relevant things, like 4K delivery for social media, or zooming into things in post etc. If I was a videographer I'd probably be shooting in 6K RAW, putting that on my business cards, and recycling my hard drives on a regular basis. It's not a job I envy, that's for sure!!
  22. Very interesting and I understand the logic completely. 'Slow glass' is an interesting concept for sure. How do you edit your videos? Or do you edit at all? One thing I had to work through for myself was how to edit, in terms of the philosophy of editing. For example, the real-life experience of travel is of there being yelling and stress before leaving because the kids left their packing to the last minute and then can't find things and didn't put their devices on charge (they're teenagers so we let them make their own mistakes lol), and then boredom and awkward conversation in the uber going to the airport, and then stress at the airport followed by boredom waiting for the flight, and then... I had to work out if I wanted to put that stuff in there or not. Putting it in would be more like Slow Glass, but it's not the kind of travel video that anyone wants to watch, and filming it certainly isn't something that would help the situation while it's happening! I concluded that I would only shoot when it didn't hinder the activity itself, and would only put things in the edit that my wife would accept or that the kids would be ok with when they are in their mid-twenties.
I also realised that the process of editing (and by extension, even picking up a camera and hitting record) is editing of some kind, which is the process of sorting things according to some criteria, and the chances of those criteria not being hugely biased are very low, especially with people you know or love. I do have a secret long-term project of sorts, in that I sometimes shoot random test footage in the house for things like low-light performance and stuff, and that often includes little shots of the family watching TV or whatever happens to be happening, and the 'project' is that I don't delete it unless I'm made to do so on the spot. I won't be pulling that footage out and editing it, but it will be there when I'm old, and I suspect that the family will be able to look back and not care that they were in their pyjamas and hadn't combed their hair or whatever.
  23. One of the things that 24p gives me is a certain surrealist aesthetic. What I mean is that 24p isn't quite real, it's more like an impression of reality rather than an accurate representation of reality itself. Things that make video more realistic like 60fps, rec709 accurate colours, HDR, super high resolution, 3D, etc seem to make it less 'cinematic'. Of course, this is an aesthetic choice - if you want to make videos that seem very real then those things are great. Games or POV videos should be more realistic, so those things are benefits in that case. I shoot travel and events of my family and friends, so my videos are like a vignette of memory, and in alignment with that the aesthetic I want is fuzzy and impressionistic like memory. I also like the idea of giving the same larger-than-life aesthetic that feature films have when viewed in the cinema. I find that 24p is one of the things that helps generate that aesthetic.
  24. I understand the attraction of codecs like BRAW, but going back to my original point - it's the bit-depth and DR that are the main attraction. The only reason you can WB in post is because of the extra bit-depth, and the extra DR is just that they haven't clipped the DR from the sensor, but both of those can easily be matched by other codecs, Prores 4444 and XQ for example, if implemented correctly. The downsides of any RAW/semi-RAW format are that you're either getting the full sensor resolution or you're getting a cropped image. The full-sensor-resolution option requires more processing power in post to decode that resolution, then downscale/upscale it to whatever timeline resolution you're running, and only then can you process it at your timeline resolution. This gives you the benefits of oversampling, but it makes your computer do the work in post, every time you hit play. The other option, a cropped image, has three downsides: you don't retain the FOV of your lens; you lose chroma resolution (a 1:1 1920x1080 readout from a Bayer sensor is only what, 4:2:0 colour?); and you lose the noise-reduction effect of downsampling from many pixels. A proper implementation of Prores 4444 / XQ, or even a 12-bit h265 ALL-I file with sufficient bitrate that was downsampled from the whole sensor, would side-step all of these issues. It's why I shoot with the 200Mbps 10-bit 422 ALL-I 1080p mode on the GH5 - it gives me all the things I'm talking about except the 12-bit.
The GH5 was released in 2017. Things have only gotten worse. I predict that in 3 years, two-thirds of the 4K devotees here will be tearing their hair out because the $4000 cameras will be offering 2500Mbps 8K and 80Mbps 4K and 20Mbps 1080, and everyone will be crying at how much it costs to have a computer that can edit 10-bit 8K h265 files. Then everyone will make the investment, and a couple of years after that..... h266.
Any time you mix those two frame-rates you're going to have the 8/16 minute problem. 24/25p is a whole different thing. 23.976 vs 24p is a frame-rate difference of about 0.024 fps; 24 vs 25 is a 1 fps difference. I'll let you do the math 🙂 With drones it doesn't really matter anyway - no audio to sync to, and I doubt you're doing much stuff where the timing is critical.
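To put rough numbers on that difference (a quick sketch, assuming straight playback with no pull-down or resampling):
```python
# Drift between 23.976 (24000/1001) fps and true 24 fps
exact, nominal = 24000 / 1001, 24.0
diff = nominal - exact                # ~0.024 fps
seconds_per_frame_slip = 1 / diff     # ~42s per frame of drift
print(f"diff = {diff:.3f} fps, one frame of drift every {seconds_per_frame_slip:.0f}s")
# 24 vs 25 fps, by contrast, drifts a full frame every single second
```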
  25. I've also found that most of the time when Resolve goes a bit funny (i.e. something isn't doing what I expect), saving, restarting Resolve, and re-opening the project gets it working fine again. It may not fix your issue, but it's worth a try if you hit a problem while you're in an edit.