Everything posted by kye

  1. kye

    Lenses

    Nice work! I like the colour grading and overall image processing, the texture is nice too. The long shutter on the movement is cool, and getting the right level of shake to represent the experience of riding a motorcycle on modern streets was a nice touch. 🙂
  2. I was suggesting that there might be one. I thought you were saying there wouldn't be any. I wouldn't imagine they would be common, but I did think there would be sufficient demand in the market for the most film-centric rental house in the middle of Hollywood to have at least a single unit.
  3. I would imagine the niche would be larger than you might think. The revival of 35mm still film is a reasonable parallel - it is much easier to shoot digital and emulate film in post with one of the many excellent plugins available, but people like shooting film because it's somehow "authentic". Noam Kroll shoots a lot on film, as I'm sure you're aware - he has shot a number of short films on it, and shot this ad on Super 8mm: https://noamkroll.com/shooting-super-8mm-red-gemini-for-banana-republic-in-joshua-tree/
  4. kye

    24p is outdated

    Actually, the orange and teal look is copied from reality (though sometimes dramatically overdone). Any time the sun is shining and the sky is blue, objects lit directly by the sun appear one colour, while objects in the shade are lit solely by reflected light, a significant amount of which has come from the blue sky. Shadows are therefore more blue than things lit by the sun, and in comparison, things lit by the sun are more orange (the opposite of the colour of the sky) than the shadows. This is a subtle effect, but it is observable. I did the test myself. Here's a RAW photo of my fence at sunset: If we radically juice up the saturation, then we get this: and if we shift the white-balance cooler, then we get this: So, although reality doesn't look anything like how strong this colour grade is, the orange/teal look is part of reality, not a fictional thing that's made up.

Also, a great many movie colour grades don't have the orange and teal look. Here are a bunch of movie stills from blockbusters (you have to click and expand the image to view it large enough). Many of them are almost one hue, with almost zero colour separation. But, once again, since you missed the point I was trying to make... with all the equipment and talent these movies have at their disposal, why on earth would they look like this if they were trying to make them look realistic?

You've got it all wrong - it's the other way around. People who make movies want to make things a certain way, and shooting 24p is one of the (dozens / hundreds) of ways they accomplish this. I'm not sure what point you're trying to make? Genuinely? If it were simply that movies were in 24p but were trying to be as real as possible in every other way, then yes, you could argue that it was a legacy choice, but there is practically no aspect of movie-making that is trying to imitate reality. I think you're exactly right.
If these movies were 'realistic' then they would look like small reality shows. The evolution of film-making started by recording theatre productions. There were no cuts; it was like you were sitting in the crowd watching a play. They didn't think they could edit, because real life doesn't suddenly jump to a new location. When they worked out that cutting was fine and the human mind didn't get disoriented by it, they still thought that the mind wouldn't understand jumps in time, so they had continuity editing, which meant that if someone entered a room you'd see the whole sequence of them opening the door, walking through it, closing it behind them, then walking across the room, and only then starting to speak to the person inside. Turns out we're completely fine with cutting most of that out - we experience time jumps in dreams, and the theory is that's why we're fine with them on screen. The history of film-making is a journey from one-shot films that made you feel like you were at a theatre production to Bayhem, Memento, Interstellar, etc. If anyone wanted realism then they've been walking in the wrong direction for an entire century now. Either the entire history of cinema was made by people who are completely incompetent, or they're aiming at something different than you are.
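The two operations I described on the fence photo (juicing the saturation, shifting the white balance cooler) can be sketched as naive per-channel arithmetic on linear RGB. The patch values and gains below are made up for illustration, and real grading tools are far more sophisticated, but the direction of the effect is the same:

```python
def apply_gains(rgb, r_gain, g_gain, b_gain):
    """Crude white-balance shift: per-channel gains on a linear RGB pixel."""
    r, g, b = rgb
    return (r * r_gain, g * g_gain, b * b_gain)

def boost_saturation(rgb, amount):
    """Naive saturation boost: push each channel away from the pixel mean."""
    mean = sum(rgb) / 3
    return tuple(mean + (c - mean) * amount for c in rgb)

# Made-up scene-linear values for a sunlit (warm) and a shaded (cool) patch:
sunlit = (0.60, 0.45, 0.30)
shade = (0.20, 0.22, 0.28)

# Juicing saturation exaggerates the split that's already there -
# the sunlit patch goes more orange, the shade more blue:
print(boost_saturation(sunlit, 3.0))
print(boost_saturation(shade, 3.0))

# Shifting white balance cooler (cut red, boost blue) tints everything teal:
print(apply_gains(sunlit, 0.9, 1.0, 1.2))
```

The point the sketch makes is that the grade doesn't invent the orange/teal split, it only amplifies the hue separation the scene already contains.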
  5. kye

    Panasonic G9 mk2

    Everyone is complaining about their EVF and LCD screens, but she's not!
  6. kye

    24p is outdated

    The whole idea of movie stars when I was growing up was that they were "larger than life". I think, once you start looking, you'll find that practically nothing about cinema is even remotely realistic / real-looking. Look at the visual design / colour grading for a start.... I mean, these projects all had the budget, had highly skilled people, and had every opportunity to make things look lifelike, but none of these things look remotely like reality. Even the camera angles and compositions and focal lengths - none of these make me feel like I'm looking at reality or "I am there". Bottom line: studios want to make money, creative people want to make "art", neither of these are better if things look like reality. I'm in reality every moment of every day, why would I want movies to look the same way? It's called "escapism" not "teleport me to somewhere else that also looks like real-life... ism".
  7. kye

    24p is outdated

    He does. I applaud Markus for the work he does and the passion that he brings, trying to beat back the horde of YouTube-bros who promote camera worship, and the followers who segue this into the idea of camera specs above all else. But there's a progression that occurs:

  • At first, people see great work and the cool tools and assume that the tools make the great work - TOOLS ARE EVERYTHING
  • Then, people get some good tools and the work doesn't magically get better. They are disillusioned - TOOLS DON'T MATTER
  • Then they develop their skills, hone their craft, and gradually understand that both matter, and that the picture is a nuanced one - TOOLS DON'T MATTER (BUT STILL DO)

This is the same for specs - they are everything, they are nothing, then they matter a bit but aren't everything. By the time you get to the third phase, you start to see a few things:

  • Some things matter a LOT, but only in some situations, and don't matter at all in others
  • Some things matter a bit, in most situations
  • Some things matter a lot to some people, but less to others, depending on their taste

Film-making is an enormously subtle art. Try replicating a particular look from a specific film/show/scene and you'll find that getting the major things right will get you part of the way, but to close the gap you will need to work on dozens of things, hundreds maybe. The purpose of any finished work is to communicate something to the audience. For this, the aesthetic always matters. Even if the content is purely to communicate information, if you shoot a college-looking bro delivering the lines sitting on a couch drinking a brew, filmed with a phone by someone lying on the floor, well, it's not going to seem like reliable or trustworthy information, unless it's about how many beers were had at the party last night (and even then...).
The exact same words delivered by someone in a suit sitting at a desk, shot with a longer lens on a tripod and nice lighting, will usually elicit a very different response (sometimes one of trust, and sometimes a reaction of mistrust, but different all the same). A person wearing glasses and a lab coat standing in front of science-ish stuff in a lab is different again. Humans are emotional animals, and we feel first and think second. There isn't any form of video content that isn't impacted by the aesthetic choices made in its production. Some might be so small that they don't seem relevant, but they'll still be there in the mix.
  8. Even if budget were no object, sadly it's the GX85 for me, and that's a very compromised option in almost every way, but it is the right balance of trade-offs. In a way I'm lucky that my best camera isn't ridiculously expensive, but it also means there is nothing I can do to work towards a better setup.
  9. kye

    Panasonic G9 mk2

    ...beard flowing down majestically into the unbridgeable abyss...
  10. Maybe some niche rental house in LA? Even just if folks making music videos wanted to rent it?
  11. kye

    24p is outdated

    Time will tell if it's just familiarity or if it's actually something innate. I wouldn't be so quick to assume that the human visual system has nothing to do with how we feel about what we see, and that it's all equal and is just what we're used to.
  12. kye

    24p is outdated

    I'd be curious to see proper research on the topic. My prediction would be that a certain percentage would identify that it "looks different" in some way, and would be able to identify the effect in an A/B/X style test. I suspect that a further percentage would say it looks the same, but that it somehow feels different, essentially anthropomorphising it as sadder or more surreal or something. I've done this test with a few people in a controlled environment: I recorded a tree in my backyard moving in a light breeze with my phone in 24p, 30p and 60p, then put each clip onto a dedicated timeline in Resolve; playing each one back through the UltraStudio 3G switched the monitor to the correct frame rate. It wasn't perfect, but all the clips were recorded with very similar settings, so it wasn't a bad test. Perhaps the best test would be to render a 3D environment at the three frame rates, with each render starting from the same random variables so that the motion of the scene would be exactly the same. If you were going to do it, I'd put in a few other things too, like varying the shutter angle. It would be a huge amount of work and would require a pretty large sample group to get meaningful results.

Good question. I've noticed the gulf widening between "video should look like reality so technology advancement is awesome" and "video should be the highest quality to democratise high-end cinema and advance the state-of-the-art". There are also the "technology is always good, why are you talking about a story?" folks, but they're best ignored 🙂 The challenge in this debate is that if we're not even trying to achieve the same goals, then what's the point of discussing the tools?

Interesting concept! I think you might be overstating the take-over of the heavy-VFX component of the industry, and perhaps even the nature of the segments themselves.
Certainly a majority of Hollywood income might be from VFX blockbusters, but the world is a lot larger than Hollywood. The majority of films made likely weren't VFX-heavy, and the majority of films that people actually cared about definitely weren't. If you asked me whether I'd seen <insert blockbuster here> then I probably couldn't answer, because truthfully they're mostly forgettable. On many occasions I've been pressured into watching a movie with my family that one of my kids chose, and the experience was mostly the same - famous actors / bursts of action / regular laughs / the USA wins in the end - and a few days later I remember that I watched the film but genuinely can't remember the plot. This is counter to something like Roma, where years later I remember some aspects, but I also remember how it felt in critical moments and how my life is very different to theirs.
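For anyone wanting to run a test like the phone/Resolve one described above more formally, a minimal ABX sketch might look like this. The helper names and trial counts are my own, and the p-value is a plain one-sided binomial against pure guessing, computed from the standard library rather than any stats package:

```python
import random
from math import comb

def abx_trials(n, seed=None):
    """Generate n ABX trials: for each trial, X is randomly clip A or clip B."""
    rng = random.Random(seed)
    return [rng.choice("AB") for _ in range(n)]

def p_value(correct, n):
    """One-sided binomial p-value: the chance of getting at least `correct`
    answers right out of n trials by pure guessing (p = 0.5 per trial)."""
    return sum(comb(n, k) for k in range(correct, n + 1)) / 2 ** n

# 13/16 correct is hard to explain by guessing alone,
# while 9/16 is entirely consistent with chance:
print(p_value(13, 16), p_value(9, 16))
```

The same harness works whether the variable under test is frame rate, shutter angle, or anything else, as long as only one variable changes between A and B.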
  13. kye

    24p is outdated

    Computer displays are a long way from being superior to human vision, so it's all about compromises and the various aesthetics of each choice. I would encourage you to learn more about how human vision works; it can be very helpful when developing an aesthetic. A few things that might be of interest:

  • Here's a research paper showing that the human eye can perceive flicker at up to 500Hz: https://pubmed.ncbi.nlm.nih.gov/25644611/
  • Here's a research paper showing that people could see and interpret an image from a frame shown for just 13ms, which is one frame of 77fps video: https://dspace.mit.edu/bitstream/handle/1721.1/107157/13414_2013_605_ReferencePDF.pdf;jsessionid=6850F7A807AB7EEEFA83FFEEE3ACCAEF?sequence=1
  • The human eye sees continually, without "frames", and has continual motion blur. Video also has motion blur, represented as a proportion of the frame interval (where 360 degrees is 100%), but the eye's effective "frame rate" is much faster than its motion blur, so we might (for example) have a "shutter angle" equivalent to dozens or hundreds of frames.

Video is an area where technology is improving rapidly, and a lot of the time the newer things are better, but that's not always the case. The other thing to keep in mind is that there are different goals - some people want to create something that looks lifelike, but other people want to create things that don't look real. Many of the tools and techniques in cinema and high-end TV production exist to deliberately make things not look real, but to look surreal or 'larger than life' etc.

I've been doing video and high-end audio for quite some time and have put many folks in front of high-end systems, or shown people controlled back-to-back tests, and often people do notice differences but don't talk about them - because they don't know what you're asking, or don't have the language to describe what they're seeing or hearing and don't want to sound dumb, or simply don't care and don't want to get into some long discussion.
Asking people who have just seen a movie for the first time whether they noticed "anything different" is a very strange approach - if they hadn't seen the film before, then everything about the experience would have been different. Literally thousands of things - the costumes, the lighting, the seats, how loud it was, how this cinema smelled compared to the last one, etc. Better would be to sit people in front of a controlled test and show them two images with as few variables changed as possible. Even then it can be challenging. When I first started out I couldn't tell the difference between 24p and 60p; now I hate the way 60p looks and quite dislike 30p as well. Lots of people also know what the 'soap opera effect' is without being camera nerds.
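The shutter-angle arithmetic mentioned above (360 degrees = 100% of the frame interval) is easy to make concrete. This tiny sketch just evaluates that formula for a few common settings:

```python
def exposure_time(fps, shutter_angle):
    """Exposure (motion-blur) time per frame, in seconds.

    A 360-degree shutter exposes for the entire frame interval;
    180 degrees exposes for half of it, and so on.
    """
    return (shutter_angle / 360.0) / fps

# 24p at the classic 180-degree shutter: 1/48 s of blur per frame
print(exposure_time(24, 180))
# 60p at the same 180 degrees blurs for only 1/120 s, which reads as "crisper"
print(exposure_time(60, 180))
# A 360-degree shutter at 24p blurs for the whole 1/24 s frame interval
print(exposure_time(24, 360))
```

This is also why matching shutter angle rather than shutter speed across frame rates keeps the blur-to-frame ratio, and hence the motion feel, consistent.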
  14. kye

    Lenses

    I'm really interested in how long the longest end needs to be. You've basically said that 70mm is a bit short and that 75 or 80 are better, but that you use a 35-150mm outdoors. How do you feel about the 100-150mm range? Do you use it a lot? If so, what specific types of compositions and situations do you use it for? How do you feel about the 150-200mm range? Gathering from the above and other posts, you seem to have traded it away for other considerations, but is that focal length useful at all? Or is it too long for what you shoot? If you were given a weightless 28-200mm F2.8 lens, how many of your compositions would be above 150mm, and what would they be? What about 28-300mm? I guess what I'm looking for is feedback on the creative elements - e.g. that anything above 150mm is too compressed, or is only useful in certain situations, or feels too distant and out-of-context in an edit, or isn't useful at all, etc.
  15. kye

    Lenses

    I'm keen to get some feedback on focal lengths. As many know, I shoot travel and want to be able to work super-quickly to get environmental shots / environmental portraits / macro / detail shots, and have narrowed down to three options:

  • GX85 + 12-35mm F2.8 - essentially a 24-70mm equivalent (48-140mm with the punch-in) with constant aperture, and the best for low-light. The question is whether this is long enough for getting all the portrait shots.
  • GX85 + 12-60mm F2.8-4.0 - same as above but slower and longer. I'm still not sure if this is long enough.
  • GX85 + 14-140mm F3.5-5.6 - slower but way longer. This would be great for everything, including zoos / safaris, but isn't as wide at the wide end, which is only a slight difference but is still unfortunate.

The GX85 also has the 2x digital punch-in, which is quite usable. I'm keen to hear people's thoughts on the practical implications of these options in terms of what final shots they will provide to the edit. My experience has been that the more variety of shots you can get when working a scene, the better the final edit. I'm not that bothered about the relative DoF considerations, but aperture matters for low-light of course. I'm shooting auto-SS so am not fiddling with NDs, so the constant aperture doesn't matter in this regard. There are obvious parallels to shooting events, weddings, sports, and other genres - keen to hear from @BTM_Pix @MrSMW and others who shoot in similar available-light / uncontrolled / run-n-gun / guerrilla situations.
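For anyone comparing the options, the full-frame equivalents (2x MFT crop, optionally the 2x digital punch-in) work out as below. This is just the standard crop-factor multiplication written out; the helper name is my own:

```python
CROP_MFT = 2.0  # Micro Four Thirds crop factor vs full frame

def ff_equiv(focal_mm, crop=CROP_MFT, digital_zoom=1.0):
    """Full-frame-equivalent focal length, optionally with digital punch-in."""
    return focal_mm * crop * digital_zoom

# The three zoom options, at each end of their range:
for short, long in [(12, 35), (12, 60), (14, 140)]:
    print(f"{short}-{long}mm MFT -> "
          f"{ff_equiv(short):.0f}-{ff_equiv(long):.0f}mm FF equivalent, "
          f"up to {ff_equiv(long, digital_zoom=2):.0f}mm with the 2x punch-in")
```

So with the punch-in, even the 12-35mm reaches a 140mm-equivalent field of view, which is the basis of the "(48-140mm)" figure above.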
  16. When I made the move from Windows to Mac, the primary reason was that I wanted to use a computer without having a part-time job as a systems administrator, which is what Windows forces on you in order to just use the machine. I was stunned at the time by how much time I used to spend on the computer not doing the things I wanted to do, but doing technical things to enable those things to be done. TBH I'm over that, so while I'm perfectly capable of managing the IT of a medium-sized business, I'd rather just use the computer for what I want to use it for, and not have to troubleshoot an array of file and network management infrastructure.
  17. There were horses and monkeys, so maybe sports something? I recognised very few people there, so maybe it's the MFT YouTubers rather than the Sony FF fan club?
  18. Yes, I understand the logic. One element of colour science often forgotten is the choice of RGB filters for the colour filter array, but I'm not sure whether that would also be the same between these cameras, or whether they'd just be supplied by Sony and therefore be identical between brands. In theory this would create differences at the corners of the gamut, but once transformed to XYZ, the differences between two different filter sets on the same sensor would be rounding errors at best. The best discussion I've seen on digital processing is the discussion of the Alexa 35 - page 52 onwards: https://www.fdtimes.com/pdfs/free/115FDTimes-June2022-2.04-150.pdf As you say, much of what is going on might well be things that occur on the sensor. The other component worth comparing between RAW sources is the de-bayer algorithm. AFAIK you can't choose which de-bayer algorithm gets used on RAW footage, so there might be differences between the manufacturers' algorithms contributing to the differences we see in real life.
  19. kye

    Panasonic G9 mk2

    Don't be so skeptical... there is much enlightenment available! https://variety.com/2023/digital/news/mean-girls-free-tiktok-23-parts-paramount-1235743213/
  20. These days I don't think even California would be big enough!
  21. I didn't realise you had access to every camera. I realise now that this makes every statement you make correct! I'll learn my place eventually....
  22. kye

    GoPro Hero12

    You might benefit from the Voice Isolation feature in Resolve. I'm not sure whether it's a paid feature or in the free version, but it seems to work pretty well in some situations. In general though, having something on the microphones to block the wind is essential - you can't side-step the laws of physics 🙂 That's true of almost every camera: the closer you get it right in-camera, the better the output will be.
  23. Gotta love a demo video that starts with a world-famous model / actress... All else being equal, products like this probably help to keep film alive. There's a chance the rich might adopt things like this, and perhaps burn through film like they're rich (because they are). For the rest of us, if you wanted to shoot on 8mm film then get yourself a second-hand real film camera and then benefit from the film consumption of the rich to give economies of scale. Personally, I think that the iPhone / smartphones have finally gotten here. The compression artefacts (and even the crushing over-processing) are made invisible by the time you add enough blur and grain to get a semi-decent match, and they've finally caught up in terms of dynamic range, so you can have contrasty mids with strong saturation but gentle and extended rolloffs. Also, the more these images trend on social media, the more my OG BMPCC and BMMCC go up in value!
  24. Agreed that it's often about the processing. I think there's a bunch of stuff going on, and often people don't understand the variables, or aren't taking into account all the relevant ones. Also, people forget that the main goal of any codec is getting nuanced skin tones in the mids of whatever display space you'll be outputting to. In that context:

  • RAW in 12-bit is linear, which is only about the same quality as 10-bit log
  • Log in 10-bit isn't significantly better than 709 in 8-bit when exposed well
  • It's commonly believed that only 12+bit RAW lets you adjust exposure and WB in post, and that 10-bit log is required for professional work, but thanks to colour management we can adjust exposure and WB in any of these (obviously 12-bit RAW > 10-bit log > 8-bit 709 from most cameras in real life and for big changes, but if you're only making small changes then the errors are often negligible)
  • 8-bit log is much worse than 8-bit 709, unless you're delivering to HDR (because if you convert log to 709 then you're pulling the bits in the mids apart significantly, which is absolutely visible)
  • HLG is sometimes better than log - camera "HLG" profiles vary by manufacturer (the HLG curve itself is standardised in Rec. 2100), but they typically follow a rec709-like curve up to the mids, then a more aggressive highlight rolloff above that to keep the whole DR of the sensor, combined with a rec709 level of saturation. This is brilliant because it typically means you get the benefits of a 10-bit image with 709 levels of saturation and the full DR of the sensor. This is superior to a 10-bit log profile from the same camera (unless you clip a colour channel) because there is greater bit-density in the mids for skin tones (roughly equivalent to 12-bit log) and you get to keep the whole DR. You can also change exposure and WB in post with proper colour management. The GH5 and iPhone implementations of HLG are like this.
The pros require greater quality than consumers because they have to keep clients happy when viewing the images without any compression. Many of the subtleties get crunched by streaming compression, and unless you're shooting in controlled conditions you can't really expect to keep the last levels of nuance right through to final delivery. I'm keen to learn more about what happens at this stage of processing - if you have resources on this, please share! The other aspect to consider is that RAW and uncompressed aren't the same thing. The megapixel race has obscured the fact that very close to 100% of material is now shot at resolutions above the delivery resolution. This is fine, and oversampling is great for image quality, but if you want to shoot RAW (and get the benefits of having no processing in-camera) without having to deal with 5K / 6K / 8K source files when delivering 4K or even 1080p, then you have to deal with a sensor crop and having your whole lens package changed because of it. The alternative, and what I think is the big advantage of ProRes, is a full sensor readout downscaled in-camera, but unfortunately this typically means some loss of flexibility in the source image (through compression, bit-depth reduction, or even straight-out over-processing like the iPhone 14 did). A better alternative would be to downscale in-camera and then save the images uncompressed. There is nothing stopping manufacturers from implementing this - the GH5 downscaled 5K in-camera, and many cameras record Cinema DNG sequences in-camera; just combine the two and we're there.
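One way to see the "12-bit linear ≈ 10-bit log" point is to count code values per stop of scene brightness. The sketch below uses a generic pure-log curve spanning ~14 stops as an illustrative stand-in (not any manufacturer's actual transfer function): linear encoding spends lavishly around middle grey but almost nothing in deep shadows, while log spends the same number of codes on every stop.

```python
import math

def codes_in_stop(transfer, bits, lo=0.09, hi=0.18):
    """Count how many distinct code values an encoding spends on the
    scene-linear range [lo, hi] (default: the stop just below middle grey).
    `transfer` maps scene-linear [0, 1] to a normalised signal [0, 1]."""
    levels = 2 ** bits - 1
    return round((transfer(hi) - transfer(lo)) * levels)

linear = lambda x: x
# Generic pure-log curve over ~14 stops (illustrative only):
log14 = lambda x: (math.log2(x) + 14) / 14

# Around middle grey, 12-bit linear actually has far more codes than 10-bit log:
print(codes_in_stop(linear, 12), codes_in_stop(log14, 10))
# ...but 7 stops down, linear has almost nothing left while log is unchanged:
print(codes_in_stop(linear, 12, lo=0.18 / 128, hi=0.18 / 64),
      codes_in_stop(log14, 10, lo=0.18 / 128, hi=0.18 / 64))
```

Averaged over the sensor's whole dynamic range, the two allocations come out roughly comparable, which is the intuition behind the first bullet above, and the shadow-starvation is why grading pushed 12-bit linear shadows falls apart faster than log.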
  25. Just got around to watching this - interesting stuff. One thing described as a key element of the French New Wave was the availability of affordable and portable 16mm film cameras, enabling the available-light, hand-held shooting style. I see from Wikipedia that Italian Neorealism was a precursor to the FNW, with Italian Neorealism in full swing from ~1945-1955 and the FNW not really getting started until the late 1950s. I couldn't find a good timeline of when 16mm film became affordable, but I did note that Wikipedia said "The format was used extensively during World War II, and there was a huge expansion of 16 mm professional filmmaking in the post-war years", so maybe Italian Neorealism was the first movement to really benefit from this technological advancement? I was also under the impression that the FNW was the innovative movement that took the new tech and developed new techniques that fully utilised it, but maybe that's not the case?