kye

Members
  • Posts

    7,817
  • Joined

  • Last visited

Everything posted by kye

  1. Was the Panny using IBIS and OIS in its Dual IS mode?
  2. Every time I see a blind camera test, I am looking at the colours. It's not something I decided to do, or have to focus on, it's what I naturally find myself looking at. Especially skin-tones. All the blind tests I've done, where I've done them blind, taken notes, and then scored and ranked the cameras, before looking at the results to see what was what, I correctly chose things in descending order of price. There was one test where I put the Alexa lower than cheaper cameras, but it had a strange green tint to it that the other cameras didn't have, so I suspect something was wrong with the test. The colour and texture of skin tones is about 40% of the grade I think. Drastically under-rated, and barely discussed with any depth.
  3. It was like you were reading my mind as I was reading. The other thing that people often forget is that film is a creative medium. I know it's heresy and I'll flame myself after posting this, but it matters because if you're zooming while filming then your choice of compositions and even choices in how you move the camera etc will be made on the basis of what you see in the frame. People forget that the tech influences the operator, and the operator controls the camera and also influences the crew and even the cast (if they're grumpy or happy enough), and those things are far more relevant to the actual end product than any technical whatever that might exist as a counter-argument.
  4. Wow.. "I was there" wasn't a reply I expected! What else can you tell us about the shoot? Anything that looks that crazy on film has got to have some stories that go with it!
  5. That doesn't look good to me at all... It had blur bursts and warping, and towards the end it wasn't stabilised well, with sort of jumpy and floaty motion. Getting it right in-camera still matters for now, until AI can sort out this issue too.
  6. An FF box-cam would make a lot of sense as a crash-cam, for drones, or for mounting in odd locations etc. It would have FF, good DR, good low-light, thermal management fans, and potentially the ARRI LogC curve / Prores RAW in it. L-mount primes can be quite compact, and the camera body itself would probably be smaller than the GH7 as well!
  7. Yeah, now they've got the license, they might be able to release it for other cameras perhaps. One thing that comes to mind is that GH7 has IBIS, which isn't so good if you want a crash cam or something where there will be significant vibration (which was the purpose of the GH5S) but all Panny cameras now have it, don't they?
  8. Another GH7 + LogC and ARRI discussion.. TLDR; Brady used the combo on two narrative projects, they matched up nicely in post (but not perfectly), and he sees it as a great option for a B-cam to an Alexa if you didn't have the budget for two Alexas. There are some side-by-sides in the video, including graded to match and also not.
  9. No knowledge of that camera, but if the focus selection AI just randomly changes modes on you, it sounds like there might be a problem. As a standard troubleshooting step, maybe try resetting the camera to default settings, and upgrade to the latest firmware for your S5 and each of your lenses too.
  10. I'm continually amazed at how much energy goes into pixel peeping at 4K or higher, and yet high-end productions deliberately blur and un-sharpen their footage. Compare that video with the reference footage from real projects and the difference is obvious. The visibility of 1080p vs 4K vs 6K is a debate that won't ever die, but the visibility of 6K 4:4:4 vs 6K 4:2:0 will be absolutely zero once it's been put onto a 4K timeline, un-sharpened to an aesthetically pleasing amount, exported and uploaded to a streaming service, and then heavily processed and brutally compressed before streaming to the end user. Unless people are literally doing green-screen work or VFX, putting money into lens sharpness or 6K+ resolutions is just paying to look less high-end, not more.
  11. I am wondering if sharpness might have something to do with it for me. As a reference, what do you think about this?
  12. Same impression for me during the opening few scenes in Trigger Warning, which is new on Netflix. The VFX shots were very budget, but the footage had a real video look to it. It was shot on real equipment so I'm also not sure what it was. Also, the trailer on YT looks much better than the early parts of the film. I had noticed this "ingredient X" appearing on random things previously, and tried to ignore it because once you learn to see something you can't un-see it, and I was pretty sure that my preferred cameras wouldn't look good in this regard, but now I've seen it to the degree I can't ignore it, so I've reluctantly started investigating.
  13. For artists... it could win many Oscars and Academy Awards. For technicians... well, just keep reading this thread to find out.
  14. See - the question was wrong from the start!
  15. People that make cinematic videos on YT that cut my eyes have clearly never been to the cinema. The difference might as well be iPhone 4 and Alexa. If you can't see that your jagged harsh digital and clinical cinematic video has nothing in common with cinema, then the level of blindness is almost complete. The deeper into this I go, the more I realise that not only are most answers online wrong, but the questions don't even make sense. Asking "what is the resolution of film?" makes little more sense than asking "how many bananas are in sadness?"... and the discussion is nonsensical right from the start.
  16. Modern "cinematic" videos cut my eyeballs, but actual cinema doesn't. Please don't cut my eyeballs.
  17. Riiiight.. so you're not a critic, you're a recovering addict! I gather this is yours then? It's.... A vibe.
  18. When you are in any mode (that I've tried) the camera is taking the full width of the sensor and downscaling it to the output resolution you have chosen (assuming you're not using the full open gate mode, which obviously doesn't need scaling). The horizontal FOV is the same regardless of the output mode. If you engage the 2x or 4x mode, it will crop into the sensor by 2x or 4x, then scale that to the output resolution. These crops are independent of the output mode. If you engage the ETC mode, it takes a crop of the sensor that is the same size as the mode you have selected, so the crop is a 1:1 crop with no down- or upsampling and is obviously dependent on the mode you have chosen. Cropping 1:1 in ETC mode when you're in a 4K mode into the ~5.2K sensor gives a total crop factor of 2.7, which is pretty close to the 2.88 crop factor of Super16. The ETC crop of FHD is double that, and close to the crop factor of Super8.

      +1 I think if you do it well then you either look like you're covering an action sport or selling fast food ads, and if you do it poorly you look like an influencer. Really, neither of these lends itself to the wedding vibe..
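The crop-factor arithmetic in the post above can be sketched in a few lines. This is just an illustration of the calculation, assuming an MFT base crop of 2.0 vs full frame and a ~5.2K (5184 px wide) sensor, as on the GH5:

```python
# Sketch of the ETC (1:1 pixel readout) crop-factor arithmetic.
# Assumptions for illustration: MFT base crop factor 2.0, 5184 px sensor width.

MFT_BASE_CROP = 2.0        # Micro Four Thirds vs full frame
SENSOR_WIDTH_PX = 5184     # ~5.2K sensor width (assumed, GH5-like)

def etc_crop_factor(output_width_px: int) -> float:
    """Total crop factor for a 1:1 ETC crop at the given output width."""
    sensor_crop = SENSOR_WIDTH_PX / output_width_px  # crop within the sensor
    return MFT_BASE_CROP * sensor_crop

print(round(etc_crop_factor(3840), 2))  # 4K ETC  -> 2.7 (close to Super16's 2.88)
print(round(etc_crop_factor(1920), 2))  # FHD ETC -> 5.4 (double, near Super8)
```

The same function also explains the downsampled 2x digital-crop modes mentioned later in the thread: a 2x crop reads half the sensor width (~2.5K) and scales it to the output resolution rather than cropping 1:1.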
  19. Sure does! Both in 29.97p and 23.98p (which is probably 23.976).
  20. I tested a bunch of modes and was really impressed that the only times it cropped in were when you explicitly told it to do so (the ETC and 2x and 4x digital zooms) and everything else was downsampled. That even included things like the FHD 60p mode in 2x digital crop - it was still downsampling from an area half the width of the sensor ( ~2.5K ) down to the FHD output resolution. It made all the modes I tested really high quality. I still read about the hodge-podge of gotchas and limitations on current generations of cameras and just shake my head.
  21. How odd, I just checked the GH5 3.3K 4:3 mode and it's the same horizontal FOV as the 16:9 modes, so downsampled not cropped.
  22. Just fired up my GH5 and it has 3328x2496 at 59.94p, so I'd be surprised if the GH5ii, GH5S, GH6 and GH7 didn't have at least one >30p mode in open gate..
  23. It's never too late for a rebrand and a fresh start!!
  24. I think the answer really depends on your individual circumstances. People tend to give advice based on their own situation, which is fine, but almost always underestimate how different the situations of others can be from their own. Some key considerations to frame the discussion should include:

      • If you are going to be collaborating with anyone else: obviously, if you're planning to collaborate then having the same platform makes certain things 100x easier.
      • What your workflow is like: some people shoot, edit the footage with a simple rec709 conversion, then Picture Lock, then sound design / music / VFX / colour grading / etc all get done, then output and distribution. Others may do the edit but will go back and fine-tune it as colour grading and VFX and sound design etc are all being integrated. Others may do all the stages simultaneously, in workflows where the concept of Picture Lock makes no sense whatsoever.
      • How well your footage is shot: some projects have complete control over lighting etc during filming and need basically zero colour grading in post. Others need a lot more finesse, which is well beyond the scope of FCPX.
      • What the turn-around times are for the project: if you need to turn something around in 3 days, but need to round-trip the colour, get Picture Lock before the composer can start composing the score, etc etc, then good luck; but if you're in Resolve and editing in the cloud (or with Resolve Server) then you can have the whole team working on the project literally at the same time.
      • What codecs you're using
      • What tools you need for VFX / colour grading / audio
      • What tools you're used to using and how much time (if any) you can devote to switching

      I feel like there is an increasing divide between the formal "industry" way of doing things, where (in theory) everyone does their bit in a sequence and (in theory) the process doesn't go back to an earlier stage, and everything can be done in separate pieces of software... and the way that small teams or solo operators might sculpt an edit, with everything done within the same package and where everything is able to be finessed right up to hitting Export.
  25. You're still thinking about the GH8...? I'm wondering if the GH9 will have 12k120 in 10-bit 4:2:2. I have given up on it being internal raw in that mode, but my cat videos won't be cinematic if it's only 4:2:0 and doesn't have at least 17 stops of DR. As an astute poster recently commented, if ARRI can do 17 stops with the Alexa 35 then why not Panny? I don't think I'm asking for too much when I expect the cameras specs to compensate for my complete lack of skill.