Everything posted by kye

  1. kye

    Panasonic G9 mk2

    Enjoy!! I think it's a solid choice - apart from the size of the body making people very excited, it looks like a well-specced camera.
  2. kye

    DJI Pocket 3?

    Being able to stop down a few stops won't help you much during the day - you'd need an ND filter anyway. Of course, in combination with an ND, using the aperture to fine-tune the exposure would be useful, but it's not going to eliminate the need for an ND. And it wouldn't be noticeable - a 1-inch sensor giving a 20mm FOV means it has a 7.4mm F2.8 lens, not exactly a shallow-DoF monster! This lens has everything in focus unless focused very close:
    When focused at 5.8ft, the in-focus range is from 2.9ft to infinity
    When focused at 5.0ft, the in-focus range is from 2.7ft to 36.1ft
    When focused at 3ft, the in-focus range is from 2ft to 6.2ft
    I very much doubt that anyone shooting with this camera and focusing closer than 5ft would see the background slightly out of focus and want to stop down. Maybe someone somewhere would, but it's hardly going to happen frequently enough for DJI to add an aperture mechanism to the entire product.
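    For anyone who wants to sanity-check figures like those, here's a rough sketch using the standard thin-lens depth-of-field formulas. The 0.011mm circle of confusion for a 1-inch sensor is my assumption, not a DJI spec, so the output lands close to, but not exactly on, the numbers quoted above.

```python
# Rough DoF check for a 7.4mm F2.8 lens on a 1" sensor.
# The 0.011mm circle of confusion is an assumption, not a DJI spec.

def dof_limits(focus_mm, focal_mm=7.4, f_number=2.8, coc_mm=0.011):
    """Return the (near, far) limits of acceptable focus, in mm."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = focus_mm * (hyperfocal - focal_mm) / (hyperfocal + focus_mm - 2 * focal_mm)
    if focus_mm >= hyperfocal:
        return near, float("inf")
    far = focus_mm * (hyperfocal - focal_mm) / (hyperfocal - focus_mm)
    return near, far

FT = 304.8  # mm per foot
for feet in (5.8, 5.0, 3.0):
    near, far = dof_limits(feet * FT)
    far_txt = "infinity" if far == float("inf") else f"{far / FT:.1f}ft"
    print(f"focused at {feet}ft: in focus from {near / FT:.1f}ft to {far_txt}")
```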
  3. kye

    DJI Pocket 3?

    One difference I can see is that on a drone you can't just put on a stronger ND, but you likely could adjust the aperture remotely. Of course, I agree with you that an aperture would have been a handy addition. I'd imagine that including extremely short shutter speeds is a cheaper alternative that achieves the same exposure.
  4. kye

    Panasonic G9 mk2

    I'd imagine that it would have a partial effect in reducing moire, but it may not be strong enough for moire with large contrast. Mist filters essentially blur a small percentage of the light that passes through them, with the blur strongest over small distances, so a mist filter acts sort-of like a mostly transparent OLPF.
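    As a toy sketch of that idea (the 10% strength and the blur radius are illustrative numbers, not taken from any real filter):

```python
# Toy model of a mist filter: pass most of the light untouched and
# spread a small fraction of it over a short distance - effectively a
# weak, mostly transparent low-pass filter. Grayscale for simplicity.
import numpy as np
from scipy.ndimage import gaussian_filter

def mist(image: np.ndarray, strength: float = 0.10, sigma: float = 3.0) -> np.ndarray:
    """Blend a short-radius blurred copy back over the original image."""
    return (1.0 - strength) * image + strength * gaussian_filter(image, sigma)
```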
  5. kye

    Panasonic G9 mk2

    I don't think it works that way though. A 12K sensor would be just as susceptible to moire as a 4K sensor if a repeating pattern happened to align with the gaps between its pixels. It might be that common causes of moire are around a certain size and therefore impact some combinations of sensor resolution / sensor size / focal length more than others. Also, lower-resolution sensors might be more prone to moire because they're typically older and had larger gaps between the pixels than sensors do now. Lower resolutions are also likely to have issues on cheaper cameras, due to the camera line-skipping and therefore effectively creating very large gaps between the active pixels. Sadly, there are lots of different ways to create moire, and many of them come from strategies to make the product more affordable!
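    To make the gap argument concrete, here's a toy 1-D sketch (all numbers illustrative, not from any real sensor). A pattern slightly finer than the pixel pitch aliases into a slow beat either way, but pixels that cover their full pitch average most of it away, while pixels with big gaps between them pass it through almost at full strength:

```python
# Toy 1-D demo: sample a fine repeating pattern with pixels that either
# cover their full pitch (100% fill factor) or only a small part of it.
import numpy as np

pitch = 1.0                   # pixel spacing
scene_freq = 1.05 / pitch     # pattern slightly finer than one cycle per pixel
pixels = np.arange(200) * pitch

def sample(fill_factor, oversample=64):
    """Average the scene over each pixel's active area."""
    offsets = (np.arange(oversample) / oversample - 0.5) * fill_factor * pitch
    x = pixels[:, None] + offsets[None, :]
    return np.sin(2 * np.pi * scene_freq * x).mean(axis=1)

for ff in (1.0, 0.3):
    aliased = sample(ff)
    print(f"fill factor {ff:.0%}: aliased amplitude ~{aliased.std() * np.sqrt(2):.2f}")
# Prints roughly 0.05 for 100% fill vs 0.85 for 30% fill: the gappy
# pixels let the aliased beat through at near-full strength.
```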
  6. kye

    Panasonic G9 mk2

    Yes and no. The Foveon sensor would mean that demosaicing wouldn't be required, but it might still have dead spots between the photosites, which is what I was talking about. Here's an image showing what I mean... the yellow areas indicate the light from the scene that would be captured by each photosite, and the blue areas indicate the light from the scene that would not be detected by any photosite on the sensor: If the micro-lenses were configured to make sure all light made its way onto a photosite then it would eliminate this issue, as the image below shows on the right with the gapless micro-lenses: It's worth mentioning that not all lenses project the scene onto the sensor from directly in front either - some have the light hitting the sensor at quite an angle, which can interfere with how well the micro-lenses can direct all the light onto a photosite. These lenses are typically from older cameras designed to project onto film, which didn't care what angle it was exposed from. This is why different lenses often vignette differently on different cameras, especially vintage wide-angle lenses that might be projecting the image circle towards the edges at quite a steep angle.
  7. When I was in South Korea earlier this year there were ads for 8K everywhere - TVs showing demo material in airports and shopping centres etc. When you walked up and inspected the image it was highly compressed and obviously being streamed from some central location - judging by the compression, that source might have been on another planet. It really was complete trash, probably worse than YT in 1080p. This strikes me as an interest in technology for its own sake rather than for the end result, and I can imagine such a thing driving the whole market in certain places. I can also imagine a shift to the Alexa 35 from even the 65, based on the improved dynamic range and new colour science (LogC V4 vs V3). The lenses might be lighter too, although I'm not sure how much that actually influences big-budget productions.
  8. kye

    Panasonic G9 mk2

    I suspect that there will be a GH7, and that it will be out sooner than their usual release schedule (as the GH6 wasn't well received), but I also wouldn't wait for it - it could be a couple of years away and still be "ahead of schedule".
    In terms of moire, the only guaranteed solutions are an OLPF or pixels with 100% coverage (ie, no gaps between the pixels). If you don't have either of those, there is always the possibility that a repeating pattern will fall into the gaps between the pixels and be completely invisible to the sensor wherever the pattern and pixels are identically spaced.
    AI can't deal with moire because it happens before the image is digital. The only way AI could deal with it would be to re-interpret the whole image and replace sections because it "knows better" than what the camera captured, and getting a good result in video is a long way off, I suspect.
    Your eye will still see the moire because the interference comes from the actual objects exhibiting this behaviour, not from the capture mechanism. The "lines" in the images below are formed because the objects physically line up when viewed from that particular vantage point, and the areas without "lines" are where the objects didn't line up from that vantage point.
    I've been saying this for years now. In the conversation that follows, people resume talking about PDAF and CDAF like my comment never existed. It's nice to hear someone else say it, but don't expect to raise the level of discussion!
  9. kye

    Panasonic G9 mk2

    I couldn't find any DR results either. Let's hope the people who do thorough technical reviews are still working on them, rather than the camera being passed over for deeper technical investigation.
  10. Interesting stuff, thanks for giving a summary. I think the workhorses represent the quiet majority, with all the discussion going to the exceptions. It makes sense - the things that get remarked upon are the things that are remarkable, and the unremarkable things aren't worth remarking on. It's all literally in the dictionary definitions 🙂 Considering that professional equipment should have a very long service life, a good proportion of the cinema cameras manufactured over the last decade are likely still in use (unless accidentally destroyed), quietly getting the job done. As they get a bit beaten up over the years they'll start flying under the radar, because rental houses and studios sell them off to owner/operators, so the cameras stop appearing in sales figures or rental-house stats, etc.
  11. kye

    Panasonic G9 mk2

    I'm guessing there should be sufficient tests of the S5ii by now to be able to tell? I was also wary of the GH6 because of that mode, and specifically because they obviously had a problem with it (the horizontal banding in high-DR situations) but de-prioritised it. Apart from that, I'd benefit more from a camera with dual native ISO than from a dual-gain mode with only one native ISO, because of the extended low-light ability it would give. I saw this review talking about how the highlight of the G9 ii is the 300 FPS mode - linked to timestamp:
    In terms of waiting vs buying, I adopt a risk-management strategy. If an option in front of you would be worth buying even if no new products were coming, and you can't wait (you need the features now), then I say buy. If you can wait, then wait until you can't make do with what you have, and buy then. The worst cases are: 1) you bought and then a better option was released - but if you needed the tool for the job then it was an investment, and you can just trade up, with the interim projects helping to justify the loss; or 2) you can wait, so you do wait, and no better option is released - but that's fine too, because either you never needed to upgrade and never buy, or you eventually do need to upgrade and buy then, when the product is cheaper.
  12. kye

    2024 Plans

    Ah, I'm with you. When I read your previous post I somehow thought you were using both cameras simultaneously on the same head, and that other heads weren't up to having two cameras mounted at the same time. It makes more sense if you have them mounted separately. I'd occasionally like to capture multiple FOVs simultaneously from the same setup, but it's not something easily done or rigged up.
  13. kye

    2024 Plans

    Why use dual cameras? One wider angle and one tighter angle? What types of shots would you use this for?
  14. These young people, are they expressing this interest in a film-making context? Are they film students? I'm curious... I've never known what the cool kids were doing!
  15. kye

    Panasonic G9 mk2

    I'd wait. Both are far too large for me! But for you, I'd say you should think about the total package, including lenses, batteries, accessories, etc, and work out what suits you and your workflow best. TBH, the camera body probably matters least out of everything...
  16. Do you find that having RAW is more of a benefit on the wider cameras? I have vague memories that RAW matters more on wider lenses because there's more detail visible in the scene and things are likely to be sharper due to the deeper DoF. I know that on iPhones the wide camera has historically been worse, but I'm not sure if that was just ISO performance. I did direct comparisons between my iPhone 12 Mini cameras and found the main camera had ISO noise equivalent to my GH5 at about F2.8, but the wide camera was more like F8!
  17. kye

    Panasonic G9 mk2

    I'm not really up with all the latest things the flagships have in them, but I'd imagine there's a bunch of things they could do that would fit the "spirit" of the GH line - each one not really headline-grabbing, but cumulatively creating a real workhorse. Things that come to mind (by no means everything):
      • PDAF with all the modes (face, eye, dog/cat/gerbil-eye-AF)
      • Variable eND
      • External RAW support (BRAW and ProRes RAW)
      • Focus breathing compensation
      • Shutter angles
      • All the ProRes modes internally (including the 4444 12-bit mode)
      • Dual native ISO with a nice high second ISO for serious low-light performance
      • Support for more than 2 channels of audio
      • Clean HDMI out
      • An updated BM-like UI where you can choose the resolution, frame rate, codec, quality settings, audio settings, etc all in one simple place
      • A 1000/1500/2000+ nit screen
    They could also make an effort to create a good package / rig, by offering products like:
      • An updated interface module that offers XLRs, TC, and other things
      • An on-camera hotshoe shotgun mic, like the Sony ECM on-camera mics, which would make a nice compact package
      • A bolt-on ARRI LPL adapter
      • Integration with DJI's ecosystem, so the camera can talk to their gimbals (for follow mode) and their LiDAR products, giving you AF on manual lenses etc
      • A custom-designed grip that provides extra batteries but also takes a swappable NVMe SSD (the long skinny ones) so you can record to SSD without having to mess with cables
    There's nothing life-changing in the above, and nothing that breaks the laws of physics like asking for 20 stops of DR from an MFT sensor would, but if it came with half of that stuff it would be a serious offering, I'd say. And if you think there's no room for improvement over other offerings, just watch a few videos where people list the 27 reasons the FX3 isn't a cinema camera, or the equivalents from any other brand. No camera has all of the good features; they're all missing a random smattering of them.
  18. I hope that logic applies in reality, and from that Cooke video it looks like maybe it does. Did you see the shift towards larger sensors, and now away from them, in your travels? Or are those cameras too high-end for the projects you're on?
  19. This video from Cooke suggests that the new Alexa 35 is creating a "return of Super 35". Jay and Christopher suggest that "it opens the world", that "as we moved into larger and larger sensors as a trend, that limited the options for lenses", and that going back to S35 gives "a larger breadth of options" from the 50 years of S35 lens development.
  20. I found Lightroom to be a very well-designed tool back when I was doing photos. I read about wedding photographers ripping through several hundred images from a wedding and only needing seconds per image, and I could completely believe it. Such an interface, where you just start at the top and work down through each section as required / desired, is a very easy experience. I'm at a bit of a crossroads with my colour grading approach: on one hand I could implement a default node tree with heaps of nodes, adjusting different things in different nodes that are each configured in the right colour space; the other pathway is to design my own plugin that is like Lightroom and just has the sliders I want, each in the right colour space, in the order I want to apply them. I think this is why all the all-in-one solutions like Dehancer / FilmConvert / etc have options to select the input colour space. The saving grace of all this is that most log profiles are so similar to each other that they mostly work interchangeably, if you're willing to adjust a few sliders to compensate and aren't trying to really fine-tune a particular look. If you don't have colour management (either the tool doesn't support it or you haven't learned it) then you're really fumbling in the dark. To a certain extent I can understand people not wanting to learn it, because there are a few core concepts you need to wrap your head around, but on the other hand it is so incredibly powerful that it's kind of like being given a winning lottery ticket but not bothering to cash it in because you'd have to work out how to declare it on your next tax return.
  21. It's literally all the same - the only reason the software looks different is that in video there are things you want to do across multiple images, e.g. stabilisation. You can colour grade video in Photoshop (I've done it - it literally exports a video file) and you can edit still images in Resolve. Every operation done to an image between the RAW file and the final deliverable is just math.
  22. The more I think about colour grading in anything other than Resolve, the more difficult I can see it would be, and the more out-of-reach I can see it is for normal people. Good thing that the editor and other pages in Resolve are getting more and more suitable for doing large projects end-to-end... just sayin' 😎😎😎
    One thing I see many people being confused by is the order of the colour grading transformations in the image pipeline vs the process of colour grading. The pipeline that each clip should go through is:
      1. conversion to a working colour space
      2. clip-level adjustments
      3. scene-level adjustments
      4. overall "look" adjustments that apply to the whole timeline
      5. conversion to 709, which also applies to the whole timeline
    The process of colour grading is to set up 1 and 5, then 4, then 3, then finally implement 2, then export - see the sketch below.
    I have had thoughts about trying to create some tools that might be helpful and simplify things for folks, but between the barriers to using FCPX and PP and the understanding of pipeline vs process that's required, I think it's too large a gap for me to be able to assist with. RAW is RAW, regardless of how many of them you happen to capture.
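    As a schematic of that pipeline order (the stage names are mine, just to show the data flow - in practice each stage would be a node or group):

```python
# The five stages as plain functions, applied in pipeline order (1 -> 5),
# even though you build them in a different order (1 and 5, then 4, 3, 2).
def grade(frame, to_working, clip_adjust, scene_adjust, look, to_rec709):
    frame = to_working(frame)    # 1. conversion to a working colour space
    frame = clip_adjust(frame)   # 2. clip-level adjustments
    frame = scene_adjust(frame)  # 3. scene-level adjustments
    frame = look(frame)          # 4. timeline-wide "look"
    return to_rec709(frame)      # 5. timeline-wide conversion to 709
```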
  23. Actually, with proper colour management, it is linear. The Offset control in Resolve literally applies linear gain - just like setting the node to Linear gamma and then using the Gain wheel.
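    A quick numeric check of why (assuming a pure log2 encoding for simplicity - real camera log curves add a toe and scaling, but the principle is the same): adding a constant in log space is multiplying in linear space, because log2(x) + c = log2(x * 2^c).

```python
# Offset in a log encoding == gain in linear light.
import numpy as np

linear = np.array([0.05, 0.18, 0.90])       # linear scene values
offset = 1.0                                # +1 stop applied as a log offset

via_log = 2 ** (np.log2(linear) + offset)   # offset in log space, back to linear
via_gain = linear * 2 ** offset             # straight gain in linear space

assert np.allclose(via_log, via_gain)
print(via_gain)                             # [0.1  0.36 1.8 ] - one stop brighter
```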
  24. I'm not sure what you're saying. I shoot outdoors in completely uncontrolled available lighting, cinema-verite style, with zero direction and zero setup - I participate in events, I shoot as best I can, and I get what I get. Matching exposure perfectly with zero adjustment in post in this situation is impossible. Lots of folks shoot in a mix of available lighting with some control over events, and so have some degree of control over the shooting, but not total control. Matching exposure perfectly with zero adjustment in post would be incredibly difficult and would come at the expense of things that matter more, for example shooting fast enough to get additional material. If you can shoot fast enough to completely and perfectly compensate for the changes in the environment you're in, and can do so to the tolerance of your taste in post, then that's great, but it's a pipe dream for many. Also, I think you radically underestimate what is possible in post if you have proper colour management set up, and potentially underestimate how small a variation it takes to benefit from a small tweak. I don't see why a tiny tweak here or there is a big deal, to be honest - even large changes are not visible on most material when done properly. So just shoot as best you can, adjust in post as best you can, and focus on being creative - almost everything else matters more to the final product than such a minute aspect of the image.
  25. It depends on the situation you're shooting in. Obviously if you're in a studio and have complete control over the lighting etc, then consistency is possible. If you're shooting anywhere near natural light of any kind, there is always the possibility of lighting changes. Even high-end productions shooting on days with nothing but blue skies might end up shooting pickups later in the day when the light has changed a bit, and anything shot anywhere near sunset is going to change shot-to-shot, potentially faster than you could adjust exposure and WB to compensate. What is invisible on set might well be visible in post, considering how controlled the conditions are in the grading suite and that you'd be cutting directly from one shot to the next. The reality is that with modern cameras the amount of leeway in post is absolutely incredible - latitude tests routinely show that you can push a shot 2 stops up or 3 / 4 / 5 stops down and it's still completely usable. The idea that you'd shoot on set so as to require zero simple changes in post - changes that take literally a few seconds - is a false economy. Here's one test showing Alexa overexposure tests: https://cinematography.net/alexa-over/alexa-skin-over.html