Everything posted by kye
-
I don't think it works that way though. A 12K sensor is just as susceptible to moire as a 4K sensor if a repeating pattern happens to align with the gaps between its pixels. It might be that common causes of moire are around a certain size and therefore affect one combination of sensor resolution, sensor size and focal length more than others. Also, lower-resolution sensors might be more prone to moire because they're typically older designs with larger gaps between the pixels than modern sensors have. Lower resolutions are also likely to have issues on cheaper cameras, because line skipping effectively creates very large gaps between the active pixels. Sadly, there are lots of different ways to create moire, and many of them come from strategies to make the product more affordable!
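Just to make the mechanism concrete, here's a toy 1D sketch of why "gaps between active pixels" produce moire (my own illustration, nothing to do with any particular camera): a fine repeating pattern sampled with big gaps comes out looking like a much coarser pattern, which is the false low-frequency banding we call moire.

```python
import numpy as np

# A fine repeating pattern in the scene: 460 cycles across the frame width.
x = np.linspace(0, 1, 4096, endpoint=False)
scene = np.sin(2 * np.pi * 460 * x)

# A "sensor" that only samples every 8th position, the way line-skipping
# leaves big gaps between the active pixels: 512 samples across the frame.
sampled = scene[::8]

# The sampled pattern aliases: its strongest frequency is no longer 460.
spectrum = np.abs(np.fft.rfft(sampled))
alias = np.argmax(spectrum[1:]) + 1
print(f"real pattern: 460 cycles, what the sparse sensor records: ~{alias} cycles")
```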
-
Yes and no. The Foveon sensor would mean that demosaicing wouldn't be required, but it might still have dead spots between the photosites, which is what I was talking about. Here's an image showing what I mean... the yellow areas indicate the light from the scene that would be captured by each photosite, and the blue areas indicate the light from the scene that would not be detected by any photosite on the sensor: If the microlenses were configured to make sure all the light made its way onto a photosite then it would eliminate this issue, like the image below shows on the right with the gapless microlenses: It's worth mentioning that not all lenses project the scene onto the sensor from directly in front either - some lenses have the light hitting the sensor at quite an angle, which can interfere with how well the microlenses are able to direct all the light onto a photosite. These lenses are typically older designs made to project onto film, which didn't care about the angle it was exposed from. This is why different lenses often vignette differently on different cameras, especially vintage wide-angle lenses that might be projecting the image circle from the middle out to the edges at quite a steep angle.
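To put the "dead spots" idea into a toy example (my own sketch, not any real sensor geometry): a photosite only collects light over its active width, so a thin detail that happens to land entirely in the gap between photosites simply never registers, while gapless microlenses funnel everything onto an active area.

```python
import numpy as np

def sample_with_fill_factor(scene, pitch=8, fill=0.5):
    """Each photosite integrates light over its active width only; anything
    landing in the dead gap between active widths is never detected."""
    active = int(pitch * fill)
    return np.array([scene[i:i + active].mean()
                     for i in range(0, len(scene) - pitch + 1, pitch)])

scene = np.zeros(800)
scene[405:407] = 1.0   # a thin bright detail in the scene

print(sample_with_fill_factor(scene, fill=0.5).max())  # 0.0  - fell into a gap
print(sample_with_fill_factor(scene, fill=1.0).max())  # 0.25 - gapless: detected
```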
-
End of the shallow DOF obsession? Is 2x crop more cinematic?
kye replied to Andrew Reid's topic in Cameras
When I was in South Korea earlier this year they had ads for 8K that were TVs showing demo material - they were everywhere in airports and shopping centres etc. When you walked up to them and inspected the image, it was highly compressed and obviously being streamed from some central location - judging by the compression, that source might have been on some other planet. It really was complete trash - probably worse than YT in 1080p. This strikes me as being an interest in technology for its own sake, rather than for the end result. I can imagine such a thing driving the whole market in certain places. I can imagine a shift to the Alexa 35 from even the 65, based on the improved dynamic range and new colour science (LogC V4 vs V3). The lenses might also be lighter, although I'm not sure how much that actually influences big-budget productions. -
I suspect that there will be a GH7, and that it will be out sooner than their usual release schedule (as the GH6 wasn't well received), but I also wouldn't wait for it - it could be a couple of years away and still be "ahead of schedule". In terms of moire, the only guaranteed solutions are an OLPF or pixels with 100% coverage (ie, no gaps between the pixels). If you don't have either of those, there is always the possibility that a repeating pattern will fall into the gaps between the pixels and therefore be completely invisible to the sensor wherever the pattern and the pixel spacing line up. AI can't deal with moire because it happens before the image is digital. The only way AI could deal with it would be to re-interpret the whole image and replace sections of it because it "knows better" than what the camera captured. Getting a good result in video that way is a long way off, I suspect. Your eye will see moire there because the interference comes from the actual objects exhibiting this behaviour, not the capture mechanism. The "lines" shown in the below images are formed because the objects physically line up when viewed from this particular vantage point, and the areas that don't have "lines" are where the objects didn't line up from that vantage point. I've been saying this for years now. In the conversation that follows, people resume talking about PDAF and CDAF like my comment never existed. It's nice to hear someone else say it, but don't expect to raise the level of discussion!
-
I couldn't find any DR results either. Let's hope the people who do thorough technical reviews are still working on theirs, rather than the camera just being passed over without any deeper technical investigation.
-
End of the shallow DOF obsession? Is 2x crop more cinematic?
kye replied to Andrew Reid's topic in Cameras
Interesting stuff, thanks for giving a summary. I think the workhorses represent the quiet majority, with all the discussion going to the exceptions. It makes sense - the things that get remarked upon are the things that are remarkable, and the unremarkable things aren't worth remarking on. It's all literally in the dictionary definitions 🙂 Considering that professional equipment should have a very long service life, a good proportion of the cinema cameras manufactured over the last decade are likely to still be in use (unless accidentally destroyed), quietly getting the job done. As they get a bit beaten up over the years they start flying under the radar, because the rental houses and studios sell them off to owner/operators and the cameras no longer appear in sales figures or rental-house stats, etc. -
I'm guessing there should be sufficient tests of the S5ii by now to be able to tell? I was also wary of the GH6 because of that mode, and specifically because they obviously had a problem with it (the horizontal banding in high-DR situations) but de-prioritised fixing it. Apart from that, I'd benefit more from a camera with dual native ISO than a dual-gain mode with only one native ISO, because of the extended low-light performance it would give. I saw this review talking about how the highlight of the G9 ii is the 300 FPS mode - linked to timestamp: In terms of waiting vs buying, I adopt a risk-management strategy. If an option in front of you is worth buying even if no new products were coming, and you can't wait (you need the features now), then I say buy. If you can wait, then wait until you can't make do with what you have, and buy then. The worst cases are: 1) you buy and then a better option is released - but if you needed the tool for the job then it was an investment, and you can always trade up, with the interim projects helping to justify the loss; or 2) you can wait, so you do wait, and no better option is released - but that's fine too, because either you never needed to upgrade, or you eventually do and you buy then, when the product is cheaper.
-
Ah, I'm with you. When I read your previous post I thought somehow that you were using both cameras simultaneously on the same head, and that other heads weren't up to having two cameras mounted at the same time. It makes more sense if you have them mounted separately. I occasionally would like to capture multiple FOVs simultaneously from the same setup, but it's not something easily done or rigged up.
-
Why use dual cameras? One wider angle and one tighter angle? What types of shots would you use this for?
-
End of the shallow DOF obsession? Is 2x crop more cinematic?
kye replied to Andrew Reid's topic in Cameras
These young people, are they expressing this interest in a film-making context? Are they film students? I'm curious... I've never known what the cool kids were doing! -
I'd wait. Both are far too large for me! But for you, I'd say you should think about the total package, including lenses, batteries, accessories, etc, and work out what suits you and your workflow best. TBH, the camera body probably matters least out of everything...
-
Do you find that having RAW is more of a benefit on the wider cameras? I'm having vague memories that RAW matters more on wider lenses because there's more detail visible in the scene and things are likely to be sharper due to having a deeper DoF. I know on the iPhones the wide camera has historically been worse, but I'm not sure if that was just ISO performance. I did direct comparisons with my iPhone 12 Mini cameras and found the main camera had equivalent amounts of ISO noise to my GH5 at about F2.8 but the wide was about F8!
-
I'm not really up with all the latest things that the flagships have in them, but I'd imagine there's a bunch of things they could do that would fit the "spirit" of the GH line, where nothing is really headline-grabbing but cumulatively it creates a real workhorse. Things that come to mind (by no means everything):
- PDAF with all the modes (face, eye, dog/cat/gerbil-eye-AF)
- Variable eND
- External RAW support (BRAW and ProRes RAW)
- Focus breathing compensation
- Shutter angles
- All the ProRes modes internally (including the 4444 12-bit mode)
- Dual native ISO with a nice high second ISO for serious low-light performance
- Support for more than 2 channels of audio
- Clean HDMI out
- An updated BM-like UI where you can choose the resolution, frame rate, codec, quality settings, audio settings etc all in one simple place
- A 1000/1500/2000+ nit screen
They could also make an effort to create a good package / rig, by offering products like:
- An updated interface module that offers XLRs, TC, and other stuff
- An on-camera hotshoe shotgun mic, like the Sony ECM on-camera mics, which makes a nice compact package
- A bolt-on ARRI LPL adapter
- Integration with DJI's system so the camera can talk to their gimbals (for follow mode) and their LiDAR products, so you can have AF on manual lenses etc
- A custom-designed grip that provides extra batteries but also integrates a swappable NVMe SSD (the long skinny ones) so you can record to SSD without having to mess with cables
I mean, there's nothing life-changing in the above, and we're not breaking the laws of physics by asking for 20 stops of DR from an MFT sensor, but if it came with half of that stuff then it would be a serious offering I'd say. If you think there'd be no improvement over other offerings, just watch a few videos where people list the 27 reasons the FX3 isn't a cinema camera, or the equivalents from any of the other brands. No camera has all of the good features; they're all missing a random smattering of them.
-
End of the shallow DOF obsession? Is 2x crop more cinematic?
kye replied to Andrew Reid's topic in Cameras
I hope that logic applies in reality, and from that Cooke video, it looks like maybe it does. Did you see the shift towards larger sensors, and now away from them, in your travels? Or are those cameras too high-end for the projects you're on? -
End of the shallow DOF obsession? Is 2x crop more cinematic?
kye replied to Andrew Reid's topic in Cameras
This video from Cooke suggests that the new Alexa 35 is creating a "return of Super 35". Jay and Christopher suggest that "it opens the world", that "as we moved into larger and larger sensors as a trend, that limited the options for lenses", and that going back to S35 gives "a larger breadth of options" from the 50 years of S35 lens development. -
I found Lightroom to be a very well-designed tool back when I was doing photos. I read about wedding photographers ripping through several hundred images from a wedding and only needing seconds per image, and I could completely believe it. Such an interface, where you just start at the top and go down through each section as required / desired, is a very easy experience. I'm at a bit of a crossroads with my colour grading approach: on the one hand I could implement a default node tree with heaps of nodes and adjust different things in different nodes, each configured in the right colour space etc, but the other pathway is to design my own plugin that works like Lightroom and just has the sliders I want, each in the right colour space and in the order I want to apply them. I think this is why all those all-in-one solutions like Dehancer / Filmconvert / etc have options to select the input colour space. The saving grace of all this is that most log profiles are so similar to each other that they mostly work with each other, if you're willing to adjust a few sliders to compensate and aren't trying to really fine-tune a particular look. If you don't have colour management (either the tool doesn't support it or you haven't learned it) then you're really fumbling in the dark. To a certain extent I can understand people not wanting to learn it, because there are a few core concepts you need to wrap your head around, but on the other hand it is so incredibly powerful that it's kind of like being given a winning lottery ticket but not bothering to cash it in because you'd have to work out how to declare it on your next tax return.
-
It's literally all the same; the only reason the software looks different is that in video there are things you want to do across multiple images, e.g. stabilisation. You can colour grade video in Photoshop (I've done it - it literally exports a video file) and you can edit still images in Resolve. Every operation applied to an image between the RAW file and the final deliverable is just math.
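To illustrate the "it's just math" point, here's a toy sketch of the classic lift/gamma/gain primary correction written as a plain function (my own illustration - the exact formula varies between tools, and this isn't any particular app's internal code):

```python
import numpy as np

def lift_gamma_gain(pixel, lift=0.0, gamma=1.0, gain=1.0):
    """Toy primary correction on 0-1 normalised values: lift raises the blacks,
    gain scales the whites, gamma bends the midtones. The same math applies
    whether the values came from a RAW still or a frame of video."""
    pixel = np.clip(pixel, 0.0, 1.0)
    out = pixel * gain + lift * (1.0 - pixel)     # one common lift/gain form
    return np.clip(out, 0.0, 1.0) ** (1.0 / gamma)

# Same operation on a "still" (single value) and a "video frame" (array)
print(lift_gamma_gain(0.18, lift=0.02, gamma=1.1, gain=0.95))
print(lift_gamma_gain(np.linspace(0, 1, 5), lift=0.02, gamma=1.1, gain=0.95))
```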
-
The more I think about colour grading in anything other than Resolve, the more difficult I can see it would be, and the more out of reach I can see it is for normal people. Good thing that the editor and other pages in Resolve are getting more and more suitable for doing large projects end-to-end... just sayin' 😎😎😎 One thing I see many people being confused by is the order of colour grading transformations in the image pipeline vs the process of colour grading. The pipeline that each clip should go through is:
1. conversion to a working colour space
2. clip-level adjustments
3. scene-level adjustments
4. overall "look" adjustments that apply to the whole timeline
5. conversion to 709, which also applies to the whole timeline
The process of colour grading is to set up 1 and 5, then 4, then 3, then finally implement 2, then export. I have had thoughts about trying to create some tools that might be helpful and simplify things for folks, but between the barriers with using FCPX and PP, and the understanding of pipeline vs process that is required, I think it's too large a gap for me to be able to assist with. RAW is RAW, regardless of how many of them you happen to capture.
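If it helps to see that ordering written down, here's a minimal sketch of the pipeline as plain code (the function names and stand-in transforms are mine and purely illustrative - this isn't how Resolve implements anything, it just shows the order):

```python
import numpy as np

# Purely illustrative stand-ins for the real transforms (names are mine).
to_working_space = lambda img: img                               # e.g. camera log -> working space
timeline_look    = lambda img: img * 1.0                         # the shared creative "look"
to_rec709        = lambda img: np.clip(img, 0, 1) ** (1 / 2.4)   # toy output transform

def grade_clip(raw_frame, clip_adjust, scene_adjust):
    """Order of operations for one clip in a colour-managed pipeline."""
    img = to_working_space(raw_frame)  # 1. conversion to working colour space
    img = clip_adjust(img)             # 2. clip-level adjustments
    img = scene_adjust(img)            # 3. scene-level adjustments
    img = timeline_look(img)           # 4. overall look, applied timeline-wide
    return to_rec709(img)              # 5. conversion to 709, also timeline-wide

# The grading *process* runs in a different order: set up 1 and 5 first,
# then design 4, then 3, and only then do the per-clip work in 2.
out = grade_clip(np.full((4, 4), 0.18),
                 clip_adjust=lambda i: i * 1.1,
                 scene_adjust=lambda i: i)
```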
-
Actually, with proper colour management, it is Linear. The Offset control in Resolve literally applies Linear gain, just like setting the node to Linear Gamma and then using the Gain wheel.
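The reason Offset behaves that way is easy to show numerically. A toy sketch, assuming a purely logarithmic encoding (real camera log curves also have a linear toe, so it's only exact above that): adding a constant in the log domain is the same as multiplying by a constant in the linear domain.

```python
import numpy as np

# Toy purely-logarithmic encode/decode.
encode = lambda linear: np.log2(linear)
decode = lambda log:    np.exp2(log)

linear_value = 0.18      # e.g. middle grey in linear light
offset       = 1.0       # add 1.0 in the log domain

# Path A: Offset applied to the log-encoded value, then decoded back to linear
path_a = decode(encode(linear_value) + offset)

# Path B: Gain (a plain multiply) applied directly to the linear value
path_b = linear_value * np.exp2(offset)

print(path_a, path_b)    # identical: 0.36 and 0.36
```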
-
I'm not sure what you're saying. I shoot outdoors in completely uncontrolled available lighting, cinema verite style, with zero direction and zero setup - I participate in events and I shoot as best I can and I get what I get. Matching exposure perfectly, with zero adjustment in post, is impossible in this situation. Lots of folks shoot in a mix of available lighting with some control over events, and so have some degree of control over the shooting, but not all of it. Matching exposure perfectly with zero adjustment in post would be incredibly difficult and would come at the expense of things that matter more, for example shooting fast enough to get additional material. If you are able to shoot fast enough to completely and perfectly compensate for the changes in the environment you're in, and able to do so to the tolerance of your taste in post, then that's great, but it's a pipe dream for many. Also, I think you radically underestimate what is possible in post if you have proper colour management set up. I also think you potentially underestimate how small a variation it takes before a small tweak is worthwhile. I don't see why a tiny tweak here or there is a big deal, to be honest - even large changes are not visible on most material when done properly. As such, just shoot as best as you can, adjust in post as best as you can, and focus on being creative - almost everything else matters more to the final end product than such a minute aspect of the image.
-
It depends on the situation you're shooting in. Obviously if you're in a studio and have complete control over the lighting etc then consistency is possible. If you are shooting anywhere near natural light of any kind then there is always the possibility of lighting changes. Even high-end productions shooting under nothing but blue skies might end up shooting pickups later in the day when the light has changed a bit. Anything shot anywhere near sunset is going to change shot-to-shot, potentially faster than you could adjust exposure and WB to compensate. What is invisible on set might well be visible in post, considering how controlled the conditions are in the colouring suite and that you'd be cutting directly from one shot to the other. The reality is that with modern cameras the amount of leeway in post is absolutely incredible - latitude tests routinely show that you can push a shot 2 stops up or 3 / 4 / 5 stops down and it's still completely usable. The idea that you'd try to shoot on set so as to require zero simple changes in post, changes which take literally a few seconds, is a false economy. Here's one test showing Alexa overexposure tests: https://cinematography.net/alexa-over/alexa-skin-over.html
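For anyone unsure what those stop figures mean in practice: a stop is just a factor of two in linear light, so an exposure push in post is a simple multiply. A toy sketch, nothing camera-specific, just the arithmetic:

```python
import numpy as np

def push_stops(linear_img, stops):
    """Exposure adjustment in post: each stop is a factor of 2 in linear light."""
    return linear_img * (2.0 ** stops)

shot = np.array([0.045, 0.18, 0.72])   # toy linear values: shadow, mid grey, highlight
print(push_stops(shot, +2))            # 2 stops up   -> multiply by 4
print(push_stops(shot, -3))            # 3 stops down -> multiply by 1/8
```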
-
End of the shallow DOF obsession? Is 2x crop more cinematic?
kye replied to Andrew Reid's topic in Cameras
Perhaps the most significant aspect of this whole thing, and the elephant in the room, is that all movies and TV and videos are a fiction. Anything scripted is a fictional reality of course. Documentaries are a fiction too: taking hours of interviews, b-roll, on-location and historic footage and editing it all down to a hopefully coherent string of snippets is a radical departure from reality. In documentary editing they talk about "finding the story in the edit" - if I go out my front door into reality I don't have to "find the story in reality", and even if I believed that made sense, how could it possibly occur? Wedding films are so romanticised that most wouldn't even call them a documentary, despite being made exclusively of filmed-on-location, non-scripted cinema verite. Even the walk-around videos shot by @markr041 are a long way from reality. If I was actually present in these locations then I could go wherever I want, talk to whoever I want, etc - the set of choices in each video represents one single possibility from an infinite number of creative choices, just like every edited film is one of an infinite number of possible edits that could be made. These films are edited in real time, moment by moment, and cannot help but include an almost infinite number of creative choices. In this context, ALL choices are creative choices that contribute to the final film. So what does this mean? Well, if all work is fiction (of some sort or other) then every choice is a creative one. The choice of a high-resolution, aberration-free lens is a choice like any other. If I am shooting something modern then a modern lens is appropriate, if I am shooting something gritty then a gritty lens is appropriate, if I am shooting something vintage then a vintage lens is appropriate. But here's the kicker... if everything is a fantasy, to some degree or other, then there is no such thing as a neutral choice. Clean lenses have just as much of a look as anything else; it just happens to be an absence of technical aberrations. -
End of the shallow DOF obsession? Is 2x crop more cinematic?
kye replied to Andrew Reid's topic in Cameras
Yeah, that's what I said in my post. -
End of the shallow DOF obsession? Is 2x crop more cinematic?
kye replied to Andrew Reid's topic in Cameras
Specific examples are useful. The Tokina Vista lenses on Ted Lasso are relatively neutral but still have a bit of character. Here is a comparison of them vs the ARRI Signatures at the nearest focal lengths and apertures. (Click to view image in proper resolution) I see differences in colour cast, contrast, background separation, and perhaps even aspect ratio? All lenses exhibit SOME character, although in the context of cinema, rather than TV, these lenses are relatively neutral. One thing to be aware of is how quickly we adapt to a look. Our eyes and visual system are incredibly adaptable - so much so that a significant element in colour grading is the design of the grading suite, workflows and processes, so that your eyes continually have a neutral reference and don't gradually drift over the course of a project. With lenses etc there is no neutral reference. One thing that is interesting to look at is the VFX breakdown videos from shows like The Mandalorian, where they show the process of making the 3D objects look as real as possible and then applying the same lens characteristics as the production used, which often has a significant aesthetic effect. Lens character is a tool that can be used to reinforce or undermine the creative intent of the film. It won't make or break your production, but it's available to you, so why not use it? -
My understanding, pieced together from snippets across many videos, articles, etc, is that middle grey is important because it's where you would expose your skin tones, and everything downstream of that then makes sure to keep it in the middle and generally unmolested. It's very common for elements of a colour grade to find middle grey, go down a bit from there, and then make some adjustment that goes from that point down to zero (e.g. making the shadows cooler - there's a toy sketch of this at the end of the post), or to go up a bit from middle grey and then make an adjustment that goes up to the whites (e.g. highlight rolloffs, warming up the highlights, etc). I don't think it's technically wrong to make an adjustment that messes with middle grey, but it's a useful convention. If you expose your material this way, and everyone designs LUTs and curves and rolloffs and colour schemes etc that don't mess with it too much, then your skin tones will be preserved and come out the other end looking appropriate. It might seem a tedious and maybe OCD thing, but the opposite would be ridiculous - a Wild West where no two standards were the same and you would spend most of your time trying to find a series of adjustments that didn't push the skin tones down into the shadow region, then make them blue, then bring everything up too high, then make everything slightly purple. It would be a nightmare.
Well said. One thing I don't think younger people or non-professionals really understand is that until very recently, video was completely dominated by a vast array of technical standards that were completely inflexible. I came to video from computers and the internet, so my perspective was that video is simply a series of images at a certain frame rate, with a horizontal and vertical resolution, and with each pixel being a particular colour. This is mostly true now, where you can have 1920x1080 at 24 fps but just as easily have 1922x1078 at 21 fps. But you don't have to go back in time very far before that wasn't the case.
Take for example the BM UltraStudio Monitor 3G, a USB hardware device that works with Resolve to provide a signal for your reference monitor: It has a TON of options for 2K: First up, these are 2K video STANDARDS. Also, note that there's 2Kp and 2KPsF - we all know that P means Progressive, where each frame contains all the lines, as opposed to Interlaced, where each frame contains half the lines of the image and which was a standard to sort of "fudge" extra resolution. But PsF is "Progressive segmented Frame (PsF, sF, SF)", which "is a scheme designed to acquire, store, modify, and distribute progressive scan video using interlaced equipment." If we look at the BM 4K Mini device that also does 4K outputs, we see that compatibility with tape machines is no longer implemented for the UHD and 4K resolutions: So, as recently as the creation of the 2K video standard, these hardware-compatibility workarounds were still in full effect. If we go further back and look at SD standards, there are only two! These are the only standards available, and the products don't even support outputting anything else.
People who record digitally, edit digitally, and deliver digitally online probably have no idea about the very recent past where video was all about standards and technical compatibility. The creative side only really arrived with the ability to colour video digitally: the Digital Intermediate (DI) process.
For reference, da Vinci 2K was the first system to support HD and 2K; it was released in 1998 and cost $250,000 for the base configuration, with things like extra power windows costing extra (because they were additional hardware processing boards!). Before DI there basically wasn't any creative colour grading, beyond changing how you treated the film by deliberately altering the exposure or colour casts of scenes or whole films. This adherence to standards and conventions is not only still very present in the minds of manufacturers and production professionals, it's the entire framework for thinking about how TV and cinema are made. The concept of "making a killer LUT" is as remote to the manufacturers as the idea of SMPTE or AES/EBU standards compliance is to the LUT bros of social media. It's also why the industry players are so conventional in their thinking and have so much inertia, and why innovation from outside the industry (BM, Apple, etc) is what is really shaking things up.
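And the toy sketch promised above - a minimal illustration of an adjustment that is full strength in the deep shadows and fades to nothing just below middle grey, so skin tones exposed around middle grey are left alone (my own example, not any particular tool's implementation):

```python
import numpy as np

MID_GREY = 0.18   # linear middle grey

def cool_shadows(rgb, amount=0.05, pivot=0.8 * MID_GREY):
    """Cool everything below the pivot (a little under middle grey), fading
    the adjustment to zero at the pivot so middle grey and anything above
    it, like skin tones, are left unmolested."""
    rgb = np.asarray(rgb, dtype=float)
    luma = rgb.mean(axis=-1, keepdims=True)
    weight = np.clip((pivot - luma) / pivot, 0.0, 1.0)   # 1 at black, 0 at pivot
    shift = np.array([-amount, 0.0, +amount])            # push red down, blue up
    return np.clip(rgb + weight * shift, 0.0, 1.0)

print(cool_shadows([0.05, 0.05, 0.05]))   # deep shadow: noticeably cooler
print(cool_shadows([0.18, 0.18, 0.18]))   # middle grey: untouched
```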
