Everything posted by kye

  1. I'd imagine it's partly to do with the analog circuitry and ADCs, but don't forget that just because modern ICs are very capable doesn't mean the manufacturer will sell you a product for a song. Also, digital processing is another step that the high-end products may well have additional capabilities for.
  2. On the colourist forums there's a sample test project and users submit their specs and FPS on the various tests. Here are the M1 and M3 results - the numbers are FPS, and each colour is the same test across both: The jump up from the Intel Macs is enormous - mine gets about 4FPS on the light blue test and about 3FPS on the lighter-orange test - but once you're on Apple silicon there seems to be only incremental improvement. Those tests are pretty brutal by the way - the light blue test is a UHD ProRes file with 18 blur nodes, the dark blue is 66 blur nodes, and the orange ones are many nodes of temporal noise reduction! The differences between the Pro, Max, and Ultra chipsets are much more significant though. I found that the Resolve results correlated pretty well with the Metal tests in Geekbench: https://browser.geekbench.com/metal-benchmarks
  3. Preposterous! It would never work!!
  4. I was talking about the location of the hot-shoe, which is where you'd mount the mic on this hypothetical small vlogging camera. There's no point having a flip-up screen for vlogging if the mic will obscure it. That's why cameras with flip-up screens like the G7X don't have a hot-shoe and require external accessories, or they have a hot-shoe and a flippy screen rather than a tilt-up one. ...and before you start conflating vlogging with a pocket camera with rigging out a cinema camera: by the time you have to put a cage on it, you might as well use a bigger camera that already has a mic port, interchangeable lenses, 4K120p, 15 stops of DR, and all the other crap that the independent-Sony-marketing-affiliate camera reviewers all use.
  5. It probably wouldn't be too hard to adjust the screen tilt mechanism to make it pop up and become a selfie screen too, which would make it infinitely more attractive to that market segment. Of course, the clash between an on-camera mic and a tilt-up selfie screen is a difficult one to reconcile, as many/most vloggers view both as a requirement. They could take notes from other manufacturers and sell a separate mic that uses the hot-shoe (and therefore camera power) and also doesn't get in the way of the flip-up screen - that would be awesome! I did a bit of googling on that Sony HX99 and it looks like an interesting little camera. Once you've gotten familiar with the ZV-1 it would be interesting to see a comparison between them. Of course, we can anticipate that low-light will be a weaker area for it.
  6. BTS: This looks good to me, but I think it could have been shot with almost anything...
  7. Absolutely.. the thing is, this has probably been the case for many years now. It certainly has been the case that at least some of the affordable cinema cameras would be indistinguishable to audiences, the OG BMPCC / BMMCC / 5D + ML RAW for example. To me the milestone of having at least one affordable cinema camera be good enough is a much more significant event than "every" cinema camera being of that standard - who gives a crap how long it takes for the worst models to catch up?
  8. My condolences about your mother. We're here for you - especially if it involves fighting over irrelevant technical details! I've said it previously, but I actually don't have a huge list of wants for an updated GX-line camera, and the GX85 is now my daily driver, as it were. I've heard people ask for a range of improvements but they all seem quite modest: things like PDAF, 10-bit, LOG, full-sensor readout, etc. These may well only require a sensor upgrade and a processor upgrade, perhaps to existing chipsets that aren't even the latest generation, which may not actually require that much additional power / cooling / space. At the moment the GX85 already has IBIS / Dual-IS, a tilting touch screen, an EVF, a half-decent codec, etc.

Plus, it's right at the limit of actually being too small from an ergonomic perspective. The grip design on it is really quite effective and I enjoy using it, but I probably wouldn't want it to be any smaller. If it had a slightly larger grip then that would actually be an advantage ergonomically, wouldn't make the camera much bigger in practice (it would still be smaller than most lenses), and would allow for a larger battery. With proper colour management it's also a very malleable image. I did an exposure test with it some time ago, shooting under and over and bringing the shots back to normal in post, and while the DR wasn't great, the image was very workable.

The endless pursuit of megapixels has proven that a feature can be designed and then marketed and people will go nuts over it, despite the fact that it's not of any practical use to most consumers and it also requires upgrading all the associated equipment in turn. I think this shows that even if you did make a perfect camera for the people who shoot, the people who like progress for its own sake will always think that the perfect camera is the next one. Please no!!!!!
  9. I don't really have a clear understanding of how Yedlin manages his pipeline - do you have a link to something I can look at? In some senses I guess he's got one of the most "processed" image pipelines.

One thing I'm becoming much more aware of is the difference between colour grading and colour science / image science, where the colourist works on individual projects and the colour scientist develops the tools. Obviously some colourists are also doing colour science things as well, so there is definitely some crossover. After my previous post I was thinking about it more and realised that the colourist often acts as a sort-of central coordinator of visual processing, where they understand the needs of the client / project and then apply a variety of tools as appropriate. Some of these tools can be enormously sophisticated, with the well-known examples being the film emulation packages like Dehancer / FilmConvert / FilmBox / etc, but lesser known things like BorisFX / NeatVideo / etc also get heavy use. I'd suggest that with this increasing level of sophistication these tools are really now a different form of VFX, maybe a 2D VFX? So in that sense the 3D VFX is mostly done pre-colourist, but then the colourist would apply a whole bunch of other VFX treatments after that.

I don't know if this is making sense, but it seems like the workflows and scope of the VFX / DIT / colourist are going to change in interesting ways in the future. The move to the Cloud and having the ability in Resolve to go back and forth between the Edit and Fusion and Colour pages certainly supports this idea that it's no longer a one-directional process but a set of interactions with iterations etc. As a one-person setup the ability to colour grade and edit in an iterative fashion, going back and forth, always made sense, and the idea of the linear workflow just seemed restrictive, although in a big production I can see why it would make sense.

Anyway, hope those thoughts were semi-coherent. It's a fascinating space. The film industry can be incredibly slow to innovate and change, especially in regards to how the different departments work with each other, but in some areas there is definitely innovation and it seems like this is one of those.
  10. Panasonic G9 mk2

    Interesting about the sensors (perhaps?) not being Sony - it makes sense though if they developed the dual-gain architecture as a custom design, which I am assuming they did. It also then follows that the colour science would be slightly different too. I suppose they could have made it identical if they'd wanted, but I think if you were designing such a thing you'd constantly be tinkering with it, trying to improve it, adapting to more recent tastes, etc. The fact that the G9ii is as good as the GH6 in many ways could be a promising sign for what a potential GH7 could bring.

I think the difficulties with the GH6 were unfortunate, as was the fact that it was overshadowed by the PDAF implementations of the releases that followed. I think Panasonic were being bold and trying to really push forward in doing a dual-gain architecture sensor, but unfortunately it just didn't quite make the kind of difference that you'd hope for from such a bold move. I guess Panasonic might take the "lesson" to just go back to incrementalism and play it safe, but I really hope they don't do that, and instead view it as a good-but-not-great project and keep being bold and trying things. If the GH6 and G9ii sensors are from TowerJazz (or any other non-Sony provider) then it might be a sign they're going out on their own and will continue to iterate on the design and improve it over time. That would be a great outcome I think.
  11. Yeah. One distinct advantage of doing this stuff in post is that you can tweak and tune it shot-to-shot if required, and if the results are OK but not great you can often lower the strength so it's not so visible. There's no guarantee you'll be able to get an acceptable result though, so if you have to rely on it you're still better off doing it manually / properly in-camera.
  12. Panasonic G9 mk2

    Yes, lots of stuff is done in-camera and cannot be un-done, so it's just a case of trying all the tricks we have and seeing how far we get. If you do a custom WB on the G9ii and GH6 on the same scene, does it remove the differences in WB between them?
  13. Just to round up the reason I was talking about MTF and digital having an unnatural resolution response: the primary function of the "look" of a film is to support the content of the subject matter. In most cases this means being slightly sharper or softer than a neutral point, but not standing out and calling attention to itself, unless an artificial feel is deliberately being added, like if a scene is set in a fake reality etc. I would suggest that film was a relatively neutral reference in terms of the aesthetic. We didn't go to theatres and think "oh my god the whole thing is a soft blurry mess!", so by having something massively sharper I would suggest we're diverging from an ideal neutral position. Thus my comment about resolution vs "making a meaningful final film" - the film is meaningful because of its content.
  14. I was being a bit provocative, mostly just to challenge the blind pursuit of specifications over anything actually meaningful, which unfortunately is a bit like trying to hold back the ocean. I have seen a lot of footage from the UMP12K and the image truly is lovely, that's for sure. Especially looking at the videos Matteo Bertoli shoots with the UMP12K and the BMPCC6K - because it's the same person shooting and grading both, the comparison has far fewer external factors - the 12K has a certain look that the P6K doesn't quite reach. The P6K is a great image too, so that's a high standard to beat.

The idea of massively oversampling is a valid one, and I guess it depends on the aesthetic you're attempting to create. In a professional situation, having a RAW 12K image is a very, very neutral starting position in today's context. I say "in today's context" because since we went to digital, the fundamental nature of resolution has changed. With film, if you exposed it to a series of lines of decreasing size (and therefore increasing frequency), at a certain point the contrast started to decrease as the frequency rose, to the point where the lines became indistinguishable. The MTF curve of film slopes down as frequency goes up. In digital, the MTF curve is flat until aliasing kicks in, where it might dip up and down a bit, and then it falls off a cliff when the frequency reaches the sampling limit of one cycle per two pixels - the Nyquist frequency, same as in audio - and OLPFs are designed to make this a gentler transition from full contrast to zero contrast. While there is no right and wrong, this type of response is decidedly unnatural - virtually nothing in the physical world behaves like this, which I believe is one of the reasons the digital look is so distinctive.

The resolution at which the contrast starts to decrease on Kodak 500T is somewhere around 500-1000 pixels across, so the difference in contrast on detail (otherwise called 'sharpness') is significant by the time you get to 4K and up. So to have a 12K RAW image is to have pixels that are significantly smaller than human perception (by a looong way), so in a sense it takes the OLPF and moire and the associated effects of "the grid", as you say, out of the equation, but it also creates an unnatural MTF / frequency response curve. In professional circles, this flat MTF curve would be softened by filters, the lens, and then by the colourist. If you look at how cinematographers choose lenses, their resolution-limiting characteristics are often a significantly desirable trait guiding those decisions.

Going in the opposite direction, away from the very high resolutions with deliberately limited MTF that Hollywood normally chooses, we have the low resolutions, which limit MTF in their own ways. For example, a native 1080p sensor won't appear as sharp as a 1080p image downsampled from a higher resolution source. 1080p is around the limit of human vision in normal viewing conditions (cinemas, TVs, phones). In a practical sense, when people these days are filming at resolutions around 1080p, MTF control from filters and un-sharpening in post is normally absent, and even most budget lenses are sharper than 1080p (2MP!), so this needs some active treatment to knock the digital edge off things. The other challenge is that these images are likely to be sharpened and compressed in-camera, so they will have digital-looking artefacts to deal with; these are often best un-sharpened too as they are often related to the pixel size.
4K is perhaps the worst of all worlds. It isn't enough resolution to be significantly beyond human vision and have no grid effects, but it also has a flat MTF curve that extends waaay further than appears natural. Folks who are obsessed with the resolution of their camera are also more likely to resist softening the MTF curve, so they're essentially pushing everything into the digital realm and having the image resemble the physical world the least. I find that "cinematic" videos on YT shot in 4K are the most digital / least analog / least cinematic images, with those shot in 1080p normally being better, and the ones shot in 6K or greater being the best (because until recently those were limited to people who understand that sharpness and sharpening aren't infinitely desirable). The advantage that 4K has over 1080p is that the compression artefacts from poor codecs tend to be smaller, and are therefore less visually offensive and more easily obscured by un-sharpening in post. Ironically, a flat MTF curve is just like filming with ultra-low-noise film and then performing a massive sharpening operation on it - the resulting MTF curve is the same. I'm happy to provide more info if you're curious; I've written lots of posts around various aspects of this subject. There's a rough sketch of the film-vs-digital MTF difference at the end of this post.

Yep, massively overenthusiastic amateur here. I mostly limit myself to speaking about things that I have personal experience with, but I work really hard behind the scenes, shooting my own tests, reading books, doing paid courses, and asking questions and listening to professionals. I challenge myself regularly, fact-check myself before making statements in posts, and have done dozens / hundreds of camera and lens tests to try and isolate various aspects of the image and how it works. I have qualifications and deep experience in some relevant fields. I also have a pretty good testing setup, do blind tests on myself using it, and (sadly!) rank cameras in blind tests in increasing order of cost! 😂😂😂 I'm happy to be questioned, as normally I have either tested something myself, or can provide references, or both. Sadly, most people don't have the interest, attention span, or capacity to go deep on these things, so I try and make things as brief as possible, and they end up sounding like wild statements unless you already understand the topic.

Unlike many professionals, I manage the whole production pipeline from beginning to end and have developed an understanding of things that span departments and often fall through the cracks, or that involve changing something in one part of the pipeline and compensating for it at a later point. Anything that spans several departments would rarely be tested except on large budget productions where the cinematographer is able to shoot tests and then work with a large post house, which unfortunately is the exception rather than the norm. Ironically, because I shoot with massively compromised equipment in very challenging situations, I work harder than most to extract the most from my equipment by pushing it to breaking point and beyond and trying to salvage things. Professional colourists are, unfortunately, used to dealing with very compromised footage from lower-end productions, but they are rarely consulted before production to give tips on how to maximise things and prevent issues.
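If you want to see the shape of the difference I'm describing, here's a rough Python sketch. The curve shapes and numbers are made up purely for illustration - they're not measured data for any film stock or sensor - but they show the general idea of film rolling off gradually versus digital staying flat until the Nyquist limit and then dropping off a cliff.

```python
# Illustrative only: made-up curves showing a film-like MTF (gradual roll-off)
# vs a digital-like MTF (flat until the Nyquist limit, then a steep drop).
import numpy as np
import matplotlib.pyplot as plt

freq = np.linspace(0, 120, 500)   # spatial frequency, arbitrary units (e.g. lp/mm)

# Film-like response: contrast starts dropping almost immediately and tails off.
film_mtf = np.exp(-(freq / 40.0) ** 1.5)

# Digital-like response: essentially flat, then a steep fall around Nyquist
# (assumed here to be 80 for illustration).
nyquist = 80.0
digital_mtf = 1.0 / (1.0 + np.exp((freq - nyquist) / 3.0))

plt.plot(freq, film_mtf, label="film-like (gradual roll-off)")
plt.plot(freq, digital_mtf, label="digital-like (flat until Nyquist)")
plt.axvline(nyquist, linestyle="--", color="grey", label="Nyquist limit")
plt.xlabel("spatial frequency")
plt.ylabel("MTF (contrast)")
plt.legend()
plt.show()
```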
  15. Additionally, it's easy to look at residual noise on the timeline and turn up our noses, but by the time you've exported the video from your NLE, it's been uploaded to the streaming platform, and they've done goodness-knows-what to it before recompressing it to razor-thin bitrates, much of what we were seeing on the timeline is gone. The "what is acceptable and what is visible" discussion needs to be shifted to what is visible in the final stream. Anything visible upstream that isn't visible to the viewer is a distraction IMHO.
  16. I'm actually not that fussed by 8-bit video anymore, assuming you know how to use colour management in your grades. If you are shooting 8-bit in a 709 profile you can transform from Rec.709 to a decent working space, grade in there, then convert back to 709. Assuming the 709 profile is relatively neutral, this gives almost complete freedom to make significant exposure / WB changes in post, and by grading in a decent working space (RCM, ACES) all the grading tools work the same as with any other footage. The fact you're going from 8-bit 709 capture to 8-bit 709 delivery means that the bit-depth mostly stays in the same place and therefore doesn't get stretched too much.

The challenge is when you're capturing 8-bit in a LOG space, or a very flat space. This is what I faced when recording on my XC10 in 4K 300Mbps 8-bit C-Log - I have spoken about it in great detail in another thread. It was a real challenge and forced me to explore and learn proper texture management. Texture management isn't spoken about much online, but it includes things like NR (temporal, spatial), glow effects, halation effects, sharpening / un-sharpening, grain, etc. I found with the low-contrast 8-bit C-Log shots from the XC10 that by the time I applied very modest amounts of temporal NR, spatial NR, glow, and un-sharpening, not only was I left with a far more organic and pleasing image, but the noise was mostly gone.

It's easy for uninformed folks to look at 8-bit LOG images like the XC10's and think they're vastly inferior to cameras where NR isn't required, but this isn't true - the real high-end cinema cameras are noisy as hell compared to even the current mid-range offerings, and professional colourists are expected to know about NR. A recent post in a professional colour grading group I'm in was about NeatVideo, and it mentioned that NR is essential on almost every professional colour grading job. I'd almost go so far as to say that if you can't get a half-decent image from 8-bit LOG footage then you couldn't grade high-end cinema camera footage either. There are limits though - things like green/magenta macro-blocking in the shadows were evident on shots where I had under-exposed significantly - but on cameras with a larger sensor than the XC10's 1" sensor, and if exposed properly, these things are far less likely to be real issues.
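To make the 709-in / 709-out idea concrete, here's a minimal numpy sketch. It assumes the camera's "709" profile is close to the standard Rec.709 transfer function and ignores gamut handling entirely - real colour management in Resolve (RCM / ACES) does far more - but it shows the shape of the round trip: decode to linear, make the exposure move there, re-encode to 709 for delivery.

```python
# Minimal sketch of "8-bit Rec.709 in -> grade in a linear working space -> Rec.709 out".
# Assumes a standard Rec.709 transfer function; ignores gamut mapping and dithering.
import numpy as np

def rec709_to_linear(v):
    # Inverse of the Rec.709 OETF (piecewise: linear toe, power-law above it)
    v = np.asarray(v, dtype=np.float64)
    return np.where(v < 0.081, v / 4.5, ((v + 0.099) / 1.099) ** (1.0 / 0.45))

def linear_to_rec709(l):
    # Rec.709 OETF
    l = np.asarray(l, dtype=np.float64)
    return np.where(l < 0.018, 4.5 * l, 1.099 * l ** 0.45 - 0.099)

# Some 8-bit code values straight off the camera (full-range for simplicity)
code_values = np.array([16, 64, 128, 192, 235], dtype=np.float64)

linear = rec709_to_linear(code_values / 255.0)   # decode to scene-linear
graded = np.clip(linear * 2.0, 0.0, 1.0)         # "grade": +1 stop of exposure
delivered = np.round(np.clip(linear_to_rec709(graded), 0.0, 1.0) * 255.0)

print(delivered)   # back in 8-bit Rec.709, the same encoding the footage started in
```

Because capture and delivery share the same encoding, the code values aren't being stretched across a very different curve in the way 8-bit LOG is when it gets expanded out to 709.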
  17. DJI Pocket 3?

    I've been talking about OIS and IBIS having this advantage over EIS for years..... sadly, most people don't understand the differences enough to even know what I'm talking about.
  18. Interesting stuff, and it makes me think about the future of VFX integration. My understanding of the current VFX workflow is this:
      - Movie is shot
      - Movie is partly edited, with VFX shots identified
      - Footage for the VFX shots is sent to the VFX department with the show LUT
      - VFX department does the VFX work and applies the elements required to match the rest of the film (e.g. lens emulations, etc) but does not include elements not present in the real-life footage (e.g. the colour grade)
      - VFX department delivers the footage and it is integrated into the edit
      - Picture lock
      - Colour grading occurs on the real-life footage as well as the VFX shots
      In this workflow, things like the colour grade get applied by the colourist to the whole film. Things like film grain would only be applied by the VFX department if the movie was shot on film (and would only be applied to the VFX elements in the frame, rather than the whole finished VFX frames). I wonder if, in future, VFX will become so prevalent that the colour grade might become integrated within the VFX framework, rather than VFX being considered a step that occurs prior to the colour grade. The discussions that I have seen imply that advanced things like lens emulations / film emulations / etc (which are more image science / colour science than colour grading) are beyond the scope and ability of most colourists.
  19. What do you mean? It looks flawless to me!!! 😂😂😂 Doing it in post in Resolve looks like it might be a mature enough solution now, but I can't imagine that devices will be good enough to do it real-time for quite a few years. I started a separate thread about this semi-recently: https://www.eoshd.com/comments/topic/78618-motion-blur-in-post-looks-like-it-might-be-feasible-now/
  20. The more you look at something, the more you notice. When I first started doing video, I couldn't tell the difference between 60p and 24p, now I can tell the difference between 30p and 24p! BUT, having said that, my wife is pretty good at telling very subtle differences in skin tones and has spent exactly zero time looking at colour grading etc, so we all start off seeing things differently as well.
  21. Panasonic G9 mk2

    Just un-sharpen more. Unless you like the digital look? TBH, if something hasn't been un-sharpened, even if it was shot in RAW video and not processed at all, I find it's trying to shout at me over the content of the video. I suspect the culprit is the post-sharpening done by YT / the streaming service - comparing your upload to YT shows quite a substantial difference.
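In case "un-sharpen" sounds mysterious: it's just sharpening run in reverse, i.e. blending the image slightly toward a blurred copy of itself to take the edge contrast back down. Here's a rough Python sketch of the operation (the radius and amount are arbitrary, and in practice you'd do this with a blur / softening node in your grade rather than in code):

```python
# Rough sketch of "un-sharpening": blend toward a blurred copy of the image,
# the inverse of an unsharp-mask sharpen. Radius and amount are arbitrary.
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharpen(img, radius=1.5, amount=0.4):
    """out = img + amount * (blur(img) - img); amount in (0, 1] reduces edge contrast."""
    img = img.astype(np.float64)
    blurred = gaussian_filter(img, sigma=radius)
    return np.clip(img + amount * (blurred - img), 0.0, 255.0)

# Tiny demo on a synthetic hard edge - note how the transition gets softer.
edge = np.zeros((8, 8))
edge[:, 4:] = 255.0
print(unsharpen(edge)[0])
```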
  22. Yeah, high resolution is great if you want high resolution. Not so good if you are interested in making a meaningful final film.
  23. The image quality side-by-sides aren't even close..
  24. 170Mbps... not terrible. Let's see what the implementation is like, especially the sharpening, NR, and auto-awesome AI. I like that there are two of them, with the "Pro" one having a larger sensor, more resolution, etc. I feel like most action cameras only ever ship as the generic model and somehow the higher-end versions never get released.