kye

Everything posted by kye

  1. These folks are using generative fill to create matte environments for videos... Interesting stuff.
  2. I'm also interested in shiny things, but less so now, and to a certain extent I'm protected from getting better equipment simply because of how important size is to how/where I shoot. It's also worth acknowledging that there are lots of reasons that buying new stuff might provide a benefit... improved end results is the one we're all familiar with, but things like ease-of-use / quality-of-life, sentimental / emotional attachment, status, learning and discovery, novelty, and other factors are in the mix too, and each will be worth something to each of us, even though we may differ significantly about how much. Of course, when someone is spending thousands of dollars on a new system for the sake of nicer colour science rather than learning to colour grade beyond just using LUTs, that's not the best use of resources!
  3. Simple - people would rather pay for better-looking videos than put in the work to develop their skills.
  4. While we're on the subject of innovation in camera design and vlogging in particular... The new DJI Action 3 looks pretty innovative, and despite being an "Action" camera it's better for vlogging than most vlog cameras. For vlogging, it has:
     - a very wide lens (they claim it's equivalent to an 11.24mm lens on FF!!!)
     - a square sensor that can record horizontally, vertically, or use the whole sensor and let you stabilise/crop in post
     - crazy-good stabilisation, including things like horizon lock
     - a GoPro-sized charging case with a pop-up selfie screen that doubles as a remote monitor
     - magnetic / sticky attachments and lots of mounting options
     - H264 @ 80Mbps up to 2.7K
     - tiny size and practically no weight: the camera is 35.5g / 1.25oz and the action camera case thingy is 96g / 3.4oz
     Crucially, it no longer has restricted recording times - previous models had recording limits that got progressively shorter as you applied heavier stabilisation (I think it was a thermal limit). Product Showcase / AD: Footage comparison to the Go 2 and GoPro 11:
  5. Ideas for your new channel:
     - "Ask the beard" - a channel with you reacting to videos and viewer questions from a NZ perspective
     - "Hold my beard" - a channel where you do challenges and review the many extreme sports in NZ
     - "For the beard" - a channel focusing on progressive fashion, culminating in launching beard-care products for both men and women
     The possibilities are endless, just like the talent available!
  6. What is the rest of your image pipeline? If you're uploading to virtually any streaming service then their compression will do you a huge favour by removing almost all the noise in the image. I did a test of creating a video with increasing levels of noise and then uploaded it to YT, and was absolutely stunned at the amount of noise I had to have on the timeline before any noise was visible in the output at all.
  7. This is the exact reason they made the GH5S. It was a non-IBIS companion to the GH5, designed to be used on gimbals and other mounts.
  8. Sounds like you've got lots of options on the table, but I'd just caution you against getting wrapped up in specs and forgetting that durability and reliability are your primary concerns, as you're operating in harsh conditions for long periods and are a long way from support if something breaks. I don't know how this translates into the various models being discussed, but I'd imagine it would point towards the professional-bodied cameras like the FS and FX lines. Not only will they be built with the right build quality and cooling and temp and humidity and dust ratings, but having more durable hardware with things like integrated XLR connectors and such will also make the camera more robust. Having cheap, light, small cameras with great image quality is great for careful consumers like me, but that's definitely not what you want in the jungle.
  9. This is quite a good outcome. On Parts Unknown with Anthony Bourdain they filmed a lot in slow-motion and used it in the edit both at real-time and slowed-down (conformed), which would require recording audio with the 120p footage if relying on in-camera audio. I suspect it's quite useful if you're making stylised documentary / verite style content.
  10. That makes sense. It is worth noting that this has a limitation: a clip can only be in one group. So you can use groups to manage camera grades between cameras, or to manage different grades between scenes, but you can't do both, because a clip can only be in one group at once. If you have two cameras and an emotional arc then this won't work. I think this is why BM added the Shared Node and Adjustment Layer functionality - it adds more flexibility.

I have a sneaking suspicion that BM might be playing a bit of ecosystem protectionism here as well. Most people round-tripping to Resolve from another editor are likely using a professional colourist, who wouldn't be used to adjustment layers, so this limitation wouldn't annoy them. The people who are used to adjustment layers are likely the folks doing all the post work themselves and coming from a different NLE. So it's kind of a ransom situation where BM is effectively saying "if you want the greater colour grading potential of Resolve and you want to keep working the way you're used to, then you should edit in Resolve too....... and ditch your other NLE".

It's a subtle thing, but it can be important if it's a limitation on your workflow. My workflow has definitely been hit hard a few times by "little" things like this before, even by something as small as whether a function can be assigned to a hotkey or not.
  11.
    On my last trip I ended up using the GX85 and 14mm F2.5 as the main setup, and the saved profiles defaulted to the lens being wide open at F2.5. I did a single AF (using the back-button focus method recommended by Mercer) and then hit record, and normally didn't AF again during the shot, although I typically only do short takes as I'm shooting for the edit. I also used the 2x digital zoom, which emulated a 28mm F5 lens on MFT. According to the DOF calculator, this lens wide open would have infinity in focus when focusing anywhere further than 17ft / 5m away. I have since done some testing and the lens is sharper than 1080p (which is what I edit in) when wide open, so there's not much advantage to stopping down other than for extra DOF if I want it.

My videos are about people interacting with the environment, so DOFs that obscure the background aren't really necessary, and beyond lending a bit of visual interest and depth to the shot, aren't desirable. The DOF with the 14mm wide open was almost too shallow for closer shots, as I'd want to show more of the environment. This was likely wide open, simply for the low-light advantage, but I wouldn't want to blur the environment any more than this:

..but it did provide a nice background defocus for detail shots - this one is with the 2x engaged:

I've since decided that it's worth it for me to sacrifice a bit of speed for greater focal range, and will get the 12-32mm F3.5-5.6 pancake zoom when I travel next. I could just use my 12-35mm F2.8 lens, but it is significantly larger and I think the extra low-light advantage (F2.8 vs F3.5) on the wide end isn't that much. I don't typically use the longer end in low-light situations, and definitely don't need the extra speed at the long end for DOF - at 3m/9'10" the 12mm F2.8 has a DOF of 23m/75', and at the same distance the slower zoom at 32mm F5.6 has a much shallower DOF of 1.6m/5'3". I can always carry that lens in my bag in case I need it, along with my 50/1.2 prime.
Obviously having larger apertures for narrative work would be desirable because you want to have consistent T-stops across the range and also to have the potential to open up a lot more if it is artistically appropriate to the emotional arc of the story. Every good movie has a moment where the main character feels overwhelmed and disconnected, so needs a long shot of them with everything else blurred, right? 🙂
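For anyone who wants to check the numbers above without a DOF calculator, the standard hyperfocal formulas reproduce them closely. A minimal Python sketch - the 0.015mm circle of confusion for MFT is my assumption, and calculators differ slightly on that, hence small mismatches with the quoted figures:

```python
def dof(f_mm, N, s_mm, coc_mm=0.015):
    """Approximate hyperfocal distance and near/far focus limits, all in mm.
    f_mm: focal length, N: f-number, s_mm: subject distance,
    coc_mm: circle of confusion (~0.015mm is a common MFT assumption)."""
    H = f_mm ** 2 / (N * coc_mm) + f_mm            # hyperfocal distance
    near = H * s_mm / (H + (s_mm - f_mm))
    far = float("inf") if s_mm - f_mm >= H else H * s_mm / (H - (s_mm - f_mm))
    return H, near, far

# 14mm F2.5 on MFT: hyperfocal ~5.2m, matching the "17ft / 5m" figure
H, near, far = dof(14, 2.5, 5000)

# At 3m: the slow zoom's long end (32mm F5.6) still has far shallower
# total DOF (~1.5m) than the fast zoom's wide end (12mm F2.8, ~20m+)
_, n32, f32 = dof(32, 5.6, 3000)
_, n12, f12 = dof(12, 2.8, 3000)
```

Plugging in the 14mm F2.5 gives a hyperfocal distance of about 5.24m, which is where the "infinity in focus beyond 17ft / 5m" figure comes from.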
  12. Are the UMP mounts locking mounts?
  13. Useful, but I suspect it doesn't solve the issue for most people who want to render out the clips separately.
  14.
    Ah yes, sorry, I'd completely forgotten the pulsing. I suspect there's a tradeoff in the coding somewhere. My understanding was that the only advantage of PDAF vs CDAF is that PDAF knows which direction focus is in, whereas CDAF just knows what is in focus but not which direction to go to get better focus. This is why DfD pulses - it's deliberately going too far one way and then too far the other just to keep track of where the focus point is. Maybe Olympus has tuned their algorithm to be more chill about it, which would result in less pulsing but potentially more time slightly out of focus when the subject moves.

Of course, the reason that eye AF is now a thing is that people want a DOF that isn't deep enough to get the whole face in focus, so they need an AF mechanism that won't focus on someone's nose or ear and leave the eyes out of focus. This makes the job of AF much more difficult, and any errors that much more obvious. I wonder how much of Sony's focus breathing compensation is motivated by the crazy amount of background blur people want nowadays. Even if you have perfect focus, when it tracks the small movements of an interview subject moving their head around, the changes in the size of the bokeh are so large and so distracting that focus breathing becomes a subtle (or not so subtle) pulsing of the size of the whole image.

I'm glad I don't have to deal with it. Even though I'm moving to AF, I'm using AF-S only and having DOFs that are much more practical (and TBH, cinematic). Maybe it's time to reverse the 'common wisdom' online and start saying that if you want things to be cinematic then you need to close down the aperture, and that the talking-head-at-F1.4 is a sign of something being video rather than cinema.

Wow.. so we're back to my iPhone 6 Plus, where it had PDAF but didn't use it for certain things! I didn't expect that from Panasonic in 2023.
I've seen a number of those "the AF is great, but you have to know how it works and choose the mode and perform integrals in your head to get the most out of it" videos, and I'm glad that I'm not using it TBH.
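To illustrate the CDAF point above: contrast detection only tells the camera how sharp the current lens position is, not which way to move, so it has to probe in both directions to find an improvement - and that probing is the visible pulsing. A toy hill-climb sketch (the contrast function, subject position, and step size are all made up for illustration):

```python
def contrast(pos, subject=50):
    # Toy sharpness metric: highest when the lens position matches the subject
    return -abs(pos - subject)

def cdaf_step(pos, step=2):
    """One CDAF iteration: probe both directions, because contrast alone
    gives no direction information - this probing is the 'pulsing'."""
    if contrast(pos + step) > contrast(pos):
        return pos + step
    if contrast(pos - step) > contrast(pos):
        return pos - step
    return pos  # neither probe improved: we're at the contrast peak

pos = 30
for _ in range(20):
    pos = cdaf_step(pos)
# pos converges on the peak at 50; a PDAF system would instead get the
# direction (and a distance estimate) from a single phase measurement
```

DfD sits somewhere in between: it still drives the lens like CDAF, but uses the lens's known defocus characteristics to estimate distance, which is (as I understand it) why it pulses rather than hunts.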
  15. Their only job is to record and legally identify criminals. That doesn't require 15 stops of 10-bit 4:4:4 at 800Mbps!!
  16. ..and you can do stuff like put the adjustment layer over the clips, but under the titles. Lots of my early edits had halation applied to the titles, and, well, it's A look, just not the one I wanted! The other alternative is to create a default node tree that has a bunch of Shared Nodes at the end, and then copy that across all shots prior to grading, then when you change them they get applied to all the shots, but are still at clip level in the Resolve rendering pipeline. There's probably a handy way to add those after you've graded the images too, but I'm not sure how.
  17. Interesting comment about colour, thanks for sharing. My impression of Vivid on the GX85 was that it boosted the saturation. In theory, this is a superior approach to recording a flat profile, as long as it doesn't clip any colours in the scene. If colours are boosted in-camera, the boost is applied to the RAW data before it is subject to quantisation and compression errors. Then in post, when you presumably reduce the saturation, any quantisation errors and compression artefacts such as blocking will be reduced along with it. This is also why B&W footage from older cameras looks so much nicer than their full-colour images.

I've also noticed that downsampling in post, or adding a very slight blur, can obscure pixel-level artefacts and quantisation issues. I learned this while grading XC10 footage, which is the worst of all worlds - 8-bit 4K C-Log.

I've also had success with a CST from 709/2.4 to a wider space (like DWG), grading there, then a CST/LUT to get back to 709/2.4 output. This has the distinct advantage of applying adjustments in a (roughly) proportional way, so if you change exposure or WB it does so in proportion across the luma range - much closer to doing it in-camera than operating directly on 709 with all its built-in curves.
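The quantisation argument can be sanity-checked numerically: a boost applied before 8-bit quantisation and undone in post shrinks the effective error, whereas the same boost applied after quantisation magnifies it. A rough sketch - the 1.5x "boost" is a stand-in for whatever Vivid actually does, which I don't know:

```python
import random

def quantise8(x):
    """Round a 0..1 value to the nearest 8-bit code value."""
    return round(x * 255) / 255

random.seed(1)
boost = 1.5  # hypothetical stand-in for an in-camera saturation boost
# keep samples small enough that boosting never clips above 1.0
samples = [random.uniform(0, 1 / boost) for _ in range(10000)]

# Boost in camera (before quantisation), then reduce in post:
err_in_camera = [abs(quantise8(s * boost) / boost - s) for s in samples]
# Record flat (quantise first), then boost in post:
err_in_post = [abs(quantise8(s) * boost - s * boost) for s in samples]
# the average in-camera-boost error is smaller by roughly boost**2
```

The same reasoning is why the quantisation step size effectively shrinks when you desaturate/darken in post and grows when you push values up.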
  18.
    At this point, even if the CDAF made coffee and told you next week's lotto numbers, no-one would believe it was capable of anything. I've maintained that DfD and AI-based processing would get good enough to catch up, but haters gonna hate, and "PD=GOOD CD=BAD" was always a simpler and therefore more desirable view to hold. Plus, anyone over 20 has had a bad experience with a super-cheap CDAF system. Maybe the prejudice will finish with Gen Z? It's been years since I've seen a camera's AF fail to focus; these days focus "failures" are where the camera focuses quickly and accurately on the wrong thing, and PDAF does this just as much, because CD vs PD AF has literally nothing to do with that part of the AF functionality.
  19. Quality is crap, you pay to buy it, and you pay ongoing fees to use it. It's a reasonable solution for security / security theatre, but if you just want a camera you can watch via wifi then there are cheaper / better ways. Sample footage: and in case you think that's YT compression, here's one uploaded at 4K60:
  20. Not a solution, but maybe a workaround... Maybe just prior to the export you could create a powergrade with the nodes in the adjustment layer, then append those nodes to all the clips and then render that out. I'm not sure if you can grab a still from an adjustment layer, but I know you can't grab one from the Timeline.
  21. Yeah, they're pretty remarkable. Zooming out is an easier task as often the edges of the image don't include people, are often out-of-focus, and are usually of little consequence to the viewer of the image. My take on AI image generation is that it will replace the generic soul-less images that are of people that no-one knows and no-one cares about. It'll be sort-of generated stock images. I think it will be a very very long time before anyone wants AI generated photos or video of people they know rather than the real thing, except in special cases, just because they're not real. AI has been good enough to write fiction for literally decades, but hasn't overtaken people yet. Reviewers thought this 1993 romance novel was better than the majority of human-written books in the genre: https://en.wikipedia.org/wiki/Just_This_Once
  22. No idea about that one. I have a Ring doorbell, and you'd be better off setting up the telescope system in the bunker from Lost rather than using one of those.
  23. Or upgrade it to an eND like Sony have. Do we know if Sony have a patent on those things, or are they an available option for other manufacturers? I have a vague recollection of someone (maybe @BTM_Pix?) talking about RED doing some awesome experiments with an eND. One example was to use it as a global shutter but to fade it up at the start of the exposure and down at the end so that motion trails had softer edges. I think there were other things they did with it too, but that was the one that stuck in my mind.
  24. Ah! When I read "S16 sensor size pocket love" and saw the lovely organic colour grade, I took the word "pocket" to mean the OG BMPCC... You did very well!

The more I learn about colour grading (and other post-production image manipulations), the more I realise that the potential of these cameras is absolutely huge, but sadly under-utilised, to the point where many cameras have never been seen even remotely close to their potential. The typical solo-operator's level of knowledge of cameras/cinematography vs colour grading is equivalent to a high-school teacher vs a hunter-gatherer. I am working on the latter for myself, trying to even up this balance as much as I can.

As you're aware, I developed a powergrade to match the iPhone with the GX85, and it works as a template I can just drop onto each shot. Unfortunately I am now changing workflows in Resolve and the new one breaks one of the nodes, so it looks like I will have to manually re-construct that node, which I have been putting off.

I've also been reviewing the excellent YT channel of Cullen Kelly, a rare example of a professional colourist who also puts knowledge onto YouTube, and have been adapting my thinking with some of the ideas he's shared. One area of particular interest is his thoughts on film emulation. To (over)simplify his philosophy: film creates a desirable look and character that we may want to emulate, but it was also subject to a great number of limitations that were not desirable at the time and are likely not desirable now (unless you are attempting a historically accurate look), so we should study film in order to understand and emulate the things that are desirable while moving past the limitations that came with real film. I recommend this Q&A (and his whole channel) if this is of interest: As I gradually understand and adopt various things from his content I anticipate I will further develop my own powergrades.
I'm curious to see how you're grading your LX15 footage, if you're willing to share.

Wow, that is small! I'd love something that small.. it's a pity that the stabilisation doesn't work in 4K or high frame rates. Having a 36-108mm equivalent lens is a great focal range, similar to many of the all-in-one S16 zooms back in the day. I love the combination of the GX85 + 14mm F2.5 + 4K mode crop, as it makes a FOV equivalent to a 31mm lens. I used to be quite "over" the 28mm focal length, preferring 35mm, but I must admit I found the 31mm FOV very useful when out and about, and having the extra reach is perfect for general-purpose outdoor work. I want to upgrade to the 12-32mm kit lens, which gives the GX85 a 26-70mm FOV in 4K mode (and 52-140mm with the 2x digital zoom for extra reach).
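The equivalence arithmetic behind those FOV numbers is easy to reproduce: MFT's 2x crop, times the GX85's additional 4K-mode crop (roughly 1.1x - my estimate, inferred from the figures quoted above), times any digital zoom. A small sketch:

```python
MFT_CROP = 2.0       # MFT to full-frame crop factor
GX85_4K_CROP = 1.1   # approximate extra 4K-mode sensor crop (my assumption)

def ff_equiv(focal_mm, digital_zoom=1.0):
    """Full-frame equivalent focal length for the GX85 in 4K mode."""
    return focal_mm * MFT_CROP * GX85_4K_CROP * digital_zoom

ff_equiv(14)                # ~31mm, the figure quoted above
ff_equiv(12), ff_equiv(32)  # ~26mm to ~70mm for the 12-32mm kit zoom
ff_equiv(32, 2.0)           # ~141mm with the 2x digital zoom engaged
```

The same multiplication explains the earlier "2x digital zoom emulates a 28mm F5" comment: the F-stop doesn't change physically, but the equivalent DOF scales with the total crop.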