Everything posted by kye
-
Please recomend me a câmera for cinema verite easyness
kye replied to tomastancredi's topic in Cameras
Sounds like you've got lots of options on the table, but I'd caution you against getting wrapped up in specs and forgetting that durability and reliability are your primary concerns: you're operating in harsh conditions for long periods, a long way from support if something breaks. I don't know how this translates into the various models being discussed, but I'd imagine it points towards the professional-bodied cameras like the FS and FX lines. Not only are they built with the right build quality, cooling, and temperature/humidity/dust ratings, but more durable hardware with things like integrated XLR connectors also makes the camera more robust. Cheap, light, small cameras with great image quality are great for careful users like me, but that's definitely not what you want in the jungle. -
This is quite a good outcome. On Parts Unknown with Anthony Bourdain they shot a lot in slow-motion and used it in the edit both at real-time speed and slowed down (conformed), which would require audio recorded in 120p if relying on in-camera audio. I suspect it's quite useful if you're making stylised documentary / verite style content.
-
Davinci resolve render individual clips with adjustment layer
kye replied to zerocool22's topic in Cameras
That makes sense. It is worth noting that this has a limitation: a clip can only be in one group at a time. So you can use groups to manage camera grades between cameras, or to manage different grades between scenes, but not both, because a clip can only belong to one group at once. If you have two cameras and an emotional arc, this won't work. I think this is why BM added the Shared Node and Adjustment Layer functionality - it adds more flexibility.

I have a sneaking suspicion that BM might be playing a bit of ecosystem protectionism here as well. Most people round-tripping to Resolve from another editor are likely using a professional colourist, who wouldn't be used to working with adjustment layers, so this limitation wouldn't annoy them. The people who are used to adjustment layers are likely the folks doing all the post work themselves and coming from a different NLE. So it's kind of a ransom situation where BM is effectively saying "if you want the greater colour grading potential of Resolve and you want to keep working the way you're used to, then you should edit in Resolve too... and ditch your other NLE".

It's a subtle thing, but it can be important if it's a limitation on your workflow. I know my workflow has been hit hard a few times by "little" things like this before, even by things as small as whether a function can be assigned to a hotkey or not. -
On my last trip I ended up using the GX85 and 14mm F2.5 as the main setup, and the saved profiles defaulted to the lens being wide open at F2.5. I did a single AF (using the back-button focus method recommended by Mercer) and then hit record, and normally didn't AF again during the shot, although I typically only do short takes as I'm shooting for the edit. I also used the 2x digital zoom, which emulated a 28mm F5 lens on MFT. According to the DOF calculator, this lens wide open has infinity in focus when focused anywhere further than 17ft / 5m away. I have since done some testing and the lens is sharper than 1080p (which is what I edit in) when wide open, so there's not much advantage to stopping down other than extra DOF if I want it.

My videos are about people interacting with the environment, so DOFs that obscure the background aren't really necessary, and beyond lending a bit of visual interest and depth to the shot, aren't desirable. The DOF with the 14mm wide open was almost too shallow for closer shots, as I'd want to show more of the environment. This was likely wide open, simply for the low-light advantage, but I wouldn't want to blur the environment any more than this:

..but it did provide a nice background defocus for detail shots - this one is with the 2x engaged:

I've since decided that it's worth sacrificing a bit of speed for greater focal range, and will get the 12-32mm F3.5-5.6 pancake zoom when I travel next. I could just use my 12-35mm F2.8 lens, but it is significantly larger and I think the extra low-light capability (F2.8 vs F3.5) on the wide end isn't worth that much. I don't typically use the longer end in low-light situations, and definitely don't need the extra speed at the long end for DOF - at 3m/9'10" the 12mm F2.8 has a DOF of 23m/75', and at the same distance the slower zoom at 32mm F5.6 has a much shallower DOF of 1.6m/5'3". I can always carry that lens in my bag in case I need it, along with my 50/1.2 prime.
Obviously having larger apertures for narrative work would be desirable because you want to have consistent T-stops across the range and also to have the potential to open up a lot more if it is artistically appropriate to the emotional arc of the story. Every good movie has a moment where the main character feels overwhelmed and disconnected, so needs a long shot of them with everything else blurred, right? 🙂
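DOF figures like the ones above come from the standard thin-lens approximations. Here's a minimal Python sketch (my own helper names, not from any particular calculator; the 0.015mm circle of confusion is a common assumption for MFT, so results land close to, not exactly on, the quoted numbers):

```python
# Thin-lens DOF sketch (hypothetical helpers, not from any calculator).
# Assumes a 0.015mm circle of confusion, a common figure for MFT.

def hyperfocal_mm(focal_mm, f_number, coc_mm=0.015):
    """Approximate hyperfocal distance in mm: f^2 / (N * c)."""
    return focal_mm ** 2 / (f_number * coc_mm)

def dof_m(focal_mm, f_number, subject_m, coc_mm=0.015):
    """Return (near, far) limits of acceptable focus in metres."""
    h = hyperfocal_mm(focal_mm, f_number, coc_mm) / 1000.0  # convert to metres
    s = subject_m
    near = h * s / (h + s)
    far = float("inf") if s >= h else h * s / (h - s)
    return near, far

wide = dof_m(12, 2.8, 3.0)   # 12mm F2.8 at 3m: roughly 1.6m to 24m
tele = dof_m(32, 5.6, 3.0)   # 32mm F5.6 at 3m: roughly 2.4m to 4.0m
deep = dof_m(14, 2.5, 6.0)   # 14mm F2.5 past ~5m: far limit is infinity
```

With these assumptions the 12mm F2.8 gives about 22m of total DOF at 3m versus about 1.6m for the 32mm F5.6, and the 14mm F2.5 hits hyperfocal at around 5.2m, matching the figures above within rounding.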
-
Are the UMP mounts locking mounts?
-
Davinci resolve render individual clips with adjustment layer
kye replied to zerocool22's topic in Cameras
Useful, but I suspect it doesn't solve the issue for most people who want to render out the clips separately. -
Ah yes, sorry, I'd completely forgotten the pulsing. I suspect there's a tradeoff in the code somewhere. My understanding was that the only advantage of PDAF over CDAF is that PDAF knows which direction focus lies in, whereas CDAF only knows how in-focus the image is, not which direction to move to improve it. This is why DfD pulses - it's deliberately going too far one way and then too far the other just to keep track of where the focus point is. Maybe Olympus has tuned their algorithm to be more relaxed about it, which would result in less pulsing but potentially more time spent slightly out of focus when the subject moves.

Of course, the reason eye AF is now a thing is that people want a DOF that isn't deep enough to get the whole face in focus, so they need an AF mechanism that won't lock onto someone's nose or ear and leave the eyes out of focus. This makes the job of AF much more difficult, and any errors that much more obvious. I wonder how much of Sony's focus breathing compensation is driven by the crazy amount of background blur people want nowadays. Even with perfect focus, when AF tracks the small movements of an interview subject moving their head around, the changes in the size of the bokeh are so large and so distracting that focus breathing becomes a subtle (or not so subtle) pulsing of the size of the whole image. I'm glad I don't have to deal with it. Even though I'm moving to AF, I'm using AF-S only and having DOFs that are much more practical (and, TBH, cinematic). Maybe it's time to reverse the 'common wisdom' online and start saying that if you want things to be cinematic then you need to close down the aperture, and that the talking-head-at-F1.4 look is a sign of video rather than cinema.

Wow.. so we're back to my iPhone 6 Plus, which had PDAF but didn't use it for certain things! I didn't expect that from Panasonic in 2023.
I've seen a number of those "the AF is great, but you have to know how it works and choose the mode and perform integrals in your head to get the most out of it" videos, and I'm glad that I'm not using it TBH.
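The hunting behaviour falls straight out of the contrast-only constraint. Here's a toy sketch (purely illustrative, not any manufacturer's actual algorithm): the "camera" can only sample a contrast score at its current lens position, so it has to step blindly, notice the score dropped, and reverse with a smaller step - that overshoot-and-reverse cycle is the visible pulsing:

```python
# Toy contrast-detect AF hill-climb (my illustration, not a real camera's code).
# Without phase information there is no direction signal, so the only way to
# find the peak is to walk past it and come back.

def contrast(lens_pos, true_focus=5.0):
    """Stand-in contrast metric: peaks at 1.0 when lens_pos hits true focus."""
    return 1.0 / (1.0 + abs(lens_pos - true_focus))

def cdaf(start, step=0.5, min_step=0.01, max_iters=50):
    pos, direction = start, +1
    best = contrast(pos)
    path = [pos]                      # every position visited, to show the hunt
    for _ in range(max_iters):
        trial = pos + direction * step
        score = contrast(trial)
        if score > best:              # still climbing: keep going this way
            pos, best = trial, score
        else:                         # overshot the peak: reverse, halve the step
            direction, step = -direction, step / 2
            if step < min_step:
                break
        path.append(pos)
    return pos, path

pos, path = cdaf(start=2.0)           # converges on 5.0, oscillating around it
```

A PDAF (or DfD) system gets a direction estimate from a single sample, so it can skip most of that back-and-forth; the tuning tradeoff mentioned above is essentially how aggressively you probe versus how long you tolerate being slightly off the peak.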
-
Their only job is to record and legally identify criminals. That doesn't require 15 stops of 10-bit 4:4:4 at 800Mbps!!
-
Davinci resolve render individual clips with adjustment layer
kye replied to zerocool22's topic in Cameras
..and you can do stuff like put the adjustment layer over the clips but under the titles. Lots of my early edits had halation applied to the titles, and, well, it's A look, just not the one I wanted!

The other alternative is to create a default node tree with a bunch of Shared Nodes at the end, and copy that across all shots prior to grading; then when you change those nodes the change is applied to all the shots, while still sitting at clip level in the Resolve rendering pipeline. There's probably a handy way to add those after you've graded too, but I'm not sure how. -
Help me on an eBay hunt for 4K under $200 - Is it possible?
kye replied to Andrew Reid's topic in Cameras
Interesting comment about colour, thanks for sharing. My impression of Vivid on the GX85 was that it boosted the saturation. In theory, this is a superior approach to recording a flat profile, as long as it doesn't clip any colours in the scene. If colours are boosted in-camera, they're processed from the RAW data before being subject to quantisation and compression errors. Then in post, when you presumably reduce the saturation, any quantisation and compression errors (blocking etc) are reduced along with it. This is also why B&W footage from older cameras looks so much nicer than their full-colour images.

I've also noticed that downsampling in post, or adding a very slight blur, can obscure pixel-level artefacts and quantisation issues. I learned this while grading XC10 footage, which is the worst of all worlds - 8-bit 4K C-Log. I've also had success with a CST from 709/2.4 to a wider space (like DWG), grading there, then a CST/LUT back to 709/2.4 for output. This has the distinct advantage of applying adjustments in a (roughly) proportional way, so if you change exposure or WB it's applied proportionally across the luma range - much closer to doing it in-camera than operating directly on the 709 image with all its built-in curves. -
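The boost-before-quantisation argument can be sanity-checked with a toy model. A NumPy sketch (my own numbers: a single "chroma" value per pixel standing in for saturation, a 2x boost factor, ideal 8-bit quantisation, and no compression modelled at all):

```python
import numpy as np

# Toy model (illustrative numbers, not measured camera data): compare boosting
# saturation in-camera then reducing in post, vs recording flat and boosting
# in post, for the same final look.

rng = np.random.default_rng(0)
chroma = rng.uniform(0.0, 0.5, 10_000)   # "true" scene chroma, <= 0.5 so 2x can't clip

def quantise_8bit(x):
    """Ideal 8-bit quantiser over a 0-1 range (compression not modelled)."""
    return np.round(np.clip(x, 0.0, 1.0) * 255) / 255

# Route A: boost 2x in camera (before quantisation), reduce 2x in post
vivid = quantise_8bit(chroma * 2.0) * 0.5
# Route B: record flat (0.5x), quantise, then boost 2x in post
flat = quantise_8bit(chroma * 0.5) * 2.0

err_vivid = np.max(np.abs(vivid - chroma))
err_flat = np.max(np.abs(flat - chroma))
# Scaling after quantisation scales the step size too, so the flat route ends
# up with roughly 4x the quantisation error of the vivid route here.
```

It's a crude model (real saturation isn't a per-pixel scalar, and compression changes the picture further), but it shows the mechanism: reducing in post shrinks the quantisation steps, boosting in post enlarges them.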
At this point, even if the CDAF made coffee and told you next week's lotto numbers, no-one would believe it was capable of anything. I've maintained that DfD and AI-based processing would get good enough for CDAF to catch up, but haters gonna hate, and "PD=GOOD CD=BAD" was always a simpler and therefore more desirable view to hold. Plus, anyone over 20 has had a bad experience with a super-cheap CDAF system. Maybe the prejudice will end with Gen Z? It's been years since I've seen a camera's AF fail to focus; these days focus "failures" are the camera focusing quickly and accurately on the wrong thing, and PDAF does this just as much, because CD vs PD has literally nothing to do with that part of the AF functionality.
-
Quality is crap, you pay to buy it, and you pay ongoing fees to use it. It's a reasonable solution for security / security theatre, but if you just want a camera you can watch via wifi then there are cheaper / better ways. Sample footage: and in case you think that's YT compression, here's one uploaded at 4K60:
-
Davinci resolve render individual clips with adjustment layer
kye replied to zerocool22's topic in Cameras
Not a solution, but maybe a workaround... Maybe just prior to the export you could create a powergrade with the nodes in the adjustment layer, then append those nodes to all the clips and then render that out. I'm not sure if you can grab a still from an adjustment layer, but I know you can't grab one from the Timeline. -
Artificial voices generated from text. The future of video narration?
kye replied to Happy Daze's topic in Cameras
Yeah, they're pretty remarkable. Zooming out is an easier task as often the edges of the image don't include people, are often out-of-focus, and are usually of little consequence to the viewer of the image. My take on AI image generation is that it will replace the generic soul-less images that are of people that no-one knows and no-one cares about. It'll be sort-of generated stock images. I think it will be a very very long time before anyone wants AI generated photos or video of people they know rather than the real thing, except in special cases, just because they're not real. AI has been good enough to write fiction for literally decades, but hasn't overtaken people yet. Reviewers thought this 1993 romance novel was better than the majority of human-written books in the genre: https://en.wikipedia.org/wiki/Just_This_Once -
No idea about that one. I have a Ring doorbell, and you'd be better off setting up the telescope system in the bunker from Lost rather than using one of those.
-
Or upgrade it to an eND like Sony have. Do we know if Sony have a patent on those things, or are they an available option for other manufacturers? I have a vague recollection of someone (maybe @BTM_Pix?) talking about RED doing some awesome experiments with an eND. One example was to use it as a global shutter but to fade it up at the start of the exposure and down at the end so that motion trails had softer edges. I think there were other things they did with it too, but that was the one that stuck in my mind.
-
Help me on an eBay hunt for 4K under $200 - Is it possible?
kye replied to Andrew Reid's topic in Cameras
Ah! When I read "S16 sensor size pocket love" and saw the lovely organic colour grade, I took the word "pocket" to mean the OG BMPCC... You did very well!

The more I learn about colour grading (and other post-production image manipulation), the more I realise that the potential of these cameras is absolutely huge, but sadly under-utilised, to the point where many cameras have never been seen even remotely close to their potential. The typical solo-operator's level of knowledge of cameras/cinematography vs colour grading is like comparing a high-school teacher to a hunter-gatherer. I am working on the latter myself, trying to even up the balance as much as I can.

As you're aware, I developed a powergrade to match the iPhone with the GX85, and it works as a template I can just drop onto each shot. Unfortunately I am now changing workflows in Resolve and the new one breaks one of the nodes, so it looks like I'll have to manually reconstruct that node, which I've been putting off.

I've also been reviewing the excellent YT channel of Cullen Kelly, a rare example of a professional colourist who also puts knowledge onto YouTube, and have been adapting my thinking with some of the ideas he's shared. One area of particular interest is his thoughts on film emulation. To (over)simplify his philosophy: film creates a desirable look and character we may want to emulate, but it was also subject to a great number of limitations that weren't desirable at the time and likely aren't desirable now (unless you're after a historically accurate look), so we should study film in order to understand and emulate what's desirable while moving past the limitations that came with the real thing. I recommend this Q&A (and his whole channel) if this is of interest:

As I gradually understand and adopt various things from his content, I anticipate I'll further develop my own powergrades.
I'm curious to see how you're grading your LX15 footage, if you're willing to share.

Wow, that is small! I'd love something that small.. it's a pity that the stabilisation doesn't work in 4K or high frame rates. Having a 36-108mm equivalent lens is a great focal range, and similar to many of the all-in-one S16 zooms back in the day.

I love the combination of the GX85 + 14mm F2.5 + 4K mode crop, as it makes a FOV equivalent to a 31mm lens. I used to be quite "over" the 28mm focal length, preferring 35mm, but I must admit I found the 31mm FOV very useful when out and about, and having the extra reach is perfect for general-purpose outdoor work. I want to upgrade to the 12-32mm kit lens, which gives the GX85 a 26-70mm FOV in 4K mode (and 52-140mm with the 2x digital zoom for extra reach). -
Help me on an eBay hunt for 4K under $200 - Is it possible?
kye replied to Andrew Reid's topic in Cameras
Nice vibe! Is it 30p slowed to 24p? Or BMMCC? -
RAW is uncompressed. By definition, anything that isn't RAW is compressed with lossy compression. By definition, anything compressed with lossy compression is lower quality than RAW. Therefore, RAW is superior in almost all aspects relating to image quality. Your comment: indicates that it's only the ability to colour grade it that is improved, not anything else. In the context of pulling stills out of video files, all aspects of image quality are important and relevant, not just the colour grading ones. Maybe you meant something else, but that's not what you said.
-
A YouTuber I follow appeared on a TV show and vlogged a bit of BTS content, and also got permission to share a bit of the finished content, so a rare YouTuber vs professional crew comparison moment occurred. I'll preface this by saying that the YouTuber makes great content about Japan and is a talented film-maker. She does have, however, the same approach to YouTube camera equipment as most, based around the old combo of Sony A7S3, DJI drone, GoPro, and the usual approach to colour grading etc: and the show was shot on Sony cinema cameras, so same / similar image pipeline as the below, from the TV show: I know which one looks nicer to my eye... The other difference I see is that images for cinema get graded a lot darker than social media. Cinema treats 50% IRE as where the highlights start, and social media colour grading thinks 50% IRE is where skin tones should be!
-
On the contrary - the vast majority of content on this forum is devoted to the idea that any non-RAW codec is inferior to a RAW one; otherwise, when I shoot with any of my non-RAW cameras there wouldn't be any loss of image quality compared to RAW. RAW is special for still images for lots of reasons:
- The frame will have no compression artefacts
- The frame will have no processing artefacts (sharpening, temporal or spatial NR, etc)
- As well as being easier to colour

This matters a lot more for stills, as they're potentially printed, put on walls, and looked at regularly for years or even decades. Far less scrutiny is placed on an individual frame from a movie or TV show!

Do people even grade anymore? The modern Sony sensors have so much DR that most of YT looks like C-Log or V-Log from the days when cameras had 11-12 stops. The turning point for me was realising that things shot on film were super contrasty, with clipped highlights and crushed blacks, so it isn't mandatory to keep all the DR of the image if it doesn't serve your purpose. Then when I started grading for that level of contrast, I worked out why people don't do it - it's hard to get a great-looking image with that much contrast (and therefore saturation) without it looking cheap and digital. Now I don't even bother capturing that much DR unless it's for a specific scene, like a sunset, which is what allowed me to go from a GH5 to the GX85 and shoot lower-DR 8-bit 709 instead of 10-bit HLG.
-
Did you watch it? Is it a good interview? There's heaps of content on Joker around online - I think they did a bunch of it as PR. Here's an interview with the Joker colourist Jill Bogdanowicz by Cullen Kelly, who is a professional colourist: I highly recommend Cullen's YT channel BTW. He's a real working pro, and the more of his videos I watch, the more embarrassed I get for the other YouTubers who pretend to be colourists.
-
The BM Camera forums are constantly talking about new cameras. There was a thread that had hundreds of posts and ran from March 2022 to April 2023, and the new mount thread simply replaced that thread as the current water cooler. The entire premise of the old thread was that BM hadn't announced a new camera already. https://forum.blackmagicdesign.com/viewtopic.php?f=2&t=157059

It's pretty simple:
- If BM teases an announcement, people go wild speculating whether it's a new camera
- If BM teases a new camera, people go wild speculating what it will be
- If BM announces a new camera, people go wild speculating what features it will have
- If BM doesn't announce anything, people go wild speculating about what is going on
-
"The best camera is the one you have with you" is often translated as "the best camera is the one that doesn't get confiscated by security for looking too professional" 🙂 As I've mentioned previously, on my trip to South Korea earlier this year I favoured the GX85 + 14mm F2.5 pancake lens combo over the GH5, simply due to the size while filming in public and private spaces (like museums, etc). I would have liked more flexibility and since then have worked out that the best two lenses for my purposes are the 12-32mm pancake and the 45-150mm tele - the two kit zooms that were originally paired with the GX85. In terms of the quality of the files, it's adequate for my purposes, but I can understand if it doesn't meet the needs of others. Certainly I would appreciate a few added features if I was given a magic wand, but I'm ok with it how it is, and I certainly wouldn't make it any larger to accommodate any of the additional things I'd ask for.
-
The industry is both cautiously innovative and hugely traditional, and very slow to change. What I mean is that if there's a new type of light source (e.g. LED), the industry will take its time to evaluate the technology, but once it's understood in terms of strengths, weaknesses, and impacts down the image pipeline, it will be adopted. Moving from film to digital acquisition was another example of this cautious innovation.

However, any structural change to how things are done can take literally decades to be adopted. A digital intermediate step in the image pipeline was common practice even before there was digital acquisition and digital distribution, and everyone knew that colour grading and compositing were a huge factor in the image, but it's still not universal practice for the colourist to be involved up front in choosing the camera package and the LUT(s) used to view images on set, despite the final image being completely reliant on the colourist being able to deliver the desired look. This inertia exists because it involves a change in how the team is structured - you have to involve someone from post-production in pre-production!!

The Ronin 4D is a less dramatic example of such a structural change. If you use the 4D to shoot, then who operates the camera - is it the steadicam operator? If that's the only camera, is the steadicam operator the DoP? The number of films shot completely on steadicam is pretty low, so this isn't normal practice. Are there union implications? It's a new lens system - is everyone familiar with the look? Do they like it? What availability is there in rental houses? What test footage is available (important for those who don't have the luxury of camera tests)? It's worth mentioning that DJI don't currently offer a way to shoot with the 4D that doesn't use the gimbal.
Not many people are willing to shoot their whole film on a gimbal, and if you're already using a RED or an Alexa there's not much advantage to the 4D over just putting a Komodo-X or Alexa Mini on a gimbal. It's easy for one-person operations to see the whole process from start to finish as completely up for grabs, but the industry has been designed as a production line where each step is done in a specific way, by a specific role, requiring a specific skillset. No-one wants to go first, no-one wants to take a risk, and people have enough change to deal with within the current structure, so no-one wants to look at things that throw the current approach up into the air.