Everything posted by kye
-
Living on the west coast and having the beach as a convenient test location, I shoot a lot of sunsets, and my experience with the BM OG Pocket and BMMCC is that there's enough DR to just clip a round section near the sun and still get shaded skin tones that are usable (to my tastes anyway). I have no idea if they were exposed "properly" at the right levels, or if they were one (or more) stops under, but the IQ seemed perfectly acceptable. I know the BMMCC is still relatively good in terms of DR compared to all but the current generation of cameras, but with those the exposure challenge is even easier to deal with. I notice really huge differences in practical DR when comparing the GH5 709 modes (which don't have the full DR of the GH5) with the HLG mode (which does - all 9.7 / 10.8 stops of it) and the OG BM cameras with their 11.2 / 12.5 stops. Another stop or two above those would obviously be beneficial, but I think the point at which you're choosing what to include in the shot (ie, what is usable and what isn't) has probably passed for most people, even with the sun in frame.
-
@IronFilm - great pics but man... what a ride that was!! "we were out in the forest just minding our own business and playing with cameras" "the bugs were getting annoying so we used some spray" "then out of nowhere the freaking empire showed up!" "It was at least 6 to 1, but those guys are SCARY... he said our bug spray needed a permit and that if we didn't have one, he'd KILL us" "we didn't know what to do.. we didn't have a permit, and the guy was completely nuts, so we mostly just avoided eye contact and tried to let the situation calm down" "he eventually left, probably to go kill more innocent people on a picnic or something, just gave us the evil eye before he left. psychopath!"
-
Absolutely - this is a dominant factor in most online discussions. One of the things that I think contributes to this is that there is no right or wrong in colour - colourists are fond of the phrase "if it looks good, it is good". However, as you well know, the preferred workflow for anyone working with a professional colourist is for the director/DP to light and expose according to the director's vision, then the colourist can pull everything into a timeline, apply a global look, and tweak from there - but mostly it's about respecting the choices made on-set. When it comes to prosumers, there is no pro colourist, so there doesn't need to be a convention or agreed relationship established, and people can do whatever they want, and do, and "get away with it" because "if it looks good it is good". Real users disagree... I think you've gotten yourself turned around. RAW works with ETTR, and the cameras where "there are too many cameras out now that you just pick up and shoot and bingo, 90% of what you wanted - one and done as they say" - THOSE are the ones that you should think of as 0-100. When I shoot with the BMMCC (either in RAW or Prores - they feel the same in post) you ETTR and that's the best quality. I'm talking about shooting run-n-gun in very high DR situations here too - sunsets and people standing in front of them, etc. It's when you shoot LOG that you want to avoid the areas close to 0 and to 100. I tried ETTR on the GH5 in HLG and WOW - if you want plastic skintones then they're there and they're plastic... The reason is that LOG profiles designed by manufacturers take the bit-depth of the file and allocate more of it to the middle region of the luma scale, and really skimp on the highlights and shadows. I learned that the hard way by putting skintones in the highlights and getting the plastic look.
You can think of LOG formats as compressing the highlights and shadows - that's why they require very careful exposure for your skintones, and why you would want to adjust your ISO to a value where the skintones are at the right level (in the sweet spot in the middle where the good IQ is) and the DR of the camera is distributed so you get the highlights or shadows that you're interested in for that scene. RAW is linear, so the brightest stop of light gets literally half of the luma values, the second brightest stop gets a quarter, the next an eighth, and so on - the shadows get almost nothing. In that sense, the best image quality is in the highlights, and that's why ETTR became a thing: it makes sure that everything in your scene is exposed as bright as it can be (without clipping whatever it is that you want to keep in the scene) so everything gets the best quality.
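The halving per stop is easy to sanity-check. Here's a minimal sketch, assuming a hypothetical 12-bit linear file with 12 stops of DR - the numbers are illustrative, not taken from any specific camera:

```python
# Sketch of linear (RAW) code-value allocation per stop.
# Assumes a hypothetical 12-bit linear file and 12 stops of DR.

def linear_values_per_stop(bit_depth, stops):
    """Count the code values that land inside each stop, brightest first."""
    upper = 2 ** bit_depth
    counts = []
    for _ in range(stops):
        lower = upper / 2   # stepping down one stop halves the signal
        counts.append(int(upper - lower))
        upper = lower
    return counts

counts = linear_values_per_stop(12, 12)
print(counts[0])    # brightest stop: 2048 values (half of all 4096)
print(counts[-1])   # darkest stop: 1 value
```

That lopsided distribution is the whole argument for ETTR in RAW: the top stop holds half of all the code values, and the bottom stop holds almost none.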
-
Yeah, but by the time that a $100 SSD can record 4 hours of 8K60 RAW, people will be saying "24K is the minimum required to shoot videos of my cats". It sounds ridiculous now, but people were making jokes about 4K being more than anyone would ever need. I don't use an external monitor on the GH5, no. I have one for my BMMCC and do find it to be cumbersome, and before I bought any FP setup I'd definitely be taking that setup out and shooting a dozen or so test videos just to really confirm that such a rig would be suitable. For the work I do I really find that a tilt/flip screen is invaluable, especially as I place particular emphasis on shooting from interesting angles to make things more visually interesting, so I'm often shooting from above or below. Interestingly enough, if you shoot a wider follow-shot looking down from as high up as you can reach with your arms and stabilise it heavily in post, it actually looks like a close drone shot, so that's a fun thing to include. In terms of the FP, I am watching the discussion about exposing on it closely, but I would probably just set it to auto-ISO and let it expose for me, while having everything else in manual, and adding a fixed ND while outside during the day. I only use a 1080p workflow so the downscaling already does a lot of noise suppression, so the acceptable ISO range is greatly expanded for my purposes. I used to use a workflow where I had to render proxies before editing and it was a complete PITA, so I went to the 1080p ALL-I mode on the GH5 to essentially do that for me in-camera. I wouldn't be delighted if I had to go back to transcoding footage again. Lens choices would probably be an unstabilised 16mm and a 24-105/4 OIS. OIS would be a must for me as I often shoot in less than stable conditions (hand-held while cold, low on blood sugar, tired with shaky muscles, etc).
Overall it wouldn't be a cheap setup, so it would have to really excel in other aspects to be a better option than the GH6. We'll see. I'm deeply aware of this "big camera = pro" dynamic because it works against me. I go places where pros aren't allowed and do things pros aren't allowed to do, so having a big camera is a problem for me, but the industry seems to think that if you care about image quality then the camera size is irrelevant. It's like making all cars that can do over 50km/h (30mph) the size of a semi-trailer.... "hi, I'd like a small car"............."ok, here's a sedan - I like driving slow too!" "um, no, I want to be able to drive on the freeway" .............. "oh, I thought you said you wanted a small car.. here's our big rig - it can haul 20 tonnes!" "um, no, I don't need to haul anything more than some passengers around" ................ "I thought you said you wanted to drive on the freeway? I'm confused."
-
Pretty sure that the Lift/Gamma/Gain controls are designed for use on 709 footage, and the Contrast/Pivot/Offset are designed for use on Log footage. This might (I'm guessing here) have implications for how the out-of-bounds values are handled (eg, the curves might do predictable things and the LGG might not). To make a more general comment in the context of the bewildering complexity of the various colour spaces and gammas that are being discussed in this thread, Resolve is now getting to be a very sophisticated engine when it is put into a "colour managed" mode (RCM or ACES) and in the most recent versions some of the controls are now "Colour Space Aware" so will act differently depending on what colour space you have told Resolve that the project is in. To expand on this somewhat, you might have footage that's shot in one LOG format, and if you tell Resolve to manage the colour space in the same log format that matches the footage, and you adjust the Offset wheel +1 then you will get the same results as if you shot the footage exposing one stop brighter. However, if you do the same but don't match the log format of the project to the footage the same +1 adjustment will do strange things because Resolve is doing complex things underneath the surface to tailor its behaviour to the specific colour space you've told it to work in. I've confirmed this to be the case with the HLG files from my GH5, which (almost!) line up perfectly with rec2100. In previous versions of Resolve without colour management (I never tried ACES) I shot two clips one stop different and tried to duplicate one shot by modifying the other shot, but I failed to find a combination of colour space and tool that would do the correction. Now in Resolve 17 in Resolve Colour Managed mode it's dead easy to do it and the +1 control on the Offset wheel does an almost perfect job. (On footage that is directly aligned to a colour space in Resolve, the transformation is perfect - HLG on the GH5 isn't). 
I suspect that this means that some tools effectively have a CST prior to their adjustment and then one after it to put the footage back into the mode that your project is in. This isn't the only place that Resolve does "hidden" CSTs - you can program individual nodes to do that too, I think. I had a long and rather frustrating conversation with a pro colourist when I asked about configuring projects to use RCM, and they essentially advised me to do everything manually, despite using the RCM modes (or CSTs) being the proper way to do things. I think the advice was well intended and was simply coming from a place of concern for the depth of complexity actually involved in the tool, and their (probably frequent) experience of non-post-pros going down the rabbit hole and essentially getting lost, so putting up a fence to protect people from the journey makes sense considering how many of them never make it to the other side. (I was also trying to do something a bit horrible that complicated the workflow beyond how that mode is designed to be used, so my example was even worse than the standard situation.) To make an even broader comment about this whole thread, I am reminded of a long conversation / debate about whether there was any magic in the Alexa ARRIRAW files, and it was pretty clearly shown from some under/over tests that the RAW from the Alexa was a completely neutral linear capture - just like how sensors are designed. This should be true of the RAW output from any sensor because it's literally how sensors work. The challenge is how to process it in a way that gives you the results you want. It's true there are potentially slight differences between the frequency response of the various R G B filters on the sensor, but they're likely to be relatively similar and can easily be adjusted with an RGB matrix if you're really keen on matching cameras (that's the basis of the BM RAW conversions from Juan Melara IIRC).
Otherwise, you can simply take the RAW Linear files from any RAW camera and apply whatever conversions you like - if you like the ARRI colour science then apply their LUTs or transforms, etc. It's also worth stating that many heavy colour grades will completely obliterate any sense of colour accuracy so I would imagine the pursuit of it is really only relevant in situations where a natural or hyper-natural (eg, commercial look where everything has to be cleaner than real life) look is required.
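The Offset behaviour described above also makes sense mathematically: in a log encoding, multiplying the linear light (a one-stop exposure change is a doubling) becomes adding a constant. A minimal sketch with a pure log2 curve - real camera log curves have toes, shoulders and scaling, so this is illustrative only:

```python
import math

# Simplified demo: in a log encoding, a one-stop exposure change
# (doubling the linear light) becomes a constant additive offset.
# Uses a pure log2 curve, not any real camera's transfer function.

def to_log(linear):
    return math.log2(linear)

scene = [0.18, 0.5, 1.0, 4.0]        # arbitrary linear pixel values
brighter = [2 * v for v in scene]    # same scene, exposed +1 stop

shifts = [to_log(b) - to_log(a) for a, b in zip(scene, brighter)]
print(shifts)   # every pixel shifts by 1.0 in log space (up to float rounding)
```

This is why the trick only works when the project's colour space actually matches the footage: if Resolve thinks the footage is in a different curve, the "+1" lands on the wrong shape and the result drifts away from a true exposure change.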
-
Or does BRAW / Prores RAW also allocate more bit-depth / bandwidth to the middle part of the DR?
-
I suspect that this will derail the conversation completely but I'm going to ask it anyway... if you're shooting RAW then does middle-grey even matter? It definitely does when shooting LOG, as many LOG formats allocate less bit-depth to the brightest and darkest stops, so you really do want to put your skintones in the middle where there's better IQ, but considering that linear RAW gives the most bit-depth to the highlights and the least to the shadows, does the concept even make sense any more? I understand that if you're working with an editor/colourist then you want to expose things so that you can put on a display LUT for the editor and the colourist has a good starting point, but if you're trying to eke out the most quality and are willing to change exposure between shots, why wouldn't you just ETTR?
-
Yeah, I know there are but it seems to be the exception. I frequently find that discussions with "pros" assume you'll be able to operate fully manually, that you can have a rig the size of a fridge (or truck) if you need it, that light can be provided / modified to suit the composition, and that normal camera accessories will all be fine (up to the point where it makes the operator resemble a Borg drone). Any mention of situations where the speed of working is a factor or there are any limitations that might impact what you can do seem to be considered an exception. Those situations do seem to be handled pretty well when they come up, so I'm not suggesting a limitation on their behalf, merely assumptions around what is considered normal. I laughed pretty hard when the guy below mentioned that he had to work "incredibly fast" (6:25) to get a few random shots of his GF sitting on a train, or to get someone in focus on an escalator (essentially it's a stationary portrait when both of you are moving at the same speed). If he found that filming a subject who was motionless and would respond to direction to be "incredibly fast" then we've basically run out of words! "Extreme" seems suitable language to indicate to people that they should set aside their normal mode of thinking. The GH5 is still a really great offering at the intersection of the various considerations needed for working in highly unpredictable and fast-moving situations, despite showing its age in a number of ways (DR, colour, codecs, low-light). I'm not going to be in the market for a GH5 replacement until I start travelling again after COVID is actually gone (rather than the current wishful thinking that's going on in the PR departments of most governments), but I'm considering the GH6 and also the FP as potential replacements. 
The GH6 would keep the strengths of the GH5 but doesn't completely bridge the gap between the GH5 and current FF cameras in terms of low-light, DR, and perhaps colour (jury is still out on that one I think). The FP would limit me to OIS lenses (and likely less stabilisation compared to the GH5 with IBIS on unstabilised primes) but I think (when combined with a BM Video Assist to get BRAW and its sensible bitrates) it would buy me considerably better codec, DR, colour, and perhaps low-light too, so it might be a sacrifice worth making. I've also done a bunch of work during COVID around my editing process and style and have changed my requirements a bit because of that. Neither of these options is anywhere near the price of something like the R5C, which is competing in another league (8K RAW, DPAF, etc). I must admit that if I was willing to invest significantly more into a camera system, and was chasing something with PDAF then the R5C would be a contender, along with things like the FX3.
-
After years of telling people that I have different needs to what they think I do (or should have), I have now realised that I basically shoot in extreme situations. I shoot in caves, out of light aircraft, while walking/running, through glass, in high DR situations, in nasty weather and at crazy long focal lengths (>800mm FF equivalent). The fact that I do these things as part of home videos is irrelevant, and as soon as you say that, people just instantly think I should use a potato and be happy about it. I'd suggest you stop talking about "pro" and start using the phrase "extreme". It's true, after all - most pros shoot in situations where the world revolves around the camera, rather than the camera needing to fit into the world, let alone situations like ours where the camera needs to survive a world that is actively hostile to both good images and working equipment.
-
Keen to hear your thoughts on the camera, image, and this rather strange exposure behaviour... You mean a test like this?
-
Nope. Sex sells, but fear sells more. Look at the cover of any newspaper, basically ever, and tell me if it's "good news" or "bad news" 🙂
-
Wow - that would be why no-one has posted sample footage of it yet. Maybe it'll come in the firmware update where they add 4K and 1080p for Prores. There's absolutely no way they'd let people shoot their highest quality codec with non-oversampled lower resolutions. Maybe, I was just saying that it has the full DR. Same for me. Plus it gives the flexibility (once you know how to set it up) to adjust WB and exp in LOG before applying the LUT / CST.
-
Great post - thanks! As a GH5 user I can see lots of interesting little improvements here that address things that irk me about the GH5, so they'll be real-life improvements. A few thoughts:
- V-Log clipping at 88 isn't a big deal - for 10-bit footage the difference between 88 and 100 is inconsequential, and it's designed to provide compatibility with V-Log cameras with larger DR
- on the GH5 the HLG mode contains the whole DR of the camera (unlike any of the other picture profiles), so if HLG is in the camera then that's a full-DR alternative to V-Log
- 13 modes rather than 5 is huge, and especially that there's 4 on the physical dial - I will definitely be using more than 5 of these modes
- the sleep function forgetting things was definitely a PITA, so refinements to that are huge, and the extra custom modes help with that too
I'll be particularly interested to see how the 1080p (or 2K?) Prores mode is integrated into the camera. In addition to image quality, it'll be interesting to see if it requires the newer batteries or not, etc. Can you give a bit more information about how well the current (h26x) 1080p ALL-I mode works? Particularly, I'm interested in how the footage looks from these modes:
- 1080p24 mode
- 1080p24 mode with 2X digital zoom
- 1080p60 mode
- 1080p60 mode with 2X digital zoom
On the GH5 all those modes are downsampled and the footage looks flawless. Is the GH6 the same?
-
When you say "workflow" what kinds of things are you talking about? I notice that each discipline (cinematographer, DIT, editor, colourist, etc) uses this word to mean different things, which is pretty obvious when talking about an Alexa, but cameras like the GH6 are often used by people that don't fit into such nice neat boxes 🙂
-
My first experience feeling very old (apart from interactions with young kids) was seeing a medical specialist for something and thinking "are they old enough to have graduated from medical school yet?". I suspect that most of life as (say) an octogenarian would be simply looking around and seeing kids running everything and being amazed the world doesn't fall apart.
-
I saw another FP video yesterday, and even though I think the person didn't do a great job on the grade, you could see immediately from the footage that the camera isn't overpowered by high-contrast scenes, but seems to render them quite neutrally without it being a "stretch". This is perfect as it gives a really solid base for your grade and if you are wanting to really push/pull the footage then it gives you extra leeway to push/pull it further from neutral before the image starts to suffer. The more I see images from this the more I like what I see. I wish CineD would revise their DR tests, as the one they did was on the first firmware version and had some odd qualities to it that I think might have been improved upon in subsequent updates. Can anyone comment on the DR and image quality in V4 vs V1 firmware??
-
Thanks for your reply - very useful, and also heartening that it's not the footage that is the challenge but the camera itself. I'm further heartened that the challenges seem to be in working out the camera, rather than challenges that come up shoot to shoot (like fiddly buttons or confusing menus etc).

I've done many (many...) of these types of tests in the past to learn to get the most from the GH5 (and other cameras). I've shot test scenes in different modes, uploaded to YT, downloaded back from YT at different resolutions, and then studied the resulting images to see what is visible at the end of the whole pipeline and what is not (4K vs 1080p source material is essentially imperceptible when uploaded to YT at the same bitrate/resolution), etc. This means that those considerations are, for me at least, kind of inconsequential. I'd do some tests, evaluate my options, and then just work out my rules for shooting. On first glance, and knowing what I know about lighting levels (which I've talked about in another thread), I'd be tempted to have DR+ on all the time, have a fixed ND that I put on when outside during the day, and then just let it expose with auto-ISO. I tend to shoot with aperture only varying a few stops anyway, so that sort-of doesn't factor in that much.

Every time 100 people say "FF" on an MFT thread, one less review gets published. We're voting with every comment, and most people are voting against themselves. Go back in this thread and re-read the last 10-20 pages - you'll be left with the distinct impression that no-one is in the market for this camera and that AF has killed MFT. It might not be true, but it's how it sounds with all the moronic comments that people make.
-
I'm quite familiar with how the "feel" of footage from different cameras can be very different, and was heartened by how the S1 footage felt similar to the GH5. What does the footage feel like to you? Obviously this is a highly subjective question, but keen to hear why you agree with their assessment about the footage being difficult to work with.
-
Have many camera operators gone through all the necessary certifications and permits to be able to fly drones commercially? I haven't seen much discussion of it in those circles, but maybe I missed it.
-
Yes, I saw that and it's hugely impressive. Not being into FPV and not being familiar with it, the thing that surprised me most was the low quality and drop-outs of the video signal that the pilot was using. It makes sense logically that it would be low-resolution, but the noise seemed pretty extreme. It is a long way though. Drone pilot seems to be a profession that is (temporarily at least) booming, with both the introduction of compulsory licensing as well as the introduction of FPV-style operation, which seems to be hugely more difficult. These may disappear once the AI gets good enough, but in the meantime it's a whole new career in film-making. While not in the context of the entertainment industry, there's also a huge market emerging for infrastructure inspection by drones (flying a drone laden with all manner of sensors around buildings or other infrastructure) that can automatically detect various issues. It's an expensive endeavour, no doubt, but much preferable to erecting enormous scaffolding or hanging from ropes and doing inspections visually, or hauling huge equipment with you, which is the current option.
-
Yes, quite interesting and much more useful than the CineD type tests that only measure DR at native ISOs etc. It also reminds us not to compare DR results between different reviewers, which only adds to the value of this test as the methodology is (assumed to be) directly comparable between the four cameras they tested.
-
Very interesting! While things like this feature don't mean much to the full-manual folks, I think that the implementation of little features like this can really influence the experience and even the overall capability of a camera, especially when shooting fast in uncontrolled conditions. I use the 2x digital zoom on the GH5 all the time, which, in the 1080p mode, still oversamples from a ~2.5K sensor area to 1080p, giving a high-quality image and extending the focal range significantly. Considering the size of these cameras, and the hand-held target market, these are the things that really matter. I'll be looking forward to this comparison! I'd imagine I won't be the only one - everyone who hasn't been tempted to the dark side by FF will be very keen to see how they compare.
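For what it's worth, the "2x zoom still oversamples" claim is easy to ballpark. A rough sketch, assuming a GH5-like sensor around 5184 px wide (the actual video-mode readout width may differ, so treat the numbers as illustrative):

```python
# Back-of-envelope: does a 2x digital zoom still oversample for 1080p?
# Assumes a GH5-like sensor ~5184 px wide; exact video readout differs.

sensor_width = 5184
crop_width = sensor_width / 2      # 2x digital zoom reads half the width
oversample = crop_width / 1920     # vs 1080p output width

print(int(crop_width), round(oversample, 2))   # 2592 1.35
```

Even after the 2x crop there's still roughly a 1.35x oversample into 1080p, which is why the zoomed footage holds up.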
-
1.5kg is pushing what you'd want to carry around, that's for sure. My GH5 / Voigtlander f0.95 prime / Rode VMP+ combo is approaching that and is close to the limit of what I can comfortably carry around for long periods. In terms of the FP-L the main consideration is codecs. I'd inspect the images with a fine tooth comb - if it's anything like the FP the uncompressed RAW is a tsunami of data and the internal compressed formats were really not what you'd hope for. I'm considering the FP but would only use it with external RAW recording - and that's only for my own personal work.
-
Thanks, that's really useful. I would have been very surprised if I could do 4K60 but not 1080p120! I have seen various references over the years to bit-depths being reported incorrectly - 12-bit being recorded as 14 or 16 bit, and 14 bit as 16 bit - so it might be a common thing. As soon as you go to 10-bit you need (in theory) two bytes to contain each value, which is 16 bits, so maybe it's saying that the container is 16 bits, but of course the data will be limited to whatever bit depth the sensor/camera is capturing. If I go this way I would prefer a BM solution, as the integration with Resolve will be perfect and the bitrates will be much more manageable. One article said that 3:1 BRAW was about the same as Prores HQ, so I'll be able to get equivalent bitrates but with greater bit depth, plus on the 120p shots I can use as little as 12:1 and get roughly the same bitrates as 24p. This means 3:1 1080p would be ~180Mbps vs the FP's internal 610Mbps, and for 100p I can go from the FP's 2,530Mbps to as low as ~180Mbps. You can often get away with more compression on slow-motion footage as things don't change much from frame to frame, but it's nice to have the option. I don't think that Prores RAW has as many compression ratios available - just the two. I note the VA can record to SSD via USB, so that might be handy too.
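The bitrate arithmetic above can be sanity-checked by dividing the FP's quoted internal (uncompressed) rates by the BRAW ratios. This is a rough estimate only - BRAW's actual bitrate varies with the content - but it lands in the same ballpark as the ~180Mbps figures:

```python
# Rough BRAW bitrate estimates, derived from the FP's quoted internal
# uncompressed rates (610 Mbps at 1080p24, 2530 Mbps at 1080p100).
# Real BRAW rates depend on the footage, so treat these as ballpark.

fp_internal_mbps = {"1080p24": 610, "1080p100": 2530}

def braw_estimate(uncompressed_mbps, ratio):
    return uncompressed_mbps / ratio

print(round(braw_estimate(fp_internal_mbps["1080p24"], 3)))    # ~203 Mbps at 3:1
print(round(braw_estimate(fp_internal_mbps["1080p100"], 12)))  # ~211 Mbps at 12:1
```

So both the 24p at 3:1 and the 100p at 12:1 end up at roughly the same data rate, which is the appeal of the higher ratios for slow-motion.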
-
I'm trying to find a 1080p 12-bit workflow up to 120p from the FP into Resolve, but using a compressed codec to limit file sizes. The FP lists 12-bit 1080p up to 119.88p on the HDMI output, so that seems good. The BM Video Assist units don't list any frame rates beyond 60p, even in lower resolutions, and also rather troubling is that they list the HDMI input as 10-bit - does that mean that BRAW is 10-bit from these units? The Atomos Ninja lists 12-bit, so I'm assuming we can get 12-bit into that unit. Then the codec challenge appears, because Resolve doesn't support Prores RAW, and the Ninja doesn't support the 12-bit Prores flavours. It does support DNxHR HQX, which I believe is 12-bit (link), and I believe Resolve supports the format (it will render proxies in the format, so that seems good?) - but DNxHR is UHD only; the 1080p version is DNxHD (the last letter is different) and those seem to be limited to 10-bit. There is a converter from Prores RAW to CinemaDNG, which Resolve will read, but I'd like to avoid the transcoding step if possible. Any advice would be appreciated...
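For context on why a compressed codec matters here, the raw data rate of that 12-bit 1080p120 HDMI signal is easy to estimate. A sketch assuming one 12-bit sample per photosite (Bayer) at 119.88 fps - illustrative only, since the actual HDMI transport adds its own overhead:

```python
# Estimate the uncompressed data rate of a 12-bit 1080p 119.88p RAW
# signal, assuming one 12-bit sample per photosite (Bayer pattern).

width, height, bits, fps = 1920, 1080, 12, 119.88

gbps = width * height * bits * fps / 1e9
print(round(gbps, 2))   # ~2.98 Gbit/s before any compression
```

That's roughly 1.3 TB per hour straight to disk, which is why a recorder that compresses to something like BRAW or DNx is so attractive for this workflow.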