Everything posted by kye
-
Yes, this is exactly the challenge I was talking about - the direct support in NLEs so they will adjust the image appropriately. I know you said it can be done with V-Log-L, but I don't have that, and considering how old the GH5 is I didn't think it was worth the huge asking price for the upgrade, so I'm looking forward to having that ability.
-
Yeah, if you're set on AF then there's the OM-1, and if not then there's this. Maybe the zombie apocalypse is actually thousands of people running around outside making films with dead-format cameras, having a great time, while the FF purists huddle in their fortified caves, afraid that they'll get overrun by the impure hordes!
-
Fair enough, it was a stress test after all. I'm guessing it will look much nicer in its sweet spot. I'm looking forward to seeing how it looks with ProRes in FHD and DR Boost mode, which is how I'll shoot it. Getting the benefits of downsampling with the grain and texture retention of ProRes and the full V-Log capability will be an interesting mix. I'm also keen to see what treatments are around for V-Log, both LUTs and power grades. Maybe Juan Melara will give it his Alexa-matching treatment, etc.
-
Awesome - that makes sense - DR is about latitude, so the over/under tests are where it's at. Maybe the GH5 has so little DR that I've not paid much attention to the thing I didn't have! 😂😂😂

The addition of full V-Log is excellent, as I'm assuming it will mean you can make severe exposure and WB changes without running into tint problems. That has been my problem working with GH5 (and S1) footage - when you're pushing and pulling it, you get these colour warps in the shadows and highlights.

Makes sense, I'll have to read a bit more about it. A few reviewers mentioned it would get ProRes RAW in a future update. It's not something that will entice me, but it seems a pretty good thing to offer now that people expect RAW from cameras at this price point.

The Media Division review contains a test short near the start. It was shot in the tunnel they use for testing, which isn't exactly a nicely lit environment, but the footage looked pretty nice actually, so that's worth a watch if you haven't seen it yet. Also, PotatoJet filmed the talking-head part of his review using the GH6, including the AF, which he said did an awesome job, and that's a nice lighting environment for skin tones etc.

The GH6 isn't an "upgrade" to the P4K; they're fundamentally different cameras. People think that because they are similarly sized and priced they're comparable, but not so - different cameras for different tasks.

The GH5 in 4K is about an S16 crop using the ETC mode (1:1 readout); I haven't done the math, but the GH6 will likely be something like that (rough numbers sketched below). The new frame-guide function (which looks MUCH improved over the GH5 one, which was pathetic) would help you shoot within the S16 frame if the crop was a little wide. You'd have to crop the files in post, but that's easily done.

Yeah. It's just a matter of buying a fixed ND that takes the base ISO down to the normal, non-boosted-DR base ISO, then you can use your normal NDs. I also liked that they have a new super-fast maximum shutter speed, which may be necessary for shooting wide open in that mode in aperture priority (how I shoot).
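For anyone who wants to sanity-check the crop maths, here's a rough back-of-envelope sketch - the numbers (17.3mm MFT sensor width, ~5184 photosites across on the GH5, 12.52mm Super 16 frame width) are the commonly published figures as I understand them, and you'd swap in the GH6's own numbers once the full specs are confirmed:

```python
# Rough crop maths for a 1:1 (ETC) UHD readout vs Super 16 - illustrative numbers only.
MFT_SENSOR_WIDTH_MM = 17.3    # active width of a Micro Four Thirds sensor
GH5_SENSOR_WIDTH_PX = 5184    # approx. horizontal photosites on the GH5's 20MP sensor
S16_FRAME_WIDTH_MM = 12.52    # Super 16 camera aperture width

readout_px = 3840             # 1:1 UHD readout in ETC mode
readout_width_mm = MFT_SENSOR_WIDTH_MM * readout_px / GH5_SENSOR_WIDTH_PX

print(f"1:1 UHD readout width: {readout_width_mm:.2f} mm")   # ~12.8 mm
print(f"vs Super 16 ({S16_FRAME_WIDTH_MM} mm): {readout_width_mm / S16_FRAME_WIDTH_MM:.2f}x")
```

That comes out around 12.8mm wide, only a couple of percent wider than the S16 frame, so the frame guides plus a tiny crop in post would get you there.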
-
I suspect that it'll take a while to really understand what this tech is, what it means, and how to get the most from it. It's a different way of thinking about things compared to 'normal' or dual-ISO sensors.

Yeah, the delay in their video was huge! Firmware version, perhaps? Or a resolution / HDMI version mismatch? I can't think of what else would cause such a big difference.

It is a little confusing... One thing that's interesting is that CineD tested the GH6 at 11 stops with ProRes, and the P4K at about 11.6-11.7. (Source is the CineD test on the P6K, where they talk about the P4K in passing: https://www.cined.com/blackmagic-pocket-cinema-camera-6k-lab-test-dynamic-range-latitude-rolling-shutter-more/) So the P4K, with the same sensor area and only a single gain stage, gets better DR, which supports the idea that the GH6 sensor isn't the latest tech, but perhaps a smarter architecture on older tech?

If it is a BSI sensor - what advantages would that have?
-
Ah, that would be perfect too. So really then, have a 3.5mm mic jack with two 32-bit ADCs, and then have a switch in settings so you can swap the 3.5mm headphone jack to be a second mic jack, also with two 32-bit ADCs. I'd happily make a little mic array on top of the camera by adding a couple of 3.5mm lavs to either side of the shotgun mic.

EIS doesn't work well with a 180° shutter; it's only really good for very small amounts of movement or a very low shutter angle. Stabilised glass is a real limitation, which is one of the reasons I went to fully-manual primes. I shoot handheld at (the FF equivalents of) 15mm, 35mm, 85mm, and 400mm. Those are the primes I use, almost exclusively.

Ah, ok. Lots of log / codec combos don't get good bit-density in the highlights or shadows, putting more of their bandwidth into the mids where the 'proper' exposure band is, so that makes sense. Interesting, I didn't know those RAW codecs were log RAW and not linear. Even better. The more bit density you can put in the skin-tone range the better!
-
Ha! Wouldn't that be hilarious... "oh, the GH6 doesn't use AF because it's got sensors from the same place ARRI gets theirs!" ...still wouldn't shut up the people who can't focus their lenses by themselves 😛
-
@Tito Ferradans shows the latency here and it seems like there isn't any... linked to the timecode:

CineD have their results up: https://www.cined.com/panasonic-lumix-gh6-lab-test-rolling-shutter-dynamic-range-and-latitude/ Other modes do a little differently, but one highlight was that the ProRes modes didn't do much NR, and they found they "show a very nice, organic noise floor with additional stops that can be retrieved in post-processing". The GH5's combination of features already suits me better than any other camera, and this has 2 full extra stops of DR, which is cool.

Yeah, Media Division are spectacular.

Plus all sorts of other little cool things too. So many things while watching reviews made me think "oh, that's super useful - I'd use that".
-
+1 for Media Division's review. I looked through the dozen or so GH6 videos in my feed, and stopped when I saw Media Division had done one and just watched that. They are the most level-headed, film-making-centric channel out there - both technically capable and film-making aware, so they don't get caught up in tech for its own sake.

It looks like an absolute cracker of a camera. My impression is that although it's not teaching FF lessons like the GH4 and GH5 did on their release, it'll still be a solid presence in the market, especially for people who don't need AF. I'm also looking forward to Andrew's review.
-
I definitely did that to myself. I messaged @mercer asking for opinions and the "Camera for India" discussion hit triple digits; I ended up buying the camera locally for RRP and had one day to test it before I hit the road. I even had a lens shipped to a post office near where I was staying in Sydney, as I didn't want to risk it arriving late and missing it!

This is a huge topic and I'll resist the temptation to rant, but I think this is a hugely under-developed part of camera design. My ideal camera would have the ability to record all of these simultaneously:

- 3.5mm input 1 L&R to two audio channels (ie, normal)
- 3.5mm input 1 L&R to two audio channels (at -20/-30/-40dB as safety tracks)
- 3.5mm input 2 L&R to two audio channels (ie, normal)
- 3.5mm input 2 L&R to two audio channels (at -20/-30/-40dB as safety tracks)
- in-built mics L&R to two audio channels with auto-levelling

One thing that travel film-making really benefits from is ambient sound, and being able to record a shotgun (at normal gain plus a safety track) as well as a stereo track would be killer. I basically never remember to unplug the on-camera shotgun to get stereo sound, so I'm having to do a lot of work finding ambient soundtracks to liven things up in the mix, and if they were just there it would be spectacular.

...and yet those people will then have to buy a monitor anyway... which will just be a monitor, because the camera can now record RAW internally, so there's no need to go external for better codecs. *sigh*. Once again, camera size suffers because people won't do the right thing.

IBIS is all the rage, and I love it for shooting manual primes (which I use almost exclusively), but OIS can be incredible too. I chose my Sony X3000 action camera over a GoPro because of the OIS and it's just incredible. The combination of a super wide-angle lens (15mm FF equiv?) and OIS means that if I hand-hold a follow-shot from as high as I can reach, looking down, people think it's a drone.

The other ARRI feature that's missing is that it records RAW in log format, not linear. Is there any talk of that trickling down too?

There can be some tricky stuff in the data pipelines, but also strange manufacturer limitations coming from the "it's all too hard" department. For example, the GH5 has UHS-II card slots, and the 400Mbps codec won't record continuously onto a UHS-I card, despite UHS-I cards being rated well above that data rate and the standard allowing it too (quick numbers below).

It might be worth trying to ETTR? I worked out that it's a bad idea on the GH5, as the HLG profile has a pretty aggressive rolloff, so anything you expose up there ends up pretty thin, but it does give you lots of room in the shadows...
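On the card-speed point, the arithmetic really is that simple - a quick sanity check, assuming the UHS-I bus tops out around 104 MB/s in its fastest mode (which is my understanding of the spec):

```python
# Quick sanity check: a 400Mbps codec vs what a UHS-I card bus can carry.
codec_mbps = 400                 # GH5 All-Intra bitrate, megabits per second
codec_mb_per_s = codec_mbps / 8  # = 50 MB/s sustained write needed

uhs_i_bus_mb_per_s = 104         # rough UHS-I bus ceiling (fastest mode)

headroom = uhs_i_bus_mb_per_s - codec_mb_per_s
print(f"Codec needs ~{codec_mb_per_s:.0f} MB/s; the UHS-I bus tops out around "
      f"{uhs_i_bus_mb_per_s} MB/s, leaving ~{headroom:.0f} MB/s of headroom.")
```

So a card that genuinely sustains its rated write speed has roughly double the bandwidth the codec needs, which is why the block feels like a firmware decision rather than a physics one.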
-
Well, whaddaya know... it does have one. I take it back! I saw multiple YT videos where people said it had no audio on the body, not even a 3.5mm mic jack - I remember specifically because I was pretty stunned at how short-sighted that was. Apparently the camera-reviewing YT gaggle gets it wrong for Sony as well - who knew!?

Yeah, that concept of putting all the features into the smallest form factor makes sense. Things like internal NDs really make life interesting, and although I can go out with a pocket full of batteries, really it's something that should be included in all decently sized camera bodies. My GH5 regularly does a full-day outing without me having to swap to a second battery, and I'm not sure I can remember a time when it needed the third. I only carry two spares, but I'm careful to always put them back on charge each night.
-
I think maybe @Andrew Reid is too taken by his NDA GH6 to worry about anything else! At least, that's what I'm telling myself 🙂
-
That camera is interesting in a number of ways, but the fact it has no audio capability at all without the huge top-handle really compromises the very compact form factor. It's like taking a matchbox and having the only audio option be a shotgun microphone... permanently mounted to a boom! Obviously it's a cinema camera and blah blah blah, but not even capturing scratch audio while on a gimbal is a bit of an oversight, I think.
-
Yeah, and compare it to the other two. If it has a huge amount of noise because of the X-Trans sensor then we'll know, and if it doesn't, we'll know the chroma processing isn't for NR.
-
Considering you have access to the camera, maybe you could do a noise test by taking a few RAW stills of a dark, blank surface at a few ISO settings? It would be interesting to see what the levels of noise are between the R, G and B channels. If you have another camera from a different manufacturer handy, a comparison would be useful too. Then we'd know whether there even is a big noise problem with the sensor or not. Maybe it's terrible, in which case such heavy-handed processing might be understandable, even if the severity isn't justified.
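If it helps, here's a minimal sketch of how the per-channel numbers could be pulled out of such a test, assuming the stills can be opened with the rawpy library - the file names and ISO values are just placeholders:

```python
# Minimal per-channel noise check for dark-frame RAW stills.
# Requires: pip install rawpy numpy
import numpy as np
import rawpy

def channel_noise(path):
    """Standard deviation of each colour plane in a raw file, after black subtraction."""
    with rawpy.imread(path) as raw:
        data = raw.raw_image_visible.astype(np.float64)
        colors = raw.raw_colors_visible          # per-photosite colour plane index
        blacks = raw.black_level_per_channel
        names = raw.color_desc.decode()          # e.g. "RGBG"
        return {
            f"{names[i]}[{i}]": float(np.std(data[colors == i] - blacks[i]))
            for i in np.unique(colors)
        }

# Placeholder file names - shoot a dark, blank surface at each ISO and compare.
for iso, path in [(800, "dark_iso800.RAF"), (3200, "dark_iso3200.RAF")]:
    print(iso, channel_noise(path))
```

Run the same thing on the other manufacturer's files and the R vs G vs B numbers should make it obvious whether there's a real chroma-noise problem to begin with.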
-
Good to hear you're not too far off. It does seem that, regardless of our intentions, we somehow end up with new cameras right before real shoots. I did it to myself, buying my GH5 right before going to India on a once-in-a-lifetime-style trip. I look back on how I shot that trip and just roll my eyes at how little I knew the camera, but it turned out pretty well and I didn't stuff anything up majorly, so all's well that ends well, right? 🙂

In terms of getting your audio worked out - absolutely! Bad visuals are unfortunate, but bad audio makes it unwatchable!!
-
Great write-up, and nice to hear a nuanced view of pros and cons and feedback from real use.

My advice (for you and everyone getting a new camera) is to shoot and edit some test footage every day until you're familiar with the camera. I find that it's not until I shoot some footage and then look at it on the computer that I learn something. If you have a list of questions then just film a little test to explore each question and gradually work through them, eg:

- What do all the focus modes do, and what do the settings for each one do?
- How far can I overexpose? Under?
- How far can I push the WB? High ISO?
- What does every codec look like at every quality setting? (Find something with consistent movement - waves at the beach is a good one.)
- How long a focal length can I hand-hold without OIS? (Film your best handholding at a range of focal lengths, then repeat the same test at the same focal lengths when you're warm, when you're freezing cold, when you're hungry, and when you've had too little sleep and too much coffee.)
- etc etc etc...

I've done a lot of these tests and I find myself referring back to them when contemplating things. If using my camera in a certain way would mean pushing a certain parameter, I just go and look at that test and I know what it will look like.

In terms of the screen, can you design a LUT that is hugely aggressive and can be seen in bright light? Eg, everything from 30 IRE down is 100% pink, 30-60 IRE is 100% white, and above 60 IRE is 100% blue? Or even everything black except 35-65 IRE, which is red below 50 and blue above 50? Seeing something would be better than nothing.

Best of luck and good shooting! 🙂
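PS - on the aggressive LUT idea, something like the first version could be knocked up as a .cube file fairly easily. A rough sketch below; the IRE thresholds and the assumption that the screen feed is roughly Rec.709 are mine, so tweak to taste, and obviously load it as a monitoring LUT only, not in the recording path:

```python
# Generate a brutal "can I see it in sunlight?" false-colour LUT as a .cube file.
# Mapping: below 30 IRE -> pink, 30-60 IRE -> white, above 60 IRE -> blue.
SIZE = 33  # 33-point 3D LUT; the size is arbitrary

def false_colour(r, g, b):
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b   # Rec.709 luma weights
    if luma < 0.30:
        return (1.0, 0.0, 1.0)   # pink
    if luma < 0.60:
        return (1.0, 1.0, 1.0)   # white
    return (0.0, 0.0, 1.0)       # blue

with open("sunlight_check.cube", "w") as f:
    f.write('TITLE "Sunlight exposure check"\n')
    f.write(f"LUT_3D_SIZE {SIZE}\n")
    # .cube convention: red index varies fastest, then green, then blue.
    for b_i in range(SIZE):
        for g_i in range(SIZE):
            for r_i in range(SIZE):
                rgb = false_colour(r_i / (SIZE - 1), g_i / (SIZE - 1), b_i / (SIZE - 1))
                f.write("{:.6f} {:.6f} {:.6f}\n".format(*rgb))
```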
-
It's too long ago for me to remember accurately, but mostly I think they dried up. Of course, in a lens it's more complicated, because you shouldn't humidify them too much or you'll get fungus. Also, when @PannySVHS says the rubber "dissolved", I think that's literally true - which is very different to rubber drying out. It's probably a case of reading advice from known good sources, and considering the level of completely wrong info posted all over the internet about it, I'd only take that from the lens manufacturers directly.
-
So, can you get the 16-bit files out of the camera and see them in an NLE? Or just the 12-bit files?

There's nothing stopping them from reading the data off the sensor, changing it in whatever ways they want without debayering it or anything like that, and then saving it to a card. A talented high-school student could write an algorithm to do that without any problems at all, so it's not impossible. When light goes into a camera it goes through the optical filters (eg, OLPF) and the Bayer filter (the first element of the colour science, as these filters determine the spectral response of the R, G, and B photosites). Then it gets converted from analog to digital, and then it's data. There's very little opportunity for colour science tweaks there. I've looked at their 709 LUT and it doesn't seem to be there either. I'm seeing things in the colour science of the footage, but I'm just not sure where they are being applied in the signal path, and in-camera seems to be the only place I haven't looked.

It would be amazing if we were to get that tech in affordable cameras. It would give better DR and might prompt even higher quality files (ie, 12-bit log is way better than 12-bit linear RAW).

It's not a small-guy-vs-corporation thing at all. Most of the people pointing Alexas or REDs at something have control of that something. Most of the hours of footage captured by those cameras will be properly exposed at native ISO, will be in high-CRI, single-temperature lighting, and will be pointed at something where the entire contents of the frame are within certain tolerances (eg, lighting ratios, no out-of-gamut colours, etc). Most of the people pointing sub-$2K cameras at something do not have total control of that something, and many have no control at all. A lot of the hours of footage captured by those cameras will not be properly exposed at native ISO (or wouldn't be at a 180° shutter), won't be in high-CRI, single-temperature lighting, and won't be pointed at something where the entire contents of the frame are within certain tolerances (eg, lighting ratios, no out-of-gamut colours, etc).

You really notice how well your camera/codec handles mixed lighting when you arrive somewhere that looks like completely neutral lighting, look through the viewfinder, and see this:

This was a shoot I had a lot of trouble grading but managed to make at least passable, by my standards anyway. There are other shots that I've tried for years to grade and haven't been able to, even by automating grades, because things moved between light sources. Unfortunately that's the reality for most low-cost camera owners 😕

The difficult situations I find myself in are:

- low light / high ISO
- mixed lighting
- high DR

and when I adjust shots from the above to have the proper WB and exposure and run NR to remove ISO noise, the footage just looks so disappointing. Resolution can't help with any of those. I've shot in 5K, 4K, 3.3K and 1080p, and it's rare that the "difficult situation" I find myself in would be helped by having extra resolution. I appreciate that my camera downsamples in-camera, which reduces noise in-camera, and the 5K sensor on the GH5 allows me to shoot downsampled 1080p and also engage the 2X digital zoom and still have it downsampling (IIRC it's taking that from something like 2.5K), but I'd swap that for a lower-resolution sensor with better low-light and more robust colour science without even having to think about it.

They care about quality over quantity, and realise that one comes at the expense of the other. This is literally what I've been trying to explain to you for (what seems like) weeks now.

Interesting stuff. In that thread, the post you linked to from @androidlad says:

This idea of taking frames "a few milliseconds apart" sounds like taking two exposures where the exposure times don't overlap. Assuming that's the case, then yeah, motion artefacts are the downside. Of course, with drones it's less of a risk, as things are often further away and, unless you put an ND on it, the shutter speed will be very short, so motion blur is negligible anyway. We definitely want two readouts from the same exposure for normal film-making.
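For what it's worth, here's how I picture the 'two readouts of the same exposure' case - just a toy sketch, not how any particular sensor actually implements it: a high-gain readout holds clean shadows, a low-gain readout holds the highlights, and because both come from the one exposure there's nothing to mis-align:

```python
# Toy dual-gain merge: two readouts of the SAME exposure, blended into one HDR frame.
# Purely illustrative - real dual-gain pipelines are more sophisticated than this.
import numpy as np

def merge_dual_readout(high_gain, low_gain, gain_ratio=4.0, knee=0.75):
    """Blend two linear readouts (0..1) of one exposure.

    high_gain: clean in the shadows but clips early.
    low_gain:  noisier but holds the highlights; amplified 'gain_ratio' times less.
    knee:      level in the high-gain signal where we start trusting the low-gain path.
    """
    low_scaled = low_gain * gain_ratio                        # match the two scales
    t = np.clip((high_gain - knee) / (1.0 - knee), 0.0, 1.0)  # crossfade weight
    return (1.0 - t) * high_gain + t * low_scaled

# The merged frame exceeds 1.0 where the high-gain readout had clipped -
# that's the extra highlight DR the second readout buys, with no motion offset.
hg = np.array([0.10, 0.60, 1.00, 1.00])
lg = np.array([0.025, 0.15, 0.40, 0.90])
print(merge_dual_readout(hg, lg))   # -> [0.1  0.6  1.6  3.6]
```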
-
How to circumvent a rigid procurement process... 101. "It was an open tender - honest!"
-
That's an interesting possibility and, although hugely processor-intensive, it would unlock things not possible with the current setup. For example, if they took a huge number of short exposures, stabilised them, and then combined them, they could simulate a 180° shutter (eg, a 1/50s exposure) that was stabilised DURING the exposure (which current EIS tech cannot do). They could also do things like fade in the first few images and fade out the last few, creating motion trails that don't stop abruptly. They could even have adjacent frames overlap... ie, if you imagine a point on a spinning wheel, frame one could run from 12 o'clock to 3 o'clock, frame two from 2 o'clock to 5 o'clock, etc. IIRC RED did some things using an eND filter to fade in and fade out each exposure, but of course they couldn't overlap the exposures like this.

The whole body of research into 24fps being the threshold of continuous motion, and the 180° shutter being the most natural amount of motion blur, could be revisited, as the limitations they were working with in the film days would no longer apply.
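To make the accumulation part concrete, here's a rough numpy sketch (ignoring the stabilisation step, and with a made-up ramp shape): sum a burst of short, already-aligned sub-exposures with a weighting window that fades in and out, which gives the ~1/50s blur plus the soft-edged motion trails:

```python
# Simulate a ~180-degree shutter by accumulating many short, stabilised sub-exposures.
# Illustrative only - assumes the sub-frames have already been aligned/stabilised.
import numpy as np

def simulate_shutter(sub_frames, ramp=2):
    """Weighted average of aligned sub-exposures for one output frame.

    sub_frames: array of shape (N, H, W[, C]), e.g. 6 frames of ~1/300s for a 1/50s look.
    ramp:       sub-frames faded in at the start and out at the end, so the
                motion trail doesn't start or stop abruptly.
    """
    n = len(sub_frames)
    weights = np.ones(n)
    fade = np.linspace(1.0 / (ramp + 1), ramp / (ramp + 1), ramp)  # e.g. [1/3, 2/3]
    weights[:ramp] = fade           # fade in
    weights[-ramp:] = fade[::-1]    # fade out
    weights /= weights.sum()
    return np.tensordot(weights, sub_frames, axes=1)

# 24/25fps output from 6 sub-exposures of ~1/300s each ~= a 1/50s, 180-degree-ish shutter.
burst = np.random.rand(6, 4, 4)     # stand-in for 6 aligned sub-frames
print(simulate_shutter(burst).shape)
```

Overlapping adjacent output frames (the spinning-wheel example) would then just be a matter of letting consecutive calls share some of the same sub-frames.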
-
I never bought the V-Log update, so I haven't tried that, and I've never used ACES either. I do use Resolve Colour Management now - I think that's new in R17? When I interpret the GH5 HLG footage as Rec2100 the controls work pretty well. As I've mentioned before, I've tested it against Rec2100 and Rec2020 and it isn't a perfect match with either of those, but Rec2100 works well enough to be useful. I did look at buying V-Log, but once I saw that it also isn't natively supported I figured there was no point - I already have one thing that's useful but not an exact match, so why pay for another one 🙂

All this is in the context of how you're grading, of course, and I'm really liking grading under a PFE LUT (2393 is pretty good), which helps to obscure the GH5's WB / exposure / lacklustre colour science quirks in the images.
-
I used to be an IT tech, and we had to do major services on printers that were old, even if they had low page counts, because the rubber in the rollers that push the paper around had dried out and cracked and no longer worked reliably. What was interesting was that we had similar printers of similar ages in lots of different buildings with different types of air-conditioning (refrigerative, evaporative), and this really impacted the lifetime of the rubber rollers.
-
I'm still not convinced about this. Yes, they do say:

"Nothing is "baked" into an ARRIRAW image: Image processing steps like de-Bayer, white balance, sensitivity, up-sampling or down-sampling, which are irreversibly applied in-camera for compressed recording, HD-SDI outputs, and the viewfinder image, are not applied to ARRIRAW. All these parameters can be applied to the image in post."

However, immediately before that, they say:

"For the absolute best in image quality, for the greatest flexibility in post, and for safest archiving, the 16-bit (linear) raw data stream from the sensor can be recorded as 12-bit (log) ARRIRAW files."

So in this sense, the "nothing" baked into ARRIRAW still includes the combination of the two ADC streams and also the linear-to-log conversion (there's a toy illustration of that linear-vs-log packing at the end of this post). It's perfectly possible to do whatever you like to the image and still have the first statement be true in a figurative sense, which is how they have obviously intended it - "nothing you don't want baked in is baked in". The ARRI colour science could well still be baked in, no-one would be critical, and the statement they make would still be true in that figurative sense. To me, the proof is in the pudding, and even that Alexa vs LF test included small and subtle shifts between the two images that are unexplained by the difference in lenses.

That doesn't mean you care about colour as much as I do, or see it the way I do. The fact you can hire a colourist means the manufacturer's colour science matters less, as you can cover any shortfall by hiring a pro. I don't have that luxury, unfortunately, which is why the colour science has a greater impact on me. I suspect most people buying a GH6 also don't have regular access to a colourist to take up any shortfalls. Ironically, the cheaper the camera, the more its owner needs latitude, great colour science, and a solid codec to work with. By having the best image come from a camera that costs $100K, you're giving the most robust image to the very people who need it least, as they can afford to use the camera at its exact sweet spot and not need any of its latitude.

Agreed, and being skeptical makes sense. Realistically I'd be happy with any log colour space as long as it's a standard that is supported by ACES or RCM.

Like I said, the folks with access to higher-end equipment and colourists for backup and troubleshooting swap to the higher-budget stuff when the going gets tough, but those of us who don't have that luxury are stuck with what we have and have to make the best of it without any backup. But when we suggest that we'd prefer things that make our lives easier (robust colour science, codecs, DR, ISO performance, etc) rather than things that don't really help in difficult situations (resolution, etc), somehow that doesn't make sense to the people who aren't in our shoes? I think the top comment on the Alexa vs LF video is the most telling... "Based on image aesthetics I'd go with the 65, but based on my budget I went with my Panasonic G7."

Expecting ARRI-level images from a GH6 is definitely unrealistic. But I mention it for precisely the same reason you mention wanting more DR - the better they can make it, the better our results will be. Magic Lantern gives options for 14-bit RAW on the 5D and also on my 700D, so I'm assuming the Canon sensors have a 14-bit output. I'd assume that means it's not that difficult to do - after all, the 5D isn't a new camera by any means!

According to CineD, the GH5 gets 9.7/10.8 stops, the OG BMPCC gets 11.2/12.5, the S1H gets 12.7/13.8 and the Alexa gets 14/15.3. I really notice a difference between the GH5 and the OG BMPCC, and obviously the S1H is significantly more again. Have you shot with an Alexa? If so, I'm curious to hear if you noticed any differences from the extra DR when using one. Obviously there's an amount of DR that would be "enough" for almost all situations, and I'm curious to know where that amount is.

I'm also wary of the motion artefacts that would come from non-sequential exposures. I guess we'll see. It might be more useful in situations where motion blur isn't significant and extra DR is - for example, locked-off shots of architecture and the like. You could easily have it take a set of images around the 1/1000s mark, perhaps at 300fps, and then combine them. That would give you great DR to include the highlights in the sky etc.

Based on your logic about the 12-bit ADC being a limitation and new sensor tech being required, it will be interesting to see what this new sensor can do. Only a few sleeps left until the formal announcement!
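Coming back to the 16-bit linear vs 12-bit log packing mentioned above, here's a toy illustration of why the re-pack isn't really throwing anything useful away - this is an invented curve purely for illustration, NOT ARRI's actual LogC maths:

```python
# Toy example: packing a 16-bit linear sensor value into a 12-bit log container.
# The curve is invented for illustration - it is NOT ARRI's LogC formula.
import math

LINEAR_MAX = 2**16 - 1   # 16-bit linear sensor range
CODE_MAX = 2**12 - 1     # 12-bit log container
STOPS = 16               # pretend the sensor spans ~16 stops above 1 LSB

def to_log12(linear):
    """Map a linear value to a 12-bit code, spending equal codes per stop."""
    stops_up = math.log2(max(linear, 1))   # stops above the smallest linear step
    return round(stops_up / STOPS * CODE_MAX)

top_stop = to_log12(LINEAR_MAX) - to_log12(LINEAR_MAX // 2)
shadow_stop = to_log12(256) - to_log12(128)
print(f"top stop: ~{top_stop} codes, a deep shadow stop: ~{shadow_stop} codes")

# In 16-bit linear the top stop hogs ~32,768 values while that shadow stop gets ~128;
# the log encode hands out codes evenly per stop, which is why 12-bit log can carry
# what the 16-bit linear capture recorded without visibly starving the shadows.
```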