Everything posted by kye
-
@Tito Ferradans shows the latency here and it seems like there isn't any... linked to the timecode:

CineD have their results up: https://www.cined.com/panasonic-lumix-gh6-lab-test-rolling-shutter-dynamic-range-and-latitude/ Other modes do a little differently, but one highlight was that the Prores modes don't apply much NR, and they found the files "show a very nice, organic noise floor with additional stops that can be retrieved in post-processing". Considering the GH5's combination of features already suits me better than any other camera, getting two full extra stops of DR on top of that is cool.

Yeah, Media Division are spectacular. Plus all sorts of other little cool things too. So many things while watching reviews made me think "oh, that's super useful - I'd use that".
-
+1 for Media Division's review. I looked through the dozen or so GH6 videos in my feed and stopped when I saw Media Division had done one and just watched that. They are the most level-headed, film-making-centric channel out there - both technically capable and film-making aware, so they don't get caught up in tech for its own sake. It looks like an absolute cracker of a camera. My impression is that although it's not teaching FF lessons like the GH4 and GH5 did on their release, it'll still be a solid presence in the market, especially for people who don't need AF. I'm also looking forward to Andrew's review.
-
I definitely did that to myself. I messaged @mercer asking for opinions and the "Camera for India" discussion hit triple digits, and I ended up buying the camera locally for RRP with one day to test it before I hit the road. I even had a lens shipped to a post office near where I was staying in Sydney because I didn't want to risk it arriving late and missing it!

This is a huge topic and I'll resist the temptation to rant, but I think this is a hugely under-developed part of camera design. My ideal camera would be able to record all of these simultaneously:
- 3.5mm input 1 L&R to two audio channels (ie, normal)
- 3.5mm input 1 L&R to two audio channels (at -20/-30/-40dB as safety tracks)
- 3.5mm input 2 L&R to two audio channels (ie, normal)
- 3.5mm input 2 L&R to two audio channels (at -20/-30/-40dB as safety tracks)
- in-built mics L&R to two audio channels with auto-levelling

One thing that travel film-making really benefits from is ambient sound, and being able to record a shotgun (at normal gain plus a safety track) as well as a stereo track would be killer. I basically never remember to unplug the on-camera shotgun to get stereo sound, so I'm having to do a lot of work finding ambient soundtracks to liven things up in the mix - if they were just there it would be spectacular.

...and yet those people will then have to buy a monitor anyway... which will just be a monitor, because the camera can now record internal RAW, so there's no need to go external for better codecs. *sigh*. Once again, camera size suffers because people won't do the right thing.

IBIS is all the rage, and I love it for shooting manual primes (which I use almost exclusively), but OIS can be incredible too. I chose my Sony X3000 action camera over a GoPro because of the OIS and it's just incredible. The combination of a super wide-angle lens (15mm FF equiv?) and OIS means that if I hand-hold a follow-shot from as high as I can reach, looking down, people think it's a drone.

The other piece that's missing from ARRI is that it records RAW in LOG format, not linear. Is there any talk of that trickling down too?

There can be some tricky stuff in the data pipelines, but also strange manufacturer limitations coming from the "it's all too hard" department. For example, the GH5 uses UHS-II card slots and the 400Mbps codec won't record continuously onto a UHS-I card, despite UHS-I cards being rated well above that data rate and the standard allowing it.

It might be worth trying to ETTR? I worked out that it's a bad idea on the GH5 because the HLG profile has a pretty aggressive rolloff, so anything you expose up there ends up pretty thin, but it does give you lots of room in the shadows...
-
Well, whaddaya know... it does have one. I take it back! I saw multiple YT videos where people said it had no audio on the body, not even a 3.5mm mic jack. I remember specifically because I was pretty stunned at how short-sighted it was. Apparently the camera-reviewing YT gaggle gets it wrong for Sony as well - who knew!?

Yeah, that concept of putting all the features into the smallest form-factor makes sense. Things like internal NDs really make life interesting, and although I can go out with a pocket full of batteries, really it's something that should be included on all decently sized camera bodies. My GH5 regularly does a full-day outing without me having to swap to a second battery, and I can't remember a time when it needed a third. I only carry two spares, but I'm careful to always put them back on charge each night.
-
I think maybe @Andrew Reid is too taken by his NDA GH6 to worry about anything else! At least, that's what I'm telling myself 🙂
-
That camera is interesting in a number of ways, but the fact it has no audio capability at all without the huge top-handle really compromises the very compact form factor. It's like taking a matchbox and having the only audio option be a shotgun microphone... permanently mounted to a boom! Obviously it's a cinema camera and blah blah blah, but not even capturing scratch audio while on a gimbal is a bit of an oversight, I think.
-
Yeah, and compare it to the other two. If it has a huge amount of noise because of the X-Trans sensor then we'll know, and if it doesn't, we'll know the chroma processing isn't for NR.
-
Considering you have access to the camera, maybe you can do a noise test by taking a few RAW stills of a dark blank surface at a few ISO settings? It would be interesting to see the levels of noise in the R, G and B channels. If you have another camera from a different manufacturer handy then a comparison would be useful too. Then we'll know whether there even is a big noise problem with the sensor. Maybe it's terrible and such heavy-handed processing is understandable, even if the severity isn't justified.
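To be clear about what I mean, here's a rough sketch of the measurement - assuming the stills can be opened with something like the rawpy library, and with placeholder file names:

```python
# Rough sketch of the dark-frame noise test described above.
# Assumes the RAW stills can be opened with the rawpy library (LibRaw bindings);
# the file names and ISO values are just placeholders.
import numpy as np
import rawpy

def channel_noise(path):
    """Return per-channel (R, G, B) noise estimates from a dark RAW still."""
    with rawpy.imread(path) as raw:
        mosaic = raw.raw_image_visible.astype(np.float64)
        colors = raw.raw_colors_visible          # 0=R, 1=G, 2=B (3=second green on Bayer)
        black = np.array(raw.black_level_per_channel, dtype=np.float64)
        noise = {}
        for label, idx in (("R", 0), ("G", 1), ("B", 2)):
            samples = mosaic[colors == idx] - black[idx]
            noise[label] = samples.std()         # spread above the black level
        return noise

# e.g. one dark still per ISO setting:
for iso, path in [(800, "iso800_dark.RAF"), (3200, "iso3200_dark.RAF")]:
    print(iso, channel_noise(path))
```

If one channel's spread is dramatically wider than the others, that would go a long way to explaining heavy chroma NR; if they're all similar, it wouldn't.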
-
Good to hear you're not too far off. It does seem that regardless of our intentions, we somehow end up with new cameras right before real shoots. I did it to myself, buying my GH5 right before going to India on a once-in-a-lifetime-style trip. I look back on how I shot that trip and just roll my eyes at how little I knew the camera, but it turned out pretty well and I didn't stuff anything up majorly, so all's well that ends well, right? 🙂 In terms of getting your audio worked out - absolutely! Bad visuals are unfortunate, but bad audio makes it unwatchable!!
-
Great write-up, and nice to hear a nuanced view of pros and cons with feedback from real use.

My advice (for you and everyone getting a new camera) is to shoot and edit some test footage every day until you're familiar with the camera. I find that it's not until I shoot some footage and then look at it on the computer that I learn something. If you have a list of questions then just film a little test to explore each one and gradually work through them... eg:
- what do all the focus modes do? and what do the settings for each one do?
- how far can I overexpose? under?
- how far can I push the WB? high ISO?
- what does every codec look like at every quality setting? (find something with consistent movement - waves at the beach is a good one)
- how long a focal length can I hand-hold without OIS? (film your best handholding at a range of focal lengths, then repeat the same test at the same focal lengths when you're warm, when you're freezing cold, when you're hungry, and when you've had too little sleep and too much coffee)
- etc etc etc...

I've done a lot of these tests and I find myself referring back to them when contemplating things. If using my camera in a certain way means pushing a particular parameter, I just go and look at that test and I'll know what it will look like.

In terms of the screen, can you design a LUT that is hugely aggressive and can be seen in bright light? eg, everything from 30IRE down is 100% pink, 30-60IRE is 100% white, and above 60IRE is 100% blue? Or even everything black except 35-65, red below 50 and blue above 50? Seeing something would be better than nothing.

Best of luck and good shooting! 🙂
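If it helps, a crude false-colour LUT like that is trivial to generate as a .cube file - here's a rough sketch using the break-points from my example (the numbers and colours are just my suggestion, nothing standard):

```python
# Quick sketch: generate a deliberately garish false-colour .cube LUT
# for judging exposure on a dim screen. Break-points (30/60 IRE) and
# colours are just the example values above - tweak to taste.
SIZE = 33  # common 33-point 3D LUT

def zone_colour(y):
    """Map a luma value (0-1) to a loud flag colour."""
    if y < 0.30:
        return (1.0, 0.0, 1.0)   # pink: below 30 IRE
    elif y < 0.60:
        return (1.0, 1.0, 1.0)   # white: 30-60 IRE
    else:
        return (0.0, 0.0, 1.0)   # blue: above 60 IRE

with open("exposure_zones.cube", "w") as f:
    f.write('TITLE "exposure zones"\n')
    f.write(f"LUT_3D_SIZE {SIZE}\n")
    for b in range(SIZE):
        for g in range(SIZE):
            for r in range(SIZE):        # red varies fastest in the .cube format
                # Rec.709 luma of the input RGB triplet
                y = (0.2126 * r + 0.7152 * g + 0.0722 * b) / (SIZE - 1)
                f.write("%.6f %.6f %.6f\n" % zone_colour(y))
```

Load the resulting file as a monitoring LUT and the image becomes three flat zones that should survive any amount of glare.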
-
It's too long ago for me to remember accurately, but mostly I think they dried up. Of course, with a lens it's more complicated because you shouldn't humidify them too much or you'll get fungus. Also, when @PannySVHS says the rubber "dissolved", I think that's literally true - which is very different to rubber drying out. It's probably a case of reading advice from known good sources, and considering the level of completely wrong info posted all over the internet about it, I'd only trust the lens manufacturers directly.
-
So, can you get the 16-bit files out of the camera and see them in an NLE? Or just the 12-bit files?

There's nothing stopping them from reading the data off the sensor, changing it in whatever ways they want without debayering it or anything like that, and then saving it to a card. A talented high-school student could write an algorithm to do that without any problems at all, so it's not impossible. When light goes into a camera it goes through the optical filters (eg, OLPF) and the bayer filter (the first element of the colour science, as these filters determine the spectral response of the R, G, and B photosites). Then it gets converted from analog to digital, and then it's data. There's very little opportunity for colour science tweaks in that chain. I've looked at their 709 LUT and it doesn't seem to be there either. I'm seeing things in the colour science of the footage, but I'm just not sure where they are being applied in the signal path, and in-camera seems to be the only place I haven't looked.

It would be amazing if we were to get that tech in affordable cameras. It would give better DR and may prompt even higher quality files (i.e. 12-bit LOG is way better than 12-bit linear RAW).

It's not a small-guy-vs-corp thing at all. Most of the people pointing Alexas or REDs at something have control of that something. Most of the hours of footage captured by those cameras will be properly exposed at native ISO, will be in high-CRI single-temperature lighting, and will be pointed at something where the entire contents of the frame are within certain tolerances (eg, lighting ratios, no out-of-gamut colours, etc). Most of the people pointing sub-$2K cameras at something do not have total control of that something, and many have no control over it at all. A lot of the hours of footage captured by those cameras will not be properly exposed at native ISO (or wouldn't be at 180 shutter), won't be in high-CRI single-temperature lighting, and won't be pointed at something where the entire contents of the frame are within those tolerances. You really notice how well your camera/codec handles mixed lighting when you arrive somewhere where the lighting looks completely neutral, look through the viewfinder, and see this:

This was a shoot I had a lot of trouble grading but managed to make at least passable, for my standards anyway. There are other shots that I've tried for years to grade and haven't been able to, even by automating grades, because things moved between light sources. Unfortunately that's the reality for most low-cost camera owners 😕

The difficult situations I find myself in are:
- low-light / high ISO
- mixed lighting
- high DR

and when I adjust shots from the above to have the proper WB and exposure and run NR to remove ISO noise, the footage just looks so disappointing. Resolution can't help with any of those. I've shot in 5K, 4K, 3.3K and 1080p, and it's rare that the "difficult situation" I find myself in would be helped by extra resolution. I appreciate that my camera downsamples in-camera, which reduces noise, and the 5K sensor on the GH5 lets me shoot downsampled 1080p and even engage the 2X digital zoom while still downsampling (IIRC it's taking that from something like 2.5K), but I'd swap all of that for a lower resolution sensor with better low-light and more robust colour science without even having to think about it.
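(On the "change the data without debayering it" point above - just to illustrate how mundane that step is, here's a toy numpy sketch of applying per-channel gains straight to an RGGB Bayer mosaic. Purely hypothetical processing I've made up for illustration, not anything a manufacturer actually does.)

```python
# Toy illustration of adjusting sensor data *before* debayering -
# here, simple per-channel gains applied straight to an RGGB mosaic.
# Hypothetical processing for illustration only.
import numpy as np

def gain_rggb_mosaic(mosaic, r_gain, g_gain, b_gain):
    """Apply per-channel gains to a 2D RGGB Bayer mosaic without debayering it."""
    out = mosaic.astype(np.float64).copy()
    out[0::2, 0::2] *= r_gain   # R sites
    out[0::2, 1::2] *= g_gain   # G sites (even rows)
    out[1::2, 0::2] *= g_gain   # G sites (odd rows)
    out[1::2, 1::2] *= b_gain   # B sites
    return np.clip(out, 0, 2**14 - 1)   # keep within a 14-bit range

# e.g. a crude white-balance-ish tweak baked into the "raw" data before saving:
mosaic = np.random.randint(0, 2**14, size=(8, 8))
tweaked = gain_rggb_mosaic(mosaic, 1.8, 1.0, 1.4)
```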
They care about quality over quantity, and realise that one comes at the expense of the other. This is literally what I've been trying to explain to you for (what seems like) weeks now.

Interesting stuff. In that thread, the post you linked to from @androidlad says: This idea of taking frames "a few milliseconds apart" sounds like taking two exposures where the exposure times don't overlap. Assuming that's the case then yeah, motion artefacts are the downside. Of course, with drones it's less of a risk as things are often further away, and unless you put an ND on it the shutter speed will be very short, so motion blur is negligible anyway. We definitely want two readouts from the same exposure for normal film-making.
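For what it's worth, here's the kind of merge I'm picturing for two non-overlapping exposures, sketched in numpy with made-up values: the long frame provides the shadows and mids, and wherever it clips, the short frame (scaled by the exposure ratio) fills in. Anything that moved between the two exposures gets substituted from a slightly later moment in time, which is exactly where the motion artefacts come from.

```python
# Sketch of merging two non-overlapping exposures for extra DR.
# The long frame provides shadows/mids; where it approaches clipping, the short
# frame (scaled up by the exposure ratio) takes over. If the subject moved
# between the two exposures, the substituted pixels no longer line up with
# their neighbours - that's the motion artefact.
import numpy as np

def merge_exposures(long_frame, short_frame, ratio, clip=0.95):
    """Blend linear-light frames; `ratio` = long exposure time / short exposure time."""
    lifted_short = short_frame * ratio                               # match scales
    blend = np.clip((long_frame - clip) / (1.0 - clip), 0.0, 1.0)    # 0 below clip point, 1 at white
    return (1.0 - blend) * long_frame + blend * lifted_short

long_frame = np.random.rand(4, 4)    # stand-ins for linear sensor data
short_frame = long_frame / 8.0
hdr = merge_exposures(long_frame, short_frame, ratio=8.0)
```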
-
How to circumvent a rigid procurement process... 101. "It was an open tender - honest!"
-
That's an interesting possibility and, although hugely processor intensive, would unlock things not possible with the current setup. For example, if they took a huge number of short exposures, stabilised them, and then combined them, they could simulate a 180-degree shutter (eg, a 1/50s exposure) that was stabilised DURING the exposure (which current EIS tech cannot do). They could also do things like fade in the first few images and fade out the last few, creating motion trails that don't stop abruptly. They could even have adjacent frames overlap... ie, if you imagine a point on a spinning wheel, they could have frame one going from 12 o'clock to 3 o'clock, frame two going from 2 o'clock to 5 o'clock, etc. IIRC RED did some things using an eND filter to fade in and fade out each exposure, but of course they couldn't overlap the exposures like this. The whole research into 24fps being the threshold of continuous motion and the 180-degree shutter rule being the most natural amount of motion blur could be revisited, as the limitations of the film days would no longer apply.
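A rough sketch of what that could look like computationally - averaging a burst of already-stabilised sub-exposures into one synthetic 1/50s frame, with a fade-in/fade-out weighting on the first and last few slices. The frame counts and the ramp shape are just illustrative numbers I've picked, not anything any manufacturer has announced.

```python
# Sketch of the "stabilise during the exposure" idea: average a burst of very
# short, already-aligned sub-exposures to synthesise one ~1/50s frame, with a
# fade-in/fade-out weighting so motion trails don't cut off abruptly.
import numpy as np

def synth_long_exposure(sub_frames, ramp=3):
    """Combine N aligned sub-exposures into one frame with soft leading/trailing edges."""
    n = len(sub_frames)
    weights = np.ones(n)
    fade = np.linspace(0.0, 1.0, ramp + 1)[1:]    # e.g. 1/3, 2/3, 1
    weights[:ramp] = fade                          # fade in
    weights[-ramp:] = fade[::-1]                   # fade out
    weights /= weights.sum()
    return sum(w * f for w, f in zip(weights, sub_frames))

# e.g. a simulated 180-degree shutter at 24fps built from a 1200fps burst:
burst = [np.random.rand(4, 4) for _ in range(25)]  # ~1/48s worth of 1/1200s slices
frame = synth_long_exposure(burst)
```

Overlapping "shutter" intervals then just means re-using some of the same slices at the tail of one output frame and the head of the next.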
-
I never bought the V-Log update so I haven't tried that. I've also never used ACES. I do use Resolve Colour Management now - I think that's new in R17? When I interpret the GH5 HLG footage as Rec2100 the controls work pretty well. As I've mentioned before, I've tested it against Rec2100 and Rec2020 and it isn't a perfect match with either, but Rec2100 works well enough to be useful. I did look at buying V-Log, but once I saw that it also isn't natively supported I figured there was no point - I already have one thing that's useful but not an exact match, so why pay for another one 🙂 All this is in the context of how you're grading, of course, and I'm really liking grading under a PFE LUT (2393 is pretty good), which helps to obscure the GH5's WB / exposure / lacklustre colour science quirks.
-
I used to be an IT tech and we had to do major services on printers that were old, even if they had low page-counts, because the rubber in the rollers that push the paper around had dried out and cracked and no longer worked reliably. What was interesting was that we had similar printers of similar ages in lots of different buildings with different types of air-conditioning (refrigerative, evaporative), and this really impacted the lifetime of the rubber rollers.
-
I'm still not convinced about this. Yes, they do say:

"Nothing is "baked" into an ARRIRAW image: Image processing steps like de-Bayer, white balance, sensitivity, up-sampling or down-sampling, which are irreversibly applied in-camera for compressed recording, HD-SDI outputs, and the viewfinder image, are not applied to ARRIRAW. All these parameters can be applied to the image in post."

However, immediately before that, they say:

"For the absolute best in image quality, for the greatest flexibility in post, and for safest archiving, the 16-bit (linear) raw data stream from the sensor can be recorded as 12-bit (log) ARRIRAW files."

So in this sense, the "nothing" baked into the ARRIRAW includes the combination of two streams of ADC and also a colour space conversion. It's perfectly possible to do whatever you like to the image and still have the first statement be true in a figurative sense, which is how they have obviously intended it - "nothing you don't want baked in is baked in". The ARRI colour science could still well be baked in, no-one would be critical, and the statement they make would still be true in that figurative sense. To me, the proof is in the pudding, and even that Alexa vs LF test included small and subtle shifts between the two images that are unexplained by the difference in lenses.

That doesn't mean you care about colour as much as I do, or see it the way I do. The fact you can hire a colourist means the colour science from the manufacturer matters less, because you can cover any shortfall by hiring a pro. I don't have that luxury, which is why the colour science has a greater impact on me. I suspect most people buying a GH6 also don't have regular access to a colourist to take up any shortfalls. Ironically, the cheaper the camera the more its owner needs latitude, great colour science, and a solid codec to work with. By having the best image come from a camera that costs $100K you're giving the most robust image to the very people who need it least, as they can afford to use the camera at its exact sweet spot and don't need any of its latitude.

Agreed, and being skeptical makes sense. Realistically I'd be happy with any log colour space as long as it's a standard supported by ACES or RCM.

Like I said, the folks with access to higher-end equipment and colourists for backup and troubleshooting swap to the higher budget stuff when the going gets tough, but those of us without that luxury are stuck with what we have and have to make the best of it without any backup. But when we suggest that we'd prefer things that make our lives easier (robust colour science, codecs, DR, ISO performance, etc) rather than things that don't really help in difficult situations (resolution, etc), somehow that doesn't make sense to the people who aren't in our shoes? I think the top comment on the Alexa vs LF video is the most telling... "Based on image aesthetics I'd go with the 65, but based on my budget I went with my Panasonic G7."

Expecting ARRI-level images from a GH6 is definitely unrealistic. But I mention it for precisely the same reason you mention wanting more DR - the better they can make it, the better our results will be.

Magic Lantern gives options for 14-bit RAW on the 5D and also on my 700D, so I'm assuming the Canon sensors have a 14-bit output. I'd assume that means it's not that difficult to do - after all, the 5D isn't a new camera by any means!
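On that 16-bit linear to 12-bit log point, a quick toy experiment shows why the log encode is the sensible choice - with roughly uniform steps per stop, the shadows get vastly more code values than a 12-bit linear encode would give them. (The curve below is a generic log2 mapping I've made up for illustration, not ARRI's actual LogC math.)

```python
# Toy demo of why a 12-bit log encode holds shadow detail better than 12-bit linear.
# Generic log2 curve for illustration only - not ARRI's actual LogC formula.
import numpy as np

bits_out = 12
stops = 15                                       # pretend the sensor covers ~15 stops
signal = np.logspace(-stops, 0, 200, base=2)     # linear-light samples, deep shadow to clip

# 12-bit linear: uniform steps across the whole range
lin_code = np.round(signal * (2**bits_out - 1))

# 12-bit log: roughly uniform steps per stop
log_norm = (np.log2(signal) + stops) / stops     # 0..1 across the 15 stops
log_code = np.round(log_norm * (2**bits_out - 1))

# Count distinct code values landed on in the bottom 5 stops (below 1/1024)
shadows = signal < 2**-10
print("linear codes used in the bottom stops:", np.unique(lin_code[shadows]).size)
print("log codes used in the bottom stops:   ", np.unique(log_code[shadows]).size)
```

The linear encode spends almost all of its codes on the top couple of stops, which is exactly the part of the image that needs them least.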
According to CineD the GH5 gets 9.7/10.8 stops, the OG BMPCC gets 11.2/12.5, the S1H gets 12.7/13.8 and the Alexa gets 14/15.3. I really notice a difference between the GH5 and OG BMPCC, and obviously the S1H is significantly more again. Have you shot with an Alexa? If so I'm curious to hear if you noticed any differences from the extra DR when using them. Obviously there's an amount of DR that would be "enough" for almost all situations, and I'm curious where that amount is.

I'm also skeptical about the motion artefacts that would come from non-sequential exposures. I guess we'll see. It might be more useful in situations where motion blur isn't significant and extra DR is - for example, locked-off shots of architecture and the like. You could easily have it take a set of images around the 1/1000s mark, perhaps at 300fps, and then combine them. That would give you great DR to include the highlights in the sky etc.

Based on your logic about 12-bit ADC being a limitation and new sensor tech being required, it will be interesting to see what this new sensor can do. Only a few sleeps left until the formal announcement!
-
Which cameras have the most pleasing grain structure?
kye replied to QuickHitRecord's topic in Cameras
+1 on that. Also, sometimes you're pushing the ISO in ways you'd prefer not to. Some of the nicest shots I've got of the kids are when they're on their phones and the available light is so low that they're basically being lit exclusively by the phone. That's not a native-ISO situation, at least on single-native-ISO sensors!

It's relatively typical for colourists (and film-emulation packages like FilmConvert and Dehancer) to apply grain to some parts of the image more than others. IIRC they apply more to the mids and almost none to the highlights and shadows, as I think that emulates the grain from a negative/positive film process, where the roll-offs lessen the strength of the grain. Don't quote me on that logic, but it sounds right.

If I understand the math properly, you can get basically the same effect by desaturating the brightest highlights in your image. I contemplated writing a DCTL plugin for Resolve to recover highlights on non-RAW footage (as you can't use the option in the RAW panel), and over the course of figuring out the logic this was where I got to.
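For anyone curious, the desaturate-the-top-end idea looks roughly like this - done here in numpy rather than as the DCTL I was contemplating, and the threshold and rolloff numbers are arbitrary placeholders.

```python
# Rough numpy version of the highlight idea: progressively desaturate pixels as
# they approach clipping, so blown highlights roll off towards white instead of
# holding a solid colour. Threshold (start) and rolloff (end) are arbitrary.
import numpy as np

def desaturate_highlights(rgb, start=0.8, end=1.0):
    """rgb: float array of shape (..., 3) in 0-1. Blend towards luma above `start`."""
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])                 # Rec.709 weights
    amount = np.clip((luma - start) / (end - start), 0.0, 1.0)[..., None]
    grey = np.repeat(luma[..., None], 3, axis=-1)
    return rgb * (1.0 - amount) + grey * amount

pixel = np.array([[1.0, 0.9, 0.8]])      # a near-clipped warm highlight
print(desaturate_highlights(pixel))      # partially pulled towards neutral as it nears clipping
```
-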
Finally got around to looking at the TV episode I chopped up and, to be honest, I am completely blown away. The show is a long-running, award-winning travel show with a few interview / talking segments per episode and b-roll sections in-between - or so I thought. I've watched dozens and dozens of episodes from this show, so I'm quite familiar with it, but on first inspection I have multiplied my understanding of it by perhaps a factor of 100.

Some initial impressions:
- There is far more b-roll than I thought. There are multiple b-roll sequences between most interviews, there are V/O with b-roll sequences within the interviews, etc.
- The sheer quantity of shots is just immense. I have a pretty good intuitive understanding of how much shooting results in how much finished footage, and obviously I'm nowhere near as good as these cinematographers, but even if their hit-rate is 10x mine, they're still shooting spectacular quantities of b-roll.
- The editors are doing strange things with structure, even "fading" between locations by cutting more and more b-roll into talking sections so that the talking section kind of fades out, then gradually cutting in clips from the new location, so there aren't any clear transitions and questions like "when does this scene end?" become sort of meaningless.
- It seems to be a lot like music, with "call and response" where two or more things are intercut.
- It also seems to be very technical in terms of rhythm, where any timing cues established (by music or talking or anything) need to be aligned to, but can be doubled or switched to the "off-beat" etc. It really reminds me of programming break-beats when I was making electronic music.

I've watched hundreds or even thousands of hours of content at this level, but the editing is so seamless that I really had no clue what was going on until I chopped it up and started to try to categorise and understand it. Perhaps the most significant impression is that this is completely beyond anything I have seen on YT, or even discussed anywhere online - and I mean several orders of magnitude more sophisticated. Admittedly, I haven't found many good resources for editing online, so I'm hoping they're out there and that I'll find them.

If anyone is reading this and is tempted to start cutting up great work then I encourage you to do so. Once you start looking you'll start noticing things immediately - it's like opening a window and looking through into another world, and you don't get to see it unless you pull it into an editor.

Take a look at this:

It's obvious from this that there are a series of sections of building intensity with faster and faster cuts, leading up to a change of pace and then going much slower. One interesting thing is that the transition actually happens at the playhead (red line), and the two shots prior to that are the release of the tension before it changes. The playhead is where an ad break is placed. You might think this would be obvious, but the image above represents about 5 minutes - about 160 cuts - which is very difficult to watch while keeping the overall structure clear in your head. Also, if you look more carefully, it starts off with shorter bursts, then finds a mid-level of pulsing intensity, then a final push and release. Even if you could keep track of the pattern of building intensity over and over, you wouldn't be noticing that detail.
I'd imagine that not everything these masters are doing will be obvious from looking at the edit, but there's so much that is that it's an all-you-can-eat buffet regardless.
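If anyone wants to put numbers on the pacing rather than eyeballing the timeline, a crude approach is to take the cut points, turn them into shot lengths, and smooth them - shrinking shot lengths show the cutting accelerating. A tiny sketch along those lines, with invented timestamps:

```python
# Crude way to see the "building intensity" pattern numerically: take the cut
# points (exported or just typed in from the timeline), turn them into shot
# lengths, and smooth them. Shortening shot lengths = the cutting is accelerating.
# Timestamps here are invented, purely for illustration.
import numpy as np

cuts = np.array([0.0, 4.2, 7.9, 10.8, 13.1, 14.9, 16.2, 17.1, 17.8, 21.5, 26.0])  # seconds
shot_lengths = np.diff(cuts)

window = 3
smoothed = np.convolve(shot_lengths, np.ones(window) / window, mode="valid")
for i, (raw, avg) in enumerate(zip(shot_lengths[window - 1:], smoothed)):
    print(f"shot {i + window}: {raw:4.1f}s  rolling avg {avg:4.1f}s")
```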
-
Simply wonderful! The tech is needed and is a necessary discipline and skillset. The first challenge we have (especially in forums such as this) is to remember that the tech serves the art. The second challenge we have is to understand how the tech serves the art. How shutter angle makes the viewer feel. Colour. Motion. Lighting. Depth of field. Composition. Dynamic range. etc. All have tangible emotional impacts on an audience. The third challenge we have is to understand how to align all the tech to push the art in the same direction so that the desired aesthetic and emotional experience is clear and strong, being supported from all angles and with all factors. Most discussion doesn't acknowledge these challenges even exist, let alone satisfy the first challenge, and then the others.
-
Is there room in an MFT mount for an eND? I have no idea how thick those things are. That would be a great addition for stills as well as video. DJI could really easily make an MFT camera. They've shown they're not afraid of shaking things up - the design process for the 4D seems to have been them brainstorming every crazy idea they could think of and then implementing every one of them they could get to work into the final product. It's a pity that it's the camera equivalent of carrying around a live duck, but a great testbed for the tech.
-
That (might) indicate it's not a bug, but a deliberate decision. It all seems rather odd to me... I get why Canon etc cripple their cameras, but why would Fuji do it? And in such a strange way? My knowledge of X-Trans sensors is limited, but I thought they were simply an alternative layout of the RGB photosites designed to get an advantage when de-bayering. Nothing about that suggests they'd have some sort of horrendous chroma noise issue requiring such brutal chroma NR.
-
I'm keen to read more about this - can you link the source?

I'm not explaining the entire image pipeline in every post that I make, but even if I did, no-one understands the details anyway so it wouldn't help. Perhaps instead of just saying I don't understand things, speak to something tangible that I can respond to.

In terms of having used RAW footage? Sure. Probably more than a dozen cameras. I own three cameras that shoot RAW - one Canon and two BM. What's your point?

I'm trying to get the best colour possible. For that I have pulled apart the colour science from many brands, trying to understand what they are doing. I talk about ARRI mostly because they give the nicest colour by just applying their official LUT. RED is right up there too, along with BM 2012. BM 2018+ and Canon (RAW) are nice too, but aren't really at the highest level. I own the Emotive Colour matrix and don't talk about it much here, partly because I've pulled it apart and don't want to give away too much, and partly because it's so fragile to use - if the stars don't align you're not getting good results. It doesn't like exposure changes much and basically doesn't deal with mixed colour temperatures at all, which are present in most situations I film in. I have also bought film simulations from Juan Melara, downloaded DCTL plugins from professional colourists, LUTs from post production houses, and many other things.

Did you see the difference in image between the Alexa and the LF from that guy who did the comparison by splitting the light with a piece of glass so both cameras shared the same position? The other manufacturers may have been closing the gap, but that was ARRI leaping ahead by a mile. You don't seem to really acknowledge the differences, but I read lots of comments from people who are amazed at how much of an improvement ARRI were able to make to what was already a world-leading performer. I suspect colour matters more to me than to you. Sadly, in blind tests I always pick the most expensive cameras, even if I watch the comparison in 480p on YT.

You missed my point. If I was a manufacturer including h264 in a camera, I'd know it's going to be targeted at consumers and videographers, so I'd go a bit heavier on the processing because that's what this audience wants. If I then decide to include Prores, I know it's for a different target market who have different expectations about the image, so I'm likely to apply far less processing when the camera is set to record Prores rather than h264. The anamorphic mode in the GH5 applies less processing than the 16:9 modes, so Panasonic clearly understand that it's for a different target audience. Panasonic would be insane to include consumer amounts of processing and NR when the camera is set to record Prores.

Cool. Personally I record at a lower resolution than the sensor and don't want the camera to crop. For this, Prores is the winning option.

Saying that would be plain wrong. Good thing I didn't say it! Thanks for pointing that out?

I've done tests on a number of RAW cameras comparing the various bit-depths and also comparing the latitude of RAW vs Prores, and mostly I'm ok with 10-bit Prores recording a log profile. I'd prefer 12-bit Prores if it's available, but I'll happily take Prores HQ.

Colour science quality (and image quality in general) has two factors for me. The first is how good things look under optimal conditions. This is how (you'd hope) most professionals working on controlled sets are working.
The other main factor is how robust the image is when conditions are far from perfect. This is probably something you're not very experienced with, but it's the vast majority of clips that I shoot. I've mounted my BMMCC and GH5 together, put them through a number of sub-optimal situations, then pulled the footage into Resolve and graded them side-by-side, and the results are eye-opening. The BM footage just does what you tell it to do, whereas the GH5 footage suffers almost immediately. The sweet spot of the GH5 has pretty good colour - not great, but good - but that very quickly disappears when pushing and pulling the footage; even just adjusting the WB reveals that the small amount of colour magic it does have is pretty fragile. I've graded files from the S1 and the colour in the sweet spot was nicer and the DR was higher, but the magic was still quite fragile and the footage felt like the GH5. I'm hoping that with colour science improvements and Prores, the files will have more colour science magic and that the magic will be more robust when graded.
-
Absolutely, and that's perhaps the biggest reason I keep referencing ARRI. Their sensor is great, but it's the processing they do to the image that really sets them apart. In fact, it's so valuable to them that they apply it in-camera rather than in LUTs that can be pulled apart and analysed.

I'm not sure how much you know about colour science, but I have done many deep dives into pulling apart the colour science from a number of cameras, including Alexas, REDs, BMs, Canons, and Panasonics. I have spent a lot of time on the colourist forums reading their responses, reading the articles they link to, and studying their methods (and when I say "studying" I mean opening Resolve and trying to re-create the methods they explain, then using those methods on my own footage to get a feel for what is going on - like doing assignments at school - literally studying). What I have found is that:
- most colour science is a long way from neutral, but almost universally pushes the colours in the same ways, typically in the ways that film does
- you can take clips from multiple cameras and match them (and I'm talking about footage with larger colour checkers with lots of patches) so they look the same, but the ARRI or RED will have magic that the others simply won't have

I have also observed time and time again that the colourists are doing very complicated adjustments (often in alternate colour spaces that work in very different ways) and applying them very subtly. What I conclude from both the comparisons I have done and the little tweaks the colourists are willing to share (there is a lot they're not willing to share too) is that the magic is in the tiny little adjustments. Like in cooking, how some chefs can add tiny amounts of various seasonings that are so subtle you can't pick them out but they really lift the flavour. You are absolutely right that each manufacturer has the opportunity to build these things into their colour science (and not rely on their sensors), but the problem is that they just don't. Year on year they get incrementally better but really aren't closing the gap between their $2K-5K cameras and what the leaders are doing. The end result is that we're getting food made from the same ingredients (Sony sensors) that has only been seasoned with salt and pepper, and therefore tastes rather bland in comparison to ARRI/RED, who are demonstrating mastery in their use of spices.

I don't have V-Log on my GH5 for precisely this reason - it wouldn't get me anything. That's why I've been shooting HLG and testing it (it's not exactly either rec2020 or rec2100, but it's close enough to rec2100 to use that in Resolve). It would be great if the GH6 had real V-Log.

I'm very keen to see how they go about using Prores. Currently the GH5 HLG implementation is 10-bit rec2100-like colour and gamma, which isn't too bad to work with. The extra bit depth of Prores 4444 would be most welcome. In-camera NR and sharpening are definitely an issue, and I'd hope that implementing Prores means they'll tune the image to that codec and the expectations pros have of it. I don't think the idea that Prores isn't sharpened is true - I read somewhere that as Prores is compressed, it's best to add a small amount of sharpening to match the look of RAW. I can't remember where I read that, but I remember it coming from a source beyond questioning - perhaps ARRI or RED or the like.
It makes sense, as does the idea that they would match the perceived sharpness of RAW. In a sense, Prores isn't just a codec, but a complete approach to the processing of the image. The flavours of Prores will be interesting to see.

It is unfortunate that Prores wasn't included in the GH4 and GH5, but the bitrates might have been more than they could handle. With h264 there's no "right" bitrate, but with Prores there are standards, and it doesn't look so good in marketing if you're only giving people Prores LT - even though the bitrate of 4K Prores LT is 328Mbps, more than most other cameras and almost as much as the headline-grabbing 400Mbps GH5 ALL-I codec. Marketing is real, and often irrational, unfortunately.

Saying Prores doesn't matter because other cameras have internal RAW is just ridiculous. It's like someone saying their Ferrari doesn't have cupholders and someone else pointing out that most family sedans now have cupholders. A different camera having a good codec doesn't matter if that other camera doesn't meet the other criteria. I can't go outside and capture images using the sensor of one camera, the colour science of a second camera, and the codecs of a third camera.

RAW is also different from Prores in that RAW tends to be a 1:1 sensor read-out, meaning you either have to cope with the huge resolution and huge file sizes of the full sensor read-out or accept some kind of crop that screws up your whole lens collection. Lots of people shoot with a lower-resolution codec than their sensor and enjoy the benefits of downsampling. I am one of them, so RAW isn't of that much interest. One of the other benefits of Prores is that it was designed to be mostly indistinguishable from RAW under most conditions, so it's a very practical thing. Otherwise, why would every / most cinema cameras offer it in addition to RAW?
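Going back to the colour-checker matching I mentioned - the mechanical part of it can be as simple as a least-squares 3x3 matrix fit between the two cameras' patch readings. Here's a bare-bones sketch with invented patch values; the "magic" I'm talking about is precisely what's left over after a fit like this.

```python
# Bare-bones version of matching one camera to another off a colour checker:
# fit a 3x3 matrix (least squares) that maps camera A's patch readings onto
# camera B's, then apply it to A's footage. Patch values here are invented.
import numpy as np

patches_a = np.random.rand(24, 3)                 # 24 checker patches from camera A (linear RGB)
true_matrix = np.array([[1.10, -0.05, 0.00],
                        [0.02,  0.95, 0.03],
                        [0.00, -0.02, 1.05]])
patches_b = patches_a @ true_matrix.T             # pretend these are camera B's readings

# Least-squares fit of M such that patches_a @ M.T ~= patches_b
M, *_ = np.linalg.lstsq(patches_a, patches_b, rcond=None)
M = M.T

def match_to_b(rgb_a):
    """Apply the fitted matrix to camera A pixels (shape (..., 3))."""
    return rgb_a @ M.T

print(np.round(M, 3))   # recovers something close to true_matrix on clean data
```

On real footage the fit never lands exactly, and the residual - the subtle non-matrix behaviour - is where the interesting differences between manufacturers live.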