Everything posted by kye
-
Yeah, I've seen glimpses of that life and it doesn't look easy by any stretch of the imagination!
-
Link to the announcement?
-
I'm not sure that this applies for wedding videographers.... 🙂
-
I'd suggest that you also take into account the resolution of the various platforms and what resolutions you'll want to deliver in. For example, it seems that Snapchat might be the most extreme vertically, but if it only requires a very low resolution then you can get away with a wider sensor read-out and still oversample. Also take into account sensor resolution: an 8K 16:9 read-out would give more resolution for a vertical crop than a 4K 3:2 one. As is almost always the best advice - clearly define what you want to end up with and then work backwards from there.
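To make that comparison concrete, here's a rough sketch (plain Python; the sensor modes are illustrative picks, not tied to any particular camera) of how much resolution a 9:16 vertical crop actually keeps from different read-outs:

```python
# Rough sketch: how many pixels a 9:16 vertical crop keeps from
# various sensor read-out modes. The mode list is illustrative.
modes = {
    "8K 16:9": (7680, 4320),
    "6K 3:2":  (6000, 4000),
    "4K 3:2":  (4096, 2732),
    "4K 16:9": (3840, 2160),
}

TARGET_AR = 9 / 16  # width / height of a vertical 9:16 delivery

for name, (w, h) in modes.items():
    crop_w = int(h * TARGET_AR)  # the crop is limited by sensor height
    print(f"{name}: 9:16 crop is {crop_w} x {h} ({crop_w * h / 1e6:.1f} MP)")
```

Running that shows the 8K 16:9 crop keeping roughly two and a half times the pixels of the 4K 3:2 crop, which is the point about working backwards from the delivery format.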
-
...and more chance of extra equipment getting stolen from set while everyone is busy doing things. I see posts on FB groups every so often listing serial numbers of items that got stolen so the group members can keep an eye out if they pop up for sale. Usually someone asks how they got stolen and it's mostly out of the boot/trunk of locked cars, but the odd one disappears from set, unfortunately.
-
I must admit that I haven't kept up with your discussions on this, but I got the impression that you can't use the False Colour mode on the FP to accurately monitor things across all the ISOs - is that correct? The way I would use this camera would be either manually exposing or using it in auto-ISO with exposure compensation, but I would be using the false colour in either mode to tell me what was clipped and where the middle was. I'd be happy to adjust levels shot-by-shot in post (unlike professional workflows where you're working with a team) and know how to do that in Resolve, so I'd be comfortable raising or lowering the exposure based on what was clipping and what I wanted to retain in post. If the false colour doesn't tell me those things then it would sort of defeat its entire purpose.
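For anyone unfamiliar with using false colour this way, it reduces to a simple mapping from exposure level to a warning colour - a purely hypothetical sketch (real cameras differ, and these bands and thresholds are invented for illustration):

```python
# Hypothetical false-colour mapping from IRE level to display colour.
# Real cameras' bands differ; these values are invented to illustrate.
def false_colour(ire: float) -> str:
    if ire >= 100: return "red (clipped highlights)"
    if ire >= 95:  return "yellow (near clipping)"
    if 42 <= ire <= 48: return "green (middle grey)"
    if 52 <= ire <= 58: return "pink (typical skin tone)"
    if ire <= 0:   return "purple (crushed shadows)"
    return "greyscale (normal exposure)"

for level in (105, 96, 55, 45, -2):
    print(level, "->", false_colour(level))
```

My understanding of the complaint about the FP is that a mapping like this stops lining up with the actual clip point once you move across the ISOs, which is exactly what I'd want confirmed.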
-
"HLG" isn't a standard as-such, more like a concept. There are multiple standards within HLG (rec2020 and rec2100, maybe more) but the HLG from the camera may not be an implementation of one of those. I've tried this on the GH5 and found it wasn't exact, but was a "good enough" match to the rec2100, but I'm not doing this at any professional level. It's easily testable though - just do a set of over/under exposures on a reference scene and see if matching the levels in post results in a match or not. You can try converting from various standards and see if any of them get the shots to match. Its very easy in Resolve to put a reference exposure over the top and set the blending mode to "Difference" and you can clearly see the errors. IIRC on the GH5 the errors were strange colour shifts in the shadows and highlights and some of the more saturated colours, but was a pretty good match in the mids and lower saturation. I tried compensating with curves etc but it was messy.
-
Good post and good points made, and nice to see appetite for a more nuanced discussion around it. I see a number of elements in the overall set of functionality:

1. being able to reliably focus on a thing (this is where PDAF has the clear advantage)
2. being able to recognise objects that might be good to focus on (face detect, eye detect, animals, etc), and also being able to recognise that there might be candidates that are quite out-of-focus
3. being able to choose the right subject from the faces / animals detected
4. being able to know what to do when an object being tracked is lost (focus on someone else, the background, or hold focus?)
5. being able to adjust the focus racking speed to be context-sensitive
6. being able to anticipate focus on an object before that object is in frame

The PDAF vs CDAF debate only applies to #1 - literally none of the other elements are related to it at all. Obviously PDAF is much better at #1 than a non-dedicated focus puller, and this is what CDAF systems get criticised for (either not doing it at all, going the wrong way, or pulsing while tracking). Modern cameras are all getting much better at #2, with face-detect being pretty reliable and ubiquitous at this point. MILC/cinema cameras seem to have zero capability to do #3, which is why you have outlined the use of the joystick and touchscreen and other techniques, and I agree with you that if someone practices a bit and gets familiar with their camera then this is probably a good enough way to control the AF. #4 is only just being added to new cameras, so it's probably "cutting edge" - apart from the "face AF only" mode not being on all cameras, the ability to change the focus mode on the fly is probably absent or tedious (maybe I'm wrong here, but I doubt it's as easy as the controls for #3). #5 is adjustable deep in the menus of most cameras, but changing it on the fly is likely to be cumbersome / prohibitive, and the camera automatically doing it is completely absent from all cameras and will be a long wait. This is particularly relevant because there's always a balance between the camera not getting jumpy when something moves through the shot and not having a huge lag when the thing to focus on actually does need to change. #6 isn't available in any real capacity from cameras.

Anyone who focuses manually will be worse at #1 than PDAF systems, and have full and complete capability on #2-6. Mostly the conversation only talks about #1 with sideways references to #2-3 and no acknowledgement the others exist. In terms of aesthetics, I greatly prefer to have imperfect #1 if the rest are on-point. In fact, as someone who spends most of the time behind the camera on my own family videos, the way I use the camera (and edit) has proven to be a significant presence in these videos, and focus pulling is a significant contributor to that aesthetic.
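To make #4 concrete, the tracking-loss behaviour is really just a policy decision. Here's a purely hypothetical sketch of what that decision looks like - not any camera's actual firmware, just a way of making the options explicit:

```python
# Purely hypothetical sketch of element #4: what a camera could do
# when the tracked subject is lost. Not any real camera's firmware.
from enum import Enum, auto

class LossPolicy(Enum):
    HOLD_FOCUS = auto()   # stay at the last known focus distance
    NEXT_FACE = auto()    # jump to another detected face
    BACKGROUND = auto()   # rack out to the background

def on_tracking_lost(policy, other_face_distances, last_distance):
    """Return the new focus distance when the tracked subject is lost."""
    if policy is LossPolicy.HOLD_FOCUS:
        return last_distance
    if policy is LossPolicy.NEXT_FACE and other_face_distances:
        return min(other_face_distances)  # e.g. pick the nearest face
    return float("inf")  # fall back to the background

# e.g. hold focus at 2.5m even though two other faces are visible
print(on_tracking_lost(LossPolicy.HOLD_FOCUS, [1.8, 4.0], 2.5))
```

The point being that cameras currently hard-code one of these behaviours (or bury the choice in a menu), when it's exactly the kind of thing an operator wants to change on the fly.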
-
If only it were the case that something simply being true meant that no-one needed to be convinced of it! At this point, I've read so much BS online that I require quite a high amount of evidence that something is true before I repeat it to others, which was the point of me phrasing it like that in my original comment 🙂 It really seems like the FP is a great FF cinema camera that's really just lacking a good post-production workflow and support in the NLE space. I really hope they rectify this in future firmware updates - the sensor has soooo much potential!
-
The thing never discussed in these debates is "who". Who should it focus on? Very early on in the AF game you could program your camera with photos of your family and then it would look for those faces specifically and focus on them, not whatever face happened to be most convenient at the time. I've never heard features like this mentioned in these debates. Sure, AF from Canon / Sony is phase detect, which means that it can focus on the object it chooses for you. Sure, AF from Canon / Sony has face / eye detect, which means that it can focus on the face or eye it chooses for you. I've used some of the worst AF systems ever made and had many, many shots ruined by them. As often as not, they were ruined because the camera focussed on the wrong thing or wrong person. AF is great if you're trying to keep a talking head in focus at 100mm and F1.2, so I guess it's good enough for professional videographers, but as someone who shoots home and travel content, it's nowhere near sophisticated enough for my needs.
-
I have spoken about this at length on other forums and been convinced that the Alexa is simply a linear capture device and all the processing is in the LUT. Here is what happens when you under/over expose the Alexa and compensate for that exposure under the LUT: https://imgur.com/a/OGbI2To Result: identical-looking images (apart from noise and clipping limits). Of course, the Alexa is a very high DR linear capture device so I'm not criticising it at all. However, the FP is also a high-quality linear capture device, and the fact that the ARRI colour magic is in the LUT means that we can all use it on our own footage if we can convert that footage back to linear / LOG from whatever is recorded in-camera.
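For anyone wanting to replicate the test, the compensation itself is just a gain in linear light: an n-stop exposure offset is a multiply by 2^n applied before the LUT. A minimal numpy sketch (with a synthetic image standing in for real footage, which is my assumption for illustration):

```python
import numpy as np

# Synthetic linear-light "reference" image; the maths is identical
# for real footage once it's been converted back to linear.
rng = np.random.default_rng(0)
reference = rng.uniform(0.0, 0.9, size=(4, 4, 3)).astype(np.float32)

under = reference / 4.0            # shot 2 stops under: divide by 2**2
compensated = under * (2.0 ** 2)   # gain back up in linear

# Before the LUT the two are identical, so after the same LUT they
# will be too (apart from noise and clipping in real footage).
print(np.allclose(reference, compensated))  # True
```

That's the whole trick in the imgur comparison: if the capture really is linear, the gain undoes the exposure offset exactly and the LUT does the rest.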
-
Sounds like you need to calibrate your monitor! Let's have an argument about what specification your monitor is, whether it has a dedicated video output or is being polluted by your operating system's colour management, what calibration device you use and what software you paired with it, what options you chose for calibration, what the ambient lighting conditions are in your studio and what CRI and calibration the globes have, what colour the walls of your studio are - I mean, did you even buy the special neutral-colour paint???? No wonder it's all gone topsy-turvy for you!!! (In all seriousness, those things do matter, but making a film that isn't boring is still way more important....)
-
I've tried correcting skintones on hundreds? thousands? of shots of GH5 10-bit footage. I haven't tried breaking them or really pushing the grade specifically on them though. As I said, I'm not sure if 10-bit is enough or not, maybe not. I know that some of the ML users here have reported seeing not only differences between 10-bit and 12-bit RAW, but also between the 12 and 14 bit RAW, so that's more data to factor in. I guess mostly, I'd like to be given the option! ......and without having to build a whole rig around an external recorder 🙂
-
Agreed. Something that I think doesn't get talked about much and isn't well understood is the relationship between resolution, bit-depth, and bitrate. To some extent they are linked and can, under some circumstances, offset the weaknesses in the others. I'd still like Prores 444 though, because it would give me downsampled 1080p with low levels of sharpening, high-enough but not unmanageable bit-rates, and RAW-like bit-depth 🙂
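One way to see the relationship is to reduce each format to bits per pixel per frame. A rough sketch - the bitrates are ballpark published targets and the mode picks are just my examples, not measurements:

```python
# Rough sketch: reduce resolution / frame rate / bitrate to a single
# bits-per-pixel-per-frame figure. Bitrates are ballpark published
# targets; real codec efficiency also depends heavily on content.
modes = [
    # (label, width, height, fps, bitrate in Mbit/s)
    ("GH5 4K 10-bit ALL-I",   3840, 2160, 25, 400),
    ("ProRes 422 HQ 1080p25", 1920, 1080, 25, 184),
    ("ProRes 4444 1080p25",   1920, 1080, 25, 275),
]

for label, w, h, fps, mbps in modes:
    bpp = (mbps * 1e6) / (w * h * fps)
    print(f"{label}: {bpp:.1f} bits/pixel/frame")
```

Per pixel, the 1080p ProRes streams carry a few times more data than the 4K example, which is part of why downsampled 1080p 444 is such an attractive trade.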
-
I so wish that companies would put Prores 444 into their cameras as a 12-bit standard that is well supported... well, that they'd put Prores in their cameras at all, then 444 after that 🙂 Mind you, I have tried to break the 10-bit files from my GH5 and they've held up, so maybe there isn't much need for it? Not entirely sure on that one.
-
You're probably thinking of Prores RAW, which Resolve doesn't support. Normal Prores has full support though. I've noticed that people online often say Prores when they mean Prores RAW and it's quite confusing as they're definitely not the same.
-
I think all crews will always be stretched:

1. The crew might be stretched because they're all trying to do their own thing well, and any spare time/resources is quickly put to doing something better or something more that wasn't originally included.
2. The crew might be stretched because they're all trying to do as little as possible, and therefore the task expands to fill the available time.
3. The crew might be stretched because they're a smooth-running operation who are coordinated and efficient through experience and discipline (which is no easy feat to implement).

The way you can tell the difference is that in the first instance the results are greater than expected from the time and resources, in the second the results aren't better than expected, and in the last things seem relaxed and smooth and there are time and/or resources spare at the end.

Absolutely. People concentrate on image image image and it's so short-sighted. I talk a lot about dynamic range and stabilisation etc in my work and share nice images I've captured, and people respond with the old "the GH5 is enough", but what they're not seeing are clip after clip after clip where I couldn't get my shit together in time and missed the moment, where the DR wasn't enough, where the stabilisation made it unusable, etc etc. Every usable clip on the card is better than every clip you can't use or didn't capture, and camera choice is huge in that equation.
-
I thought Gerald was the detailed spec guy? Like, he was the one who would find and mention all the bizarre and nonsensical cripple-hammer combinations, because he was the only one who had used the camera for more than a day before posting a review about it. But would you want his recommendations and conclusions? Definitely not - he's a technician, not an artist. Film-making isn't about technical perfection, it's about the aesthetic. Gerald's (and most internet technologists') favourite aesthetic is beige filters on a beige lens on a beige sensor with enormous resolution to reproduce the mind-numbing dullness of the whole affair. If you only read the internet, then this would be a delicious delicious image:
-
It's a good question why they don't add those modes. Looking at the sensor specification sheets that have been shared over the years, I've noticed that often Sony (it's always Sony) will list a full-resolution readout of the sensor up to a certain frame rate (eg, 30p) and then a lower resolution at a higher rate (eg, 60p or 120p), but won't list any lower/faster modes than that. For example, they might list 6K60 and 4K120, but not 1080p240 (which is actually half the number of pixels per second of 4K120), let alone faster readouts for 720p or 480p. Obviously the tiny sensors for smartphones and the 1" ones like the RX100 (and GoPro?) etc do have those fast/lower-res modes, but maybe they just don't include them on larger sensors - not sure why.
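The pixels-per-second claim is easy to sanity-check - a quick sketch (assuming the usual 16:9 frame sizes for each mode):

```python
# Quick sanity check of the readout-rate comparison above.
# Frame sizes are assumed to be the common 16:9 ones.
modes = [
    ("6K60",     6144, 3456,  60),
    ("4K120",    3840, 2160, 120),
    ("1080p240", 1920, 1080, 240),
]

for label, w, h, fps in modes:
    print(f"{label}: {w * h * fps / 1e6:.0f} megapixels/second")
```

1080p240 works out to roughly 500 megapixels/second against roughly 1000 for 4K120, so readout bandwidth clearly isn't what's stopping them listing the mode.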
-
Actually I think the image is overkill in resolution, and underkill in everything else. Just because the GX85 is smaller than most other cameras and just because it's got a better image than the competitors in a similar size / form-factor doesn't mean it's overkill. So many times people confuse being 'better than average' with being 'more than needed' and it's just not the case. Sure, you might have the smartest toddler in the world, but that doesn't mean they can lead the space program. Reality doesn't bend just because the standard of offerings is low!
-
I think the colour on that video was really good, and I actually think that the oversaturated grading shows how impressive the image from the camera must be in order to be graded like that. Anyone who has tried to push a huge amount of saturation into some footage will know that it's a good way to reveal the weaknesses in the footage, so the fact that the image held up to that is really a vote of confidence. I find it strange and a bit sad that mostly people can't separate the capabilities of a camera from the grading that is done to it. The Slashcam comparisons on YT show that most cameras look dull and understated when simply exposed properly and with the default manufacturer's LUT on them, meaning that all the great-looking videos we see are the effects of lighting and production design on set and a colourist in post, all within the limitations of the camera. I think if you went through every camera thread on these forums you'd find one (or many) examples of "I don't like that grade on that video - the camera must be crap". It's like burning a steak and then blaming it on the cow.
-
[Edit: Thanks for your apology post - I wasn't offended and no harm done 🙂 ] I'm not sure why you and others keep running to the extremes. I never said the GH5 was great, and neither did I say it was terrible. I'm actually very interested in a new camera once I'm back and travelling again, as the GH5 colour and low-light and DR are definitely showing signs of age and are actively limiting factors in me getting the most from the camera. In terms of trends or whatever, nowhere did I claim that was the first trend ever, so that's just a bizarre instance of you putting words in my mouth. As for 12 people liking it, um.... what a completely nonsensical comment considering that a few clicks show that the video has 14.5 million views on YT, the song reached the top 100 charts around the world, and the artist appeared on shows like Jimmy Fallon.
-
@PannySVHS's comment was "They went a bit overboard with grading, especially the red cloth of the suit is oversaturated" and @webrunner5 said while talking about the background colours "It is just a dull blob." Sure, no-one said they wanted accurate, but some kind of ideal middle ground is implied. You can't say that something is "too <something>" unless you're comparing it to something, and they didn't specify what that something was, just some sort of nice/desirable one-size-fits-all kind of idea. If there's no "right" way to do it, then there can't be any wrong way to do it, and therefore no criticisms. But that's not what was posted.....