Everything posted by kye
-
I think this is the fate of many cameras. The camera market seems to be oriented around camera size being a prerequisite for image quality, despite the fact it doesn't have to be (as evidenced by the BMMCC and OG BMPCC, which were affordable a decade ago). This means there is little post-production support for the lighter cameras, and they carry less of a reputation. This creates a feedback loop where people who want to shoot light don't make good-looking videos because the expectation isn't there, and people who have the expectation of great images shoot with larger cameras, reinforcing the idea that small and budget-friendly isn't the way to go. In response to this completely made-up size=quality concept, BM decided to make the BMPCC 4K literally 3x the total size of the OG BMPCC and abandoned the delightful S16 sensor for one that looks like any other camera, and almost all evidence that small could be great was swept under the rug. Those who shoot small typically shoot in uncontrolled conditions where the lighting is mixed colour temperature and the lighting ratios are difficult to deal with. That leaves the people shooting in the most difficult conditions, with the least ability in post-production, using the cameras least able to handle those conditions - and unaware that it could have been any different.
-
The news shooter article says that normal retail price will be $3,500 or so, which seems reasonable. The price of mechanical and electronic components has plummeted over the last few decades thanks to centralisation in China. The factor that makes or breaks products these days is the design of the physical unit and the software that controls it. The proof is in the pudding, as they say, so I would reserve judgement until there are units in the wild, and of course, any PR materials that are real footage (it's common for even things like car commercials to be 3D renders now) will have been shot under the absolute best possible circumstances. Anyone who has ever used a GoPro knows that you'd have to be luckier than a lotto winner to get shots like they put in their promos.
-
Yes, it is noteworthy, but it would be good to see some tests where the magnitude of the streaks is known. ie, are the streaks 10 stops below the bright part of the image? 20 stops? 30 stops? If we knew that, it would be easier to know in which situations you'd encounter a problem and in which you wouldn't. This is the GH6 XYLA21 chart from CineD, and the curve from the brightest patch down to the noise floor (the curve on its left) looks like it drops away into the noise floor pretty quickly, so I'd imagine that for the streaks to be visible above the noise floor maybe 1/4 of the width of the frame from the brightest areas (as in the image you showed), the highlights must be well over 13 or so stops above the flag. That seems obvious considering the flag is pretty dark and the windows are completely blown to hell, so who knows how bright they actually are.
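For anyone wanting to put numbers on it: stops are just a log2 ratio of linear luminance, so a quick sketch of the arithmetic I'm doing above (in Python, with made-up example values) looks like this:

```python
import math

def stops_between(bright, dark):
    """Number of photographic stops between two linear luminance values."""
    return math.log2(bright / dark)

# e.g. if a streak is 1/1000th the luminance of the window that caused it,
# it sits about 10 stops down - which a 12-13 stop sensor can still resolve
print(round(stops_between(1.0, 1 / 1000), 1))   # ~10.0
print(round(stops_between(1.0, 1 / 8192), 1))   # 13.0
```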
-
If it helps, you can go to a hardware store and grab a bunch of highly-saturated free paint swatches with the brightest colours you can find (and grab some skin-coloured ones too while you're there if you don't already have a colour checker) and experiment with those. Assuming you're set up with a calibrated monitor and neutral studio lighting (I recommend MediaLight https://www.biaslighting.com/ and have lit my studio with their bias light and plug-in globes), you can shoot some test shots (I recommend full sun for best colour accuracy, or a halogen / incandescent globe if it's not sunny outside), then bring the swatches into the studio and compare them to your monitor (if you have a portable high-CRI light you can light the swatches with that - keeping any spill off your monitor) and literally compare them side-by-side, which is a far more brutal comparison than anyone would ever make looking at your work in isolation. Much easier to do that than convert your studio into a rainforest habitat!

What are your thoughts on RAW vs Prores and how good Prores is at matching the capabilities of RAW? I ask because lots of folks here have only shot with h26x and RAW, where the differences are obvious, but may not be aware of where Prores sits in that spectrum.

lol.... Are you able to provide any examples? I've seen a number of tests that don't have these issues, so I'm interested in seeing how people are testing this.

I'm seeing random idiots all mouthing off without having any thought (or even fact) behind their opinions all the time at this point. I think I saw one person declare that the GH6 was useless because it didn't have PDAF and then rave about the S1 for a while - I was thinking "um.........??"

lol.. I find that when shooting with the family on trips there are a few things that solve lots of problems. Unfortunately one of the ones that gets used most often is a credit card!!
-
On what I shoot I'm everything... director, cinematographer, audio tech, editor, colourist, and Head of Distribution. You'd think that would make things easier, but it just means that instead of a room full of people all blaming each other and feeling good about their own work, I just sit quietly wondering what went wrong and wishing I was better at literally everything!

That's probably pretty easy. Shoot a few test shots, then just adjust the curves to get the right output - specifically, Hue vs Hue, Hue vs Sat and Hue vs Lum. You'd be amazed at how powerful those are. If you start with Contrast/Pivot/Sat and then do a pass on those Hue curves you can get a good grade very simply and easily. I use them a lot when grading videos from here in Australia, where the grass is almost always patchy and somewhere between lush green and straw yellow, and you're trying to get a shot-to-shot match while pointing in different directions and seeing different bits of lawn. (There's a rough sketch of what those curves are doing at the end of this post.)

The final season (and one episode especially) of the show was so dark that it was difficult to follow what was going on. It was a social media shit-storm, the cinematographer blamed the viewers: https://mashable.com/article/game-of-thrones-too-dark-cinematographer and websites got a lot of mileage talking about how to re-watch it so you could actually see it: https://www.theverge.com/2019/4/29/18522550/game-of-thrones-got-season-8-hbo-battle-of-winterfell-long-night-fix-tv-settings-darkness In reality, it was a large-scale test of how content is consumed in the real world because they pushed how dark the final grade was (which was appropriate to the narrative and subject matter), and the answer was that basically the entire pipeline isn't set up for dark things to even be visible, let alone somehow .. "accurate".
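On the Hue curves mentioned above: here's a minimal sketch (in Python with OpenCV, not Resolve itself) of what a Hue vs Sat curve is doing under the hood - remapping saturation as a function of hue. The centre hue, falloff width and gain are made-up example values, and a Gaussian falloff stands in for the curve shape you'd draw by hand:

```python
import cv2
import numpy as np

def hue_vs_sat(img_bgr, centre_hue=45, width=20, sat_gain=0.8):
    """Scale saturation for pixels whose hue sits near centre_hue.
    OpenCV hue runs 0-179, so ~45 is roughly the yellow-green of patchy grass.
    (Hue wrap-around at 0/180 isn't handled in this sketch.)"""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    h, s, v = cv2.split(hsv)
    weight = np.exp(-((h - centre_hue) ** 2) / (2 * width ** 2))  # curve falloff
    s = s * (1 + weight * (sat_gain - 1))  # blend toward sat_gain near centre_hue
    hsv = cv2.merge([h, np.clip(s, 0, 255), v]).astype(np.uint8)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

# e.g. desaturate the yellow-green grass tones by 20%:
# graded = hue_vs_sat(cv2.imread("frame.png"), centre_hue=45, sat_gain=0.8)
```

Hue vs Hue and Hue vs Lum are the same idea, just remapping hue or luminance instead of saturation.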
-
Walter Volpatto also mentioned that he's never been asked for an HDR output that wasn't limited to the P3 colour space, so it seems you're right that 1000-nit P3 is becoming the standard.

Colour accuracy is a pipe dream and (besides "how can I get my iPhone to look like an Alexa") is the topic that gets colourists the most excited. The best approach I've seen colourists take is:
- grade on a calibrated display
- keep a calibrated consumer-grade 709 display handy and check it every so often to make sure you're not doing anything stupid
- get whatever display the Director is viewing your work on and check it on that too
- once it leaves your studio, forget about it
- whenever someone important (other than the Director) starts yelling at you incoherently about the colour being completely screwed, arrange to have them visit your studio, show them what it looks like on your reference display, and make sure to explain in great detail how expensive and calibrated everything is
If you want to talk about accuracy, let's start the conversation discussing the final season of Game of Thrones......

It seems to be an approach taken by other manufacturers too - shoot a scene at "correct" exposure on multiple models of their cameras using their standard log profile and you should be able to pull all the footage into post, apply their LUT or CST, and everything should match across all the cameras. Canon did that with the XC10 - matching it to the C-series line, which is why it took the same ridiculously expensive media as the C200/C300 but didn't use the cards anywhere near their performance limits, and used the C-Log profile but didn't use the full range of 10-bit code values, instead mapping the XC10's limited DR to where that DR lined up with the same values coming from a C500 or whatever. In the case of the XC10 it meant they basically crippled the camera for the sake of compatibility, but it would have been great to be able to use these for BTS and reference use (which seemed to be something they did) without having a separate colour pipeline in post, which would have been really useful for the VFX folks who might want to see the setups and might also want a colour reference from them.
-
I thought this video was interesting...
-
I can definitely see it as a crutch for narrative film-making. I'm exploring this in the videos I'm looking at and in my own work, which for travel is mostly montages with music. The works I'm analysing seem to use the intensity of cuts as an added device to build and release tension along with the music and wider sound design. To a certain extent it's like b-roll is part of the sound design, and as long as the visuals are remotely relevant then you can kind of cut them up in whatever way you want, creating everything from a slow peaceful progression to frantic wall-of-sound-and-visuals moments. There was a particularly good example of this in one of the Parts Unknown episodes set in a high-pressure region (IIRC it might have been Beirut) where they built up to fast cutting between b-roll of the signs of war there, a heavy metal rock band, and the whistling of a pressure pot steam valve in full release.

ThisGuyEdits did an interesting video on that.... Definitely editing for short attention spans...!
-
Some grabs (all from YT 4K mode using a screen capture tool), all from: https://www.youtube.com/channel/UCeYoFHiCGpoJYofP26ZpSaQ
GH6
R5C
R3
Komodo
S1H
FX3
C70
UMP12K
I tried to grab the highest resolution / best codec images. I'm not really sure how much we're looking at the cameras and how much we're looking at the LUTs, which presumably the manufacturers supplied (hopefully!). It would be fascinating to put each of these through a CST to LogC and then put them all through the LogC-709 LUT from ARRI to get the same colour science on everything (a rough sketch of that last step is below).
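To make the CST-to-LogC idea concrete: a conversion like that decodes the source camera's log curve to scene-linear, applies a 3x3 matrix from the source primaries to ARRI Wide Gamut, then re-encodes with ARRI's LogC curve before applying the LogC-709 LUT. Here's a minimal sketch of that final encode step in Python, using the commonly published LogC3 (EI 800) constants - treat the exact numbers as an assumption and check them against ARRI's own documentation; the matrix and source decode are left as placeholders:

```python
import numpy as np

# LogC3 (EI 800) parameters as commonly published - verify before relying on them
A, B, C, D = 5.555556, 0.052272, 0.247190, 0.385537
E, F, CUT = 5.367655, 0.092809, 0.010591

def linear_to_logc3(x):
    """Encode scene-linear values (18% grey ~= 0.18) into LogC3 (EI 800)."""
    x = np.asarray(x, dtype=np.float64)
    return np.where(x > CUT, C * np.log10(A * x + B) + D, E * x + F)

# A full CST would be roughly:
#   linear = decode_source_log(source_footage)            # camera-specific, not shown
#   awg    = linear @ SOURCE_TO_ARRI_WIDE_GAMUT_MATRIX.T  # 3x3 matrix, camera-specific
#   logc   = linear_to_logc3(awg)                         # then apply the LogC-709 LUT
print(round(float(linear_to_logc3(0.18)), 3))  # middle grey lands around 0.39
```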
-
That's probably the best one that's been out so far, especially considering they do the same test with the same model at the same location for multiple cameras. Unfortunately the weather is different each time, so it's not controlled the way a studio test would be. What were your thoughts about the image?
-
Batman just reminds me of spoiled 4 year olds feeling entitled to do WTF they want in the supermarket because they're wearing a costume they got from grandma at xmas. I have zero interest in seeing this stuff. I did enjoy Guardians of the Galaxy, and Dr Strange, but they're making all the crossovers now so you have to see whatever person X did in the person Y film to keep up. In 10 years it will have fully transformed into an action-comedy series "Marvel: Episode 32 - The one where they eat chips and save the world from armageddon using only a plastic fork".
-
I agree about the GH5 having a sharpened / NR look. That's one of the reasons I'm keen to see how the GH6 renders its downsampled Prores. I'd probably shoot 1080p Prores HQ, or perhaps a slightly higher resolution if they have one, like 3K.

In terms of the GH6 vs the P4K, the GH6 is far more neutral than the P4K in latitude tests. (Unfortunately there are only a few P4K images that I can find - if you have any others then please share. The GH6 test from CineD has more of the GH6.) GH6 - 4 stops under NR: P4K has colour shift issues at 4 stops under.......

That's true - a few cameras support both BM and Atomos recorders. I was talking about getting internal RAW though, which I still don't think they'd do as it would essentially skewer the relationship with Atomos.

I think the footage released so far has been pretty random and mostly not that helpful. What we really need is someone to hire a model and a nice cine prime for a day and just go out and shoot in the sun, in the shade, and do some sitting tests with a colour checker in controlled conditions, cycling through the codecs, perhaps going under and over a few stops to see the latitude of each codec.

From the CineD latitude tests:
- GH6: unusable at -5 stops
- P6K: starts to have noise and colour shifts at -5 stops
- A74: -4 stops looks clean and -5 with NR is still clean but shifts in colour
- GH5ii: -4 is fine and -5 is ok but has colour shifts
- FP: CineD compared exposures to "ETTR" so no idea WTH that means
CineD certainly have no standard test approach for latitude tests - almost no cameras have over-exposures pulled back and the under-exposure testing seems erratic.
- Komodo:
- BGH1: -5 is noisy but colour is relatively good
- A7S3: -5 is good with NR, but a bit messy without it
- S5: starts having issues at -3 stops
- R5: things start to go bad at -5
- Zcam E2-F6: colour shifts starting pretty early in the Prores, but creeping into the RAW too
Not too sure what to make of all of that, considering we don't really have over-exposure tests for most of them, but it seems that most cameras require NR at -4 and fall apart at around -5, with some sliding into terrible colour shifts and others staying relatively neutral, with the GH6 on the neutral side.
-
Great to hear!
-
I'd doubt that it would help with the corner warping problem. I suspect that it is simply a side effect of how the IBIS mechanism works and there's been no talk about an innovation in the stabilisation approach, just that it's now got more stops of stabilisation.
-
Sorry to hear this - it's probably too expensive to fix. A few thoughts: it could be a faulty sensor. In a nice clean environment, maybe give it a gentle blow-out with a rocket blower to remove any stray fibres or dust that might be interfering with the mechanism. You could also try flashing the firmware again, assuming it will let you, in case that has gotten a little corrupted over time. Otherwise, that trick of flipping it around might be enough, depending on your shooting style and situation.
-
Wow, the S1 has no ALL-I codecs? That's a bummer. The P4K has about the same DR as the GH6 in DR Boost mode - 11.6 / 12.7 vs 11.5 / 12.8 (the source is the P6K tests, where they make some comments about the P4K: https://www.cined.com/blackmagic-pocket-cinema-camera-6k-lab-test-dynamic-range-latitude-rolling-shutter-more/). From my perspective the GH6 and P4K are so incredibly different that they're incomparable. You'd have to be shooting in pretty narrow and controlled conditions for their differences not to be relevant - IBIS, battery life, photo mode, automatic settings, etc etc.

I doubt it. They've announced Prores RAW with Atomos and I doubt they'll provide a second RAW option - most cameras don't even provide one RAW option!!
-
Some thoughts.....

Think of the total ecosystem that you have - cameras, lenses, batteries and chargers, and all your accessories - and then build a system that works for you. Think about failures and having equipment redundancy too. My travel setup is two MFT cameras and an action camera. If I want to shoot simultaneously then I can use the same lenses across the two bodies. If my action camera dies I can use a wide on a second body to emulate it. If my main camera (GH5), lens (Voigt 17.5mm) and mic (Rode VMP) disappear then I have backups for those three (GX85, 14mm f2.5, Rode Video Micro) which are small and easy to carry as spares. If my backup / 2nd setup dies then I can use my action camera in a pinch, and there's also my phone.

The crop factor between MFT and FF is 2 (two stops of DoF difference), so to match the FOV and DoF of a 50mm at F4 on a FF camera, you would need a 25mm F2 lens on an MFT camera (there's a quick sketch of this arithmetic below). But you should also think about the base ISO of 2000 on the GH6, as I'd assume you'd be using V-Log and DR Boost, which might require new NDs to suit the new lenses you'll probably have to buy. Don't just think of the fact that MFT has faster lenses with the same DoF, because the sensor is smaller and therefore noisier and requires more light to get the same performance, but then the technology of the sensor and processing all apply, and then there are sensors with dual-ISO or dual-gain, and then there's blah blah blah blah. Long story short - there are too many variables in play to predict low-light capability. The only real way to compare low-light is to look at a test of the two cameras you're comparing shooting the same scene with equivalent lenses.
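A minimal sketch of that equivalence arithmetic (in Python; the crop factor of 2 for MFT vs full frame is the only input, everything else is the example from above):

```python
def mft_equivalent(ff_focal_mm, ff_fstop, crop_factor=2.0):
    """Return the MFT focal length and f-stop that match a full-frame lens's
    field of view and depth of field (divide both by the crop factor)."""
    return ff_focal_mm / crop_factor, ff_fstop / crop_factor

print(mft_equivalent(50, 4.0))  # -> (25.0, 2.0), i.e. a 25mm F2 on MFT
```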
-
Actually, what I said was "I've heard that most professional productions shoot Prores rather than RAW, and that RAW is typically only used by the big budget productions and by amateurs." and that's when this side-conversation started. I'm still keen to hear more about how often and why productions would shoot Prores even on cameras that can shoot raw. I figure the explanation is probably a combination of it being simpler to work with (as all software can play it) and that its image quality is good enough.
-
Well done for saying something that was correct, but unfortunately I have completely lost any sense of what point you are trying to make. Perhaps you could try again?
-
Panasonic have confirmed that the GH6 will get Prores RAW in a firmware update, so you can capture RAW if you like when that arrives. Every time there's an MFT camera released, people come out of the woodwork to try and tell us that AF and Toneh are somehow now mandatory for all film-making. Once upon a time everyone laughed at digital photography, saying that it would never replace film. What's that saying? The people saying something isn't possible should get out of the way of those who are doing it. 🙂
-
The bitrates are available from most reviews of the camera - this one from newsshooter.com is pretty comprehensive - and for the higher frame rates I just multiply the bitrate by 24 and divide by the capture frame rate to get the effective bitrate the viewer will see when the footage is played back at 24p (there's a quick sketch of this arithmetic at the end of this post). In reality a slow-motion shot taken at 200Mbps / 240p won't have exactly the same image quality as a 20Mbps 24p file, because the subject doesn't move nearly as much from frame to frame at 240p, but like anything it will be subject specific. 50Mbps is a good amount of bitrate for recording an interview mid-shot with a blurry background, as not that much will be moving, but it's nowhere near enough if you're recording trees and snow during a blizzard.

Let me put it another way. When we record a scene with a camera, grade it, and then play it back on a calibrated playback device, the colour and dynamic range that appears on the calibrated display is almost never accurate. We do all kinds of creative things to make the image look more in keeping with the vision of the director or other people in charge. We add or remove contrast, we change the WB, we cool the shadows and warm the mid-tones, we take keys and selectively make those areas brighter or darker, more or less contrasty, or differently coloured, etc. In other words, the output is a pleasant fiction.

There are other elements that are a fiction too. We record at 8 bits and 10 bits and 12 bits and 14 bits and more; the sensor is reading things at a certain bit-depth and doing colour and gamma changes before the bit-depth gets converted to the output format, which is then decoded and processed at a different bit-depth inside the software we're using (IIRC Resolve operates internally at 32 or 48 bits), then output at whatever bit-depth you choose when you select the delivery codec. Resolution is also a fiction. The sensor captures an image, it gets de-bayered, it might be downsampled in-camera to the capture resolution, it gets scaled to the timeline resolution in the NLE, and output at perhaps a different resolution again for the delivery format. Softening and halation and sharpening are applied. Most/all of these operations interpolate between pixels, so it's not a simple 1:1.

Now, take the situation where you're recording with a GH5. The camera is capturing 5.1K at who-knows-what bit-depth and perhaps its 10 or so stops of DR, it converts to the colour profile which might crop the DR, it rescales the image to the target resolution, and it saves the file to the codec you've selected. The NLE loads the file, colour grading does whatever the hell you want to the image, and then it gets exported to whatever delivery format you've chosen. If we export that to a rec709 colour space, let's say in UHD or DCI 4K, the image will have been downscaled, the bit-depth downconverted, and the colours changed - potentially quite severely. If we export that to a rec2020 colour space, also in UHD or DCI 4K, the image will still have been downscaled, the bit-depth downconverted from whatever was read off the sensor, and the colours changed in basically the same way. Sony cameras famously had the worst-rated colour science, but that was actually the most accurate colour science when it came to capturing things as they really were.
No-one likes accurate colours, and a search for the word "accurate" in any colourist forum would probably only return results from people who stumbled into the discussions and didn't know that the concept is completely alien to the task at hand.

There isn't even agreement about how to master HDR content... Netflix and the creators of broadcast standards have an opinion on how it should be done technically, but there are significant flaws with their logic and there are completely differing opinions, especially from people who are very well respected. I've linked to the time here:

In his masterclass he gave a slightly different explanation, which included the idea that when a person is presented with two displays, one SDR and one HDR, they are likely to adjust the brightness on them differently based on how people grade, so if one of them gets graded brighter or darker (maybe because it's "correct") the user will probably just change the brightness and undo the "correctness" of the gamma shift. I'm probably saying this all wrong, but in a sense correctness is a myth - the TVs out there in the wild have vastly different colours and gammas and are in all sorts of different environments too. The idea of chasing some kind of technical purity is a myth to begin with, because colour grading is an art and not a science, and once the image leaves the streaming platform, who-knows-what will happen to it before it reaches the consumer.

Long story long: recording and publishing to SDR is a fiction and always was, so publishing to HDR can be done just as successfully while still remaining a complete fiction. Hopefully that makes sense?
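Going back to the effective-bitrate arithmetic at the top of this post, here's a minimal sketch (in Python, with the example numbers from above):

```python
def effective_bitrate_mbps(recorded_mbps, capture_fps, playback_fps=24):
    """Bitrate per second of *played-back* footage when high-frame-rate
    material is conformed to the playback frame rate."""
    return recorded_mbps * playback_fps / capture_fps

# 200Mbps recorded at 240p plays back like a 20Mbps 24p file
print(effective_bitrate_mbps(200, 240))  # -> 20.0
```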
-
So, if they were the only ones that offered RAW, and they all shot Prores, rather than saying...... wouldn't it be more correct to say that all cameras that shot RAW also shot Prores? Which was my original point... that cameras shot either RAW & Prores or h26x-style codecs, but very few offered h26x and Prores. Which was why it's so hard to compare the two codecs.
-
I have noticed that there seems to be a world of subtle timing with editing and the edit points themselves. All edits have a rhythm and pace of their own, music or no music. Try clapping your hands to an edit, any edit, and you may be able to "find" that pace. So, edit points can happen on the beat, slightly before/after it, or deliberately off the beat (either on the off-beat or intentionally nowhere near any beat and therefore unexpected). When there is music then there is a time signature involved and there will be bars, where the first beat in the bar is more important than the other beats and may be emphasised. So then you can have edits aligned to the first beat in a bar, or the third beat in a bar, etc. So much variation.... but it's hard to really "see" them, because our hearing is much more time-sensitive than our vision.

Here's a super niche trick to 'hear' the edits in Resolve:
1. Start with an edit with separate clips (use Scene Cut Detect if you need to).
2. Get a sound file with a short sound and put it in a bin on its own (a short beep sound is good).
3. Duplicate the audio part of all the clips in your edit to another track.
4. Highlight them all and disable the Conform Lock Enabled option on them (and ensure it's on for all other clips).
5. Right-click on your timeline and select Timelines -> Reconform from Bins.
6. In the Conform Bins, unselect all bins except the one with your beep sound in it.
7. In the Conform Options, select only Codec (that's the only setting I found that works).
8. Hit OK.
9. Every clip that you disabled the Conform Lock on should now be your beep sound, and if you've trimmed it right you will now have a track that beeps every time there's an edit point.
10. Adjust the volume on that track to something sensible.
11. Watch your footage and be able to 'hear' the edits.

Let me know if you actually do this and what you find. I'm currently looking at Mustafa Bhagat's work on Parts Unknown and he has a fantastic sense of rhythm and timing (being a musician in addition to an editor probably helps with this), so I'm hoping to learn from these.
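For anyone who'd rather build the beep track outside Resolve, here's a companion sketch in Python (standard library only, not the Resolve workflow above): given a list of cut times in seconds - however you extract them from your edit, e.g. from an exported EDL - it writes a WAV file with a short beep at each cut, which you can drop onto a spare audio track. The file name, tone and lengths are made-up defaults:

```python
import math
import struct
import wave

def write_beep_track(cut_times_s, out_path="edit_beeps.wav",
                     sample_rate=48000, beep_hz=1000, beep_len_s=0.05):
    """Write a mono 16-bit WAV with a short sine beep at each cut time."""
    duration_s = max(cut_times_s) + 1.0
    n_samples = int(duration_s * sample_rate)
    samples = [0.0] * n_samples
    for t in cut_times_s:
        start = int(t * sample_rate)
        for i in range(int(beep_len_s * sample_rate)):
            if start + i < n_samples:
                samples[start + i] = 0.5 * math.sin(2 * math.pi * beep_hz * i / sample_rate)
    with wave.open(out_path, "w") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)          # 16-bit PCM
        wav.setframerate(sample_rate)
        wav.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))

# e.g. beeps at every cut in a hypothetical 30-second edit
write_beep_track([0.0, 2.4, 5.1, 9.8, 14.0, 21.5, 28.2])
```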
-
I find the aesthetic of 120p to be very artificial - it's absolutely a special effect that calls attention to itself. The aesthetic of 60p conformed to 24/25/30p is still connected to reality but is more surreal, like we experience in moments of intoxication or heightened emotional states, and I notice it used in narrative work regularly. I think there's a reason that cinema cameras have offered 24/25/30/48/50/60p for decades while 120p is a relatively recent arrival. The triplet of the ALEV sensor, the DGO sensor and the new GH6 sensor all offer dual-gain up to 60p, and I think that correlates with how these frame rates actually get used. I don't think I'll miss it above 60p, really.