Everything posted by kye
-
I think the answer to your question is probably that there is no camera that meets your requirements. Any camera that is "something similar, in terms of size" to a GoPro will have a small sensor and therefore poor low-light performance. If you want better low-light performance from a larger sensor, then that means the depth of field will be shallower, which means you'll have to focus the lens, which I thought violated your "fast use" criteria. I don't really see the problem with having an interchangeable lens - you can simply install a lens and then never take it off. Almost every camera with a significantly larger sensor will have an interchangeable lens. Cameras with larger sensors that don't have interchangeable lenses will likely have zoom lenses with very large zoom ranges, making them physically much larger and giving shallower depth of field, which means focusing from shot to shot.
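To put rough numbers on the sensor-size trade-off, here's the standard depth-of-field equivalence arithmetic as a back-of-envelope sketch (the ~5.6x crop factor for a GoPro-class 1/2.3" sensor is my assumption and only in the right ballpark, not a spec):

```python
# Rough depth-of-field equivalence arithmetic.
# Assumption: a GoPro-class 1/2.3" sensor has roughly a 5.6x crop
# factor vs full frame; exact values vary by model.

def equivalent_aperture(f_number: float, crop_factor: float) -> float:
    """DOF-equivalent f-number on full frame for the same framing."""
    return f_number * crop_factor

# A fixed f/2.8 lens on a ~5.6x crop sensor behaves (for depth of
# field) like roughly f/15.7 on full frame -- almost everything is
# in focus, which is why action cameras can skip focusing entirely.
print(f"f/{equivalent_aperture(2.8, 5.6):.1f}")   # -> f/15.7

# The same f/2.8 on full frame stays f/2.8: far shallower DOF,
# so focus has to be pulled from shot to shot.
print(f"f/{equivalent_aperture(2.8, 1.0):.1f}")   # -> f/2.8
```

The equivalence is approximate, but it shows why a fixed-focus design is viable on a tiny sensor and not on a big one.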
-
The digital stabilisation was done in Resolve, which has better digital stabilisation than ANY camera will EVER have, because Resolve can see into the future and cameras can't. IBIS / OIS stabilises DURING the exposure of each frame; digital stabilisation does not, and therefore the frames will have motion blur. I don't understand how you could watch that comparison and still genuinely think that one can replace the other. I'm sorry if this is rude, but I literally can't understand how you could see it and not get it. The IBIS means that the frames are all sharp; with digital stabilisation the frames are blurry as all hell. I shoot very unstable handheld footage all the time and stabilise digitally in post, and this whole thing has been obvious to me from basically day one.
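To put numbers on why shutter speed matters here, a back-of-envelope sketch (the frame width and shake rate below are assumed illustration values, not measurements from the test):

```python
# Back-of-envelope: how much motion blur gets baked into each frame
# by handheld shake, which no post stabiliser can remove.
# Assumed values for illustration: 4K-wide frame, shake that pans
# the framing at 20% of frame width per second.

frame_width_px = 3840
shake_speed_px_per_s = 0.2 * frame_width_px  # assumed shake rate

for shutter in [1/48, 1/250, 1/1000]:        # exposure times in seconds
    blur_px = shake_speed_px_per_s * shutter
    print(f"1/{round(1/shutter)}s shutter -> ~{blur_px:.0f}px of blur per frame")

# 1/48s   -> ~16px of blur: visibly soft frames after stabilisation
# 1/250s  -> ~3px
# 1/1000s -> ~1px: this is why digital stabilisation only looks
# clean with short shutter speeds. IBIS/OIS instead cancels the
# motion DURING the exposure, so the blur never happens.
```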
-
Well done everyone... you inspired me to shoot a little test to show why Digital Stabilisation can't replace OIS / IBIS. @Emanuel - this is the video that should end the debate. Digital stabilisation can be great, but only if there is very little motion and only if you have short shutter speeds. I'd encourage everyone to realise that Digital Stabilisation is just DIFFERENT to OIS or IBIS. It's a different tool for a different job. I often use digital stabilisation on my IBIS footage and it can work really well. Hopefully this clears things up?
-
I'm not aware of any way to check focus while recording, and the GH5 is my main camera. If you used an external monitor then I'd imagine it would be a standard feature on all of those, although if it was an external recorder then you'd have to look at the functionality of that device. It really depends on how you shoot. In terms of the GH6, who knows. The precedent set by the GH3, GH4, and GH5 was that each leapfrogged the competition and really stood out. I'm not really sure how the GH6 could do that against the current crop of cameras, but maybe they'll pull a rabbit out of their hat.
-
Interesting. Does what you describe above still apply for ML RAW files "developed" with the Arri LogC to 709 LUT in third party ML applications, or exported as CinemaDNGs and processed in the Resolve Raw Panel? Or just when developing them with Adobe products?
-
@BTM_Pix Got it working and it doesn't do anything in the Cut and Edit pages - just in the Colour page. Bummer. I'll have to re-think my workflow.

Currently my workflow is to pull selects into a timeline and rearrange appropriately to make an assembly, then duplicate the timeline, add music, and cut ruthlessly, touching up shots I like as I go. Then I'll continue cutting and refining, and at some point I'll go across to the Colour page and do a "proper" colour session where I do primaries and secondaries etc. The advantage of that approach is that I only colour grade the shots that survive a couple of passes, and I only really go to town on the colour of shots that are basically in the final edit. The downside is that it involves lots of trips to and from the Colour page during the editing process, which means I have to change from the Speed Editor to the mouse and keyboard and back again. The alternative would be to do a colour pass on all the footage in the assembly, but that would mean lots of work on shots that get culled early.

It's interesting that the officially supported panels have the same Colour-page-only limitation as the Beatstep. I also noticed that the Printer Lights don't work in the Cut or Edit pages either. Hmmm.
-
This is very interesting actually. I know you mentioned it before, but now I understand my workflow better. The use-case for this is that when I'm editing I want to be able to do basic colour adjustments to correct WB and exposure, but without having to swap controllers and without having to swap from the Speed Editor in the Cut page to the mouse/keyboard in the Colour page. The only functions that Resolve provides shortcut keys for are the Printer Lights, which don't really suit the way I work as they're Offset-based, which is designed for Log and not 709. Do you happen to know if a controller app like Tangent-Vs can adjust the LGG wheels while Resolve is in the Cut or Edit pages?
-
Yes. For editing, I now have the BM Speed Editor. It's BM's official product and has a bunch of features that can't be done with any other hardware. These are actually quite useful and make editing a really intuitive process.

For colour grading I have the Arturia Beatstep and the Beatstep Resolve Edition software. The Beatstep device itself is a music production synthesiser and has absolutely nothing to do with Resolve or video editing at all. The Beatstep Resolve Edition is a suite of software that assembles several different pieces of software to essentially "hack" the controller so that it works with Resolve. It isn't supported at all by BM and controls Resolve by faking mouse movements and keystrokes (see the sketch after this post), so basically Resolve thinks you're just moving the mouse and typing super fast. It has a huge amount of functionality built in but is a little clunky to use in some instances. Don't get me wrong, you could absolutely use this professionally - I've been in contact with the developer and he's mentioned that it's used by many professional colourists, and from my experience with it I would suggest that is very likely to be true. I've been able to rip through timelines and perform basic corrections to match shots and apply grades, and it's quite straightforward to average 10s per shot, even if the shots are from different cameras. The Beatstep Resolve Edition doesn't provide any editing features, and the buttons aren't the right kind for that anyway (they're touch-sensitive pads designed for music making) - I'd prefer clicky keyboard-type buttons for editing.

In Resolve, I see that there are basically four "levels" of control:

1. Devices where you can assign a key on the device to a shortcut in Resolve. This is the case with many third-party devices, but you'll be limited to what functions Resolve will let you map to a keyboard shortcut.

2. Devices plus custom utilities where you can assign a key on the device to run a sequence of events on the computer, like multiple keypresses or mouse movements. These are hit and miss and difficult to use - I've downloaded a few and uninstalled them almost immediately because they don't do what I want them to do. Note that Apple products and some third-party products aren't recognised by these apps, as Apple doesn't let third-party apps capture keystrokes, probably as a security mechanism.

3. Setups like the Beatstep + Beatstep Resolve Edition, where there's custom hardware and software that control the mouse and keyboard to operate Resolve. This is the only one I'm aware of, which makes sense because it's months/years of work to code something like this.

4. Dedicated BM hardware. This can do things that none of the above can do because the device is seamlessly integrated into Resolve.

Note that the custom utilities that can change keyboard shortcuts can't see the Speed Editor, and Resolve itself also doesn't let you re-map any of its keys. There are threads on the BM forums of people asking / complaining about this. The main thing is that the Speed Editor has 9 dedicated keys for multi-cam use that can't be used for anything else, so if you don't do multi-cam then they're completely useless to you, even though there are other things that would be really great to have mapped to the device.

Resolve is a bit like Apple in the sense that it's a closed eco-system: they don't really support third-party devices that much, and on their own devices it's their way or the highway, essentially.
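For anyone curious what the "faking mouse movements and keystrokes" approach looks like in practice, here's a minimal sketch using the pynput Python library to send a shortcut to whichever app has focus. To be clear, this is an illustration of the general technique, not how Beatstep Resolve Edition is actually implemented:

```python
# Minimal sketch of the "synthetic input" technique: a controller
# event arrives, and we forward it as a keyboard shortcut that the
# focused application (e.g. Resolve) receives as if typed by hand.
# Illustrative only -- not how Beatstep Resolve Edition is built.
from pynput.keyboard import Controller, Key

keyboard = Controller()

def send_shortcut(*keys):
    """Press a combination like Cmd+Z, then release in reverse order."""
    for k in keys:
        keyboard.press(k)
    for k in reversed(keys):
        keyboard.release(k)

# Example: map a hypothetical controller pad to "undo".
send_shortcut(Key.cmd, 'z')
```

Note that on macOS this kind of synthetic input only works once the process has been granted Accessibility / Input Monitoring permissions, which lines up with the Apple restriction mentioned above.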
-
A better question would be "is there enough room in the hearts of the Canon executives for all of that". Spoiler alert, the short answer is NO. The longer answer is "hahahaha.. you're kidding right? Shut up stupid customer and just be thankful for whatever we give you, and remember that when talking about our products you're only allowed to talk about the resolution and the colour science - every other comment is banned"
-
Can you say a bit more about what focal length / FOV you are looking for, and what you're trying to film? A GoPro has a very wide-angle lens, is fixed focus, and has a very small sensor, which gives a very deep depth of field. To get significantly better low-light performance you will need a camera with a larger sensor, but that means shallower depth-of-field when you keep the aperture of the lens opened up, which is what good low-light performance requires. There might be a sweet spot with an MFT camera and something like the TTArtisan 7.5mm F2 lens, which should have relatively deep depth-of-field when wide open and could be focused at around 2m/6.5ft, but it really depends on what you're shooting. Having a fast and wide lens is a real challenge.
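To sanity-check that ~2m figure, here's the standard hyperfocal-distance formula (a sketch; the 0.015mm circle of confusion is a commonly used value for MFT, assumed here rather than taken from any spec):

```python
# Hyperfocal distance: focus here and everything from roughly half
# this distance to infinity is acceptably sharp.
#   H = f^2 / (N * c) + f
# Assumptions: TTArtisan 7.5mm at f/2 on MFT, with a commonly used
# MFT circle of confusion of 0.015mm.

def hyperfocal_mm(focal_mm: float, f_number: float, coc_mm: float) -> float:
    return focal_mm**2 / (f_number * coc_mm) + focal_mm

H = hyperfocal_mm(7.5, 2.0, 0.015)
print(f"hyperfocal: {H/1000:.2f} m")          # ~1.88 m
print(f"sharp from: {H/2000:.2f} m to inf")   # ~0.94 m onwards
```

So focusing that lens at roughly 2m keeps everything from about 1m to infinity acceptably sharp, which is what makes a set-and-forget setup plausible.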
-
1Gbps is fine for 8K RAW, but if you were shooting a documentary with literally hundreds (maybe even 1000+) hours of footage, halving the bitrate is a significant thing and can make a huge difference (rough numbers in the sketch after this post).

The way documentaries are going now, you need a build-up and climax, which works for things like athletes preparing for a competition or some other kind of fixed deadline, but it's very common for people shooting a documentary to not have an end-date. These situations are very difficult to finance, as there's no guarantee of a good climax, or even of a story where anything happens. Shooting is normally done with the DoP essentially "on call": the subjects reach out when anything specific is happening and the DoP drops what they're doing and goes to shoot. For this reason it's common for the DoP to own all their own gear (you can't rent a camera or lenses on one hour's notice in the middle of the night), so these projects are often self-funded and can take years to film. This is actually quite a likely situation for something like the R5C: a DoP would own something like this, rent more expensive gear for bigger projects, and hire out this camera and their own set of (likely vintage) lenses along with themselves on smaller-budget productions. In this sense, having a "medium" bitrate that's still Netflix-approved would matter more for a camera like this than for a cheaper hybrid or a more expensive cinema camera.

Canon are well regarded for the colour science of their skin-tones, but they are absolutely NOT well regarded for the quality of the compressed codecs on their hybrid cameras. That's what we're talking about here. I see ML RAW frame grabs from a 5D3 regularly (thanks to @mercer) and when shot properly and graded with a simple LUT the skin-tones should remind you of an Alexa. If you're not going "wow" then something horrible has happened to the image. Think about all the YouTubers shooting 4K with Canon FF cameras - when was the last time you looked at one of their videos and thought "holy cr*p those colours are AMAZING"? Never? Exactly...

Ummm... how? Digital stabilisation can line up the frames with each other, but OIS / IBIS actually stabilises DURING THE EXPOSURE OF EACH FRAME. If you have ANY motion blur in ANY frame then it's there forever and digital stabilisation can't do a single thing about it. Even worse, once the digital stabilisation has worked its magic, the shot looks smooth but there will be random blurring of frames without any corresponding change to the overall shot. In other words, digital stabilisation without very short shutter speeds will look worse than no stabilisation at all.
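Rough numbers behind the "halving the bitrate" point above (a quick sketch; the hour counts are the hypothetical documentary volumes mentioned, not real project data):

```python
# Storage arithmetic for long-form documentary shooting.
# 1 Gbps = 1000 Mb/s = 125 MB/s; figures below are decimal TB.

def storage_tb(bitrate_mbps: float, hours: float) -> float:
    return bitrate_mbps / 8 * 3600 * hours / 1e6  # MB -> TB

for hours in (100, 500, 1000):
    full = storage_tb(1000, hours)   # 1 Gbps
    half = storage_tb(500, hours)    # half the bitrate
    print(f"{hours:>5}h: {full:6.1f} TB at 1Gbps vs {half:6.1f} TB at 500Mbps")

# 1000h: 450.0 TB vs 225.0 TB -- halving the bitrate saves hundreds
# of terabytes (plus the drives, backups and transfer time that go
# with them) on a self-funded, multi-year documentary.
```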
-
One article says the EIS crop factor is 1.25, so I think that makes the widest end 22.5mm equivalent - wider than 24mm, but really not by much. Not really a good vlogging setup. It's a tough one for compact vlogging if you want wider than 20mm and stabilised. The X3000 action camera has gimbal-like OIS built in but doesn't zoom or have crazy slow-motion; on the other hand, its fixed focus means no more AF debates - it's the only type of focus system that is actually reliable.
-
We were talking about options for avoiding the "clay" skin-tones which the non-RAW codecs provide. The only way to do that is to either shoot RAW or output RAW to something that won't degrade the skin-tones. Yeah, sometimes more isn't better. Sometimes it's a PITA actually.
-
Interesting, if slightly repetitive, article on a portable post-suite: https://ymcinema.com/2022/01/20/building-your-editing-grading-suite-in-a-hotel-room/

This is something I'm moving towards gradually as I upgrade equipment. My goal is to have a portable setup that could do solo-shooter projects at a professional level. I travel with a carry-on and a single checked suitcase, and this would have to contain my cameras/lenses, clothes/toiletries/etc, as well as the post-suite. I typically travel for 3-6 weeks at a time, shooting and editing and releasing videos while "in the field", staying in hotels, air-bnbs, etc.

Currently my portable setup is designed to include:

GH5, GX85, 4-5 lenses, Rode VMP+
MBP with Resolve
BM UltraStudio Monitor 3G (Thunderbolt to HDMI, controlled by Resolve)*
BM Speed Editor
Beatstep music synthesizer with Beatstep Resolve Edition hack from Tachyon Post
Portable HDMI monitor**
X-Rite i1Display Pro calibration probe
Platform***

* = about to order this
** = will order this when I get to travel again
*** = currently investigating options on this

The combination of the BM UltraStudio Monitor 3G and portable HDMI monitor will enable Resolve to bypass the operating system's colour management and resolution/frame-rate controls and give an accurate representation of the video.

I include the calibration probe in case I want to use the TV in the accommodation as the external monitor. This would depend on the layout and ergonomics of the location, but it's useful to include and isn't that big or heavy.

The "Platform" is something that is unlikely to be relevant for others, but is useful to me. I like to sit next to my wife on a couch or recliner and edit while she watches TV or is on her phone or computer. This means I have the setup on my lap, and in order to have a flat surface for the controllers, and to ensure there are gaps for airflow from the vents in the laptop, I use a flat surface of some kind. I currently use a bamboo cutting board, but this would likely be too heavy to travel with, so I'm contemplating other options like balsa wood, aircraft aluminium, or maybe a thin ply of some kind.

The other thing I might include is an ultra-high-CRI calibrated 6500K light-globe or two. These mean the light in the environment can be controlled to a neutral colour temperature and tint, so the room surrounding the display doesn't provide a non-neutral reference.

I like the Beatstep Resolve Edition but it's not perfect, and I'd be curious if BM released a basic colour grading console comparably sized to the Speed Editor. As seen in the article above, the Micro Panel is huge compared to a laptop and Speed Editor combo, and really doesn't need to be - something the size of a full-size keyboard would be sufficient to hold 3-4 trackballs, a row of knobs, and some buttons.

Is anyone else looking at more portable solutions?
-
According to https://www.canonrumors.com/here-are-some-canon-eos-r5-c-specifications/ the rates for 8K (therefore uncropped) are:

In terms of ProRes and what is visible, ProRes HQ in 1080p is ~180Mbps and was used on feature films that screened in theatres, so it could be argued to be 'sufficient' in terms of bitrate. But if we 'play it safe' and use a 4K flavour, then we could record externally to flavours like:

ProRes 422 (UHD) - 471Mbps
ProRes 422 LT (UHD) - 328Mbps
ProRes 422 Proxy (UHD) - 145Mbps

Of course, this would require the camera to output 8K and the recorder to support that and downscale appropriately. Obviously if you need the full 8K then just go with that, but if you were filming for hours and hours, say interviewing people for a documentary, then having more reasonable bitrates but still non-plastic skin-tones would be a good thing to have.
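For a sense of what those flavours mean in storage terms, a quick sketch using the bitrates listed above:

```python
# Convert the UHD ProRes bitrates above into storage per hour.
prores_uhd_mbps = {
    "ProRes 422":       471,
    "ProRes 422 LT":    328,
    "ProRes 422 Proxy": 145,
}

for name, mbps in prores_uhd_mbps.items():
    gb_per_hour = mbps / 8 * 3600 / 1000  # Mb/s -> GB/hour (decimal GB)
    print(f"{name:<18} ~{gb_per_hour:5.0f} GB/hour")

# ProRes 422        ~  212 GB/hour
# ProRes 422 LT     ~  148 GB/hour
# ProRes 422 Proxy  ~   65 GB/hour
```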
-
Sounds like shooting RAW is the solution then. It's a pity, as RAW means huge file sizes or dramatic sensor crops, but it's better than nothing I guess. Maybe you could also go old-school and partner these with an external recorder that does a high-end ProRes variant?

The manufacturers know that resolution above 2K is borderline invisible - the problem is the consumer market, which is uneducated and thinks that MORE = BETTER. Just look at threads like this and see how many people are asking about the quality of the 2K codecs and modes... none. But every time there's a new camera, oh boy, are people interested to know if there's a 2% difference in something that makes zero difference whatsoever. There's a phrase in business, 'what gets measured gets managed', and resolution gets measured while image quality most certainly does not.
-
I disagree that it's the same. Firstly, the human eye has limits, and depending on a range of factors those are somewhere between 2K and 4K. Most of the time, publishing in anything over 4K will be a complete and utter waste of time, with exceptions being things like Omni, which are super rare.

The downsampling argument is reasonable, but the GH5 downsamples from 5K to 4K, and has been doing so since 2017, and there are very serious diminishing returns that kick in: downsampling 8K to 4K would not have much advantage over downsampling 6K to 4K, for example.

VFX is fine, but how many productions are saying "we need 8K RAW for VFX but somehow couldn't just rent a UMP12K", or "we shoot VFX on so many productions that it's commercially viable to buy an R5C, but we couldn't justify buying a UMP12K"? I totally understand that there are lots of shooters who would like the form-factor of an R5C and wouldn't be able to use a UMP12K, but those users are not heavy VFX users. It's a different audience.

VR is a completely different beast, and this is where resolution really does matter (because that 2K limit to human vision applies over a narrow FOV), but anyone filming VR isn't using an R5C to do it. They're using a dedicated VR camera like the Insta360 Pro 2, which means no messing around with lenses, sensor plane alignments, inter-eye distance calibration, etc.

Fast-forward a bit and think about the statement "we need 24K because it's the year 2030" and see how ridiculous that is. There's a point at which we have enough resolution for most things on most cameras, and that point is now in our rear-view mirror...
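A rough way to see the numbers behind both claims (a sketch; the ~60 pixels-per-degree acuity figure and the viewing angles are common rule-of-thumb assumptions, not measurements):

```python
# Angular-resolution arithmetic: how many horizontal pixels does a
# display "need" before extra resolution is invisible?
# Assumption: ~60 px/degree (1 arcminute) as a foveal acuity limit.

PX_PER_DEGREE = 60

def pixels_needed(horizontal_fov_deg: float) -> int:
    return round(horizontal_fov_deg * PX_PER_DEGREE)

# A TV or cinema screen at a normal seat subtends roughly 35-50
# degrees, so ~2K-3K of horizontal resolution saturates the eye:
print(pixels_needed(35))    # ~2100 px
print(pixels_needed(50))    # ~3000 px

# A VR headset spreads pixels over ~100 degrees per eye, so the
# same acuity limit suddenly demands ~6K per eye -- which is why
# resolution still genuinely matters for VR and not much else.
print(pixels_needed(100))   # 6000 px
```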
-
You might be right about the next race being frame rates, but sadly there's not a lot of creative space left there either. The way I see it:

24/25/30p = normal. Some might distinguish between 24/25 and 30p, but it's subtle.
50/60p = emotional. Motion looks the way it looks when we experience heightened states of emotion and "time slows down".
120p = special effect. Motion looks like a special effect, with things hanging in mid-air artificially, ripples moving through objects surreally, etc. It's potentially only useful creatively to simulate extreme drug use, or outright as a special effect for the spectacle of it.
>120p = the same as 120p, only some things happen faster and you have to slow them down more; the aesthetic is the same.

I'm not saying there is no benefit to, say, 500fps or 1000fps over 120p, but the times you'd need it are few, and the look is the same as 120p, so it's not anything "new".
-
Are you noticing this look in the RAW as well? If not, it's rather odd and perhaps indicates excessive NR? If it is in the RAW then that's even more strange, and I really wouldn't know where that would be coming from...
-
It would be amateur hour if he was making a film, sure, but he's a reviewer, so stress-testing the camera is the name of the game. A review where they put the camera on a tripod, put on a low-quality lens, and film a stationary low-DR scene would be of basically zero value to anyone. The purpose of testing a camera is to push it to, and past, its limits, so that viewers can see what those limits are and align them to their own needs.

One thing I think is really missing from camera tests is a stabilisation testing setup. If someone built a rig that mechanically vibrated the camera across a range of frequencies and amplitudes, and that was repeatable and therefore comparable, and put a bunch of cameras and lenses on it, then we'd be able to see how the cameras / lenses actually compare. We wouldn't be left trying to work out if that person has steadier hands than we do, or if they've had lots of coffee that day, or if it was cold, or if they were walking on a smooth or uneven surface, or what their blood sugar is like, etc.

Depending on the shutter angle you use, you could just record RAW and do a screen grab. Even compressed 4K stills can be high quality - enough for printing at magazine size anyway - so that's an option worth considering. It also means you don't have to be Henri Cartier-Bresson on the shutter button.
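On the shutter-angle point, the conversion from angle to exposure time is simple (a sketch of the standard formula):

```python
# Shutter angle -> exposure time: t = (angle / 360) / fps.
# At 180 degrees each frame is exposed for half the frame interval.

def shutter_seconds(fps: float, shutter_angle_deg: float) -> float:
    return (shutter_angle_deg / 360.0) / fps

for angle in (360, 180, 90, 45):
    t = shutter_seconds(24, angle)
    print(f"{angle:>3} deg at 24fps -> 1/{round(1/t)}s")

# 360 -> 1/24s, 180 -> 1/48s, 90 -> 1/96s, 45 -> 1/192s.
# The narrower the angle, the less motion blur in each frame and
# the sharper any single frame pulled as a still.
```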
-
It depends on how you're using the camera. I love the EVF / IBIS combo for vintage lenses on my GH5, but shooting low angles doesn't lend itself to using the EVF, and high angles would require dozens of radical leg-extension surgeries to enable me to use the EVF!

In-camera electronic IS is handicapped compared to EIS done in post, because the camera has to stabilise in real time and can't see future motion, whereas an NLE has the whole clip and can look ahead. With RAW files there isn't really any advantage to using the in-camera EIS, so in that situation you can do a better job in post, which, if you have the time, will always be the better option. Plus, if you're recording in 8K (or as I call it, ridiculous-o-vision) you can process the living bejeebers out of it and it'll still look fine once you downsample to 4K and run it through the YT hammer-the-shit-out-of-it compression and processing.