Everything posted by kye
-
Another test - including 4K, 5K, 6K, and 8K RED raw files.
-
Any camera movement should be a deliberate artistic choice, designed to support the narrative and aesthetic of the film. I'm far from an expert, but there are lots of articles around if you google 'camera movement'. There is also the element of motivated vs unmotivated camera movement that you can read about as well. In terms of camera shake, the aesthetic is that it feels a bit more 'real', because that's how amateurs shoot video of real life, so it can give a more authentic feel to a shot. It can also make things more exciting, which is why camera shake is used in action sequences. Topics like this are so deep that you could never learn everything about them, even if you studied them for the rest of your life. However, these are the skills that will drastically improve your film-making: camera movement, composition, lighting, editing, dialogue, sound design, and so on. The beauty of film-making is that you can learn a little and get a big improvement in your work, learn more and get even better, and still study it for the rest of your life without running out of things to discover; there is no limit to the degree that you can improve your work.
-
@KnightsFan I'm far from an expert, but the approach I have taken (and I'm pretty sure it aligns with what I was advised to do) was to calibrate the monitors in the OS so that all apps get the calibration applied. I'm on Mac, though, so it could well be quite different for you. I take a slightly different approach of having a preset node in Resolve that I append to the node tree, take the screen grab, then remove the node again. I've taken multiple shots, then brought them up in the UI and compared them to the Resolve window, and found the match to be acceptable.

@Oliver Daniel The other thing I forgot to mention about colour accuracy is to get a bias light that will provide a neutral reference. I purchased one from FSI https://www.shopfsi.com/BiasLights-s/69.htm which is just an LED strip that gets stuck to the back of your monitor and shines a neutral light onto the wall behind it. The wall behind my monitor has black sound panels, so I just pinned up some A4 paper for the light to reflect off. I figured you can always rely on reflection!

There are all sorts of other tricks to getting a neutral environment, but probably the most important aspects are to calibrate your monitor to 100 nits and 6500K, get a bias light, match the ambient lighting to the bias light (which is an accurate reference), and ensure that the ambient light isn't falling onto the monitor and is quite dark compared to the brightness of the monitor. I used my camera as a light meter to measure how bright the ambient light was, but if you calibrate your monitor, pull up an 18% grey, and match your ambient light to that, then you should be good. Professional colourists talk about getting paint that is exactly neutral grey (which is available, and very expensive) but once again, it's diminishing returns.
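For anyone wanting to do the camera-as-light-meter step numerically, here's a rough sketch of the arithmetic, assuming the standard reflected-light meter equation EV = log2(L x S / K) with the usual calibration constant K of about 12.5. The nit values below are just example readings, not my actual measurements:

```python
import math

def luminance_to_ev(nits, iso=100, k=12.5):
    """Reflected-light meter equation: EV = log2(L * S / K).

    L is luminance in cd/m^2 (nits), S is ISO, and K ~= 12.5 is the
    calibration constant most reflected-light meters use.
    """
    return math.log2(nits * iso / k)

def stops_between(a_nits, b_nits):
    """How many stops apart two luminance readings are."""
    return math.log2(a_nits / b_nits)

# e.g. monitor white calibrated to 100 nits vs a wall reading of 12 nits:
print(f"Monitor white: EV {luminance_to_ev(100):.1f}")       # ~9.6
print(f"Wall reading:  EV {luminance_to_ev(12):.1f}")        # ~6.6
print(f"Wall is {stops_between(100, 12):.1f} stops darker")  # ~3.1
```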
-
To further add to the Mac v PC conversation, I think people underestimate the fact that film-making is a creative pursuit focused on a visual and auditory experience. Every creative pursuit requires that the creator be comfortable creating, which is a function of building an environment that suits the creator's tastes and preferences. Because film-making is a visual and auditory craft, the kind of creators it attracts are people who care about visual and auditory aesthetics, so it makes sense that the visual and auditory experience of the creative environment will directly impact the quality and quantity of the creative work, and how enjoyable the creative process is for the creator. Considering that video editing is done on computers, this maps directly onto the choice of computer. Considering only the price you pay per million floating point operations per second is a valid perspective, if it is your perspective. Others have their own perspectives, and they will always differ, either subtly or radically, and that is part of the beauty of people and creativity. I would hate it if we were all the same and every movie or TV show I watched was created the way I would have created it; how dull and predictable that would be.
-
I asked the question on the colourist forums about calibration on a normal monitor, and although the strict answer is that you need BM hardware, some people said that their GUI matches their reference monitor almost exactly (after calibrating, obviously), so it's really about what level of certainty and accuracy you want. If a pro colourist says the match is almost perfect, then I figure that's good enough for me!
-
Swapping AF and low light for unlimited recording and colour matching seems like a good trade-off if you're doing interviews and weddings, especially for a B-camera. Also, WRT low light, the MFT cameras can get very passable low-light performance if paired with a fast lens. I have above-average night vision (I can easily go off-road mountain bike riding with no lights if there's a full moon), but the GH5 and f0.95 lens combo can see in the dark better than I can. The other low-light advantage of MFT is that an f0.95 lens gives you the exposure of f0.95 (T0.95 in practice) but a DoF equivalent to f1.9 on full-frame, so better exposure without the razor-thin DoF (see the sketch below). I'd suggest having a think about the various interview configurations you might use and then working out the minimum requirements for those setups, to avoid buying gear you don't end up using.
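To make the equivalence concrete, here's a minimal sketch of the arithmetic, assuming the usual MFT crop factor of 2. Exposure per unit of sensor area depends only on the f-stop (or T-stop), while depth of field matches a full-frame lens stopped down by the crop factor:

```python
def ff_equivalents(f_number, crop_factor=2.0):
    """Full-frame equivalents for a lens on a crop-sensor body.

    The f-number governs light per unit sensor area regardless of
    sensor size; depth of field is equivalent to a full-frame lens
    at f-number * crop_factor.
    """
    return {
        "exposure_f_stop": f_number,                  # light-gathering per area
        "ff_dof_equivalent": f_number * crop_factor,  # DoF on full frame
    }

print(ff_equivalents(0.95))
# {'exposure_f_stop': 0.95, 'ff_dof_equivalent': 1.9}
```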
-
In terms of 8-bit, concentrate on getting it as right in-camera as you can. All the 10-bit vs 8-bit tests that identified differences (many didn't) only had issues when the image was pushed by a decent amount, so the better you get it in-camera, the less you have to push it around and the better the image will be. Matching colour is a matter of:
- choosing the same colour profiles on all cameras
- setting the WB and exposure properly
- using a colour checker once per setup

You can do your homework and record a colour checker on all cameras in controlled conditions, then make an adjustment to match the lesser cameras to the better one, and save it as a preset so you can quickly apply it (a rough sketch of the matching idea is below). Then when you're working on actual projects, all you have to do is tweak slightly, if anything at all. Yes, I realise this means you will be shooting a rec709-type profile on your best camera, but this is intentional to match colour profiles, and it will also help you get a good quality image into post.

I use the GH5 as my A-cam and shoot the 1080p All-I 10-bit 4:2:2 modes, and just recently I swapped from shooting HLG to shooting Cine-D. Extrapolating from the principle of "the less you mess with it, the further you are away from the image breaking", I no longer take my 10-bit image and radically push/pull the bits around in the HLG->rec709 conversion; now when I get my 10-bit Cine-D images out of camera, the bits are nice and thick and almost exactly where I'll end up putting them in the final grade.

@IronFilm makes a good point about using the kit lens on a second/third angle, as kit lenses are nice and sharp when stopped down a bit, and can also be pretty good at their longest focal length wide open, especially if it's an angle you're not going to cut to for long or many times in the edit.
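To illustrate the homework step, here's a rough sketch of one way a camera-matching adjustment can be derived: fit a least-squares 3x3 matrix that maps the B-cam's colour checker patches onto the A-cam's. This isn't what a Resolve preset does internally, just the underlying idea, and the patch values below are made-up placeholders:

```python
import numpy as np

# RGB readings of the same colour checker patches from each camera,
# shot under identical light (placeholder numbers, N patches x 3).
a_cam = np.array([[0.45, 0.32, 0.26],
                  [0.70, 0.68, 0.66],
                  [0.18, 0.30, 0.55],
                  [0.52, 0.44, 0.21]])
b_cam = np.array([[0.48, 0.30, 0.22],
                  [0.74, 0.66, 0.60],
                  [0.15, 0.33, 0.58],
                  [0.55, 0.41, 0.18]])

# Least-squares 3x3 matrix M such that b_cam @ M ~= a_cam.
M, *_ = np.linalg.lstsq(b_cam, a_cam, rcond=None)

def match_to_a_cam(rgb):
    """Apply the matching matrix to B-cam pixels of shape (..., 3)."""
    return np.clip(rgb @ M, 0.0, 1.0)

print(match_to_a_cam(b_cam))  # should land close to a_cam
```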
-
I second everything that @IronFilm said. Now is a good time to consolidate to one lens mount as you're selling lenses anyway. This folds into a lens strategy where you should be able to use the same lenses across both cameras, but without having duplicates, which typically means having the same sensor size.

For example, if you want a variety of shots in your multicam setup then you want a different FOV from the two cameras. Typically that would mean having a mid-shot on the A-cam and either a wider or a tighter shot on the B-cam, which is often done with a 24-70 / 16-35 or 24-70 / 70-200 pairing. If your B-cam were MFT, you'd either have to have a second 24-70 to get a tighter shot with the crop sensor, or use a specialist MFT wide lens like a 7-14mm to get something that was actually wide, but then that's a lens you can't use on your A-camera.

The other advantage of having cameras with the same sensor size and a common lens mount is that everything is a backup for everything else. If a body dies then you have a second body that can use any of your lenses. If a lens dies then you have access to every other lens on that camera. In the spirit of "two is one and one is none", you might consider buying a very cheap third body in case one body dies, which would still leave you with a dual-camera setup. I don't know how you shoot and edit, so you could always cut up the interview with B-roll if you only had one camera for it, but it's worth considering.

In terms of a third body, something with good 1080p would probably be the way to go, which typically means older and with a smaller sensor. In that situation you'd want something that requires as few extra peripherals as possible, for example by using compatible power solutions, lenses, etc. That might be where a G7 or something comes into play: you could use the 70-200 on your main camera and the 24-70 on the backup camera as a mid-shot, perhaps positioned further away, or zoomed to 70, where without a speed booster it would be a tight shot.

I'm employing this strategy in my own work. My main setup for travel film-making is a GH5 combined with a Sony X3000 action camera, and next trip I will probably buy a GH3 as a second/backup body. It will take the same lenses, can be used as a second angle if required (either for real-time or time-lapses), and if my whole GH5 setup goes in the drink (for example) then I can easily replace it with the GH3. The only overheads of the GH3 are carrying an extra 3.5mm on-camera mic, a USB battery charger and a couple of batteries, and a 14/2.5 lens as a replacement for my 17.5/0.95 main lens. If my X3000 dies then I can use my 7.5/2 lens on the GH5 or GH3 to replace the FOV. I would lose the waterproof abilities of the X3000, but that's the price of a camera dying on a trip.
-
I suggest:
- If you don't have a monitor calibration device, buy that as top priority and then buy the monitor with what is left. A cheap calibrated monitor will kill a more expensive uncalibrated monitor.
- Read lots of reviews. When I was monitor shopping I read a bunch of them, and all the information you need is out there.
- Think about size and resolution. Larger monitors typically have higher resolutions, which ends up determining how many of your controls you can view in your NLE at a time, and also what resolution your preview window will be. That isn't so critical for editing, but it is for things like sharpening and other resolution-related tasks.
- In combination with the above, think about aspect ratio, as those super-wide monitors have room for more UI in some NLEs.
-
If anyone is using Resolve and looking at the new Macs, this shows very good results: Note that there's a special download link to the new version for the M1 Macs. TLDR: the cheapest Mac mini plays 4 x 4K files simultaneously with 2 nodes of grading on each clip.
-
I agree with the above sentiments about getting one and trying it out. In terms of RAM and Apple vs PC, I've found that Apple handles RAM better than PCs, but it all falls apart if you run out of space on the SSD, so I'd suggest finding some SSD benchmarks, if they're around yet, and buying the largest one that has good speeds. I haven't kept up with the tech, but it used to be that there was a sweet spot in SSD size, and performance suffered below and above it. I recently bought a new 13" MBP but seriously considered the Mac Mini, as you sure get a lot of performance for your money compared to other Macs. One thing I've wondered about with the benchmarks is the T2 chip, which supposedly accelerates encoding and decoding, but it doesn't appear in things like Activity Monitor, and I struggled to find detail about which types of files it works with. For example, maybe it works with h265, but is that only 4:2:0 or also 4:2:2, and what about 10-bit, etc.?
-
Resolve has quite a number of things that you can keyframe, but a trick for keyframing anything is to duplicate the clip, put the copy on a layer above the original, grade it slightly differently, then use the opacity to crossfade between the two grades. I've used this trick a few times when panning/tilting in mixed lighting situations where I needed to change between grades, and it worked really well.
-
My solution is about $10 🙂 Of course, my colour grading setup is hugely more expensive, but that's because it has to drive Resolve through its proprietary hardware interface, and it controls things that aren't keyboard-mappable either, so it's a different proposition.

In terms of spending lots of money on something you'll be slower at to begin with: if you get something that works, the payback is huge. Imagine a controller that saves 1s on a basic operation. If we do that operation twice on every clip, have an average clip length of 3s, and edit a 45-minute show, then that's 30 minutes. However, that's only counting the shots that made the final edit; we do lots of editing on clips that don't make the final edit, and the shots that do make it get adjusted multiple times, so let's conservatively multiply by 5. This doesn't count versions with a client, where we create something that is finished and then have to move stuff around again for the next version. That gives us 2.5 hours on one project from saving 1s on a single operation. Multiply that by your hourly rate and you can see this starts to add up. In reality a good controller will save time on many operations, but it will also cut down on distractions while editing, saving the little moments of having to re-orient yourself, potentially making edits less stressful and potentially making you a better editor.
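If you want to play with the numbers yourself, the back-of-envelope above works out like this (same assumptions as in the post):

```python
# 1 s saved per operation, 2 ops per clip, 3 s average clip length,
# 45-minute show, and a 5x multiplier for discarded clips and re-edits.
show_seconds = 45 * 60
final_clips = show_seconds / 3        # 900 clips in the final cut
saved_on_cut = final_clips * 2 * 1    # 1800 s = 30 min
total_saved = saved_on_cut * 5        # 9000 s = 2.5 h per project

print(f"{saved_on_cut / 60:.0f} min saved on the final cut alone")
print(f"{total_saved / 3600:.1f} h saved per project overall")
```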
-
Exactly. My Dad was a computer engineer for a large educational institution before he retired. They once bought a new top-of-the-line computer from their supplier to act as a replacement for their primary server, custom built with the top motherboard, CPU, RAID controller, and HDDs. A week in, he worked out that the problems he was having were between the motherboard, the RAID controller, and the drivers for one of the chips on the motherboard. Two weeks in, he'd found the half-dozen threads online of people complaining about that exact hardware combination not working, worked out which one had the most intelligent people in it, and started working with them to hassle the manufacturers. Four weeks in, the group had gotten official replies from the motherboard manufacturer, the chipset manufacturer (who wrote the drivers), and the RAID controller manufacturer, each blaming one of the other two. Two months in, he declared defeat and leaned on his supplier to take the whole machine back for a full credit, which he could only do because they bought hundreds of PCs from them per year. With Apple, if it doesn't work, someone gets told to fix it, so they get everyone in a room and fix it. That's the difference between closed and open systems.
-
I've read many times that an investment in a control surface repays itself many times over in increased efficiency, so I think it's a good way to go. I wasn't suggesting the Resolve keyboards, quite the opposite in fact. The link I provided in the other thread (and repeated below) is what made me think the "DIY" option might be the best. From the article:

I haven't tried it yet, but what I interpret this to mean is that you can program your extra keyboard to have different hotkeys from your normal keyboard, potentially doubling your controls, or more if you use more than one additional keyboard (not sure if that's possible?). Combined with the keyboard shortcuts within your NLE, I imagine this should make a setup as flexible as you like. I watched a few reviews of various controllers, and the downside was never the hardware, it was always the limitations in customising things, which I thought the above would get around.

In terms of my reference to two hands, I wasn't suggesting that you spend time drinking beer or doing some other task with the other hand, more that you could use the other hand to access more controls while editing. In practical terms, you get speed by having one-key access to a function, but even more speed when you don't have to look. There's a limit to how many keys you can reliably hit without looking, probably something like 12 or 16 per hand. I don't know about your editing hotkeys, but mine are typically JKL for back/stop/forwards and IOP for mark-in/mark-out/insert, leaving only a few remaining keys for other operations such as ripple trimming. If you have a jog wheel then you'll probably have a hard time operating it and reliably hitting a few extra buttons with the same hand without looking. However, with two hands active at once, one hand can handle the basic navigation and the operations you'll need to do in bulk, and the other can have access to another dozen more sophisticated editing commands, or alternate methods of navigation like next/previous clip, jumping between markers or flags, and so on.

If you're anything like me, you will have very little muscle memory in your left hand, so your injury has kind of forced you to work through the frustration of learning to navigate and do basic operations with it, making it likely that by the end you'll be better with it than with your current dominant hand, especially if you get a control surface of some kind, which your dominant hand won't have experience with either. The end-game, if you go down this route, is to be able to edit a project from start to finish without looking away from the monitor at all.

Depending on how you work, you may even want to map some basic colour adjustments, like WB or exposure, to your right hand so you can correct as you go. As Resolve is so nicely integrated and I use it for my whole workflow, I tend to bounce back and forwards between the Edit and Colour pages, as I find that colour impacts how I edit to a certain extent. For example, I might make a selects pass by eliminating all the shots that are crap, then do a colour pass adjusting WB and levels and conforming to rec709 so I can actually see the shots (instead of them being in LOG, for example). Then I'd go back and make an assembly with more decisions based on how lovely the shots look. Then I'd do a colour pass really working with the clips, especially the 'hero' shots. Then, adding music and doing the timing of the edit, I'd look at how great each shot looks after grading to decide how many 'beats' to keep on a particular shot. Sometimes a shot really comes alive in grading, so I might linger on it longer, or maybe even slow it down slightly. These grading things all contribute to the edit, but I don't want to colour grade every clip before I start editing, as that would be a huge waste of time. Anyway, food for thought about keyboard shortcuts.

The other thing to think about is your overall workflow. I've seen that there are really two methods of editing. The first is to review all the clips and make selects, then make another pass eliminating more clips and refining timing, and so on until you have a final edit. This means that once you eliminate a clip you shouldn't need to look at it again, but it has the downside that you end up looking at lots of clips several times that won't make the final edit. The second is to log footage properly, then just build a timeline by pulling the best clips in. This is more efficient if you have higher shooting ratios and are organised, but if you have poor organisation skills and a poor memory then you could end up spending minutes or hours looking for each clip that you pull onto the timeline, which would be less efficient overall than the first approach. Essentially, the first approach starts with everything and deletes clips until you have the edit; the second starts with nothing and adds clips until you have an edit. Most people use a hybrid of these approaches, so it's whatever works for you, but I'd suggest that getting this sorted would contribute more to your overall efficiency than a control surface would. Anyway, food for thought.
-
I've watched a few videos running down the new features, and I must say I'm pretty excited about quite a few of them.

The new colour wheels look awesome and I think I'll use them a lot. I have a control surface, so the Lift/Gamma/Gain controls are great for exposure corrections, but they are very 'macro' controls, not really having enough control over things like shadows vs blacks, especially for the naturally lit, uncontrolled situations I shoot. I can use curves, but they're a PITA to use with the control surface, so the new colour wheels will be great, giving enough 'resolution' but still being fast to use.

The Colour Warper (spiderweb) and luma/chroma warper will be quite handy too. I often find myself wanting to quickly change the saturation, hue, and luma of certain colours (e.g. foliage), and currently you have to bounce back and forwards between Hue vs Hue, Hue vs Sat, and Hue vs Lum to do that.

Also, one of the things I have played with in the past is a saturation limiter. The idea is that I want to boost the saturation of a clip, but sometimes there's a splash of colour in the background that goes nuclear into OTT saturation. What you want is a curve that increases the saturation of colours under a certain threshold, but once a colour reaches a certain level of saturation it should hit a 'knee' where the saturation boost slows down, limiting the most saturated elements of the image. That's now easily possible with this tool. I had previously used a Sat vs Sat curve, but the risk there is that you end up with overlaps, where colours that started off more saturated end up less saturated than colours that started off less saturated, so I had to generate some test images to make sure I designed it correctly. (A rough sketch of the knee idea is below.)

The new Sat vs Lum curve looks great too. One of the things I learned when investigating film is that emulating a subtractive colour model requires darkening the more saturated colours, which currently has to be done in a relatively customised way, whereas this gives a nice curve. I'm also curious about the new Temp and Tint controls. I use Temp and Tint all the time to correct WB, and apparently they've been redesigned to work in XYZ colour space (assuming I understood that correctly), which means they will be perceptually better, which is cool.
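For anyone curious what a saturation limiter with a knee looks like mathematically, here's a minimal sketch of the idea (not Resolve's actual curve, just an illustration): boost saturation below the knee, then compress the remaining range so the output never clips and the curve stays monotonic, avoiding the overlap problem of a naive Sat vs Sat curve:

```python
def soft_knee_sat_boost(sat, gain=1.3, knee=0.7):
    """Boost saturation (0..1) with a knee that limits vivid colours.

    Below `knee`, saturation is scaled by `gain`; above it, the
    remaining input range [knee, 1] is linearly compressed into the
    headroom left up to 1.0, so the curve never clips and stays
    monotonic (inputs that start more saturated end more saturated).
    """
    top = min(knee * gain, 1.0)  # output level reached at the knee
    if sat <= knee:
        return min(sat * gain, top)
    return top + (sat - knee) * (1.0 - top) / (1.0 - knee)

for s in (0.2, 0.5, 0.7, 0.85, 1.0):
    print(f"{s:.2f} -> {soft_knee_sat_boost(s):.2f}")
# approx: 0.26, 0.65, 0.91, 0.95, 1.00
```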
-
I have a GH5 and only shoot 1080p - although I do shoot 120p for sports. I don't have a problem though. I mean, I can stop at any time.
-
I'm still looking into this, and started a thread some time ago. My little keypad thingy arrived but I haven't had a chance to look at it yet. My initial impressions are that the only differences between dedicated editing controllers and normal keyboards are that 1) dedicated controllers have a jog wheel for accurately scrubbing forwards and backwards, and 2) dedicated controllers often have a specific layout and colour-coding or labelling of the keys. Beyond those things they're pretty much just keyboards.

Zooming back out a little: sorry to hear about your injury, but great to hear you're trying to work around it and potentially take it as an opportunity to improve your setup. I would go one step further and suggest that this is an opportunity to get an edit controller, learn to use it with your left hand, and maybe think of this as a permanent way forward. You'll be building muscle memory, and (assuming you're right-handed) when your hand heals you will have your dominant hand free to do other things while you're editing. In contrast, I use my dominant hand for editing control, and my non-dominant hand is pretty useless, as I'm not as coordinated with it and don't have any muscle memory for it either.
-
We all know about face-tracking auto-focus (AF). Presumably, face tracking also helps with controlling exposure (AE) because faces are the priority in most shots. I've just worked out that my GH5 has face-tracking AF and AE, but when you turn off AF, it also turns off the face-tracking AE, and then exposes for the whole frame instead of the person, even if the face is clearly in focus and visible. Obviously this is ridiculous as there are lots of situations where you want AE and not AF, but is this a common thing in other cameras? Also, WTF Panasonic....
-
After using my old one for 4 years, I replaced my 13" MBP with a new one only a few months ago. I did this deliberately, knowing that the new architecture was coming, because I didn't want to be a beta tester. I haven't read in detail what was in the announcement, but assuming it was anything close to what was predicted, it's an interesting thing. The things I think are most interesting about transitioning all the Apple hardware to ARM:
- They can optimise the hell out of everything, as they'll have control over the whole software/hardware stack.
- It essentially merges the hardware platforms of phones, tablets, and computers, meaning they could go to one App Store and app developers would only have to write one version of their apps instead of two.
- All iPhone/tablet apps could be run natively on the computers.
- Potentially, all the computer apps could be run on iPhones/tablets.

The reason I waited is that it also means they'll have to re-write OSX from the ground up, potentially putting into place a huge backlog of fundamental architecture changes that have been accumulating since OSX went to a unix platform, which is going to be a huge and very messy process. It also means that every OSX program will have to be re-written, or will have to run in an emulator. That's not something I want to beta test on something as performance-sensitive as video editing.

The end-game of this technology choice is that your phone becomes your computer. I've said this before, but imagine coming home, taking your phone out of your pocket and docking it, which provides power and also connects it to your monitors / peripherals / eGPU / storage arrays, and it goes into 'desktop' mode and becomes a 'PC'. This might sound like science fiction, but I saw someone actually do this years ago, running linux on an Android device; it had a tablet mode and a desktop mode, similar to how modern laptops with touchscreens have a tablet mode and a PC mode. Modern processors are good at being efficient while they sit almost idle in the background, but turn into screaming power-thirsty race-horses when asked to do something huge (anyone whose phone has crashed will know it can drain the battery before you even notice anything is wrong). When docked, full power becomes available, and only cooling may be a limiting factor.

The other aspect that supports this is the external processing architecture that has been worked out. OSX supports having many eGPUs, and Resolve will happily use more than one, although currently you don't get much improvement from having more than a few of them. It's not inconceivable that in future an eGPU will be available that appears to the OS as a small cluster of eGPUs, with the computer simply becoming a controller. When I studied computer science we programmed software for a couple of toroidal clusters, one of which had 2048 CPUs (IIRC). The architecture is getting there, and video processing and graphics is a perfect application for it, as you can just divide up the screen, or separate tasks such as decompressing video, rendering the UI, scaling the video, doing VFX processing, colour grading, and so on.
-
It's literally the first beta version! Kids these days 🙄🙄🙄 If history is any guide, there will be a new beta out very soon. I don't have the data, but it seems like with previous versions they'd occasionally put out a beta that had a big issue for a lot of people, and they'd fix it within days in a subsequent release. One thing people don't really know about software development is that companies like Amazon make releases multiple times per day; they're just very small and in some feature you probably don't use, so it doesn't seem like that's what's happening. Bug fixing is obviously a different thing, as it takes time to diagnose and then fix without breaking other stuff, but that's the life of a beta tester. I gave up on betas a few versions ago; unless you want to be a beta tester, I'd just wait for the full release.
-
I think a lot of folks out there don't really mind whether something is a hybrid or MILC or even a DSLR; they just want a small(ish) package that can be hand-held or put on a gimbal, and have $10K for a complete setup. In this sense there are things like the Komodo, BGH1, and C70, as well as the DSLR form factor with things like the S1H, the Canon 1DX line, etc. Lots of options around to choose from.
-
The new editing keyboard looks very interesting... Of course, this paves the way for a Resolve Nano Surface, which will be the size of a normal keyboard, with the editing controls / jog wheel on one side and three trackballs / control knobs on the other, and will be $199. I'd preorder one of those right now, maybe more than one if I were bouncing between locations.
-
I'm curious about v17, but if it's like any other version we won't get anything but betas for some months, so if you're allergic to instability then it won't be out for quite some time....
-
Resolve has been stable for me since v14. Sometimes something seems to stop working or doesn't do what I'd expect, so I restart and that normally fixes it, but that's rare.