Everything posted by kye

  1. The way I tested it was to block one eye and look through a tube at the image, judging how convincing it was that I was looking at a real scene rather than a flat image. I positioned the tube so that I could only see the image through that eye, so that the border of the image didn't ruin the illusion. You can then flick back and forth between images from different lenses taken from the same location and compare directly. Of course, going back to my original point, if you compare lenses of different focal lengths then you have to account for the different DOF, because that has a huge impact on how 3D a lens looks. I'd suggest the effect is so large that much of the perceived difference between two lenses could simply be that one of them is slightly wider open than the other, and that's what you're seeing. Other types of distortion can also affect the impression of dimensionality, as with the famous CZ 28/2 "Hollywood" lens and its distortions.
  2. Me too. The situation I really want is for the camera to control exposure with auto-ISO and auto-ND, and then let me control SS and aperture manually. I'd also like it to do face recognition and expose for faces, even if I've got MF enabled. That way the camera is doing the technical operations and I am doing the creative operations.
  3. In terms of combining the bits together, you would only add the bit depths if they didn't overlap in DR. For example, if I took two 14-bit readouts, one a stop below the other, then 13 of the 14 bits from each readout would be duplicate data, and my effective bit depth is really only 15 bits. So in order to understand the total bit depth you need to know the overlap range in the sensor. It gets further complicated if the readouts aren't offset by a whole number of stops, which places the values of one readout between the values of the other, providing more bit depth but less increase in DR. If you really want to understand this, try modelling things in Excel (or in a few lines of code, as in the sketch below) and graphing them. That should give you a more intuitive sense of what is going on.

    In terms of there being a certain quality in the older sensors, this guy has done lots of tests comparing the older BM cameras to the P4K and Ursa: https://www.youtube.com/user/joelduarte27/videos

    My overall impression is that most people don't utilise anything like the potential of their cameras, and that the difference between the images most people get and the images you see from an Alexa or RED is more down to user skill (in terms of lighting, composition, camera operating, and the complete image pipeline in post) than it is about any camera limitations that might exist.
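    Here's that model as a minimal Python sketch (my own toy model, with the low readout's full scale normalised to 1.0 - not any real sensor's merge logic). It counts the distinct scene values the merged pair can represent, and the full-scale-to-finest-step ratio, for a whole-stop vs half-stop offset:

```python
import numpy as np

BITS = 14
N = 2 ** BITS

def combine(offset_stops):
    gain = 2.0 ** offset_stops
    # Scene-referred values each readout can represent, with the low
    # (less sensitive) readout's full scale normalised to 1.0.
    low = np.arange(N) / (N - 1)
    high = np.arange(N) / (N - 1) / gain   # finer steps, clips earlier
    levels = np.unique(np.concatenate([low, high])).size
    dr_stops = np.log2((N - 1) * gain)     # full scale / finest step
    return levels, dr_stops

for offset in (1.0, 0.5):
    levels, dr = combine(offset)
    print(f"{offset}-stop offset: {levels} distinct levels "
          f"(~{np.log2(levels):.2f} bits), DR ~{dr:.1f} stops")
```

    With a whole-stop offset, many levels are duplicates (~14.6 bits of distinct values) but you get a full extra stop of range; with a half-stop offset the levels interleave (~15 bits) but you only gain half a stop - exactly the bit-depth vs DR trade described above.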
  4. One thing to note, for anyone reading this thread, is that small variations in DOF create a very large perceptual difference in how 3D an image seems. I've previously compared a ton of lenses under test conditions, and a 55mm lens seems much more 3D than a 50mm lens when both are at the same aperture. I know this is different to compression, but it's likely to get mixed up in this conversation at some point, considering that all these things are interrelated.
  5. That seems like the sensible option to me. It's a standard hardware configuration and makes the setup 100% Apple, which bodes well for optimisation and support in future updates, including FCPX. Keep us informed about performance when you get it all set up 🙂
  6. (re: camera movement) This all looks suitable for hand-held to me. I don't know what they were saying, but from the various clips and acting, I can see that the characters (and therefore the narrative) are raw, unpredictable, unstable, violent, and high-energy. The editing reflects this as well.

    By contrast, think about the aesthetic of viewing something violent, like an armed robbery, through footage from a security camera. The security footage is completely stationary, and is wide-angle, making any movement in the frame much smaller than a tighter lens would make it. This experience is very detached and impartial, and makes the action seem small, despite how intense it might be.

    Another alternative to a static angle or hand-held footage would be smooth and steady camera movement like a pan or tilt. This would obviously not be a good aesthetic choice either, as smooth controlled movement is just that: smooth and controlled. This type of movement is often associated with beauty (like panning over a grand vista) or scale (tilting up to see a tall building). Both of these situations are controlled - the horizon is always level and the building is always vertical.

    The alternative to that is a steadicam or crane shot, where the movement is steady but not controlled in the same way that a pan or tilt is. The aesthetic of a gimbal is that it is floating, and although it's closer to the action and maybe even affected by it (in the example of a shot where the camera follows the action or revolves around characters), it still has a feel of detachment. Like the security camera, it lacks reaction - it doesn't jump the way a human would. Crane movement is slightly different in that it's normally more geometric, and as it often moves vertically more than a gimbal shot it's also more detached from a human point of view, considering that humans typically experience the world from eye-height.

    I think those are the choices for camera mounting and the various aesthetics of movement. The trick is to choose the one that most represents the emotional experience you want the viewer to have.
  7. (re: camera movement) Depending on what films or TV you are watching, it could well be a choice made due to laziness or fashion rather than an optimum aesthetic choice. There are plenty of film-makers who have a style that other people do not appreciate and are critical of. Michael Bay is a great example of someone that makes creative choices that many are critical of, but others are supportive, so it's all a matter of taste.
  8. Another test - including 4K, 5K, 6K, and 8K RED raw files.
  9. (re: camera movement) Any camera movement should be a deliberate artistic choice, designed to support the narrative and aesthetic of the film. I'm far from an expert, but there are lots of articles around if you google 'camera movement'. There is also the element of motivated vs unmotivated camera movement that you can read about as well.

    In terms of camera shake, the aesthetic is that it's a bit more 'real', because that's how amateurs take video of real life, so it can give a more authentic feel to a shot. It can also make things more exciting, which is why they use camera shake in action sequences.

    Topics like this are so deep that you can never learn everything about them, even if you studied them for the rest of your life. However, these are the skills that will drastically improve your film-making. Camera movement, composition, lighting, editing, dialogue, sound design, etc... the beauty of film-making is that you can learn a little and get a big improvement in your work, you can learn more and get even better, and you can study it for the rest of your life, never run out of things to discover, and never hit a limit to the degree you can improve your work.
  10. @KnightsFan I'm far from an expert, but the approach I have taken (and I'm pretty sure it aligns with what I was advised to do) was to calibrate the monitors in the OS so that all apps get the calibration applied - but I'm on Mac so it could well be quite different for you. I take a slightly different approach of having a preset node in Resolve that I append to the node tree, take the screen grab, then remove the node again. I've taken multiple shots and then brought them up in the UI and compared them to the Resolve window, and found the match to be acceptable.

    @Oliver Daniel The other thing I forgot to mention about colour accuracy is to get a bias light that will provide a neutral reference. I purchased one from FSI https://www.shopfsi.com/BiasLights-s/69.htm which is just an LED strip that gets stuck to the back of your monitor and shines a neutral light onto the wall behind it. The wall behind my monitor has black sound panels, so I just pinned up some A4 paper for the light to reflect off. I figured you can always rely on reflex!

    There are all sorts of other tricks to getting a neutral environment, but probably the most important aspects are to calibrate your monitor to 100 nits and 6500K, to get a bias light, to match the ambient lighting to the bias light (which is an accurate reference), and to ensure that the ambient light isn't falling onto the monitor and is quite dark compared to the brightness of your monitor. I used my camera as a light meter to measure how bright the ambient light was, but if you calibrate your monitor, pull up an 18% grey, and match your ambient light to that, then you should be good. Professional colourists talk about getting paint that is exactly neutral grey (which is available and very expensive) but once again, it's diminishing returns.
  11. To further add to the Mac v PC conversation, I think people underestimate the fact that film-making is a creative pursuit focused on a visual and auditory experience. Every creative pursuit requires that the creator be comfortable enough to create, which is a function of building an environment that suits the creator's tastes and preferences. Because film-making is a visual and auditory craft, the kind of creators it attracts are people who care about visual and auditory aesthetics, so it makes sense that the visual and auditory experience of the creative environment will directly impact the quality and quantity of the creative work, and how enjoyable the creative process is for the creator. Considering that video editing is done on computers, this maps directly to the choice of a computer.

    Judging a computer only by the price you pay for every million floating-point operations per second is a valid perspective - if it is your perspective. Others have their own perspectives, and they will always be different, either subtly or radically, and that is part of the beauty of people and creativity. I would hate it if we were all the same and every movie or TV show I watched was created the way I would have created it - how dull and predictable that would be.
  12. I asked the question on the colourist forums about calibration on a normal monitor, and although the strict answer is that you need BM hardware, some people said that their GUI matches their reference monitor almost exactly (obviously after calibrating), so it's really about what level of certainty and accuracy you want. If a pro colourist says the match is almost perfect then I figure that's good enough for me!
  13. Swapping AF and low light for unlimited recording and colour matching seems like a good trade-off if you're doing interviews and weddings, especially for a B-camera. Also, WRT low light, the MFT cameras can get very passable low-light performance if paired with a fast lens. I have above-average night vision (I can easily go off-road mountain-bike riding with no lights if there's a full moon), but the GH5 and f0.95 lens combo can see in the dark better than I can. The other low-light advantage of MFT is that an f0.95 lens exposes at roughly T0.95 but gives a DoF equivalent to f1.9 on full frame, so better exposure without the razor-thin DoF (the arithmetic is sketched below).

    I'd suggest having a think about the various interview configurations you might use and then working out the minimum requirements for those setups, in order to avoid buying gear you don't end up using.
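    Here's the equivalence arithmetic as a tiny sketch (standard crop-factor reasoning, nothing camera-specific; I'm assuming 2.0 as the MFT crop factor):

```python
# Exposure is set by the actual f-stop (light per unit of sensor area),
# while depth of field matches a full-frame lens at f-stop x crop factor.
def ff_equivalent(f_stop: float, crop: float = 2.0) -> dict:
    return {
        "exposure_f_stop": f_stop,       # what your meter sees
        "ff_dof_f_stop": f_stop * crop,  # full-frame DOF equivalent
    }

print(ff_equivalent(0.95))
# {'exposure_f_stop': 0.95, 'ff_dof_f_stop': 1.9}
```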
  14. In terms of 8-bit, concentrate on getting it as right in-camera as you can. All the 10-bit vs 8-bit tests that identified differences (many didn't) only had issues when the image was pushed by a decent amount, so the better you get it in-camera, the less you have to push it around and the better the image will be (see the sketch below).

    Matching colour is a matter of:
      • choosing the same colour profiles on all cameras
      • setting the WB and exposure properly
      • using a colour checker once per setup

    You can do your homework and record a colour checker on all cameras in controlled conditions, then make an adjustment to match the lesser cameras to the better one, and make it a preset so you can quickly apply it. Then when you're working on actual projects all you have to do is tweak slightly, if anything at all. Yes, I realise this means you will be shooting a rec709-type profile on your best camera, but this is intentional to match colour profiles, and it will also help you get a good quality image in post.

    I use the GH5 as my A-cam and shoot the 1080p All-I 10-bit 422 modes, and just recently I swapped from shooting HLG to shooting Cine-D. Extrapolating from the principle of "the less you mess with it, the further you are away from the image breaking", I no longer take my 10-bit image and radically push/pull the bits around in the HLG->rec709 conversion, so now when I get my 10-bit Cine-D images out of camera the bits are nice and thick and almost exactly where I'll end up putting them in the final grade.

    @IronFilm makes a good point about using the kit lens on a second / third angle, as the kit lenses are nice and sharp when stopped down a bit, and can also be pretty good at their longest focal length wide open, especially for an angle you're not going to cut to often or for long in the edit.
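    To illustrate the "push it less" point, here's a small sketch (my own illustration, assuming a simple one-stop gain push in the grade and an 8-bit display target - not a test of any particular camera):

```python
# Quantise a smooth gradient at the capture bit depth, push it a stop in
# post, and count how many distinct 8-bit display levels survive. Fewer
# surviving levels means coarser steps, i.e. visible banding.
import numpy as np

ramp = np.linspace(0, 1, 100_000)  # ideal smooth gradient

def surviving_levels(bits, push_stops=1.0):
    q = np.round(ramp * (2**bits - 1)) / (2**bits - 1)  # in-camera quantise
    pushed = np.clip(q * 2.0**push_stops, 0, 1)         # gain push in post
    return np.unique(np.round(pushed * 255)).size       # 8-bit display

for bits in (8, 10):
    print(f"{bits}-bit source, +1 stop push: "
          f"{surviving_levels(bits)} of 256 display levels used")
```

    On this toy model, the 8-bit source ends up using roughly half the display levels after a one-stop push, while the 10-bit source still covers essentially all of them.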
  15. I second everything that @IronFilm said. Now is a good time to consolidate to one lens mount as you're selling lenses anyway. This folds into a lens strategy where you should be able to use the same lenses across both cameras, but without having duplicates, which typically means having the same sensor size.

    For example, if you want a variety of shots in your multicam setup then you want a different FOV from the two cameras. Typically that would mean a mid-shot on the A-cam and either a wider or a tighter shot on the B-cam, and it's often done with a 24-70 / 16-35 or 24-70 / 70-200. If your B-cam was MFT then you'd either have to have a second 24-70 to get a tighter shot with the crop sensor, or you'd have to use a specialist MFT wide lens like a 7-14mm to get something that was actually wide, but then that's a lens you can't use on your A-camera.

    The other advantage of having cameras of the same sensor size and a common lens mount is that everything is a backup for everything else. If a body dies then you have a second body that can use any of your lenses. If a lens dies then you have access to every other lens on that camera. In the spirit of "two is one and one is none" you might consider buying a very cheap third body in case one body dies, which would still leave you with a dual-camera setup. I don't know how you shoot and edit, so you could always cut up the interview with B-roll if you only had one camera for the interview, but it's worth considering.

    In terms of a third body, something with good 1080p would probably be the way to go, which typically means older and smaller sensor. In that situation you'd want something that requires as few extra peripherals as possible - compatible power solutions, lenses, etc. That might be where a G7 or something comes into play, where you can use the 70-200 on your main camera and the 24-70 on the backup camera as a mid-shot, perhaps positioned further away or zoomed to 70mm, and without a speed booster it would be a tight shot.

    I'm employing this strategy in my own work. My main setup for travel film-making is a GH5 combined with a Sony X3000 action camera, and next trip I will probably buy a GH3 as a second / backup body. It will take the same lenses, can be used as a second angle if required (either for real-time or time-lapses), and if my whole GH5 setup goes in the drink (for example) then I can easily replace it with the GH3, and the only overheads of the GH3 will be carrying an extra 3.5mm on-camera mic, a USB battery charger and a couple of batteries, and a 14/2.5 lens as a replacement for my 17.5/0.95 main lens. If my X3000 dies then I can use my 7.5/2 lens on the GH5 or the GH3 to replace the FOV. I would lose the waterproof abilities of the X3000, but that's the price of a camera dying on a trip.
  16. I suggest:
      • If you don't have a monitor calibration device, buy that as top priority and then buy the monitor with what is left - a cheap calibrated monitor will kill a more expensive uncalibrated monitor.
      • Read lots of reviews - when I was monitor shopping I read a bunch of them, and all the information you need is out there.
      • Think about size and resolution. The larger the monitor, the larger the resolution typically is, which ends up determining how many of your controls you can view in your NLE at a time and also what resolution your preview window will be - not so critical for editing, but it matters for things like sharpening and other resolution-related tasks.
      • In combination with the above, think about aspect ratio, as those super-wide monitors have room for more UI in some NLEs.
  17. If anyone is using Resolve and looking at the new Macs, this shows very good results. Note that there's a special download link to the new version for the M1 Macs. TLDR: the cheapest Mac mini plays 4 x 4K files simultaneously with 2 nodes of grading on each clip.
  18. I agree with the above sentiments about getting one and trying it out. In terms of RAM and Apple vs PC, I've found that Apple handles RAM better than PCs, but it all falls apart if you run out of space on the SSD, so I'd suggest finding some SSD benchmarks (if they're around yet) and buying the largest one that has good speeds. I haven't kept up with the tech, but it used to be that there was a sweet spot in SSD size, with performance suffering below and above it.

    I recently bought a new 13" MBP but seriously considered the Mac Mini, as you sure get a lot of performance for your money compared to other Macs. One thing I've wondered about with the benchmarks is the T2 chip, which supposedly accelerates encoding and decoding, but it doesn't appear in things like Activity Monitor, and I struggled to find detail about which types of files it works with. For example, maybe it works with h265, but is that only 420 or also 422, and what about 10-bit, etc.?
  19. Resolve has quite a number of things you can keyframe, but a trick for keyframing anything is to duplicate the clip, put it on a layer on top of itself, grade it slightly differently, and then use opacity to crossfade between the two grades. I've used this trick a few times when panning/tilting in mixed lighting situations where I needed to change between grades, and it worked really well.
  20. My solution is about $10 🙂 Of course, my colour grading setup is hugely more expensive, but that's because it has to drive Resolve, which uses a proprietary hardware interface and controls things that aren't keyboard-mappable either, so it's a different proposition.

    In terms of spending lots of money on something you'll be slower at to begin with: if you get something that works, the payback is huge. Imagine a controller that saves 1s on a basic operation. If we do that operation twice on every clip, have an average clip length of 3s, and edit a 45-minute show, then that's 30 minutes. But that's only counting the shots that made the final edit - we do lots of editing on clips that don't make the final cut, and the shots that do make it get adjusted multiple times, so let's conservatively multiply by 5. This doesn't count versions with a client, where we create something that is finished and then have to move stuff around again for the next version. That gives us 2.5 hours on one project from saving 1s on a single operation (the arithmetic is spelled out below). Multiply that by your hourly rate and you can see this starts to add up.

    In reality a good controller will save time on many operations, but it will also cut down on distractions while editing, saving the little moments of having to re-orient yourself, potentially making edits less stressful and potentially making you a better editor.
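    The back-of-envelope above, spelled out (same numbers as in the post; the x5 multiplier is the rough allowance for discarded takes and repeated adjustments):

```python
seconds_saved_per_op = 1
ops_per_clip = 2
avg_clip_len_s = 3
show_len_s = 45 * 60

clips_in_final_edit = show_len_s / avg_clip_len_s            # 900 clips
final_only_s = clips_in_final_edit * ops_per_clip * seconds_saved_per_op
total_s = final_only_s * 5        # discarded takes, repeated adjustments

print(f"{final_only_s / 60:.0f} min on final-edit clips alone, "
      f"{total_s / 3600:.1f} h overall")   # -> 30 min, 2.5 h
```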
  21. Exactly. My Dad was a computer engineer for a large educational institution before he retired. They once bought a new top-of-the-line computer from their supplier to act as a replacement for their primary server, custom built with the top motherboard, CPU, RAID controller, and HDDs. A week in, he worked out that the problems he was having were between the motherboard, the RAID controller, and the drivers for one of the chips on the motherboard. Two weeks in, he'd found the half-dozen threads online of people complaining about the exact same hardware combination not working, worked out which one had the most intelligent people in it, and started working with them to hassle the manufacturers. Four weeks in, the group had gotten official replies from the motherboard manufacturer, the chipset manufacturer (who wrote the drivers) and the RAID controller manufacturer, each blaming one of the other two. Two months in, he declared defeat and leaned on his supplier to take the whole machine back for a full credit, which he could only do because they bought hundreds of PCs from them per year.

    With Apple, when something doesn't work, someone gets told to fix it, so they get everyone in a room and fix it. That's the difference between closed and open systems.
  22. I've read many times that an investment in a control surface repays itself many times over in increased efficiency, so I think it's a good way to go. I wasn't suggesting the Resolve keyboards - quite the opposite, in fact. The link I provided in the other thread (and repeated below) is what made me think the "DIY" option might be the best. From the article:

    I haven't tried it yet, but what I interpret this to mean is that you can program your extra keyboard to have different hotkeys than your normal keyboard, potentially doubling your controls, or more if you are using more than one additional extra keyboard (not sure if that's possible?). Combined with the keyboard shortcuts within your NLE, I imagine this should make a setup as flexible as you like. I watched a few reviews of various controllers and the downside was never the hardware - it was always the limitations on customising things, which I thought the above would get around.

    In terms of my reference to two hands, I wasn't suggesting that you spend time drinking beer or doing some other task with the other hand, more that you could use the other hand to have access to more controls while editing. In practical terms, you get speed by having one-key access to a function, but even more speed when you don't have to look. There's a limit to how many keys you can reliably hit without looking, probably something like 12 or 16 per hand. I don't know about your editing hotkeys, but mine are typically JKL for back/stop/forwards and IOP for MarkIn/MarkOut/Insert, leaving only a few remaining keys for other operations such as ripple trimming. If you have a jog wheel then you'll probably have a hard time operating it and reliably hitting a few extra buttons with the same hand without looking. However, with two hands active at once, you can use one for the basic navigation and operations you do in bulk, and the other can have access to another dozen more sophisticated editing commands, or alternate methods of navigation like next and previous clips, navigating between markers or flags, etc.

    If you're anything like me you will have very little muscle memory in your left hand, so your injury has kind of forced you to work through the frustration of learning to navigate and do basic operations with it, making it likely that by the end you'll be more fluent with it than your current dominant hand, especially if you get a control surface of some kind, which your dominant hand won't have experience with. The end-game, if you go down this route, is to be able to edit a project from start to finish without looking away from the monitor at all. Depending on how you work, you may even want to map some basic colour adjustments, like WB or exposure, to your right hand, so you can correct as you go.

    As Resolve is so nicely integrated, and I use it for my whole workflow, I tend to bounce back and forwards between the Edit and Colour pages, as I find that colour impacts how I edit to a certain extent. For example, I might make a selects pass by eliminating all shots that are crap, but then I would do a colour pass adjusting WB and levels and conforming to rec709 so I can see the shots (instead of them being in LOG, for example). Then I would go back and make an assembly, with more decisions based on how lovely the shots look. Then I'd do a colour pass really working with the clips, especially the 'hero' shots. Then, adding music and doing the timing of the edit, I would be looking at how great each shot looks from the colour grading to determine how many 'beats' to keep on a particular shot. Sometimes a shot really comes alive in grading, so I might linger on it longer, or maybe even slow it down slightly. These grading things all contribute to the edit, but I don't want to colour grade every clip before I start editing, as that would be a huge waste of time.

    The other thing to think about is your overall workflow. I've seen that there are really two methods of editing. The first is to review all the clips and make selects, then make another pass eliminating more clips and refining timing, and so on until you have a final edit. This means that once you eliminate a clip you shouldn't need to look at it again, but it has the downside that you end up looking at lots of clips several times that won't make the final edit. The second is to log footage properly and then just build a timeline by pulling the best clips in. This is more efficient if you have higher shooting ratios and are organised, but if you have poor organisation skills and a poor memory then you could end up spending minutes or hours looking for each clip that you pull onto the timeline, which would be less efficient overall. Essentially, the first approach is starting with everything and deleting clips until you have the edit, and the second is starting with nothing and adding clips until you have an edit. Most people use a hybrid of these approaches, so it's whatever works for you, but I'd suggest that getting this sorted would contribute more to your overall efficiency than a control surface would. Anyway, food for thought.
  23. I've watched a few videos running down the new features, and I must say I'm pretty excited about quite a few of them.

    The new colour wheels look awesome and I think I'll likely use them a lot. I have a control surface, so the Lift/Gamma/Gain controls are great for exposure corrections, but they are very 'macro' controls, not really giving enough control over things like shadows vs blacks, especially for the naturally lit, uncontrolled situations I shoot. I can use curves, but they're a PITA to use with the control surface, so having the new colour wheels will be great - enough 'resolution' but still fast to use.

    The Colour Warper (spiderweb) and luma/chroma warper will be quite handy too. I often find myself wanting to quickly change the saturation, hue, and luma of certain colours (eg, foliage), and currently you have to bounce back and forwards between the Hue vs Hue, Hue vs Sat, and Hue vs Lum curves to do that.

    Also, one of the things I have played with in the past is a saturation limiter. The idea is that I want to up the saturation on clips, but when I do that, sometimes there's a splash of colour in the background that goes nuclear in OTT saturation. So what you want is a curve that increases the saturation of colours under a certain threshold, but once a colour gets to a certain level of saturation it should encounter a 'knee' and the saturation boost should slow down, limiting the most saturated elements of an image (a simple version is sketched below). That's also easily possible with this tool. I had previously used a Sat vs Sat curve, but the risk with that is that you end up with overlaps, where colours that started off more saturated end up less saturated than colours that started off less saturated, so I had to generate some test images to ensure I designed it correctly.

    The new Sat vs Lum curve looks great too. One of the things I learned in investigating film is that emulating a subtractive colour model requires darkening more saturated colours, which currently has to be done in a relatively customised way, whereas this gives a nice curve.

    I'm also curious about the new Temp and Tint controls. I use Temp and Tint all the time to correct WB, and apparently they've been redesigned to work in XYZ colour space (assuming I understood that correctly), which means they will be perceptually better, which is cool.
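    For what it's worth, here's the saturation-limiter shape as a minimal sketch (my own piecewise curve, not the new Resolve tool): a linear boost below a knee, then a gentler slope above it. Because it's monotonic, boosted colours can never overtake colours that started off more saturated - the overlap problem the Sat vs Sat curve had:

```python
import numpy as np

def sat_knee(s, gain=1.4, knee=0.6):
    """Saturation in/out curve; s in 0..1. Requires gain * knee < 1."""
    s = np.asarray(s, dtype=float)
    out_at_knee = gain * knee              # output level where the knee sits
    # Above the knee, ease from the boosted value up to 1.0 at a lower slope.
    upper = out_at_knee + (1 - out_at_knee) * (s - knee) / (1 - knee)
    return np.where(s < knee, gain * s, upper)

# e.g. applied to the saturation channel of an HSV image:
#   hsv[..., 1] = sat_knee(hsv[..., 1])
print(sat_knee([0.2, 0.6, 0.9, 1.0]))  # [0.28, 0.84, 0.96, 1.0]
```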
  24. I have a GH5 and only shoot 1080p - although I do shoot 120p for sports. I don't have a problem though. I mean, I can stop at any time.
  25. I'm still looking into this, and started a thread some time ago. My little keypad thingy arrived but I haven't had a chance to look at it yet. My initial impressions are that the only differences between dedicated editing controllers and normal keyboards are that 1) dedicated controllers have a jog wheel for accurately scrubbing forwards and backwards, and 2) dedicated controllers often have a specific layout and colour-coding or labelling of the keys. Beyond those things they're pretty much just keyboards.

    Zooming back out a little - sorry to hear about your injury, but great to hear you're trying to work around it and potentially take it as an opportunity to improve your setup. I would go one step further and suggest that this is an opportunity to get an edit controller, learn to use it with your left hand, and maybe think of this as a permanent way forwards. You'll be building muscle memory, and (assuming you're right-handed), when your hand heals you will have your dominant hand free to do other things while you're editing. In contrast, I use my dominant hand for editing control, and my non-dominant hand is pretty useless, as I'm not as coordinated with it and don't have any muscle memory for it either.