Everything posted by kye

  1. Yah, the $200 one might be 20x the little USB keypad, but it's also only 20% of the price of the Resolve Keyboard, so yes, everything is relative.

     I've taken to using the layout I O P J K L, where I is Mark In, O is Mark Out, P is Append to Timeline, and J, K, L are backwards, stop and forwards. Then in the Cut page I can pull up tape mode, which puts all the clips in a folder back-to-back (like tape, funnily enough!), and just use those 6 keys to go through and do selects. I find that works just fine, and I get selects out of that onto a timeline. Considering I mostly shoot my own travel and events, I'm normally keeping things in sequential order and only change the order if there's a problem and I need a cut-away or something to fix the coherence, which means the selects are also an Assembly.

     The part that has me wondering is what happens once I've got the Assembly. I then go through and work out which shots I want to shortlist. I do this by dragging the good ones up onto a different track (often several different tracks for different classifications of shots), and this isn't something I've used the keyboard for in the past, but it would be cool. Then I cull all the shots that weren't shortlisted, apply music, and do more fine-tuning of the edit in accordance with the music.

     As my selects are typically the entire duration of the "good bits" from the source clips, they're normally way too long, so I'm mostly taking clips that are 2-20s long and cutting them down to 1-5s to fit the timing of the edit. This is done with Command-Shift-[ and Command-Shift-], which do a ripple delete from the playhead either to the start of the clip ([) or the end of the clip (]). I'm likely using some other keys to navigate around during this phase too. I should do a mini editing session and actually film the keyboard so I can see what I'm doing during a real edit, rather than editing while trying to pay attention to the keys I use instead of the actual edit.

     Luckily, the garish LED lights are extra if you can find the stripped-down versions! Depending on the keypad I might end up getting something with nicer ergos. I might also just end up remapping more keys on the normal keyboard using the Keyboard Shortcut editor in Resolve 🙂
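     For anyone who'd rather script that "dump everything onto a timeline" step than use tape mode, Resolve's built-in Python scripting API can do it. This is only a rough sketch - the bin name "Selects" and the timeline name "Assembly" are made-up examples, and you'd run it from Workspace > Console (or with the DaVinciResolveScript module on your path):

```python
# Rough sketch only: appends every clip in a bin called "Selects" to a fresh
# timeline, back-to-back, much like tape mode / manual appending in the Cut page.
import DaVinciResolveScript as dvr

resolve = dvr.scriptapp("Resolve")
project = resolve.GetProjectManager().GetCurrentProject()
media_pool = project.GetMediaPool()

def find_bin(folder, name):
    """Recursively search the media pool for a bin by name."""
    if folder.GetName() == name:
        return folder
    for sub in folder.GetSubFolderList():
        hit = find_bin(sub, name)
        if hit:
            return hit
    return None

selects = find_bin(media_pool.GetRootFolder(), "Selects")
if selects:
    media_pool.CreateEmptyTimeline("Assembly")           # new timeline becomes the current one
    media_pool.AppendToTimeline(selects.GetClipList())   # clips land back-to-back, in bin order
```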
  2. @DFason Cool little video. I'm also curious to see your rig.
  3. Just doing some searching for something else and came upon this - does the keypad look familiar? @BTM_Pix thanks, but wow are they more expensive! It seems like the shuttle wheel is the only hardware control that can't be easily replicated with a keyboard, but do people use this much? In Resolve the J-K-L combo is backwards-stop-forwards but if you hold down K and hit L then it goes forwards one frame and J goes back one frame, so it's easy to fine-tune the playhead frame-by-frame without moving your fingers at all. I'm curious to hear what other people are using...
  4. Panasonic GH6

     That might actually be a bit easier to design - currently the GH5 has a higher resolution viewfinder than the screen, but maybe a larger screen could match the viewfinder and make the design a bit simpler. A higher resolution viewfinder / screen would make manual focussing so much better too. At the moment the focus peaking isn't that great.
  5. Spot The Camera?

     Camcorders can be great in the right hands. Here's Mr Herzog for his latest film...
  6. I've gone down a rabbit hole with hardware controllers, and while I'm really appreciating my Beatstep Davinci Resolve Edition for colour grading, it's not the tool for video editing. The tool for video editing in Resolve is the Davinci Resolve Keyboard, but I can't justify the $1000 price tag, and one of the reviews I read said that it's only worth the money over a normal keyboard if you use the extra buttons it has on the sides. Ultimately, the review said, you'll have more keyboard shortcuts than you have keys on the keyboard, so the extra physical buttons are needed so that you're not constantly having to do Command - Shift - Something to get at your shortcuts.

     This led me to the idea of just buying more keys and then programming them to send Command - Shift - Something when I press a key. That led me to this article - https://www.instructables.com/id/Making-a-powerful-programmable-keypad-for-less-tha/

     It talks about using one of these: along with this software to program it: http://www.hidmacros.eu/

     This would mean that any function of any software that can have a keyboard shortcut attached to it can be assigned onto one of these, and then you can have your own dedicated controller. Has anyone done this? These keypads are $10 from Amazon and the software is free. I think it's worth doing just to try it out.
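     For what it's worth, here's a minimal sketch of the same idea in Python using the "keyboard" library instead of HID Macros (so not tied to Windows, though it needs admin/root and the key names will depend on your keypad). The mappings below are just placeholder examples - point each physical key at whatever shortcuts you've set up in your NLE:

```python
# Sketch only: turn spare keys (the F13-F16 names here are just examples) into
# macro keys that fire keyboard-shortcut chords, the same job HID Macros does.
import keyboard

MACROS = {
    "f13": "ctrl+shift+[",   # e.g. ripple trim clip start to the playhead
    "f14": "ctrl+shift+]",   # e.g. ripple trim clip end to the playhead
    "f15": "i",              # e.g. mark in
    "f16": "o",              # e.g. mark out
}

for key, chord in MACROS.items():
    # suppress=True swallows the original keypress so only the chord gets through
    keyboard.add_hotkey(key, keyboard.send, args=(chord,), suppress=True)

keyboard.wait()  # keep the script alive, listening for the keypad
```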
  7. I've developed a few thoughts on this over the years. My grand unified theory of cameras is that highly skilled people can make almost any camera look glorious, unskilled people can't make any camera look good, and that for the rest of us in-between it's about how easy or hard it is to work with the equipment that matters. Watching one video can tell us a lot of things. It could show us the potential that the equipment has. It could show us what is possible in a difficult situation (for example, low light, high DR, etc). It could also give us a clue about how hard it is to work with if we know the skill level of those involved in making that specific video. However, one data point is one data point, and for a more general view of a camera we have to watch many videos, and even then there is a limit to what we can tell without actually using it ourselves. Reviews attempt to bridge this gap, but they are subjective and biased (consciously or unconsciously). I suspect it's the case that amateurs draw conclusions based on one video because they don't know any better, skilled people know not to draw conclusions from one video because it's folly, and geniuses draw conclusions from one video because they can.
  8. Panasonic GH6

     @sanveer @thebrothersthre3 have you run into situations where the 10-bit had issues? Or just wanting to push closer to raw? Now we're talking!! Seriously though, a 5.2K sensor should easily be enough for everyone except those who want to digitally re-frame, and even then you can still upscale to something like 150%, add a bit of sharpening, and there's no perceptible difference.

     Another increment in the endless upgrading of modes from 420 to 422, from 8-bit to 10-bit, and from IPB to ALL-I would be great. I imagine that upgrades to the 120p mode would be very popular - anything over 50/60p from the GH5 is limited to 1080p 100Mbps 420 8-bit IPB. Even a modest bump for 120p up to 200Mbps 420 10-bit would be hugely beneficial. Conforming 120p down to 24p means that, as things stand, you're effectively watching 24p at 20Mbps 420 8-bit IPB. If you're shooting a film then pairing 1080p 20Mbps 420 8-bit shots with 200Mbps 422 10-bit ALL-I 1080p or 400Mbps 422 10-bit ALL-I 4K is a very significant mismatch!

     Having a GH5 for sports is actually a great setup. You have MFT and the crop factor gives you access to much longer equivalent focal lengths for less money than the big zooms, the IBIS is great, and the viewfinder works in harsh lighting. The only real Achilles heel for sports is the codec.
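     In case the 20Mbps figure looks like it came from nowhere, it's just the conform arithmetic - the 100Mbps is spread over 120 frames each second, and playing those frames back at 24p only delivers 24 frames' worth of that data per second of output:

```python
# Back-of-envelope check on the conform maths above.
def conformed_bitrate(bitrate_mbps, shot_fps, playback_fps):
    per_frame = bitrate_mbps / shot_fps   # data "budget" per frame as shot
    return per_frame * playback_fps       # effective bitrate once conformed

print(conformed_bitrate(100, 120, 24))    # 20.0 Mbps - the GH5's 120p conformed to 24p
print(conformed_bitrate(400, 24, 24))     # 400.0 Mbps - the 4K ALL-I mode, for contrast
```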
  9. I've moved away from the XC10 and have completely changed the way I shoot, so this is about grading footage I already have. It's ok, it's making me a better colourist as I'm getting pushed to work out how to solve all kinds of issues. My poor choices meant I pushed myself into the deep end, and I think I might have just gotten to the stage of having my first few gasps of air as I learn to swim 🙂
  10. You seem to be very focused on sharpness, but you're buying the wrong lenses for that. If you're talking about using it on your P4K (like in the other thread) then maybe you should just invest in the Sigma 18-35 f/1.8 and Sigma 50-100 f/1.8 with a Metabones Speed Booster and be done with it. They're super sharp and will cover most of the range required. If you're chasing sharpness wide open and also the softer contrast of vintage lenses then the Tiffen filters can give that look quite easily.
  11. Buy a 50mm F4 lens. Virtually guaranteed to be sharp wide open.
  12. Obviously we're spending TheoryCoins, because this is a camera forum on the internet, but for me it's no choice, it has to be the Tokina 11-20mm. My favourite FOV is 35mm (FF equiv) and second favourite is 15mm, and the Tokina is the only one in the above list that goes wider than the mid 20s.
  13. Getting better! It might be a little bit too pushed in terms of sat and contrast, but I'm gradually working out the kinks.
  14. Panasonic GH6

      That's true. Would that mean that it would be 420 colour? That's what the 5K mode is currently. Maybe it will happen and then I'll be happy I switched to 1080 - it's not like YT is good enough to tell the difference between 4K and a 1080p up-res anyway....
  15. Actually it's both. The original has quite a lot of blue in it, while the reference images have no objects in them that are blue, and all their colours lean in completely the opposite direction. You can pull a key and neutralise out the blues, but then you end up with some salmon tones, which you then need to fix, and in the end you end up with a background that is bland and may as well have been desaturated. The reference images all have darker green foliage in the background; the footage has a bright orange/blue city in the background.
  16. LUTs can be tricky, and it's impossible to make them completely general purpose. I'm not surprised that when you light a scene you can get great results but that it doesn't work so well in less manicured situations. On a controlled scene there are certain conventions that can be assumed: subject skintones will be at a certain IRE, the background will be slightly or a lot darker than the subject, there will be no heavily saturated objects, etc. When designing a LUT for a situation like that you can assume that anything warm and above 50% is skin tone highlights, that anything not skin tone and bright can be much more heavily processed, that non-skintone darker areas can be desaturated and the colours all played with, etc etc. Take an image shot anywhere else, where there are bright and saturated objects in the background, and the LUT will turn the whole scene into a screaming, clipped carnival.

      Of course, if you assume there could be anything from 110 IRE down to -10 IRE then you're immediately screwed, because most LUTs only go from 0-100, but even then, if you pull those things back into legal range you'll end up with something that looks flat and awful on controlled and low-DR scenes.

      I have found that exposing the GH5 can be a challenge if you want to do it technically, but shoot creatively and grade each shot individually and you're fine. The more I learn to grade, the more I'm learning that it's actually very simple, but with a range of techniques for solving various problems and a few cool tweaks to make the image pop a little more. Of course, the biggest improvement to my colour grading has been learning to use my equipment better to capture the images in the first place.

      I'd still be very interested to see a before/after of the Noam Kroll LUT if you have time.
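      To make the 0-100 point concrete, here's a toy illustration (a 1D LUT in numpy rather than a real 3D cube, and all the numbers are made up): the table is only defined over 0.0-1.0, so anything outside that range either gets clamped, losing the detail, or has to be scaled back into range first, which flattens controlled, low-DR scenes.

```python
import numpy as np

# Toy example: sub-black and super-white values have to be clipped before the
# LUT lookup, because the table simply isn't defined outside 0.0-1.0.
def apply_1d_lut(pixels, lut):
    pixels = np.clip(pixels, 0.0, 1.0)                 # everything outside "0-100 IRE" is lost here
    idx = np.round(pixels * (len(lut) - 1)).astype(int)
    return lut[idx]

lut = np.linspace(0.0, 1.0, 33) ** 0.8                 # dummy 33-point "look"
shot = np.array([-0.05, 0.2, 0.7, 1.08])               # sub-black, mids, a super-white
print(apply_1d_lut(shot, lut))                         # -0.05 and 1.08 come out as plain 0.0 and 1.0
```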
  17. That's a good strategy, although it makes me a little confused as to why you would comment in a colour grading thread that there's a magic button that will match any shot to any other shot and then it can just be converted to a LUT and you're done.... Kind of like a colourist telling a cinematographer they can just take their cell phone out of their pocket and wave it around to get cinematic images, but when told why that won't work they just reply "oh I just hire people for that, I'm not interested in learning anything new...".
  18. You may have read that more keyframes give better motion cadence, which I suspect is true, but only on extremely, extremely compressed files. I tried to replicate motion cadence issues but failed completely to do so. Here's the thread: The editing thread linked above showed that the difference between a cut one frame ahead and a cut on the beat is noticeable, but the difference between a cut one frame ahead and a cut two frames ahead of the audio wasn't nearly as large, and there's a decent tolerance for things being ahead, so I'd suggest doing a few test uploads and seeing how far ahead of the beat looks best. BTW: YT moves the image one frame ahead of the audio when you upload, and seemingly more than one frame at smaller resolutions. Check out the great videos by John Hess linked in the Editing thread above.
  19. Panasonic GH6

      That would mean that shooting on the GH5 in MFT 4K was a downsampled image from 5.2K, and then shooting on the GH6 in MFT 4K would be a straight 4K and not downsampled. I.e., a downgrade.
  20. WTF, that first one is totally not the emoji I picked!
  21. What do I think? I think you should give it a go..... everything looks easy until it's your turn 🙂 Shade works too, but it's a pretty simple equation: if you want what they got, then do what they did. If shade was the same as a diffusion panel and the shot-matching feature did 90% of the work, then why would those things exist? 😬🤔🤔
  22. Ultimately, the best thing to apply for this look would be one of these:
  23. Here's the result of that Shot Match feature, just so you're aware: I think the video you're looking for from Aram K is this one:
  24. A few thoughts:

      - Look at your scopes - look at what the waveform monitor is telling you about the levels in the image and their colour balance. Check this out: You can see that in the shadows there is pink below green, which means there's green in there. You can see in the middle and on the right there are much higher levels where red is highest and blue is lowest with green in the middle, which means warm tones at a higher luminance - in this case the girl's skin tones.

      - Put a global adjustment over your reference image and your grade, turning the saturation right up. It will make your vectorscope much easier to read and will accentuate all the colours so they're easier to see - it's like colour grading with a magnifying glass (there's a rough sketch of the idea at the end of this post). Match the colours like that, turn it off, and it will be a much better match than it looked while you were doing it.

      - Apply an outside window and pull gain down to zero to crush the whole image (except for your window), then look at the scopes to see what just that part of the image looks like. Hover the window over the skin tones and see what the vectorscope is telling you - where is the centre of the hue range? How much hue variation is there? How saturated are they? Hover over other parts of the image too.

      - Use a glow-style effect to bloom highlights to match the reference before you apply any other adjustments - they radically affect your shadow levels and your contrast overall.

      - When playing with shadow levels and black levels, crank the Gain right up so that most of the image is clipping and you've "zoomed into" the shadows - now 10 IRE might be up at 100 IRE and you can easily compare your reference with your grade.

      All these are useful for grading your own footage in reference to itself too.
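      Here's a rough sketch of that saturation "magnifying glass" trick in numpy terms, for anyone who wants to see why it works (plain RGB maths with the usual Rec.709 luma weights, not Resolve's actual pipeline, and the frames are random stand-ins):

```python
import numpy as np

# Exaggerating saturation on both images makes small hue mismatches obvious;
# match them in that state, then switch the boost off.
def boost_saturation(rgb, amount=3.0):
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])            # Rec.709 luma
    return np.clip(luma[..., None] + (rgb - luma[..., None]) * amount, 0.0, 1.0)

reference = np.random.rand(4, 4, 3)   # stand-ins for real frames
graded    = np.random.rand(4, 4, 3)
diff = np.abs(boost_saturation(reference) - boost_saturation(graded))
print(diff.mean())                    # crude "how far apart are the colours" number
```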
  25. Attempt number one.
      Reference:
      Grade:
      ...and the node graph, which quite obviously shows that this is the wrong way to go about such a thing:

      Attempt 2
      Reference:
      Grade:
      Node graph - simpler but still awful:

      The absolute best way to match scenes is to get any camera and point it at something that looks like your reference.