
kye

Members
  • Posts

    7,817
  • Joined

  • Last visited

Everything posted by kye

  1. I'm yet to make the move, but I think I'll have to eventually. Slightly OT, but don't they make new OSes free for a certain period of time, after which you have to pay for the upgrade? I think I had to pay for one when the version I was running didn't support the software I wanted to run, and the only free version was the one with buggy wifi, so I paid to get the second-newest version.
  2. Absolutely! Think of all the awful products out there, and take a minute to ponder why no reviewer has ever looked at any of them. It's simply incredible that the people who make awful products never contact reviewers, or if they do then the reviewers aren't interested, or if they are then the product gets lost in the post. That postal system must be crazy good at assessing how good products are - it's the final gatekeeper in the entire system!!
  3. I'd have a look at what the YouTube community uses (especially people who do talking to camera) because they shoot a ton, are hard on their equipment and value reliability and durability very highly. Kai Wong and Lok for example.
  4. I've used it with a key to beautify skin a few times, but I'm not sure that it's the best way to go as I haven't compared it to other methods. Also worth checking out is the Sharpen and Soften OFX plugin, which gives you independent sliders for small, medium and large textures so you have more control, and it lets you sharpen some and soften others, so it's powerful for skin tones. If you want to use a dedicated sharpen or blur tool you can use the Edge Detection OFX plugin (I think that's what it's called) in the mode that displays the edges, and then connect the main output from that to the key input of the node with your effect in it. This gives you the ability to apply any filter to edges (and presumably also the inverse) in a very controllable way. There are so many ways to get the job done in Resolve!
  5. @Sage I have shot a bunch of footage at night with HLG at various exposure levels (ETTR as well as ETTL to keep highlights) and I'm wondering how to pre-process the footage in post to get the right levels for the LUTs. At what step in the Resolve node graph would you suggest adjusting the exposure back so skin tones are in the recommended 40-60 IRE, and which controls would you suggest (perhaps either the Gamma wheel or the Offset wheel?). Thanks!
  6. My timecode keeps resetting - does anyone know what settings to use so that it only counts up while recording but doesn't reset? It reset at 00:13:11:01 so it didn't "wrap-around".
  7. What is your impression of how transferrable things like this are? I read the ML forums for a while and saw some instances of where figuring something out on one camera helped to solve it on other models, but I'm not sure if these were isolated instances or if this was more the norm. If so, perhaps these findings might apply to the 5D3 and maybe 5D4?
  8. Yes, I suppose that the 1DX isn't so bad if you think of it as having a battery grip included. Personally though I don't use a battery grip and I would expect to be able to get away with not having one, so it depends on your perspective.
  9. Sorry to hear you got two duds... I'd suggest buying a lottery ticket - luck that bad can only mean it's been redirected somewhere else! I was particularly interested in Fuji's dial system; being able to control every parameter in either auto or a given manual setting is great, and gives more flexibility than some PASM mode implementations do, with a much better interface. Are you back to using the 1DXii for video as well? It looks great but I hear it weighs as much as a bus!
  10. kye

    Lenses

    I haven't seen it discussed much for video (maybe it was and I missed it) but lots of people have looked at it for photos. It was the widest non-fisheye lens that I could find for a reasonable price. It looks hilarious on the GH5 because it's so small and thin! My strategy has been to get a 17.5mm lens for generic people / environment shots, and the 8mm for those wider "wow" scenery shots where you want the grand look of a wide angle, and it's been great for that. Today was my third day in India and it hasn't come off the camera yet as everything here seems to lend itself to that "wow" aesthetic - look at that building - look at how many people there are - look at this amazing chaos... Basic grade and crop.... I'm still making friends with the GH5 so these might not be sharp, but they certainly suit the geometric nature of the architecture
  11. kye

    Lenses

    I'd be sceptical - the bridge in the sample shot goes exactly through the middle of the frame, so it won't show many forms of distortion. 7artisans does have a range of lenses and a track record though, so who knows. I've been using the SLR Magic 8mm F4 on my GH5 and I'm really enjoying the image, but it's also designed as a drone lens, so the ergonomics aren't that great!
  12. kye

    Color science

    So, if I understand correctly, colour information is worse than in a theoretical 8-bit RAW codec then?
  13. Does Resolve have support for the NX1 colour and gamma profiles? If so, you could use the Colour Space Transform OFX plugin to convert both types of files to a common colour space. Although it's not perfect, I've had good results and it does most of the work. I just wish it had support for Protune so I could also use it to match GoPro footage.
  14. kye

    Color science

    Good post. I think we're mostly agreeing, but there are aspects of what you said that I think are technically correct but don't hold up in practice. @TheRenaissanceMan touched on two of the biggest issues - the limitations of 8-bit video and the ease of use. It is technically true that you can use software (like Resolve or Photoshop) to convert an image from basically any set of colours into any other set of colours, but with 8-bit files the information needed to do a seamless job of it may well be missing. Worse still, the closer a match you want, the more manipulations you must do, and the more complicated the processing becomes. In teaching myself to colour correct and grade I downloaded well-shot, high-quality footage and found that good results were very easy and simple to achieve. But try inter-cutting and colour matching multiple low-quality consumer cameras and you'll realise that in practice it's either incredibly difficult or just not possible. Absolutely!
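The 8-bit limitation can be seen with a quick sketch. This is my own illustration, not anything from the thread, and the gamma value is an arbitrary stand-in for a strong grading curve: push all 256 possible 8-bit code values through the curve and count how many distinct values survive the re-quantization.

```python
# Sketch: how many distinct 8-bit code values survive a strong grading
# curve? The 0.45 gamma is an arbitrary choice for illustration.

def apply_curve(v, gamma=0.45):
    """Apply a gamma curve to an 8-bit value and re-quantize to 8 bits."""
    return round(((v / 255) ** gamma) * 255)

# Start with every possible 8-bit value.
source = range(256)

# Push them through the curve once: in the compressed end of the curve,
# neighbouring code values collide onto the same output value.
survivors = len({apply_curve(v) for v in source})

# Fewer than 256 distinct values remain, so a later inverse curve cannot
# recover the original exactly - that information is simply gone.
print(survivors)
```

The further you have to push the footage to match another camera, the more of these collisions (and gaps) you accumulate, which is why matching low-quality 8-bit sources gets so hard so quickly.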
  15. kye

    Color science

    Manufacturers design the Bayer filter for their cameras, adjusting the mix and strength of the various tints in order to choose what parts of the visible and invisible spectrum hit the photosites in the sensor. This is part of colour science too. Did you even watch the video?
  16. Use a tripod and fix it in post!!
  17. You should all hear how the folks over at liftgammagain.com turn themselves inside out over browser colour renderings... almost as much as the client watching the latest render on grandma's 15-year-old TV and then breathing fire down the phone about how the colours have all gone to hell... I'm sure many people on here are familiar with the issues of client management, but imagine if colour was your only job!!
  18. Dogs: the guard animal for people that don't care about protecting their property enough to keep geese.
  19. Good question. Here's some info: (Source) However, it seems some codecs are supported. V14 release notes: Unfortunately it doesn't seem like there's a complete list anywhere, the "what's new" feature lists are only incremental, and BM don't use consistent language in those posts so searching for phrases doesn't work either (sometimes they say "hardware accelerated" sometimes "GPU accelerated" etc). My guess is that you should just set up a dummy project with a high resolution output and then export it in every codec you can think of (just queue them up, hit go and walk away) and then pull all of them into a timeline and compare the FPS and CPU/GPU load on each type. It's a bit of admin you shouldn't have to do, but it will definitely answer your question.
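The "export the same timeline in every codec and compare" test could be partly scripted. The sketch below is my own, not anything from BM or the thread - it assumes ffmpeg is installed and that the clip filenames in CLIPS exist, both of which are hypothetical:

```python
# Rough harness for comparing decode speed of the same timeline exported
# in different codecs. Assumes ffmpeg is on the PATH and that the files
# in CLIPS are real exports - both are assumptions for illustration.

import subprocess
import time

CLIPS = ["export_prores.mov", "export_h264.mp4", "export_dnxhr.mxf"]  # hypothetical exports

def decode_seconds(path):
    """Decode one clip to null output and return wall-clock seconds taken."""
    start = time.perf_counter()
    subprocess.run(["ffmpeg", "-i", path, "-f", "null", "-"],
                   check=True, capture_output=True)
    return time.perf_counter() - start

def rank_by_speed(timings):
    """Return clip names ordered fastest-to-slowest from a {name: seconds} dict."""
    return sorted(timings, key=timings.get)

# Example use (requires real exported files):
#   timings = {clip: decode_seconds(clip) for clip in CLIPS}
#   print(rank_by_speed(timings))
```

Decode speed outside Resolve isn't a perfect proxy for playback performance inside it, but it gives a first-pass ranking before doing the full timeline test with CPU/GPU monitoring.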
  20. Or they could treat it like a cinema camera ??? Besides, we all know how that song about camera stabilisation went - "if you liked it then you should have put a rig on it".
  21. Actually, I think that the tripod might be the most significant piece of equipment in this entire conversation. So far I have seen only a couple of videos where I can appreciate the IQ, with the rest of them leaving me wondering if this camera automatically gives you Parkinson's as soon as you hit record. I spend the whole time thinking things like "It looks sunny, but maybe the wind is really cold" .... "other people don't seem to be dressed warmly either" .... "it is high up, so maybe they ran up the hill" .... "the girl being filmed looks healthy, so they're probably not going through withdrawal" .... I wish I was joking!
  22. I'm actually not that concerned. Considering that 150Mbps 10-bit is so nice and the UHS-II cards are so expensive, I probably would have ended up with the same card even if I knew about it before buying. Plus I shoot a lot of footage and so I have large capacity cards as well.
  23. Thanks - I'll check that out. I've just done a bit of googling about speed performance of SD cards, and the short answer is that I can't find a good answer. The longer answer is that there are a number of things that might be going on here, but I can't verify whether they are or not. There are two candidates that I can think of: one is the type of file system used on the card, and the other is the pattern of where each chunk of data is written to the card (interleaving). This is interesting: (Source) This gives you an idea of the complicated housekeeping going on - you don't just write a block of data to an empty slot - you have to update the table of contents as well. This all references the FAT file system (of which there are multiple versions), and I'm not sure, but it might be that there are different housekeeping requirements between different file systems.

The second potential source of performance differences could be timing and interleave issues. Interleave is something that happens on physical disk drives. Imagine you have a disk spinning, and you write a block of data that takes a quarter turn, and the drive then needs to ask for the next block of data, but by the time that block arrives the disk has spun a little bit further. If the disk were organised so that the next block's location came straight after the previous block, you would have to wait for the disk to make almost a complete 360-degree rotation before that block came around again. To solve this you would format the disk so that the blocks were arranged to minimise that waiting time. With an SD card there is obviously no physical movement, but there might still be timing issues: for example it might take time to switch between different blocks, or there might be modes where if you miss a cycle you have to wait for the next one to begin, or whatever.
The above quote indicates that for every chunk of data written, other areas have to be updated too, and that the largest block size would be useful because you write more data per set of housekeeping. However, if every write took a whole cycle, then smaller cycles might be useful so that the housekeeping writes spend less time waiting for a cycle to end. I'm really just guessing with this stuff, but I know from older technologies like physical HDDs and memory timings that these things exist there and might be at play here.

EDIT: I should add that I'm not even sure of the terminology in use here. There are even three ways to format a disk. IIRC, back in the day there were two: normal and low-level. The normal one overwrote the whole disk but kept the interleaving the same, and low-level let you change the interleaving pattern while also overwriting the whole disk. Overwriting the whole disk has the advantage of detecting bad sectors and marking them bad in the disk's index so they won't be used. Then along came the quick format, which just overwrote the file allocation table but left the data intact on the drive. Somewhere along the way the quick format turned into the normal format and the low-level format became a mystery. I think manufacturers got better at optimising interleave patterns and consumers were getting less technically educated, so they just stopped letting people play with them. I know that if you choose low-level format in a camera now it definitely doesn't ask you for interleave options, so who knows whether the camera just leaves it alone, has a one-interleave-fits-all approach, or runs a bunch of tests and chooses the fastest one like the old software used to (I very much doubt that!).
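The old spinning-disk interleave trade-off can be sketched numerically. This toy simulation is my own illustration, not from the post - the 16-slot track and the two-slot "think time" between reads are arbitrary assumptions:

```python
# Toy model of sector interleave on a spinning disk. The head passes one
# slot per time unit; after reading a sector the controller needs `gap`
# slot-times of housekeeping before it can read the next one.

def slots_to_read(n_sectors, slots_per_track, interleave, gap=2):
    """Slot-times needed to read n_sectors logical sectors laid out
    `interleave` physical slots apart on the track."""
    t = 0      # elapsed time in slot units; head is over slot t % slots_per_track
    ready = 0  # earliest time the controller can start the next read
    for i in range(n_sectors):
        target = (i * interleave) % slots_per_track
        t = max(t, ready)
        t += (target - t) % slots_per_track  # spin until target slot arrives
        t += 1                               # reading the sector takes one slot
        ready = t + gap                      # housekeeping before the next read
    return t

# Consecutive layout: the controller is never ready in time, so each
# sector costs almost a full rotation of waiting.
print(slots_to_read(4, 16, interleave=1))  # -> 52

# Interleaved layout: the next sector arrives just as the controller
# finishes its housekeeping, so there is almost no waiting.
print(slots_to_read(4, 16, interleave=3))  # -> 10
```

Whether anything analogous applies inside an SD controller is exactly the open question in the post - this only shows why the mechanism mattered on the technology it came from.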
  24. Yeah, that video is interesting... that makes a few from him that I've found useful.