Everything posted by KnightsFan

  1. This is one of my favorite music videos of all time. I love surrealism and how the strangeness is grounded in reality.
  2. Another YouTube Promo is up! Shot over the course of two very cold days.
  3. Probably the same Sony sensor that's in the Z Cam E2 S6. I thought it was the same sensor as in the X-T3, but I don't know that for sure.
  4. This is great! All around top notch. Great locations, clear sound, really quite enjoyable to watch. I love the prosthetics and the effects were very well done. My biggest critique would be that the knife cutting off the head is not convincing. If you can't get a better shot or make it in post, just cut away before you see it, or give a reaction shot instead. Stylistically, I would prefer deeper DoF and a little more lingering on the scenery. You've got such great locations, but I feel like it was either out of focus or the camera skipped by too quickly.
  5. Nice video and nice bike! I like the grungy color. The gimbal moves aren't quite as smooth as they could be, especially on pans, but that just takes lots of practice.
  6. Look into 2-pass encoding. You can do it with ffmpeg; there's a guide here: https://trac.ffmpeg.org/wiki/Encode/H.264 (the same approach works with H.265). Alternatively, you can specify min and max bitrates, but two-pass is generally the way to go for targeting a specific file size; see the sketch below. I'm sure there are GUIs out there that do it, just look for something with two-pass encoding options.
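     A minimal sketch of the two-pass workflow, wrapped in Python for convenience (the filenames and the 8M target are placeholders; the flags mirror the wiki page above):

         import os
         import subprocess

         SRC, OUT = "input.mov", "output.mp4"   # hypothetical files
         V_BITRATE = "8M"   # choose so bitrate x duration ~ target file size

         # Pass 1: analysis only; video goes to the null muxer, audio is dropped.
         subprocess.run(["ffmpeg", "-y", "-i", SRC,
                         "-c:v", "libx264", "-b:v", V_BITRATE,
                         "-pass", "1", "-an", "-f", "null", os.devnull],
                        check=True)

         # Pass 2: the real encode, using the stats file written by pass 1.
         subprocess.run(["ffmpeg", "-y", "-i", SRC,
                         "-c:v", "libx264", "-b:v", V_BITRATE,
                         "-pass", "2", "-c:a", "aac", OUT],
                        check=True)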
  7. FFmpeg is the library, so that shouldn't be an issue. I don't know if performance is different. Resolve's encoders are poor in general. I recently discovered that its AAC encoder produces all kinds of artifacts, so I'm using trusty ffmpeg to encode the audio from PCM and losslessly mux it with the video, roughly as in the sketch below. I haven't tested hardware encoding quality in ffmpeg; I use it for creating proxies, so I want speed rather than top quality. It would be interesting to test.
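     For reference, the encode-and-mux step looks roughly like this (a sketch; the filenames are hypothetical). The video stream is stream-copied bit-for-bit, so only the PCM audio gets re-encoded:

         import subprocess

         subprocess.run(["ffmpeg", "-i", "resolve_export_pcm.mov",
                         "-c:v", "copy",               # no video re-encode: lossless
                         "-c:a", "aac", "-b:a", "320k",
                         "muxed.mov"],
                        check=True)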
  8. I always use ffmpeg from the command line. Like @rawshooter said, all the free converters use ffmpeg under the hood but may expose different options. If you put some time into learning it, running ffmpeg commands yourself offers the most flexibility, for free. You can really dial in the quality/encoding-time/size trade-offs to just where you want them (for example, the proxy sketch below), and if you can take advantage of hardware acceleration it's lightning fast.
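     To illustrate the trade-offs, a hypothetical proxy command: constant-quality CRF plus a fast preset keeps files small and encodes quickly, and swapping libx264 for a hardware encoder such as h264_nvenc is faster still, if your ffmpeg build and GPU support it (note that hardware encoders use different rate-control flags):

         import subprocess

         subprocess.run(["ffmpeg", "-i", "clip.mov",
                         "-vf", "scale=-2:720",      # 720p proxy, keep aspect ratio
                         "-c:v", "libx264",          # or e.g. "h264_nvenc" on NVIDIA
                         "-preset", "veryfast", "-crf", "23",
                         "-c:a", "aac",
                         "clip_proxy.mp4"],
                        check=True)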
  9. To be fair, mechanical moving parts are more difficult to make, test, and repair than software. Overall, Blackmagic hardware isn't great: battery life is poor, and the ergonomics have been sketchy (remember the 2.5K?). I bet Blackmagic and Sigma save a lot of cost compared to photo cameras by omitting mechanical shutters and articulating screens.
  10. Apparently the versioning officer from Valve now works at Sony.
  11. How about we manually separate the RGGB channels from a 24 MP raw photo and try various scaling algorithms to make a better scaled Bayer image, something like the sketch below? That would be fun.
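     A sketch of the channel-separation step, assuming the raw file is already loaded as a 2D sensor array (e.g. via the rawpy library) with an RGGB pattern; the stand-in data here is random:

         import numpy as np

         def split_rggb(raw):
             """Split a 2D RGGB Bayer mosaic into four quarter-resolution planes."""
             r  = raw[0::2, 0::2]   # red photosites
             g1 = raw[0::2, 1::2]   # green photosites on the red rows
             g2 = raw[1::2, 0::2]   # green photosites on the blue rows
             b  = raw[1::2, 1::2]   # blue photosites
             return r, g1, g2, b

         raw = np.random.randint(0, 4096, (4000, 6000), dtype=np.uint16)  # stand-in
         r, g1, g2, b = split_rggb(raw)
         # Each plane can now be rescaled independently (nearest, bilinear,
         # Lanczos, ...) and re-interleaved into a smaller Bayer mosaic.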
  12. APS-C is usually 23.5 x 15.6 mm (Sony, Nikon, Fuji), or smaller (Canon). That's a 3:2 aspect ratio, so the area when shooting 16:9 or wider is smaller still (worked numbers below). If anyone does "APS-C with MFT mount" it will likely be 23.5 x 15.6 (Z Cam, for example). No one expects every MFT lens to cover an APS-C sensor; the appeal is the adaptability, plus mirrorless APS-C lenses like the Fujinon MK cine zooms.
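     The worked numbers (a quick sketch using the dimensions above):

         w, h = 23.5, 15.6            # typical APS-C, 3:2, in mm
         area_3x2 = w * h             # ~366.6 mm^2
         h_16x9 = w * 9 / 16          # ~13.2 mm tall at full sensor width
         area_16x9 = w * h_16x9       # ~310.6 mm^2, about 15% less area than 3:2
         print(area_3x2, area_16x9)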
  13. I have only surface-level knowledge, but any time your flange distance is longer than your focal length, you'll need extra lens elements to make a "reverse telephoto" (retrofocal) design. For example, an 18mm lens on an EF mount sits 44mm from the sensor. The extra elements introduce aberrations, which require more elements to correct, and so on. Add to that the complexity of making a zoom lens in the first place, and you quickly rack up size and cost. That's one reason there are smaller wide angle lenses made exclusively for mirrorless mounts, such as the SLR Magic MicroPrimes. On the telephoto end, it's simply the sheer size of the lens elements required: as focal length increases, the physical size of the aperture must also increase to achieve the same f-stop (quick numbers below).
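     Those quick numbers, using f-stop = focal length / entrance-pupil diameter:

         def pupil_diameter(focal_mm, f_stop):
             """Entrance-pupil diameter implied by N = f / D, i.e. D = f / N."""
             return focal_mm / f_stop

         print(pupil_diameter(50, 2.8))    # ~17.9 mm
         print(pupil_diameter(300, 2.8))   # ~107 mm: why fast telephotos are huge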
  14. Because no one cares about APS-C except for Fuji.
  15. In all seriousness, definitely the latter. I remember the first time I shot with a Sony, which was right around the time I first shot with a Blackmagic. Those kinds of experiences stick with you.
  16. Maybe it's because you post topics titled "Sony A7R IV - can confirm colour is still SH**!"? Regarding the lack of Panasonic FF and Z Cam coverage, I bet a lot of us are just constantly a few years behind on new gear. The S1 has looked perfect to me since day 1, but I'm waiting for a really good deal and/or a big project to use it on.
  17. I can't believe the poor NX1 is relegated to the "Something weird" category. Actually, on second thought, that's pretty accurate. Carry on.
  18. I meant an example showing how the P4K is better than the GH5S in the same conditions. As long as you white balance in linear gamma (see the sketch below), I've found no benefit to shooting raw in terms of the ability to white balance in post. I have edited quite a bit of material shot at 400 Mbps on a GH5S and never really ran into any compression issues, even on extreme grades, so I'm curious to see any examples where the P4K does better. Same with exposure: you can correct exposure on any camera with the same results as raw if there aren't compression artifacts, and you can do it in linear. Since they share the same sensor, I would be very surprised if noise levels were different, and in my experience I haven't run into compression issues with GH5S footage.
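     A sketch of what I mean by balancing in linear, assuming a simple power-law 2.4 gamma as a stand-in transfer function (real footage needs the camera's actual log-to-linear transform) and a hypothetical grey-card sample:

         import numpy as np

         GAMMA = 2.4   # stand-in; substitute the real camera transfer function

         def wb_in_linear(img, grey_rgb):
             """Apply white-balance gains in linear light, not in the encoded gamma."""
             lin = img ** GAMMA                           # decode to linear
             grey_lin = grey_rgb ** GAMMA
             gains = grey_lin.mean() / grey_lin           # make the grey card neutral
             return np.clip(lin * gains, 0, 1) ** (1 / GAMMA)  # re-encode

         img = np.random.rand(1080, 1920, 3).astype(np.float32)  # stand-in frame, 0-1
         grey = np.array([0.45, 0.42, 0.50])   # hypothetical grey-card pixel values
         balanced = wb_in_linear(img, grey)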
  19. That's pretty much what I was saying terminology-wise: f stop is not just a theoretical number from a spec sheet, it's a value you can measure on a physical copy of a lens. I didn't mean to imply you were incorrect at all, just thought what you said could be misunderstood to mean that f-stop is some sort of voodoo number a manufacturer pulls out of a hat.
  20. Actually, what the rumor says is that the person who claimed anamorphic and an articulated screen has been shown to be wrong on the date, so we should not automatically trust everything else in the list.
  21. Also, all lenses have some light loss; you won't find an f/1.4 lens that is also T1.4 (unless it's a pinhole lens, but that won't be a happy solution for video, lol). So that doesn't mean they will have worse light transmission than any other given f/1.4 lens; make sure to compare T-stop to T-stop when deciding on a lens. I have to quibble about semantics: f-stop isn't theoretical, it's focal length over the diameter of the aperture, a physical property of a lens you can measure (see below). f-stop simply is not a measure of light intensity, so it's only theoretical if you use it as such. Of course you are right in practical terms; it's just wording.
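     In numbers: T-stop is the f-stop corrected for the fraction of light the glass actually transmits, T = N / sqrt(transmittance). A sketch (the 85% figure is an invented example value):

         import math

         def t_stop(f_stop, transmittance):
             """T-stop from f-stop and total lens transmittance (0-1)."""
             return f_stop / math.sqrt(transmittance)

         print(t_stop(1.4, 0.85))  # ~T1.5 for an f/1.4 lens passing 85% of the light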
  22. Happy new year! Just wanted to say how much I appreciate your website and your work. I recommend it every time someone asks me where to get music.
  23. I didn't watch Gerald's entire video, just the part linked to, but what I didn't see was a test of color across the exposure range. Most cameras have excellent color at middle grey exposure. In my experience, "bad color" is almost entirely due to color shifts across the exposure range: the same object won't be the exact same hue depending on how you expose it. What I do with new cameras is shoot a color chart at every exposure (using shutter speed or iris to adjust, NOT gain), and then create a LUT that makes the color chart maintain hue across different exposures, without touching any of the adjustments for the middle grey exposure (the sketch below shows the measurement behind this). I've found such a LUT makes 95% of shots immediately look better.
     The other part of bad color is white balance. Auto WB aside, even balancing from a grey card won't be identical between cameras. The only way to do it properly is to transform the footage to linear gamma, perform the white balance corrections, and then transform it into your gamma of choice for other corrections/grading. If your software isn't color managed, this is all but impossible.
     Older Sony cameras had both issues, and with the amount of correction needed to simply even out hues across the exposure range and correct the atrocious WB, the 8-bit footage fell apart. It gets pasty and loses vibrance, especially in the shadows; it loses its "thickness," to use a popular term. And the compression starts to show blocky hue variance that absolutely ruins skin tones. I imagine this last part can be mitigated with an external recorder to some degree, but no amount of pseudo-10-bit ProRes can account for WB issues and hue shifts on any camera.
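     The measurement behind that LUT, sketched with invented patch values (in practice you'd sample the same chart patch from shots at each exposure): compare every exposure's hue against the middle-grey reference.

         import colorsys

         # Hypothetical RGB samples (0-1) of one chart patch from -2 to +2 stops.
         patch_by_stop = {
             -2: (0.10, 0.06, 0.05),
             -1: (0.21, 0.12, 0.10),
              0: (0.42, 0.24, 0.20),   # reference: middle-grey exposure
              1: (0.71, 0.48, 0.41),
              2: (0.92, 0.78, 0.70),
         }

         ref_hue = colorsys.rgb_to_hsv(*patch_by_stop[0])[0]
         for stop, rgb in sorted(patch_by_stop.items()):
             hue = colorsys.rgb_to_hsv(*rgb)[0]
             print(f"{stop:+d} stops: hue drift {360 * (hue - ref_hue):+.1f} deg")
         # The LUT's job is to drive these drifts to zero while leaving the
         # middle-grey exposure untouched.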
  24. I (and many others) think Sony's color has improved considerably. I've done color work on a couple films shot with the A7 and A7s2, and those were the worst colors I've dealt with, especially SLog2. From what I've seen on the web, the A73 is a significant improvement. I don't think there's anything wrong with modern Sony color anymore.