Leaderboard

Popular Content

Showing content with the highest reputation on 04/18/2021 in all areas

  1. This is a videoclip that I shot mixing the Bmpcc4k (C/Y Zeiss primes + Speedbooster) and the Micro Cinema with Russian 16mm lenses (Meopta Largor and Openar), both in CDNG. I also use the Micro Cinema as an A-cam with a more modern look, like in this other videoclip, shot with the Speedbooster and Sigma 18-35mm (car interiors) and Tokina 11-16mm (day exteriors). I'm still learning and trying to get a more organic look from the Bmpcc4k. I have tested the new Gen 5 colour science (not in this videoclip) in Resolve 17 and found it more pleasant than Gen 4, as most users say, but for now I still prefer the more organic texture of the old Fairchild sensor. And sorry for my English!
    1 point
  2. This is my concern too. Hopefully I have dissuaded them from your arguments sufficiently. Once again, you're deliberately oversimplifying this to make my arguments sound silly, because you can't argue against their logic in a calm and rational way. This is how a camera sensor works: look at the pattern of red photosites captured by the camera. It is missing every second row and every second column. To produce a red value for every pixel in the output, the camera must interpolate from the values it did measure, just like upscaling an image.

This is typical of the arguments you are making in this thread: technically correct, and it sounds like you might be raising valid objections, but it is just technical nit-picking and shows that you are missing the point, either deliberately or naively. My point has been, ever since I raised it, that camera sensors involve significant interpolation. This is a problem for your argument, because your entire argument is that Yedlin's test is invalid because the pixels blended with each other (as you showed in your frame-grabs), which you claimed was due to interpolation, scaling, or some other resolution issue. Your criticism, then, is that a resolution test cannot involve interpolation, and the problem with that is that almost every camera has interpolation built in fundamentally.

I mentioned Bayer sensors, and you said the above. I showed above that Bayer sensors have fewer red photosites than output pixels, therefore they must interpolate, but what about the Fuji X-T3? The Fuji cameras have an X-Trans sensor, which looks like this: notice something about it? Correct, it too doesn't have a red value for every pixel, or a green value for every pixel, or a blue value for every pixel. Guess what that means: interpolation!

"Scanning back," you say. Well, that's a super-broad term, and it's a pretty niche market. I'm not watching that much TV shot with a medium-format camera. If you are, well, good for you. And finally, Foveon. Now we get to a sensor that doesn't need to interpolate, because it measures all three colours for each pixel.

So I made a criticism about interpolation by mentioning Bayer sensors, and you criticised my argument by picking up on the word "debayer" but included the X-Trans sensor in your answer, when the X-Trans sensor has the same interpolation that you say can't be used! You are not arguing against my argument; you are just cherry-picking little things to argue against in isolation. A friend PM'd me to say he thought you were just arguing for its own sake, and I don't know if that's true or not, but you're not making sensible counter-arguments to what I'm actually saying. So you criticise Yedlin for his use of interpolation, and yet you previously said that "We can determine the camera resolution merely from output of the ADC or from the camera files." You're just nit-picking on tiny details while your argument contains all manner of contradictions.
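The red-channel interpolation described above can be sketched in code. This is a minimal illustration of plain bilinear demosaicing on an assumed RGGB Bayer layout, not any particular camera's algorithm (real pipelines use more sophisticated, edge-aware methods); the function name is made up for the example.

```python
import numpy as np

def demosaic_red(mosaic):
    """Estimate a red value for every pixel of an RGGB Bayer mosaic
    by bilinear interpolation: average the measured red neighbours.

    In an RGGB pattern, red is sampled only at even rows and even
    columns; three out of every four output pixels get an
    interpolated, not measured, red value.
    """
    h, w = mosaic.shape
    # Mask of where red was actually measured: even rows, even columns.
    mask = np.zeros((h, w))
    mask[0::2, 0::2] = 1.0
    samples = mosaic * mask

    # 3x3 bilinear kernel: a missing pixel between two red samples
    # gets their average; a pixel diagonal to four samples gets the
    # average of all four.
    k = np.array([[0.25, 0.5, 0.25],
                  [0.5,  1.0, 0.5 ],
                  [0.25, 0.5, 0.25]])

    def conv(img):
        out = np.zeros_like(img)
        p = np.pad(img, 1)
        for dy in range(3):
            for dx in range(3):
                out += k[dy, dx] * p[dy:dy + h, dx:dx + w]
        return out

    # Normalise by the summed weights of the samples that actually
    # contributed, so image edges are handled correctly.
    return conv(samples) / conv(mask)
```

At a measured photosite the function returns the measured value unchanged; everywhere else the red value is a blend of neighbours, which is exactly the kind of built-in interpolation being argued about.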
    1 point
  3. To be fair Gerald’s audience isn’t who the fp camera is for and he knows it. I was recently looking at the fp-L + tiny sigma 24mm f3.5 as a small b-cam and backup high-megapixel photo camera (electronic shutter not an issue for my use case). I agree the 61MP sensor in such a small body is pretty cool and the ability to strip it down by taking off a modular grip and EVF is attractive. Lots of options. I would still like to see a bit more ecosystem pop up before I drop some money.
    1 point
  4. Agreed. I can't imagine you would need to push the colors around very much for interviews, so 10 bit shouldn't be necessary. I love that 10 bit is becoming regular but I think people overestimate how much they really need to use it.
    1 point
  5. I've never owned a more inspiring camera than the OG FP. I bought a C70 to make work a little easier and tried out a Lumix S5 for a week to check out the advantages for anamorphic shooting but quickly returned it. Nothing has come close to the IQ I'm getting out of the FP.
    1 point
  6. I was thinking something along those lines, like the phantom power should be turned off first. That's why I think I should read the manual to see if there is a startup sequence. I just don't remember these problems with the GH5, but of course there could be different tolerances with the S5. I have the Sigma EF adapter and EF-mount lenses, so it may work slightly differently with L-mount lenses. For me, I keep the dial in Movie Mode (M with the little camera beside it), the EF lens set to AF on, and the back-button focus selector set to S(ingle) or C(ontinuous). Continuous AF doesn't work with the adapter, so when you have the adapter mounted and an EF lens attached, S and C do pretty much the same thing. You can also use M, but then you can't use the half-press method to AF. The three ways I have found to use hybrid one-shot AF with the S5 are:

Focus Mode Dial set to S or C - The EF lens has to have AF turned on (switch on the lens) and the top dial in Movie mode. Set this way, I can half-press to AF both while recording and before starting to record. I typically pick 1-Area for my focus mode.

Focus Mode Dial set to M - The EF lens has to have AF turned on and the top dial in Movie mode. This is where it gets interesting. With this combination of settings, half-pressing the shutter button doesn't do anything; the only two options are to turn the focus ring to focus manually, or to tap the screen where it says AF and it will focus once for you. What I like about this mode is that for the EF lenses that support it, the camera will automatically punch in to help you MF when you turn the focus ring on the lens (as long as you set it up to do this in the menu and as long as you haven't yet pressed record). For the other two modes, if you want to punch in to check focus, you have to press the center button on the focus mode selector switch.

Lens AF Mode Off or Fully Manual Lenses - If you switch the lens AF mode to off, you are 100% responsible for pulling focus. What is pretty cool about this mode is that the focus selector switch doesn't matter anymore: S/C/M all automatically punch in to help you set focus, as long as you have it set that way in the menu, as long as you haven't yet pressed record, and as long as the lens reports to the camera that the ring is being rotated (most of my lenses do).

I'll admit I've gotten kind of lazy with focusing now that I have the S5. With the GH5 all of my lenses were manual, so I used to focus everything by hand. With the S5 the focus peaking is so hard to see (IMO worse than the GH5) that I've found manual focusing much harder, so I tend to rely more on the hybrid method. The only problem is that the camera sometimes completely fails to focus in this mode, so I still have to do some manual focusing. Like many others, I truly hope Panasonic does something to improve their AF. I would buy an S5II if the only improvement were a working AF system.
    1 point
  7. You really turned this into a rant about trans folks?
    1 point
  8. Who cares!! The more unique and customised my camera is, the better. Who wants to use whatever everyone else does?
    1 point
  9. Now your username is relevant again 😉
    1 point