
Leaderboard

Popular Content

Showing content with the highest reputation on 04/04/2021 in all areas

  1. If it's worth saying, it's worth repeating, right? Also, if it's worth saying, it's definitely worth repeating. I completely agree. To put things into perspective, here's a video from the GH1 that I saw shared recently. To my eyes, it looks better than almost everything I've seen posted in the last couple of years. 1080p camera, 1080p upload, from a camera so old it doesn't change hands much anymore, but when it does, it can be had for about $100.
    3 points
  2. Awesome... next steps are:
     - figure out if there is a cheaper way to get that glass, perhaps in a consumer lens
     - figure out if there is a different way to get the same optical recipe (for example, the Soviet lenses are famously replications of the Zeiss recipes that the Soviets took from Germany at the end of WWII)
     - figure out what the image qualities are that you like from those lenses, and work out if there is a way to replicate them in other ways, like diffusion filters, streak filters, etc.
    2 points
  3. Shot mostly on the Zcam S6 with some Panasonic S1 in there too. I feel like the trailer was harder to make than the actual film. 😅
    1 point
  4. Sage

    GH5 to Alexa Conversion

    Working on it at this very moment (and a raft of updates simultaneously). It's a generational shift rolled in. I rewrote all the code since the last 4.2 update; it's been a very technical winter 🙂
    1 point
  5. Yeah, I love the new RAW Lite, but it kinda isn't worth it when the DR seems gimped even in clog2 mode. So much processing to clean up the shadows just to eke out an extra 0.3 stops. The clog3 mode does offer 1 stop better highlight recovery and is worth using, especially in 4KHQ. I've been shooting it with Cinema Gamut and it looks nice with the R5 Cinema Gamut CLOG3 to Rec.709 LUT. You do have to tweak the exposure quite a bit, but the highlight recovery is much better than clog1. They just need to fix the damn view assist.
    1 point
  6. Thanks for the compliments everyone! Means a lot from people on this forum, as I know we are all pretty picky when it comes to this stuff. 😂😄 For the Z-cam I used a set of Rokinons: 24, 35, 50, and 85. Everything on the Panasonic was shot on a Rokinon 35mm 1.8. We also used a Sigma 14-24 2.8 for a few interior wide shots where we needed that ultra-wide FOV. I have a set of Canon FD lenses which I would have used had I owned them last year; the Rokinons do the job though. I really love Minolta lenses, though they can't be adapted to EF, so only some cameras can use them. I'll do a little breakdown for you, if I can remember it all correctly:
     0:05 Rokinon 35mm 1.4
     0:12 Rokinon 85mm 1.4
     0:15 Rokinon 50mm 1.4
     0:18 Rokinon 50mm 1.4 (1.8 crop in post)
     0:19 Rokinon 50mm 1.4
     0:22 Rokinon 35mm 1.4
     0:31-0:47 Minolta 35mm 1.8 (shot mostly wide open)
     0:50 Mavic 2 Pro
     0:52 Rokinon 24mm 1.4
     0:53-1:03 Minolta 35mm 1.8
     1:06-1:20 Rokinon 35mm 1.4
     ...well, you get the idea at this point.
    1 point
  7. Posted 3x for effect 🤪
    1 point
  8. I've seen some utter trash (a lot of what is out there) and some really wonderful stuff. What it comes down to is what it often/usually comes down to, and that is the skill of the operator + the glass used + the grade. The typical reviewer tests will rarely showcase anything at its best other than by accident, but those folks who actually own something and work with and at it can demonstrate something far superior by design. I don't claim to be any kind of expert, but with the combination of camera choice (S1H) + lens choice + diffusion filter + tweaked profile in camera + basic initial tweaks in editing before a final light grade, using a LUT I created from a Lightroom photography preset of mine, applied at 25-30%, I'm getting a consistent end result that I really like. And then there's YouTube anyway... all kinds of nasty compression etc. I only upload to Vimeo, and in 4K. It's all still a work in progress, albeit somewhat limited progress because of the limited chances to actually get out and shoot real projects that are almost impossible to recreate from home.
    1 point
  9. Well, if you only shot the wide end of the 20-60 @ 50p, it's going to be a 30mm-equivalent FOV at f3.5. For me, that's about as wide as I would ever go, and f3.5 isn't too bad even in low light. I'm really enjoying the 28-70, but on the job I did last week (lockdown 3 starts here today for the next month...) I used it for stills only on the S5. For video, I went with the Sigma 45mm f2.8 on the S1H and it was much better than I thought it would be, which bodes well for the 65mm f2. There are a few options to go wide(r), native L mount or adapted, but the 23mm f1.8 should be coming soon, and that one promises to be compact, light, reasonably priced, with fast AF and a decent aperture.
    1 point
  10. Yep, the body alone is much beefier, although I have a Speed Booster mounted to my P4K 100% of the time, so once you take that into account the difference is less. I also like the feel of it in the hand more than the 4K. The battery is also sooo much better. I always used my P4K with a V-lock, as the LP-E6N batteries were pretty unreliable and had short run times, but I'd quite happily run the 6K Pro on internal NPFs. For me, the size is an advantage... the internal NDs save so much time on set, and the second XLR input means the sound recordist is happy being able to feed two channels of audio directly into camera, as well as timecode via a Tentacle Sync into the microphone minijack input. So really... the camera is still tiny considering all its features and inputs, and the fact that it has a really nice 5" high-bright screen built in... it's a proper video/cinema camera. Sure, it's big compared to a mirrorless camera, but then it offers so much more for me. I'm also a fan of the added BRAW compression ratios. While Blackmagic are not showing the Samsung T5s as being officially 'approved', in my tests I am able to shoot 6K 50p BRAW at constant bitrate down to 5:1, or constant quality at Q1, to 1TB drives (a rough data-rate check follows below this post). Anyway, still testing for now... I'll be using it as a B camera on a TVC next week, so will keep you updated on how it performs.
    1 point
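As a rough sanity check on those T5 figures, here's a back-of-envelope data-rate sketch; the resolution, raw bit depth, and drive speed are assumed values, not from the post:

```python
# Back-of-envelope data-rate check for 6K 50p BRAW at 5:1.
# Assumptions (not from the post): a 6144x3456 sensor readout,
# 12-bit raw, and ~500 MB/s claimed sustained write for a T5.

width, height = 6144, 3456      # assumed BMPCC 6K-class resolution
bits_per_photosite = 12         # assumed raw bit depth
fps = 50

raw_bytes_per_frame = width * height * bits_per_photosite / 8
uncompressed_mb_s = raw_bytes_per_frame * fps / 1e6
braw_5to1_mb_s = uncompressed_mb_s / 5  # constant bitrate 5:1

print(f"uncompressed: {uncompressed_mb_s:.0f} MB/s")  # ~1593 MB/s
print(f"BRAW 5:1:     {braw_5to1_mb_s:.0f} MB/s")     # ~319 MB/s
# ~319 MB/s at 5:1 sits comfortably under a T5's claimed
# sustained write speed, which would explain why it works.
```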
  11. Two standout lenses for me in that test (the 40mm Bausch & Lomb and the Cooke)... then I looked at the prices... OUCH!
    1 point
  12. I've been debating whether to acquire a Ninja V to experiment with what VLOG/ProRes RAW might offer over the internal VLOG options for HDR. I was a bit disappointed to find out through my inquiries this past week that the only option to record ProRes RAW is with the Ninja V... is this all so much in its infancy that there exist no other market choices?! I was also disappointed, after watching last week's LUMIX YT presentation, to learn that Panasonic still had nothing to say regarding piping the RAW data packets over the internal PCIe bus to the internal CFexpress cards, and will instead continue to only make that data dump available through the HDMI pipeline to external recorders.

     There was a fair amount of talk inferring thermal capacity of the S1 body as a limitation, but what still does not add up to me is that they state (on one hand) that the heat is generated by the internal processing done before sending data to the cards for the internal codecs, yet did not satisfactorily (to my viewing and listening) reconcile that with their other statement that packetizing the RAW data requires no internal image processing (so less heat), only sensor data transport. Simply put, packetizing the RAW data generates less internal heat than the internal image processing whose output is sent to the internal cards. So, if internal codec recording times are dictated by...

     1. sensor image processing heat + writing-to-card heat = 15 minutes, why is it, if...
     2. zero sensor image processing heat (RAW packetizing only) + HDMI piping = unlimited record time, and...
     3. sensor image processing heat + HDMI piping = unlimited record time, that...
     4. zero sensor image processing heat (RAW packetizing only) + writing-to-card heat ≠ at least 15 minutes?

     By what the Panasonic reps stated, scenario 4 should generate less internal heat than scenario 1, so why not allow the RAW data packets (with time limits, if that needs to be so) that fit the S1's thermal design... i.e. 15 minutes, or whatever it works out to (they have the thermal numbers)? (A little sketch of this heat-budget logic follows below this post.)

     My point, all along (and I will not tire of this): I'd prefer to be focused on shooting the bird and frog, etc. activities (that actually require my attention!) while I'm out in the field, rather than fiddling around with a recorder and settings and connections and snag points and added weight whilst missing vital, irreproducible action. I can fiddle with the RAW data to my heart's content in FCPX when I get home. And I'm sure I am not alone in wishing to be able to record RAW internally (just like all the other codecs) without the fuss and muss of extraneous gear. 🙂 -Jim
    1 point
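A minimal sketch of that heat-budget logic, with invented placeholder numbers (only the relative ordering, taken from the reps' statements as reported above, matters):

```python
# Sketch of the four scenarios above. The heat values are
# placeholders; the argument rests only on their ordering:
# image processing generates heat, raw packetizing and HDMI
# output (per the reps, as reported) effectively don't.
HEAT_IMAGE_PROCESSING = 1.0  # internal codec processing
HEAT_RAW_PACKETIZING = 0.0   # "no internal image processing"
HEAT_CARD_WRITE = 0.5        # writing to internal cards
HEAT_HDMI_OUT = 0.0          # piping to an external recorder

scenarios = {
    "1: processing + card write    (15 min limit)": HEAT_IMAGE_PROCESSING + HEAT_CARD_WRITE,
    "2: raw packetize + HDMI       (unlimited)":    HEAT_RAW_PACKETIZING + HEAT_HDMI_OUT,
    "3: processing + HDMI          (unlimited)":    HEAT_IMAGE_PROCESSING + HEAT_HDMI_OUT,
    "4: raw packetize + card write (disallowed)":   HEAT_RAW_PACKETIZING + HEAT_CARD_WRITE,
}

for name, heat in scenarios.items():
    print(f"{name}: relative heat = {heat}")
# Scenario 4 comes out cooler than scenario 1, which is the
# crux of the argument: internal raw should run at least as
# long as the internal codecs do.
```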
  13. Would have loved a 10-bit 4:2:2 update for 50/60p for the S series. Too bad it's just 4:2:0. But still, awesome update.
    1 point
  14. I am surprised that S1/S1H/S5 footage on YouTube does not do the cameras justice. The colour out of this cam is awesome. Unfortunately I have not shot much with it since last summer. The S1 is the best bang for the buck by far, in my opinion.
    1 point
  15. Looks very good. Lush colors in the cinematography; framing, acting and editing all very appealing. Great job!
    1 point
  16. I did it, but it really sucks... it doesn't match properly...
    1 point
  17. It's in between the 1/1.2" and 1" on this chart, so practically 1", as near as makes no difference. A bit like how Canon APS-C is still APS-C even though it is a 1.6x crop vs a 1.5x crop (a quick diagonal-ratio check follows below this post). How long before we see a Micro Four Thirds sensor in a smartphone? 🙂
    1 point
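The sensor-class comparison comes down to a ratio of diagonals. A quick sketch, using commonly quoted sensor dimensions (assumptions, not from the post):

```python
# Crop factor = full-frame diagonal / sensor diagonal.
from math import hypot

FULL_FRAME = (36.0, 24.0)            # mm
sensors = {
    "Canon APS-C":    (22.3, 14.9),  # commonly quoted Canon figures
    "standard APS-C": (23.6, 15.6),  # Sony/Nikon-style APS-C
    '1"-type':        (13.2, 8.8),
}

ff_diag = hypot(*FULL_FRAME)
for name, (w, h) in sensors.items():
    crop = ff_diag / hypot(w, h)
    print(f"{name}: {crop:.2f}x crop")
# Canon APS-C ~1.61x vs standard APS-C ~1.53x: both are still
# "APS-C", which is the post's point about sensor classes being
# near enough for practical purposes.
```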
  18. For anyone who has read this far into a thread about resolution but for some reason doesn't have an hour of their day to hear from an industry expert on the subject: the section of the video I have linked below is a very interesting comparison of multiple cameras (film and digital) with varying-resolution sensors, and it's quite clear that the level of perceptual detail coming out of a camera is not that strongly related to the sensor resolution:
    1 point
  19. Ok... let's discuss your comments.

     His first test, which you can see the results of at 6:40-8:00, compares two image pipelines:
     - a 6K image downsampled to 4K, which is then viewed 1:1 in his viewer by zooming in 2x
     - a 6K image downsampled to 2K, then upsampled to 4K, which is then viewed 1:1 in his viewer by zooming in 2x

     As this view is 2x digitally zoomed in, each pixel is twice as large as it would be if you were viewing the source video on your monitor, so the test is actually unfair. There is obviously a difference in the detail that's actually there, and this can be seen when he zooms in radically at 7:24, but when viewed at 1:1 starting at 6:40 there is perceptually very little difference, if any. Regardless of whether the image pipeline is "proper" (and I'll get to that comment in a bit), if downscaling an image to 2K and back up again isn't visible, the case that resolutions higher than 2K are perceptually differentiable is pretty weak straight out of the gate.

     Are you saying that the pixels in the viewer window in Part 2 don't match the pixels in Part 1? Even if this were the case, it still doesn't invalidate comparisons like the one at 6:40, where there is very little difference between two image pipelines in which one has significantly less resolution than the other, and yet they appear perceptually very similar / identical.

     He is comparing scaling methods - that's what he is talking about in this section of the video. This use of scaling algorithms may seem strange if you think that your pipeline is something like 4K camera -> 4K timeline -> 4K distribution, or the same in 2K, as you mentioned in your first point, but this is false. There are no such pipelines; pipelines like that are impossible. This is because the pixels in the camera aren't pixels at all; rather, they are photosites that sense either Red or Green or Blue, whereas the pixels in your NLE and on your monitor or projector actually have Red and Green and Blue values. The 4K -> 4K -> 4K pipeline you mentioned is actually ~8M colour values -> ~24M colour values -> ~24M colour values.

     The process of taking an array of photosites that are each only one colour and creating an image where every pixel has values for Red, Green and Blue is called debayering, and it involves scaling. This is a good link to see what is going on: https://pixinsight.com/doc/tools/Debayer/Debayer.html

     From that article: "The Superpixel method is very straightforward. It takes four CFA pixels (2x2 matrix) and uses them as RGB channel values for one pixel in the resulting image (averaging the two green values). The spatial resolution of the resulting RGB image is one quarter of the original CFA image, having half its width and half its height."

     Also from the article: "The Bilinear interpolation method keeps the original resolution of the CFA image. As the CFA image contains only one color component per pixel, this method computes the two missing components using a simple bilinear interpolation from neighboring pixels."

     As you can see, both of those methods involve scaling. Let me emphasise this point - any time you see a digital image taken with a digital camera sensor, you are seeing a rescaled image. Therefore Yedlin's use of scaling in an image pipeline applies scaling to an image that has already been scaled from the sensor data into an image with three times as many colour values as the sensor captured. (A minimal code sketch of the Superpixel method follows at the end of this post.)

     A quick google revealed that there are ~1,500 IMAX theatre screens worldwide, and ~200,000 movie theatres worldwide. Sources: "We have more than 1,500 IMAX theatres in more than 80 countries and territories around the globe." https://www.imax.com/content/corporate-information and "In 2020, the number of digital cinema screens worldwide amounted to over 203 thousand – a figure which includes both digital 3D and digital non-3D formats." https://www.statista.com/statistics/271861/number-of-digital-cinema-screens-worldwide/

     That's less than 1%. You could make the case that there are other non-IMAX large screens around the world, and that's fine, but when you take into account that over 200 million TVs are sold worldwide each year, even the number of standard movie theatres becomes a drop in the ocean when you're talking about the screens actually used for watching movies or TV worldwide. Source: https://www.statista.com/statistics/461316/global-tv-unit-sales/

     If you can't tell the difference between 4K and 2K image pipelines at normal viewing distances, and you are someone who posts on a camera forum about resolution, then the vast majority of people watching a movie or a TV show definitely won't be able to tell the difference.

     Let's recap:
     - Yedlin's use of rescaling is applicable to digital images because every image from basically every digital camera sensor anyone has ever seen has already been rescaled by the debayering process by the time you can look at it
     - There is little to no perceptual difference when comparing a 4K image directly with a copy of that same image that has been downscaled to 2K and then upscaled to 4K again, even when viewed at 2x
     - The test involved swapping back and forth between the two scenarios, whereas in the real world you are unlikely ever to see such a comparison, like that or at all
     - The viewing angle of most movie theatres in the world isn't sufficient to reveal much difference between 2K and 4K, let alone the hundreds of millions of TVs sold every year, which are likely to have a smaller viewing angle than normal theatres
     - The tests you mentioned above all involved starting with a 6K image from an Alexa 65, one of the highest quality imaging devices ever made for cinema
     - The remainder of the video discusses a myriad of factors that are likely to be present in real-life scenarios and further degrade image resolution, both in the camera and in the post-production pipeline
     - You haven't shown any evidence that you have watched past the 10:00 mark in the video

     Did I miss anything?
    1 point
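To make the quoted 'Superpixel' method concrete, here's a minimal sketch, assuming an RGGB Bayer layout (real debayer pipelines are considerably more sophisticated):

```python
import numpy as np

def superpixel_debayer(cfa: np.ndarray) -> np.ndarray:
    """Collapse each 2x2 RGGB tile of a CFA mosaic into one RGB
    pixel, averaging the two greens -- the 'Superpixel' method
    quoted above. The output has half the width and height, so a
    4K-photosite mosaic yields roughly a 2K-pixel RGB image."""
    r = cfa[0::2, 0::2]   # top-left of each 2x2 tile
    g1 = cfa[0::2, 1::2]  # top-right
    g2 = cfa[1::2, 0::2]  # bottom-left
    b = cfa[1::2, 1::2]   # bottom-right
    return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)

# A fake 8x8 single-channel mosaic becomes a 4x4 RGB image:
mosaic = np.random.rand(8, 8)
rgb = superpixel_debayer(mosaic)
print(rgb.shape)  # (4, 4, 3)
```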
  20. The a6300 is a good example of exactly the point he's trying to make: the distinction between resolution and fidelity. I had a shoot with a Sony a7S II and an Ursa back to back. The Ursa was shot in 1920x1080 ProRes 444; the a7S II shot UHD in mega-compressed H.264. The difference in PERCEIVED resolution between these two cameras is night and day, with the Ursa kicking the a7S's butt, because the image of the former is so much more robust in terms of compression noise, color accuracy, banding, edge detail, and so on. One records more pixels, but the other camera has a lot more perceived resolution because the image is better. If you blow up the images and ask random viewers which camera is 'sharper', 9 times out of 10 people will say the Ursa.
    1 point
  21. Here are some tips, based on my experience with the sharpness tweak:
     I'm not sure if this is the case with every lens, but with mine it was possible to adjust the sharpness without even removing the brass tabs. Just loosening all 6 screws, while leaving the tabs in place, created enough rotational play in the front optic for me to fine-tune the sharpness. This is because the holes in the brass tabs are ever so slightly bigger than the screws, so you have a little room for rotation; for me it was just enough. Once you find the sweet spot, simply re-tighten the screws. No glue needed. Be careful that the act of re-tightening the screws doesn't nudge the optic; this took me a little while to get right. Once everything is tight, check the image again to make sure it's still sharp. As Tito says in his video, it's easiest to judge the sharpness using a long lens at (or near) infinity, stopped down to f4 or f5.6.
     In my case, I could never get a sharp image with the Kowa and taking lens at infinity. I had to focus both lenses slightly short of infinity to get a super sharp image. This is very important, especially if you are using a single-focus solution. Don't blindly trust the infinity marks on your Kowa or your taking lens; you might be able to get a MUCH sharper image by adjusting them slightly under infinity. I'm sure this will vary from lens to lens. Hopefully this helps!
    1 point
  22. The widest I've managed on APS-C is 28mm (in video mode and cropped to 2.4:1). There is a guy on the anamorphic shooters group on Facebook who uses a 40mm on full frame.
    1 point