
Leaderboard

Popular Content

Showing content with the highest reputation on 04/05/2021 in all areas

  1. I don't get the complaints about dynamic range. All recent cameras (non-Arri, non-Red) get around 12 stops—including the R5 in raw. Raw is noisy, including Redcode, and NR is a necessary tool in post. If you don't want to shoot in Raw, there are other options out there that will give you a cleaner image with higher dynamic range. The tradeoff of course will be more artifacts, limited latitude, or worse color. It's your choice. Even the Komodo has tradeoffs. On image quality alone, it has relatively awful low-light and limited latitude with dramatic color shifts. If you just can't live without 13 stops of dynamic range or the world will be bereft of your creative potential and professional skills, you have a simple answer: C70.
    2 points
  2. @tupp You raise a number of excellent points, but have missed the point of the test. The overall context is that for a viewer, sitting at a common viewing distance, the difference won't be discernible. This is why the comparison is about perceptual resolution and not actual resolution. Yedlin claims that the video will appear 1:1, which I took to mean that it wouldn't be a different size, and you have taken to mean that every pixel on his computer will appear as a single pixel on your/my computer and will not have any impact on any of the other surrounding pixels. Obviously this is false, as you have shown from your blown-up screen captures. This does not prove scaling though. As you showed, two viewers rendered different outputs, and I tried it in Quicktime and VLC and got two different results again.

Problem number one is that the viewing software is altering the image (or at least all but one of the players we tried). Problem number two is that we're both viewing the file from Yedlin's site, which is highly compressed. In fact, it is an h264 stream of 2.32GB, something like 4Mbps. The uncompressed file would have been 1192Mbps and in the order of 600GB, and not much smaller had he used a lossless compression, so completely beyond any practical consideration. Assuming I've done my maths correctly, that's a compression ratio of something like 250:1 - a ratio that you couldn't even hope would yield a pixel-not-destroyed image (a quick sanity check of those numbers is sketched below).

The reason I bring up these two points is that they will also be true for the consumption of any media by the viewer that the test is about. There's no point arguing that his test is invalid because it doesn't apply to someone watching an uncompressed video stream on a screen that is significantly larger than the THX and SMPTE recommendations suggest, because, frankly, who gives a toss about that person? I'm not that person, probably no-one else here is that person, and if you are that person, then good for you, but it's irrelevant.

You made a good point about 3CCD cameras, which I'd forgotten about, and even if you disagree about debayering and mismatched photosites and pixels, none of that stuff matters if the image is going to get compressed for digital distribution and then decoded by any number of decoders that will generate a different pixel-to-pixel readout. Essentially you're arguing about how visible something is at the step before it gets put through a cheese-grater on its way to the people who actually watch the movies and pay for the whole thing.

In terms of why they make higher resolution cameras, there are two main reasons I can see. The first is that VFX folks want as much resolution as possible, as it helps keep things perceptually flawless after they mess with them. This is likely the primary reason that companies like ARRI are putting out higher resolution models. The second reason is that electronics companies are companies, and in a capitalist society companies exist to make money, and to do that you need to make people keep buying things, which is done through planned obsolescence and incremental improvements, such as getting everyone to buy 4K TVs, and then 4K cameras to go with those 4K TVs. This is likely the driver for all the camera manufacturers who also sell TVs, which is... basically every consumer camera company. Not a whole lot of people buying a GH5 are doing VFX with it, although cropping in post is one relatively common exception to that.
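Here is that sanity check: a minimal sketch of the arithmetic, assuming 1080p, 8-bit RGB, 24 fps and a roughly 67-minute runtime (those assumptions are mine, not stated in the post):

```python
# Back-of-the-envelope bitrate and compression-ratio check.
# Assumed parameters (not from the post): 1920x1080, 8-bit RGB, 24 fps, ~67 min runtime.
width, height = 1920, 1080
bits_per_pixel = 8 * 3              # 8 bits per channel, 3 channels
fps = 24
runtime_s = 67 * 60                 # roughly the length of Yedlin's video

uncompressed_bps = width * height * bits_per_pixel * fps
uncompressed_bytes = uncompressed_bps * runtime_s / 8

delivered_bytes = 2.32e9            # the ~2.32GB h264 file on Yedlin's site
delivered_bps = delivered_bytes * 8 / runtime_s

print(f"uncompressed bitrate: {uncompressed_bps / 1e6:.0f} Mbps")             # ~1194 Mbps
print(f"uncompressed size:    {uncompressed_bytes / 1e9:.0f} GB")             # ~600 GB
print(f"delivered bitrate:    {delivered_bps / 1e6:.1f} Mbps")                # ~4.6 Mbps
print(f"compression ratio:    {uncompressed_bytes / delivered_bytes:.0f}:1")  # ~259:1
```

Under those assumptions the output lines up with the figures quoted above to within rounding.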
So, although I disagree with you on some of the technical aspects along the way, the fact that his test isn't "1:1" in whatever ways you think it should be is irrelevant, because people watch things after compression and after being decoded by unknown algorithms. That's not even taking into account the image-processing witchcraft like Smooth Motion, which invents entirely new frames that make up half of what the viewer will actually see, or uncalibrated displays, etc. Yes, these things don't exist in theatres, but how many hours do you spend watching something in a theatre vs at home? The average person spends almost all their time watching on a TV at home, so the theatre percentage is pretty small.
    2 points
  3. @kye Two years or so back we had a thread with classy classic 8-bit stuff. We were discussing how GH1, GH2, GH3 and even GH4 shooters did much better than GH5 fans on Vimeo, and how they were more artistic than the unimaginative tech consumers and advocates of putting anything dull into slow motion. Here is a repost of a video I posted back then, if I remember correctly. If not, it is anyway one of my favorite GH1 clips on Vimeo. Your example above is a splendid demonstration of craft and love for the image, showing a convincing image for sure.
    2 points
  4. Surprise for S1 owners: the fancy new firmware update is available one day early: Download Link. 6K, 5.9K, 5.4K, Cinema 4K, 10-bit 4K60, manual dual-gain ISO control, and anamorphic support! Enjoy your new camera, everybody.
    1 point
  5. The Venice has a great look, pretty similar to Alexa LF but a bit sharper. I have noticed that cameras with on-chip autofocus (DPAF or whatever the A7S3 has) tend to have a blotchier shadow noise texture. Venice has a nice noise texture. S1 seems nice, too, but the HEVC codec smooths out all the noise. Alexa dynamic range is no longer that much better than the competition. It's largely that Alexas are used on sets with higher budgets. Some of the easiest footage to work with in post, though, imo.
    1 point
  6. Nah. I watched it out of curiosity. Those two just looked nice to me in that setting. I do not need any new lenses, and there are none I really want in that focal range. I have gotten rid of a lot of gear lately (given away) and have come to realise I could be very happy with just a handful of lenses for my (mostly stills but some video) uses. I could probably do 90% of my shooting with just my 17 TS-E and 55 1.8, and 75% with just the TS-E (or at least almost all of my photos used by others are with the 17 TS-E). I do still want an AF portrait lens of some description (85mm plus). My little RX100 IV has taken care of a lot of other uses now.
    1 point
  7. From real-world tests, the max seems to be around 900 MB/s: https://petapixel.com/2020/09/22/cfexpress-a-real-world-performance-comparison/
    1 point
  8. I have not yet found anything as good as the Huawei P40 Pro's RYYB sensor (1/1.28") in RAW mode at 8K / 50MP resolution: Click for full res 22MB ACR 80% JPEG: https://www.eoshd.com/wp-content/uploads/2021/04/P40Pro8k_IMG_20200731_191137.jpg The OnePlus 8 Pro is good in RAW too. Shooting RAW is the only way to overcome the excessive processing, and usually the RAW files are only saved in the "Pro" mode of an Android phone's camera app, so you need to choose a phone where the Pro mode can launch as the default mode each time and is easy to access. RAW files can be saved with the 5x optics of the P40 Pro as well. Phone CPUs are so good now that you have no trouble editing the 50MP RAW DNG file "in-camera"; I use Polarr. Dynamic range is really good in the RAW file, but not quite as good as the phone's HDR processing, which is up there with medium format for dynamic range; the RAW is still at roughly the level of an RX100's 1" sensor for DR.
    1 point
  9. Figured out the Zebras. As gt3rs mentioned earlier in the thread, I have to set the second level to 95 instead of 100. I need to learn how to read more carefully.
    1 point
  10. I think you raise an interesting point about the pipeline. The video files that many cameras produce are only a pale impression of the capabilities of the sensor, and that is definitely the case when cameras have 8-bit, low-bitrate codecs. Do you think this is also true for the RAW-shooting cameras we now have, such as the BM or ProRes RAW cameras? I ask because although I don't think anything should be missing with those setups, I wonder if there's something you're aware of that I'm not.

In terms of DR, I think that we are most certainly not there. I still think there is benefit to having more DR than even an Alexa has, at least when shooting fast in uncontrolled circumstances. For example, I would like to have perfect exposure on a subject while also keeping the sunset in the background, whereas (IIRC) even the widest-DR cameras still clip more of the sunset than you'd want if the subject is placed at the right IRE for skin tones. The Alexa (apart from being generally regarded as more capable than ARRI suggest) employs a simultaneously-combined dual-gain architecture on top of 10-year-old sensor tech. The latest sensor tech has gotten a lot closer to that performance using a single-simultaneous-gain architecture, but if we took some of the zillions of pixels we now have and sacrificed them to implement that architecture, we could easily leapfrog the Alexa's DR, which I think would create absolutely stunning images beyond what we have seen from current sensors outside their sweet spots.

Going back to the destructive image pipeline that happens inside consumer cameras, it really makes me angry that in many cases people have bought a sensor with X performance, an image processor with Y performance, and an SD card writer with Z performance, but instead of getting the overall performance of the least capable component (the bottleneck), we get something like 10% of the bottleneck. Having a 709 profile that deliberately clips the top few stops of DR instead of rolling them off with an aggressive knee, for example, is ridiculous; it is simply an adjustment of the profile itself and requires no hardware changes at all. Then people start wanting to hack the camera, and the manufacturers respond by encrypting and otherwise preventing these alterations. In effect, you are paying extra money for each camera you buy in order for the manufacturer to prevent you from getting the full benefit of the product you are buying.
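To make the knee point concrete, here is a minimal sketch of a hard clip versus a simple highlight knee (a toy transfer curve of my own, not any manufacturer's actual 709 profile; input is normalized so 1.0 is the clip point, and max_input is an assumed two stops above it):

```python
import numpy as np

def hard_clip(x):
    """Clip-style 709 handling: everything above 1.0 is simply discarded."""
    return np.clip(x, 0.0, 1.0)

def soft_knee(x, knee_start=0.8, max_input=4.0):
    """Toy highlight knee: values above knee_start are compressed linearly so
    that max_input (here ~2 stops over the clip point) just reaches 1.0."""
    rolled = knee_start + (1.0 - knee_start) * (x - knee_start) / (max_input - knee_start)
    return np.clip(np.where(x <= knee_start, x, rolled), 0.0, 1.0)

highlights = np.array([0.5, 0.9, 1.0, 2.0, 4.0])  # scene values, 1.0 = 709 clip point
print(hard_clip(highlights))  # [0.5 0.9 1. 1. 1.]          -> everything over 1.0 is lost
print(soft_knee(highlights))  # ~[0.5 0.81 0.81 0.88 1.0]   -> highlight separation survives
```

The exact curve shape doesn't matter; the point is that the roll-off is pure math in the profile, not a hardware capability.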
    1 point
  11. Shot mostly on the Zcam S6 with some Panasonic S1 in there too. I feel like the trailer was harder to make than the actual film. 😅
    1 point
  12. Not to be a jerk to her, but I think maybe she just has greasy, tired skin?
    1 point
  13. Sage

    GH5 to Alexa Conversion

    Working on it at this very moment (and a raft of updates simultaneously). It's a generational shift rolled in. I rewrote all the code since the last 4.2 update; it's been a very technical winter 🙂
    1 point
  14. Keep in mind that resolution is important to color depth. When we chroma subsample to 4:2:0 (as is likely with your A7SII example), we throw away chroma resolution and thus reduce color depth. Of course, compression also kills a lot of the image quality. Yedlin also used the term "resolute" in his video. I am not sure that it means what you and Yedlin think it means.

It is impossible for you (the viewer of Yedlin's video) to see 1:1 pixels (as I will demonstrate), and it is very possible that Yedlin is not viewing the pixels 1:1 in his viewer. Merely zooming "2X" does not guarantee that he or we are seeing 1:1 pixels. That is a faulty assumption.

Well, it's a little more complex than that. The size of the pixels that you see is always the size of the pixels of your display, unless, of course, the zoom is sufficient to render the image pixels larger than the display pixels. Furthermore, blending and/or interpolation of pixels is suffered if the image pixels do not match 1:1 those of the display, or if the image pixels are larger than those of the display while not being an exact whole-number multiple of the display pixels. Unfortunately, all of the images that Yedlin presents as 1:1 most definitely are not a 1:1 match, with the pixels corrupted by blending/interpolation (and possibly compression).

When Yedlin zooms in, we see a 1:1 pixel match between the two images, so there is no actual difference in resolution in that instance -- an actual resolution difference is not being compared here nor in most of the subsequent "1:1" comparisons. What is being compared in such a scenario is merely scaling algorithms/techniques. However, any differences even in those algorithms get hopelessly muddled due to the fact that the pixels that you (and possibly Yedlin) see are not actually a 1:1 match, and are thus additionally blended and interpolated. Such muddling destroys any possibility of making a true resolution comparison.

No. Such a notion is erroneous, as the comparison method is inherently faulty and the image "pipeline" Yedlin used is, unfortunately, leaky and septic (as I will show). Again, if one is to conduct a proper resolution comparison, the pixels from the original camera image should never be blended: an 8K captured image should be viewed on an 8K monitor; a 6K captured image should be viewed on a 6K monitor; a 4K captured image should be viewed on a 4K monitor; a 2K captured image should be viewed on a 2K monitor; etc. Scaling algorithms, interpolation and blending of the pixels corrupt the testing process.

I thought that I made it clear in my previous post. However, I will paraphrase it so that you might understand what is actually going on: there is no possible way that you ( @kye ) can observe the comparisons with a 1:1 pixel match to the images shown in Yedlin's node editor viewer. In addition, it is very possible that even Yedlin's own viewer, when set at 100%, is not actually showing a 1:1 pixel match to Yedlin. Such a pixel mismatch is a fatal flaw when trying to compare resolutions. Yedlin claims that he established a 1:1 match, because he knows that it is an important requirement for comparing resolutions, but he did not achieve a 1:1 pixel match. So, almost everything about his comparisons is meaningless. Again, Yedlin is not actually comparing resolutions in this instance. He is merely comparing scaling algorithms and interpolations here and elsewhere in his video, scaling comparisons which are crippled by his failure to achieve a 1:1 pixel match in the video.
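To see why a viewer that has to map image pixels onto a non-matching number of screen pixels must blend them, here is a minimal one-dimensional sketch (my own toy example, not Yedlin's pipeline), comparing an integer nearest-neighbour zoom with a non-integer linear resample:

```python
import numpy as np

# A one-pixel black/white ruling: 0, 255, 0, 255, ...
ruling = np.array([0, 255] * 4, dtype=float)

# Integer 2x zoom, nearest-neighbour: every output value stays pure black or white.
nn_2x = np.repeat(ruling, 2)

# Non-integer 1.5x zoom via linear interpolation, which is effectively what a
# viewer does when N image pixels must land on M mismatched screen pixels.
src = np.arange(ruling.size)
dst = np.linspace(0, ruling.size - 1, int(ruling.size * 1.5))
lin_1_5x = np.interp(dst, src, ruling)

print(nn_2x)              # only 0 and 255 -- the original pixels survive intact
print(lin_1_5x.round(1))  # intermediate grey values appear -- pixels are blended
```

Any fractional mismatch between the image grid and the display grid forces exactly this kind of averaging, which is the blending visible in the blown-up screen captures discussed above.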
Yedlin could have verified a 1:1 pixel match by showing a pixel chart within his viewer when it was set to 100%. Here are a couple of pixel charts: if the charts are displayed at 1:1 pixels, you should easily observe with a magnifier that all of the black pixel rulings that are integers (1, 2, 3, etc.) are cleanly defined with no blending into adjacent pixels. On the other hand, all of the black pixel rulings that are non-integers (1.3, 1.6, 2.1, 2.4, 3.3, etc.) should show blending on their edges even with a 1:1 match. Without such a chart it is difficult to confirm that one pixel of the image coincides with one pixel in Yedlin's video (a sketch for generating such a ruling chart is included below). Either Steve Yedlin, ASC was not savvy enough to include the fundamental verification of a pixel chart, or he intentionally avoided verification of a pixel match.

However, Yedlin unwittingly provided something that proves his failure to achieve a 1:1 match. At 15:03 in the video, Yedlin zooms way in to a frozen frame, and he draws a precise 4x4 pixel square over the image. At the 16:11 mark in the video, he zooms back out to the 100% setting in his viewer, showing the box at the alleged 1:1 pixels. You can freeze the video at that point and see for yourself with a magnifier that the precise 4x4 pixel square has blended edges (unlike the clean-edged integer rulings on the pixel charts). However, Yedlin claims there is a 1:1 pixel match!

I went even further than just using a magnifier. I zoomed in to that "1:1" frame using two different methods, and then I made a side-by-side comparison image: all three images in the above comparison were taken from the actual video posted on Yedlin's site. The far left image shows Yedlin's viewer fully zoomed in when he draws the precise 4x4 pixel square. The middle and right images are zoomed into Yedlin's viewer when it is set to 100% (with an allegedly 1:1 pixel match). There is no denying the excessive blending and interpolation revealed when zooming in to the square or when magnifying one's display. No matter how finely one can change the zoom amount in one's video player, one will never be able to see a 1:1 pixel match with Yedlin's video, because the blending/interpolation is inherent in the video. Furthermore, the blending/interpolation is possibly introduced by Yedlin's node editor viewer when it is set to 100%. Hence, Yedlin's claimed 1:1 pixel match is false. By the way, in my comparison photo above, the middle image is from a TIFF created by ffmpeg, to avoid further compression. The right image was made by merely zooming into the frozen frame playing on the viewer of the Natron compositor.

Correct. That is what I have stated repeatedly. The thing is, he uses this same method in almost every comparison, so he is merely comparing scaling methods throughout the video -- he is not comparing actual resolution.

What? Of course there are such "pipelines." One can shoot with a 4K camera, process the resulting 4K files in post and then display in 4K, and the resolution never increases nor decreases at any point in the process. Are you trying to validate Yedlin's upscaling/downscaling based on semantics? It is generally accepted that a photosite on a sensor is a single microscopic receptor, often filtered with a single color. A combination of more than one adjacent receptor with red, green, blue (and sometimes clear) filters is often called a pixel or pixel group. Likewise, an adjacent combination of RGB display cells is usually called a pixel.
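For anyone who wants to repeat that check, here is a minimal sketch that generates a one-pixel ruling chart of the kind described (my own construction using Pillow; the filename and dimensions are arbitrary):

```python
import numpy as np
from PIL import Image

# Alternating one-pixel black/white vertical rulings.
height, width = 128, 256
columns = ((np.arange(width) % 2) * 255).astype(np.uint8)   # 0, 255, 0, 255, ...
chart = np.tile(columns, (height, 1))

Image.fromarray(chart, mode="L").save("one_px_ruling.png")

# Viewed strictly 1:1, every ruling stays pure black or pure white.
# Any scaling in the chain (viewer zoom, HiDPI, browser resize) shows up
# immediately as grey fringes under a magnifier.
```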
However you choose to define the terms or to group the receptors/pixels, it will have little bearing on the principles that we are discussing.

Huh? What do you mean here? How do you get those color value numbers from 4K? Are you saying that all cameras are under-sampling compared to image processing and displays? Regardless of how the camera resolution is defined, one can create a camera image, process that image in post and then display it, all without any increase or decrease of the resolution at any step in the process. In fact, such image processing with consistent resolution at each step is quite common.

It's called debayering... except when it isn't. There is no debayering with: an RGB striped sensor; an RGBW sensor; a monochrome sensor; a scanning sensor; a Foveon sensor; an X-Trans sensor; three-chip cameras; etc. Additionally, raw files made with a Bayer matrix sensor are not debayered. I see where this is going, and your argument is simply a matter of whether we agree to determine resolution by counting the separate red, green and blue cells or by counting the RGB pixel groups formed by combining those adjacent red, green and blue cells (a toy sketch of this grouping follows below).

Jeez Louise... did you just recently learn about debayering algorithms? The conversion of adjacent photosites into a single RGB pixel group (Bayer or not) isn't considered "scaling" by most. Even if you define it as such, that notion is irrelevant to our discussion -- we necessarily have to assume that a digital camera's resolution is given either by the output of its ADC or by the resolution of the camera files. We just have to agree on whether we are counting the individual color cells or the combined RGB pixel groups. Once we agree upon the camera resolution, that resolution need never change throughout the rest of the "imaging pipeline."

You probably shouldn't have emphasized that point, because you are incorrect, even if we use your definition of "scaling." There are no adjacent red, green or blue photosites to combine ("scale") with digital Foveon sensors, digital three-chip cameras and digital monochrome sensors.

Please, I doubt that even Yedlin would go along with you on this line of reasoning. We can determine the camera resolution merely from the output of the ADC or from the camera files. We just have to agree on whether we are counting the individual color cells or the combined RGB pixel groups. After we agree on the camera resolution, that resolution need never change throughout the rest of the "imaging pipeline." Regardless of these semantics, Yedlin is just comparing scaling methods and not actual resolution.

When one is trying to determine whether higher resolutions can yield an increase in discernibility or in perceptible image quality, it is irrelevant to consider the statistics of common or uncommon setups. The alleged commonality and feasibility of the setup is a topic that should be left for another discussion, and such notions should not influence nor interfere with any scientific testing nor with the weight of any findings of the tests. By dismissing greater viewing angles as uncommon, Yedlin reveals his bias. Such dismissiveness of important variables corrupts his comparisons and conclusions, as he avoids testing larger viewing angles and merely concludes that larger screens are "special." Well, if I had a 4K monitor, I imagine that I could tell the difference between a 4K and a 2K image.
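To make the photosite-versus-pixel-group terminology concrete, here is a minimal sketch of the naive grouping being argued about (my own toy example; real debayering interpolates to full photosite resolution rather than binning like this):

```python
import numpy as np

def group_bayer_2x2(raw):
    """Group an RGGB Bayer mosaic into RGB 'pixel groups': each 2x2 cell of
    photosites (R, G, G, B) becomes one RGB pixel, so counting pixel groups
    reports half the linear resolution of counting individual photosites."""
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return np.dstack([r, (g1 + g2) / 2.0, b])

raw = np.random.randint(0, 4096, size=(6, 8)).astype(float)   # toy 12-bit mosaic
rgb = group_bayer_2x2(raw)
print(raw.shape, "photosites ->", rgb.shape[:2], "RGB pixel groups")
# (6, 8) photosites -> (3, 4) RGB pixel groups
```

Whichever of the two counts you pick as "the" camera resolution, that number can indeed stay constant through the rest of the pipeline.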
Not that it matters, but close viewing proximity is likely much more common than Yedlin realizes, and more common than your web searching shows (a worked pixels-per-degree example follows below). In addition to IMAX screens, movie theaters with seats close to the screen, amusement park displays and jumbotrons, many folks position their computer monitors close enough to see the individual pixels (at least when they lean forward). If one can see individual pixels, a higher resolution monitor of the same size can make those individual pixels "disappear." So, higher resolution can yield a dramatic difference in discernibility, even in common everyday scenarios. Furthermore, a higher resolution monitor with the same size pixels as a lower resolution monitor gives a much more expansive viewing angle. As many folks use multiple computer monitors side by side, the value of such a wide view is significant.

Whatever. You can claim that combining adjacent colored photosites into a single RGB pixel group is "scaling." Nevertheless, the resolution need never change at any point in the "imaging pipeline." Regardless, Yedlin is merely comparing scaling methods and not resolution.

Well, we can't really draw such a conclusion from Yedlin's test, considering all of the corruption from blending and interpolation caused by his failure to achieve a 1:1 pixel match.

How is this notion relevant, or a recap? Your statistics and what you consider to be likely or common in regard to viewing angles/proximity are irrelevant in determining the actual discernibility differences between resolutions. Also, you and Yedlin dismiss sitting in the very front row of a movie theater.

That impresses you? Not sure how that point is relevant (nor how it is a recap), but please ask yourself: if there is no difference in discernibility between higher resolutions, why would ARRI (the maker of some of the highest quality cinema cameras) offer a 6K camera?

Yes, but such points don't shed light on any fundamental differences in the discernibility of different resolutions. Also, how is this notion a recap?

You are incorrect, and this notion is not a recap. Please note that my comments in an earlier post regarding Yedlin's dismissal of wider viewing angles referred to and linked to a section at 0:55:27 in his 1:06:54 video. You only missed all of the points that I made above in this post and earlier posts.

No, it's not clear. Yedlin's "resolution" comparisons are corrupted by the fact that the pixels are not a 1:1 match and by the fact that he is actually comparing scaling methods -- not resolution.
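As a worked example of how much viewing proximity matters, here is a minimal sketch of the usual pixels-per-degree arithmetic (the 27-inch/0.60 m screen width, the 60 cm viewing distance and the ~60 px/deg rule of thumb for 20/20 acuity are my assumptions, not figures from the thread):

```python
import math

def pixels_per_degree(screen_width_m, horizontal_pixels, distance_m):
    """Horizontal pixels packed into one degree of visual angle at this distance."""
    total_degrees = 2 * math.degrees(math.atan(screen_width_m / (2 * distance_m)))
    return horizontal_pixels / total_degrees

# A 27" 16:9 monitor is roughly 0.60 m wide; assume the viewer sits 0.60 m away.
for pixels in (1920, 3840):
    ppd = pixels_per_degree(0.60, pixels, 0.60)
    print(f"{pixels} px wide: {ppd:.0f} px/deg")
# 1920 px wide: 36 px/deg  -> well below the ~60 px/deg acuity rule of thumb,
#                             so individual pixels remain visible
# 3840 px wide: 72 px/deg  -> above it, so the pixel grid effectively disappears
```

Move the viewer further back and the same formula pushes both resolutions past that threshold, which is the common-viewing-distance scenario the test assumes; that is exactly where the two sides of this argument diverge.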
    1 point
  15. Thanks for the compliments everyone! Means a lot from people on this forum, as I know we are all pretty picky when it comes to this stuff. 😂😄 For the Z-Cam I used a set of Rokinons: 24, 35, 50, and 85. Everything on the Panasonic was shot on a Rokinon 35mm 1.8. We also used a Sigma 14-24 2.8 for a few interior wide shots where we needed that ultra-wide FOV. I have a set of Canon FD lenses which I would have used had I owned them last year; the Rokinons do the job though. I really love Minolta lenses, though they can't be adapted to EF, so only some cameras can use them. I'll do a little breakdown for you if I can remember it all correctly:
0:05 Rokinon 35mm 1.4
0:12 Rokinon 85mm 1.4
0:15 Rokinon 50mm 1.4
0:18 Rokinon 50mm 1.4 (1.8x crop in post)
0:19 Rokinon 50mm 1.4
0:22 Rokinon 35mm 1.4
0:31-0:47 Minolta 35mm 1.8 (shot mostly wide open)
0:50 Mavic 2 Pro
0:53-1:03 Minolta 35mm 1.8
0:52 Rokinon 24mm 1.4
1:06-1:20 Rokinon 35mm 1.4
...well, you get the idea at this point.
    1 point
  16. PannySVHS

    The D-Mount project

    @kye Masterpiece. It would have been worth using a tripod to complement your beautiful framing and montage. Love the shot design and realization, and the juxtaposition of images and montage in its entirety. Cheers
    1 point
  17. If it's worth saying, it's worth repeating, right? Also, if it's worth saying, it's definitely worth repeating. I completely agree. To put things into perspective, here's a video from the GH1 that I saw shared recently. To my eyes, it looks better than almost everything I've seen posted in the last couple of years. 1080p camera, 1080p upload, from a camera that is so old it doesn't change hands much anymore, but when it does, it can be had for about $100.
    1 point
  18. Quick test: both Zebra 1 and 2 show up, and I have them set at 60 ±5 and 95 in CLog3 (100 does not work in log).
    1 point