Everything posted by KnightsFan

  1. @sanveer That was my impression, too, which makes me really curious about what's going on under the hood, and how it's different from cDNG.
  2. That's not entirely true. HDMI doesn't "know" what information it is carrying. You can send whatever data you want over HDMI, as long as you can encode it in a format it likes and then decode it on the other end. A few years ago the Axiom project was putting a raw signal over HDMI. If I remember correctly, they were taking three Raw frames and sending them as the "red," "green," and "blue" channels. All you have to do at the other end is split the "channels" back into Raw frames.
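
     To make that concrete, here's a rough sketch of the packing idea (my own illustration with made-up function names, not the Axiom team's actual code): three monochrome raw frames ride along as the "R," "G," and "B" planes of one ordinary frame, and the receiver splits them back apart.

     ```python
     import numpy as np

     def pack_raw_frames(f1, f2, f3):
         """Carry three HxW raw frames as the 'R', 'G', 'B' planes of one frame."""
         return np.stack([f1, f2, f3], axis=-1)

     def unpack_raw_frames(rgb):
         """Split the transported frame back into the three raw frames."""
         return rgb[..., 0], rgb[..., 1], rgb[..., 2]

     # Dummy 8-bit data just to show the round trip is lossless
     frames = [np.random.randint(0, 256, (1080, 1920), dtype=np.uint8) for _ in range(3)]
     packed = pack_raw_frames(*frames)
     assert all(np.array_equal(a, b) for a, b in zip(unpack_raw_frames(packed), frames))
     ```
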
  3. Does anyone have any info on ProRes Raw vs. cDNG yet? Grant Petty mentioned something about it moving all the color science to FCPX, instead of being a mix of in-camera and in-post. Does anyone have any insight on that?
  4. NAB 2018

     Exactly! Since you can now get a great camera for around $1k, it's great to see more lenses in that price range for people who have to move their lenses between systems (such as me).
  5. NAB 2018

     I will be very interested if they do get around to making an EF mount as he mentioned. Rokinon doesn't really have any competition right now for non-mirrorless cine lenses.
  6. All that is true. I was responding to your first post, where you said: "The human eye is estimated to see about 10 million colors and most people can't flawlessly pass color tests online, even though most decent 8 bit or 6 bit FRC monitors can display well over ten million colors: 8 bit color is 16.7 million colors, more than 10 million." My point was that this reasoning does not apply to raw, and that Raw samples NEED a higher bit depth than each individual color sample in a video file. I was illustrating the point by showing that if you count the bits in a lossless 1920 x 1080 Raw image at 14 bit depth, it will be considerably smaller than a 1920 x 1080 color image at 8 bit. True! I was simplifying. But aggregated over the entire image, you still end up with less overall data in the Raw file than in the color file. Two ways of saying the same thing, in terms of data rate. I think we're in agreement for the most part.
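
     For anyone who wants to check the arithmetic, the back-of-the-envelope numbers for a single 1920 x 1080 frame (ignoring headers and any compression) look like this:

     ```python
     width, height = 1920, 1080

     raw_bits   = width * height * 14      # one 14-bit sample per photosite
     color_bits = width * height * 3 * 8   # three 8-bit samples per pixel (24-bit color)

     print(raw_bits / 8 / 1e6)    # ~3.6 MB per frame
     print(color_bits / 8 / 1e6)  # ~6.2 MB per frame
     ```
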
  7. I said "three separate 8 bit values, to make a 24 bit color pixel"
  8. @HockeyFan12 I re-read your original post. I agree wholeheartedly that there is no practical difference between 12 and 14 bit color video. However, Raw is different. Essentially, each 14 bit raw sample becomes three separate 8 bit values, to make a 24 bit color pixel. That means that each pixel in the final video has 1024 times as many possible colors as each corresponding raw pixel--and that's when you're outputting to just 8 bit! (I said "essentially" because color pixels are reconstructed based on surrounding pixels as well) I don't know if it makes a practical difference on a 5D with magic lantern, so you could be completely right about 12 bit being practically equivalent to 14. I haven't had access to a 5D since before the lower bit hacks became available, unfortunately.
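
     The 1024 figure just falls out of counting possible values per pixel:

     ```python
     raw_values = 2 ** 14        # one 14-bit raw sample: 16,384 possible values
     rgb_values = 2 ** (3 * 8)   # an 8-bit-per-channel color pixel: ~16.7 million values

     print(rgb_values // raw_values)  # 1024, i.e. 2^10
     ```
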
  9. Oh okay, yeah, I completely misread it. I totally agree with you!
  10. But can you plug it into the USB-C port on the BMPCC4k and use it as an on-camera usb audio interface? Wishful thinking...
  11. I don't know. Even if it's unlikely that any cameras will in fact output ProRes Raw over HDMI, it may not have been a huge cost for Atomos to implement. I wonder if the hardware is actually any different? I mean, it's the same data rate, and the recorder is still just taking a signal and writing it to disk. Sure, there's debayering etc. for the screen, but literally every camera in existence does that same processing in real time, so the hardware can't be too expensive. I wouldn't be surprised if word-of-mouth advertising (like that of this topic!) boosted sales enough for it to pay off.
  12. @Deadcode That's correct. In Magic Lantern, 10 and 12 bit simply truncate the lowest 4 or 2 bits, respectively. Removing each bit halves the number of possible values. (So, to answer the original question, lower bit depths will manifest themselves by crushing the blacks. Though, as has been mentioned, the "blacks" have a low signal-to-noise ratio anyway.)
  13. @Deadcode each bit you add doubles the number of possible values. Adding two bits means there are 4 TIMES as many shades. 10 bit has 1/16 the shades vs 14 bit.
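
     To illustrate (my own sketch of the truncation, not Magic Lantern's actual code): the lower bit depths just shift away the least significant bits, and every bit dropped halves the number of possible shades.

     ```python
     sample_14bit = 0b10110101101101      # a 14-bit sensor value (0..16383)

     sample_12bit = sample_14bit >> 2     # drop lowest 2 bits -> 1/4 the shades
     sample_10bit = sample_14bit >> 4     # drop lowest 4 bits -> 1/16 the shades

     print(2 ** 14, 2 ** 12, 2 ** 10)     # 16384, 4096, 1024 possible values
     ```
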
  14. You know what would be cool? A way to use the USB-C port as a multitrack digital audio input from a mixer. That's always been That One Feature I've never seen.
  15. I understand what you're saying, but it doesn't match my observations. What I am doing is shooting high dynamic range scenes that include both pure black and pure white. I've shot it every way I can think of, including using a clickless aperture to slowly adjust exposure, and then picking the frames that have either the same black point or the same white point. Using the RGB boost consistently gives me slightly more dynamic range, and a noticeable increase in sharpness. Perhaps the difference we're experiencing is that I am changing the ISO: comparing RGB 1 at ISO 800 vs RGB 1.99 at ISO 400, adjusting the aperture by minuscule margins both ways to ensure equal white or black points. So my theory is that reducing the ISO is what's giving it the slight edge with whatever internal logic the camera has. Compensating with ISO makes the most sense to me, since I want to test camera settings independent of the lens. I might try compensating with shutter speed at some point. Also, just another note about my observations: it's not a full stop of dynamic range added. Nowhere close. Perhaps it's not even more dynamic range per se, but just better color response and sharpness (from a lower ISO?). Looking at individual color waveforms, it might not even be a uniform change between all three channels.
  16. I mean the RGB boost improved dynamic range in Gamma DR in my test. Yeah, I totally get why you closed the aperture. I was surprised that closing the aperture changed the dynamic range. I'd just never really considered it before, but it seems plausible that using different apertures could in fact affect the dynamic range of the scene that is captured.
  17. So changing the aperture changes the dynamic range? That sounds more like a property of the lens than of the RGB boost. That's really interesting, because in my test it seemed to help quite a bit. Perhaps it is dependent on what ISO you are at? Which would not surprise me. I may try to do more of my own tests later today. I did extensive tests when I got my NX1 and found no difference between any of the MB settings regarding what info is captured. In other words, if you shoot with the MB at 0 and adjust curves in post, you will end up with the same image as you would by shooting at a higher MB. I concluded that adjusting MB is only useful if you will not be color correcting. I think the MB level simply raises the black level from IRE 0 to IRE x with hard clipping at x, the inverse of lowering the RGB boost.
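
     If that theory is right, the mapping would be something like this (a hypothetical model of the behavior, not Samsung's actual processing):

     ```python
     def apply_master_black(ire, x):
         """Raise the black level to IRE x with a hard clip: nothing below x survives."""
         return max(ire, x)

     print([apply_master_black(v, 5) for v in (0, 2, 5, 20, 100)])  # [5, 5, 5, 20, 100]
     ```
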
  18. I think this was the sticking point mark was querying, which we don't know yet. Though, as an open standard, DNG continues to evolve and improve as well. My mistake, I misunderstood. It seemed like you were saying that producers shoot in ProRes because it has an easy workflow, and that since PRR has the same workflow as ordinary ProRes, it is preferable to DNG.
  19. @Jim Giberti so essentially, you're saying PRR is better than DNG solely because it provides a better workflow for FCPX users? In that case, wouldn't it make more sense just to give FCPX native DNG support?
  20. Nice! In 4, I'm not sure I understand what you mean by "the stop following then boosts IRE about 1." Does it not hit a wall then? My (possibly incorrect) interpretation is that you are seeing some slight increases on the waveform, but not enough to really classify them as more stops of DR. If that is the case, do you think perhaps it's the H.265 compression algorithm making tiny adjustments that push a few pixels beyond the 90 IRE wall? I do think it would be helpful to compare tests 3-5 with a test that has PW on but keeps the RGB boost at 1.0. That way we can isolate what the RGB boost is doing. Also, which Picture Wizard setting are you using?
  21. I think we lost the term RAW to the marketers back when Redcode arrived. I'm inclined to believe that, like Redcode, ProRes Raw is "mildly compressed images with a color filter," rather than "uncompressed linear sensor data," which would be truer to the concept of Raw. The fact that there are two flavors--Raw HQ and Raw--tells me that at least one of them has lossy compression. And it also seems that some cameras that do "Raw" over SDI have done some processing already, like the C200's RAW Light which apparently has a log curve applied to it. None of that is to say it's a bad format, of course! Just slightly dishonest naming.
  22. 0-255, Gamma DR. The noise is actually more pronounced in the second image, but I think that is mainly from the image being sharper (less noise reduction, as others have mentioned.) I haven't done a lot with the new settings, but I haven't seen any abnormal color shifting to be honest. When I first got my NX1, I noticed that the color was not even across the luma range. For example, an underexposed whitecard will have a greenish tint, etc. It seems to me that the same rules apply.
  23. This was manual exposure. In the two shots I posted above, I did not do any exposure compensation. Both are at ISO 1600, f8, and 1/50. I also did the test compensating for exposure, and the same clipping occurred at the same point. I actually did a series of tests last weekend, both adjusting the ISO to compensate for the decreased brightness and not compensating, trying it with the RGB sliders at 0.5, 1, and 1.99. Everything was manual exposure. It was apparent to me that by setting the sliders to 1.99, you do get increased dynamic range and detail (at least at ISO 1600 and compensating to 800.) Based on a few other posts, this may not be true at lower ISO values (e.g. 400, compensating down to 200). The following shot is at ISO 1600, RGB sliders at 1.0 (the same one from above). This time, both images are at a 100% crop to show details. This one is ISO 800, RGB sliders at 1.99. Notice how the yellow text on the book is no longer blown out, without a significant difference to the overall exposure. You can even see a teeny bit more of the metal in front of the light bulb. Furthermore, there is much more detail. Try saving both images and flipping between them quickly in your photo viewer. The text is sharper, the noise is sharper--overall simply a better image.
  24. @mnewxcv Yes, the highlights are simply clipping early. Here's a screenshot in Resolve with the waveform for reference. This was with RGB sliders set to 0.5. You can see even without the waveform that the lightbulb does not reach full white. For comparison, here is the same scene with the RGB sliders at 1.0.
  25. I'd say that 16 GB of RAM and a 7200 RPM mechanical HDD are adequate for editing 4K that has typical consumer-camera compression (I haven't tried specifically with HDR from new Sony cams). Uncompressed or Raw 4K might be too much for the hard drive. And the CPU and/or GPU can be a bottleneck, depending on the codec--H.265, for example, takes a lot of computational power to decode. No idea on the rolling shutter, but we all hope it's soon!