From the Sony technical document you claim there is "no mention of pushing and pulling":

"3.1 Changing the Sensitivity (Push Process / Pull Process)
The most common method is to adjust the MASTER GAIN of the camera as shown in Table 4. The image contrast that appears on camera viewfinders and the on-set displays will remain consistent, hence it is easier to monitor.

3.1.2 Push Process
Increasing camera gain will improve camera sensitivity but will increase the camera noise floor. When extra dynamic range is required, the exposure value should be defined according to the light meter readings as in film.

3.1.3 Pull Process
Reducing the camera gain will improve the signal to noise ratio performance. This technique is suited for blue/green screen effects shots where pulling a clean key is of prime importance. It should be noted that the camera dynamic range will be reduced."

Chart 1. Comparing the Differences in Effects of Push/Pull Processing between S-Log and Film

                Push Processing (+ve)                 Pull Processing (-ve)
        Contrast    Latitude    Graininess    Contrast    Latitude    Graininess
Film    Increases   Narrows     Increases     Reduces     Widens      Decreases
S-Log   No changes  No changes  Increases     No changes  Narrows     Decreases

Thanks @IronFilm!
-
Maybe all of us who have SOMEHOW lived without a talking refrigerator will now be rewarded. I strongly predicted that Samsung would not give up on their cameras. I was wrong, but not completely wrong, because they still sell them. And we must keep that in mind. It's still speculation what's going on with their R&D and factory. If Samsung is listening to all the passion for HDR around here (@jonpais, I'm talking to you), then an NX2 is on the way! 6K would greatly improve color texture in 4K HDR!
-
The dynamic range may be more perceptible, but that doesn't mean it gives the filmmaker higher quality than the gain in color information, however small. You accept above that the trade-off exists, and then try to argue with me again. I don't get it. It's always up to the filmmaker's subjective decision. I've said that from day one and will keep saying it. I only start arguing again when someone says the trade-off doesn't exist. Again, I don't know why we're arguing. Small, big, it's up to the filmmaker.

As for @HockeyFan12's second article, which is from the beginning of Sony's S-Log development: Sony writes, "S-Log is a gamma function applied to Sony's electronic cinematography cameras, in a manner that digitally originated images can be post-processed with similar techniques as those employed for film originated materials." They weren't trying to replace Rec.709. They were ONLY giving filmmakers a way of shooting where they could push or pull the "digital negative" like a film negative. S-Log was designed for filmmakers who were used to being able to push or pull their FILM! They didn't design it to provide a "better" image than what the camera already recorded in Rec.709 with a log curve already applied, as @webrunner5 pointed out. Just a different one.

In the exposure guide Sony gives these specs:

Table 1. F35 Dynamic Range in Cine Mode

ISO    S/N       Exposure Latitude   Exposure Latitude   Total Latitude
                 over 18%            below 18%           (Dmin to Dmax)
450    54.5 dB   +5.3 stops          -6.8 stops          12.1 stops
500    53.6 dB   +5.5 stops          -6.6 stops          12.1 stops
640    51.5 dB   +5.6 stops          -6.3 stops          11.9 stops

Notice how the signal-to-noise ratio is 54.5 dB at ISO 450 and 51.5 dB at ISO 640. That tells the filmmaker that the harder he "pushes" his digital negative, the more noise he will experience. So think of the noise difference between shooting ISO 400 on your Sony/Panasonic camera and ISO 800 in log gammas. The reason Sony didn't go into color loss is that any filmmaker who uses film is ACUTELY AWARE of the color loss when pushing negative stock.
Now again, Sony isn't going to tell the users of their cameras how to shoot. But I am going to tell any young filmmaker who has only known digital video that they should be careful about jumping to conclusions about what 8-bit log gammas can and cannot do.
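To put those S/N figures from the Sony table in photographic terms, here is a rough sketch. The 6.02 dB-per-doubling figure is the standard signal-doubling relation, not something taken from the Sony document:

```python
def db_to_stops(db: float) -> float:
    """Convert a signal-to-noise difference in dB to stops.
    A doubling of signal is about +6.02 dB, i.e. one stop."""
    return db / 6.02

# The F35 table: S/N drops from 54.5 dB at ISO 450 to 51.5 dB at ISO 640,
# i.e. roughly half a stop more noise for that push.
print(round(db_to_stops(54.5 - 51.5), 2))  # 0.5
```

So the 3 dB drop between ISO 450 and ISO 640 is about half a stop of extra noise, which is exactly the kind of trade a film shooter would expect from pushing a negative.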
-
Thanks, I think you've finally given me the information for my report, when I finish it! THANKS!!!! Log makes sense for a 16-bit linear scan in old scanners because the data DOES need to be on a log scale to work with displays. That makes total sense to me. However, I believe some people assume that the camera manufacturers don't already map data into a log-like distribution in cameras. That is, they believe log is a trick that yields extra DR that the camera manufacturers hadn't "noticed" until they put log gammas into their shooting profiles. I believe that is totally false. The gamma adjustments needed for electronic visual data in cameras have been understood from day one. Scanners are a whole other story. The links you gave appear to me as broken graphics. Can you try again? Thanks!

@IronFilm What is distortion? My understanding is that it can result either from sound pressure overloading the microphone's diaphragm or from a data stream that has more dynamic range than the data container can hold. The reason some people don't notice a relationship between dynamic range and bit depth is that in audio, the equipment is more than powerful enough to capture all the electronic data produced by the microphone, at a resolution where people do not notice distortion. This is not the case in video. Current sensors produce far more data than consumer cameras and memory cards can handle. One can consult the table values I attached above. In any recording of physical measurements, the dynamic range must have enough bits to record whatever resolution of data is required to give a continuous signal. As you know, there are people who argue that we can discern distortion unconsciously even at 44.1 kHz sampling. What do they believe one needs to reduce that distortion? More samples, which means more data. Ultimately, dynamic range = resolution / range of values.
To sum up my whole point about dynamic range: one can't think of range only (5 stops, 14 stops, whatever); one must also factor in the resolution (the "dynamic") needed to give visually continuous color, or continuous tone in sound.
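That range-versus-resolution trade-off can be sketched numerically. A minimal illustration, assuming for simplicity a linear encoding with code values spread evenly across the range (real log encodings distribute them non-uniformly):

```python
def tonal_step(stops: float, bits: int) -> float:
    """Size of one code step, in stops, when 2**bits code values
    are spread evenly across `stops` of dynamic range."""
    return stops / (2 ** bits)

# Same 8-bit container, twice the range: each tonal step doubles,
# i.e. the "resolution" side of the trade-off is halved.
print(tonal_step(5, 8))    # step size over 5 stops
print(tonal_step(10, 8))   # step size over 10 stops: exactly twice as coarse
```

In other words, stretching a fixed number of code values over more stops makes each step coarser; the range goes up only by giving up tonal resolution.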
-
Yes, it's better in "10-bit" because it's shooting 4:2:2 chroma sampling, but calling it "10-bit"... well, I'll leave that alone. I don't dispute that I'd rather grade 4:2:2 than 4:2:0. What I question is whether the 10-bit from the GH5 is the same as the 10-bit ProRes from the BMPCC, as @Damphousse... Anyway, like you, I don't have any real problems with 8-bit. Will the C-Log from my Canon C100 look better on an HDR TV? Most likely! But there are other reasons for that than dynamic range. The whole "15 stops of DR in 8-bit" claims are beyond ignorant to me, but again, I'll say no more. The C100 gives a super beautiful image in 8-bit. I just started this thread hoping some people were reading who have worked with professional equipment for the past 15 years and could shed light on the question of when LOG began, etc. I love all this technology, the cameras from Panasonic, Sony, BM, Canon, Nikon, etc. However, this is supposed to be a forum for filmmakers. I hope to help them understand something that is going on under the hood so they can make better decisions, and I hope to learn from them what they experience in their shooting. It's sad I can't give opinions about what HDR can or cannot do. I should probably leave this forum for a bit.
-
I'm not going to go that far, Mark. What I said about the limitations of HDR I still believe is true. If you believe I have given incorrect information, please post it right here. Quote me verbatim and give technical proof of any technical inaccuracy. I have given technical data above to show the difficulties inherent in providing increased dynamic range. I am the closest person here to a real engineer, as I have worked with RAW data at a very low level. For example, when you can tell me you understand this, then let's talk: https://bitbucket.org/maxotics/focuspixelfixer/src/016f599a8c708fd0762bfac5cd13a15bbe3ef7ff/Program.cs?at=master&fileviewer=file-view-default "Those people who know more than you on HDR" is who? Sorry, but just because you can go out and buy a $5,000 camera and TV doesn't mean you know anything about how it is built, how it works, or what it can do when measured SCIENTIFICALLY against other TVs. Unlike you, I don't just post clips from expensive cameras of walking around in parks and train stations. I build software and experiments to test what cameras do. That IS my thing. I build gadgets to help with technology. I've designed and built cameras that take 1+ gigapixel images through a single optic: http://maxotics.com/service/ If you don't value what I've learned, fine, but I don't see why you need to leave the nastiest comment I've ever read here on EOSHD.
-
I never said it wouldn't be more pleasing to me. I said I was doubtful it would solve the DR problem inherent in 8-bit equipment. It can be better for a lot of other reasons having nothing to do with DR! I've said this a lot but feel my statements have been taken out of context. If I could do it all over again I wouldn't have said or speculated anything about HDR, since it just wasn't appropriate: some people are just getting into HDR, and it dilutes the worth of what they're doing (which is the last thing I want to do). For that, I am sorry.
-
You guys are killing me! You know, I want to be as liked as the next guy. When I first started this stuff years ago I got into a huge fight with someone on the Magic Lantern forum. I insisted each pixel captures a full color. I went on and on and on, much like you guys are doing to me. I feel shame just thinking about it. In the end, I learned two things: 1) what a CFA is and what de-mosaicing does, and 2) always consider the possibility that I might be not just wrong, but horrendously, embarrassingly wrong. It's what we do after learning our errors that defines us (hint, hint @IronFilm). Anyway, after the ML thing I try to be like the guy who educated me. He didn't give up on me and I'm glad he didn't... but it's hard.
-
Yes, it is. The question is how the 10 bits are measured. In RAW, 10-bit would mean 1,024 values for each of the R, G, B channels. That is certainly more dynamic range than storing 256 values (8-bit). If we're talking about 10-bit in that first sense, we need 1,024 x 1,024 x 1,024 = 1,073,741,824 full-color values. What amount of memory do you need to store a pixel's color in that range? I'm attaching a table of data that I suggest studying and thinking about. The truth is, 10-bit is not 10-bit the way you (and I) would like to think about it. The extra 2 bits go into reducing chroma subsampling, that is, the amount of color compression the camera does across macroblocks in an 8-bit dynamic range. It does not increase a pixel's dynamic range! You can see this in the Panasonic GH5 specs I included.
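The arithmetic behind those counts, as a quick check (this is just the combinatorics, nothing camera-specific):

```python
def full_color_values(bits_per_channel: int) -> int:
    """Number of distinct full-color values when each of R, G, and B
    gets its own `bits_per_channel`-bit value."""
    return (2 ** bits_per_channel) ** 3

print(full_color_values(8))    # 256^3  = 16,777,216
print(full_color_values(10))   # 1024^3 = 1,073,741,824
# A true 10-bit-per-channel pixel needs 30 bits of storage, versus 24 for 8-bit.
```

That 24-to-30-bit jump per pixel, before any compression, is the storage cost a camera would have to pay for genuine 10-bit-per-channel color.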
-
Sorry, I'm just frustrated. I believe you get everything I'm saying. My guess is you have a reverse blind spot to Jon. Sorry! You obviously shoot with high-end equipment, so your cameras have fat-pixel sensors and powerful electronics. LOG IS useful to you. But I believe sometimes when you think about LOG you forget that you're thinking about it in high bit-depth or cinema-sensor contexts. Many people on this forum have never shot RAW or high bit-depth cameras. All they know and have is 8-bit. That's always what I'm focused on. Anyway, THANK YOU SO MUCH for your observations. I haven't used the equipment you have and I certainly don't have your experience, so I find your comments extremely interesting. Now I hope we're getting somewhere. I didn't say that Sony and Panasonic were lying. But I do question what they mean by 10-bit video. @HockeyFan12, do you find Sony and Panasonic consumer cameras to have true 10-bit, like an Arri? I don't believe they do. I believe they are 10-bit in adding a couple of decimal places to the internal 24-bit color values, but I don't believe they are 10-bit in saving 1,024 values per color channel. Let me know if my question doesn't make sense.
-
Yes! Because C-Log has been tuned to give the greatest increase in the DR "look" without super-compromising color. It's a beautiful look, but it's also a sensor made for video. Anyway, I shoot LOG; I never said anyone shouldn't. All that said, C-Log isn't my first choice.
-
Oh, I could pull my hair out! Those Alexa files (HD) are at around 42 MB/sec, in other words, pretty close to what you need for a BMPCC or ML RAW. In 4K it would be 4x that amount. 10-bit isn't the same for all cameras. That is, 10-bit on an Alexa isn't the same as 10-bit on a GH5, because the former, in the case above, is doing 4:4:4, which is essentially full-color compressed RAW (no chroma subsampling). It's not a "tiny fraction of the size" in my book. It's more like half the size of RAW, which, don't get me wrong, is nice! No matter how many ways or times I try, some people don't want to read the fine print or, again, figure out whether they're really getting 10 bits of DR with no color loss. @EthanAlexander I don't question your viewing experience. Again, I've never said HDR can't be good. The disconnect here is that you and Jon are looking at images where the brain doesn't care if mid-tone color is reduced. If I'm shooting a beach, or a landscape, with no one in it, I would shoot LOG. Again, I never said one shouldn't use LOG. However, if a person were the subject on that beach, I might not shoot LOG, to maintain the best skin tones. This is a subjective judgment! I might shoot LOG anyway. However, if I did, I would not expect the LOG footage to give me skin tones as nice as I would have gotten shooting a standard profile and letting the sky blow out. Your example above demonstrates my point. In the non-HDR version of the truck the red headlights are nice and saturated. The image is contrasty, not because a standard profile is naturally contrasty but because most of the image data is OUTSIDE the sensor's ability to detect it without a lot of noise. In your second image you have more detail in the street, and if that's what you're going for, good! But look at the truck's red headlight and the yellow paint. It's all washed out.
And if you go back to your editor and try to saturate them, you will get close to the other image, but this time with high contrast and crude saturated colors: a LOSS against your original image, which at least had nicely saturated colors around your center exposure. It's the same thing with your helicopter shot. If you look for rich colors you won't find them in the HDR. Again, it all depends on which look you're going for. If you don't care about nuanced saturated colors, then your HDR is great. I do favor saturated colors. We should respect each other's values, right? For anyone to say I can get what I want in HDR is fine, but they need to prove it to me. The images you posted just confirm my experience. But thanks for posting them!!!!!!! We all learn by looking at evidence! This is why I wish Jon would be more careful in accusing me of things. I never said the manufacturers are covering anything up. I said they have no incentive to release tests/specifications of their consumer equipment. No professional gives an arse about any of this stuff I'm talking about, because they just go out and get a camera that gives them what they want. If Netflix gave me money for a show, do you think I'd shoot it on a DSLR? Arri, here I come! I'm confining myself to the subject of this blog, which is understanding and getting the most out of consumer equipment.
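The "4K is 4x that amount" claim is plain pixel arithmetic. A back-of-the-envelope sketch (the 42 MB/s figure comes from the Alexa files mentioned above; the scaling assumes the same codec settings, which is a simplification):

```python
def scale_for_4k(hd_rate_mb_s: float) -> float:
    """Scale an HD (1920x1080) data rate to UHD 4K (3840x2160).
    Four times the pixels, so roughly four times the rate at
    otherwise identical codec settings."""
    return hd_rate_mb_s * (3840 * 2160) / (1920 * 1080)

print(scale_for_4k(42.0))  # ~168 MB/s for the same 4:4:4 encoding in 4K
```

168 MB/s is squarely in "external recorder and big, fast cards" territory, which is why the consumer cameras lean on chroma subsampling instead.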
-
HDR is a scheme to sell more television sets. They are not artists living off free love. How much of the technology HDR can/will deliver is the question. I NEVER said manufacturers were lying about bit depth. Please, if you're going to put words in my mouth, quote me. I thought I answered all your questions. I don't know exactly what the eye can take in. I only explained my experience. You said it looks fantastic. I said, "Great, I look forward to it." Yes, you can't see the difference between TRUE 8-bit and 10-bit image data, but that was NOT what we were talking about. We're talking about 8- and 10-bit consumer video. And I don't understand why you fight me so much on these technical issues when you say you're not interested in math. I was never good at math, but I have taught myself what I need to understand, I believe, these issues. I made that effort. You don't have to make the effort if you don't want to. But to fight me on it when I have done the work, and you haven't, is, well, disrespectful. You may think I have disparaged what you said about HDR. If you read it again, you will see I have not done that. I have only pointed out technical problems that I would think they're up against. Because, as I've said repeatedly, I have not seen good HDR, I can only speculate. And for the umpteenth time: if LOG can't really fit good color AND extended DR into 8-bit, how will HDR make an end run around that? Again, I'm not saying there isn't more to the technology. I'll just have to see, literally.
-
Am I speaking English? I don't disapprove of HDR or LOG or anything. I only DISAGREE with claims that LOG can fit more DR into a fixed data depth without, essentially, overwriting data. And I'm not making a judgment on you, or anyone else, who believes they can grade 10-bit footage better than 8-bit. This stuff is esoteric and, in the scheme of things, completely pointless. One shoots with what one can afford. Even if I approved, who gives a sh_t? I've gone to great pains to figure some of this arcane stuff out. If you say, "I don't care what you say, Max, I love my image," I have no answer to that! If you say, "This isn't working as well as I thought it would" (which is what happened to me and got me on this whole long, thankless trip), then I have some answers for you. The short answer is that the manufacturers have already maximized video quality in 8-bit, and there is no way of recording increased DR unless you give up color fidelity. Therefore, if color fidelity is your prime goal, don't shoot LOG. In 10-bit, theoretically, there should be more DR, but this 10-bit isn't true 10-bit built from 10 bits per RGB channel; it is 10-bit in the sense that when three 8-bit values are averaged, say 128, 129, and 131, it saves that as 129.33 instead of 129. If you want me to explain this in greater depth, let me know. Though it doesn't seem you trust anything I say.
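The averaging point can be shown in a few lines (illustrative only; real codecs do far more than take a plain mean of neighboring samples):

```python
# Three neighboring 8-bit samples.
samples = [128, 129, 131]
mean = sum(samples) / len(samples)

print(round(mean))  # 129 -- all an 8-bit container can keep
print(mean)         # 129.333... -- the fraction a higher-precision container preserves
```

The extra precision smooths gradients between existing values; it does not, by itself, add new values at the top or bottom of the range.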
-
98% false. You might be enjoying a placebo effect. But I'm not going to beat a dead horse here. In order to display a continuous color gradient, each code value must contain a color that blends in with the next one. If it doesn't, we notice banding. I'm just using banding because it is the easiest way to visualize whether the bit depth is great enough to spread a color out evenly through the data space. So let's assume that we never see banding in 5 stops of DR with 256 shades of red (8-bit). If we reduce it to 200 shades, we notice banding (and if we increase it to 512, we don't notice any difference). Okay, so let's say we shoot at 10 stops: how many bits do we need to maintain smooth color, assuming, again, that we accept that we needed 256 shades in 8 bits? Please think this out, @IronFilm, and explain what bit depth you came up with and why. Then explain why bit depth and DR are not married at the hip. To put this in audio terms, which you should know more about than I do: why don't you record your audio in 8-bit, which, I believe, is closer to what is distributed than the 16-bit you probably use? Is DR in audio not connected to bit depth? If I told you I had a new "Audio LOG" technology that would give you the same quality you get with your mics in 8-bit, what would you say to me?
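The banding exercise above can be worked through directly. This is a sketch under the post's own assumptions: ~256 levels over 5 stops is the banding-free threshold, and the required levels scale linearly with the number of stops:

```python
import math

def bits_needed(stops: float, levels_per_stop: float) -> int:
    """Smallest bit depth whose code-value count covers `stops` of range
    at `levels_per_stop` tonal steps per stop (crude linear model)."""
    return math.ceil(math.log2(stops * levels_per_stop))

# 256 levels over 5 stops is about 51 levels per stop.
print(bits_needed(5, 51))    # 8 bits, by construction
print(bits_needed(10, 51))   # 9 bits: doubling the range costs one more bit
```

Under this model, every doubling of dynamic range at constant tonal resolution costs exactly one bit, which is the sense in which bit depth and DR are married at the hip.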