maxotics Posted September 8, 2017

3 minutes ago, Shirozina said: How about doing some proper research and getting your data from several sources - just like a scientist?

I presented a fact. You said I was wrong with NO SUPPORTING evidence on your side. I gave you evidence. Now you want more? Why do I have to do all the work here, Shirozina? You want to believe the eye can see a static 20 stops of DR; you've made your point here, and people can either believe your evidence (which is none) or mine, which IS based on scientific research - go read the footnotes of the Wiki article. I don't understand why you're upset. You called me on something, and I agree I should give evidence, which I did. We're all good!
cantsin Posted September 8, 2017

16 minutes ago, maxotics said: Scroll down to "Dynamic Range" https://en.wikipedia.org/wiki/Human_eye

Yes, read the full paragraph - perceived dynamic range also includes the adaptation of the human eye and the composite image created by the brain: "The human eye can detect a luminance range of 10^14, or one hundred trillion (100,000,000,000,000) (about 46.5 f-stops)".
Shirozina Posted September 8, 2017

You presented a Wikipedia article as fact - please do yourself a favour and do some proper research, which also includes using your own eyes.
maxotics Posted September 8, 2017

1 minute ago, cantsin said: Yes, read the full paragraph - perceived dynamic range also includes the adaptation of the human eye and the composite image created by the brain: "The human eye can detect a luminance range of 10^14, or one hundred trillion (100,000,000,000,000) (about 46.5 f-stops)".

Not sure if this is directed to me or Shirozina. So to be clear, I've never said we don't experience 20 stops of dynamic range. Never!
hmcindie Posted September 8, 2017

Actually, that part of the Wikipedia article doesn't seem to be sourced (citation needed). I take that 6.5 stops figure with a huge grain of salt (no way).
Justin Bacle Posted September 8, 2017

1 hour ago, tweak said: There's a few threads on ML forums where 8-bit Raw is discussed, mostly coming to the conclusion that it isn't a very good idea.

That's because we still cannot encode the image into a log space. 8-bit linear encoding is very poor, as it limits your maximum theoretical dynamic range and is affected a lot by shadow noise. When there's a proper way to encode from linear to "log-ish" before writing to the card, without much impact on the CPU, then we might wanna start talking 8-bit/10-bit :D I did try the 10-bit raw mode too; I found it a bit noisy and preferred the 12- and 14-bit files. 10-bit log should be better, though.
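[To put a rough number on Justin's point: in a linear encoding, each stop down the range gets half the remaining code values, so the shadows get almost none. A minimal Python sketch - purely illustrative, not Magic Lantern code, and `codes_per_stop` is a made-up helper name:]

```python
# Illustrative only (not ML code): how many code values an N-bit
# *linear* encoding leaves for each stop, counting from the top.
def codes_per_stop(bits, stops):
    total = 2 ** bits
    for stop in range(1, stops + 1):
        hi = total // (2 ** (stop - 1))  # top of this stop's range
        lo = total // (2 ** stop)        # bottom of this stop's range
        print(f"stop {stop} from the top: {hi - lo} code values")

codes_per_stop(bits=8, stops=10)
# stop 1 gets 128 values, stop 2 gets 64, and the bottom stops are
# left with one value or none - hence shadow noise and banding.
```

A log curve applied before quantization spreads those codes more evenly across the stops, which is the "log-ish" encoding being described.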
cantsin Posted September 8, 2017

4 minutes ago, maxotics said: Not sure if this is directed to me or Shirozina. So to be clear, I've never said we don't experience 20 stops of dynamic range. Never!

And that's why it's desirable to have 12-15 stops of DR in film projection/display (since we then perceive, for example, a landscape shot the way we perceive a real landscape) - which also means that you need 12-15 stops of camera DR to have an image without color banding.
maxotics Posted September 8, 2017

You guys! Please follow me. Put your brain in neutral.

When we go about our day, we move from dim areas, say our bedroom at 2 EV, to the outdoors, which might be 12 EV, and our eyes adjust. Our brain composes images that make sense to us, say our dimly lit hallway to the outside when we open the door. We experience 20 stops of dynamic range, or more, whatever. I have NEVER said otherwise. However, our eye is capturing these images in 6-stop exposures, so to speak. That's what the scientists have determined and what Wiki has published.

When we view film/video we do not need high dynamic range to make the image appear real! That's why we don't complain about photographs that have a 2-stop DR spread. Our brain can figure it out within 6 stops. If our TV could display the 20 stops that are in the physical world, it would certainly look more real, but would it be practical? Would we be able to tolerate quick shifts in dynamic range from 2 EV to 12 EV? You tell me. I suggest you think about this and don't just assume it would work. I believe you will come to the same conclusion: the visual artist works within 6 stops - 2 stops for print, 6 for projection. And by the way, most displays don't even do 6 stops well; you need to spend real money for that.
Mattias Burling Posted September 8, 2017

43 minutes ago, maxotics said: Why is "adding the eye" into the mix silly? It's the only way I can see photographs. How do you see them? Our eye, like our camera, does not have variable dynamic range. It is around 6 stops; that's what they've measured. However, in low light the eye can do amazing things within those 6 stops, and the same with bright light; HOWEVER, the pupil must resize for it to do it. The brain doesn't so much modify what the eye sees as create a real-time composite of all the information the eye has looked at. Sure, in real life we can take wide DR for granted. How often do we change brightness levels, say go from inside to out, or outside to in? In VIDEO, however, there are quick cuts, and changing brightness to the point where the eye would need to keep adjusting its pupil every few seconds would create a very disorienting experience. Stand in your doorway and look inside, then outside on a bright day, back and forth, and you tell me how good your brain really is at switching. It's not, because, believe it or not, the pupil doesn't change size instantaneously. I'm not trying to mix DR from a camera with that of a screen. Sorry, but I feel you're really not paying attention to what I'm explaining and why I'm doing it. You're looking to disagree with what I'm saying. Fine. Assume you can always mix your 13 in the camera to the 10 for the screen (don't know where you get that EV spread). Most of your photographs I see are processed to very high contrast. Such post processing hides many defects of the subtle DR you have to work with. And if you believe the quality of your videos has remained the same since you stopped shooting with a RAW-based camera, well, you should explain why Andrew just wasted his time with that guide (I LOVE your stuff Mattias, just having a little fun here!). Get a 5D3 and Andrew's guide, and come back to real digital film (RAW). Which, by the way, is not about DR; it's about the fact that RAW doesn't chroma-subsample and degrade the image. Noise is organic. It doesn't contain the various artifacts of a digital processor.

It is silly because you didn't understand my post. And you added extra information to it that I never said. I don't see images with my eyes. Neither do you. No one sees images. Our brain tricks us into seeing light reflections, that's all. So yes, we have variable DR, interpolation, monochrome and color. Minimum 20 stops DR. Super fast AF. Sharpness corrections. Content-aware masking. You name it. Our brain fixes it for us. I've seen how lengthy these "science" discussions get, so I'm backing out. And no 5D for me at this point. I love shooting ML Raw, but I have no use for it at this time and I already have a nice still shooter. Good luck with the debate. /M
Don Kotlos Posted September 8, 2017

The Wikipedia article is lacking some detail and references. There are two kinds of receptors in the retina: rods & cones. The cones are the least sensitive and carry the color information, while the rods are monochromatic but very sensitive. During daylight conditions most rods are saturated, and most of our vision depends on cones. At each instantaneous time point they have a contrast sensitivity of up to 1000:1, or 10 stops. But over longer time periods that ratio can increase dramatically. So during the few seconds that we enjoy a scene, looking at either the sky, the foreground or the forest, we can have a much higher dynamic range. And this is how we perceive the whole scene. At night (scotopic vision), our rods come into action and can offer an instantaneous dynamic range of up to 20 stops...

So in few words: while the dynamic range at any given moment is small, in reality (over longer times) it is much higher. Your brain is really good at masking this adaptation process. Think of all the eye movements (saccades) that you need to make when you look outside your window and how little of them you are actually aware of.
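[For reference, the ratio-to-stops conversions being thrown around in this thread are just base-2 logarithms, since a stop is one doubling of luminance. A quick sketch; `ratio_to_stops` is a hypothetical helper name:]

```python
import math

def ratio_to_stops(ratio):
    # one photographic stop = one doubling of luminance
    return math.log2(ratio)

print(ratio_to_stops(1000))  # ~10.0  (the 1000:1 instantaneous figure above)
print(ratio_to_stops(1e14))  # ~46.5  (the Wikipedia luminance range quoted earlier)
```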
maxotics Posted September 8, 2017

4 minutes ago, Mattias Burling said: Good luck with the debate.

I was presenting my findings, which I've spent hours and hours on. I don't look at this work as a "debate".

2 minutes ago, Don Kotlos said: So in few words: while the dynamic range at any given moment is small, in reality (over longer times) it is much higher.

Are you helping? Do exceptions prove the rule?
cantsin Posted September 8, 2017

4 minutes ago, maxotics said: When we view film/video we do not need high dynamic range to make the image appear real! That's why we don't complain about photographs that have a 2-stop DR spread. Our brain can figure it out within 6 stops. If our TV could display the 20 stops that are in the physical world, it would certainly look more real, but would it be practical? Would we be able to tolerate quick shifts in dynamic range from 2 EV to 12 EV? You tell me. I suggest you think about this and don't just assume it would work. I believe you will come to the same conclusion: the artist works within 6 stops.

Ansel Adams' zone system has 10 stops and is meant for mere prints, not the extended dynamic range of film projection.
maxotics Posted September 8, 2017

4 minutes ago, cantsin said: Ansel Adams' zone system has 10 stops and is meant for mere prints, not the extended dynamic range of film projection.

Oh brother. The zone system is ALL about fitting 10 stops of physical-world detail into 2 stops of print-world DR! It is NOT about displaying 10 stops!!!!! Indeed, think about it: how much DR does film capture in relative brightness? Almost none.
Don Kotlos Posted September 8, 2017

19 minutes ago, maxotics said: Are you helping? Do exceptions prove the rule?

What rule, what exceptions? I am trying to explain how it works. What you perceive as dynamic range in typical daylight conditions is much higher than what your eyes can detect in an instantaneous moment. I can go into more details if you want; I am a neuroscientist specializing in vision.

The problem is when you capture the footage with limited dynamic range and then, on your monitor, you look at the sky and see a large white patch with a contrasty line between the clouds that you never observed in reality. So while most monitors cannot produce more than 6-10 stops of dynamic range, capturing a wide dynamic range can be useful, for example if you want to hide the hard clipping of the photosensors, as Mattias said. Of course, putting 14 stops into an 8-bit file can lead to all sorts of other problems, such as loss of tonalities or artificial-looking footage (most HDR photographs). The large dynamic range of sensors is most useful at night, as it is for our eyes, since the rods can be slow to adapt at times...
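[A back-of-the-envelope way to see the tonality problem Don mentions: even if a log curve spread 14 stops perfectly evenly across an 8-bit file, each stop would get only a handful of code values. A sketch, assuming an idealized flat log curve:]

```python
# Idealized flat log curve: 14 stops shared evenly across 8-bit codes.
stops = 14
codes = 2 ** 8
print(codes / stops)  # ~18.3 code values per stop
# Smooth gradients (skies, skin) can need far more levels than that,
# which is where banding and artificial-looking tonality come from.
```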
cantsin Posted September 8, 2017

26 minutes ago, maxotics said: Oh brother. The zone system is ALL about fitting 10 stops of physical-world detail into 2 stops of print-world DR! It is NOT about displaying 10 stops!!!!! Indeed, think about it: how much DR does film capture in relative brightness? Almost none.

Further above, you wrote that "the true artist works within 6 stops". This is ambiguous language, because it doesn't say whether you mean motif (scene) contrast or display contrast. If you mean motif contrast, Ansel Adams would be wrong. If you mean display contrast, your number matches neither the (low) contrast of photographic prints nor the (high) contrast of film/cinema projection and modern flatscreens. What exactly is your point?
maxotics Posted September 8, 2017

19 minutes ago, Don Kotlos said: What rule, what exceptions? I am trying to explain how it works. What you perceive as dynamic range in typical daylight conditions is much higher than what your eyes can detect in an instantaneous moment. I can go into more details if you want; I am a neuroscientist specializing in vision.

I'm 56 and have been doing photography since my early, early teens - you'd think I'd be good at it by now. I appreciate your knowledge, I do, but it's confusing the issue here, which is what the filmmaker should focus on. The stuff you're pointing out, true as it is, is in the weeds. Like many others here, I used to focus on how much DR I could get in my image. Then I had the epiphany that I was thinking outside-in (physical world to viewing world), when improvements can come from looking at it inside-out (viewing world to physical world). That is, by thinking about what the viewer ultimately sees, and how that image can be at its best, I can understand what's important and what isn't. I wasn't trying to start a debate.

It's like Plato's cave. Everyone wants to talk about the metaphysical forces that create the image on the wall. I enjoy doing that too. But much can be learned by talking about ONLY what we see and what makes for good quality there.
Drew Allegre Posted September 8, 2017

6 minutes ago, Justin Bacle said: That's because we still cannot encode the image into a log space. 8-bit linear encoding is very poor, as it limits your maximum theoretical dynamic range and is affected a lot by shadow noise. When there's a proper way to encode from linear to "log-ish" before writing to the card, without much impact on the CPU, then we might wanna start talking 8-bit/10-bit :D I did try the 10-bit raw mode too; I found it a bit noisy and preferred the 12- and 14-bit files. 10-bit log should be better, though.

Can you explain how raw data would be encoded into a log space? I always thought raw was just raw (to an extent), with very little manipulation. Wouldn't all raw be linear, and couldn't any "log" space be created from the linear raw? I'm completely ignorant on this stuff.
tweak Posted September 8, 2017

3 hours ago, Justin Bacle said: That's because we still cannot encode the image into a log space. 8-bit linear encoding is very poor, as it limits your maximum theoretical dynamic range and is affected a lot by shadow noise. When there's a proper way to encode from linear to "log-ish" before writing to the card, without much impact on the CPU, then we might wanna start talking 8-bit/10-bit :D I did try the 10-bit raw mode too; I found it a bit noisy and preferred the 12- and 14-bit files. 10-bit log should be better, though.

Exactly.
Don Kotlos Posted September 8, 2017

4 hours ago, Drew Allegre said: Can you explain how raw data would be encoded into a log space? I always thought raw was just raw (to an extent), with very little manipulation. Wouldn't all raw be linear, and couldn't any "log" space be created from the linear raw? I'm completely ignorant on this stuff.

In principle, RAW should be the data directly off the A/D converter. But that is hardly the case for most manufacturers, since there is always some level of processing, usually selective noise reduction and per-channel gain. I would say that "RAW" right now just refers to non-deBayered data + some metadata.

Log space just means calculating the log (or another log-like function) of the high-bit-depth pixel value before conforming it to a lower bit depth, in order to maximize the information that is captured in the lower part of the intensity scale. So, for example, let's take 12 stops of light captured from the sensor and processed with a 12-bit A/D converter. The stop boundaries land on these 12-bit values:

stop:            -    1    2    3    4    5    6    7    8    9    10    11    12
12-bit linear:   0    2    4    8   16   32   64  128  256  512  1024  2048  4096

If we were to scale that linearly into 10 bits, we would have these values:

10-bit linear:   0    1    1    2    4    8   16   32   64  128   256   512  1024

We just lost two stops of information in the shadows... A log version of the original 12-bit data, scaled to 10 bits, would instead give:

10-bit log:      0   85  171  256  341  427  512  597  683  768   853   939  1024

And that results in no loss of dynamic range, by spreading the values for each stop equally across the bit range. Of course, things are a bit more complicated, since the log function that is used has to take into account the sensitivity of the sensor, the color calibration and so on. For example, you might want to assign more values to the midtones, which is where most things are anyway, and fewer to the highlights.
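[Don's two scalings are easy to reproduce. Here is a minimal Python sketch of the same numbers - illustrative only, not camera firmware; the rounding choices (ceil for the linear rescale, round for the log rescale) are just what reproduces the table above:]

```python
import math

STOPS = 12
RAW_MAX = 2 ** 12  # 12-bit A/D output range
OUT_MAX = 2 ** 10  # target 10-bit container

# stop boundaries as 12-bit linear values: 0, 2, 4, ..., 4096
linear = [0] + [2 ** s for s in range(1, STOPS + 1)]

# naive linear rescale to 10 bits: the bottom stops collapse together
lin10 = [math.ceil(v * OUT_MAX / RAW_MAX) for v in linear]

# log2 rescale: every stop gets an equal slice of the 10-bit range
log10 = [0] + [round(math.log2(v) * OUT_MAX / STOPS) for v in linear[1:]]

print(lin10)  # [0, 1, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
print(log10)  # [0, 85, 171, 256, 341, 427, 512, 597, 683, 768, 853, 939, 1024]
```

A real log encoding would use a curve tuned to the sensor and its noise floor rather than a bare log2, as Don notes.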
maxotics Posted September 8, 2017

5 hours ago, Drew Allegre said: Can you explain how raw data would be encoded into a log space? I always thought raw was just raw (to an extent), with very little manipulation. Wouldn't all raw be linear, and couldn't any "log" space be created from the linear raw? I'm completely ignorant on this stuff.

Almost everything we experience/measure is either counting (linear) or some form of exponential (log). We measure the sensor by counting (linear), that is, how many photons have hit the sensel, but we generally express/measure the amount of light exponentially, in stops: 1/1, 1/2, 1/4, 1/8, 1/16, 1/32, 1/64 etc. (we put fractional stops in between). That is, if we count 100 photons at one pixel and 200 photons at another, we generally don't conclude that the light at the second pixel is double; measured through the inverse-square law (https://petapixel.com/2016/06/02/primer-inverse-square-law-light/), double would actually be 400 photons! In the real world, light behaves exponentially. In our data world, we measure/store numbers in a linear space. Put another way: humans think in numbers for counting. God thinks, or nature behaves, exponentially.

@cantsin can explain better than I why measuring light linearly - or I should say, using the sensor's linear readings at the low and high end - generally means you end up with a lot of useless data. Many of us have hashed this to death. The core problem is that you can't save 12 stops of full-color dynamic range in an 8-bit space. It doesn't matter how those numbers are represented, LOG or otherwise. The 8-bit space was designed for our viewing, not for our capturing. LOG can't magically "compress" the sensor data into an 8-bit space. It can help in telling you where you can trade off some data in one area for another, as Don mentions above, but the rub is that the area where you're getting a lot of noise is also the area you can least afford to lose if you're extending beyond the 6-8 stops you can capture in 8-bit! LOG is also used in video processing to distort data into an 8-bit space. Just to 'f' with us humans, LOG also does a better job of describing light as we really experience it. Unfortunately, our computers, which are binary, don't work with data (like RAW sensor data) in LOG, let alone the decimal system.

As a side note, Canon RAW being saved in 14 bits created huge problems for me (and others) in software development. Everything in programming makes it easy to work with bytes. All electronics are based around that. It's easy to say/program "get me this byte" or "these two bytes". To say/program "get me these 14 bits"... no fun. Take my word for it!!!! My guess is most cameras are based around 8-bit (1-byte-chunk) storage. The RAW that ML works with is just a stream of bits, which, again, you have to parse out into 14-bit chunks. To make the data easy to work with in camera or on a PC, they'd really need to make them 2-byte (16-bit) chunks. Well, most small electronics do not use 16-bit processors. You'll read many conspiracy theories about this stuff, but the mundane truth is that our electronics aren't set up for working with 12-stop (12- to 14-bit) dynamic-range data.
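[To make the 14-bit pain concrete, here is a sketch of the bit-twiddling needed just to read 14-bit samples out of a byte stream. This uses a generic MSB-first packing for illustration - NOT Magic Lantern's actual raw layout, whose details live in the ML source:]

```python
def unpack_14bit(buf):
    """Unpack bytes into 14-bit samples (generic MSB-first packing,
    not ML's real layout - just to show the shifting and masking
    that byte-aligned 8/16-bit data never needs)."""
    samples, acc, nbits = [], 0, 0
    for byte in buf:
        acc = (acc << 8) | byte  # shift in 8 more bits
        nbits += 8
        while nbits >= 14:
            nbits -= 14
            samples.append((acc >> nbits) & 0x3FFF)  # peel off top 14 bits
    return samples

# 7 bytes hold exactly four 14-bit samples (56 bits):
print(unpack_14bit(bytes([0xFF, 0xFF, 0x00, 0x00, 0x0F, 0xFF, 0xFF])))
# -> [16383, 12288, 63, 16383]
```

With 8- or 16-bit samples, the same read is a single aligned load, which is exactly the convenience being described above.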