Everything posted by ND64

  1. It's a paper. You mentioned an "automotive" sensor as an example, and I said it's very common in this industry to achieve a big FWC with binning. What evidence do you have that proves otherwise?
  2. And all automotive sensors do some kind of pixel binning. Use this tool, set the binning factor to 4, and see what happens: https://www.microscopyu.com/tutorials/ccd-signal-to-noise-ratio (a small sketch of the SNR formula behind that tool follows this list).
  3. People don't throw away their perfectly fine 4K TV to get a new 8K one if there is no visible benefit. And the problem with 8K is that you need a 100-inch TV to see its four-times-4K pixel count. On a 55-inch set at a typical viewing distance it's completely meaningless (a quick pixels-per-degree calculation follows this list).
  4. In science there is a distinct answer to every question, but there are countless ways to make raw data visible! So it's a subjective process, not science. I'll just put a better description from two DPR forum members here:

"The term 'colour science' is somewhat meaningless, and has become a marketing and fandom term whereby companies and people can say 'my colours are exceptional due to my special magic'. It's nonsense. So, let's look at the science. Colour vision is a tri-stimulus phenomenon. Your eye has three different kinds of receptor, which have three different spectral responses, generally called S, M and L (a lot of confusion is caused by people incorrectly thinking that they are 'Red', 'Green' and 'Blue'). Your brain decodes the mix of stimuli received in these three types of cone into a colour. Any light stimulus that causes the same mix of S, M and L will cause the same colour to be perceived, even if the spectrum is completely different. However, it's even a bit more complex than that. How the brain decodes the colour is not constant; it depends on the prevailing conditions. So, under sunlight and incandescent lighting, the same object will reflect different spectra, and thus produce different S, M and L stimuli, yet the brain will decode the colours the same, because it applies a 'colour temperature' correction.

Your camera also has a tri-stimulus system, but its responses are not the same as S, M and L, so your camera doesn't 'see' the same colours as you do. Worse, the spectra of the three camera stimuli differ from camera to camera. Thus, you can't 'see' the colours in a raw file, and your idea that you can 'see' some differences in a raw file is just wrong. To 'see' a raw file it needs to be 'developed'. This means taking the camera-specific stimuli and calculating what human eye responses they should elicit, taking into account the lighting conditions. The human eye responses used are based on a body of work done by the International Commission on Illumination, known as the Commission Internationale de l'Éclairage (CIE), back in the 1930s. This defined a 'colour space' based on a set of standardised stimuli called X, Y and Z, which provides a possible description of most human-visible colours. From this XYZ space, other spaces for useful devices, such as RGB (if you have a display device based on red, green and blue emission), might be derived. Until your raw file has been transformed into such a standardised and meaningful space, you can't 'see' it. When you're 'viewing' a raw file, what you're 'seeing' is a display of a file which is the result of such a development process. This might be a JPEG file embedded in the raw, which the camera generated when you took the photo, or it might be the output of a CODEC operating in the background in your operating system. In either case, what you're seeing is a developed file, and any differences are likely to be a result of the development process."

"Decisions regarding colour made in different converters and cameras, however, do differ. A big part of the difference is that different assumptions about typical colour perception and pleasant colour are made. Another part is that some designers are either rather illiterate in colour science, or employ obsolete tools and notions. Also, the shortcuts and trade-offs made to balance the system differ. Ultimately, from a marketing point of view, cameras are JPEG-producing tools, and everything, including sensor spectral response, is nearly inevitably designed to produce nice-looking JPEGs faster.

Sensors in different cameras may have intentionally different spectral responses; different compensations and calibrations may be applied before the raw data is written to what we call a raw file; and after all that, a raw converter takes over, adding its own technologies (which may favour some spectral responses over others), trade-offs, and idiosyncrasies to the mix. Because raw file formats are not publicly documented, third-party converters may have additional trouble interpreting colour. Custom profiling of raw converters using modern software, custom-measured targets, and good target-shooting discipline makes the difference mostly go away."

(A sketch of the raw development pipeline described above follows this list.)
  5. There are pro shooters who can match a Canon still raw to a Nikon raw, to the level that nobody can tell which is which. The main reason it's a bit difficult is that these companies don't share any data about the spectral response of the CFA used on the sensor.
  6. Nikon has some patents for that idea, but it's difficult to make, and even if they managed to do it, it's very unlikely they'd put it in an off-the-shelf product. I think it's much simpler: just pixel binning. 605,000 / 4 ≈ 151,000, which is a typical FWC in modern CMOS sensors. Note that the next step is 81,500, almost half, as it should be.
  7. Arri's solution decreases the read noise; it has nothing to do with FWC. The only thing that can create such a huge FWC is 2x2 pixel binning, which everybody can do but usually doesn't.
  8. 605,000 is too good to be true for this pixel size. It's probably a base-ISO-only HDR trick.
  9. Why? You can get all these bitrates with H.265. Anyway, how on earth can anyone compare efficiency without any info about PSNR and visual quality?
  10. I don't think the target market, who probably demand RAW, cares about Fuji's film and photo color know-how. They just want clean, unaltered data in a manageable codec. I doubt that. What solution are new players going to provide that RED, Sony, Panasonic, BMD and Canon can't?
  11. "It’s the gain level that introduces noise, not the lack of light." Its exactly the opposite, brother. Gain actually decreases the noise. Lack of light increases that.
  12. I think it does. But vignetting at 35mm shouldn't be much.
  13. Look at the chip size. Even if they put this in an A7S, they would ditch the IBIS, because it would have issues moving a 39mm-wide sensor in that restrictive mount. It also helps them handle the heat. However, this is an off-the-shelf chip (Sony Semi doesn't need to write marketing material for Sony Imaging, right?), meaning it's available for camera makers to buy NOW. It's very unlikely that they would use a sensor for their upcoming camera AFTER it was made available to others.
  14. The difference is less than half a stop, and only at base ISO.
  15. 16-bit is ordered by the marketing department. Even 14-bit is only useful at base ISO; beyond that you only fill the extra bits with noise (see the bit-depth sketch after this list).
  16. I haven't seen a sports shooter with a 20MP or 24MP camera who says "I need more pixels".
  17. Same ambient light, same lens, same settings, same processor, and yet different AWB values.
  18. Added some quick clarity and sharpening to DPR's 4K sample from the A7 III. They are very close in detail. And as we know, Sony's Standard color is terrible.
  19. Imagine the meeting where one of them suggested banning eoshd URLs on the forum, and the others said "yeah, it's a good idea, let's do it." Imagine the volume of RAW arrogance in that room.
  20. I can open his forum to see the comments, but I get a 404 error when I click on post titles! What a joke.
  21. They did their best to lower production costs as much as possible, and yet they reported lower profit for their half-year.
  22. It's not hate. It's a natural reaction to the absurdity of the CEO's cluelessness about the company in her hands.
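
For post 2, a minimal sketch of the CCD SNR formula the MicroscopyU tool is built on, assuming illustrative electron counts rather than any real sensor's figures. On-chip binning sums the charge of several pixels before readout, so signal and dark current scale with the binning factor while read noise is paid only once per super-pixel:

```python
import math

def snr(signal_e, dark_e, read_noise_e, bin_factor=1):
    """CCD signal-to-noise with on-chip binning.

    All inputs are in electrons (illustrative assumptions).
    Binning sums charge before readout: signal and dark current
    scale with the factor, read noise is applied once.
    """
    s = bin_factor * signal_e           # summed photoelectrons
    shot_var = s                        # shot-noise variance = signal
    dark_var = bin_factor * dark_e      # dark current scales too
    return s / math.sqrt(shot_var + dark_var + read_noise_e ** 2)

# A pixel near saturation, unbinned vs. 2x2 (4-pixel) binned:
print(f"unbinned SNR: {snr(150_000, 100, 5):.0f}")
print(f"2x2 binned SNR: {snr(150_000, 100, 5, bin_factor=4):.0f}")
# The binned super-pixel also holds ~4x the charge, which is why
# binned sensors can quote full-well figures like 600k e-.
```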
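For post 3, a quick pixels-per-degree calculation. The ~60 px/deg acuity limit for 20/20 vision and the 2.5 m viewing distance are assumptions for illustration:

```python
import math

def pixels_per_degree(diag_in, res_h, distance_m, aspect=16 / 9):
    """Horizontal pixels per degree of visual angle for a TV.

    diag_in: screen diagonal in inches; res_h: horizontal pixels;
    distance_m: viewing distance in metres.
    """
    # screen width from diagonal and aspect ratio, in metres
    width_m = diag_in * 0.0254 * aspect / math.hypot(aspect, 1)
    px_per_m = res_h / width_m
    # metres subtended by one degree at the viewing distance
    m_per_deg = 2 * distance_m * math.tan(math.radians(0.5))
    return px_per_m * m_per_deg

for res_h, name in [(3840, "4K"), (7680, "8K")]:
    print(name, f"{pixels_per_degree(55, res_h, 2.5):.0f} px/deg")
# On a 55-inch set at 2.5 m, 4K already lands far above the
# ~60 px/deg limit, so 8K adds detail the eye cannot resolve.
```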
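For post 4, a minimal sketch of the "development" step the quote describes: camera-native RGB is white-balanced, mapped to CIE XYZ through a per-camera matrix, then to sRGB for display. The camera matrix and white-balance gains below are made-up placeholders, not any real camera's calibration; the XYZ-to-sRGB matrix and transfer curve are the standard sRGB (IEC 61966-2-1) ones:

```python
import numpy as np

# Hypothetical camera-to-XYZ matrix (real converters derive this
# from profiling; these numbers are placeholders).
CAM_TO_XYZ = np.array([
    [0.70, 0.15, 0.10],
    [0.25, 0.70, 0.05],
    [0.00, 0.10, 0.90],
])

# Standard XYZ -> linear sRGB matrix (D65 white point).
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def develop(raw_rgb, wb_gains=(2.0, 1.0, 1.5)):
    """Turn camera-native tristimulus values into viewable sRGB.

    raw_rgb: linear camera RGB in [0, 1]. The white-balance gains
    here are illustrative; a converter picks them per illuminant.
    """
    balanced = np.asarray(raw_rgb) * wb_gains    # illuminant correction
    xyz = CAM_TO_XYZ @ balanced                  # camera space -> CIE XYZ
    linear = np.clip(XYZ_TO_SRGB @ xyz, 0, 1)    # XYZ -> linear sRGB
    # sRGB transfer curve so the values display correctly
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * linear ** (1 / 2.4) - 0.055)

print(develop([0.2, 0.4, 0.3]))
```

Until something like this pipeline has run, there is nothing to "see" in a raw file, which is the quote's point; two converters that disagree on the matrix or the gains will show different colours from identical raw data.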
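For post 11, a toy two-stage noise model showing the point: noise added after the analog gain stage shrinks when referred back to electrons, while shot noise tracks the collected light alone. All figures are illustrative assumptions, not any real camera's:

```python
import math

def total_noise_e(signal_e, pre_gain_noise_e, post_gain_noise_dn,
                  gain_dn_per_e):
    """Input-referred noise (electrons) for a two-stage sensor model.

    Shot noise depends only on collected light; downstream noise
    (ADC etc.), given in DN, is divided by the gain when referred
    back to electrons, so it matters less at high gain.
    """
    shot = math.sqrt(signal_e)                  # photon shot noise
    post = post_gain_noise_dn / gain_dn_per_e   # shrinks with gain
    return math.hypot(shot, math.hypot(pre_gain_noise_e, post))

# Same light at two gains, then one tenth the light at high gain:
for sig, gain in [(10_000, 1.0), (10_000, 8.0), (1_000, 8.0)]:
    n = total_noise_e(sig, 2.0, 12.0, gain)
    print(f"signal {sig} e-, gain {gain}: SNR = {sig / n:.0f}")
# Raising gain nudges SNR up slightly; cutting the light
# hurts far more, since shot noise dominates.
```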
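For post 15, a sketch of how many ADC bits a sensor can actually use, taking bits ≈ log2(full well / read noise), i.e. roughly one code per noise step across the engineering dynamic range. The per-ISO full-well and read-noise figures are illustrative assumptions:

```python
import math

def useful_bits(full_well_e, read_noise_e):
    """Bits needed to span the dynamic range full_well / read_noise."""
    return math.log2(full_well_e / read_noise_e)

# Each ISO stop roughly halves the usable well, so the bit depth
# that isn't just digitising noise drops as ISO rises.
for iso, fwc, rn in [(100, 80_000, 4.0),
                     (800, 10_000, 1.5),
                     (6400, 1_250, 1.2)]:
    print(f"ISO {iso}: ~{useful_bits(fwc, rn):.1f} useful bits")
# ~14 bits are justified at base ISO here; by ISO 6400 anything
# past ~10 bits only encodes noise.
```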