maxotics Posted November 13, 2014

The sensor can do 16000 frames per second, so that color filter is faster than that.

You mean the color filter could be slower than that and work? The Sony sensor might use the technique used in the first color photo, except at a pixel level. The camera would take a red reading in 0.0139 s, shift to the green filter, take a second reading, then shift to the blue filter and take the third reading, all within 0.0417 s (a 24th of a second). Look at these images: http://educacionporlaaccion.blogspot.com/2013/04/evolucion-historica-del-color.html
Nikkor Posted November 13, 2014

You mean the color filter could be slower than that and work? The Sony sensor might use the technique used in the first color photo, except at a pixel level. The camera would take a red reading in 0.0139 s, shift to the green filter, take a second reading, then shift to the blue filter and take the third reading, all within 0.0417 s. https://www.blendspace.com/lessons/fhBlI8h96rfxPA/history-of-color-photography (see Number 4)

No, it should be faster. So when you shoot at 1/4000 it probably does a lot of cycles, so I don't see a problem with moving subjects. Anyway, this is just speculation.
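The per-filter timing being debated above is easy to sanity-check. A quick sketch (the 16,000 readings/s figure comes from the leaked spec; everything else is illustrative arithmetic, not anything Sony has confirmed):

```python
# Sanity check of the per-filter timing discussed above: if one frame
# at 24 fps is split into three sequential R/G/B readings, each reading
# gets a third of the frame interval.

def per_filter_time(fps, n_filters=3):
    """Time available for each color reading within one frame, in seconds."""
    frame_interval = 1.0 / fps
    return frame_interval / n_filters

# At 24 fps: 1/24 s = 0.0417 s per frame, so 0.0139 s per color reading.
print(round(per_filter_time(24), 4))   # 0.0139

# A sensor rated at 16,000 readings/s could instead fit many full
# R-G-B cycles into a single 24 fps frame:
cycles_per_frame = int(16000 / 3 / 24)
print(cycles_per_frame)                # 222
```

So if the leaked figure is real, the filter would not be limited to one R-G-B pass per frame; it could cycle hundreds of times, which is what the later posts speculate about.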
maxotics Posted November 13, 2014

No, it should be faster. So when you shoot at 1/4000 it probably does a lot of cycles, so I don't see a problem with moving subjects. Anyway, this is just speculation.

Yes, even whiskey wouldn't hurt this conversation ;) I don't see how they can do it without physically moving some kind of filter over the sensor. One of the problems with sensors, of course, is the space between the actual light-reading silicon. I believe a similar problem is present here: the time between filter changes.
Oscar M. Posted November 13, 2014

The readout speed of the sensor is astonishing if the leaked specs are to be believed: 1080p at up to 16,000 fps. That figure is very unlikely to make it into any finished product, though, as frame rate also depends on how fast frames can be handled by the image processor and video encoder. Don't forget the storage media controller's speed. Anything that fast will need nothing less than a SATA controller or storage to external media.
BenjaminJ Posted November 13, 2014

Respectfully, let me provide a correction: Sigma's Foveon X3 sensors do not use stacked color filters -- they measure color by penetration depth in the silicon (red penetrates deepest and blue least deep, green in between). The main advantage of this type of sensor is that the spatial color resolution for red and blue is twice as high as with a Bayer sensor, and there is no color aliasing or colored moiré (but still some luminance aliasing/moiré). The downside is that the colors need to be calculated with heavy mathematics, and this is error-prone, leading to more metameric failures (failure of the sensor to differentiate between similar colors -- something that cannot be fixed with post-processing). I have no personal experience with Sigma cameras but have heard complaints that the sensor output is unpredictable, especially with mixed light sources.

Also, if you think Leica can really make the sensor of their M Monochrom for $50 you're out of your mind. :D They only removed the color filter array, which isn't exactly the most expensive part. The very limited production runs obviously impact the price (and Leica is a boutique brand anyway, so lowering their prices would hurt their image). Regarding the "electronic color filtering" of this Sony sensor: I would presume it's a solid-state technology and not some set of physical filters moving across the sensor.

maxotics Posted November 13, 2014

I use them. I hate everything about them except the final image. If I have good light, and time, and need medium-format-quality stills in my pocket, they are the ONLY game in town. The same for your BMPCC if you want 1080 RAW video in your pocket. My gut feeling is this new Sony sensor is more about semantics than any true pixel-level RGB sampling (like Foveon). The problem is something along the lines of the Heisenberg Uncertainty Principle.
In our world, you can either know a light beam's color or its intensity, but not both at any given instant. All sensors are intensity-only. Whether Bayer or Foveon, filters are put in front of the sensor to estimate color. If you think there is NO problem in this, then why does Leica have a monochrome (no color filter) camera that sells for $6,000? (It probably cost them $50 to make it, but no matter ;) ) If you're a B/W purist, those filters degrade your black-and-white image. So there are only two ways around this problem (unless Sony has discovered an electrically conductive material that can read color): you can stack color-filtered sensors on top of each other (like Foveon) or next to each other (like Bayer). With Foveon you get true color pixels with little color distortion (in strong light); with Bayer you get high-sensitivity color pixels, but when you combine them horizontally you get aliasing/moiré problems.

Theoretically, you could take a grid of color filters, RGB, and vibrate them across the sensor so that the sensor could take three readings, one for each color. So if you had a global shutter that ran at 72 frames a second, it could take the red at a third of a 24th of a second, then the blue, then the green. Perhaps they use a seriously precise stepper motor to do this. If the sensels are rectangular, maybe they use that to capture all three colors at any instant, but the vibration changes the pixel-center filter color and the pixel just averages them all together. It's all very interesting stuff, to me at least, but my guess is that though it may make for a good video application, it won't be good enough for still photography (at least professional or enthusiast). The reason is that Foveon's limits come from PHYSICS; they aren't a failure of Sigma. They simply can't find a substance that will take the color value of light and still send enough light to the sensel below it, then the next one below.
If Sony can change a color filter over the sensel it could eliminate color moiré problems in a still subject, but if the subject moves then the color may change between filter changes (in that third of a 24th of a second). Color problems are back! I believe understanding this stuff makes one a better photographer, or filmmaker, even if it has no immediate practical use on set. For example, if you noticed a lot of moiré in the background of your shot you might go out and buy all kinds of blur filters. Every time you got rid of the moiré you might find the image not sharp enough. If you knew this stuff about sensors, however, you'd open the aperture up a bit and increase the blur in the background while keeping your subject in focus.
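The moving-subject worry above can be made concrete with a toy simulation: a bright edge slides across a 1-D sensor while a hypothetical filter sequence runs R, then G, then B. All names and numbers are illustrative; it only shows why sequential color capture fringes on motion:

```python
# Toy 1-D illustration of the temporal color problem: an edge moves one
# pixel between the sequential R, G and B readings, so pixels near the
# edge get mismatched channel values (color fringing).

def scene(edge_pos, width=8):
    """1-D scene: white (1.0) left of the edge, black (0.0) right of it."""
    return [1.0 if x < edge_pos else 0.0 for x in range(width)]

# Edge positions at the moments of the R, G and B readings.
red   = scene(edge_pos=3)
green = scene(edge_pos=4)
blue  = scene(edge_pos=5)

# Recombine the three sequential readings into RGB pixels.
frame = list(zip(red, green, blue))

print(frame[2])  # (1.0, 1.0, 1.0) -> still white, no artifact
print(frame[3])  # (0.0, 1.0, 1.0) -> cyan fringe where the edge moved
print(frame[4])  # (0.0, 0.0, 1.0) -> blue fringe
```

The faster the R-G-B cycle relative to the motion, the less the edge moves between readings and the narrower the fringe, which is one argument for the very high cycle rates discussed in this thread.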
GMaximus Posted November 13, 2014

assuming Sony use their RGBW system, creating a red, green, blue and white pass using an electronic filter: 4.44M * 4 = 17.76M = 18MP 18 megapixels is about 5760 * 3240, roughly 6K

Yeah, and my old fullhd monitor is actually 6k too if I count the subpixels! ^)
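For what it's worth, the arithmetic in the quoted claim only reaches "18MP" by counting each of the four filter passes as a pixel, which is exactly the joke:

```python
# Pixel arithmetic from the quote above: 4.44M sensels times four
# sequential passes (R, G, B, W) gives the quoted "18MP".
subpixel_count = 4.44e6 * 4
print(subpixel_count)        # 17760000.0

# And the "roughly 6K" frame it is compared against:
print(5760 * 3240)           # 18662400, about 18.7 MP
```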
maxotics Posted November 14, 2014

Respectfully, let me provide a correction: Sigma's Foveon X3 sensors do not use stacked color filters -- they measure color by penetration depth in the silicon (red penetrates deepest and blue least deep, green in between). [...] Regarding the "electronic color filtering" of this Sony sensor: I would presume it's a solid-state technology and not some set of physical filters moving across the sensor.

Hi Benjamin, I have not had any "unpredictable" outputs with my Sigma cameras in mixed light sources. I have noticed problems with the red channel, however. So I'm a bit confused about what you said, that the sensor just reads the depth of the light to ascertain color. I'm not saying you're wrong, I'm just saying I'm still confused.
How does the Foveon work, then? Does it take 3 separate readings from the sensel and use math to calculate color, or, if it doesn't use filters at each depth, does it have some sort of probe (for lack of a better word) at 3 depths of silicon, one for each color? If either of those is the case, why don't any other monochrome sensors have color possibilities? That makes it difficult for me to believe that Foveon gets different colors from the same sensel material. As for the Leica monochrome, aren't color filters added after the chip is made? Machine vision systems still use mostly monochrome, right? Anyway, my point wasn't to bash Leica, only to point out that all sensor designs are compromises in some form and that some photographers will pay a lot to get near full-frame monochrome images (I agree, it's probably an expensive short run from the manufacturer). If Sony is going to use some sort of material for color reading, what could it possibly be? Do you have any idea? I mean, that alone would be game-changing. That the chip is going to have such high FPS and a global shutter seems too good to be true.
ZoranBerlin Posted November 14, 2014

Since Sony has developed their multishot technology in recent years, I can imagine that for each pixel three shots are taken (RGB) and composed together. If there is camera movement and a pixel receives three different kinds of light information, the algorithms must compensate for the loss of colour information. So, I guess you end up with noise, too. In multishot tech, blurred shots are usually omitted. It is a trade-off, and it doesn't work for RAW. Also, with long exposure, does it mean that the filter array switches endlessly until you have your minute or so of exposure time? Wonder how that figures.
AndrewM Posted November 14, 2014

On how the Foveon sensor works - you are both right. It does have multiple sensels for the different colors, but they are stacked on top of each other vertically. And it does have color filters, and it doesn't have color filters. Essentially, it uses the silicon itself as a filter - different frequencies of light penetrate to different depths, so they end up in different sensels.

On the new Sony sensor - we are all guessing, but if it is taking sequential pictures in different colors, then what you are doing is swapping spatial chroma aliasing for temporal chroma aliasing - if objects are moving, then a single object will exhibit weird uneven color smearing that would have to be corrected in software, which will cost you resolution. That is why I think you are seeing the insane frame rates. If you were shooting 30p with a 1/60 shutter, and it did 1/180 second red, then 1/180 green, then 1/180 blue, anything that has changed position in 1/180 second is going to cause real problems. But if, in 1/60 of a second, it shoots r-b-g-r-b-g-r-b-g-r-b-g... and combines the exposures, the problem goes away to a large extent. That would be really exciting. It would also allow you to do some really, really cool things to deal with temporal aliasing and to give more pleasing motion blur - you could assign lower weights to the exposures at the beginning and end of the broader exposure interval. See http://www.red.com/learn/red-101/cinema-temporal-aliasing
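The weighting idea above can be sketched in a few lines. Assuming, purely for illustration, that each output frame is a weighted sum of many short sub-exposures, a triangular weight de-emphasises the start and end of the frame interval - the kind of shaped motion blur the RED article discusses. Nothing here reflects the actual sensor's processing:

```python
# Sketch of weighting sub-exposures to shape motion blur: weights rise
# to the middle of the frame interval and fall off at the ends.

def triangular_weights(n):
    """Triangular weights over n sub-exposures, normalised to sum to 1."""
    mid = (n - 1) / 2.0
    w = [1.0 - abs(i - mid) / (mid + 1) for i in range(n)]
    total = sum(w)
    return [x / total for x in w]  # normalise so brightness is unchanged

def combine(sub_exposures, weights):
    """Weighted sum of per-sub-exposure values for one pixel, one channel."""
    return sum(v * w for v, w in zip(sub_exposures, weights))

# One pixel's red channel across 9 sub-exposures while an object passes:
reds = [0.0, 0.0, 0.2, 0.8, 1.0, 0.8, 0.2, 0.0, 0.0]
w = triangular_weights(len(reds))
print(round(combine(reds, w), 3))   # 0.504
```

With flat weights the same pixel would smear evenly across the whole interval; the triangular profile concentrates the blur toward the centre of the exposure, which tends to look more film-like.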
Nikkor Posted November 14, 2014

see http://www.red.com/learn/red-101/cinema-temporal-aliasing

Oh, great link! Finally I understand why motion blur sucks so much on my camera; I thought I was making it up.
sunyata Posted November 14, 2014

too bad about the 6k, which might be accomplished by an updated type of pixel shift (advantages of no de-bayer). other than that, it sounds like it could be analogous to a dye transfer process, which had problems with crosstalk, but we still loved it. let's hope corporate doesn't screw it up.
maxotics Posted November 14, 2014

On how the Foveon sensor works - you are both right. It does have multiple sensels for the different colors, but they are stacked on top of each other vertically. [...] Essentially, it uses the silicon as a filter - different frequencies of light penetrate to different depths, so end up in different sensels.

Hi Andrew, as you know, Sigma cameras take seemingly forever to write RAW data. I assume it's because, as Benjamin pointed out, there is a lot of heavy math going into the interpretation of color readings. More evidence of the difficult math is that Adobe has never created a RAW reader for these files, and the Sigma software is notoriously slow and buggy processing them. The newer Sigma cameras use a high-resolution top (blue?) layer to make a better compromise between resolution and color. I tried one of the new cameras; still very slow. So the question is, has Sony figured out an electronic or mathematical way around Foveon (vertical color sampling) problems? I somewhat doubt it. Or are they using high resolution, over-sampling and the less critical nature of video color to sacrifice color accuracy for aliasing-free video at high frame rates? Like the A7S, which isn't anything new but a chip made the way a low-light videographer would make it, my belief is that Sony hasn't developed a new technology here. They're just making chips that will do one thing well, along the lines of what Sunyata said (as you know, as nice as the A7S is, it loses dynamic range at low ISO). Thoughts?
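The depth separation AndrewM described follows Beer-Lambert attenuation in silicon. The 1/e penetration depths below are rough order-of-magnitude values for silicon at visible wavelengths, not actual Foveon layer specs, so treat this purely as a sketch of the principle:

```python
# Sketch of depth-dependent absorption in silicon: short wavelengths
# are absorbed near the surface, long wavelengths penetrate deeper.
# Penetration depths are approximate illustrative values.
import math

PENETRATION_DEPTH_UM = {"blue": 0.4, "green": 1.4, "red": 3.3}

def fraction_absorbed(color, depth_from, depth_to):
    """Fraction of incoming light of one color absorbed between two depths (um).

    Beer-Lambert: the fraction still travelling at depth z is exp(-z/d),
    so the slice from z1 to z2 absorbs exp(-z1/d) - exp(-z2/d).
    """
    d = PENETRATION_DEPTH_UM[color]
    return math.exp(-depth_from / d) - math.exp(-depth_to / d)

# How much of each color a shallow 0-0.5 um layer captures:
for color in ("blue", "green", "red"):
    print(color, round(fraction_absorbed(color, 0.0, 0.5), 2))
# blue 0.71, green 0.3, red 0.14
```

The overlap is the point: every layer absorbs some of every color, so recovering clean RGB means inverting a badly conditioned mixture - which is consistent with the "heavy math" and metameric-failure complaints in this thread.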
AndrewM Posted November 14, 2014

Hi Andrew, as you know, Sigma cameras take seemingly forever to write RAW data. [...] So the question is, has Sony figured out an electronic or mathematical way around Foveon (vertical color sampling) problems? [...] Thoughts?

I think the Foveon/Sigma problem is probably a mix of (1) silicon being a pretty lousy (and probabilistic) color filter, thus providing "information" that requires an awful lot of massaging to make "real" color, (2) trying to pull information from three distinct layers of a chip and (3) Sigma being a small company and not really an electronics company. So they probably don't have cutting-edge processing hardware, or custom hardware, or optimized code and algorithms. And there are just differences in the details of everybody's solutions that don't seem intrinsic to the basic technology - Panasonic, for instance, seems to have lower-power/lower-heat 4K processing than Sony right now, for whatever reason.
I am a little puzzled why the Sigmas aren't better. I think, if we are understanding the Sony imager right, the first issue is how good and how quick the electronically-changing color filter is. If it is good (quick and color-accurate), then all you have to do, theoretically, is pull the numbers from each sensel and sum them individually for each color pass. You could do that locally for each pixel with very simple hardware in essentially no time - you just have an add-buffer. Then you have to get all the numbers off the sensor into the rest of the chip, but that only has to happen once a frame. And you don't have to do that quickly, or worry about rolling shutter, because the moment of exposure is controlled at the pixel. Global shutter is free.

The other issue is sensitivity. If we are right, then each frame is a lot of exposures in each color added up. Short exposures mean less light, even if the pixels are three times the size because there is no Bayer array. My guess is that there will be a trade-off. In low light, you can have low noise/good sensitivity (by having long individual color exposures and summing fewer of them), but then you will have problems with temporal aliasing of color (which you might be able to deal with in processing, but at a loss of resolution). Or you can have good temporal resolution by having more, shorter exposures, at the cost of greater noise/lesser sensitivity. Low light with lots of motion is going to be the worst case for this technology, if I am understanding it right.
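The sensitivity trade-off above can be put in rough numbers. Assuming each summed sub-exposure contributes one dose of read noise (all values illustrative; nothing here is from the leaked spec), SNR falls as the same total exposure is split into more pieces:

```python
# Rough model of the sub-exposure trade-off: a fixed total signal split
# into N summed sub-exposures accumulates read noise N times.
import math

def snr(total_signal_e, n_subexposures, read_noise_e=3.0):
    """SNR of N summed sub-exposures (electrons): shot noise plus N doses of read noise."""
    shot_var = total_signal_e                      # Poisson: variance = signal
    read_var = n_subexposures * read_noise_e ** 2  # one dose per readout
    return total_signal_e / math.sqrt(shot_var + read_var)

for n in (1, 10, 100):
    print(n, round(snr(1000.0, n), 1))
# 1 31.5
# 10 30.3
# 100 22.9
```

So finer temporal slicing (better motion handling) costs SNR, and the penalty grows worst exactly when the signal is small - which is why low light plus fast motion looks like the worst case for this design.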
sunyata Posted November 15, 2014

very interesting ...
GMaximus Posted November 15, 2014

But if, in 1/60 of a second, it shoots r-b-g-r-b-g-r-b-g-r-b-g... and combines the exposures, the problem goes away to a large extent.

Well, a 16 kHz mode suggests a filter which can do the r-b-g thing really, really fast.
hoodlum Posted November 15, 2014

More details have emerged. It has a native ISO of 5120 with huge DR. http://www.sonyalpharumors.com/sr4-detailed-spec-sheet-of-the-new-sony-apcs-active-pixel-color-sampling-sensor/
Quirky Posted November 15, 2014

Interesting. I can't help wondering if this new tech will eventually become the one to replace Bayer, though. I'm no expert, but my gut feeling tells me that the next 'industry standard' tech, should there ever be one, should be something simpler, electronic or not. I bet there will soon be other rivalling technologies out later on, at least on paper. I also wonder if this will just replace one set of digital artefacts (rolling shutter, interpolation artefacts) with all-new ones. Oh well, s'pose we'll see, eventually. Anyway, getting a global shutter as a bonus sounds good to me.

Meanwhile, what a nice 'leak,' whether it was deliberate or not. It'll be an abundant source of nerdytainment, and it'll keep the hardcore geeks busy for weeks, way before any actual device with the actual tech hits the shelves. As soon as someone introduces the obligatory and inevitable N-word and C-word into the debate, this thread will no doubt have 17+ pages by the time CES and CP+ take place in early 2015. I'm not bashing no-nonsense comments like those by AndrewM here; they are interesting reading per se. Just predicting the likely near future before we see the actual sensor in an actual camera, in a not too serious fashion. Carry on. ;)
Nikkor Posted November 15, 2014

21 stops DR @ ISO 5120... ehm wow...
maxotics Posted November 15, 2014

Hey, some evidence I was right in one of my first guesses (sorry to blow me own horn): "single pixel color sample by moving color filter on electrified track". At first I thought the 16-bit A/D sampling was suspicious, but now I believe they're using higher-precision readings to reduce the amount of mathematical calculation -- interesting! Andrew, you're probably right that Sony has signal-processing chip strengths here that Sigma doesn't.

Quirky, what's with all the negative waves this early in the rumor ;) Photography has ALWAYS, and will ALWAYS, be pushed ahead by crazy ideas. I can safely say that if people weren't into these challenges there would be no photography or film. Sony and Panasonic are doing really cool things from an electrical engineering standpoint. FORGET THE MARKETING. Will I need this camera? No. I don't have any interest in slow-motion video. But I might one day. You might one day. Don't get angry at the scientists because they work for money-focused corporations :) This sensor will be a specialist product.
Nikkor Posted November 15, 2014

As you can see, the DR changes (it can go to 24.2 stops) depending on the sampling speed, so my guess is that every single sampling cycle is done at a different ISO or read time, hence the DR figures and also the soft shutter Andrew linked to. Pretty amazing. The shape of the graphic also points in that direction.
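That guess is the kind of thing you can sanity-check on the back of an envelope: merging sampling cycles taken at different exposure times extends dynamic range beyond a single reading. The full-well and noise-floor numbers below are made up for illustration, and 1/256 is an arbitrary exposure ratio, not anything from the spec sheet:

```python
# Back-of-envelope model of extending DR by merging sampling cycles
# taken at different exposure times. All numbers are illustrative.
import math

def dr_stops(full_well_e, noise_floor_e):
    """Dynamic range of a single reading, in stops."""
    return math.log2(full_well_e / noise_floor_e)

single = dr_stops(30000, 3)          # one cycle: ~13.3 stops
# A second cycle exposed 1/256 as long saturates 8 stops later, so a
# merged result can gain up to 8 stops of highlight range:
combined = single + math.log2(256)

print(round(single, 1), round(combined, 1))   # 13.3 21.3
```

With these made-up numbers the merged figure lands in the same ballpark as the rumored 21 stops, which at least shows that per-cycle exposure variation is a plausible route to such a claim.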