HockeyFan12 Posted August 13, 2017

Depends on how confident you are that you're right. I'm 100% confident that I am. If you can find a 4K SINUSOIDAL zone plate to shoot, and we can agree on what represents 2,000 horizontal line pairs, let's just make it a $500 wager. I'll bring my Foveon camera. Again, there will be some slight difference due to real-world factors like imperfectly aligned grids, quantization error, sharpening, etc. But the result will be far closer to the figure I cite than to the one you do.

It doesn't matter how many people agree with Graeme. What matters is that he's the one doing the math, while the others are confusing vertical axes with horizontal axes and confusing sine waves with square waves. Those articles are poorly researched and scattershot in their methodology. They're clickbait. Truth is truth; it doesn't matter what the majority says. That's what I'm standing up for above all else.
jcs Posted August 13, 2017

Just now, HockeyFan12 said: "Depends on how confident you are that you're right. I'm 100% confident that I am."

I think this is just entertainment now, and maybe you're joking about the whole thing. It's not about being right or confident: you can't even draw 4K lines in a 4K bitmap without aliasing, except in the perfectly aligned case. So I'm asking you to do something that is impossible: shoot a chart with a 4K sensor and resolve 4K lines without aliasing. If you don't believe me, prove it to yourself, either by searching online for evidence to support your belief or by shooting a 2K or 4K test chart and seeing for yourself.
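The "perfectly aligned" point above is easy to reproduce numerically. A minimal sketch, assuming ideal box-filter pixels over a pattern whose line width equals the pixel pitch; `box_sample` and its parameters are illustrative, not from any camera pipeline:

```python
def box_sample(offset, n_pixels=8, subsamples=1000):
    """Average an alternating black/white line pattern (value 0 or 1,
    line width == pixel width) over each pixel of a shifted pixel grid."""
    pixels = []
    for p in range(n_pixels):
        # integrate the pattern over the pixel footprint [offset+p, offset+p+1)
        total = sum(int(offset + p + (s + 0.5) / subsamples) % 2
                    for s in range(subsamples))
        pixels.append(total / subsamples)
    return pixels

print(box_sample(0.0))  # perfectly aligned: pixels alternate 0.0, 1.0, 0.0, ...
print(box_sample(0.5))  # half-pixel shift: every pixel reads 0.5 -- uniform gray
```

Aligned, one sample per line reproduces the lines exactly; shift the grid half a pixel and the same pattern integrates to featureless gray, which is the alignment caveat in numbers.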
HockeyFan12 Posted August 13, 2017

I have. (Again, read my post from a few days ago and see how those lines will cycle between high contrast and low contrast but never alias in that model, so long as they're sinusoidal and not square.) How's a $1000 wager?
jcs Posted August 13, 2017

Just now, HockeyFan12 said: "I have. How's a $1000 wager?"

Thanks, I don't need your money. Just post your 2K or 4K chart results (your own or found online).
HockeyFan12 Posted August 13, 2017

If you can find me an affordable 4K zone plate (SINUSOIDAL), I will. You're right that it's not about money, but I also don't have to prove it to myself. Like, how's $10,000? It doesn't make me any more right or wrong. This escalation is absurd, and I'm sorry I got involved with it. Might makes right is generally wrong. You're right to the extent that the money doesn't matter; it's about what's right, not about money or which sources claim what (even if the reputable ones agree with my model). I apologize for that, but why should I spend money to prove to myself what I already know? It's not on me to prove you wrong; it's on you to stop spreading misinformation.
jcs Posted August 13, 2017

Just now, HockeyFan12 said: "$10,000?"

Haha, that's some indica, brother! Maybe you should wager with Canon?
HockeyFan12 Posted August 13, 2017

Can you send me a link to Canon's white paper? If I can read what they've claimed in context, that would help. Given that you've mischaracterized and misquoted every source you've presented so far, excepting those that are obviously wrong, it makes sense that I'd ask. Twice you posted square wave charts instead of sine waves from sources that had both. There's a reason they had both... And it is pretty good. But clearly not mellowing me out enough.
jcs Posted August 13, 2017

Do your own homework, brother. All you need are 2K and/or 4K test charts.
HockeyFan12 Posted August 13, 2017

I'll look into it. I actually would be curious to give this a try, and I apologize for getting worked up earlier; it's been a rough week, and that's made me a little touchy. I'm as convinced as ever that I'm right, but I do feel foolish for getting heated. Anytime anyone mentions money online, it's as dumb as mentioning Hitler or resorting to ad hominem attacks. Pointless escalation. My bad on escalating that one. Maybe I was just trying to add a little more to my down payment. But that wouldn't be fair. Now that I see Beverly Hills in your profile, I'm starting to think you could escalate the wager a lot higher than me without suffering the consequences, but it's still just a shitty way of escalating an argument. Sorry about all that.

That said, I'll see if I can rent a 4K zone plate or visit Red headquarters or something. I don't want to spend hundreds of dollars to win an argument online when I only have to prove something to you (those actually designing sensors are already following my model), but on the other hand, I'm curious. Can we agree to use a sine wave plate rather than a square wave plate? And can we agree that a blurry image doesn't count as aliasing, only false detail does, and that slight false detail specifically from sharpening isn't aliasing either? The real difficulty is that the best thing I have is a Foveon camera (I don't own an M Monochrom or anything), Sigma's zero sharpening setting still has some sharpening, and one-pixel-radius sharpening looks like slight false detail at one pixel. So the result will be a little funky due to real-world variables. But I still contend that the result will correlate far more closely with my model: a properly framed 4K sinusoidal zone plate won't exhibit significant aliasing when shot with the 4K crop portion of a Foveon camera, even if the full resolution isn't clearly resolved when the two are out of phase.

But we have to go with a sinusoidal zone plate (which is unfortunately the really expensive and scarce kind; binary is cheaper and far more common) and recognize that if it's fully out of phase, the result will be near-gray. That aside, I would be genuinely curious to put a 4K zone plate in front of a 4K Foveon crop. But we'd have to agree on SINUSOIDAL lines (halves of full sinusoidal cycles). Even if it's just a gentleman's bet. Let's agree on a sinusoidal zone plate first. And I apologize again for getting worked up. That was really childish of me. It's been a bad week, and I'm sorry about that.
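The out-of-phase gray-out described above can be sketched numerically, assuming an ideal point-sampled sine (the function name and parameters here are illustrative, not from any test procedure):

```python
import math

def sampled_amplitude(freq, sample_rate, phase, n=1000):
    """Largest absolute value observed when point-sampling
    sin(2*pi*freq*t + phase) at the given sample rate."""
    return max(abs(math.sin(2 * math.pi * freq * k / sample_rate + phase))
               for k in range(n))

# Sampling at exactly 2x the frequency: the result depends entirely on alignment.
print(sampled_amplitude(1.0, 2.0, math.pi / 2))  # samples land on the peaks: 1.0
print(sampled_amplitude(1.0, 2.0, 0.0))          # samples land on zero crossings: ~0 (gray)
# Sampling just above 2x: some signal always survives, because the sample
# points drift through every phase of the wave.
print(sampled_amplitude(1.0, 2.1, 0.0) > 0.9)
```

At exactly 2x, the recovered amplitude swings between full and zero with alignment; above 2x, a trace of the signal is always preserved, which is the distinction being argued over.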
Shirozina Posted August 13, 2017

Even if you deliberately choose to 'shoot' at 1080, your camera is very likely capturing at a much higher resolution and converting to 1080p internally, so you are shooting 4K whether you like it or not!
jcs Posted August 13, 2017

Here's a test chart for the FS100, a 1920x1080 camera. Looks like it does right around 1000 TV lines. Funny that this HD test chart caps out at 1200; it might be hard to find an HD test chart that goes up to 2000.

Four years ago I looked at this issue and had pretty much forgotten about this post: Bayer sensors produce only 1/2 the stated resolution (based on real-world test data): http://lagemaat.blogspot.com/2007/09/actual-resolution-of-bayer-sensors-you.html
HockeyFan12 Posted August 13, 2017

Sorry again about being pissy yesterday; I was having a rough day for unrelated reasons. Anyhow, a few things:

1) That chart still looks like square waves to me, so you will see more aliasing but also more apparent resolution (false detail). Can we at least agree on using a sinusoidal plate, as I asked above? I keep coming back to your insistence on binary plates as the biggest source of your misunderstanding. Let's address that first. Can we agree on using a sinusoidal zone plate? (Read Graeme's post again, where he discusses why a sine wave plate correlates properly with Nyquist, although I've mentioned why a dozen times before: square waves have higher-frequency overtones, and Nyquist concerns the highest frequency.) As I said, I'd buy a sine plate, because I am curious to try this on my own, but they're really expensive! If I could rent one I would give it a try, but I'm not inclined to spend $700 to prove what I already know: https://www.bhphotovideo.com/c/product/979167-REG/dsc_labs_szs_sinezone_test_chart_for.html (Plus proper lighting and a stand... anyhow, more than I'd like to spend.)

2) I agree that the chart shows the camera resolving about 1000 lines per picture height. Yes, TV lines are normally measured across the picture width unless otherwise specified, or at least that's what I grew up thinking in the SDTV days, but that chart clearly specifies lines per picture height. The confusing thing is that there is some minor and expected aliasing due to the test chart's lines being binary (square waves). If we wanted a truer result, as I mentioned above, we'd use a sinusoidal zone plate, and we should see a softer image, perhaps mostly gray mush if the sensor and chart are out of phase, but without even a trace of aliasing at 1000 LPPH, no matter how the two are aligned, exactly as the Nyquist theorem predicts.

3) That article on sensor design doesn't even mention Nyquist. What it mentions is that Bayer sensors are only about 70-80% efficient in each axis because of the Bayer CFA and interpolation. No one disagrees with this. Assuming a Bayer sensor is 70% efficient (0.7x) in each axis, and given that 0.7 x 0.7 is about 0.5, we can indeed say a Bayer sensor resolves about "half the megapixel count," or a little better if it uses a more efficient algorithm, as recent ones do. (It's an old article.) Just as the article states. And that's consistent with Red's claims of "3.2K" of real resolution for a 4K chip, which would be 80% linear efficiency, or 64% of the stated megapixels. All due to Bayer interpolation. Your claim is that a monochrome sensor resolves about a quarter of the megapixel count (0.5 x 0.5), which does not follow from anything mentioned in that article.
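The arithmetic in that last point, as a quick sketch (the 0.7 and 0.8 per-axis figures are the assumptions discussed above, not measurements of any specific sensor):

```python
def resolved_megapixel_fraction(per_axis_efficiency):
    """Fraction of a sensor's stated megapixel count actually resolved,
    assuming the same linear (per-axis) efficiency in both dimensions."""
    return per_axis_efficiency ** 2

print(resolved_megapixel_fraction(0.7))  # ~0.49: "half the megapixel count"
print(resolved_megapixel_fraction(0.8))  # ~0.64: the "3.2K from 4K" claim, linearly
```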
jcs Posted August 14, 2017

@HockeyFan12 hey man, no worries, all good.

Remember, there are no square waves hitting a sensor with an OLPF. Graeme's statement is flawed, and this is the root flaw: "If we think of a wavelength as a pair of lines," and the idea that you only need one sample per line. You can't count a two-line/two-pixel pair as a wavelength and call that Nyquist 2x sampling. Each pixel represents a sample, and we need at least two pixels (really 3) to represent a line without aliasing. Ask Graeme to provide test chart results showing a 4K sensor resolving 4K lines without aliasing (or a 2K sensor resolving 2K lines without aliasing). Why do you think his statement isn't replicated anywhere else? Have you found any 2K/HD test charts that go above 1200 lines? If his statement were correct, why does the C300 I resolve only 1K lines? What about the FS100 and F3, also resolving only around 1K lines? If his statement were correct, they'd be resolving 2K lines, right?

Consider the possibility that you're hung up on Graeme's statements regarding sinusoids, and that his statements aren't accurate given real-world test chart results alone. Also, think about how many end-result pixels are needed to represent sinusoids vs. lines: you need gradient pixels to form the sinusoid, right? How many horizontal pixels do you need to form a sinusoid? Is it more than 1 pixel wide? If so, how is it possible to sample 2K into 2K, or 4K into 4K, without aliasing?

That article reflects what I found when studying camera test charts. For example, the 8K F65 provides only 4K of resolution without aliasing. This is Sony's position as well. Note Canon states ~1000 TV lines for a 4K sensor (really 2K, resolving 1K without aliasing, as per Nyquist).

Again, here are actual camera resolutions, performing as per Nyquist (not Graeme's invalid statement), from https://www.provideocoalition.com/nab_2011_-_scce_charts/: "The chart lists line pairs per sensor height; the more traditional video measurement is 'TV lines per picture height', or TVl/ph. A 'TV line' is half a line pair, so double the numbers shown to get TV lines." Which means, in no uncertain terms, horizontal resolution.

If you're still not convinced, you can download and print your own charts for free: http://www.bealecorner.org/red/test-patterns/ (print multiple sheets and assemble them if your printer resolution isn't high enough).
HockeyFan12 Posted August 14, 2017

11 hours ago, jcs said: "@HockeyFan12 hey man, no worries, all good ... What about the FS100 & F3 also only resolving around 1K lines? If his statement was correct they'd be resolving 2K lines, right?"

As I mentioned above, that FS100 chart specifically says lines per picture height (LPPH). Yes, I think in the SDTV days horizontal lines of resolution was the standard measure (LoHR), but that chart SPECIFICALLY says lines per picture height. The graph you posted above likewise states 540 vertical line pairs for the F3 (aka 1080 lines, aka full 1080p resolution).

Just out of curiosity, I aligned the 1000-line chip in that FS100 chart you posted with a set of alternating one-pixel-width lines. They should line up closely but not perfectly (1000 lines vs. 1080 lines). And they do:

Clearly this image shows that the chart represents alternating lines of approximately one-pixel frequency on the sensor. Yes, there's some aliasing, which is expected. Yes, it's approximate (1000 vs. 1080p, and not perfectly aligned). But it's clear that these lines are closer to one pixel in height than two. Thus, we're seeing about 1000 lines per picture height, as the chart plainly reads (LPPH). Which would mean the camera is resolving close to full 1080p resolution, not half that. Just as my model would predict!

I do agree with you that the OLPF gets rid of most of the higher-order harmonics, but we both know very well it doesn't get rid of all of them. OLPFs are not always strong enough (another argument for oversampling). That's why I suggested a center crop from a Foveon sensor (full-raster RGB with no OLPF) and a 4K sine zone plate as a good real-world test case, without the messy variables of Bayer interpolation and an OLPF.

As for why Graeme's explanation is not more widely circulated online, I don't know. I'm genuinely a bit confused about that. But I've learned not to trust everything I read posted by armchair experts on blogs.
HockeyFan12 Posted August 14, 2017

Here's an interesting layman's description of higher-order harmonics in square waves (one that even I could understand, and I'm not very good at math): http://www.slack.net/~ant/bl-synth/4.harmonics.html

Let's agree on a common definition of Nyquist if we can. I propose this one, though I want to correct one minor error in it: https://www.its.bldrdoc.gov/fs-1037/dir-025/_3621.htm (even there they publish an error: it should not read "equal to or greater than"; equal to guarantees no aliasing, but the rate must be greater than twice the frequency to preserve a recoverable signal, however potentially faint, because otherwise the samples can theoretically land exactly out of phase, resulting in a gray signal).

So, looking at the highest-frequency components of a system:

If we have a 1 Hz sine wave, we need >2 samples per second to represent it (>2 samples per cycle).

But what if we have a 1 Hz sine wave with a 3 Hz overtone at 1/3 the amplitude? We need >6 samples per second (3 Hz is the highest frequency; >2x3 = >6).

What if we add a 5 Hz overtone at 1/5 the amplitude? We now need >10 samples.

How about 1 Hz + 3 Hz (1/3 amplitude) + 5 Hz (1/5) + 7 Hz (1/7)? >14 samples...

Add a 9 Hz overtone at 1/9 the amplitude? >18 samples...

And if we keep adding odd overtones this way forever? We need infinite samples, because we've hit an infinite frequency (granted, at an infinitely low amplitude). We've also just constructed a 1 Hz square wave.

Hence my contention that we must discuss sine waves, not square waves. I contend that the effective frequency of a square wave is infinite, at least as concerns Nyquist.
jcs Posted August 15, 2017

The major issue here is that the industry is a mess of confusing terms regarding resolution. So let's forget about external terms and keep it simple, discussing the concepts and real-world results. What Graeme said is true only for the perfectly aligned case, as noted and agreed previously. As soon as we move slightly out of perfect alignment, we get major aliasing. Nyquist says >2x, and a line pair is exactly 2x. What's the next value? 3x; but that's not even/symmetric, so the next is 4x (computer graphics operations tend to fall on even or power-of-two boundaries for performance reasons).

Also note the FS100 resolution chart shows similar horizontal and vertical resolution: it's a measure of sensor photosites (becoming pixels) per physical measure of space (e.g., pixel micrometers per physical mm): https://www.edmundoptics.com/resources/application-notes/imaging/resolution/. So it's not the total sensor width or height; it's the photosite density in space relative to the image-space resolution on the chart/object.

The FS100 chart crop you posted is aliasing like crazy, showing high-frequency mush along with lower-frequency folding harmonics (4 visible bands). Where does aliasing start? That is, up to what point can we represent clean black-and-white lines, and when does it get messy and start to alias? At 400, we see nice edge-rounded clean lines (antialiased; the OLPF rounding the square wave). By 600, aliasing has started (actually it starts just past 400, perhaps at 540?). At 600-800, aliasing is severe; we can no longer get clean lines, and by the end of 800 it's highly aliased mush, approaching extinction of detail at 1000.

Examining horizontal resolution: again, at 400 we've got clean lines (2 black pixels with 1 gray pixel on either side, 1 or 2 white pixels, and so on), and after 400 it starts to alias and we can't see clean lines anymore; by the end of 600 it's starting to blend to gray, and after 800-1000 it's gray mush, diagonal fat aliasing, and very-low-contrast high-frequency aliased lines as we approach extinction of detail. Thus the horizontal and vertical resolvable detail are about the same: just past 400 lines, perhaps 500 being generous. The total output pixel width and height don't really matter. We could have a 4:3 sensor or 2.35:1, etc.; it wouldn't matter for the chart. It's the density of the photosites, not the final pixel width and height, that matters. So in the real world it's clear we can never resolve single-pixel alternating lines of resolution (it's not even possible to draw them in a bitmap, as we'll discuss below). This is a Bayer pattern vs. a monochrome sensor, so that's part of the issue (but not as much as you'd think, since we are only working with black and white, and all pixels contribute to luminance; note color aliasing starts after 400 as well).

So, we agree that an HD sensor can capture 1920x1080 pixels without aliasing in the perfectly aligned case, right? (Theoretically; it would be nice to see a test chart showing this possibility.) The debate is: what do we need for alias-free capture? Nyquist says >2x. We must tune the OLPF to filter out lines/detail smaller than one pixel (line) width. Though in practice, as with audio capture, we get much better results with higher oversampling (analog low-pass filters are far from perfect, including OLPFs), so the finest real-world feature should be blurred to a diameter >1 photosite/pixel width, so it can be represented by 2 pixels minimum. In practice this isn't really enough, and ideally 3 or more pixels are needed (as seen above with the FS100 chart). At 4 pixels, we're at the second divide-by-2.

Take the alternating black and white lines you drew and rotate them a couple of degrees. Look what happens to the lines: a random jumble of pixels, massive aliasing! We can use your favorite topic, sinusoids, to figure out how to create alternating lines that can be rotated and moved around the pixel grid without aliasing (the same problem a sensor has capturing the real world). One-pixel-wide alternating black and white lines (64 black lines): now let's rotate them 44.4 degrees: we get massive aliasing, and lower-frequency folding harmonics similar to those on the FS100 test chart. Now let's draw lines with one center pixel and an 'antialias/sinusoid-approximation' pixel on either side, so each line now uses 3 pixels instead of 2 to represent the finest detail (>2x vs. =2x above, per Nyquist): 43 black anti-aliased (sinusoid-approximated) lines. And let's rotate those 44.4 degrees: no more aliasing, as expected. 64/43 = 1.49 ≈ 1.5, and 64/1.5 = 43. So we can divide by 1.5 to get the anti-aliased line resolution for line triplets (vs. line pairs) in this purely computer-graphics example. In the real world, divide-by-2 works very well, e.g. the 8K F65 creating antialiased 4K. And again, as resolution drops from 8K down toward 4K, as shown in the test charts, aliasing increases as predicted.

In summary: Nyquist states >2x; 3x works well in a computer graphics test (/1.5); and 4x (/2) works well in the real world, as shown in test charts. Measuring many real-world test charts for Bayer sensors, the author of this article found .58x before aliasing, pretty much what the 3-pixel line test predicts (where 4 pixels would be even better): 1/1.5 = .67, 1/2 = .5, (.67 + .5)/2 = .58! http://lagemaat.blogspot.com/2007/09/actual-resolution-of-bayer-sensors-you.html. So .58x is an excellent predictor of the alias-free resolution possible with a Bayer sensor.
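The closing arithmetic above, spelled out as a one-screen sketch of the post's own numbers:

```python
cg_factor = 1 / 1.5        # 3-pixel (triplet) lines in the computer graphics test
chart_factor = 1 / 2       # 4x sampling observed on real-world test charts
bayer_factor = (cg_factor + chart_factor) / 2
print(round(bayer_factor, 2))  # 0.58, matching the measured Bayer factor cited
```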
Here's another test showing we need at least 3x real-world sampling to eliminate aliasing, as predicted by the computer graphics test: http://www.clarkvision.com/articles/sampling1/
HockeyFan12 Posted August 15, 2017

I totally agree about the confusing terminology being a big issue. There are so many technical committees trying to clarify resolution in obscure technical terms, and marketing departments trying to obfuscate resolution in clear marketing terms, that it's a wonder anyone can agree on anything. Bloggers and academics who usually don't know what they're talking about in the first place aren't helping either!

But Graeme's model actually doesn't produce any aliasing, so long as we use sine waves, which he does, and which Nyquist definitively concerns. I hope my post above clarifies why using sine waves, and not binary/square wave charts, is essential if we're following the Nyquist theorem! See this example, which I posted before, modeled as a more accurate and relevant version of Alister Chapman's claims:

Yes, a sine wave signal can be grayed out almost completely if it is sampled exactly out of phase. And no, you're not going to get "tack sharp" black and white lines from a high-frequency sine wave chart as you would with square waves (I fudged that a bit in the above graphic; the first chart should probably have a slightly lower-contrast output). You'll only get a reasonably sharp reproduction when it's nearly exactly in phase, and a fuzzy, near-gray one when it's not. But Nyquist concerns avoiding aliasing and maintaining a recoverable signal; it doesn't say a thing about MTF or amplitude, except that amplitude doesn't hit zero as long as you keep that ever-important "greater than" symbol intact. As I illustrate above, there's no aliasing anywhere under Graeme's model. The signal can get grayed out when it's out of phase and we're sampling at exactly 2x... but the "greater than" symbol guarantees that at least a trace of the original signal is preserved, however faint. So while I agree that if you sample at twice the frequency and sample perfectly out of phase you will get a completely gray image, I maintain that it will still have no aliasing. (As Nyquist predicts.) Yes, also with no signal. (As Nyquist predicts; we need that "greater than" sign to guarantee a signal.) But if you do sample at greater than twice the frequency, you will guarantee at least a very faint trace of a signal. So if you're sampling sine waves, there won't be any aliasing so long as you sample at 2x the frequency, and there will be a recoverable signal so long as you sample at >2x the frequency. And yes, that's two samples per sine wave (line pair or cycle), not two samples per line (half of a sine wave, or half of a cycle). (Admittedly, that faint trace might be obscured to zero by quantization error and read noise in real-world use. Look at ultra-high-ISO images: the resolution suffers! I never said oversampling wasn't beneficial in real-world use, just that the math holds up with Graeme's model. As I've illustrated, it does: no aliasing, and some trace of a signal.)

I maintain that the "1000 line" portion of the chart is repeating in approximately single-pixel lines, and if you pixel peep, you'll see that it incontestably is. I also maintain that the FS100 chart specifically concerns vertical lines, and it does. That's why it's labeled LPPH (lines per picture height) and why the frequency of lines is the same on both axes. The frequency of those lines would be stretched 1.78x horizontally to match the 16:9 aspect ratio if it were measuring each axis separately. It isn't. We're seeing roughly 1920x1080 of resolution out of the FS100 (as my model predicts). The resolution looks pretty similar to my eye on both axes. Do you really think this is a 1080x1080-pixel image? Or do both axes look roughly equally sharp? They do to me. It looks like a 1920x1080 image to me, equally sharp in each axis.

But I have to admit that your second claim is incontestably true. There's a very significant amount of aliasing on the FS100 chart at 1000 vertical lines, on both axes. I'm in full agreement there. I won't deny what the image shows. I will only contend that the aliasing is there mostly because it's a square wave chart. As I keep repeating, there are infinite higher-order harmonics on square wave/binary charts, and Nyquist concerns the highest-order frequency of the system. If it were a sine wave chart, it would be softer, but with no aliasing. If you have any doubt, read my post above this one, and you can see why a square wave chart is so sure to produce aliasing in these instances. With square waves of any fundamental, there are infinite higher-order harmonics (at admittedly decreasing amplitude), and in real-world use, those are usually more than a standard OLPF will filter out at higher fundamental frequencies. I maintain, with complete confidence, that if they had used a sine wave plate, the results would be fuzzier at the higher frequencies, perhaps approaching gray mush at the 1000 LPPH wedge and certainly reaching total mush at 1200 LPPH... but there wouldn't be any aliasing. Thus fulfilling Nyquist.

It is confusing. The model I suggest (not dividing by two the extra time, only using sine wave charts) and the model you suggest (dividing by two the additional time, using square wave charts) actually produce rather similar real-world results. But my model is mathematically sound and correlates closely with real-world examples. Yours divides by two an extra time, assumes harmonics don't count (of course they do), and predicts a fuzzier image than we see.

Your computer graphics examples are interesting... but I don't see how they're relevant. Let's focus on what's simple for now, rather than throwing obfuscating wrenches in as a way to distract from a very simple concept. (What's the frequency of a rotated sine wave, anyway? Can you provide an equation for it? I'm guessing it might have something to do with root 2, but I don't know for sure. It's not the same as the original wave, or else the F65's sensor grid wouldn't be rotated 45 degrees to reduce aliasing, and you wouldn't be getting aliasing in your example.) Furthermore, those sine waves look suspiciously high contrast (like anti-aliased square waves, or with sharpening or contrast added beyond the quantization limit of the system). Whether that's the case or not, I suspect you introduced this example just to confuse things and deflect your uncertainty. It complicates rather than clarifies. Not the goal. Let's focus on simple examples and take a bottom-up approach, rather than throwing in extra variables to obfuscate (that's what marketing departments do; engineers follow Gall's Law). Furthermore, both real life and computer graphics are Nyquist-bound in the same way. Both are sampling systems. So choosing different oversampling factors for each seems... very wrong.

Unfortunately, I'm busy with work for the rest of the week and won't be able to return to this discussion until the weekend. But I would be interested in continuing it and will have time then. Before then, can we agree on two things? They shouldn't be controversial:

First, Nyquist concerns the highest-frequency sine wave in the system, not the fundamental frequency of square waves. Square waves are of infinite frequency at their rise and fall, and thus their highest frequency as concerns Nyquist isn't their fundamental, but whatever the highest frequency is that the OLPF lets through. See my prior post. Simply stated, let's talk sine waves, not square waves. Or at least accept that square waves are of infinite frequency at their rise and fall, or are at the frequency cutoff of the OLPF. The Nyquist theorem concerns sine waves by definition. Can we at least agree on this? It should be a given in this discussion.

Secondly, if we take a 4K sinusoidal zone plate and photograph it with a 4K center crop of a Foveon sensor and there's no aliasing horizontally or vertically, my model is correct. Can we agree on that? (I believe you stated this as your condition for agreeing with my model earlier, so this shouldn't be controversial.)
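On the rotated-sine-wave question a few paragraphs up: a standard result (offered here as a sketch, not as either poster's position) is that rotating a 2D sinusoidal grating leaves its overall spatial frequency unchanged, while the component measured along each axis becomes f·cos(theta) and f·sin(theta); at 45 degrees both drop to f/√2, which is where a root-2 factor does appear.

```python
import math

def axis_frequencies(f, theta_deg):
    """Spatial-frequency components along x and y of a 2D sinusoid of
    frequency f rotated by theta_deg (its wavevector is (f*cos, f*sin))."""
    t = math.radians(theta_deg)
    return f * math.cos(t), f * math.sin(t)

fx, fy = axis_frequencies(1.0, 45)
print(fx, fy)               # both ~0.7071 = 1/sqrt(2)
print(math.hypot(fx, fy))   # ~1.0: the overall frequency is preserved under rotation
```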
jcs Posted August 15, 2017

Good news for the Alexa 65: 3840/.58 = 6620, so the Alexa's 6560-photosite width should provide decent anti-aliased 4K.
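The check above in one line, under the 0.58x Bayer factor assumed in the earlier posts:

```python
# Sensor photosites needed for alias-free 4K (3840-pixel) delivery, per the 0.58x factor.
required_width = 3840 / 0.58
print(round(required_width))  # 6621, close to the Alexa 65's 6560-photosite width
```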
HockeyFan12 Posted August 15, 2017

I haven't used an Alexa 65 yet. But I fully agree that your 2x oversampling factor for Bayer sensors is ideal, if only so we can guarantee full raster in R and B. But that's obfuscating by adding additional factors (Bayer interpolation) instead of starting bottom-up and following Gall's Law (the engineer's Bible, I always assumed?) to get to the root of the issue. Can we agree that:

7 minutes ago, HockeyFan12 said: "Nyquist concerns the highest-frequency sine wave in the system, not the fundamental frequency of square waves. Square waves are of infinite frequency at their rise and fall, and thus their highest frequency as concerns Nyquist isn't their fundamental, but whatever the highest frequency is that the OLPF lets through. See my prior post. Simply stated, let's talk sine waves, not square waves. Or at least accept that square waves are of infinite frequency at their rise and fall, or are at the frequency cutoff of the OLPF. The Nyquist theorem concerns sine waves by definition. Can we at least agree on this? It should be a given in this discussion. Secondly, if we take a 4K sinusoidal zone plate and photograph it with a 4K center crop of a Foveon sensor and there's no aliasing horizontally or vertically, my model is correct. Can we agree on that? (I believe you stated this as your condition for agreeing with my model earlier, so this shouldn't be controversial.)"
jcs Posted August 15, 2017

@HockeyFan12 my post with the computer graphics examples uses exactly the same Nyquist sampling values that match real-world test charts and Bayer sensors. The computer graphics results with 3-pixel lines (vs. 2) do the same thing as a sensor sampling the real world. Suggesting that I included the computer graphics results in an attempt to obfuscate (lie) is a projection from within yourself, is an ad hominem, and is disrespectful. The computer graphics examples show an expected result based on Nyquist: 3 pixels required vs. 2. This prediction is based on Nyquist's >2x, and testing with computer graphics matches precisely what was measured from real-world test charts. Averaging 3x and 4x (/1.5 and /2), or (1/1.5 + 1/2)/2 = .58, matches the .58x real-world measured Bayer actual-resolution factor exactly! Nyquist >2x theory matches the computer graphics experiment (3 pixels vs. 2) and matches real-world test charts with Bayer sensors. This is math and science working as expected!

This post and the prior post with the computer graphics examples are all anyone needs to prove to themselves how Nyquist works in the real world with camera sensors. Bayer sensors can only provide .58x resolution without aliasing, which means 3-4x sampling instead of 2x, exactly what Nyquist states. A Bayer sensor shooting black and white will behave like a monochrome sensor. The Clark example above gets the same results from a scanner (3x Nyquist, or 6x; no details on the scanner, but the computer graphics examples are trivially clear: 3 pixels vs. 2).