HockeyFan12 Posted August 15, 2017

I apologize for any ad hominem attacks. That said, we're aiming to discuss the Nyquist theorem, which explains how to sample cycles of sine waves with (discrete, evenly-spaced) samples without aliasing or phase cancellation. I'm starting my argument with the simplest variable possible: the frequency of a sinusoidal cycle. From there I build to the necessary result: the sampling frequency required to record the input frequency with a recoverable trace and with no aliasing.

You're making interesting arguments, but you keep throwing in wrenches that do exist in the real world yet only serve here to confuse a very simple fundamental principle, one we should understand from the bottom up if we intend to delve further. You're throwing in square waves, Bayer interpolation, OLPFs, a rotated pixel grid, a bunch of confusing (and confused) blog posts, differences between digital downsampling methods, 0.58 for some reason, perceived and purported differences between two equally Nyquist-bound systems (digital and physical; yes, there are differences in the downsampling methods and quantization, if not in the laws that govern them)... it's all obfuscating the point, and frankly points to little understanding of it.

Yes, those factors matter in the real world. As I said, both of our models return fairly similar results in real-world practice. And yes, all the factors responsible for that result are beholden to the Nyquist theorem. Yes, I understand what you're arguing each time. But before we get into calculus we have to agree that 2+2=4. So let's address the basics first...

Your arguments don't change the fact that you're ignoring our fundamental disagreement and that you aren't answering my questions. I've been consistently responding to yours, and repeatedly posting images that show how my model works, and yet you've simply ignored them each time. It's disrespectful that I keep asking you to accept simple statements that should be givens and instead you post links to irrelevant and often misinformed blogs.

Rather than dodging the questions, can you answer these two:

To avoid aliasing when sampling in a Nyquist-bound system, do you multiply the frequency of the highest-order sinusoidal component by two (strictly more than two to guarantee a recoverable trace) once per cycle (full sine wave, or line pair), or twice?

Does Nyquist concern sine waves in the first place, or are the fundamental frequencies of square waves interchangeable in the theorem with sine waves? If they are, why are the higher-order overtones irrelevant?
jcs Posted August 15, 2017

33 minutes ago, HockeyFan12 said: I apologize for any ad hominem attacks. ...it's all obfuscating the point, and frankly points to little understanding of it...

You're doing it again. If you don't understand the computer graphics examples, perhaps try some experiments yourself in a bitmap editor and do some research into aliasing and computer graphics. A sensor is just another form of sampler vs. a pixel grid in computer graphics.

Quote: To avoid aliasing when sampling in a Nyquist-bound system, do you multiply the frequency of the highest-order sinusoidal component by two (strictly more than two to guarantee a recoverable trace) once per cycle (full sine wave, or line pair), or twice? Does Nyquist concern sine waves in the first place, or are the fundamental frequencies of square waves interchangeable in the theorem with sine waves?

All signal sampling systems are Nyquist-bound. Nyquist-Shannon sampling theory is based on Fourier analysis/synthesis, which is sums of sinusoids. Your position is that we only need two pixels to sample a line pair without aliasing. My position is that we need 3 or 4 pixels, which is matched by real-world test charts and predicted by the computer graphics tests. Additionally, Nyquist states >2x, so I'm really puzzled that you're arguing this point. I've written software to perform signal resampling for both audio and video, as well as Fourier analysis (FFT), filtering in the frequency domain, and synthesis (iFFT). How about you?

I ignore the fact that you bring up fundamentals and harmonics in every post because we can simply examine real-world results and not worry about square waves and infinite-order harmonics (especially when OLPFs are used): the real-world chart results speak for themselves. You ignore them and bring up "sinusoids are required" and "square waves make it impossible" and all sorts of reasons why I must be wrong (which seems to be your primary focus, rather than searching for the truth, which is what motivates me). If you study the computer graphics examples and aliasing theory for computer graphics, the 3x result (3 pixels per feature) will become clear, and then the 0.58x resolution factor will make sense with camera sensors (both monochrome and Bayer, when shooting black & white test charts).
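(A minimal NumPy sketch of the 2-vs-3-pixels-per-cycle question: the grating, the box-filter "sensor", and all parameters below are assumptions for illustration, not the actual test from the thread.)

```python
# Render a fine sinusoidal grating at high resolution, "photograph" it by box-averaging
# blocks of sub-samples into sensor pixels at 2 and 3 pixels per cycle, and report the
# worst-case captured contrast over all phase offsets.
import numpy as np

def sample_grating(pixels_per_cycle, phase=0.0, cycles=50, oversample=64):
    n_pixels = int(cycles * pixels_per_cycle)
    x = np.arange(n_pixels * oversample) / (pixels_per_cycle * oversample)  # position in cycles
    scene = np.sin(2 * np.pi * (x + phase))
    return scene.reshape(n_pixels, oversample).mean(axis=1)  # one box-filtered value per pixel

for ppc in (2, 3):
    worst = min(np.ptp(sample_grating(ppc, phase=p)) for p in np.linspace(0, 1, 33))
    print(f"{ppc} pixels per cycle: worst-case captured contrast = {worst:.2f}")
```

In this toy setup, the worst-case contrast collapses to essentially zero at 2 pixels per cycle (the unlucky phase) while some contrast always survives at 3, which is one way to see the >2x requirement both posters keep circling.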
HockeyFan12 Posted August 15, 2017

Your model works fairly well in real-world practice. I don't dispute that. Both of our models provide fairly similar results in most cases, actually. And no one disputes that oversampling is beneficial. I'm not arguing that 1/0.58x or 2x or 3x isn't a good rule of thumb for a sampling rate in a specific complex system. It's just that we still disagree about how you get there. I have little doubt the programs you wrote work well. But the basic math you're doing is sketchy. I have to return to that.

I've answered your questions (not always to your liking, but as honestly and analytically as I could). Why haven't you answered mine:

51 minutes ago, HockeyFan12 said: To avoid aliasing when sampling in a Nyquist-bound system, do you multiply the frequency of the highest-order sinusoidal component by two (strictly more than two to guarantee a recoverable trace) once per cycle (full sine wave, or line pair), or twice? Does Nyquist concern sine waves in the first place, or are the fundamental frequencies of square waves interchangeable in the theorem with sine waves? If they are, why are the higher-order overtones irrelevant?

Direct answers this time, please. Not "I don't care about the actual theorem because I have a rule of thumb I guessed at and that seems to work most of the time."

Lastly, if you're so evidence-focused, why did you cite 1000 LPPW as the FS100's resolution in support of your argument that it resolved about 1000 lines, and then, when I pointed out it was 1000 LPPH, insist that's what you meant all along but that there's aliasing (which both of our models predict, btw)? Evidence can be read subjectively. That's the problem. Math is simpler. Let's start with the math.

We're debating the Nyquist theorem, not the results. They speak for themselves. For the most part I agree with your statements on oversampling and its benefits. Just not how you got there. To me, that matters. Truth matters.
jcs Posted August 15, 2017

12 minutes ago, HockeyFan12 said: To avoid aliasing when sampling in a Nyquist-bound system, do you multiply the frequency of the highest-order sinusoidal component by two (strictly more than two to guarantee a recoverable trace) once per cycle (full sine wave, or line pair), or twice? Does Nyquist concern sine waves in the first place, or are the fundamental frequencies of square waves interchangeable in the theorem with sine waves? If they are, why are the higher-order overtones irrelevant?

1: Where 2 pixels make a full cycle, we need 3 pixels (2 × 1.5 = 3), ideally 4 (2 × 2 = 4), to avoid aliasing. This is exactly what Nyquist states: >2x. We can see from simple inspection that at exactly 2x we can capture the signal exactly; however, it will alias like crazy if not perfectly aligned (as expected). From simple inspection of a 3-pixel sample vs. 2, as shown in the computer graphics line test in this thread, we can see that, as per Nyquist, aliasing is eliminated at >2x (3 pixels) vs. the =2x case (2 pixels).

2: Already answered. Sums of sinusoids; no regarding square waves, and again, sensors with OLPFs aren't ever going to see square waves, and again, real-world results show this to be true.

FS100 chart: I didn't look at it closely. Stating 1000-line horizontal resolution was being way too generous (1000 lines out of 1920), and after downloading it and blowing it up in Photoshop for a later post I realized it was really only resolving at most 500 lines before aliasing. That's also when I realized all the terminology was distracting from simple ideas, and thus the trivial computer graphics examples were created.
HockeyFan12 Posted August 15, 2017

I have to get to bed, but I appreciate you answering directly. I apologize again if I get testy... I get really grumpy at night when I'm tired, and I wanted to get to the point quickly. I'll have more time to discuss this weekend.

As regards your first response, I would agree that Nyquist states >2, which means, in theory, any number >2. (In theory; practice is more complicated because there are other systems involved, including quantization and Bayer CFAs and OLPFs etc.) 3>2, so 3 fulfills the theorem, but anything between 2 and 3 does, too. (I would agree that 3 is a particularly good choice for Bayer sensors because Bayer interpolation is just over 2/3x efficient.)

I disagree that 2x (or more) sampling will alias when you have sine waves in the input domain on a monochrome sensor, though. While you might see aliasing with a square wave (as we see with the FS100 chart), if you look at the examples of sine waves in the input domain I posted a few posts above (granted, they aren't the clearest; I can try to put together a clearer animation this weekend), a sine wave can approach phase cancellation and even possibly reach pure gray at exactly 2x (and will go in and out of phase depending on how the domains line up), but it will never alias. It might turn gray or extremely low contrast. It won't induce false detail (aliasing).

I agree with you on the second point, completely. But while a high-frequency square wave passing through an OLPF does lose its highest-order harmonics, the OLPF isn't always perfectly matched to the frequency of the pixel grid, nor is it a perfect low-pass filter... it blurs all frequencies, so most manufacturers tend to make OLPFs a bit too weak. So, for academic purposes, it is worth discussing sine waves specifically, and sensors without an OLPF specifically, because then we don't introduce those extra variables. And they are variable. Which is why I'm still confident that if we take a 4K zone plate and photograph it with a >4K center crop of a Foveon or monochrome sensor (with no OLPF) we'd get no aliasing at all.

That said, I agree the 3x rule of thumb works well for Bayer sensors. The Alexa, for instance, samples 1620 pixels vertically to resolve 540 (1620/3) cycles, or 1080p vertical resolution.
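(A rough point-sampling sketch of the distinction drawn above, with parameters chosen purely for illustration: at exactly 2 samples per cycle a sine can fade toward gray depending on phase but keeps its frequency, whereas an under-sampled sine shows up at a false lower frequency, which is genuine aliasing.)

```python
import numpy as np

fs = 2.0                       # sampling rate: 2 samples per cycle of a unit-frequency sine
n = np.arange(64)
for f_in, label in [(1.0, "at exactly Nyquist"), (1.3, "above Nyquist")]:
    x = np.sin(2 * np.pi * f_in * n / fs + 0.4)        # point samples, arbitrary phase offset
    f_out = np.argmax(np.abs(np.fft.rfft(x))) * fs / len(n)
    print(f"{label}: input {f_in}, dominant sampled frequency {f_out:.2f}, "
          f"peak-to-peak {np.ptp(x):.2f}")
```

The first case comes out at the input frequency but with reduced amplitude (phase cancellation toward gray); the second comes out at a spurious lower frequency (aliasing proper).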
jcs Posted August 15, 2017

Regarding sine waves as input to a monochrome sensor with no OLPF: it will alias or not depending on the highest frequency present in the chart, as per Nyquist. You're not going to get 2K sinusoidal lines into a 2K sensor, for the simple fact that you need at least 3 pixels (vs. 2) to prevent aliasing in all but the aligned case. You can do experiments with computer graphics by taking very high-resolution sinusoids (including converting a test chart from AI/PDF to an image) and nearest-neighbor downsampling them to simulate a camera sensor.
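(A small sketch of the experiment suggested above, simplified to 1D and to an exact 4:1 integer stride so the sampling stays evenly spaced; the chart sizes and cycle counts are assumptions for illustration.)

```python
import numpy as np

hi_res, low_res = 8192, 2048                  # "chart" resolution and simulated sensor width
stride = hi_res // low_res                    # exact 4:1 nearest-neighbor decimation
for cycles in (800, 1200):                    # below and above the sensor's 1024-cycle limit
    chart = np.sin(2 * np.pi * cycles * np.arange(hi_res) / hi_res)
    peak = np.argmax(np.abs(np.fft.rfft(chart[::stride])))
    print(f"{cycles} cycles rendered -> {peak} cycles after decimation")
```

The 800-cycle grating survives intact; the 1200-cycle grating folds back to a false 848 cycles once the chart frequency exceeds half the simulated pixel count, which is the aliasing this kind of decimation "sensor" produces.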
HockeyFan12 Posted August 15, 2017

25 minutes ago, jcs said: You can do experiments with computer graphics by taking very high-resolution sinusoids (including converting a test chart from AI/PDF to an image) and nearest-neighbor downsampling them to simulate a camera sensor.

I almost entirely agree with this, but you will see more aliasing in nearest-neighbor downsampling than Nyquist predicts. The reason is that Nyquist requires evenly-spaced sampling (per the definition of the theorem), but nearest-neighbor downsampling cannot be evenly spaced unless you divide the resolution of the image by a whole integer. So for a 1080p image, the only Nyquist-applicable nearest-neighbor downsampling resolutions would be 1p, 2p, 3p, 4p, 5p, 6p, 8p, 9p, 10p, 12p, 15p, 18p, 20p, 24p, 27p, 30p, 36p, 40p, 45p, 54p, 60p, 72p, 90p, 108p, 120p, 135p, 180p, 216p, 270p, 360p, and 540p (the factors of 1080; see the quick check below).

I still argue that anything >2 fulfills the theorem... in theory. In practice it's more complicated: some systems will get by with less than 3, none will get by with less than 2, but 3 may be a good rule of thumb in the real world.
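(A quick, throwaway check of the divisor claim above; plain Python, nothing from the thread itself.)

```python
# Target heights that divide 1080 exactly keep nearest-neighbor picks evenly spaced.
divisors = [h for h in range(1, 1081) if 1080 % h == 0]
print(divisors)      # 1, 2, 3, ..., 270, 360, 540, 1080
print(1080 / 720)    # 1.5 -> fractional step, so nearest-neighbor rounding spaces samples unevenly
```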
tomekk Posted August 18, 2017

There's not really much content we can use to enjoy true 4K resolution on our beautiful 4K screens, is there? I mean, I know that "The Smurfs 2" used the Sony F65 exclusively, but that's not exactly my genre ;). What else is there, guys? Usually it's the F65 alongside something else. It would be nice to have a benchmark movie for 4K besides the Smurfs ;).
jcs Posted August 18, 2017

1 hour ago, tomekk said: There's not really much content we can use to enjoy true 4K resolution on our beautiful 4K screens, is there? ... It would be nice to have a benchmark movie for 4K besides the Smurfs ;).

Lucy and Oblivion have decent stories and epic visuals. After Earth at least looks nice. Also Tomorrowland, Ex Machina, Ted 2; more here: https://shotonwhat.com/cameras/sony-f65-camera (probably more, not a complete list). Alexa 65 should be pretty good too, as should Red and any other camera shooting at ~6.6K or above (3840/0.58 ≈ 6620).
tomekk Posted August 18, 2017

Yeah, this list is nice, but for a benchmark it would be nice to have the F65 used as the main camera, like in "The Smurfs 2". Otherwise we don't really know how much the F65 was used and in which scenes. In Oblivion, for example, it's listed as a second camera... so how much of it was really filmed on the F65? I think there's a little bit of marketing going on here.

I think I've found a good list of movies shot mainly on the F65, on Sony's website itself: https://www.sony.co.uk/pro/products/digital-cinema-4k-movie-releases#find . All the ones with the small "shot on Sony 4K" mark at the bottom left corner of the title seem to be shot mainly on the F65!
BTM_Pix Posted August 18, 2017

It was quite a coup for Sony to get The Smurfs on board, though, as the little fellas had traditionally been Polaroid ambassadors.
jcs Posted August 19, 2017

They preferred the color and skin tones of the F65 vs. Alexa and Red (and as a CGI/green-screen-heavy film, "true 4K" makes sense too): http://www.fdtimes.com/2014/07/24/luc-bessons-lucy-described-by-thierry-arbogast-afc/

Quote:

It's interesting that you chose the F65. I know a couple of rental houses in Paris bought F65s and they really couldn't rent them for a while. And then all of a sudden you started using them and now everybody wants to shoot F65. I know the F65 was not very popular a few years ago.

The F65 came out 3 years ago, but this camera was not very popular until now. Two years ago, nobody wanted to use it. From the beginning, Sony said why not use the F65, make some tests, try it. Few people knew about it before. The film business is almost like the fashion business. If the camera's not stylish, they're not going to use it.

It's like fashion. Yes, exactly. Not fashionable. Sony has to think about that. The Genesis was very ugly too.

Actually the Genesis, the Panaflex, and the F65 all kind of look the same?

The Alexa and the RED have the best designs at the moment for sure. The ARRI D-21 was not very pretty.

@Ed_David shoots F65.
jonpais Posted August 19, 2017

I can say that, having just unboxed my new 5K iMac, being able to see the 4K clips I've been shooting for years at their actual resolution for the very first time is nothing short of amazing as far as detail goes. The colors are truer on the new iMac as well. So for hobbyists like me, who just shoot for their own enjoyment, I'm liking it. Curiously, when viewing my clips on YouTube with Safari, the highest resolution available is 1440p. @jcs Or are the clips I've shot with the GH4 and G85 really 1080p? I'm so confused.
jcs Posted August 19, 2017

GH4 4K is well above 1080p: perhaps ~3K max (roughly 4K × 0.8), and beyond ~2.2K (4K × 0.58) with aliasing. 4K video on a desktop 4K monitor is totally worth it for the quality jump.
Shirozina Posted August 19, 2017

1080 on a 4K monitor also looks better than it does on a 1080 monitor.
hmcindie Posted August 20, 2017

21 hours ago, Shirozina said: 1080 on a 4K monitor also looks better than it does on a 1080 monitor.

Which is interesting, because usually they alias like hell. That aliasing creates the feeling of "sharpness" even though the monitor can't display the 4K. But if someone made a camera that aliased like that, it would be mauled as shit 8)
Shirozina Posted August 20, 2017

55 minutes ago, hmcindie said: Which is interesting, because usually they alias like hell. That aliasing creates the feeling of "sharpness" even though the monitor can't display the 4K. But if someone made a camera that aliased like that, it would be mauled as shit 8)

The extent of aliasing depends on how the smaller resolution is interpolated to the larger one. If it looks "like hell" then it's probably doing a simple pixel multiplier, converting one HD pixel into 4 identical UHD pixels. If interpolated by better methods it can look very smooth with minimal aliasing. Even HD displayed 1:1 on an HD monitor can have horrible aliasing depending on the viewing distance / magnification.
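(A 1D illustration of the point above, with a sine scanline standing in for an image row and parameters chosen only for illustration: upscale 2x by pixel doubling vs. linear interpolation and compare how much energy each method leaves above half the new band, which is where the stair-step imaging lives.)

```python
import numpy as np

hd = np.sin(2 * np.pi * 48 * np.arange(256) / 256)            # a smooth "HD" scanline (48 cycles)
doubled = np.repeat(hd, 2)                                     # simple pixel multiplier: 1 HD pixel -> 2
linear = np.interp(np.arange(512) / 2.0, np.arange(256), hd)   # a smoother interpolation

def high_band_share(signal):
    spec = np.abs(np.fft.rfft(signal))
    return spec[len(spec) // 2:].sum() / spec.sum()            # fraction of energy above half the band

print("pixel doubling:", round(high_band_share(doubled), 3))
print("linear interp: ", round(high_band_share(linear), 3))
```

The pixel-multiplied version carries noticeably more energy in the top half of the band (the imaging that reads as jaggies and false "sharpness"); the interpolated version carries much less, which is the "better methods look smoother" point.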
hmcindie Posted August 20, 2017

4 hours ago, Shirozina said: The extent of aliasing depends on how the smaller resolution is interpolated to the larger one. If it looks "like hell" then it's probably doing a simple pixel multiplier, converting one HD pixel into 4 identical UHD pixels. If interpolated by better methods it can look very smooth with minimal aliasing. Even HD displayed 1:1 on an HD monitor can have horrible aliasing depending on the viewing distance / magnification.

Sure. But people always have that "ooh, 4K on my 1080p monitor!" reaction mostly to videos that have aliasing or are converted down quickly by the YouTube player.