KnightsFan
Everything posted by KnightsFan
-
I just got a 32" 4k monitor, and the difference between 4k and 1080p (native or upscaled) is immediately apparent at normal viewing distance. With uncompressed RGB sources free of camera/lens artifacts, such as video games or CG renders, there's absolutely no comparison; 4k is vastly smoother and more detailed. I haven't tried any AI-based upscaling, nor have I looked at 8k content. I get your point, but that's an exaggeration. Even my non-filmmaker friends can see the difference from 3' away on a 32" screen. Though it only holds for quality sources, you are exactly right that bit rate is a huge factor. If the data rate is lower than the scene's complexity requires, the fine detail is estimated/smoothed away, effectively lowering the resolution (rough numbers below). I haven't done A/B tests, but watching 4k content on YouTube I don't get the same visceral sense of clarity that I do with untouched XT3 footage in Resolve, or uncompressed CG.
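To put rough numbers on the bit rate point, here's a back-of-the-envelope comparison. The bitrates are illustrative, not measurements, though 400 Mbps is the XT3's top H.265 setting:

```python
# Average compressed bits available per pixel per frame (illustrative numbers).
def bits_per_pixel(bitrate_mbps, width, height, fps):
    return bitrate_mbps * 1e6 / (width * height * fps)

# Typical 4k streaming vs. a high-bitrate camera codec, both UHD 24p:
print(bits_per_pixel(45, 3840, 2160, 24))   # ~0.23 bpp (streaming-ish)
print(bits_per_pixel(400, 3840, 2160, 24))  # ~2.01 bpp (XT3 400 Mbps H.265)
```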
-
Yes, it is, which is why we get so many color problems from Bayer sensors! I suspect I have less development experience than you here, but I've done my share of fun little projects procedurally generating imagery, though usually in RGB. My experience with YUV is mostly just reading about it and using it as a video file format. I'll dig a little more into the images. From my initial look, I don't see any extra flexibility in the Raw vs. uncompressed YUV (assuming that the Raw will eventually be put into a YUV format, since pretty much all videos are). The benefit I see is just the lack of compression, not the Raw-ness. I'm not saying the 8 bit raw is bad, but uncompressed anything will have more flexibility than anyone needs.
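For reference, a minimal sketch of the RGB-to-YUV step in question, using the standard BT.709 coefficients; treat it as an illustration, not any particular camera's pipeline:

```python
# Minimal RGB -> YCbCr (BT.709, full range) conversion, the kind of
# transform debayered footage goes through on its way to a YUV codec.
def rgb_to_ycbcr709(r, g, b):
    """r, g, b in [0, 1]; returns (y, cb, cr) with y in [0, 1], cb/cr in [-0.5, 0.5]."""
    y  = 0.2126 * r + 0.7152 * g + 0.0722 * b
    cb = (b - y) / 1.8556
    cr = (r - y) / 1.5748
    return y, cb, cr

print(rgb_to_ycbcr709(1.0, 0.0, 0.0))  # pure red -> (0.2126, -0.1146, 0.5)
```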
-
If we are talking fidelity, not subjective quality, then no, you can't get information about the original scene that was never recorded in the first place. I'm not sure what you mean by interpolation in this case. Simply going 8 bit to 12 bit would just remap values. I assume you mean using neighboring pixels to guess the degree to which the value was originally rounded/truncated to an 8 bit value? It is possible to make an image look better with content-aware algorithms, AI reconstruction, CGI etc. But these are all essentially informed guesses; the information added is not directly based on photons in the real world. It would be like having a book with a few words missing. You can make a very educated guess about what those words are, and most likely improve the readability, but the information you added isn't strictly from the original author. Edit: And I guess just to bring it back to what my post was about, that wouldn't give an advantage to either format, Raw or YUV.
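A trivial sketch of why the 8-to-12 bit remap adds no information (made-up sample value):

```python
# Promoting an 8 bit sample to 12 bits only rescales it; the precision
# lost at quantization time is gone for good.
v8 = 137        # an 8 bit code value
v12 = v8 * 16   # rescaled to the 12 bit range (0-255 -> 0-4080)
# Only 256 of the 4096 possible 12 bit levels can ever appear this way;
# the in-between levels were never recorded.
print(v8, v12)
```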
-
I wasn't talking about 12 bit files. In 8 bit 4:2:0, each set of 4 pixels has 4x 8 bit Y, 1x 8 bit U, and 1x 8 bit V. That averages out to 12 bits/pixel (spelled out below), a simplification to show the variation per pixel compared to RGGB raw. It seems your argument is essentially that debayering removes information, therefore the Raw file has more tonality. That's true--in a sense. But in that sense, even a 1 bit Raw file has information that the 8 bit debayered version won't have, but I wouldn't say it has "more tonality." I don't believe this is true. You always want to operate at the highest precision possible, because it minimizes loss during the operation, but you never get out more information than you put in. It's also possible we're arguing different points. In essence, what I am saying is that lossless 8 bit 4:2:0 debayered from 12 bit Raw in camera has the potential* to be a superior format to 8 bit Raw from that same camera. *and the reason I say potential is that the processing has to be reasonably close to what we'd do in post, no stupidly strong sharpening etc. About this specific example from the fp... I didn't have that impression, to be honest. It seems close to any 8 bit uncompressed image.
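Spelling out that arithmetic:

```python
# Bits per pixel for 4:2:0 chroma subsampling.
def bpp_420(bit_depth=8):
    # Per 2x2 block: 4 luma samples + 1 Cb + 1 Cr, all at the same depth.
    samples_per_4_pixels = 4 + 1 + 1
    return bit_depth * samples_per_4_pixels / 4

print(bpp_420(8))  # 12.0 bits/pixel for 8 bit 4:2:0
# RGGB raw stores exactly one sample per photosite:
print(8 * 1)       # 8 bits/pixel for 8 bit Bayer raw
```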
-
Previewing in Resolve should not take up more file space unless you generate optimized media or something, though running Resolve will use memory. Have you tried Media Player Classic? That's my main media player, since unlike VLC it is color managed. I don't know how the performance compares. I am actually surprised that VLC lags for you. What version do you have? Have you checked whether VLC runs your CPU to 100% when playing those files?
-
Downsampled 4k looks way better than native 4k.. or not!
KnightsFan replied to ND64's topic in Cameras
Kasson's tests were in photo mode, not video. It's not explicitly clear, but I think he used the full vertical resolution of the respective cameras. He states that he scaled the images to 2160 pixels high, despite the horizontal resolution not being 3840.
-
I agree. Part of the problem with gimbals--especially low-priced electronic ones--is that they are so easy to get started with that many people use them without enough practice. Experienced steadicam operators make shots look completely locked down.
-
Downsampled 4k looks way better than native 4k.. or not!
KnightsFan replied to ND64's topic in Cameras
I don't know if I agree with that. The A7S and A7SII at 12MP had ~30ms rolling shutter, and the Fuji XT3 at 26MP has ~20ms. 5 years ago you'd be right, but now we really should expect lower RS and oversampling without heat issues. Maybe 26MP is more than necessary, but I think 8MP is much less than optimal. I don't know if it was strictly the lack of oversampling or what, but to be honest the false color on the A7S looked pretty ugly to me. And I'm talking about the real world 4k downsampled shots.

I think it's a question of how many photons hit the photosensitive part. Every sensor has some amount of electronics and stuff on the front, blocking light from hitting the photosensitive part (BSI sensors have less of it, but some), and every sensor has color filters which absorb some light. So it depends on the sensor design. If you can increase the number of pixels without increasing the photosensitive area blocked, then noise performance should be very similar.
-
It can be, yes. I was responding to the claim that the fp's 8 bit raw could somehow be debayered into something with more tonality than an "8 bit movie." I suspected that @paulinventome thought that RGGB meant that each pixel has 4x 8 bit samples, whereas of course raw has 1x 8 bit sample per pixel. My statement about 4:2:0 was to point out that even 4:2:0 uses 12 bits/pixel, thus having the potential for more tonality than an 8 bit/pixel raw file has. Of course, 8 bit 4:4:4 with 24 bits/pixel would have an even greater potential.
-
Downsampled 4k looks way better than native 4k.. or not!
KnightsFan replied to ND64's topic in Cameras
Interesting! I think the A7S is also being oversampled, as far as I can tell from Kasson's methodology. The sensor is 12MP with a height of 2832, and it is downscaled to 2160. That's just over 31% oversampling. The numbers I see thrown around for how much you have to oversample to overcome Bayer patterns are usually around 40%, so the A7S is already in the oversampling ballpark of resolving "true" 2160.

With that in mind, I was surprised at how much difference in color there is. Even on the real world images, there's a nice improvement in color issues on the A7R4 over the A7S. The difference in sharpness was not very pronounced at all. But Bayer patterns retain nearly 100% luminance resolution, so maybe that makes sense. The color difference evens out a bit when the advanced algorithms are used, which really shows the huge potential of computational photography, all the way from better scaling algorithms up to AI image processing.

I suspect that some of our concept that oversampling makes vastly better images comes from our experience moving from binned or line skipped to oversampling, rather than directly from native to oversampling. And I also think that we all agree that after a point, oversampling doesn't help. 100MP vs 50MP won't make a difference if you scale down to 2MP.
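A quick check of that oversampling figure, using the numbers from Kasson's test as I read them:

```python
# Oversampling relative to the UHD target, measured by sensor height.
a7s_height = 2832  # A7S vertical resolution (12MP sensor)
target = 2160      # UHD frame height
print(f"{a7s_height / target - 1:.1%}")  # ~31.1%, vs the ~40% rule of thumb
```
-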
Haha thanks, I didn't even notice that! I have now passed full subsampling on EOSHD.
-
Doesn't make sense to me. 8 bit Bayer is significantly less information per pixel than 8 bit 4:2:0. RGGB refers to the pattern of pixels; it's only one 8 bit value per pixel, not 4.
-
Can you share an example file with both the original H.265, and the transcoded ProRes?
-
Aha, I think I understand now. So the audio is going from the band out to the speakers without time manipulation? In that case, you would just monitor it normally. As for syncing up, that requires you to know what the band is going to do, and choreograph beforehand. Example: say you want to reframe when the key changes. Since your video is being slowed down, anything you film will be seen on screen at some point x seconds in the future. Therefore, you would have to reframe x seconds before the band shifts key (toy numbers below). I'm not sure what the purpose of chopping up the audio and playing back 1/4 of it is anymore?
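To make "x seconds in the future" concrete, here's a toy calculation. It assumes playback of the slowed stream starts as soon as recording begins; the exact offset depends on the chunk size, but the point is that the lead time keeps growing:

```python
# With a 4x slowdown, a moment recorded r seconds after the start appears
# on screen at roughly 4*r seconds, so you must act earlier and earlier.
def onscreen_time(record_time_s, slowdown=4):
    return record_time_s * slowdown

for r in (1, 10, 60):
    shown = onscreen_time(r)
    print(f"recorded at {r:>2}s -> shown at {shown}s (act {shown - r}s early)")
```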
-
If you are slowing audio and video together from the same rig, you can manipulate the camera in real time based on the music being played by the band (stream A from my example), and all of those camera changes will already be in sync with the audio when they are slowed down together. Am I getting this right? I'm probably still missing something about what you're trying to accomplish. Edit: Here's a more concrete example of what I'm saying. If you are recording the band, and you want to reframe when they change key, then you simply need to reframe when they change key. When the audio and video are slowed down, the slowed down video will still reframe exactly when the slowed down audio changes key. Hopefully that will clarify how I am misunderstanding you. Experimenting is the fun part! It doesn't necessarily have to produce results at all.
-
What do you mean by "live?" Do you mean live as it's recorded or live as it's played? I suggest for clarification we refer to monitoring as it's recorded or as it's played; it's less confusing that way (to me!).

Ultimately, you will need to do what @BTM_Pix suggested originally, which is record to a buffer, and selectively play out of that buffer. Essentially, record 4 seconds of audio to a file, slow that file down, and then play it over the next 16 seconds. You can continue to do this until you run out of file space. At that point, you have two streams of audio: A. the one being recorded, and B. the one being played (which is always 4x the length of stream A). I assume the question isn't how to monitor either of those.

So I originally thought what you were trying to do is listen to the audio that you are recording, in a rough approximation of what it will sound like after it is slowed down. To do this, every 4 seconds, you listen to the 1st second slowed down to fill the whole 4 seconds. If this is the case, then all you need to do is introduce a third audio stream C, which is exactly the same as stream B, but only plays every 4th group of 4 seconds. This will monitor in real time as audio is recorded, with a slow down effect so you know what it will sound like later when it plays (although you won't be able to listen to all of it). A minimal sketch of streams A, B, and C follows this post.

However, you drew a diagram which implies you are listening to normal speed audio, but only listening to 1 second every 4 seconds, the exact opposite of my original thought. That would also work, but obviously you are listening to audio significantly after it was recorded. So that brings me to this question: what does "meaningful" mean to you? What are you trying to hear and adjust for in the audio?

So to be clear, this is music being played on site, not a pre-recording of music? I ask because if both the video and audio are being recorded at the same time, and slowed in the same way, they should stay in sync already. So if a band is playing and you are shooting it with an HFR camera and audio, and you project the video in another room with that same audio, both slowed down, then they will be in sync.
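Here's a minimal sketch of the buffer idea, using hypothetical read/play callbacks in place of real audio I/O (a real implementation would use something like the sounddevice library), and naive sample repetition for the slowdown (which also drops the pitch):

```python
SLOWDOWN = 4  # playback runs 4x slower than recording

def stretch(chunk, factor=SLOWDOWN):
    # Naive time stretch: repeat each sample `factor` times.
    # (This also lowers the pitch; real code would pitch-correct.)
    return [s for s in chunk for _ in range(factor)]

def run(read_second, play_b, play_c):
    """read_second(): one second of samples (stream A), or None when done.
    play_b: output queue for the full slowed stream (stream B).
    play_c: real-time monitor output (stream C)."""
    t = 0  # seconds recorded so far
    while (chunk := read_second()) is not None:
        play_b(stretch(chunk))    # stream B grows 4s for every 1s recorded
        if t % SLOWDOWN == 0:
            # Stream C: only the 1st second of every 4, stretched to fill
            # those 4 seconds, so monitoring keeps pace with recording.
            play_c(stretch(chunk))
        t += 1
```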
-
I agree with you, but the XT3 also cost significantly more at launch. I wouldn't be surprised if Nikon released a pricier APS-C body with better video. At least the Z50 looks like it has sensible ergonomics.
-
In one of the BTS features for The Hobbit, Peter Jackson said that one side effect of digital was that they would roll through multiple takes. It makes me wonder how much of that footage is a "usable ratio" (actual footage where the actors are acting), how much is between takes, clapperboards, etc., how much is from multiple cameras as mentioned, and how much is from content that was cut or done in multiple takes.
-
Budget PC and monitor for 4k editing and color grading.
KnightsFan replied to Krotor's topic in Cameras
We talked about monitors not too long ago here, though there weren't a lot of recommendations. I'd be interested to hear what you end up going for and how you like it; I'm still in the market for a monitor myself. I did a bunch of research and finally picked up an Asus PA329Q on Amazon, but it ended up being a scam (I got my money back, fortunately). I would ideally like LUTs, DCI coverage, and HDR if possible. The PA329Q doesn't have HDR, but what I like about it is that you can manually set it to different color gamuts as needed. I think many cheaper wide-color-gamut monitors can't easily switch between wide and sRGB gamuts.

Out of curiosity, do you use Resolve? I think a Decklink is required to view in 10 bit and/or HDR. For Resolve it almost makes more sense to get a cheap 4k monitor for the UI and then a top quality HD monitor + Decklink for viewing. However, that would be really annoying for all other uses, like gaming, photo editing, etc.

Yeah, unless it says otherwise, 10 bit always means 8 bit + 2 FRC. But for the most part they are indistinguishable, and in any case 8+2 FRC is vastly superior to 8 alone.

If you edit ProRes, you should very easily find a PC build that edits 4k nicely. If you edit 10 bit H.265, though, that might be tough and you might consider a proxy workflow (a quick sketch below). I've got a 6 year old i7-4770 and a GTX 1080, probably in your budget ballpark or a little lower, and 4k 10 bit H.265 has some hiccups every now and then even without any effects. If you edit sound at all, my recommendation is to get a large PC case with a bunch of 120 or 140mm fans, and use SSDs.
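For the proxy route, here's a minimal sketch of generating ProRes Proxy files with ffmpeg via Python (hypothetical filenames; assumes ffmpeg is installed and on your PATH):

```python
import subprocess

def make_proxy(src, dst):
    # ProRes Proxy (prores_ks profile 0), scaled to 720p, audio passed through.
    subprocess.run([
        "ffmpeg", "-i", src,
        "-c:v", "prores_ks", "-profile:v", "0",
        "-vf", "scale=1280:720",
        "-c:a", "copy",
        dst,
    ], check=True)

make_proxy("A001_clip.mov", "A001_clip_proxy.mov")  # hypothetical names
```
-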
Scorsese compared MCU to modern theme park. What is your angle on it?
KnightsFan replied to heart0less's topic in Cameras
Oops, I meant Avengers Infinity War; I haven't seen Endgame. I do think that the original Iron Man had a much more classical arc, where Tony's choices mattered and were not given, so I wouldn't be surprised if Endgame put more effort into that area to round out his arc. I think that the difference is that in the MCU, the choices are actually just character traits. In an extreme example, Hulk will always decide to smash because that's what he does. While the audience technically knows Luke will leave Tatooine because the story wouldn't exist if he didn't, his character is not set up from the beginning to do just that. Honestly, I can't say the characters in the original Star Wars resonate with me, though. It's a bare bones story used as an excuse for some of the most imaginative worldbuilding ever, with a much more interesting cast of places than people.
-
Scorsese compared MCU to modern theme park. What is your angle on it?
KnightsFan replied to heart0less's topic in Cameras
I don't think that the "lighthearted entertainment" aspect is what really makes the MCU theme park-ish. I'd argue that the George Lucas Star Wars movies are also lighthearted entertainment, but without the theme park feel. I even think the first Transformers still felt like a "movie" compared to the MCU (and I'm not talking about quality, but tone and structure). I think this is exactly right: There's something different about MCU and modern Star Wars movies compared even to other blockbusters. They are more like sports in how they portray the heroes and villains, and structurally they are episodic. Edit: Another way to put it is that in the MCU, characters are defined by traits. Iron Man is a leader. Starlord makes '80s references. I would argue that in Avengers Endgame, the choice they make to fight Thanos is never examined at all; it's a given. In classic storytelling movies, characters are defined by choices and their reasoning. Frodo chooses to take the ring to Mordor to save the Shire. The T-800 chooses to melt himself to save the future. In the same way, we admire sports stars first and foremost because of their abilities and traits.
-
While long tracking shots are always incredible, and difficult to physically pull off, here's a shot that is amazing for the way it affects the story. (Start at 3:14 in case the link doesn't jump there automatically.) What I love about this is that there's no complex choreography or VFX; it would be almost trivial to shoot. It's not even a particularly striking composition. Yet this completely ordinary shot both completes Cobb's emotional journey and simultaneously asks the biggest question of the film, leaving an otherwise closed ending completely open. It's worth noting that unlike franchise films, this open ending has no room for a sequel. If we disregard the cut in to Michael Caine, the shot really begins at 3:06, which makes this the only shot where Cobb is seen together with his children, and also the only shot where it is unclear whether he's in the real world.
-
The OP here has an easy method: http://www.dvxuser.com/V6/showthread.php?303559-Measuring-rolling-shutter-put-a-number-on-this-issue! There's some error of course, but his measurements line up with manufacturer claims in the rare cases where manufacturers publish their numbers, so it should be reasonably accurate. I'm too busy this week, but if you get the footage I'd be happy to try to do the measurements next weekend.
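For reference, one common DIY approach (not necessarily the one in that thread) is to film an LED strobing at a known frequency and count the light/dark band pairs in a single frame:

```python
# Each light/dark band pair spans one strobe period of sensor readout,
# so readout time = band pairs / strobe frequency.
def rolling_shutter_ms(band_pairs, strobe_hz):
    return band_pairs / strobe_hz * 1000.0

print(rolling_shutter_ms(band_pairs=2.0, strobe_hz=100))  # -> 20.0 ms
```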
-
Speaking of, Victoria is another film shot in one continuous take, with no hidden cuts. It's impressively action-packed, with a huge range of locations. Certainly one of the most impressive shots I've ever seen.
-
Scorsese compared MCU to modern theme park. What is your angle on it?
KnightsFan replied to heart0less's topic in Cameras
Wow, that is the most accurate summary of the MCU that I've seen. Like Scorsese says, they are not poorly made. Most MCU movies are clearly made with a lot of care and loving attention to detail, just not the details us snobs are looking for. Conveying "emotional, psychological experiences" is their antithesis. There's just the faintest scattering of storytelling: 3 acts, a villain, a half dozen callbacks to Act 1 in Act 3, and the rest is precisely a theme park ride. There is no attempt to communicate an abstract idea or feeling; everything the movie presents is either a description of events as they occur, a humorous one-liner*, or information about the exact extent of a character's physical capabilities. And it's not that fantasy or blockbuster movies are necessarily theme parkish. The Lord of the Rings conveyed an enormous amount of emotion. The Star Wars prequels -tried- to convey emotion, but were hit or miss on execution. Nolan's Batman films were entirely character driven, delivering on the psychological experience more so than the emotional. *With the exception of Thor Ragnarok, and moments in Guardians, the one-liners aren't even character-based. Any character could say any of the jokes and it would make sense. I'll also say that Thor is the only MCU movie I've seen that has recurring character jokes that genuinely advance our understanding of the characters and work off of previous content.