Everything posted by kye
-
Thanks! Those shots are the more formal shots - the real video is a much more casual thing, including things like my wife throwing seaweed at me, and other amusements that I'm not allowed to post publicly. In that context those shots are quite stable! Compared to the action camera, a tripod would literally make the rig hundreds of times larger, and really defeat the "action" part of it 🙂
-
If it's worth saying, it's worth repeating, right? Also, if it's worth saying, it's definitely worth repeating. I completely agree. To put things into perspective, here's a video from the GH1 that I saw shared recently. To my eyes, it looks better than almost everything I've seen posted in the last couple of years. 1080p camera, 1080p upload, from a camera that is so old it doesn't change hands much anymore, but when it does it can be had for about $100.
-
The mods continue - putting a filter thread on the 15mm f8 lens... I bought a filter adapter to go from something larger down to the 52mm thread that I wanted, then applied PVA glue where I worked out that the lens and the filter thread adapter touch: Put it on the lens: I made sure it was flat and waited a long time to ensure the glue had dried all the way through and hardened properly. Just for fun I took the whole filter stack from the Micro and put it on here, so this is the lens with the glued-on adapter, a 52-58 adapter, a UV IR filter, the Tiffen BPM 1/8, and the vND. I have a 52mm vND so I'll just put that on there, but I also bought a super-cheap 52mm diffusion filter from China, so I'll try that on there and see how that goes - it was under $10 so I can't really go wrong. I haven't shot anything with the lens yet, but it's on my list....
-
First colour video from the SJ4000 / zoom lens combo.... I pushed the colour, both towards warm/magenta and also in saturation, partly to experiment with a punchier look, but also to see how far I could push the image. This is the final look I applied: and this was the untouched SOOC footage: Not a bad look, but not the one I was going for. Sunsets aren't green, after all! The only mod that I still want to make is to extend the lever on the focus ring to give finer control and more leverage (as some parts of the focal range get a bit stiff). It's quite difficult to focus, even with the 4X digital zoom that can be used to 'punch-in' before shooting, and I missed quite a few shots while filming this. Overall though, it's quite a capable package.
-
Awesome... next steps are: 1) figure out if there is a cheaper way to get that glass, perhaps in a consumer lens, 2) figure out if there is a different way to get the same optical recipe (for example the Soviet lenses are famously replications of the Zeiss recipes that the Soviets took from Germany at the end of WWII), and 3) figure out what the image qualities are that you like from those lenses and work out if there is a way to replicate them in other ways, like diffusion filters, streak filters, etc.
-
It depends on the footage you're matching and what the differences are. It's one element of the image, but isn't the only thing, of course.
-
I'm skeptical. Just because something is the best of the available options doesn't mean that the available options were the best ones that could have been created. During the last decade we've gotten a 16x increase in sensor resolution (8K vs 2K) combined with a radical price decrease (compare the launch price of the Canon R5 or the UMP 12K with the launch price of the original Alexa), and yet we haven't even matched the colour science or dynamic range. The fact that we have gotten radical "improvements" but still view the decade-old Alexa as having superior image quality means that the last decade was spent improving things that didn't matter, or at least weren't the most critical.

It's like if I cooked you a meal but you found that it tasted quite bad, and I said I was going to work on it. I come back a decade later, you taste the food and it still tastes bad, and you ask me "you said you were going to improve your cooking - what happened?" and I reply "I did improve my cooking - now for the same budget I can make a huge amount of food that tastes like that."

The edges you see when you zoom in are to do with the amount of sharpening, noise reduction, and compression being applied to the image, which Sony (prior to the A7S3) has had a pretty poor track record with. The autofocus system of a sensor has nothing to do with compression artefacts.
-
There are a number of blind camera tests around the place, and I find them useful for comparing your image preferences (instead of the prejudices we all have!) so I thought collecting them in a single thread might be useful.

To get philosophical for a second, I think that educating your eye is of paramount importance. It's easy to "train" your eye through the endless cycle of 1) hear a new camera is released, 2) read the specs and hear about the price, 3) build up a bunch of preconceived notions about how good the image will be, 4) see test footage, 5) mentally assume that the images you saw must fit with the positive impression that you created based solely on the specs and price, and 6) repeat for every new camera that is released. That's a great way to train yourself to think that over-sharpened rubbish looks "best".

The alternative to this is evaluating images based solely on the image, and going by feel, rather than pixel-peeping based on spec/price. To this end, it's useful to view blind tests of cameras you can't afford, lenses you can't afford, and old cameras you turn your nose up at because they're not the latest specs. News flash: the Alexa Classic doesn't have the latest specs either, so how many cameras have you dismissed based on specs while making an exception for the Alexa all this time? Also, although you may find that you like a particular cine lens but can't afford it (or basically any cine lens for that matter), often the cine lenses have the same glass as lenses that cost a tenth or less of the cine version. Furthermore, you may be able to triangulate that you like lenses or a camera/codec combo that gives a particular image look, and perhaps that look can be created by lighting differently, or using filters, or changing the focal lengths you use. Educating your eye can literally lead to getting better images from what you have without purchasing anything.

To kick things off, here's Tom Antos' most recent test, with the Sony FX3, Sony FX6, BM Pocket Cinema Camera 6K Pro, RED Komodo, and Z-Cam E2 F6. Test footage: and the results and discussion:

Here are his tests from 2019, with the BM Pocket 6K, Arri Alexa, RED Raven, and Ursa Mini Pro. Test footage: and results and discussion:

Another blind test from TECH Rehab, comparing the Sony F65, Sony F55, Arri Alexa, Kinefinity Mavo 6K, BM Ursa 4K, BMPCC 6K, and BMPCC 4K. Test footage: and results:

Another test from Carls Cinema, comparing the OG BMPCC 2K and BMPCC 4K:

Another test from Carls Cinema, comparing the OG BMPCC 2K to the GH5:

A big shootout from @Mattias Burling comparing a bunch of cameras, but interestingly, also comparing different modes/resolutions of the cameras, and also paired with different lenses because (hold the front page!) the camera isn't the only thing that creates the image. Shocking, I know.... I won't name the cameras here, as not even knowing which cameras are in there is part of the test. The test footage: And the results and discussion:

More camera tests:

Another great camera/lens combo test, this time from @John Brawley:

And a blind lens test:

If anyone can find the large blind test from 2014 (IIRC?) that included the GH4 as well as a bunch of cine cameras, it would be great to link to it here. I searched for it but all I could find was a few articles that included private Vimeo videos, so maybe it's been taken down? It was a very interesting test and definitely worth including.

If you know of more, please share! 🙂
-
Or at all. Telling truth to power / truth about power is regarded as an extreme act for good reason.
-
I've found one of the really important things in matching two cameras is to use a colour checker (under identical lighting conditions) and use the Hue vs Hue, Hue vs Sat, and Hue vs Lum curves to match up the colour patches from the colour checker. If you haven't done that then it's worth a go, as often those curves take a match from being quite bad to really close.
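If you want to see what those curve adjustments look like as numbers first, here's a minimal sketch (with made-up patch values - it just turns colour checker samples from two cameras into the per-hue shift and gain figures you could then dial into the Hue vs Hue, Hue vs Sat and Hue vs Lum curves):

```python
# Minimal sketch: compare colour checker patches from a reference camera and a
# target camera, and report the hue/sat/lum adjustments needed at each patch's hue.
import colorsys

# (R, G, B) in 0-1, averaged over each checker patch - made-up sample values
reference_patches = [(0.45, 0.31, 0.26), (0.62, 0.58, 0.51), (0.35, 0.42, 0.61)]
target_patches    = [(0.47, 0.30, 0.24), (0.64, 0.57, 0.49), (0.33, 0.43, 0.64)]

for i, (ref, tgt) in enumerate(zip(reference_patches, target_patches)):
    ref_h, ref_l, ref_s = colorsys.rgb_to_hls(*ref)
    tgt_h, tgt_l, tgt_s = colorsys.rgb_to_hls(*tgt)
    hue_shift = (ref_h - tgt_h) * 360                  # degrees (ignoring hue wraparound)
    sat_gain  = ref_s / tgt_s if tgt_s else 1.0        # for the Hue vs Sat curve
    lum_gain  = ref_l / tgt_l if tgt_l else 1.0        # for the Hue vs Lum curve
    print(f"patch {i}: at hue {tgt_h * 360:5.1f} deg -> shift {hue_shift:+5.1f} deg, "
          f"sat x{sat_gain:.2f}, lum x{lum_gain:.2f}")
```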
-
That looked quite nice, but still had the 'video' look to me. I wonder if everyone is so used to seeing stuff with the same look that people are now blind to it. Here are some things that don't share the same look, for contrast.
-
For anyone who is reading this far into a thread about resolution but for some reason doesn't have an hour of their day to hear from an industry expert on the subject, the section of the video I have linked to below is a very interesting comparison of multiple cameras (film and digital) with varying resolution sensors, and it's quite clear that the level of perceptual detail coming out of a camera is not that strongly related to the sensor resolution:
-
Ok.. Let's discuss your comments.

His first test, which you can see the results of at 6:40-8:00, compares two image pipelines: 1) a 6K image downsampled to 4K, which is then viewed 1:1 in his viewer by zooming in 2x, and 2) a 6K image downsampled to 2K, then upsampled to 4K, which is then viewed 1:1 in his viewer by zooming in 2x. As this view is 2X digitally zoomed in, each pixel is twice as large as it would be if you were viewing the source video on your monitor, so the test is actually unfair. There is obviously a difference in the detail that's actually there, and this can be seen when he zooms in radically at 7:24, but when viewed at 1:1 starting at 6:40 there is perceptually very little difference, if any. Regardless of whether the image pipeline is "proper" (and I'll get to that comment in a bit), if downscaling an image to 2K and back up again isn't visible, the case that resolutions higher than 2K are perceptually differentiable is pretty weak straight out of the gate.

Are you saying that the pixels in the viewer window in Part 2 don't match the pixels in Part 1? Even if this were the case, it still doesn't invalidate comparisons like the one at 6:40, where there is very little difference between two image pipelines in which one has significantly less resolution than the other, and yet they appear perceptually very similar, if not identical.

He is comparing scaling methods - that's what he is talking about in this section of the video. This use of scaling algorithms may seem strange if you think that your pipeline is something like 4K camera -> 4K timeline -> 4K distribution, or the same in 2K, as you have mentioned in your first point, but this is false. There are no such pipelines, and pipelines like this are impossible. This is because the pixels in the camera aren't pixels at all; rather, they are photosites that sense either Red or Green or Blue, whereas the pixels in your NLE and on your monitor or projector actually have Red and Green and Blue. The 4K -> 4K -> 4K pipeline you mentioned is actually ~8M colour values -> ~24M colour values -> ~24M colour values. The process of taking an array of photosites that each record only one colour and creating an image where every pixel has values for Red, Green and Blue is called debayering, and it involves scaling. This is a good link to see what is going on: https://pixinsight.com/doc/tools/Debayer/Debayer.html

From that article: "The Superpixel method is very straightforward. It takes four CFA pixels (2x2 matrix) and uses them as RGB channel values for one pixel in the resulting image (averaging the two green values). The spatial resolution of the resulting RGB image is one quarter of the original CFA image, having half its width and half its height."

Also from the article: "The Bilinear interpolation method keeps the original resolution of the CFA image. As the CFA image contains only one color component per pixel, this method computes the two missing components using a simple bilinear interpolation from neighboring pixels."

As you can see, both of those methods involve scaling. Let me emphasise this point - any time you see a digital image taken with a digital camera sensor, you are seeing a rescaled image. Therefore Yedlin's use of scaling in an image pipeline is applying scaling to an image that has already been scaled from the sensor data into an image with three times as many colour values as the sensor captured.
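If the "superpixel" description sounds abstract, here's a minimal sketch of what it does (illustrative only, not code from the article or from Yedlin): each 2x2 block of single-colour photosites becomes one full-colour pixel, which is why the RGB output has half the width and height of the raw photosite array.

```python
# Minimal sketch of "superpixel" debayering for an RGGB Bayer layout:
# each 2x2 block of single-colour photosites becomes one RGB pixel,
# so the output is half the width and half the height of the raw array.
import numpy as np

def superpixel_debayer(raw):
    """raw: 2D array of photosite values arranged as R G / G B in each 2x2 block."""
    r  = raw[0::2, 0::2]                 # red photosites
    g1 = raw[0::2, 1::2]                 # first green photosite in each block
    g2 = raw[1::2, 0::2]                 # second green photosite
    b  = raw[1::2, 1::2]                 # blue photosites
    g  = (g1 + g2) / 2.0                 # average the two greens
    return np.stack([r, g, b], axis=-1)  # shape (H/2, W/2, 3)

# Fake 4x4 "sensor" readout, just to show the shapes involved
raw = np.arange(16, dtype=float).reshape(4, 4)
print(raw.shape, "->", superpixel_debayer(raw).shape)  # (4, 4) -> (2, 2, 3)
```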
A quick google revealed that there are ~1,500 IMAX theatre screens worldwide, and ~200,000 movie theatres worldwide.

Sources: "We have more than 1,500 IMAX theatres in more than 80 countries and territories around the globe." https://www.imax.com/content/corporate-information "In 2020, the number of digital cinema screens worldwide amounted to over 203 thousand – a figure which includes both digital 3D and digital non-3D formats." https://www.statista.com/statistics/271861/number-of-digital-cinema-screens-worldwide/

That's less than 1%. You could make the case that there are other non-IMAX large screens around the world, and that's fine, but when you take into account that over 200 million TVs are sold worldwide each year, even the number of standard movie theatres becomes a drop in the ocean when you're talking about the screens that are actually used for watching movies or TV shows worldwide. Source: https://www.statista.com/statistics/461316/global-tv-unit-sales/

If you can't tell the difference between 4K and 2K image pipelines at normal viewing distances, and you are someone who posts on a camera forum about resolution, then the vast majority of people watching a movie or a TV show definitely won't be able to tell the difference.

Let's recap:
- Yedlin's use of rescaling is applicable to digital images because every image from every digital camera sensor that basically anyone has ever seen has already been rescaled by the debayering process by the time you can look at it
- There is little to no perceptual difference when comparing a 4K image directly with a copy of that same image that has been downscaled to 2K and then upscaled to 4K again, even if you view it at 2X
- The test involved swapping back and forth between the two scenarios, whereas in the real world you are unlikely to ever get to see the comparison like that, or even at all
- The viewing angle of most movie theatres in the world isn't sufficient to reveal much difference between 2K and 4K, let alone the hundreds of millions of TVs sold every year, which are likely to have a smaller viewing angle than normal theatres
- The tests you mentioned above all involved starting with a 6K image from an Alexa 65, one of the highest quality imaging devices ever made for cinema
- The remainder of the video discusses a myriad of factors that are likely to be present in real-life scenarios and further degrade image resolution, both in the camera and in the post-production pipeline
- You haven't shown any evidence that you have watched past the 10:00 mark in the video

Did I miss anything?
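PS - if anyone wants to try the 2K round trip for themselves, here's a rough sketch (using Pillow's Lanczos resampling as a stand-in for whatever resamplers Yedlin used, and a placeholder file name) that downscales a frame to 2K, upscales it back, and saves both versions so you can flip between them 1:1:

```python
# Rough sketch of the round-trip comparison: downscale a frame to 2K width,
# upscale it back to the original size, then measure and eyeball the difference.
# "frame_4k.png" is a placeholder - use any high-resolution still you have.
from PIL import Image
import numpy as np

original = Image.open("frame_4k.png").convert("RGB")
w, h = original.size

small = original.resize((2048, round(h * 2048 / w)), Image.LANCZOS)  # down to 2K width
round_trip = small.resize((w, h), Image.LANCZOS)                     # back up again

a = np.asarray(original, dtype=np.float32)
b = np.asarray(round_trip, dtype=np.float32)
print("RMSE vs original (8-bit code values):", float(np.sqrt(np.mean((a - b) ** 2))))

# Save both so you can A/B them 1:1 in a viewer, the way the video does.
original.save("compare_original.png")
round_trip.save("compare_round_trip.png")
```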
-
Welcome to the conversation. I'll happily discuss your points now that you have actually watched it, but will take the opportunity to re-watch it before I reply. I find it incredibly rude that you are so protective of your own time and yet feel so free to recklessly disregard the time of others by talking about something you didn't care to watch, and also by risking the outcome that you were talking out your rear end without knowing it, but I'll still talk about the topic as even selfish people can still make sense. A stopped clock is also right twice a day, so we'll see how things shake out. Watching is not understanding, so you've passed the first bar, which is necessary but not sufficient.

This is an excellent example and I think it speaks to the type of problem. Light bouncing off reality is of (practically) infinite resolution, and then it goes through the air, then:
- through your filters
- lens elements (optical distortions) and diffraction from the aperture
- sensor stack
- and is then sensed by individual pixels on the sensor (can diffraction happen on the edges of the pixels?)
- it then gets quantised to a value and processed RAW in-camera, and potentially recorded or output at this point
- in some cameras it then goes on to be non-linearly resized (eg, to compensate for lens distortions - do cameras do this for video?)
- rescaled to the output resolution
- processed at the output resolution (eg, sharpening, colour science, vignetting, bit-depth quantisation, etc)
- then compressed into a codec at a given bitrate

Every one of these things degrades the image, and the damage is cumulative. All but one of them could be perfect, but if one is terrible then the output will still be terrible. Damage may also be of different kinds that might be mostly independent, eg resolution vs colour fidelity, so you might have an image pipeline that has problems across many 'dimensions'.

Going back to your example and the Sony A7s2, if you take RAW stills then I'm sure the images can be sharp and great - in which case, the optics and sensor can be spectacular and it's the video processing and compression that is at fault. This is yet another reason that I think resolutions above 2K are a bad thing - most cameras aren't getting anything like the resolution that high-quality 2K can offer, but they still require a more and more powerful computer to edit and work with the mushy and compressed-to-death images. Any camera that can take high-quality stills but produces mushy video is very frustrating, as they could have spent the time improving the output instead of just upping the specs without getting the associated benefits.
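And just to illustrate how the damage stacks up, here's a toy simulation (using Pillow's blur, resize and JPEG as crude stand-ins for optics, internal rescaling and codec compression - not a model of any real camera, and "frame.png" is a placeholder file name):

```python
# Toy illustration of cumulative degradation: blur (optics), a half-res round trip
# (internal rescaling), and a low-quality JPEG pass (codec), applied in sequence.
import io
import numpy as np
from PIL import Image, ImageFilter

def rmse(a, b):
    a = np.asarray(a, dtype=np.float32)
    b = np.asarray(b, dtype=np.float32)
    return float(np.sqrt(np.mean((a - b) ** 2)))

original = Image.open("frame.png").convert("RGB")   # placeholder file name
w, h = original.size

# Stage 1: optical softness, crudely modelled as a slight Gaussian blur
blurred = original.filter(ImageFilter.GaussianBlur(radius=1.0))

# Stage 2: internal rescaling, modelled as a half-resolution round trip
rescaled = blurred.resize((w // 2, h // 2)).resize((w, h))

# Stage 3: codec compression, modelled as a low-quality JPEG round trip
buf = io.BytesIO()
rescaled.save(buf, "JPEG", quality=60)
buf.seek(0)
compressed = Image.open(buf).convert("RGB")

for name, img in [("blur", blurred),
                  ("blur + rescale", rescaled),
                  ("blur + rescale + codec", compressed)]:
    print(f"{name:24s} RMSE vs original: {rmse(original, img):.2f}")
```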
-
I also found many of his articles particularly interesting, especially given that some have very carefully prepared tests to demonstrate how they translate to the real world. http://www.yedlin.net/NerdyFilmTechStuff/index.html Did you find anything else in here that was of interest?
-
A friend recommended a movie to me, but it looked really long. I watched the first scene and then the last scene, and the last scene made no sense. It had characters in it I didn't know about, and it didn't explain how the characters I did know got there. The movie is obviously fundamentally flawed, and I'm not watching it. I told my friends that it was flawed, but they told me that it did make sense and the parts I didn't watch explained the whole story, but I'm not going to watch a movie that is fundamentally flawed! They keep telling me to watch the movie, but they're obviously idiots, because it's fundamentally flawed.

They also sent me some recipes, and the chocolate cake recipe had ingredient three as eggs and ingredient seven as cocoa powder (I didn't read the other ingredients), but you can't make a cake using only eggs and cocoa powder - the recipe is fundamentally flawed. My friend said that the other ingredients are required in order to get a cake, but I'm not going to bother going back and reading the whole recipe and then spending time and money making it when it's obviously flawed.

My friends really are stupid. I've told them about the bits that I saw, and they kept telling me that a movie and a recipe only make sense if you go through the whole thing, but that's not how I do things, so obviously they're wrong. It makes me cry for the state of humanity when that movie was not only made but won 17 Oscars, and that cake recipe was named Oprah's cake of the month. People really must be stupid.
-
You only think it's flawed because you didn't watch the parts of it that explain it. I have spent many many hours preparing a direct response to answer your question, and all the other questions you have put forward. I think after a thorough examination you will find all the answers to your questions and queries. Please click the below for all the information that you need:
-
Great post and thanks for the images. That is exactly what I am seeing with my Micro footage - the images just look spectacular without having to do almost anything to them. I've done tests where I have been able to match the colours from the BM cameras, but only under "easy" conditions where the GH5 sensor was not stressed. The fact you are talking about the performance under extremely challenging situations only reinforces my impressions. I'll be very curious to see your results. In my Sony sensor thread everyone was talking about how Sony sensors can be graded to look like anything, but I never see people actually trying, or if they do, it's under the easiest of controlled conditions. Here is a test I did a while ago trying to match the GH5 to the BMMCC: There are differences of course, due to using different lenses for starters, but the results would likely be passable. There's no way in hell you can get a good match under more challenging situations. To put it another way, when the OG BMPCC and BMMCC came out, people were talking about cutting them together with Alexa footage and how they matched really easily and nicely with little grading required. No-one really says that about affordable modern consumer cameras, and what I hear from professional colourists is that when someone uses a GH5 or a Sony or an iPhone, it's about doing the best they can and putting the majority of effort into managing the client's expectations.
-
The whole video made sense to me. What you are not understanding (BECAUSE YOU HAVEN'T WATCHED IT) is that you can't just criticise bits of it, because the logic builds over the course of the video. It's like you've read a few random pages from a script and are then criticising them by saying they don't make sense in isolation.

The structure of the video is this:
- He outlines the context of what he is doing and why
- He talks about how to get a 1:1 view of the pixels
- He shows how, in a 1:1 view of the pixels, the resolutions aren't discernible
- Then he goes on to explore the many different processes, pipelines, and things that happen in the real world (YES, INCLUDING RESIZING) and shows that under these situations the resolutions aren't discernible either

You have skipped enormous parts of the video, and you can't do that. Once again, you can't skip parts of a logical progression, or dare I say it "proof", and expect it to make sense. Your posts don't make sense if I skip every third word, or if I only read the first and last line.

Yedlin is widely regarded as a pioneer in the space of colour science, resolution, FOV, and other matters. His blog posts consist of a mixture of logical arguments, mathematics, physics and controlled tests. These are advanced topics and not many others have put the work in to perform these tests. The reason I say this is that not everyone will understand these tests. Not everyone understands the correct and incorrect ways to use reductive logic, logical interpolation, extrapolation, equivalence, inference, exception, boundaries, or other logical devices. I highly doubt that you would understand the logic that he presents, but one thing I can tell you with absolute certainty is that you can't understand it without actually WATCHING IT.
-
Yeah. The limitations that Sony deliberately put in their imaging devices really place limitations on the wider industry, all the people that use them, and all the images that come from those devices.
-
You're right that low light and DR are improving, but in comparison to the Alexa, 10 years has delivered 1600% of the pixels and <100% of the DR. No-one in their right mind could suggest they're balancing those priorities, and this is in a world where people watch a large percentage of stuff on their phones, in 1080p. A good deal of the richness in the older BM footage is still there in the Prores files, even the low bitrate ones. Even with the 5K sensor and downsampling on the GH5, I'm still only able to approach (and not exceed) the look of the Prores 422 or LT files from the Micro. The DR of recent Sony cameras is very good, but more would also be useful. Think about a shot that is backlit - in order to not blow out the sky you are forced to underexpose the foreground, which is hardly ideal in terms of exposure. I'd suggest that the phrase is "borderline usable". No-one is talking about 6K and saying we really need 8K and that 6K is "borderline usable". The high ISO performance is great, but once again, they've delivered a 1500% increase in the number of pixels while also raising the ISO performance; it's not like focusing on DR would have been to the exclusion of every other parameter in the whole device.
-
I agree. If I could rewire my GH5 sensor so that every second pixel went to a different gain ADC circuit and the two signals were then combined to increase the DR coming off the sensor (how the Alexa works) then I wouldn't even think twice about it. Even if it meant sacrificing 75% or more of the pixels I have. A gorgeous image is a gorgeous image in whatever resolution. We've been working on the wrong things.
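As a back-of-the-envelope illustration of the dual-gain idea (a toy sketch of the general concept with made-up numbers, not Arri's actual implementation), here's how two readouts at different gains might be merged so the result holds both the shadows and the highlights:

```python
# Toy sketch of dual-gain merging: the high-gain path resolves the shadows,
# the low-gain path holds the highlights, and a blend extends the usable range.
import numpy as np

def merge_dual_gain(high_gain, low_gain, gain_ratio=16.0):
    """high_gain, low_gain: normalised 0..1 readouts of the same scene."""
    low_scaled = low_gain * gain_ratio                      # put both on the same scale
    # Trust the high-gain path until it approaches clipping, then fade to low-gain
    w = np.clip((0.9 - high_gain) / 0.1, 0.0, 1.0)
    return w * high_gain + (1.0 - w) * low_scaled

scene = np.array([0.001, 0.01, 0.1, 1.0, 5.0, 15.0])   # more range than one readout holds
high = np.clip(scene, 0.0, 1.0)                         # high-gain: clips the highlights
low  = np.clip(scene / 16.0, 0.0, 1.0)                  # low-gain: keeps the highlights
print(merge_dual_gain(high, low))                       # recovers values well above 1.0
```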
-
Bingo - found it: It's older than I thought... which is even more amusing because the VFX guy was asking for 8K before we even had it!