Everything posted by kye

  1. I think perhaps the largest difference between video and video games is that video games (and computer-generated imagery in general) can have a 100% white pixel right next to a 100% black pixel, whereas cameras don't seem to do that. In Yedlin's demo he zooms into the edge of the blind and shows the 6K straight from the Alexa with no scaling, and the "edge" is actually a gradient that takes maybe 4-6 pixels to go from dark to light. I don't know if this is due to lens limitations, diffraction, OLPFs, or debayering algorithms, but it matches everything I've ever shot. It's not a difficult test to do: take any camera that can shoot RAW and put it on a tripod, set it to base ISO and aperture priority, take it outside, open the aperture right up, focus it on a hard edge that has some contrast, stop down by 4 stops, take the shot, then look at it in an image editor and zoom way in to see what the edge looks like.
     In terms of Yedlin's demo, I think the question is whether resolution over 2K is perceptible under normal viewing conditions. When he zooms in a lot it's quite obvious that there is more resolution there, but the question isn't whether more resolution has more resolution - we know that of course it does, and VFX people want as much of it as possible - it's whether audiences can see the difference. I'm happy from the demo to say that it's not perceptually different.
     Of course, it's also easy to run Yedlin's test yourself at home. Simply take a 4K video clip and export it at native resolution and at 2K (losslessly if you like), bring both versions onto a 4K timeline, and then just watch it on a 4K display; you can even cut them up and put them side-by-side or do whatever you want. If you don't have a camera that can shoot RAW, take a timelapse with RAW still images and use that as the source video, or download some sample footage from RED, which has footage up to 8K RAW available to download free from their website.
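     A minimal scripted sketch of that at-home comparison, assuming Python with Pillow and NumPy installed; "frame_4k.png" is a hypothetical still exported from 4K footage. It downscales the frame to 2K, scales it back up (which is effectively what a 2K master on a 4K timeline/display goes through), and reports the pixel differences plus a side-by-side crop for eyeballing.

```python
# Minimal sketch of the at-home resolution comparison, assuming Pillow and NumPy.
# "frame_4k.png" is a hypothetical still frame exported from 4K footage.
from PIL import Image
import numpy as np

src = Image.open("frame_4k.png").convert("RGB")   # native 4K frame, e.g. 3840x2160
w, h = src.size

# Downscale to 2K, then scale back to 4K - which is what delivering a 2K master
# to a 4K timeline/display effectively does.
half = src.resize((w // 2, h // 2), Image.LANCZOS)
back = half.resize((w, h), Image.LANCZOS)

a = np.asarray(src).astype(np.int16)
b = np.asarray(back).astype(np.int16)
diff = np.abs(a - b)
print("mean abs difference:", diff.mean())
print("max abs difference:", diff.max())

# Side-by-side centre crops, like cutting the two versions together on a timeline.
y0, y1 = h // 2 - 270, h // 2 + 270
x0, x1 = w // 2 - 480, w // 2 + 480
side_by_side = np.hstack([a[y0:y1, x0:x1], b[y0:y1, x0:x1]]).astype(np.uint8)
Image.fromarray(side_by_side).save("side_by_side.png")
```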
  2. I've been trying to pull this apart for a long time, or maybe it just seems like a long time, it's hard to tell! I get the sense that the difference is a culmination of all the little things, and that the Alexa does all of the things you mention very well, while most cameras don't do them nearly as well. Further to that, each of us has different sensory sensitivities, so while one person might be very bothered by rolling shutter (for example), the next person may not mind so much. Also, the "lesser" cameras, like the GH5, will do some things more poorly than others; for example the 400Mbps ALL-I 10-bit 4K mode isn't as good as an Alexa, but it's significantly better than something like the A7S2 with its 100Mbps 8-bit 4K mode. And finally, the work you are doing will stress different aspects, like dynamic range being more important in uncontrolled lighting and rolling shutter being more important in high-movement scenes and (especially) when the camera is moving a lot. So in this sense, camera choice is partly a matter of finding the best overlap between a camera's strengths, your own sensitivities / preferences, and the type of work you are doing.
     Furthermore, I would imagine that some cameras exceed the Alexa's capability, at least in some aspects. These examples are rarer, and it depends on which Alexa you are talking about, but if we take the original Alexa Classic as the reference, then the new Alexa 65 exceeds it in many ways. I believe RED has models that may meet or exceed the Alexa line in terms of dynamic range (it's hard to get reliable measures of this so I won't state it as fact) and I'm sure there are other examples.
     There are other considerations beyond image though, considering that the subject of the image is critical, and I couldn't do my work at all if I had an Alexa: firstly because I couldn't carry the thing for long enough, and secondly because I'd get kicked out of the various places that I like to film, which includes out in public and also private places like museums, temples, etc that reject "professional" shooting, which they judge by how the camera looks. Everything is a compromise, and the journey is long and deep.
     I've explored many aspects here on the forums though, and I'm happy to discuss whichever aspects you care to, as I enjoy the discussions and learning more. Many of the threads I started seem to fall off, but often I have progressed further than the contributions I have made in the thread, either because I came back to it after a break, or because I've developed a sense of something but can't prove it, so if you're curious about anything then just ask 🙂
  3. A 4K camera has one third as many photosites as a 4K monitor has emitters. This means that debayering involves interpolation, which means your proposal involves significant interpolation, and therefore fails your own criteria.
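     To make the arithmetic behind that "one third" explicit, here's a quick sketch, assuming a UHD (3840x2160) Bayer sensor and a UHD RGB display:

```python
# A UHD Bayer sensor has one single-colour photosite per pixel location,
# while a UHD display has three emitters (R, G, B) per pixel.
w, h = 3840, 2160
photosites = w * h        # 8,294,400 - each samples only one colour
emitters = w * h * 3      # 24,883,200 - full RGB at every display pixel
print(photosites / emitters)   # 0.333... - two of every three values are interpolated
```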
  4. Based on that, there is no exact way to test resolutions that will apply to any situation beyond the specific combination being tested. So let's take that as true, and do a non-exact test based upon a typical image pipeline. I propose comparing the image from a 6K cinema camera put onto a 4K timeline vs a 2K timeline, and to be sure, let's zoom in to 200% so we can see the differences a little more than they would normally be visible. This is what Yedlin did.
     A single wrong point invalidates an analysis if, and only if, the subsequent analysis depends on that point. Yedlin's does not.
     No he didn't. You have failed to understand his first point, and then, subsequently, you have failed to realise that his first point isn't actually critical to the remainder of his analysis. You have stated that there is scaling because the blown-up versions didn't match, which isn't valid because:
     - different image rendering algorithms can cause them not to match, so you don't actually know for sure that they don't match (it could simply be that your viewer didn't match but his did)
     - you assumed that there was scaling involved because the grey box had impacted pixels surrounding it, which could also have been caused by compression, so this doesn't prove scaling
     - and neither of those matters anyway, because even if there was scaling, basically every image we see has been scaled and compressed
     Your "problem" is that you misinterpreted a point; even if you hadn't, the mismatch could have been caused by other factors, and even if it wasn't, it isn't relevant to the end result anyway.
  5. That could certainly be true. Debayering involves interpolation (like rescaling does), so different algorithms can create significantly different amounts of edge detail, which with high-bitrate codecs would be quite noticeable even if the radius of the differences was under 2 pixels.
  6. Upgrade paths are always about what you want and value in an image. IIRC you really value the 14-bit RAW (even over the 12-bit RAW) so I'd imagine that any upgrade would have to also shoot RAW? A friend of mine shoots with 5D+ML and apart from going to a full cinema camera, you're going to find that the alternatives all have some problem or other that would be a downgrade from the 5D. The 5D+ML combo isn't perfect by any means, but other cameras haven't necessarily even caught up yet, let alone being improvements - it's still give and take in comparison.
  7. GH5 does this via custom modes. I don't use it for stills really, but I have custom modes for video that have different exposure modes (ie, I have custom modes that are Aperture priority and ones that are Manual mode).
  8. I just signed up for this course... (I linked to the YT video because it has a promo code in the description for a small discount.) Walter is a senior colourist at Company3, which is one of the leading (if not the leading) colour and post houses in Hollywood, and I haven't seen much info from him, so I think this might be a rare chance to get some insights from him.
  9. It would be nice to see some analysis of that. Considering that images are just bunches of numbers, we can analyse them in almost any way you can imagine, but we do basically no analysis whatsoever, and instead just fight online about our preferred manufacturer as if it were a religion... It's quite sad really.
     What a fascinating thing - that BM RAW has processing baked into the RAW output.. that almost completely defeats the purpose of RAW! There's a way to work out how to undo it in post, but it's a PITA to do.
     Sharpening isn't always bad, but more can always be added in post, so manufacturers should only add it sparingly. I've heard that the Alexa adds a small amount of sharpening to the Prores, as the compression smooths some detail, so they add a little back in to match the original look. In a normal camera they add sharpening and then compress the image with h264/5, which I think is the killer combo. Sharpening and blurring are roughly mathematical opposites, so with the right algorithm you should be able to reverse the sharpening in post with blurring, but because there is compression in between, the information gets lost, and by the time you blur enough to get rid of the edge sharpening the image is blurry as hell.
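     As a small illustration of that last point, a hedged sketch (assuming Python with Pillow and NumPy, a hypothetical clean frame called "source.png", and JPEG standing in for the h264/5 step): it applies an unsharp mask, lossy-compresses the result, then tries to "un-sharpen" with Gaussian blurs of increasing radius. The error never returns to zero; you just trade halos for softness.

```python
# Sketch: sharpen-then-compress in camera, then attempt to undo it in post with blur.
from PIL import Image, ImageFilter
import numpy as np
import io

def rms(a, b):
    """Root-mean-square difference between two images."""
    return np.sqrt(np.mean((np.asarray(a, dtype=np.float64) -
                            np.asarray(b, dtype=np.float64)) ** 2))

orig = Image.open("source.png").convert("RGB")

# In-camera style processing: sharpen, then lossy-compress.
sharpened = orig.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=0))
buf = io.BytesIO()
sharpened.save(buf, format="JPEG", quality=50)   # stand-in for h264/5 compression
buf.seek(0)
delivered = Image.open(buf).convert("RGB")

# Attempted "fix it in post": blur until the edge halos are gone.
for r in (0.5, 1.0, 2.0, 3.0):
    restored = delivered.filter(ImageFilter.GaussianBlur(radius=r))
    print(f"blur radius {r}: RMS error vs original = {rms(restored, orig):.2f}")
# The error never returns to zero; the lost information isn't recoverable by blurring.
```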
  10. To elaborate on what I said before, if they are applying different colour science to the Prores and to the RAW then you'll have to use a different conversion LUT, but I think there is only one LUT for ARRI, right? If so, either the magic is in the LUT or in the camera and being applied to the RAW. I suspect the latter, as it's the only good way to keep it far from prying eyes and people who would steal it.
  11. You raise an interesting point about the Prores vs RAW and I don't think I ever got to the bottom of it. With the Prores they can put whatever processing into the camera that they want (and manufacturers certainly do), but technically the RAW should be straight off the sensor. Of course, in the instance of the Alexa, it isn't straight off the sensor due to the dual-gain architecture, which combines two readouts to get (IIRC) higher dynamic range and bit-depth, so there is definitely processing there, although the output is uncompressed. Perhaps they are applying colour science processing at this point as well, I'm not sure. The reason this question is more than just an academic curiosity is that if they are not applying colour science processing to their RAW, then at least some of the magic of the image is in their conversion LUTs, which we all have access to and could grade under if we chose to (and some do).
     Yes, testing DR involves working your way through various processing if you can't get a straight RAW signal. I'm assuming that they would have tested the RAW Alexa footage, but they haven't published the charts so who knows.
     Bit depth and DR are related, but do not need to correlate. For example, I could have a bit depth of 2 bits and a DR of 1000 stops: I would say 0 for anything not in direct sun, 1 for anything in direct sun that wasn't the sun, 2 for the sun itself, and only hit 3 if a nearby supernova occurred (gotta protect those highlights!!). Obviously this would have so much banding that it would be ridiculous, but it's possible. Manufacturers don't want to push things too far, otherwise they risk this, but you can push it if you want to.
     You're not the only ones, I hear this a lot, especially in the OG BMPCC forums / groups.
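     Here's a toy version of that 2-bit example, as a Python sketch with made-up stop thresholds, just to show that bit depth fixes the number of output codes, not the span in stops between the darkest and brightest things you can encode:

```python
# Toy encoder: 2 bits (four codes) spanning an absurdly wide dynamic range.
def encode_2bit(stops_above_shadow):
    """Map scene luminance (stops above the darkest encoded value) to a 2-bit code."""
    if stops_above_shadow < 20:      # anything not in direct sun
        return 0
    elif stops_above_shadow < 40:    # lit by direct sun
        return 1
    elif stops_above_shadow < 1000:  # the sun itself
        return 2
    else:                            # protect those supernova highlights
        return 3

for stops in (0, 5, 25, 500, 1200):
    print(stops, "->", encode_2bit(stops))
# Four codes, ~1000 stops of "dynamic range", and banding so severe it's useless -
# exactly the trade-off described above.
```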
  12. I saw this image from Cine-D that shows some of their tests and includes the Alexa - it shows that ARRI was conservative with their figures while most other manufacturers sometimes took wild liberties with theirs. These numbers should be directly comparable to the other tests that they do, as the thresholds and methodologies should be the same.
  13. I understand what you're saying, but would suggest that they are only simple to deal with in post because they've had the most work put into achieving the in-camera profiles. It is widely known that the ARRI CEO Glenn Kennel was an expert on film colour before he joined ARRI to help develop the Alexa. Film was in development for decades with spectacular investment into its colour science prior to that, so to base the Alexa colour science on film was to stand on the shoulders of giants. Glenn's book is highly recommended, and I learned more about the colour science of film from one chapter in it than from years of reading everything I could find online on the topic.
     Also, Apple have put an enormous effort into the colour science of the iPhone, which has been the most popular camera on earth for quite some time, according to Flickr stats anyway. I have gone on several trips where I was shooting with the XC10 or GH5 and my wife was taking stills with her iPhone, so I have dozens of instances where we were standing side-by-side at a vantage point shooting the exact same scene at the exact same time. Later on in post I tried replicating the colour from her iPhone shots with my footage and only then realised what a spectacular job Apple have done with their colour science - the images are heavily processed, with lots and lots of stuff going on in there.
     And now that I have a BMMCC and my OG BMPCC is on its way, I will add that the footage from these cameras also grades absolutely beautifully straight out of camera - they too (as well as Fairchild, who made the sensor) did a great job on the colour science. The P4K/P6K footage is radically different and doesn't share the same look at all.
  14. The D-Mount project

    I have a similar project that I shot with the BMMCC and the Cosmicar 12.5/1.9 C-mount and the Voigtlander 42.5/0.95 so I'll have to do the same re-cut process to remove all shots that don't include a model release! I also have an OG BMPCC on its way to me, so am planning on lots more outings with it, likely with the 7.5/2 and 14/2.5, but also perhaps with the 14-42 or 12-32 kit lenses, which have OIS, so should be much more stable 🙂
  15. His test applies to situations where there is image scaling and compression involved, which is basically every piece of content anyone consumes. If you're going to throw away an entire analysis based on a single point, then have a think about this: 1<0 and the sky is blue. Uh oh, now that I've said that 1<0, which clearly it isn't, the sky can't be blue, because everything I said must now logically be wrong and cannot be true!
  16. He took an image from a highly respected cinema camera, put it onto a 4K timeline, then exported that timeline to a 1080p compressed file, and then transmitted that over the internet to viewers. Yeah, that doesn't apply to anything else that ever happens, you're totally right, no-one has ever done that before and no-one will ever do that again..... 🙄🙄🙄
  17. Why do you care if the test only applies to the 99.9999% of content viewed by people worldwide that has scaling and compression?
  18. Goodness! We'll be talking about content next!! What has the state of the camera forums come to?!?!?! Just imagine what people could create if they buy cameras that have a thick and luscious image to begin with, AND ALSO learn to colour grade...
  19. Panasonic GH6

    They should just make a battery grip that contains an M.2 SSD slot that automagically connects to the camera and records the compressed raw on that - bingo, "external" raw. Or licence Prores from Apple and offer those options. Or just make it possible to select whatever bitrate and bit depth you want and turn off sharpening - that would do it for me.
  20. I'm also just looking at skin tones. I should probably try to zoom out a little, but it's interesting to see how we each perceive these things. As @TomTheDP said, Alexas are commonly a bit green. I was completely surprised when I heard this for the first time because you never see it in the final footage, but apparently it's just a thing that everyone deals with.
     I think that knowledge of colour grading is perhaps the biggest differentiator between how primarily amateur groups and primarily pro groups discuss image quality - the pros seem to view SOOC footage as a raw material whereas amateurs discuss it like it's a final product (or only a LUT away from one). I saw someone on another forum comparing two RAW-shooting camera models from the same company, and their comment was that they really wanted to like the later model but there was a slight texture to the skin tones they didn't care for. The thing is that both cameras share the same sensor, the person who made the test used the same settings and just applied a technical LUT to get a straight comparison, and the shots were taken outdoors non-simultaneously, so small differences between them were inevitable. They were talking as if the 12-bit RAW footage wasn't changeable, yet it is the most neutral, flexible codec available.
     Even the way that cinematographers do latitude tests on RAW-shooting cameras indicates they view the camera with a "shoot it so that after colouring it I get the best image" mindset, but that mindset is almost completely absent elsewhere, other than the odd cult who have sold all their possessions to a deity they don't understand called ETTR. It's a bizarre world where the leaders are running around screaming "shoot in LOG so your footage looks miserable SOOC, but let's not ever talk about colour grading - you should buy my LUT instead!" and no-one questions this. You'd imagine a counter-culture would have emerged by now, but unfortunately it seems that anyone rejecting this premise hasn't done it by empowering themselves and learning to grade..
  21. I think you raise an interesting point about the pipeline. The video files that many cameras produce are only a pale impression of the capabilities of the sensor, and that is definitely the case when cameras have 8-bit low-bitrate codecs. Do you think this is also true for the RAW-shooting cameras we now have, such as the BM or Prores RAW cameras? I ask because although I don't think anything should be missing with those setups, I wonder if there's something you're aware of that I'm not?
     In terms of DR, I think that we are most certainly not there. I still think there is benefit to having more DR than even an Alexa has, at least when shooting fast in uncontrolled circumstances. For example, I would like to have perfect exposure on a subject while also having the sunset in the background, whereas (IIRC) even the widest-DR cameras still clip more of the sunset than you'd want if the subject is placed at the right IRE for skin tones. The Alexa (apart from being generally regarded as more capable than ARRI's own figures suggest) employs a simultaneously-combined dual-gain architecture on top of 10-year-old sensor tech. The latest sensor tech has gotten a lot closer to that performance using a single-gain architecture, but if we took some of the zillions of pixels we now have and sacrificed them to implement a dual-gain architecture, we could easily leapfrog the Alexa's DR, which I think would create absolutely stunning images beyond what we have seen from current sensors outside their sweet spots. (There's a rough sketch of the dual-gain idea below.)
     Going back to the destructive image pipeline that happens inside consumer cameras, it really makes me angry that in many cases people have bought a sensor with X performance, an image processor with Y performance, and an SD card writer with Z performance, but instead of getting the overall performance of the least capable component (the bottleneck), we get something like 10% of what the bottleneck would allow. Things like a 709 profile that deliberately clips the top few stops of DR instead of putting in an aggressive knee are ridiculous - that's simply an adjustment of the profile itself and requires no hardware changes at all. Then people start wanting to hack the camera, and the manufacturers respond by encrypting and otherwise preventing these alterations. In effect, you are paying extra money for each camera you buy in order for the manufacturer to prevent you from getting the full benefit of the product you are buying.
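     The dual-gain idea, sketched very loosely in Python. This is a conceptual toy, not ARRI's actual implementation - the gains, noise levels and ADC depth are made up: each photosite is read through a high gain and a low gain, and the two reads are merged so you get clean shadows without losing the highlights.

```python
import numpy as np

# Conceptual toy, not ARRI's actual pipeline: made-up gains, noise and ADC depth.
rng = np.random.default_rng(0)
scene = np.logspace(0, 3.5, 8)            # scene luminance spanning ~11.5 stops

gain_hi, gain_lo, full_well = 16.0, 1.0, 4095.0   # hypothetical 12-bit ADC

def read(scene, gain, read_noise):
    """Simulate one readout: amplify, add read noise, clip at the ADC ceiling."""
    noisy = scene * gain + rng.normal(0.0, read_noise, scene.shape)
    return np.clip(noisy, 0.0, full_well)

hi = read(scene, gain_hi, read_noise=2.0)   # clean shadows, clipped highlights
lo = read(scene, gain_lo, read_noise=2.0)   # noisier shadows, intact highlights

# Merge: trust the high-gain sample until it nears clipping, then switch to the
# low-gain sample rescaled into the same units - extending DR by ~4 stops here.
combined = np.where(hi < 0.9 * full_well, hi / gain_hi, lo / gain_lo)
print(np.round(combined, 1))
```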
  22. Were there any hacks for the G series of cameras?
  23. @tupp You raise a number of excellent points, but have missed the point of the test. The overall context is that for a viewer, sitting at a common viewing distance, the difference won't be discernible. This is why the comparison is about perceptual resolution and not actual resolution.
     Yedlin claims that the video will appear 1:1, which I took to mean that it wouldn't be a different size, and which you have taken to mean that every pixel on his computer will appear as a single pixel on your/my computer without any impact on the surrounding pixels. Obviously this is false, as you have shown with your blown-up screen captures. This does not prove scaling though. As you showed, two viewers rendered different outputs, and I tried it in Quicktime and VLC and got two different results again. Problem number one is that the viewing software is altering the image (or at least all but one of the ones we tried). Problem number two is that we're both viewing the file from Yedlin's site, which is highly compressed. In fact, it is an h264 stream of 2.32 GB, something like 4 Mbps. The uncompressed file would have been 1192 Mbps and in the order of 600 GB, and not much smaller had he used lossless compression, so completely beyond any practical consideration. Assuming I've done my maths correctly, that's a compression ratio of something like 300:1 - a ratio at which you couldn't even hope for a pixel-not-destroyed image. (The rough arithmetic is sketched below.)
     The reason I bring up these two points is that they will also be true for the consumption of any media by the viewer that the test is about. There's no point arguing that his test is invalid because it doesn't apply to someone watching an uncompressed video stream on a screen that is significantly larger than the THX and SMPTE recommendations suggest, because, frankly, who gives a toss about that person? I'm not that person, probably no-one else here is that person, and if you are that person, then good for you, but it's irrelevant.
     You made a good point about 3CCD cameras, which I'd forgotten about, but even if you disagree about debayering and mismatched photosites and pixels, none of that stuff matters if the image is going to get compressed for digital distribution and then decoded by any number of decoders that will generate a different pixel-to-pixel readout. Essentially you're arguing about how visible something is at the step before it gets put through a cheese-grater on its way to the people who actually watch the movies and pay for the whole thing.
     As for why they make higher resolution cameras, there are two main reasons I can see. The first is that VFX folks want as much resolution as possible, as it helps keep things perceptually flawless after they mess with them. This is likely the primary reason that companies like ARRI are putting out higher resolution models. The second reason is that electronics companies are companies, and in a capitalist society companies exist to make money, and to do that you need to make people keep buying things, which is done through planned obsolescence and incremental improvements, such as getting everyone to buy 4K TVs, and then 4K cameras to go with those 4K TVs. This is likely the driver for all the camera manufacturers who also sell TVs, which is.... basically every consumer camera company. Not a whole lot of people buying a GH5 are doing VFX with it, although cropping in post is one relatively common exception to that.
     So, although I disagree with you on some of the technical aspects along the way, the fact that his test isn't "1:1" in whatever ways you think it should be is irrelevant, because people watch things after compression and after being decoded by unknown algorithms. That's not even taking into account image-processing witchcraft like Smooth Motion, which completely invents entirely new frames that make up half of what the viewer will actually see, or uncalibrated displays, etc. Yes, these things don't exist in theatres, but how many hours do you spend watching something in a theatre vs at home? The average person spends almost all their time watching on a TV at home, so the theatre percentage is pretty small.
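     The back-of-envelope arithmetic behind those figures, as a Python sketch. The 1080p frame size, 8-bit RGB, and ~24 fps are my assumptions, not stated in the post:

```python
# Rough check of the quoted figures: ~4 Mbps delivered vs ~1192 Mbps uncompressed.
width, height, bits_per_pixel, fps = 1920, 1080, 24, 24   # 8-bit RGB, ~24 fps

uncompressed_mbps = width * height * bits_per_pixel * fps / 1e6
print(f"uncompressed bitrate: {uncompressed_mbps:.0f} Mbps")      # ~1194 Mbps

file_gb, stream_mbps = 2.32, 4.0
duration_s = file_gb * 8000 / stream_mbps                         # GB -> Mbit -> seconds
print(f"implied runtime: {duration_s / 60:.0f} min")              # ~77 min

uncompressed_gb = uncompressed_mbps * duration_s / 8000
print(f"uncompressed size: {uncompressed_gb:.0f} GB")             # ~690 GB
print(f"compression ratio: {uncompressed_mbps / stream_mbps:.0f}:1")   # ~299:1
```

     This lands in the same ballpark as the sizes and ratio quoted in the post, allowing for rounding.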
  24. My preference was A, C, E, D, B. The more of these that I watch the more I realise I'm looking at the colour, and in this test I didn't like the green reflection that wasn't there in real life. Of course, as they were all shot in very neutral codecs this should be editable in post relatively easily prior to the 709 conversion. But any of these cameras can create a great image, as Tom said.