Everything posted by kye

  1. It does hark back to Deezid's point. Lots more aspects to investigate here yet. Interesting about the saturation of shadows - my impression was that film desaturated both shadows and highlights compared to digital, but maybe when people desaturate digital shadows and highlights they always do it overzealously? We absolutely want our images to be better than reality - the image of the guy in the car doesn't look like reality at all! One of the things that I see that makes an image 'cinematic' vs realistic is resolution, and specifically the lack of it. If you're shooting with a compressed codec then I think some kind of image softening in post is a good strategy. I've yet to systematically experiment with softening the image with blurs, but it's on my list. I'll let others comment on this in order to prevent groupthink, but with what I've recently learned about film, which one is which is pretty obvious. When you say 'compression', what are you referring to specifically? Bit-rate? Bit-depth? Codec? Chroma sub-sampling? Have you noticed exceptions to your 10-bit 422 14-stops rule where something 'lesser' had unexpected thickness, or where things above that threshold didn't? If so, do you have any ideas on what might have tipped the balance in those instances? Additive vs subtractive colours, and mimicking subtractive colours with additive tools, may well be relevant here, and I see some of the hallmarks of that mimicry almost everywhere I look. I did a colour test of the GH5 and BMMCC where I took shots of my face and a colour checker with both cameras, including every colour profile on the GH5. I then took the rec709 image from the GH5 and graded it to match the BMMCC as well as every other colour profile from the GH5. In EVERY instance I saw adjustments being made that (at least partially) mimicked subtractive colour. I highly encourage everyone to take their camera, point it at a colourful scene lit with natural light, take a RAW still image and then a short video clip in their favourite colour profile, and then try to match the RAW still to the colour profile. We talk about "just doing a conversion to rec709" or "applying the LUT" like it's nothing - it's actually applying a dozen or more finely crafted adjustments created by professional colour scientists. I have learned an incredible amount by reverse-engineering these things. It makes sense that the scopes draw lines instead of points; that's also why the vectorscope looks like triangles and not points. One less mystery 🙂 I'm happy to re-post the images without the noise added, but you should know that I added the noise before the bit-depth reduction plugin, not after, so the 'dirtying' of the image happened during the compression, not from adding the noise. I saw that. His comments about preferring what we're used to were interesting too. Blind testing is a tool that has its uses, and we don't use it nearly enough.
  2. I've compared 14-bit vs 12-bit vs 10-bit RAW using ML, and based on the results of my tests I don't feel compelled to even watch a YT video comparing them, let alone do one for myself, even if I had the cameras just sitting there waiting for it. Have you played with various bit-depths? 12-bit and 14-bit are so similar that it takes some pretty hard pixel-peeping to be able to tell a difference. There is one, of course, but it's so far into diminishing returns that the ROI line is practically horizontal, unless you were doing some spectacularly vicious processing in post. I have nothing against people using those modes, but it's a very slight difference.
  3. It might be - that's interesting. I'm still working on the logic of subtractive vs additive colour and I'm not quite there enough to replicate it in post. Agreed. In my bit-depth reductions I added grain to introduce noise and get the effects of dithering: "Dither is an intentionally applied form of noise used to randomize quantization error, preventing large-scale patterns such as color banding in images. Dither is routinely used in processing of both digital audio and video data, and is often one of the last stages of mastering audio to a CD." Thickness of an image might have something to do with film grain, but that's not what I was testing (or trying to test, anyway). Agreed. That's why I haven't been talking about resolution or sharpness, although maybe I should be talking about reducing resolution and sharpness, as maybe that will help with thickness? Obviously it's possible that I made a mistake, but I don't think so. Here's the code: Pretty straightforward. Also, if I set it to 2.5 bits, then this is what I get: which looks pretty much like what you'd expect. I suspect the vertical lines in the parade are just an algorithmic artefact of quantised data. If I set it to 1 bit then the image looks like it's not providing any values between the standard ones you'd expect (black, red, green, blue, yellow, cyan, magenta, white). Happy to hear if you spot a bug. Also, maybe the image gets given new values when it's compressed? Actually, that sounds quite possible... hmm. I wasn't suggesting that a 4.5-bit image pipeline would give that exact result, more that we could destroy bit-depth pretty severely and the image didn't fall apart, thus it's unlikely that thickness comes from the bit-depth. Indeed there is, and I'd expect there to be! I mean, I bought a GH5 based partly on the internal 10-bit! I'm not regretting my decision, but I'm thinking that it's less important than I used to think it was, especially without using a log profile like I also used to do. Essentially the test was to go way too far (4.5 bits is ridiculous) and see if that had a disastrous effect, which it didn't seem to do. If we start with the assumption that cheap cameras create images that are thin because of their 8-bit codecs, then by that logic a 5-bit image should be razor thin and completely objectionable, but it wasn't, so it's unlikely that the 8-bit property is the one robbing cheap cameras' images of their thickness.
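  The actual code is a DCTL that runs in Resolve, but for anyone who wants to poke at the same idea outside Resolve, here's a rough numpy sketch of the rounding I described - not the plugin itself; the function name, grain amount and the ramp example are just illustrative:

```python
import numpy as np

def reduce_bit_depth(rgb, bits=4.5, grain_strength=0.0):
    """Quantise normalised RGB (0-1 floats) to 2**bits levels per channel.

    If grain_strength > 0, noise is added *before* the rounding so it acts as
    dither, breaking up banding the same way film grain did in my tests.
    """
    levels = 2.0 ** bits  # fractional bit depths allowed, e.g. 4.5 -> ~22.6 levels
    if grain_strength > 0:
        rgb = rgb + np.random.normal(0.0, grain_strength, rgb.shape)
    quantised = np.round(np.clip(rgb, 0.0, 1.0) * levels) / levels
    return np.clip(quantised, 0.0, 1.0)

# Example: a smooth grey ramp bands badly without dither, far less with it
ramp = np.tile(np.linspace(0, 1, 1024), (3, 1)).T.reshape(1024, 1, 3)
banded   = reduce_bit_depth(ramp, bits=4.5)
dithered = reduce_bit_depth(ramp, bits=4.5, grain_strength=0.02)
print(len(np.unique(banded)), "distinct levels without dither")
```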
  4. The question we're trying to work out here is what aspects of an image make up this subjective thing referred to by some as 'thickness'. We know that high-end cinema cameras typically have really thick looking images, and that cheap cameras typically do not (although there are exceptions). Therefore this quality of thickness is related to something that differs between these two scenarios. Images from cheap cameras typically have a range of attributes in common, such as 8-bit, 420, highly compressed, cheaper lenses, less attention paid to lighting, and a range of other things. However, despite all these limitations, the images from these cameras are very good in some senses. A 4K file from a smartphone has a heap of resolution, reasonable colour science, etc, so it's not like we're comparing cinema cameras with a potato. This means that the concept of image thickness must be fragile, otherwise consumer cameras would capture it just fine. If something is fragile, and is only just on the edge of being captured, then if we take a thick image and degrade it in the right ways, the thickness should evaporate with the slightest degradation. The fact that I can take an image, output it at 8 bits and at 5 bits, and not see a night-and-day difference means I must assume one of three things: (1) the image wasn't thick to begin with; (2) it is thick at both 8 bits and 5 bits, and therefore bit-depth doesn't matter that much; or (3) it is thick at 8 bits but not at 5 bits and people just didn't notice, in a thread specifically about this. I very much doubt that it's #3, because I've had PMs from folks who I trust saying it didn't look much different. Maybe it's #1, but I also doubt that, because we're routinely judging the thickness of images via stills from YT or Vimeo, which are likely to be 8-bit, 420, and highly compressed. The images of the guy in the car that look great are 8-bit. I don't know where they came from, but if they're screen grabs from a streaming service then they'll be pretty poor quality too. Yet they still look great. I'm starting to think that maybe image thickness is related to the distribution of tones within a HSL cube, and some areas being nicer than others, or there being synergies between various areas and not others.
  5. If I'm testing the resolution of a camera mode then I typically shoot something almost stationary, open the aperture right up, focus, then stop down at least 3 stops, normally 4, to get to the sweet spot of whatever lens I'm using, and also to make sure that if I move slightly the focal plane is deep enough. Doing that with dogs might be a challenge!
  6. Actually, the fact that an image can be reduced to 5 bits and not be visibly ruined means that the bits aren't as important as we all seem to think. A bit-depth of 5 bits is equivalent to taking an 8-bit image and only using 1/8th of the DR, then expanding that out. Or shooting a 10-bit image, only exposing using 1/32 of that DR, and expanding that out. Obviously that's not something I'd recommend, and it's worth remembering that I applied a lot of noise before doing the bit-depth reduction, but the idea that image thickness is related to bit-depth seems to be disproven. I'm now re-thinking what to test next, but this was an obvious thing and it turned out to be wrong.
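  The equivalence is just counting levels - a trivial check:

```python
# 5 bits gives 32 levels per channel, which is the same number of distinct
# values you get by using 1/8th of an 8-bit range (or 1/32nd of a 10-bit
# range) and then stretching it back out to full range.
print(2 ** 5)           # 32 levels at 5 bits
print((2 ** 8) // 8)    # 32 levels from 1/8th of an 8-bit range
print((2 ** 10) // 32)  # 32 levels from 1/32nd of a 10-bit range
```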
  7. Which do you agree with - that film has poor DR, or that Canon DSLRs do? I suspect you're talking about film, and this is something I learned about quite recently. In Color and Mastering for Digital Cinema, Glenn Kennel shows density graphs for both negative and print films. The negative film graphs show the 2% black, 18% grey and 90% white points all along the linear segment of the graph, with huge amounts of leeway above the 90% white. He says "The latitude of a typical motion picture negative film is 3.0 log exposure, or a scene contrast of 1000 to 1. This corresponds to approximately 10 camera stops". The highlights extend into a very graceful highlight compression curve. The print-through curve is a different story, with the 2% black, 18% grey and 90% white points stretching across almost the entire DR of the film. In contrast to the negative film, where the range from 2-90% takes up perhaps half of the mostly-linear section of the graph, in the print-through curve the 2% sits very close to clipping, the region between 18% and 90% encompasses the whole shoulder, and the 90% is very close to the other flat point on the curve. My understanding is that the huge range of leeway in the negative is what people refer to as "latitude", and this is where film's reputation for having a large DR comes from, because that part is true. However, if you're talking about mimicking film, there was only a very short period in history where you might shoot on film but process digitally, so you should also take into account the print film positive that would have been used to turn the negative into something you could actually watch. Glenn goes on to discuss techniques for expanding the DR of the print-through by over-exposing the negative and then printing it differently, which does extend the range in the shadows below the 2% quite significantly. I tried to find some curves online to replicate what is in the book but couldn't find any. I'd really recommend the book if you're curious to learn more. I learned more from reading the first few chapters than I have from reading free articles on and off for years.
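  For anyone wondering how the quoted figures line up, converting a log exposure range into stops is just this (a trivial check of the arithmetic, not something from the book):

```python
import math

log_exposure_range = 3.0                   # Kennel's latitude figure for negative film
contrast_ratio = 10 ** log_exposure_range  # 1000:1 scene contrast
stops = math.log2(contrast_ratio)          # each stop is a doubling of exposure
print(contrast_ratio, round(stops, 2))     # 1000.0, 9.97 -> roughly 10 camera stops
```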
  8. That's what I thought - that there wasn't much visible difference between them and so no-one commented. What's interesting is that I crunched some of those images absolutely brutally. The source images were (clockwise starting top left) 5K 10-bit 420 HLG, 4K 10-bit 422 Cine-D, and 2x RAW 1080p images, which were then graded, put onto a 1080p timeline, and then degraded as below: the first image had film grain applied, then a JPG export; the second had film grain, then RGB values rounded to 5.5-bit depth; the third had film grain, then RGB values rounded to 5.0-bit depth; the fourth had film grain, then RGB values rounded to 4.5-bit depth. The giveaway is the shadow near the eye on the bottom-right image, which at 4.5 bits is visibly banding. What this means is that we're judging image thickness via an image pipeline that isn't that visibly degraded by having the output at 5 bits per channel. The banding is much more obvious without the film grain I applied, and for the sake of the experiment I tried to push it as far as I could. Think about that for a second - almost no visible difference when making an image 5 bits per channel at a data rate of around 200Mbps (each frame is over 1Mb).
  9. No thoughts on how these images compare? It's not a trick..
  10. Ok, what do we think about how thick these images are? [four images compared]
  11. I'm not sure if it's the film stock or the 80s fashion that really dominates that image, but wow... the United Colors of Benetton of the 80s! Hopefully Ray-Bans - I think they're one of the only 80s-approved sunglasses still available. 80s music always sounds best on stereos also made in the 80s. I think there's something we can all take away from that 🙂 Great images. Fascinating comments about the D16 having more saturated colours on the Bayer filter. The spectral response of film vs digital is something that Glenn Kennel talks about a lot in Color and Mastering for Digital Cinema. If the filters are more saturated then I would imagine that they're either further apart in frequency or they are narrower, which would mean less cross-talk between the RGB channels. This paper has some interesting comparisons of RGB responses, including 5218 and 5246 film stocks, BetterLight and Megavision digital backs, and Nikon D70 and Canon 20D cameras: http://www.color.org/documents/CaptureColorAnalysisGamuts_ppt.pdf The digital sensors all have hugely more cross-talk between RGB channels than either of the film stocks, which is interesting. I'll have to experiment with doing some A/B test images. In the RGB mixer it's easy to apply a negative amount of the other channels to the output of each channel, which should in theory simulate a narrower filter (there's a rough sketch of the idea below). I'll have to read more about this, but it might be time to start posting images here and seeing what people think. I wonder how much having a limited bit-depth is playing into what you're saying. For example, say we get three cameras: one with high DR and low colour separation, a second with high DR and higher colour separation, and a third with low DR and high colour separation. We take each camera, film something, then take the 10-bit files from each and normalise them. The first camera will require us to apply a lot of contrast (stretching the bits further apart) and also lots of saturation (stretching the bits apart), the second requires contrast but less saturation, and the third requires no adjustments. This would mean the first would have the least effective bit-depth once in 709 space, the second would have more, and the third the most. This is something I can easily test, as I wrote a DCTL to simulate bit-depth issues, but it's also something that occurs in real footage, like when I did everything wrong and recorded this low-contrast scene (due to cloud cover) with C-Log in 8-bit... Once you expand the DR and add saturation, you get something like this: It's easy to see how the bits get stretched apart - this is the vectorscope of the 709 image: A pretty good example of not having much variation within single hues due to low contrast and low bit-depth. I just wish it was coming from a camera test and not from one of my real projects!
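  Here's a minimal numpy sketch of what I mean by the RGB mixer idea - the 3x3 matrix with negative off-diagonal entries stands in for a sensor with less cross-talk; the exact numbers are only placeholders to experiment with, not anything measured:

```python
import numpy as np

# Channel mixer that subtracts a little of the other channels from each output
# channel, roughly mimicking narrower colour filters / less cross-talk.
# Rows are output channels, columns are input channels; each row sums to 1.0
# so neutral greys stay neutral.
mix = np.array([
    [ 1.20, -0.10, -0.10],   # R_out = 1.2*R - 0.1*G - 0.1*B
    [-0.10,  1.20, -0.10],   # G_out
    [-0.10, -0.10,  1.20],   # B_out
])

def apply_mixer(rgb, matrix=mix):
    """Apply the 3x3 mixer to an (..., 3) array of linear RGB values."""
    out = rgb @ matrix.T
    return np.clip(out, 0.0, 1.0)

# A muted orange gets pushed further from grey, i.e. more colour separation,
# while a neutral grey is left untouched.
print(apply_mixer(np.array([0.6, 0.4, 0.3])))
print(apply_mixer(np.array([0.5, 0.5, 0.5])))
```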
  12. Another comparison video... spoiler: he plays a little trick and mixes in EOS R 1080p and 720p footage with the 8K, 4K and 1080p from the R5. As we've established, there is a difference, but it's not that big once it's downscaled to 1080p and put through the YT compression.
  13. A Canon DSLR with ML is probably an excellent comparison image-wise. Canon / ML setups are similar to film in that they both likely have: poor DR; aesthetically pleasing but high levels of ISO noise; nice colours; low resolution; no compression artefacts. I shot quite a bit of ML RAW with my Canon 700D and the image was nice and very organic.
  14. Also, some situations don't lend themselves to supports because of cramped conditions or the need to be mobile. Boats for example:
  15. Depends where you go... my shooting locations (like Kinkaku-ji) are often decorated with signs like this: which restrict getting shots like this: while shooting in situations like this: I lost a Gorillapod at the Vatican because they don't allow tripods (despite it being maybe 8" tall!), and although they offered to put it in a locker for me, our tour group was going in one entry and out another where the tour bus was meeting us, so there wouldn't have been time to go back for it afterwards, and the bus had already left when we got to security, so that was game over. Perhaps the biggest challenge is that you can never tell what is or is not going to be accepted at any given venue. I visited a temple in Bangkok and security flagged us for further inspection (either our bags or for not having modest enough clothing), which confused us as we thought we were fine, and then the people doing the further inspection waved us through without even looking at us. Their restrictions on what camera equipment could be used? 8mm film cameras were fine, but 16mm film cameras required permission. Seriously. This was in 2018. I wish I'd taken a photo of it. With rules as vague / useless as that, being applied without any consistency whatsoever, my preferred strategy is to not go anywhere near wherever the line might be, and so that's why I shoot handheld with a GH5 / Rode VideoMic Pro and a wrist-strap and that's it. Even if something is allowed, if one over-zealous worker decides you're a professional or somehow taking advantage of their rules / venue / cultural heritage / religious sacred site, then someone higher up isn't going to overturn that judgement in a hurry, or without a huge fuss, so I just steer clear of the whole situation. It really depends where you go and how you shoot. I've lost count of how many times a vlogger has been kicked out of a public park while filming right next to parents taking photos of their kids with their phones because security thought their camera was "too professional looking", or street photographers have been hassled by private security for taking photos in public, which is perfectly legal. One of the YouTubers I watch, who makes music videos, was on a shoot in a warehouse in LA that almost got shut down because you need permits to shoot anything there, even on private property behind closed doors!
  16. Sounds like you aren't one of the people that needs IBIS, and as much as I love it for what I do, it's always better to add mass or add a rig (shoulder rig, monopod, tripod, etc) than to use OIS or IBIS. I use a monopod when I shoot sports, but I also need IBIS as I'm shooting at 300mm+ equivalent, so it's the difference between usable and not. I'd also like to use a monopod or even a small rig for my travel stuff, but venues have all kinds of crazy rules about such things and I've had equipment confiscated before (long story) due to various restrictions, so hand-held is a must.
  17. This is interesting. If colour depth is colours above a noise threshold (as outlined here: https://www.dxomark.com/glossary/color-depth/ ) then that raises a few interesting points: (1) humans can hear/see slightly below the noise floor, and noise of the right kind can be used to increase resolution beyond the bit-depth (ie, dither); (2) that would explain why different cameras have slightly different colour depths rather than power-of-two increments; (3) it should be affected by downsampling, both because downsampling reduces noise and because it can average four values, artificially creating the values in between (there's a quick sketch of this below); and (4) it's the first explanation I've seen that goes anywhere towards explaining why colours go to crap when ISO noise creeps in, even if you blur the noise afterwards. I was thinking of inter-pixel treatments as possible ways to cheat a little extra, which might work in some situations. Interestingly, I wrote a custom DCTL plugin that simulated different bit-depths, and when I applied it to some RAW test footage it only started degrading the footage noticeably when I got it down to under 6 bits, where it started adding banding to the shadow region on the side of the subject's nose, where there are flat areas of very smooth gradations. It looked remarkably like bad compression, but obviously it was only colour degradation, not bitrate. True, although home videos are often filmed in natural light, which is the highest quality light of all, especially if you go back a decade or two, when film probably wasn't sensitive enough to film much after the sun set and electronic images were diabolically bad until very, very recently!
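  On the downsampling point, here's a quick numpy sketch showing how averaging a 2x2 block of quantised-but-noisy pixels typically produces values in between the quantisation steps (the patch size, noise level and code values are just illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# A flat patch whose true value falls between 8-bit codes, with a little noise
true_value = 100.4 / 255.0
noisy = true_value + rng.normal(0.0, 2.0 / 255.0, size=(4, 4))
quantised = np.round(np.clip(noisy, 0, 1) * 255) / 255   # only whole 8-bit codes survive

# 2x2 block average (i.e. a 4:1 downsample) lands between the codes
downsampled = quantised.reshape(2, 2, 2, 2).mean(axis=(1, 3))

print(np.unique(quantised * 255))   # whole code values around 100
print(downsampled * 255)            # block averages, typically fractional and nearer 100.4
```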
  18. IBIS is one of those things that gets lots of hype, which means that many people think it's great, and then they get past the hype and think it's worthless. I think IBIS is a thing that some people really do need, while other people think they need it but haven't really explored all the options, and when they realise they didn't need it they assume no-one else does either. I've shot with and without IBIS on the same lenses at the same time (doing camera comparisons) and I also regularly shoot in situations where IBIS gives normal hand-held looking motion because it's turning footage that would be completely unusable into footage with some motion in it. IBIS is like everything else in film-making, and in life more generally: it's neither good nor evil, but somewhere in between, and is context dependent. Anyone making sweeping generalisations just doesn't understand the subject well enough to realise that everything has pros and cons.
  19. It's not my terminology - it's what I've put together from reading many threads about higher-end cameras vs cheaper ones and how people try to quantify the certain X-factor that money can buy you. I've heard it enough over the years to get a sense that it's not being used randomly, nor only by one or two isolated people. I've seen first-hand that there are people who have much more acute senses than the majority of people. Things like synaesthesia and tetrachromacy exist, along with many others that we've probably never heard of, so just by pure probability there are people who can see better than I can; thus my desire to try and work out what it might be in a specific sense. Interesting. Downsampling should give a large advantage in this sense then. I am also wondering if it might be to do with bad compression artefacts etc. This was on my list to test. Converting images to B&W definitely ups the perceived thickness - that was on my radar but I'm still not sure what to do about it. Interesting concept from DXO - I learned a few things from that. Firstly, that my GH5 is way better than I thought, being only 0.1 down from the 5D3, and also that it's way above my Canon 700D, which I thought was a pretty capable stills camera. People always said the GH5 was weak for still images and I kind of just believed them, but I'll adjust my thinking! Yes, and then streaming video compression takes the final swing with the cripple hammer. This has long been a question in my mind about how we can evaluate the differences between a $50k camera and a $100k camera using a streaming service that uses compression that a $500 Canon camera would look down its nose at. I suspect that it's a case of pushing and pulling the information in post. For example, if you shoot a 10-bit image in LOG that puts the DR between code values 300 and 800 (on the 10-bit scale) and then you expand that out to the full range, you've stretched your bits to be twice as far apart, so it's now the equivalent of a 9-bit image (there's a quick demonstration of this below). Then we select skin tones and give them a bit more contrast (more stretching) and then more saturation (more stretching), etc. It's not hard to take an image that was ok to begin with and break it because the bits are now too far apart. We all know that, but the thing that I don't think gets as much thought is that the image is degrading with every operation, long before it 'breaks'. Even an Alexa shooting RAW has limits, and those limits aren't that far away. Have a look at a latitude test of an Alexa and read the commentary and you'll note that the IQ starts to go visually downhill when you're overexposing by even a couple of stops... and this is from one of the best cameras in the world! My take-away from that is that I need to shoot as close to the final output as possible, so I switched from shooting HLG to Cine-D. This means that I'm not starting with a 10-bit image that I have to modify (ie, stretch) as the first step in the workflow. Agreed. Garbage in = garbage out. Like I said above, if you're having to do significant exposure changes then you're throwing bits away as the first step in your workflow. I think there's a couple of things in here. You're right that accurate colours aren't the goal - Sony proved that. But my understanding of the Portrait Score wasn't about accuracy, it was about colour nuance: "The higher the color sensitivity, the more color nuances can be distinguished." What you do with that colour nuance (or lack of it) determines how appealing it will look.
Of course, any time you modify an image, you're stretching your bits apart, so in that sense it's better to do that in-camera where it's processed uncompressed and (hopefully) in full bit-depth.
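The stretching argument is easy to demonstrate with a toy example in numpy (the numbers are only illustrative): quantise a ramp into 10-bit code values 300-800, normalise it back to full range, and count how many distinct levels survive:

```python
import numpy as np

# A smooth ramp 'recorded' into 10-bit code values 300..800, as a log curve might
signal = np.linspace(0.0, 1.0, 100000)
recorded_codes = np.round(300 + signal * 500)

# 'Grading' it back out to full range doesn't create new information
expanded = (recorded_codes - 300) / 500 * 1023

levels = len(np.unique(expanded))
print(levels, "distinct levels ~", round(np.log2(levels), 2), "bits")
# ~501 levels, i.e. roughly 9 bits of tonal resolution spread over the full range
```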
  20. One of the elusive things that film and images from high-end cameras have is often called "thickness". I suspect that people who talk about "density" are also talking about the same thing. It's the opposite of the images that come from low quality cameras, which are often described as "thin", "brittle" and "digital". Please help me figure out what it is, so perhaps we can 'cheat' a bit of it in post. My question is - what aspect of the image shows thickness/thinness the most? Skintones? Highlights? Strong colours? Subtle shades? Sharpness? Movement? Or can you tell image thickness from a still frame? My plan is to get blind feedback on test images, including both attempts to make things thicker and also doing things to make the image thinner, so we can confirm what aspect of the image really makes the difference.
  21. Which BMPCC? 2K? 4K? 6K? They're all different sensor sizes... you need to be specific.
  22. Panasonic GH6: They'll likely stick with DFD, and while it is painful now, it is the future. PDAF and DPAF are mechanisms that tell the camera which direction and how far to go to get an object in focus, but not which object to focus on. DFD will eventually get those things right, but will also know how to choose which object to focus on. I see so many videos with shots where the wrong object is perfectly in focus, which is still a complete failure - and this is from the latest Canon and Sony cameras. Good AF is still a fiction being peddled by fanboys/fangirls and bought-and-paid-for ambassadors. Totally agree - as a GH5 user I'm not interested in a modular-style cinema camera, and if I were, then I'd pick up my BMMCC and use that, with its internal ProRes and uncompressed cDNG RAW. It might sell well. The budget end of the modular cine-camera market is starting to heat up; as people get used to things like external monitors (for example to get ProRes RAW), external storage (like SSDs on the P4K/P6K) and external power (to keep the P4K/P6K shooting for more than 45 minutes), modularity won't be as unfamiliar as it used to be. All else being equal, making a camera that doesn't have to have an LCD screen (or two if it has a viewfinder) should make the same product cheaper to manufacture.
  23. Is the DSLR revolution finished? If so, when did it finish? Where are we now? It seems to me like it started when ILCs with serviceable codecs became affordable for mere mortals. Now we have cinema cameras shooting 4K60 and 1080p120 RAW (and more) for the same kind of money. Is this the cinema camera revolution? What is next? Are there even any more revolutions to have (considering that we have cameras as good as Hollywood did 10 years ago)?
  24. This video might be of interest for matching colour between different cameras. He doesn't get a perfect match by any means, but it's probably 'good enough' for many, and, probably more importantly, it's a simple, non-technical approach using only the basic tools.