Everything posted by kye

  1. Canon EOS R5C

    Hot damn! You mean we can shrink the camera bodies by just lopping bits off? Where's my hacksaw!!
  2. Most sensors do a full readout at the highest bit-depth at 24/25/30p, but at higher frame rates they typically reduce the bit-depth of the readout. Assuming this is done to save on data rates and processing, it means manufacturers have been making progress - they just spent it all on resolution instead of bit-depth. Yet another hidden cost of this preposterous resolution pissing contest that the entire industry is running, with consumers cheering all the way down.
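As a rough sanity check, readout bandwidth scales linearly with resolution, bit-depth, and frame rate, which is why one of them usually gets sacrificed. A minimal sketch (the resolutions and bit-depths below are illustrative, not from any particular sensor's spec sheet):

```python
# Back-of-envelope sensor readout rates (illustrative numbers only).
def readout_gbps(width, height, bit_depth, fps):
    """Raw readout data rate in gigabits per second."""
    return width * height * bit_depth * fps / 1e9

# A hypothetical 8K full readout at 30p, 12-bit:
full = readout_gbps(8192, 4320, 12, 30)
# The same sensor at 120p, dropped to 10-bit - still far more data:
fast = readout_gbps(8192, 4320, 10, 120)
print(round(full, 1), round(fast, 1))  # 12.7 42.5
```

Even after dropping two bits, the 120p readout is over three times the data of the 30p one, so something has to give.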
  3. Availableism - nice! It reminds me of approaches like Dogme95, which integrate that sort of element and go a lot further with it as well. Sadly, I'm not surprised about the grant decision. I've been to enough film festivals to know that the thinking is often enormously traditional / blinkered, and also motivated by who-you-know and all that crap too. One student film festival I went to had a film in the documentary category that was old people talking about their sex lives - it was very entertaining, the old folks were all very cute, and it definitely deserved to win an award for concept / direction / producing, which it did. However, it was shot terribly: there were booms in shot on half-a-dozen occasions, and the camera wasn't held steady and was bumped significantly and obviously a couple of times. Yet it also won best cinematography and best sound, which was completely ridiculous.

One of the things that dominates the overall architecture of how traditional films are made is that many of the people involved are not critical thinkers: they learned how to perform their role, but they don't understand the other roles, the overall process, or even how to make a film. There are often territorial disputes as people defend their patch, etc. The film-making process is a factory production line, and most workers in a factory don't understand that it's possible to redesign a factory and make it work better, let alone be OK with it when someone suggests it.

Yeah, that's a big drawcard of the FX3 that makes it stand out. I don't understand why there aren't more cameras with the native ISOs further apart. Most cameras have lower base ISOs and a much smaller interval between the lower native ISO and the higher native ISO, which combine to give the FX3 a huge advantage in low light.
  4. Never mind that colourists working on high-end material still regard noise reduction as a critical tool for every shoot, including the ones where all the material was exposed properly in-camera and recorded at native ISO! It's fascinating to download cinema camera footage for the first time and see that it has more noise in it than a mirrorless low-light test. I got a bit of a shock when I saw that for the first time.
  5. I agree, but would go further and say that not only will AI radically reduce the cost to make a film you could make now, it will also make films possible that really aren't possible (or aren't practically possible) now. So in that sense it doesn't just reduce costs, it expands the possibilities to be practically infinite.
  6. I agree - it wasn't that different. I've spoken at length to a few people in PMs about this and find it a fascinating subject. I see modern film-making as having three pivotal points. There are probably others, but these are the ones I'm aware of.

1) The French New Wave. This was (how I see it, anyway) an exploration of new possibilities of 16mm film that weren't possible with 35mm film. In many ways they took the traditional "coverage" of Hollywood and radically expanded it to include almost all the techniques used in modern film-making.

2) The DSLR revolution. This is pretty much what this forum is for, and what we talk about. The tricky thing is that it didn't deliver what people thought it would. They thought it would mean that anyone could film a movie with a camera and no money at all, and that there would be another revolution in cinema, an FNW part 2, but this didn't happen. What we got instead, and this is my impression, was TikTok, YouTube, influencers, live streaming / Twitch, etc, which are all new forms of film-making (regardless of how creative or worthwhile you might think they are) because they all involve video recordings made into final products and distributed to an audience. The promise was real though, and people like Noam Kroll have mapped out a path for up-ending a lot of the traditional processes.

One particular process that he's put forward, which I really like and which WOULD change film-making, is:
  • Come up with a concept for the film, restricted to things you already have access to (cast, locations, etc)
  • Cast the film
  • Work out half-a-dozen or so sections with major plot points for the film, even in very high-level terms
  • Workshop the characters with the actors, and develop a concept for the first section
  • Shoot the first section, involving lots of improvisation from the cast, potentially without even writing a script beforehand
  • Edit the first section, see what worked and what didn't, concentrating on the performances
  • Develop/update the concept for the second section
  • Shoot and edit that one, once again with improvisation and a focus on performances

IIRC Noam shot a film like this and ended up giving the two main actors writing credits too. He mentioned that he made a lot of adjustments to later sections based on what worked and didn't in earlier ones, and I have a vague memory that during some improvisational parts the actors did on location, he even ended up replacing some major plot points with more interesting ones inspired by the process. This is the kind of thing that would change the future of film-making. Not a new camera body.

3) AI. The third pivotal point I foresee is AI. Anyone who has watched any (decent) anime will know that anime writers have enjoyed the freedom to create worlds without any practical limitation for over a century now. Until the last few decades no-one could do that with realistic motion pictures, and right now it's mostly limited to those with huge budgets, or with incredible skill and huge amounts of free time. AI will change that.

The ability to shoot anything you like, however flawed, and have AI add and remove and bend and change what you shot into whatever your imagination can come up with will be groundbreaking, just like the freedom that anime artists have enjoyed almost exclusively until recently. When AI can create Inception from your iPhone footage, things will be unleashed. However, much like we saw with the DSLR revolution, there will be other forms of film-making invented too, like deep-fakes, alternate histories, and who-knows-what else.

No. I watch a lot of YT. Far more than streaming sites. I do try to include lots of film-making stuff, as well as a great many other interesting and niche things. The world is a fascinating place!!
  7. Sure - bigger is better. Easy. The real test would be how they evaluated a large, complicated-looking but crap setup vs a small one with a much better image. Also, in this imaginary test, you can't pick something old-looking vs something that looks much more modern, as that's another giveaway. Size is so correlated with image quality that it's difficult to think of counter-examples. Even the big but really old cinema / ENG cameras are either still actually really good (and 1080p), or so horrifically dated they wouldn't fool many (old Betamax cameras, for example). I'm not saying there are no counter-examples, but they're a very, very small percentage of the possible comparisons. Also, people pretty much know that the longer the lens, the more zoomed in it is.
  8. Absolutely. I do all sorts of side-by-side tests and nit-pick minor differences in colour grading techniques too, but it's important to always remember to make final judgements on the final result, rather than on some artificial situation. This is why I encourage people to grade the footage, export it, and upload it to whatever platform they'll use for delivery. It doesn't matter how the image looks in your NLE; it's how the viewers will see it that matters (and if it's a paid gig then obviously the producer/director/client need to be happy too). The only reasons I do these nit-picky tests are:
  • to keep familiar with shooting in between my trips (which is what I shoot)
  • to eke out the best results I can when shooting in difficult conditions with modest cameras
  • to eke out the nicest colour grading I can (spectacular colour is the culmination of a bunch of small tweaks, not a few big ones)
  • to test out new techniques and keep learning and improving my skill levels
  • to understand and optimise the many trade-offs that we're forced to make
I also do all these things with my cinematography, editing, sound design, and delivery. The whole pipeline matters.
  9. I'm pretty sure there are only 4 types of cameras:
  • Movie cameras: these are what they use to make movies
  • Big cameras, which are used by "professional photographers" or rich tourists
  • Cameras, which normal tourists use to take photos
  • Phones, which are used by adults to take photos of the family, or by millennials when they're awake and have left the house
There is another type of camera though. When you take any camera and point it at yourself in public, you instantly turn into a narcissist; the size of the camera no longer fits into the above categories, but is an indicator of how much better you are than everyone else.
  10. Glad to hear you're enjoying it! Some cameras record "super-whites", which are values above 100%. They can be recovered if you pull down the exposure so they come back into the "legal" range of 0-100%. IIRC my Canon XC10 does it, and my Panasonic GX85 definitely does it. Yet another reason to use colour space transforms rather than LUTs is that LUTs clip any values below 0% or above 100%, whereas colour transforms can access and use that data, keeping it available downstream. When I'm getting familiar with a camera, or I'm grading shots with a lot of dynamic range, I'll often pull the exposure down to see what's in the highlights and pull it up to see what's in the shadows, so that if there's anything relevant I can grade accordingly.
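A toy numeric sketch of the difference (an identity 1D "LUT" and a simple gain "transform" stand in for the real 3D operations; the values are illustrative):

```python
# Why a LUT loses super-whites but a mathematical transform keeps them.
# Real LUTs are 3D and real transforms are matrix + curve operations;
# this only illustrates the domain-clamping behaviour.

def lut_apply(value):
    # A LUT is only defined on [0, 1], so input is clamped first -
    # anything above 100% is gone for good.
    clamped = min(max(value, 0.0), 1.0)
    return clamped  # identity LUT for simplicity

def transform_apply(value, gain=1.0):
    # A colour space transform is defined for any input, so
    # out-of-range "super-white" values survive for later grading.
    return value * gain

super_white = 1.09  # e.g. a 109% super-white highlight
print(lut_apply(super_white))              # 1.0  -> detail clipped
print(transform_apply(super_white))        # 1.09 -> still there
print(transform_apply(super_white, 0.9))   # pull exposure down -> back in legal range
```

Pulling the exposure down before a LUT in the chain works for the same reason: the clamp only happens where the LUT sits.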
  11. Hidden camera footage of how people react to a small lens being used... the judgement is real.
  12. Interestingly, I watched this video which talks about how "No CGI" normally means "a shitload of invisible CGI". TLDW: one example shows Tom Cruise repeatedly saying they shot the fighter plane scenes with "no CGI", but the final images only contained planes that were completely added in post, or added in post to replace planes that were actually flown. None of the planes flown in the shoot were visible in the final image. "No CGI" indeed. Lots of other examples...
  13. I disagree - the budget is the budget; the split between production and post doesn't matter. If I take my iPhone and walk through a shopping centre recording a 90-minute single take, then go home and spend $50M turning that clip into an action film by replacing everything with VFX, then that's still a blockbuster film, I still filmed it, and the question is still relevant. The answer to the question is no, but that answer has nothing to do with the camera. This thread discussed the camera used to shoot the film because this forum only ever talks about cameras. You could start a thread asking if AI will use GMOs to fix poverty in sub-Saharan Africa and we would discuss the camera used to shoot AI videos.
  14. The latest test video from Matteo shows quite a difference in noise between the BMCC6K and BMPCC6K in the super-dark areas when he boosts the image up a bunch, so unless he did something wrong (which I doubt), it's noisier than the previous model. However, the noise isn't visible unless he boosts the image to reveal it, so it's likely only an issue if you're not able to expose properly and/or want to see in the dark like night-vision goggles. Of course, this is the internet, so every tiny detail must be completely removed from reality and then blown up beyond any sense of reasonableness (then everyone can latch onto the least significant issues and argue solely based on those!).
  15. Sounds like he needs to learn about colour management. The more I learn, the more I realise that "I can get better colour with X" really just means "I don't use colour management, and I prefer the LUT I happen to use for X over the LUT I happen to use for Y". In my recent attempt to match the GH5 to the OG BMPCC, I was surprised at how easy it was, and at how difficult I'd found it in previous attempts. It's taken me a long time to get here, but now that I am, I look at cameras fundamentally differently: they are just devices that capture an image (with whatever DR / noise they have) and process and compress that image (NR, sharpening, compression); then I pull it into Resolve and (assuming a documented colour space) I can do whatever I like, applying whatever colour science I like. The thing that matters most is the usability of the camera to capture things and work reliably, etc.

Half the amateur camera / colour grading videos are "how do I grade Sony" or "how do I get the best from Fuji", but the colourist videos talk about colour management, and then talk about technique and workflow. There's a reason for that.
  16. Just noticed one other thing - when you're converting from ACES to Rec709 you might want to apply some Gamut compression, otherwise the signal might just clip on output. You don't have to do it in the transform, you can do it beforehand if you like, it's just something to keep an eye on while you're grading 🙂
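For intuition, here's a toy soft-clip curve in the spirit of gamut/highlight compression. This is not Resolve's actual gamut-mapping math, just an illustration of rolling values off toward 1.0 instead of letting them hard-clip on output:

```python
import math

# Toy soft-clip: values below a knee pass through untouched, values above
# are compressed so they approach (but never reach) 1.0 instead of clipping.
# Illustrative curve only - real gamut compression works per-channel in a
# specific colour space with carefully chosen limits.
def soft_clip(x, knee=0.8):
    if x <= knee:
        return x
    headroom = 1.0 - knee
    # Exponential roll-off above the knee, asymptotic to 1.0.
    return knee + headroom * (1.0 - math.exp(-(x - knee) / headroom))

print(soft_clip(0.5))  # below the knee: unchanged
print(soft_clip(1.3))  # above the knee: compressed to just under 1.0
```

The key property is that the curve is continuous at the knee and monotonic, so out-of-range values keep their ordering instead of all flattening to the same clipped value.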
  17. That all seems correct. I use DaVinci as my intermediary, but I don't know of any arguments that put one above the other. Setting the Timeline Colour Space is the hidden one that often isn't mentioned, so good to see you're on top of that as well. Just to test it, I suggest pulling a bunch of footage from a range of cameras onto the same timeline and seeing how that feels. If you were to do lots of work with multiple cameras, I'd suggest the following:
  • Put all the clips from each camera into a Group in the Colour Page (this unlocks two extra node graphs - the Group Pre-Clip and Group Post-Clip)
  • Put the camera-to-ACES transform in the Group Pre-Clip
  • Put the ACES-to-Rec709 transform in the Timeline
That way the Clip graph can be copied easily from any clip to any other for anything you want to apply per-clip, and anything you want to apply across all clips in your timeline can go in the Timeline graph. Just a note that if you do this, the Timeline graph will apply to everything in the timeline, including titles / VFX / etc, which doesn't work for some folks.
  18. Insta360 just posted a teaser for a new action camera. The only things I can tell are that it has a flip-up selfie screen, that it has a Leica lens, and that they'll cost a billion dollars each because they appear to be manufacturing them in a particle accelerator. I'm also guessing it has AI in it, because of the red letters in the thumbnail, but then again, everything has AI in it these days, so....
  19. I think we're getting off track here.

1) The first job is getting the signal converted to digital. I pulled up a few random spec sheets for Sony sensors of different sizes and they all had the ADC built into the sensor, and they specifically state they're outputting digital data from the sensor. This effectively takes analog interference out of the design process, unless you're designing the sensor, which almost no-one is.

2) I am talking specifically about error correction - that is, sending digital data in such a way that even with errors in transmission, the errors can be detected and corrected. The ability to send digital data between chips without it accumulating errors is fundamental, and if you can't do it then the product is just as likely to get errors in the most-significant bits as the least-significant bits, which would effectively add white noise to the image at full signal strength.

3) I was saying that the deliberate manipulation of the digital signal before writing it to the card as non-de-bayered data (ie, RAW) is the area that I didn't know existed, and might be where some of the intangible differences come from. Certainly it's something to look into more.

4) Going back to the idea of spending money on training, I stand by that. Here's the problem: there is good info available for free online, but it's mixed in with ten tonnes of garbage and half-truths. How do you tell the difference? You can't, unless you already know the answer. So how do you get past this impasse? You pay money to a reputable organisation for training... To be frank, the majority of the benefit I got from the various training courses I've purchased has probably been from un-learning the vast quantity of crap that I swallowed from people online who looked like they knew what they were doing but really had just enough knowledge to be dangerous.
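To illustrate point 2, here's the simplest possible forward-error-correction scheme, a 3x repetition code. Real inter-chip links use far more efficient codes (CRCs, Hamming/ECC), but the principle is the same: redundancy lets the receiver detect AND correct errors instead of silently passing corrupted bits downstream:

```python
# Minimal forward error correction: a 3x repetition code.
def encode(bits):
    """Send each bit three times."""
    return [b for b in bits for _ in range(3)]

def decode(received):
    """Majority vote over each triple corrects any single flipped bit."""
    out = []
    for i in range(0, len(received), 3):
        triple = received[i:i + 3]
        out.append(1 if sum(triple) >= 2 else 0)
    return out

data = [1, 0, 1, 1]
sent = encode(data)          # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
sent[4] = 1                  # flip one bit in transit
assert decode(sent) == data  # the single-bit error is corrected
```

The cost is 3x the bandwidth for 1-bit-per-triple protection, which is why real systems use smarter codes; but any working scheme means corruption is corrected at the link level, not smeared into the image as noise.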
  20. That's not a brutal business perspective. This is a brutal business perspective: clients will absolutely be able to tell the difference between buying the better camera and buying the cheapest one and spending the extra on training - training on lighting, composition, movement, etc. In terms of what is "better"... in that much-referenced blind test with the GH4 in it, many of the Hollywood pros preferred the GH4 over high-end cinema cameras. Talking about the difference between two RED sensors is fine, but trying to apply that difference to the real world is preposterous.

Well, it's one of the following:
  • Processing in the analog domain before the ADC occurs
  • Noise reduction or other operations in the digital domain that DO change the digital output
  • Noise reduction or other operations in the digital domain that DO NOT change the digital output
The last one is simply error correction, and will be invisible. If it's the first one, then it makes sense to do noise reduction, but (to be perfectly honest) if you have 23 crossings between your sensor and the ADC chip then you deserve to be fired from the entire industry, and probably would be, because the image would look like ISO 10 billion. This leaves the middle one, which is explicitly what ARRI are doing, so that's what I think we should be talking about.
  21. That caught my eye too. The thing that made me really curious was that he said that on the BMPCC "that image signal crosses other things 23 times on that one board, and in those 23 crossings every time it crosses it gets a noise reduction afterwards because otherwise it would be a completely unrecognisable image afterwards". This makes no sense to me from my understanding and experience in designing and building digital circuit boards, so according to my understanding the statement is completely wrong. However, I trust that the statement is correct, because the source is infinitely more knowledgeable than I am. There is obviously something there that I don't understand (likely a great many things!), and this is something that isn't talked about anywhere.

It does make me wonder, though, if this might be the reason that some cameras look more analog than others, even when they're recording RAW. The caveat is that the differences we're used to seeing between sensors, for example between the OG BMPCC and the BMPCC 4K, might simply be due to subtle differences in colour science and colour grading. Sadly, the level of knowledge applied to most camera tests is practically zero, and it might be that with sufficient skill any RAW camera can be matched to any other RAW camera, with no difference at all beyond what happens downstream from the camera.

Another data point for RAW not really being RAW is that when ARRI released the Alexa 35, they talked about the processing that happens after the data is read from the sensor and before it is debayered. Source: https://www.fdtimes.com/pdfs/free/115FDTimes-June2022-2.04-150.pdf (page 52). Not only is there some stuff done between these two steps, there's a whole department for it! RAW isn't so RAW after all.....
  22. One thing I find a real challenge these days is the pace of real learning. Once you've watched a few dozen 5-15 minute videos that explain random pieces of a subject (likely sprinkled with misunderstandings and half-truths) then going to a source that is genuinely knowledgeable is often very difficult to watch, because not only do they start at the beginning and go slowly, they also repeat a bunch of stuff you've heard before. The pay-off is that they are actually a reliable source of information and that not only will they likely fill in gaps and correct mis-information, but they might actually change the way you think about a whole topic. I had this with a masterclass I did from Walter Volpatto - I completely changed my entire concept of colour grading. It was a revelation. It's actually a bit of a strange thing because I can see that lots of people think the way I used to, but there's no easy way to get them to flick that switch in their head, and when you try and summarise it, the statements sound sort-of vague and obvious and irrelevant.
  23. The other thing to mention, beyond what @IronFilm said, is that the thing that matters most for editing is whether the codec is ALL-I or IPB. To drastically oversimplify: ALL-I codecs store each frame individually, while IPB codecs define most frames as incremental changes to the previous frame. So if you want to know what a frame looks like in an ALL-I codec, you just decode that frame, but in an IPB one you have to decode previous frames and then apply the changes to them. In some cases, the software might have to decode dozens or hundreds of previous frames just to render one, so the simple task of cutting to frame X in the next clip becomes a challenge, and playing footage backwards becomes a nightmare. ProRes is always ALL-I; h264 and h265 CAN be ALL-I too, but it's not very common. This is probably the cause of almost all of the performance differences between them.
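A toy model of why IPB scrubs slower than ALL-I (the keyframe interval and decode rule are simplified; real IPB streams also have B-frames and frame reordering):

```python
# Random access to frame n in an IPB stream means decoding everything
# back to the last keyframe; in ALL-I every frame stands alone.
# Simplified model: keyframes every `keyframe_interval` frames, each
# other frame is a delta on the one before it.
def frames_to_decode(n, keyframe_interval):
    """How many frames must be decoded to display frame n (0-indexed)."""
    if keyframe_interval == 1:
        return 1  # ALL-I: decode just the frame you want
    return n % keyframe_interval + 1  # last keyframe + the deltas up to n

print(frames_to_decode(100, 1))   # ALL-I              -> 1
print(frames_to_decode(100, 48))  # IPB, GOP of 48     -> 5
print(frames_to_decode(47, 48))   # worst case in GOP  -> 48
```

Playing backwards is the worst case repeated every frame: each step back still requires decoding forward from the last keyframe, which is why IPB timelines choke in reverse.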