
kye

Members
  • Posts

    7,817
  • Joined

  • Last visited

Everything posted by kye

  1. Yeah, and me becoming the Kardashians is an outcome I think we can all agree nobody wants!!!
  2. Both of these are really useful and will be great... my GH5 isn't that wonderful in low light, so better NR would be welcome. And of course the GH5 needs AI superscaling because it's only 5K and everyone knows you need FF 8K, so that probably means I need MFT 16K to not be completely kicked off the internet!!
  3. Think about it like this. To get the 200Mbps 1080 file, what happens is:
      • Camera captures a 5K image
      • Camera downscales the 5K image to 1080
      • Camera compresses the 1080 to 200Mbps
    That's not that different to this workflow:
      • Camera captures a 5K image
      • Camera downscales the 5K image to 4K
      • Camera compresses the 4K to 200Mbps
      • Person downscales the 4K to 1080 in post
    Both of these image pipelines start with 5K, and both are limited to broadly similar levels of compression. We know that a 200Mbps 4K image won't be as great at 100% as a 200Mbps 1080 image at 100%, but when you downscale the 4K to 1080 it takes 4 pixels from the 4K image to make 1 pixel in the 1080 image, so the amount of data per 1080 pixel is broadly the same (rough numbers sketched below). There's also a point of diminishing returns with this stuff - try encoding an h264 file at a decent bitrate, then at double or triple that bitrate, and see what differences there are. You may find they're less than you think.
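    A rough sketch of the bits-per-pixel arithmetic behind that argument (the bitrates, frame sizes and 25fps frame rate are just illustrative assumptions):

        # Average compressed bits available per pixel of each frame
        def bits_per_pixel_per_frame(bitrate_mbps, width, height, fps):
            return (bitrate_mbps * 1_000_000) / (width * height * fps)

        # In-camera 1080 at 200Mbps, 25fps
        print(bits_per_pixel_per_frame(200, 1920, 1080, 25))  # ~3.9 bits/pixel

        # UHD 4K at 200Mbps, 25fps - each 1080 pixel is built from ~4 UHD pixels,
        # so the data feeding one 1080 output pixel is roughly 4x this figure
        uhd = bits_per_pixel_per_frame(200, 3840, 2160, 25)   # ~1.0 bits/pixel
        print(uhd * 4)                                        # ~3.9 bits per 1080 output pixel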
  4. You might be right about RAW export. If there's some kind of feature already built into the camera, then all BM have to do is implement the manufacturer's RAW export format, then compress with BRAW and it's done. That's pretty awesome!! And why I own an f0.95 lens for my GH5.
  5. I think it depends. Partly because the compression may not treat each mode equally (4K has 4x the number of pixels of 2K, but may not have 4x the bitrate), plus other factors. I shoot with the GH5, which downsamples everything from a 5K signal, so I've played in this space and recently did a resolution/detail comparison on that camera between the 5K 200Mbps h265, the 4K 150Mbps h264, and the 1080 200Mbps ALL-I h264 modes (all 25fps and 10-bit). I found there were no visible differences between the 5K and 4K, and when downsampled to 1080 there were no visible differences between the three modes (the per-pixel numbers are sketched below). This test was with real-world lenses and wasn't in lab conditions, so it was imperfect, but it was of a real person in real conditions, so it was applicable to what I do. In the end I chose to shoot 4K because h264 is easier to edit than h265 (my main computer is a laptop), the framing of the 4K mode is easier for me to use in-camera, and even if I post in 1080 it's still advantageous to shoot in 4K because I do a lot of stabilisation in post, so the extra resolution helps with that. Also, if I'm recording in 4K and processing in 4K then I might as well publish in 4K so that I'm kind of future-proofing my projects. As I record my family history, there is a chance these videos will still have some usefulness in years or decades, when 25K-3D-VR-AI-recreation-immersion-whatever will be a thing, so 4K won't be a "but can you see any difference" question anymore. I'd suggest just trying them and seeing what you see.
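    For what it's worth, the per-pixel arithmetic for those three modes looks roughly like this (the 5K frame size is my assumption of the GH5's 4992x3744 anamorphic mode; everything at 25fps):

        modes = {
            "5K 200Mbps h265 (IPB)":   (4992, 3744, 200),
            "4K 150Mbps h264 (IPB)":   (3840, 2160, 150),
            "1080 200Mbps h264 ALL-I": (1920, 1080, 200),
        }

        for name, (w, h, mbps) in modes.items():
            bpp = mbps * 1_000_000 / (w * h * 25)   # compressed bits per pixel per frame
            print(f"{name}: {bpp:.2f} bits/pixel")

    ALL-I spends its bits on standalone frames while IPB reuses data between frames, so the raw numbers aren't directly comparable - which is why an eyeball test like the one above is still worth doing.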
  6. I record my family's travels and adventures, and my setup is a handheld GH5 / Rode VMP combo plus an action camera. I'd look completely ridiculous bringing a film crew on my holidays and days out, and there's no way I'd get to film in all the private places like art galleries, amusement parks, historical locations, tours, etc that don't allow commercial photographers... not to mention that it would take all the fun out of everything we do. People seriously underestimate how different other people's film-making actually is.
  7. The HDMI specification allows different bandwidths for different versions, as well as different resolutions, bit-depths, refresh rates, and colour subsampling. So to get 4K60 4:4:4 at 12-bit, the HDMI version would have to support 4K at 60p, 12-bit colour, 4:4:4, and also have enough bandwidth to transmit all that data (a rough calculation is below). This should be pretty evident (the camera has to be able to pump out what you want to record), but there might be another aspect to it.
    It might be that you can use the HDMI as a data link and do whatever you like with it (there's a feature listed on the HDMI wikipedia page called Ethernet Link) that might allow the camera and recorder to use their own format to transmit data, in which case you would need to program the firmware of the camera to process the video stream somehow and then pump it out the HDMI port for the recorder. This might be the camera compressing the video to BRAW and the recorder just recording it, or there might be a middle step where the camera does part of the processing and the recorder does the rest. This would mean the video stream could potentially exceed the bandwidth of the HDMI connection, because the data has already been compressed before going out the HDMI port.
    Compression is a horrifically complicated thing and advancements are being made all the time - BRAW is one of those advancements - so it might be that some cameras can be programmed to compress the signal and get a higher quality video out than the HDMI specification would otherwise support (without that level of compression). Of course, this is all speculation, but it makes sense logically and seems to explain what we've seen so far.
    So it comes down to each camera's HDMI spec, the manufacturer's willingness to re-write firmware, and each camera's specific hardware and processing capabilities. You should be able to record anything the camera can pump out the HDMI port now, but in terms of which cameras can support BRAW over HDMI, I'd suggest assuming it's not possible unless proven otherwise.
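    For reference, a back-of-the-envelope bandwidth calculation for uncompressed 4K60 4:4:4 12-bit (my own rough numbers, ignoring blanking intervals, audio and link-encoding overhead):

        width, height, fps = 3840, 2160, 60
        bits_per_sample = 12
        samples_per_pixel = 3     # 4:4:4 = full-resolution Y, Cb and Cr

        bandwidth_gbps = width * height * fps * bits_per_sample * samples_per_pixel / 1e9
        print(f"~{bandwidth_gbps:.1f} Gbit/s")   # ~17.9 Gbit/s

    HDMI 2.0 tops out at around 18 Gbit/s of raw link rate (less after encoding overhead), which is why compressing in-camera before the port would be the only way to squeeze a stream like that through an older connection.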
  8. I'd be surprised if they weren't working on these. Resolve already has pretty good capability, and AI is where it's at for this stuff now (16 has it included already, so we know where they're going), so I'd say it's just a matter of time.
  9. This sentence is rich with things to ponder... EOSHD forums are The Pub, but now I hear there's a couch recliner?? and that it's possible to leave here as a better adjusted member of society??? I guess you learn something every day!
  10. I completely agree. It's only about whether it's appropriate for the project or not. We wouldn't say that any other creative choice was cheating - it's a good choice if it helps the creative vision of the project and a bad choice if it doesn't. Everything is an artistic choice. [Edit: I will say that if you think it's being overused, then either you're not a fan of the aesthetic it creates, or people are using it badly and you're reacting to that]
  11. Video files are just data. Every operating system has a tool that lets you enter whatever machine-level data you like. So by your logic, I don't need a camera, crew, lighting, sound equipment or anything - I should just start typing away, then hit save on Masterpiece.MOV and I'm done! What's that? Having to understand and memorise the MOV container is cramping your ability to get convincing dialogue? Is understanding the header flags on codec container formats in HEX, when converting from long_integer binary encoded data shells, distracting you from getting good wardrobe? You said it yourself... You're forgetting that film-making isn't just about using a camera. It's about many, many things, and if the camera can do something for me, then maybe I can take my finite capacity and concentrate on something I wasn't able to give my attention to before. I understand your sentiment, but assuming that limitations you're able to compensate for should be accepted by other people just because you say so is pretty arrogant, and also pretty ignorant - it just tells me you don't know shit about real film-making or how other people do it.
  12. It shouldn't be Resolve 17. BM have a pretty predictable announcement/release schedule and this is way outside it.
  13. When you downconvert 8-bit 4K to 1080 you get 4:4:4 colour, and depending on a bunch of different factors you get a result that is somewhere between 8-bit and 10-bit. Going into the technicalities of why this happens would derail the thread somewhat, so I'd suggest anyone who questions this look it up and read about it (a quick sketch of the arithmetic is below).
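    A minimal sketch of the arithmetic, assuming a UHD 4:2:0 source:

        # Chroma resolution: 4:2:0 stores colour at half resolution in each axis
        uhd_chroma = (3840 // 2, 2160 // 2)
        print(uhd_chroma)   # (1920, 1080) - one chroma sample per 1080p pixel, i.e. 4:4:4 after the downscale

        # Bit depth: each 1080 pixel averages a 2x2 block of 8-bit samples,
        # which can create intermediate levels the 8-bit source can't hold
        block = [100, 100, 101, 101]        # four 8-bit luma samples
        print(sum(block) / len(block))      # 100.5 - a value 8 bits alone can't represent

    How much real precision you gain depends on the noise and detail in the source, which is why it lands somewhere between 8-bit and 10-bit rather than being a clean 10-bit.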
  14. We've all been there! You might have noticed the vitriol earlier in the thread about Canon crippling its lineup. I don't know if you've ever seen the RAW footage from a Canon camera with the Magic Lantern hack, but it really is something quite special, so there's no technical reason Canon couldn't have taken a good quality 1080 readout (at 10 or even 12-bit) and put it through a nice h264 compression to give a lovely image. Of course, is having an 80D with DPAF in video, great CS and a wonderful, robust image a good business idea for them? Well.... I own a Canon XC10 and it outputs the kind of image the C100 gives, only it does it in 4K at 305Mbps. It's a wonderful camera, except for the fixed lens, which limits creative choice in that department.
  15. The T2i codec was probably better quality than the YT 1080 level of compression, so maybe it would be more similar than you might think!!
  16. You're proving my point. Imagine what the BMPCC would look like if, instead of "upgrading" to the P6K, you bought all the accessories and made a great BMPCC rig instead. Which would win: a P6K with only a cheap lens, or a BMPCC with lenses and a rig to the value of the P6K retail price? And don't think badly of vintage lenses - the absolute legendary vintage lenses are worth more now than they were new, but everything else is worth a tiny fraction of what it cost new because demand has plummeted. Vintage lenses aren't 'cheap', they're spectacular bargains.
  17. Is anyone here shooting with a lens that is more expensive than their camera body? I think that might be a threshold of some kind... Sadly, I'm not. I take your point but disagree... Although motion is very important (along with sound, acting, storyline, etc), stills show colour, DoF, grading and composition, and do so without the YT compression crunch that obliterates much of the subtlety, so they're not meaningless... Yes, Vimeo is nicer than YT, but I have never been able to play anything on it without it pausing to buffer (and many others said the same when Andrew polled this some time ago). And people posting images from the photography mode of their camera is just cheating! (Unless it's to talk about the lens, of course.)
  18. Yes, what I was getting at was that by doing this you would be able to:
      • Shoot a real project with a C200
      • Get 8-bit footage from a real job that you can do test grading on (instead of setting up a fake job and not getting paid for it)
      • Not risk stuffing up a real job if the 8-bit footage wasn't good enough
      • Get a taste of how difficult the RAW workflow is (maybe it's not as bad as you think...)
    Edit: Depending on your NLE, you could deliver the project to the client from the RAW files and then just swap the 8-bit files into the same grade and see how they hold up. It's not about comparing them to the RAW, but about processing them how you normally would and then working out if they're good enough.
  19. What are you going to use it with? Have you used it yet? Is it good?
  20. This is what a $62K setup looks like on a Canon M50. IMHO by far the most important aspect of that setup was the lens. Buy lenses, not cameras.
  21. Who experiments?

    Interesting experiment @TrueIndigo. It reminds me of the YT guy who uses a custom rig, moving the camera in relation to the lens, shooting a static scene several times and then combining the shots to get high-resolution or large-sensor output files. I'm also reminded of the people who experimented with putting some kind of lens on a flatbed scanner, so that when the scanner did a 'pass' it was actually taking a photo from the left to the right of a scene. It generated absolutely enormous resolution files, but of course it was a horizontal rolling-shutter and took a few seconds to make the pass, so if anything moved you were stuffed! I shoot 4K and will crop into it for 2.35:1, and my problem is that I forget the resolution is so high, so I tend to get too close to things when framing. I need to train myself to have faith that things will be visible and I can go wider!
  22. An idea for consideration... Rent a C200 for a real project but give yourself an extra day beforehand. On the prep day, get set up and familiar with it and shoot some side-by-side shots with the two codecs, partly to practice switching back and forth and partly to compare in post that evening. Then on the day, shoot a couple of takes of each shot with the 8-bit codec, then do the 'real' takes in RAW. Use the 8-bit takes to practice your camera moves or whatever, which I'm sure you'd do anyway - in a sense you can just hit record in the 8-bit mode, do your setup and let the camera roll (it's low bitrate so storage wouldn't be an issue), then hit stop when you're ready for a real take. Then in post you can deliver from the RAW takes and not take any risk with your client, but you'll have gotten some real-world side-by-side comparison shots to play with, almost for free, since you'd have spent most of that setup time anyway. The C100 was famous for having a low bitrate codec that looked a lot better than the bitrate would suggest, so maybe the C200 is similar, but you'd have to test it in real life to be sure.
  23. I suspect not. Canon has deliberately crippled its product lineup in order to take people such as yourself (who are heavily invested in Canon glass and like DPAF) and make you tempted to buy the top of the line camera. Kind of like making smokers buy Lamborghinis by not supplying ashtrays in all the cheaper cars. Of course, the unfortunate thing about this is that there is no viable alternative that gives you what you want. The way out of this is to understand what priorities you have and then work out what suits your priorities best. I ended up with a GH5 because I don't care about sensor size and I realised I prefer the manual focus aesthetic. Had I wanted AF I would have been screwed. No perfect camera exists...
  24. I think one of the biggest problems we face when talking to other film-makers is that we underestimate how differently other people might shoot compared to the way we do. I don't know if the OP has mentioned what kind of films they make in other threads, but even such (quite sensible) suggestions as lighting and audio might not apply if they were doing extreme sports, underwater shooting, natural light shooting, etc. I get that almost everyone will spend a lot of time recording someone talking (audio equipment) and a lot of time recording someone sitting in one place (lighting), but that's not everyone. You'd assume that someone would know which equipment they would use and which they wouldn't, but someone who is asking for equipment recommendations on forums without specifying what they shoot may very well not know these things. I'm sure every now and then we all speak to someone who says they're getting started in photography and got a Canon with a prime lens because that's what someone said they should get, and now they're asking us why they can't pinch-zoom like on their phone - so you can't assume such things reliably, I think!
  25. Enjoy creating! Hope to see some of your stuff when you resurface!