Everything posted by kye

  1. I don't have any real shoots planned for this setup; it was just part of getting to know the camera and being prepared in case.
  2. Perhaps a basic question, but how much of what AP does is video vs stills? My impression was that AP was for print media - do they have a big video arm too, with journalists reporting from wherever? I only ever remember seeing the old "Credit: AP" / "Source: AP" type captions on photographs.
  3. Played around and got this configuration which seems to meet all the criteria... It's a bit top heavy, but it's balanced (it literally stands up on the QR plate when placed on a flat surface). You can get to the handle, at least from the left, without touching anything but the handle, and the big hole in the vertical support even lines up very nicely with the coiled cable for the mic making a convenient cable tidy! It's not exactly compact or elegant though!
  4. @Sage on the GH5 have you compared the 1080p 422 200Mbps All-I vs the 4K 422 400Mbps All-I vs the 3.3K 422 400Mbps All-I anamorphic modes? IIRC you said that the 1080p 200Mbps mode had better colours than the 4K 400Mbps mode, but I don't recall any comments about the 3.3K 422 All-I 400Mbps mode. It has fewer bits per pixel than the 1080p mode (2.7x the pixels and only 2x the bitrate), but the bitrate is 2x for the whole image, which includes the same amount of tonal variation in the subject (you've got the same FOV and twice the bitrate to describe it). So when watching full-screen, the pixels from the 3.3K mode aren't quite as good, but they're much smaller, so maybe that offsets it? If you haven't tested it but are curious, I can share some sample stills with skin tones and a colour checker for you to look at.
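To make the bits-per-pixel comparison above concrete, here's a quick back-of-envelope calculation in Python. This is a sketch only: the frame sizes, particularly for the 3.3K anamorphic mode, are my assumptions rather than Panasonic's published figures, and I've assumed 25p throughout.

```python
# Rough bits-per-pixel-per-frame for three GH5 All-I modes.
# Frame sizes and frame rate are assumptions, not official specs.
modes = {
    "1080p 200Mbps": (1920, 1080, 200e6),
    "4K 400Mbps":    (3840, 2160, 400e6),
    "3.3K 400Mbps":  (3328, 2496, 400e6),  # assumed 4:3 anamorphic size
}
fps = 25

for name, (w, h, bitrate) in modes.items():
    bpp = bitrate / (w * h * fps)
    print(f"{name}: {bpp:.2f} bits/pixel/frame")
```

On these assumed numbers the 1080p mode gets roughly twice the bits per pixel of the other two modes, which is exactly the trade-off being described: fewer bits per pixel in the 3.3K mode, but smaller pixels on screen.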
  5. Haven't had a chance to look at the above links, but I found this recently which is an interesting comparison of two lenses, listing their pros and cons and with comparison footage:
  6. Follow-up question. I have a grey card, so I can do a custom WB. Is there any way to reliably set the levels with it on the GH5? I don't recall there being anything that tells you what IRE something sits at.
  7. Yes, that's exactly what I was thinking - to design my own view-assist LUT and just be able to turn it on and off. IIRC there are two zebras and you can set them to whatever you like, but I got the impression you could only choose one of them at a time, so I don't think you could have zebras showing 40-60 IRE, for example. If I could design my own view-assist LUT, I was thinking of something I could use all the time instead of turning on and off: keep the middle range in colour and maybe expand it a little, compress the shadows and highlights and render them B&W, and show clipped or crushed areas in bright yellow and bright red. Something technical enough to tell you about your exposure, but not something that would make me creatively blind to what I was shooting - I shoot my holidays and family events and have basically no control of a situation whatsoever, so I'd want to be able to react to new things going on, etc. Oh well.
  8. Is there any way other than the V-Log View Assist to get a LUT into the GH5? HLG View Assist doesn't seem to allow external LUTs.
  9. Great job @hmcindie! It's refreshing to see an action sequence without having cuts at a dizzying pace, this seemed much more reasonable and didn't seem like the edit was trying to make dull choreography more exciting, as is often the case. Funny ending too.
  10. Computer vision is about the camera understanding what it is seeing. From Wikipedia: "Computer vision is an interdisciplinary scientific field that deals with how computers can gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to understand and automate tasks that the human visual system can do." It could be about AF, in the sense of choosing what to focus on. In a sense, the logic of PDAF is something like this: look at all the focus pixels on the screen and bias towards the middle / closer objects; once things are broadly in focus, employ face / animal recognition to identify faces; rank which faces are the most important and decide where to focus (eg, it could be one face or many); then use PDAF to focus on that face, or focus the best on all target faces. The problem with modern PDAF focus systems is that when they go wrong it's because they're focusing on the wrong thing, not because they have problems with focusing itself. I suspect that Panasonic, with its DFD system, is hoping to 'out-smart' the PDAF systems by being able to intelligently work out the scene using some kind of AI. For example, if you see a blurry human-shaped blob that is sitting on top of the rest of the image, it's pretty easy to conclude that it's a person (by the shape), roughly how far it is from the focal plane (how out of focus it is and where the focal plane is now), that it's closer than the focal point rather than behind it (if other things are more in focus but this isn't), and that it's the closest object (because it's over the top of everything else). If it's also the biggest and completely in frame, then it would be reasonable to assume it might be the subject of the shot too. I suspect this is how human vision works, in a way. These things are pretty straightforward. PDAF is only good for telling if a given pixel is out of focus to the front or back, and by how much.
At the point when you have an AI engine that can look at an image and understand it, blurry things and all, there is no advantage to having PDAF. Our eyes don't have PDAF; we have killer AI, and our eyes are pretty good at focusing on objects and basically never get things wrong. I suspect that's what Panasonic is doing with its computer vision people. If they get it right and fulfil the promise of the technology, it will out-perform other systems because it will understand a scene even when everything is out of focus. Of course, they're a long way from that being market-ready at the moment.
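The ranking-and-focus logic sketched above could look something like this in pseudocode-ish Python. Purely illustrative: the data structures and weighting are hypothetical, and no real camera exposes anything like this.

```python
# Illustrative sketch of the face-prioritisation steps described above.
# Face, rank_faces and focus_correction are hypothetical, not a real API.
from dataclasses import dataclass

@dataclass
class Face:
    size: float         # fraction of frame area occupied by the face
    center_dist: float  # distance from frame centre, 0 (centre) to 1 (edge)
    defocus: float      # signed PDAF defocus: <0 front-focused, >0 back-focused

def rank_faces(faces):
    # Bigger, more central faces rank higher, mirroring the guess that the
    # largest fully-framed subject is probably the intended one.
    return sorted(faces, key=lambda f: f.size * (1 - f.center_dist), reverse=True)

def focus_correction(face):
    # PDAF gives direction and magnitude in one reading; drive the lens
    # opposite to the measured defocus.
    return -face.defocus

faces = [Face(0.05, 0.4, -1.2), Face(0.20, 0.1, 0.6)]
target = rank_faces(faces)[0]   # the larger, more central face wins
print(focus_correction(target))
```

The point of the sketch is the split of responsibilities: the "AI" part decides *what* to focus on, and PDAF only supplies the final *how far and which way*.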
  11. kye

    bmp4k adventures

    A little late to this party, but good stuff in here @leslie! Firstly, great to get out and shoot. Those models you've got there don't look particularly bright - just hanging around each other like sheep. You're in good company with the FD lenses. I've got the 70-210/4 and a 2X TC, which I used yesterday to film my son playing football and also today to shoot a camera test. I was very confused about how to put the FD mounts together, and now I just have the lens, TC and adapter all as one piece. It works as I don't have any other FD lenses. The mount sure is strong though. The good company I speak of for FD lenses isn't me, it's this guy: https://www.youtube.com/c/MatteobertoliMe/videos He's a professional DP and a talented one at that, and you'll find a few videos he's shot with the P4K and FD lenses. He's upgraded now to P6K and Leica lenses, but the fact he had FDs before that implies they were a second preference which is high praise from him. Good score on those lights. I'd suggest first job is to put out the fire that appears to be burning the entire earth from below in this photo though: In regards to covid, we're also watching from the west with concern and disapproval.
  12. I have noticed this in a couple of comparisons between BM and Alexa. This is one: This is another: (The shots by the window show the window frame to be very green on the Alexa). Both tests were shot by professional DPs, so I can't imagine that they both screwed up in the same way. Of course you can fix these things in post, but it's interesting to note.
  13. OK, I made a mistake. The 3.3K mode isn't h265, it's h264. Damn. That means that it's very similar to the 4K 400Mbps mode, except it's 4:3 anamorphic. 4K 422 10-bit All-I 400Mbps h264 3.3K 422 10-bit All-I 400Mbps h264 I thought that all the modes in the anamorphic menu were h265, but it looks like that's not true. Double damn. More reading required. I also just shot the mode comparisons, but Resolve won't load the 5K h265 files for some reason, so working on that.
  14. Thanks. I also figure that the colour checker and skin tone tests will show colour macro-blocking if it exists. I guess a follow-up to my follow-up is to ask if anyone actually needs to see a comparison? I mean, when I'm watching something incredible on Netflix I don't need to see their attempts at shooting it on a different camera to appreciate the IQ of Dark, The Queen, Stranger Things, etc.
  15. Follow-up. If I stress test this codec vs the 5K Long-GOP 420 codec, the 4K 400Mbps All-I codec, and the 1080p 200Mbps All-I codec, what should I film? I'm thinking: a colour checker plus my face under tungsten light, a colour checker plus my face under natural outdoor light, and general external scenes. I also have access to a city where people walk around in outdoor malls, and a beach where there are often very few people. What would you like me to include?
  16. I'm not saying that the US doesn't get hot - obviously it does. What I am saying is that camera overheating tests, by manufacturers as well as bloggers / vloggers / news sites, are typically done in very mild conditions, eg Canon's official testing at 23 degrees. Here in Australia we keep our air-conditioning at 24 degrees, so their test isn't even indicative of using their cameras indoors - let alone outside! It's great to see someone who lives in a genuinely hot climate testing overheating - it's not a common thing to see.
  17. Let us know how you go if you use it. Apart from the advantages of 422 and All-I, I also think the 3.3K resolution might be an advantage over 5K. IIRC many Alexas are 2.8K or 3.2K but upscale in post, and I believe the softness this creates (along with no / low compression) is partially responsible for the filmic look they are prized for. I did a bunch of googling last night to try and find comparisons of signal-to-noise figures for Prores vs h264 or h265 but couldn't find much. I have found in the past that h265 is about twice as efficient as h264 for the same IQ, so I am comfortable taking that as a rule of thumb. I also found the article by @Andrew Reid and thread by @KnightsFan comparing h265 and Prores: https://www.eoshd.com/news/new-h-265-codec-test-prores-4444-quality-1-file-size/ which indicated that h265 is something like 50x as efficient as Prores 4444. That would make 4K h265 at 200Mbps equivalent to Prores at something like 10,000Mbps (far beyond 4K Prores 4444's actual 1061Mbps data rate), which intuitively doesn't ring true to me, as no-one is seriously talking about consumer cameras having h265 codecs as good as Hollywood-grade intermediate codecs. If I had an external recorder I could directly compare the two, but unfortunately I don't. Can you record with an external recorder at the same time as internal with the GH5? If so, someone could do direct testing with the same image stream. One thing I did see is that various encoders have varying levels of quality at the same bitrates, so we can't rely on the encoders in our NLEs to be a reliable proxy for what the GH5 is doing internally. We'll have to film real tests. However, one thing I do think is promising is that Prores is an older codec, and compression gets steadily better with time, so in general a newer codec should be better than an older one given the same bit-depth, bit-rate, colour subsampling, and All-I structure.
Incidentally @Video Hummus I shot a few test clips of the 3.3K mode yesterday and my 2016 MBP + eGPU was almost able to play the files in real-time. In fact, with Resolve set to show every frame, it was slow at the start of each clip but came up to speed in a couple of seconds and then was able to play forwards and backwards at 25p. I think it was the overhead of the SSD loading the file, maybe - not sure. In comparison to the Long-GOP 5K h265 or 4K h264 codecs, it was a night-and-day experience. I normally render proxies in 720p Prores Proxy for editing, as they're small enough to fit onto the SSD in my MBP and they cut like butter even with complex colour grades, but my challenge was fine-tuning the colour grades, where I switch back to the original files, which I store on a spinning disk. Essentially I'm only using the originals for tweaking and doing tracking and stabilisation (which benefit from the extra resolution), but the 5K and 4K modes are painful to work with. The 3.3K mode seems like it would be very workable, especially when I upgrade my MBP from a dual-core to a quad-core in a month or two. Pretty much the only downsides to this codec I can see are the storage space (although it's similar if you shoot 4K 400Mbps) and the need for expensive UHS-II cards, but to get an 800Mbps-equivalent codec I consider the cards a camera upgrade rather than a nuisance.
  18. @Kisaha I guess if we can get a side-benefit from global warming then that's great, but I think on balance I'd rather it be the other way!!
  19. Colour science is incredibly difficult. I'm the first to admit that I'm rubbish at colour grading, and this is why I am attracted to it and spend a lot of time doing experiments and trying to learn. We've previously seen that Sony has the most accurate colours when tested scientifically, but they are regarded by many as aesthetically displeasing, so the secret is in the sauce, as they say. I've been on a mission to understand what is in that sauce, and so far have attacked this in a few ways: I've reverse-engineered a couple of the film-emulation LUTs in Resolve using standard grading tools; I've bought the GHAlex LUTs and reverse-engineered them using standard grading tools; I've bought a BMMCC and a colour chart and done indoor and outdoor comparisons trying to match the GH5 to the Micro; I've reconstructed most of the node graphs from the Juan Melara videos to understand what he is doing and why; I've done numerous side-by-side tests with my GH5, Canon 700D, Canon XC10, GoPro and iPhone, matching the colours in various combinations to each other; and I've graded real footage that I shot, struggling to repair the vast quantity and variety of cruel and unusual mistakes I made while shooting, effectively putting myself through the colour grading equivalent of special forces training. (It's still uncertain if I'll complete the course alive; I'll let you know....) The reason I say all this is as a prelude to saying this: what I have found is a Pandora's box of craziness. There are colour tweaks inside the cameras we talk about, inside the LUTs from manufacturers and highly skilled colourists, in the colour space transforms, and elsewhere, that are tiny, numerous, complex, and often make no sense. They take place in colour spaces that have probably never been mentioned on EOSHD; they do things that are not possible in FCPX or PP, and maybe not even possible in Resolve or Baselight.
I have developed a relatively solid ability to reverse-engineer a grade given side-by-side footage. Not perfect, but solid. But I am absolutely nowhere when it comes to making adjustments in the service of making a shot look great, let alone strange, parallel-universe type adjustments. And even that isn't enough: manufacturers are in the business of making these parallel-universe, mind-bending transformations in the service of making every shot look nice, even when filmed by people they've never met in locations they've never been. It's taken everything I have done over a period of years to get to the point of realising just how much there is I don't know about colour.
  20. NEWS FLASH!! Someone is talking about camera overheating while not standing in rain / snow. I'm sick to death of the people who talk about camera overheating coming from countries where their summer is the same average temperature as the winter where I live. And I live in the cooler part of Australia. The reality of the world is that colder countries are more affluent, so that's where the technology is made, purchased, and generally used: https://en.wikipedia.org/wiki/Geography_and_wealth
  21. I have watched this conversation play out over all the Canon R5 and now BM 12K threads, where one side gets excited and says it's great for their use-case and the other side says it's ridiculous because the use-case isn't common / it doesn't align with their experience / it's too expensive / the data rates are too high / etc. I think one thing is missing here, which is to talk about budget and the state of the art. The state of the art in cameras is actually quite limited, and there are very many situations where you can't get more for your money even if you had it. Here's an example... I make home videos and I want a small setup that won't attract attention going into parks, museums, aquariums, etc, and I like IBIS. I use my GH5 for this, but it has limitations. I would like extra DR. I would like a 12-bit mode. I would like dual-ISO. I would like a bunch of things, but I can't have them. At any price. If I was a multi-millionaire, I still couldn't have them. Think of how many people lament the Canon offerings and want a set of features that is not available in any Canon camera, regardless of price. Think of how many people would love the P4K or P6K or URSA but are put off by the reliability or QC. Hollywood has embraced the Alexa Mini form-factor because it is portable / flexible / light-weight to the point that it enables creativity that had not previously been possible with the heavier models. This is because none of the previous models offered the combination of features that the Mini offers. At any price. Big Hollywood productions don't care how much a camera costs; they rent for a very small percentage of their production costs. If a camera cost double and did the job then great. Triple? No problem. This is how I see the BM 12K and Canon R5 cameras, and many others: they offer a combination of features that previously could not be had at any price.
So, if you own a collection of S16 lenses and want to shoot 6K or at high framerates, which was previously not available at any price, then why is this so completely ridiculous a concept? I think that @JimJones' enthusiasm might make it sound like the whole cinema world will implode because of this, but of course it won't, and neither did it implode when the BMPCC 2K first came out - although it sure did if you were a fan of tiny cameras and shooting with older lenses.
  22. Great article, and it answers some of the questions about how good the in-sensor downscaling may be.
  23. It never is. The only way to work out what is a good compromise for you is to work out how you shoot and what your priorities are. Or think about it the other way: what are you willing to sacrifice? There is no perfect camera, but when you sacrifice one or two things then often there are options that meet all your remaining requirements, and the only way to really understand your requirements is to shoot.
  24. Footage is available here: https://www.blackmagicdesign.com/uk/products/blackmagicursaminipro Scroll down to the Colour Science heading and there are three clips.
  25. I've been doing some thinking and realised that I care about getting the highest quality image out of the GH5, without caring much about resolution. I have been thinking about the 'best' modes as a choice between: 5K 4:3 420 10-bit Long-GOP 200Mbps h265, UHD 422 10-bit All-I 400Mbps h264, and 1080 422 10-bit All-I 200Mbps h264. The h265 codec is twice as efficient as h264, so the 5K 200Mbps h265 should be broadly similar to the UHD 400Mbps h264 in terms of compressed bits per square cm of sensor. So I got to thinking those modes might be better, then I remembered that there were other h265 anamorphic modes, and stumbled upon this one: 3.3K 4:3 422 10-bit All-I 400Mbps h265. It looks perfect! It's 422. It's All-I. It's 400Mbps, and it's h265. That makes it the equivalent of 800Mbps h264, double the bit-rate of the 400Mbps h264 mode and the 200Mbps h265 mode. That's also very similar to the UHD Prores HQ bitrate, which is 707Mbps. Does anyone use this mode? I can't be the first person to see this? (Actually, a google search revealed someone mentioning this mode on EOSHD, and it was ..... me!) It might be time to do that comparison video.
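The equivalence arithmetic above can be sanity-checked in a few lines, using the post's own rule of thumb that h265 is roughly 2x as efficient as h264. That factor is an approximation, not a spec, and the ProRes HQ figure is Apple's published UHD data rate.

```python
# Sanity-check of the h265 -> h264-equivalent bitrate arithmetic.
# The 2x efficiency factor is a rule of thumb, not a measured constant.
H265_VS_H264 = 2.0

def h264_equivalent(h265_mbps):
    return h265_mbps * H265_VS_H264

mode_mbps = 400           # 3.3K 4:3 422 10-bit All-I, as listed above
prores_hq_uhd_mbps = 707  # Apple's published UHD ProRes HQ data rate

equiv = h264_equivalent(mode_mbps)
print(equiv)                       # 800.0
print(equiv > prores_hq_uhd_mbps)  # True
```

So on this rough model the 3.3K mode sits in ProRes HQ territory, which is the whole appeal of the mode.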