kye

Everything posted by kye

  1. Yes, but it's been the most popular camera for quite some years! Just look back at the Flickr camera stats from previous years...
  2. You and me both! They say the first step is admitting you have a problem, which we've both done, so that means we're probably ahead of half the pack already 🙂
  3. This isn't how the technology industry works. To cater for long development cycles while still responding to market conditions, you want something like set-based design. This is a reasonable introduction to the topic: https://scaledagileframework.com/set-based-design/#:~:text=Set-Based Design (SBD),eliminating poorer choices over time.
  4. There is no good way to judge colour online. Let's review all the possibilities:
     - Log images are shown: this is the purest experience of the camera, but you can't judge anything from it.
     - Log images + manufacturer's LUT are shown: this is the best way to judge images, but it's random chance how good it will look, and it doesn't really show the potential of the footage.
     - Graded images are shown, but they look crap: this tells you nothing, as you can't tell if the camera is bad, the LUT is bad, the colourist is bad, or all of the above.
     - Graded images are shown, and they look good: this tells you what is possible, but not what is possible for you. Great images could mean the camera is great and the colourist is ok, the camera is good but the colourist is very good, or the camera is mediocre but the colourist is world class.
     BTW, if camera forums and camera YT had the same level of knowledge about cameras as they do about colour grading, then every video would be trying to work out the exposure triangle and failing. Even rudimentary colour grading and colour science knowledge is rare online outside professional colourist circles - I know more about these things than most and I am at the very, very shallow end of the pool, so if I know more than you do then you're basically nowhere...
  5. You'd have to watch that around young kids - now the corners will be razor sharp.
  6. That setup caught my eye too - it's basically a way to add "internal" NDs 🙂
  7. kye

    Panasonic G9 mk2

    I hear you, being one of the most "sizeist" users here, but I just don't see the strength of the reaction being proportional. We could make a list of all the things wrong with the G9ii release, or the release of any other model from Panasonic, and we could easily find parallel examples from other manufacturers of things that are just as egregious. But when Sony releases yet another camera that overheats, or GoPro releases another doppelgänger camera with microscopic spec improvements, or whoever the hell else waits years and then releases an uninspiring black box, we don't spontaneously question the future of the brand or the entire format. If someone were to post in every Sony camera thread that FF was dead and Sony was going to exit the camera market because of the latest release, they'd be considered unhinged, but somehow with MFT every camera announcement is turned into a wake.

    What will you use it for?
  8. kye

    Panasonic G9 mk2

    Right, now I get it. Throughout this thread I thought that people were saying "Look how huge it is..... by giving it the body of the S5 they made it huge" when actually they were saying "Look at how the mk2 is the same size as the mk1 - that's outrageous - every camera update should be drastically smaller!" TBH, if we're going to judge everyone that didn't give us what they could have given us, we'd all better be saving up for an Alexa, because the people who worked on every other camera released in the history of the world are going in front of a firing squad tomorrow at dawn.
  9. kye

    Panasonic G9 mk2

    I'm confused.... WTF was everyone complaining about?
  10. I think I understand where @IronFilm is coming from - the advantage of a larger body is that you get dedicated buttons and other things that are useful on set. Think about it: if there were no use for something, they wouldn't add it to the camera, regardless of how large they were allowed to make it. On a controlled set you'd imagine they'd have a proper cinema lens with remote follow-focus etc attached, matte box, v-mount power, monitor, and the whole thing would be rigged appropriately. By the time you add all that, the difference between an FX3 and FX6 is maybe only an extra 25% to the size of the whole rig.
  11. Comparison with S5iiX... Connor shows that sometimes it looks over-sharpened: But that's still not as bad as I remember previous models being.
  12. It might be time to upgrade I think! That is a far cry from previous models.
  13. Just found this video outlining a bunch of new features in Baselight (the world's other top colour grading platform) and it confirms exactly what my thoughts were about sharpening and blurring, starting at 31m: He set up a frequency generator that starts at low frequencies on the left and goes higher towards the right, then uses the waveform monitor to show the amplitude, essentially showing a frequency response plot. The plot looks like this before any transformations: Adding sharpening to the image increases the amplitude of the higher frequencies (the finer details): and a simple Gaussian blur reduces them: which is what we want in order to counteract the effects of in-camera sharpening, as well as to replicate the decreasing response of film. I'm HUGELY encouraged now that I've worked out that a blur is a reasonable approximation of a more analog look, and that it also reverses sharpening.

Another thing he demonstrates is that by increasing contrast on the image, you amplify the contrast of both the fine detail and the lower frequencies, so you change the texture of the image in ways that wouldn't have occurred had the scene been lit with a greater contrast ratio. At 46m he shows an example (from Mindhunter, which was graded heavily in Baselight, fun fact!) with this being how it was shot: then you might add an effect in post as if there were an additional light source: which has the effect of amplifying the contrast on everything, including fine details (ie, what sharpening does). But then he uses a special tool they've developed that only adds contrast to the lower frequencies and doesn't amplify the finer details, and voila: Now it looks natural, like that could have been how it was on set.

I suppose you might be able to build this operation using frequency separation, but in Baselight it's just a slider. I guess that's one of the many reasons why, instead of buying Resolve for $299, you'd buy Baselight, which is more of a "mortgage your house" sort of thing.
The rest of the talk is excellent and worth watching, and this one that is more recent has some new stuff that is also fascinating:
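The low-frequency-only contrast tool could plausibly be built with frequency separation, as the post suggests. Here's a minimal sketch in Python/numpy (the function names are mine, a box blur stands in for a proper Gaussian, and this is not Baselight's actual implementation):

```python
import numpy as np

def box_blur(x, radius=4):
    """Moving-average low-pass filter - a crude stand-in for a Gaussian blur."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(x, kernel, mode="same")

def low_band_contrast(x, gain=1.5, radius=4, pivot=0.5):
    """Add contrast to the low-frequency band only, leaving fine detail alone."""
    low = box_blur(x, radius)            # low frequencies: broad tonal shapes
    high = x - low                       # high frequencies: fine detail/texture
    low = (low - pivot) * gain + pivot   # contrast around a mid-grey pivot
    return low + high                    # recombine: detail contrast unchanged
```

With gain=1.0 the signal passes through untouched; with gain above 1.0 the broad tones gain contrast while the fine-detail band is added back exactly as it was - unlike a plain contrast increase, which amplifies both bands at once.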
  14. Ah, indeed I was! My bad. I re-read the article and it doesn't look like there's any concrete information in it at all about where they shot and how many people were present. A lot like most other journalism then - you read it and feel like you've been told a lot of stuff but when you look again there's just a stream of vague statements 😕 So, who knows why, I guess.
  15. Wait until the autonomous vehicles and autonomous other-things market really starts ramping up - those things will have dozens of cameras, each needing an entire camera module of its own. It is decades away, but the future is likely to have many times more autonomous devices than there are human beings, and they'll all need an array of various sensors.
  16. I suspect it could have been because it was small and they didn't want to get too much attention. "so his crew consisted of just his actors (Scoot McNairy and Whitney Able), a sound tech, a line producer, a translator and a driver. Edwards operated the camera, grabbing footage guerrilla style whenever they came upon a compelling location while traveling through Central America." I'd imagine that security might have been a concern in that region (maybe I'm wrong on that) but combine that with not wanting to draw attention from local officials who might hassle you for permits etc. Also, if you're bringing equipment into a country and are going to leave with it again you will need to declare it at the border (so you don't have to pay import taxes - I can't remember what this is called) and this process is a huge PITA for documentary crews etc, so getting around that would be a huge time advantage (I've seen docos where it takes the better part of a day for each border crossing because of this).
  17. I would tend to agree with your assessment that it's deliberate and they'll continue with it. If it was accidental then it's a pretty amazing thing to screw up. Also, while it seems to do enormous "damage" to the video from a technical point of view, the images all looked absolutely fine to me from an aesthetic point of view, so the probability they accidentally created a problem that large that didn't look worse is basically zero. It's yet another example of how you can make huge changes to the video signal and they can be invisible or even desirable - being "accurate" is the starting point of the creative process rather than the goal itself.
  18. Thanks! It is a tricky thing that basically every piece of technology faces - how do you cater to entry-level, intermediate, and expert users all with the same product?
     - If you make it too simple then the experts complain about lack of customisation (which is what Apple has done here - there aren't enough options).
     - If you make it too complex then the entry-level people can't work it out or end up setting things wrong, then blame you for the bad results.
     - You can't even make a "pro" mode, because half of the entry-level people who don't know anything think they're "pro" and will screw it up and blame you for the results out of embarrassment alone.
  19. I got quite excited when they originally released ProRes support in their previous model, as every camera I've seen with ProRes had great images with minimal processing, but unfortunately they just implemented it as a high-bitrate version of the same heavily processed mess that they send to h265 compression. However, I'd be extraordinarily happy to be proven wrong! I use my phone as the second camera on my travels and it would be great if it could live up to its potential 🙂
  20. *shrug* It only looked marginally better than my own footage from my 12 Mini. Until they stop pummelling the footage to death internally, it won't produce anything other than a brittle, digital-looking image. I've just spent about 90 minutes trying every blur technique I can imagine on some sample footage I have from my 12 Mini, then jumped over to some similar shots from the GX85, and it was night-and-day different because the GX85 wasn't over-sharpened before the compression. The GX85 is 8-bit vs the iPhone's 10-bit, but the GX85 was still outright superior, despite both cameras being used outside during the day with absolutely tonnes of light, so both sensors would have been at base ISO. The RAW footage from the android phones looked great - even after being put through YT compression, which is far more severe than the compression applied by any smartphone manufacturer. I don't know why Apple are so intent on ruining the video signal they're pulling off the sensor.
  21. Sorry - just realised I screwed up the math so cut it out of the above posts while the edit window was still open, then re-thought it again. Here's the corrected version...

I'm confused about the math, depending on how you define "sine waves". Here's what I think is the correct interpretation: at 10 cycles/mm, which is where there is no softening, on Super35 3-perf (~25mm x 14mm) that works out to a resolution of 500 x 280, at 2 pixels per cycle. The curves end at about 75 cycles/mm, which is ~3750 x 2100, where the contrast has dropped to between 40-15% of the contrast in the scene. There is a chance they mean it a different way, which doubles the resolutions to 1000 x 560 up to ~7,500 x 4,200, but I think the first interpretation is more likely to be correct.

These curves are obviously dropping as the level of detail gets finer, but if you were to extend them then they might have a 1% contrast response at over 600 cycles/mm, which would be approaching 30K resolution. This is an enormous extrapolation and should be taken with an entire bag of salt that you buy at the hardware store to put in your pool, but still... this is why specifying the resolution of film in pixels is an inappropriate measure - it simply doesn't work like that.

All this is just the resolution of the negative. I've heard knowledgeable people say that the resolution of the negative is high, but by the time you print the negative, cut it up in the edit (in a purely analog pipeline), duplicate that as a master, then take several prints off that to duplicate again into the prints that go out to each megaplex, the resolution has been reduced to approximately 2K. This is why DI systems were at 2K for so long during the days of film acquisition and distribution. During that period people didn't suddenly start saying that movies were blurrier and ask for their money back - a 2K DI was sufficient.
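The cycles/mm-to-pixels arithmetic above can be sketched as a quick calculation (the ~25mm x 14mm gate figures are the approximations from the post; 2 pixels per cycle comes from Nyquist sampling):

```python
def cycles_to_pixels(cycles_per_mm, gate_mm):
    # Nyquist: you need at least 2 pixels to represent one cycle (line pair)
    return round(2 * cycles_per_mm * gate_mm)

# Super35 3-perf gate, approximately 25mm x 14mm
GATE_W, GATE_H = 25, 14

no_softening = (cycles_to_pixels(10, GATE_W), cycles_to_pixels(10, GATE_H))  # 10 cycles/mm
curve_end = (cycles_to_pixels(75, GATE_W), cycles_to_pixels(75, GATE_H))     # 75 cycles/mm
extrapolated_w = cycles_to_pixels(600, GATE_W)                               # ~"30K" territory
```

The second interpretation in the post (one pixel per half-cycle counted differently) just doubles each of these numbers.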
  22. I've been looking at film, not as the ideal reference to replicate, but as a data-point to help understand what we are seeing. What I have been noticing is that on film, finer detail is present but at a lower contrast ratio, whereas digital keeps the same level of contrast regardless of the size of the detail (up to the sensor resolution limits, and subject to the lens characteristics of course).

For example, here's an image from Game of Thrones, shot on Alexa/RED: and here's a closeup: Notice that the individual strands are basically as bright as the larger areas of his hair, right up to the resolution of the file. Same with this closeup of an image I posted earlier: There is a little bit of contrast loss in these examples, but lenses and compression are also in the mix too.

Contrast that with this shot from the original Blade Runner, which was obviously shot on film: Here's a closeup from the above: Note how the individual strands have far less contrast with the background than the more solid areas with many strands of hair. Here is the MTF chart of Kodak Vision3 500T, which confirms these observations.

So, what is the actual resolution? <snip> see below...

Ironically, digital sharpening methods have a completely inverse response - they increase the contrast on fine detail rather than decreasing it. So digital cameras take a readout from the sensor that has an essentially flat MTF curve, and then apply a transfer function that does the exact opposite of what film did. Awesome... screw you too!

Food for thought.
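To see that "inverse response" concretely, here's a toy unsharp mask in Python/numpy (my own sketch, not any camera's actual pipeline): it subtracts a blurred copy and adds the difference back, which boosts exactly the fine-detail contrast that film's MTF rolls off.

```python
import numpy as np

def unsharp_mask(x, amount=0.8, radius=2):
    """Classic unsharp mask: boost high frequencies by adding back (signal - blur)."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    low = np.convolve(x, kernel, mode="same")  # blurred copy = low frequencies
    return x + amount * (x - low)              # fine detail amplified, broad tones kept

# A fine-detail test signal: a high-frequency sine wave, period 20 samples
t = np.arange(400)
fine_detail = np.sin(2 * np.pi * t / 20)
sharpened = unsharp_mask(fine_detail)
# away from the edges, the sine's amplitude increases - the opposite of film's rolloff
```

A flat (zero-frequency) signal passes through unchanged, while high frequencies gain amplitude: the transfer function rises with frequency where film's falls.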
  23. In the description he also says: So overall, I don't know what to make of the video. Plus, when you eliminate all the fast cutting by watching it at 0.25x speed, it looks pretty sub-standard.