Everything posted by kye

  1. I recently asked for book recommendations to learn about human vision and was given a link to a free PDF. It is incredible. I'm only a quarter of the way through, but I'm absolutely blown away. The human vision system looks like it was designed by committee and then re-imagined by Dali and Picasso, while on drugs. It is a wonder we can see anything at all! Did you know that the rods and cones (which detect light) are BEHIND a bunch of nerves and nerve cells and blood vessels, so the light has to go through a bunch of crap before you even sense it? The book is actually a mix of how the human vision system works and also what we have done with the tech to try and align to it, so it's a nice blend of biology and tech. It's also very readable and tries to be as non-technical as possible. This is a rare find compared to other books that are hugely tech heavy. Take the red pill with me... download it here: https://www.filmlight.ltd.uk/support/documents/colourbook/colourbook.php (download it by clicking on the box next to the file size).
  2. Well, it is a telephone, after all.... Considering how bad the 14 was compared to this, if they keep going the 17 will be like a MILC, the 18 a cinema camera, and the 20 an Alexa!
  3. Yeah. It also didn't look too bad with a Black Pro Mist filter either:
  4. I'm surprised no-one has posted this yet... Gerald tested the DR. The TL;DR is that below about ISO 1000 it measures 12.2 stops, but around ISO 1000 it starts rising, and at ISO 1480 he measured 13.8 / 13.3 stops. Gerald said he suspects the native ISO is around 1250 or so, and that the DR going down below that is typical of sensors set below their native ISO. All in all, this is a seriously impressive result. It makes me wonder: if you set it to full-auto and use it in daylight, will you be limiting the DR by forcing it to a lower ISO? That's not quite so ideal.
  5. Well, if it goes from GX3: to FX3: ..the GX3 will be the size of a matchbox!! Then, in a massive twist no-one saw coming... Sony releases the next version of the GX85....!
  6. (Panasonic G9 mk2)
    I know of some VFX folks that use these in bulk to capture high-res plates for compositing, so they're at least known and in-use in professional circles. I don't know of examples where it was used as the main camera, but that's the thing about people that actually make content - mostly they're not online talking about the brand of paper-clips they use 🙂
  7. 8K is cancelled. Just use AI to make it better... IMAX does! https://ymcinema.com/2023/10/02/imax-ceo-we-use-ai-to-blowup-images/?expand_article=1 From IMAX CEO, Richard Gelfond: "we use it to blow up images. We use AI to make the images look better, we sharpen the edges, and we take the grain out. We have been using AI for supplements for a while." "However, the best reference for that utilization of the IMAX proprietary algorithm and AI tech, is the talked-about sci-fi project, The Creator. The movie was shot entirely on Sony FX3 which is not an IMAX-certified camera. Nevertheless, the ProRes RAW footage was undergone special treatment by IMAX AI technologies, in order to boost the imagery and make it capable enough to hold up against the huge canvas." Other streaming services only store feature films in 2K and upscale to 4K for people who stream in 4K, and now IMAX, the supposed best quality folks upscale using AI. We all knew that streaming was a low-quality distribution, and now IMAX is too... How can they get away with this???? Maybe.... *gasp* ....because it's not visible? I mean, you could say that AI is good enough for IMAX, but you could also say that visual perception is so low that you can't even tell that IMAX is upscaling with AI! It works both ways!! 😂😂😂
  8. I don't think it matters as long as it's not white or black. Even if they just recorded a test clip where they point it at different stuff. If there was a pixel stuck on or off you'd notice it pretty easily I'd imagine. "Film something bright and something dark" might be relatively simple advice to understand?
  9. It might be worth trying to come up with an "identity" for your audience, so you can hit the right level of info. For example, if you imagined you were talking to your partner, or next-door neighbour, or grandmother, you would say things in different ways. You could even imagine there are two or even three audience members with different levels of knowledge. I get the impression that once you get going and start getting lots of comments on videos you'll get a sense of who is out there watching. YouTubers often talk to their followers in ways that make me think they have a good sense of what their expertise is and what they like and don't like, etc. But, to get you started, you might have to make up your audience.
A wireless mic sounds like the best solution. Even if it fails occasionally, a voiceover in post is a good fall-back option and far from ruining the video.
Lights are a sensible solution, especially because large-aperture lenses have shallow focus planes, and I'm not sure if it's worse to have a noisy image or one where you're out of focus half the time.
For monitors, is it practical to have a wireless monitor? If you had a wireless monitor, a wireless mic, and a wireless trigger then you could put the camera wherever you like and still be able to control it and check focus etc. Most solo shooters only record from close to the camera with wide lenses, and that's one aesthetic, but there are other aesthetics too, and using a longer focal length from further away gives a much more professional look. Martijn Doolaard is a self-shooter and films from far away as well as close/wide, which gives a higher production value I think. Here's a video linked to an example: https://youtu.be/Ybgr8OUskcM?t=563
The alternative solution to using a monitor with power-out is just using a battery plate. These seem to be really useful as they just take a battery and often have many different power outputs. One of these might make your setup more flexible in future if you decide to add more accessories or change the monitor etc.
  10. 800 isn't really base ISO for the sensor, just for that mode, and as @Django mentioned they could do one at ISO 200 that is a lot cleaner. I'm fine with that level of noise, but that's because I'm a fan of cinema, which doesn't require anything even remotely close to 8K - it's really just for cropped modes and pixel-peeping folks. I think people are broadly aware of oversampling and its advantages. However, the other thing to keep in mind is that you don't need to double the Nyquist frequency - you only need a slight margin, like audio being recorded at 48kHz and then delivered at 44.1kHz, or cameras like the GH5, which downsampled 5.2K to 4K. The more I learn about what is going on under the hood, the more I realise that statements like "8K bayer is 8K" don't even make sense. To truly unpack that statement would require a whole textbook, and that's just the technical side, ignoring the perceptual aspects!
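To put those margins in numbers, here's a quick arithmetic sketch. The 5184-pixel GH5 sensor width and the 3840-pixel UHD target are my assumed figures for illustration, not specs from this thread:

```python
# Oversampling margins as plain ratios: the useful real-world examples
# come in well under the 2x that a naive reading of Nyquist might suggest.
examples = {
    "audio: 48 kHz recorded, 44.1 kHz delivered": 48.0 / 44.1,   # ~1.09x
    "GH5: ~5.2K sensor width to UHD 4K": 5184 / 3840,            # 1.35x (assumed width)
    "full 2x oversample: 8K to UHD 4K": 7680 / 3840,             # 2.00x
}
for name, ratio in examples.items():
    print(f"{name}: {ratio:.2f}x")
```

Even a ~9% margin (the audio case) is considered enough headroom for the anti-alias filter to work with, which is the point: a modest oversample already buys most of the benefit.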
  11. (Panasonic G9 mk2)
    Considering how long it takes to design electronics products, the GH8 is probably also in "very early stages" of development. But, as they say, talk is cheap! Until it's in your hands it's not real..
  12. If true, that would definitely explain it. It sounds quite plausible too.
  13. Record a shot of the blue sky, or a wall perhaps? Anything that isn't 0 or 100% in any colour should be visible in that test clip I would imagine.
  14. There's a bit more noise in these images than I would have thought. Were you keeping to base ISO? It is 8K, so by the time it's used in the real world this will have mostly cleared up, but by that logic there's no point in having 8K - might as well have had better/bigger pixels.
  15. That was my impression from what he said. I'd imagine that Filmic Pro will provide an update that will use the full capability of the new Prores Log mode, but I don't think it will provide any advantages over the BM app. TBH, if I owned Filmic Pro, I'd be sitting in a closed room with the smartest people in the company, scribbling on a whiteboard, furiously trying to work out how to stay relevant. My prediction: we'll see a torrent of YouTube videos sponsored by Filmic Pro as a last-ditch effort to keep revenue up (a huge advertising campaign is a standard sign a company is in trouble).
I think you make a good point. The way I think about it is this: your smartphone contains one or more small-sensor (just under 1"), fixed-lens (probably prime) digital cameras, which record internally/externally to h264/h265/Prores/RAW files. If you want to shoot deep-DoF videos with the available FOV in good light, and the codec is sufficient for you, then it's a solid choice. If you want to shoot with a different lens or a larger sensor, in very low-light conditions, or you need better image quality than it can provide, then it's not a good choice. This is the same for any camera. Name a camera and there are things it's not good for. The ARRI Alexa 65 is a terrible choice for skydiving, a smartphone is terrible for shallow-DoF projects, the RED V-Raptor XL is not suited to home or cat videos, an IMAX 70mm camera would be miserable for on-location night-time safari shoots, etc.
  16. Ah, I just realised I mis-read your comment in my above reply as "the quality that the manufacturers keep in the drawer (through limiting their potential with too heavy-handed image processing and compression)". You are right, of course, especially considering that the main reason people keep them in a drawer is because of their limited technical specifications, when realistically people have just gotten used to the latest technologies. Most cameras we keep on shelves or in drawers are better than 16mm film, and that was what was used to shoot all but the highest budget TV shows and was used on a number of serious feature films too, like Black Swan (2010), Clerks (1994), El Mariachi (1992), The Hurt Locker (2008), Moonrise Kingdom (2012), The Wrestler (2008), etc etc.. I think the biggest problem is that people don't know how to colour grade, or don't know what is possible. I mean, anyone with a Blackmagic camera that shoots RAW has enough image quality to make a feature. Hell, if the movie Tangerine could be a success when shot on the iPhone 5S, then no-one has any excuses for not being able to write a movie that is within the creative limitations of their equipment. Even a shitty webcam could be used to shoot a found-footage horror movie set in the days of analog camcorders!
  17. The Hawk "emulation" was simply two blur operations, each at a partial opacity. In Resolve:
Node 1: Blur -> Blur tool at 0.53 with Key of 0.6
Node 2: Blur -> Blur tool at 1.0 with Key of 0.35
The first one (0.53) is the small-radius blur that knocks the sharpening off the edges; if you were using this one on its own you might even want to make it closer to 90% opacity. The second one is a huge blur (1.0) that provides the halation over the whole image. I use the Resolve Blur tool because it's slightly faster than the Gaussian Blur OFX plugin on my laptop, but the OFX plugin allows much finer adjustments, so it might be easier to play with. You can also adjust the size of the blur and the opacity in the same panel, so it might be easier to get the look you want using it. What are you grading? I'd be curious to see any examples if you're able to share 🙂
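For anyone who wants to play with the same two-node idea outside Resolve, here's a rough Python sketch. It is only a stand-in, not Resolve's actual processing: the mapping from Resolve's blur amount to a Gaussian sigma is a guess on my part, and the function names are mine.

```python
import numpy as np

def gaussian_kernel(sigma):
    """1-D Gaussian kernel, truncated at ~3 sigma and normalised to sum to 1."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur2d(img, sigma):
    """Separable Gaussian blur of a 2-D (grayscale float) image."""
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, tmp, k, mode="same")

def blur_mix(img, sigma, key):
    """One 'blur node': blur the image, then mix it back over the
    original at `key` opacity (like a reduced Key output gain)."""
    return key * blur2d(img, sigma) + (1.0 - key) * img

def hawk_emulation(img):
    # Node 1: small blur, mixed at 0.6 - knocks sharpening off the edges.
    img = blur_mix(img, sigma=1.5, key=0.6)    # sigma values are assumptions
    # Node 2: huge blur, mixed faintly at 0.35 - overall halation/diffusion.
    img = blur_mix(img, sigma=25.0, key=0.35)
    return img
```

The two sigmas are deliberately far apart, mirroring the small-radius edge softener plus the big halation blur described above.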
  18. Yeah, if you can bypass whatever the camera is doing and get the RAW straight off the sensor then it should be a good image. Sony know how to make sensors, and the FX3 shouldn't overheat.... From the Atomos page on the FX3: https://www.atomos.com/compatible-cameras/sony-fx3 I never hear people in the pro forums talking about Canon, only ARRI / RED / Venice. Gotta shoot fast and get that IMAX image! Absolutely. It's one of the reasons I am so frustrated, especially as now they've "unlocked" this quality via bolting on an external recorder instead of just giving us better internal codecs. I mean, for goodness sake, just give us an internal downscale to 2.8K Prores HQ with a LOG curve and no other processing! Even the tiny smartphone sensors look great in RAW. Scale that up to MFT or S35 and imagine the quality we'd be getting from every camera! My favourite WanderingDP video explains everything...
  19. I'm curious to hear how it holds up on an IMAX screen too - keep us informed. The link that @ntblowz shared has lots of info: "the filmmakers use the Atomos Ninja V+ as an onboard ProRes Raw recorder" "75mm Kowa 2x anamorphic lens with a prototype of the Atlas Mercury 42mm as a backup for the small spaces where the 75mm was too tight" TBH, the choice of the FX3 could have been as simple (and uninformed) as this: they were aware of ARRI, RED, and Sony (through the Venice), looked at those cine lineups to find the smallest cinema camera, and never evaluated Panasonic or Fuji because they were simply unaware of them. Sometimes a lot of these industry heavyweights can be just as dogmatic about their favourite brand, and just as naive / hoodwinked by rumours / misunderstandings / marketing, as the worst camera fanboys/fangirls online.
  20. I've developed a more sophisticated "false sharpness" powergrade, but it was super tricky to get it sensitive enough to tell the difference between soft and sharp lenses (when no sharpening has been applied). Here are some test shots of a resolution test pattern through two lenses: the Master Anamorphics, which are gloriously sharp, and the Hawk Vintage '74 lenses, which are modern versions of a vintage anamorphic.
ARRI ungraded with the false colour power-grade: Note that I've added a sine-wave along the very bottom that gets smaller towards the right, and acts as a scale to show what the false sharpness grade does. Here's the Hawk: and the Zeiss one with a couple of blur nodes to try and match the Hawk:
Here's the same three again but without the false sharpness powergrade. Zeiss ungraded: Hawk ungraded: Zeiss graded to match the Hawk (I also added some lens distortion too):
Interestingly, I had to add two different-sized blurs at different opacities - a single one was either wrong with the fine detail or wrong on the larger details. The combination of two blurs was better, but still not great. I was wondering if a single blur would replicate the right shape for how various optical systems attenuate detail, and it seems that it doesn't. This is why I was sort of wanting a more sophisticated analysis tool, but I haven't found one yet. TBH this is probably a whole world unto itself, and it's probably too detailed to matter if I'm just trying to cure the chronic digitalis of the iPhone and other digital cameras.
....and just for fun, here's the same iPhone shot from previously with the power-grade: If I apply the same blurs that I used to match the Zeiss to the Hawk, I get these: It's far too "dreamy" a look for my own work, but the Hawk lenses are pretty soft and diffused:
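If anyone wants to experiment with a crude version of this kind of analysis, local gradient energy is a simple proxy for sharpness. This is only a rough stand-in for the powergrade described above (which I haven't seen inside); the function name is mine:

```python
import numpy as np

def sharpness_map(img):
    """Crude 'false sharpness' proxy: local gradient magnitude,
    normalised to 0..1 so it can be viewed as a false-colour overlay.
    Sharp edges (fast local transitions) light up; soft ones stay dark."""
    gy, gx = np.gradient(img.astype(float))
    energy = np.hypot(gx, gy)
    peak = energy.max()
    return energy / peak if peak > 0 else energy
```

Comparing the mean of this map between a sharp and a soft shot of the same chart gives a single "sharpness score", though (as noted above) a single number can't capture how a lens attenuates different detail sizes differently - that needs something more like a full MTF curve.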
  21. LOL I've watched dozens of hours of "how to edit a wedding video" tutorials. They're very similar to my work in many ways, where footage is likely to be patchy with random technical and practical issues to solve and the target vibe is the same - happy fond memories. BUT... I've never shot a wedding video, so I haven't taken the oaths to keep all your secrets!!
  22. I'm not sure what you're seeing, but there seem to be two things required. The first is to get Resolve to not automatically do anything to the footage. IIRC you can do this by going to the clips in the Media tab; there's an option when you right-click on the clips that is something like Bypass Colour Management or similar. That should tell Resolve not to do anything automatically based on metadata in the clip. The second one is the conversion, which should just be a CST from the right space to the destination one. IIRC the video suggested it was rec2020/rec2100 HLG, so you should be able to do a CST from that to whatever LOG format you want to work in.
Keep in mind that you might want to do the CST at the start to a common working colour space for all your media and cameras, so that any grades or presets you create will work the same on all footage from any camera. I use DI/DWG for this purpose. Then if you have a LUT that wants a specific colour space, you just do a CST from DI/DWG to that log space, put the LUT after that, and you should be good. For example, the iPhone shots above had the following pipeline:
1. Convert to DI/DWG: I manually adjust the clips to 709 with a few adjustments and then use a CST from 709/2.4 to DI/DWG
2. All my default nodes etc. are in DI/DWG
3. CST from DI/DWG to LogC
4. Resolve Film Look LUTs, LogC to 709 (mostly the Kodak 2383 one)
  23. The format of this video is a pretty common one I think. My understanding of this style is this:
1. Go out and do something, film what you can
2. Review the footage and "find the story"
3. Write and record a "piece to camera" (PTC) shot (https://en.wikipedia.org/wiki/Piece_to_camera)
4. Edit the PTC into a coherent story, focusing on the audio
5. Put the shots you recorded from (1) over the top of the PTC to hide your cuts
6. If there are still gaps in the edit or it still doesn't work, record another PTC in front of the editing desk that explains or clarifies, and put that into the edit
I see these videos often, including the snippets from the person in the edit. Sometimes they have recorded a PTC so many times that the whole video is just a patchwork of clips from different times and locations that you're not even sure how it was shot anymore. Casey Neistat used to film his videos where each sentence, or even every few words, was recorded in a different location, so during the course of a sentence or two he'd have left his office, gone shopping, and returned home. Here's a video I saw recently that has this find-it-in-the-edit format: The above is an example of where the video was very challenging to make, which is why it required such a chaotic process, but it shows that if you are skilled enough in the edit you can pull almost anything together. Also, go subscribe to her channel - she's usually much more collected than the above video! 🙂
Wedding videos often follow a similar pattern in the edit:
1. Find one or two nice things that got recorded (this is normally a speech from the reception, or perhaps, if the bride and groom wrote each other letters, them opening their letters from their partner and reading them out loud)
2. Edit these into a coherent audio-edit (you literally just ignore the visuals and edit for audio only)
3. Put a montage of great shots from the day over the top, showing just enough footage from the audio so you know who is speaking
4. Put music in the background and in any gaps
5. Done!
I'd also suggest that when you say most other people film vlog style with a phone and you want to take it up a notch, try to do that just by filming with your camera on a tripod, but otherwise copy their format at first. Innovation is an iterative process, and the way they shoot and edit their videos likely has a number of hidden reasons why things are done that way. Start by replicating their process (with a real camera on a tripod), see how that goes, and see what you can improve after you've made a few of them and gotten a feel for it. The priority is the content and actually uploading, right? So focus on getting the videos out and then improve them once you get going. It's always tempting to think you can look in from the sidelines and improve things, but until you've actually done something you don't understand it. Real innovation comes from having a deep understanding of the process and solving problems, or approaching it in a different way.
  24. I've seen this get recommended online elsewhere. Personally I just shot a colour chart with the phone and made a curve to straighten out the greyscale patches, plus a bit of hue vs hue and hue vs sat curves to put the patches where they should be on the vectorscope. I've tried using a CST and didn't like the results from that as much as my own version. After I did my conversion, my other test images all straightened out nicely and the footage actually looked pretty straightforward to grade.