Everything posted by kye
-
...beard flowing down majestically into the unbridgeable abyss...
-
Maybe some niche rental house in LA? Even if it was just folks making music videos wanting to rent it?
-
Time will tell if it's just familiarity or if it's actually something innate. I wouldn't be so quick to assume that the human visual system has nothing to do with how we feel about what we see, and that it's all equal and is just what we're used to.
-
I'd be curious to see proper research on the topic. My predictions would be that a certain percentage would identify that it "looks different" in some way, and would be able to identify the effect in an A/B/X style test. I suspect that a further percentage would say it looks the same but that it somehow feels different, essentially anthropomorphising it as sadder or more surreal or something. I've done this test with a few people in a controlled environment: I recorded a tree in my backyard moving in a light breeze with my phone in 24p, 30p and 60p, put each clip onto a dedicated timeline in Resolve, and then played each back through the UltraStudio 3G, which switched the monitor to the correct frame rate for each one. It wasn't perfect, but all the clips were recorded with very similar settings, so it wasn't a bad test. Perhaps the best test would be to render a 3D environment at the three frame rates but have each render start from the same random seed, so the motion in the scene would be exactly the same. Of course, if you were going to do it, I'd put in a few other variables too, like varying the shutter angle. It would be a huge amount of work and would require a pretty large sample group to get meaningful results (the trial structure is sketched below).

Good question. I've noticed the gulf widening between "video should look like reality, so technological advancement is awesome" and "video should be the highest quality, to democratise high-end cinema and advance the state-of-the-art". There are also the "technology is always good, why are you talking about a story?" folks, but they're best ignored 🙂 The challenge in this debate is that if we're not even trying to achieve the same goals, then what's the point of discussing the tools?

Interesting concept! I think you might be overstating the take-over of the heavy-VFX component of the industry, and perhaps even the nature of the segments themselves. Certainly a majority of Hollywood income might be from VFX blockbusters, but the world is a lot larger than Hollywood. The majority of films made likely weren't VFX-heavy, and the majority of films that people actually cared about definitely weren't. If you asked me whether I'd seen <insert blockbuster here> then I probably couldn't answer, because truthfully they're mostly forgettable. On many occasions I've been pressured into watching a movie with my family that one of my kids chose, and the experience was mostly the same - famous actors / bursts of action / regular laughs / the USA wins in the end - and then a few days later I remember that I watched the film but genuinely can't recall the plot. This is counter to something like Roma, where years later I remember some aspects, but I also remember how it felt in critical moments and how my life is very different to theirs.
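For anyone wanting to run the test themselves, the trial structure could be generated with something as simple as this (a hypothetical helper I've sketched up for illustration, not any existing test software):

```python
# Randomised A/B/X trials: for each pair of frame rates, X is secretly
# either A or B and the viewer has to say which one it matches.
import itertools
import random

RATES = (24, 30, 60)

def abx_trials(n_per_pair: int = 10, seed: int = 1):
    rng = random.Random(seed)
    trials = []
    for a, b in itertools.combinations(RATES, 2):
        for _ in range(n_per_pair):
            trials.append({"A": a, "B": b, "X": rng.choice((a, b))})
    rng.shuffle(trials)  # so viewers can't learn the ordering
    return trials

for trial in abx_trials(n_per_pair=2):
    print(trial)
```

With enough trials per pair you can then check whether identification rates are significantly above the 50% you'd expect from pure guessing.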
-
Computer displays are a long way from being superior to human vision, so it's all about compromises and the various aesthetics of each choice. I would encourage you to learn more about how human vision works; it can be very helpful when developing an aesthetic. A few things that might be of interest:

Here's a research paper outlining that the human eye can perceive flicker at 500Hz: https://pubmed.ncbi.nlm.nih.gov/25644611/

Here's a research paper saying that people could see and interpret an image from a frame shown for 13ms, which is one frame in 77fps video: https://dspace.mit.edu/bitstream/handle/1721.1/107157/13414_2013_605_ReferencePDF.pdf;jsessionid=6850F7A807AB7EEEFA83FFEEE3ACCAEF?sequence=1

The human eye sees continually, without "frames", and has continual motion blur. Normal video also has motion blur, expressed as a shutter angle that is a proportion of the frame interval (where 360 degrees is 100%), but the eye's blur seems to persist much longer than its effective update rate, so it might (for example) have an effective "shutter angle" equivalent to dozens or hundreds of frames (see the sketch below).

Video is an area where technology is improving rapidly, and a lot of the time the newer things are better, but that's not always the case. The other thing to keep in mind is that there are different goals - some people want to create something that looks lifelike, but other people want to create things that don't look real. Many of the tools and techniques in cinema and high-end TV production exist to deliberately make things not look real, but to look surreal or 'larger than life' etc.

I've been doing video and high-end audio for quite some time and have put many folks in front of high-end systems or shown people controlled tests back-to-back, and often people do notice differences but don't talk about them because they don't know what you're asking, or don't have the language to describe what they're seeing or hearing and don't want to sound dumb, or simply don't care and don't want to get into some long discussion. Asking people who have just seen a movie for the first time if they noticed "anything different" is a very strange approach - if they hadn't seen the film before then everything about the experience would have been different. Literally thousands of things - the costumes, the lighting, the seats, how loud it was, how this cinema smelled compared to the last one, etc. Better would be to sit people in front of a controlled test and show them two images with as few variables changed as possible. Even then it can be challenging. When I first started out I couldn't tell the difference between 24p and 60p; now I hate the way 60p looks and quite dislike 30p as well. Lots of people also know what the 'soap opera effect' is without being camera nerds.
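To make the 360-degrees-is-100% convention concrete, here's the arithmetic (my own illustration; the 500Hz figure is from the flicker paper above, while the 100ms blur figure is just a made-up example):

```python
# Motion blur per frame: (shutter_angle / 360) * frame interval.
def blur_per_frame_ms(fps: float, shutter_angle: float) -> float:
    return (shutter_angle / 360.0) / fps * 1000.0

for fps in (24, 30, 60):
    print(f"{fps}p at 180 degrees: {blur_per_frame_ms(fps, 180):.1f} ms of blur per frame")

# If blur persists longer than the update interval, the effective shutter
# angle exceeds 360 degrees - e.g. 100 ms of blur at a 500 Hz update rate:
print(0.1 * 500 * 360, "degrees")  # prints 18000.0
```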
-
I'm really interested in how long the longest end needs to be. You've basically said that 70mm is a bit short and that 75 or 80 is better, but that you use a 35-150mm outdoors.

How do you feel about the 100-150mm range? Do you use it a lot? If so, what specific types of compositions and situations do you use it for?

How do you feel about the 150-200mm range? Gathering from the above and other posts, you seem to have traded it away for other considerations, but is the focal length useful at all? Or is it too long for what you shoot?

If you were given a weightless 28-200mm F2.8 lens, how many of your compositions would be above 150mm, and what would they be? What about 28-300mm?

I guess what I'm looking for is feedback on the creative elements - e.g. that anything above 150mm is too compressed, or only useful in certain situations, or feels too distant and out-of-context in an edit, or isn't useful at all, etc.
-
I'm keen to get some feedback on focal lengths. As many know, I shoot travel and want to be able to work super-quickly to get environmental shots / environmental portraits / macro / detail shots, and have narrowed it down to three options. The GX85 also has the 2x digital punch-in, which is quite usable (equivalent ranges are sketched below).

GX85 + 12-35mm F2.8: essentially a 24-70mm equivalent (48-140mm with punch-in) with constant aperture, and the best for low-light. The question is whether this is long enough to get all the portrait shots.

GX85 + 12-60mm F2.8-4.0: same as above but slower and longer. I'm still not sure if this is long enough.

GX85 + 14-140mm F3.5-5.6: slower but way longer. This would be great for everything, including zoos / safaris, but isn't as wide at the wide end, which is only a slight difference but still unfortunate.

I'm keen to hear people's thoughts on the practical implications of these options: what final shots will each provide to the edit? My experience has been that the more variety of shots you can get when working a scene, the better the final edit. I'm not that bothered about the relative DoF considerations, but aperture matters for low-light of course. I'm shooting auto-SS so am not fiddling with NDs, so the constant aperture doesn't matter in that regard.

There are obvious parallels to shooting events, weddings, sports, and other genres - keen to hear from @BTM_Pix @MrSMW and others who shoot in similar available-light / uncontrolled / run-n-gun / guerrilla situations.
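For reference, the equivalence arithmetic behind those options looks like this (a throwaway sketch using the standard 2x MFT crop factor and the GX85's 2x punch-in):

```python
CROP, PUNCH_IN = 2.0, 2.0  # MFT crop factor; GX85 digital punch-in

lenses = {
    "12-35mm F2.8":      (12, 35),
    "12-60mm F2.8-4.0":  (12, 60),
    "14-140mm F3.5-5.6": (14, 140),
}

for name, (wide, tele) in lenses.items():
    # Full-frame equivalent range, and the long end with punch-in applied
    print(f"{name}: {wide * CROP:.0f}-{tele * CROP:.0f}mm equivalent, "
          f"up to {tele * CROP * PUNCH_IN:.0f}mm with punch-in")
```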
-
When I made the move from Windows to Mac, the primary reason was that I wanted to use a computer without having a part-time job as a systems administrator, which is what Windows forces on you just to use the machine. I was stunned at the time by how much time I used to spend on the computer not doing the things I wanted to do, but doing technical things to enable those things to be done. TBH I'm over that, so while I'm perfectly capable of managing the IT of a medium-sized business, I'd rather just use the computer for what I want to use it for, and not have to troubleshoot an array of file and network management infrastructure.
-
Panasonic S5 II (What does Panasonic have up their sleeve?)
kye replied to newfoundmass's topic in Cameras
There were horses and monkeys, so maybe something sports-related? I recognised very few people there, so maybe it's the MFT YouTubers rather than the Sony FF fan club?
-
Yes, I understand the logic. One element of colour science that's often forgotten is the choice of RGB filters for the filter array, though I'm not sure if that would also be the same between these cameras, or if they'd just be supplied by Sony and therefore identical between brands. In theory this would create differences towards the corners of the gamut, but once transformed to XYZ, the differences between two different filter sets on the same sensor would be rounding errors at most (see the sketch below).

The best discussion I've seen on digital processing is the discussion of the Alexa 35 - page 52 onwards: https://www.fdtimes.com/pdfs/free/115FDTimes-June2022-2.04-150.pdf

As you say, much of what is going on might well be things that occur on the sensor. The other component worth looking at between RAW sources is the de-bayer algorithm. AFAIK you can't choose which de-bayer algorithm gets used on RAW footage, so there might be differences between the manufacturers' algorithms contributing to the differences we see in real life.
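To illustrate the rounding-errors point, here's a toy example (the matrices are completely invented: just two camera-RGB to XYZ transforms that differ slightly, as two similar filter sets might):

```python
import numpy as np

# Two made-up camera-RGB -> XYZ matrices differing by a small perturbation.
M_a = np.array([[0.41, 0.36, 0.18],
                [0.21, 0.72, 0.07],
                [0.02, 0.12, 0.95]])
M_b = M_a + 0.01 * np.array([[ 1, -1,  0],
                             [ 0,  1, -1],
                             [-1,  0,  1]])

skin = np.array([0.45, 0.35, 0.30])  # a mid-gamut value, far from the corners
print(M_a @ skin)  # ~[0.365, 0.368, 0.336]
print(M_b @ skin)  # differs only in the third decimal place
```

Values near the gamut corners (i.e. very saturated colours) would diverge more, which is the point about the corners above.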
-
Don't be so skeptical... there is much enlightenment available! https://variety.com/2023/digital/news/mean-girls-free-tiktok-23-parts-paramount-1235743213/
-
Panasonic S5 II (What does Panasonic have up their sleeve?)
kye replied to newfoundmass's topic in Cameras
These days I don't think even California would be big enough!
-
I didn't realise you had access to every camera. I realise now that this makes every statement you make correct! I'll learn my place eventually....
-
You might benefit from the Voice Isolation feature in Resolve. I'm not sure if it's a paid feature or in the free version, but it seems to work pretty well in some situations. In general though, having something on the microphones to block the wind is essential. You can't side-step the laws of physics 🙂

That's true of almost every camera. The closer you get it right in-camera, the better the output will be.
-
Gotta love a demo video that starts with a world-famous model / actress... All else being equal, products like this probably help to keep film alive. There's a chance the rich might adopt things like this, and perhaps burn through film like they're rich (because they are). For the rest of us, if you want to shoot on 8mm film, get yourself a second-hand real film camera and benefit from the economies of scale created by the film consumption of the rich.

Personally, I think the iPhone / smartphones have finally gotten to the point where this works. The compression artefacts (and even the crushing over-processing) are made invisible by the time you add enough blur and grain to get a semi-decent match (see the toy sketch below), and they've finally caught up in terms of dynamic range, so you can have contrasty mids with strong saturation but gentle and extended rolloffs. Also, the more these images trend on social media, the more my OG BMPCC and BMMCC go up in value!
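By "blur and grain" I mean something even as crude as this toy numpy sketch (not any plugin's actual algorithm, just the general idea):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def filmify(img: np.ndarray, blur_sigma: float = 1.2,
            grain: float = 0.03, seed: int = 0) -> np.ndarray:
    """Soften the image, then add monochrome grain; expects floats in [0, 1]."""
    rng = np.random.default_rng(seed)
    soft = gaussian_filter(img, sigma=(blur_sigma, blur_sigma, 0))
    noise = rng.normal(0.0, grain, img.shape[:2])[..., None]
    return np.clip(soft + noise, 0.0, 1.0)

frame = np.random.rand(1080, 1920, 3).astype(np.float32)  # stand-in for a real frame
out = filmify(frame)
```

Real film emulations do much more (halation, gate weave, per-channel grain response), but even this much hides a lot of codec nastiness.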
-
Agreed that it's often about the processing. I think there's a bunch of stuff going on, and often people don't understand the variables or aren't taking all the relevant ones into account. Also, people forget that the main goal of any codec is getting nuanced skin tones into the mids of whatever display space you'll be outputting to. In that context:

- RAW in 12-bit is linear, which is only about the same quality as 10-bit log (see the codes-per-stop sketch below)
- Log in 10-bit isn't significantly better than 709 in 8-bit when exposed well
- It's commonly believed that only 12+ bit RAW lets you adjust exposure and WB in post, and that 10-bit log is required for professional work, but thanks to colour management we can adjust exposure and WB in any of these (obviously 12-bit RAW > 10-bit log > 8-bit 709 from most cameras in real life and for big changes, but if you're only making small changes then the errors are often negligible)
- 8-bit log is much worse than 8-bit 709 unless you're delivering to HDR, because converting log to 709 pulls the bits in the mids apart significantly, which is absolutely visible
- HLG is sometimes better than log. Although in-camera "HLG" profiles vary between brands (the HLG curve itself is standardised as part of rec2100), they typically follow a rec709-like curve up to the mids, then apply a more aggressive highlight rolloff above that to keep the whole DR of the sensor, combined with a rec709 level of saturation. This is brilliant because you get the benefits of a 10-bit image with 709 levels of saturation and the full DR of the sensor. It's superior to a 10-bit log profile from the same camera (unless you clip a colour channel) because there is greater bit-density in the mids for skin tones (roughly equivalent to 12-bit log) and you still keep the whole DR. You can also change exposure and WB in post with proper colour management. The GH5 and iPhone implementations of HLG are like this.

The pros require greater quality than consumers because they have to keep clients happy when viewing the images without any compression. Many of the subtleties get crunched by streaming compression, and unless you're shooting in controlled conditions you can't really expect to keep the last level of nuance right through to final delivery. I'm keen to learn more about what happens at this stage of processing - if you have resources on this, please share!

The other aspect to consider is that RAW and uncompressed aren't the same thing. The megapixel race has obscured the fact that very close to 100% of material is now shot at resolutions above the delivery resolution. This is fine, and oversampling is great for image quality, but if you want to shoot RAW (and get the benefits of having no processing in-camera) without dealing with 5K / 6K / 8K source files when delivering 4K or even 1080p, then you have to deal with the sensor crop and having your whole lens package changed because of it. The alternative to this, and what I think is the big advantage of ProRes, is a full sensor readout downscaled in-camera, but unfortunately that typically means some loss of flexibility relative to the source image (through compression, bit-depth reduction, or even straight-out over-processing like the iPhone 14 did). A third option would be to downscale in-camera and then save the images uncompressed. There is nothing stopping manufacturers from implementing this: the GH5 downscaled 5K in-camera and many cameras record Cinema DNG sequences in-camera - just combine the two and we're there.
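On the 12-bit-linear vs 10-bit-log point, here's a codes-per-stop sketch using idealised pure curves (not any camera's actual encoding, and assuming a 14-stop sensor):

```python
BITS_LINEAR, BITS_LOG, SENSOR_STOPS = 12, 10, 14  # assumed, for illustration

def linear_codes_in_stop(n_below_clip: int) -> float:
    """Code values a linear encoding spends on the stop ending n stops below clip."""
    top = 2.0 ** -n_below_clip  # clipping point normalised to 1.0
    return (top - top / 2) * (2 ** BITS_LINEAR - 1)

log_codes = (2 ** BITS_LOG - 1) / SENSOR_STOPS  # a pure log curve spreads codes evenly

for n in range(SENSOR_STOPS):
    print(f"stop {n:2d} below clip: 12-bit linear ~{linear_codes_in_stop(n):6.0f} codes, "
          f"10-bit log ~{log_codes:.0f} codes")
```

Linear spends half its codes on the top stop and starves the shadows, while log spreads them evenly, which is why the two end up roughly comparable overall.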
-
Just got around to watching this - interesting stuff. One thing described as a key element of the French New Wave was the availability of affordable and portable 16mm film cameras, allowing the available-light, hand-held shooting style. I see from Wikipedia that Italian Neorealism was a precursor to the FNW, in full swing from ~1945-1955, with the FNW not really getting started until the late 1950s. I couldn't find any good timeline for when 16mm film became affordable, but I did note that Wikipedia says "The format was used extensively during World War II, and there was a huge expansion of 16 mm professional filmmaking in the post-war years", so maybe Italian Neorealism was the first movement to really benefit from this technological advancement? I was also under the impression that the FNW was the innovative movement that took the new tech and developed new techniques to fully utilise it, but maybe that's not the case?
-
While not action-camera sized, the BMMCC had RAW (compressed and uncompressed) from a S16 sensor (only a tiny bit smaller than a 1" sensor) approaching a decade ago. Of course, that was at 1080p - the mindless pursuit of ever-increasing resolution means that what was once possible now generates data rates that are too high, too much heat from processing, etc (rough numbers sketched below). Realistically, there isn't a huge overlap between the people who want RAW and the people who demand cameras smaller than a RED Komodo. I wish there was, as I think it would be great, but I just don't think it's true!

You didn't answer my question. I'll just write lines until you tell me to stop....
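For a sense of the data rates involved, here are back-of-envelope numbers for uncompressed 12-bit Bayer RAW (one sample per photosite, ignoring audio and container overheads):

```python
def raw_megabytes_per_sec(width: int, height: int,
                          bits: int = 12, fps: float = 24.0) -> float:
    return width * height * bits * fps / 8 / 1e6

for name, (w, h) in {"1080p": (1920, 1080),
                     "4K DCI": (4096, 2160),
                     "8K": (7680, 4320)}.items():
    print(f"{name}: ~{raw_megabytes_per_sec(w, h):.0f} MB/s")
# 1080p: ~75 MB/s, 4K DCI: ~319 MB/s, 8K: ~1194 MB/s
```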
-
Interesting! You must really have been utilising something that moved from software to hardware in the upgrade - every benchmark I've seen showed only incremental gains between the two. Unless your M1 wasn't an M1 Max?

I use a 13" MBP as my only computer, but connect it to my 32" UHD panel / hifi audio setup for normal use, and when working in Resolve I connect the UI to a 27" FHD panel to the side of my view and the BM UltraStudio to my UHD panel as a clean 1080p feed of the timeline. I also sit about 1.5 metres/yards from the screens and operate with wireless kb / trackpad / BM Speed Editor / BM Micro Panel, which is why I have a large 1080p panel for the UI.

Absolutely. My current 2020 Intel MBP is enough for editing 4K 10-bit IPB h264/h265 on a 1080p timeline with very basic colour applied; then I can apply heavier colour grading and effects before exporting. At the moment it's not fast enough to edit the footage with the heavy colour grading applied, but I make do. When the M1 Mac Mini came out I contemplated getting one as a fast editing machine, but there were just too many hurdles around syncing two computers.

The fundamental logic is this: if you need to be portable then you need a laptop, and if you want a desktop as well then you either need to separate your uses or sync between the machines. I didn't want to separate my uses, so that was it, game over: I needed a relatively powerful laptop. It would have been cheaper to get a desktop-only setup, but it just didn't work for me.
-
I thought GoPro models were fixed-focus, in focus from about 30cm to infinity? Going to a larger sensor might mean they'd have to include AF, which would be a significant change from the fixed-focus approach (see the hyperfocal sketch below). Still, it would be good to get a larger sensor, as it would improve low-light performance etc. The size of the camera is a variable to watch, though.

I thought the RX0 was a really interesting camera, and despite Sony telling everyone it wasn't an action camera and not to review it as one, most people were too stupid to take that advice, so it was largely misunderstood and never really utilised for its potential. Unfortunately it had many of the usual Sony weaknesses, including overheating. It would be really interesting to see what Panny or Oly would do with a similar form-factor, but given how poorly the RX0 was understood, I doubt they'd make an attempt...
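The fixed-focus trick is basically the hyperfocal distance, and small sensors make it easy. A rough sketch with guessed GoPro-ish numbers (3mm lens, f/2.8, 0.006mm circle of confusion - all assumptions on my part):

```python
def hyperfocal_m(focal_mm: float, f_number: float, coc_mm: float) -> float:
    """Hyperfocal distance in metres: f^2 / (N * c) + f."""
    return (focal_mm ** 2 / (f_number * coc_mm) + focal_mm) / 1000.0

H = hyperfocal_m(3.0, 2.8, 0.006)
print(f"hyperfocal ~{H:.2f} m; focused there, sharp from ~{H / 2:.2f} m to infinity")
# ~0.54 m, so sharp from ~0.27 m - in the ballpark of that 30cm figure
```

A bigger sensor means a longer focal length for the same field of view, the hyperfocal distance grows with the square of that, and suddenly you need AF.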
-
Not all. The Canon XC10 included the whole sensor DR in its standard profile. I really wish other camera companies would adopt this practice, but unfortunately they don't. https://tech.ebu.ch/docs/tech/tech3335_s17.pdf

Don't confuse bit-depth with DR; they're independent variables. You could take an ARRI Alexa image and make it 4-bit if you liked, and that wouldn't radically reduce the DR (see the sketch below).

You're confusing a bunch of things. I personally absolutely HATE the look of 60p. I am so sensitive to this look now that 30p also looks pretty awful to me. It looks far less like reality than 24p does; 60p/30p look like hyper-reality, like how reality would look if I was part robot living in the year 3000 after I'd taken my smart drugs for the day. I don't know why they look like this, but they do. If you analyse all the variables involved, you become aware that 24p and 60p are both significantly different from how reality looks to our eyes; they just make different trade-offs, and neither one is more 'correct' than the other. The errors of 24p are simply less offensive than those of 60p, to me and many others. I can understand that watching sports in 60p is useful because there are quick motions of the ball etc, so it's great for making things obvious, but it doesn't look real.

Oddly, when playing video games, 60p / 120p / 240p don't have the same look to me as video. Maybe it's to do with interacting vs passively watching, I'm not sure. Maybe it's that games are artificial, so the artificial look of those frame rates doesn't clash with the aesthetic. I understand that you like the look of 60p and want to make things look like reality, but that's not what everyone wants.

Cool. If we promise never to say a bad word against the mighty and perfect GoPro, will you promise not to come and tell us off for blaspheming? Seriously though, the sooner you accept that not everyone wants what you want, shoots what you shoot, or shoots how you shoot, the sooner you'll be able to discuss things rather than always arguing with people.
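To illustrate the Alexa point: quantising a log-encoded image keeps the black-to-white ratio and only coarsens the steps in between. A sketch with an idealised log curve (not ARRI's actual LogC) over an assumed 14-stop sensor:

```python
import numpy as np

SENSOR_STOPS = 14
scene = np.geomspace(2.0 ** -SENSOR_STOPS, 1.0, 100_000)  # linear scene values

encoded = (np.log2(scene) + SENSOR_STOPS) / SENSOR_STOPS  # log-encode to [0, 1]
for bits in (12, 8, 4):
    levels = 2 ** bits - 1
    q = np.round(encoded * levels) / levels               # quantise the log signal
    lin = 2.0 ** (q * SENSOR_STOPS - SENSOR_STOPS)        # decode back to linear
    print(f"{bits}-bit: ~{np.log2(lin.max() / lin.min()):.1f} stops of DR, "
          f"{levels + 1} tonal steps")
# All three print ~14.0 stops; only the number of tonal steps collapses.
```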
-
Here's a BTS and sample footage from a music video shot on the Pro model... It shows some temporal NR ghosting (admittedly this footage is a bit of a torture test in that regard), but it's still worth noting (linked to timestamp):
-
So, now that even a phone is a capable camera, we'll have to stop obsessing over the specs and learn to make engaging films? ........NAH!!!!
-
Panasonic S5 II (What does Panasonic have up their sleeve?)
kye replied to newfoundmass's topic in Cameras
I have a very vague recollection of a Panasonic rep saying that they weren't interested in going to 8K, but who knows if that was correct or if something has changed since. The idea of having a base model, a video variant and a high-res stills variant is a pretty common one now.
-
Panasonic S5 II (What does Panasonic have up their sleeve?)
kye replied to newfoundmass's topic in Cameras
They've registered two new cameras in China recently, according to 43rumors: https://www.43rumors.com/panasonic-officially-registered-another-new-model-in-china-we-now-have-two-cameras-coming-soon/

How long is it normally between wild YouTuber parties and NDAs elapsing?