Everything posted by kye
-
@FHDcrew could always grade the footage and then run the finished edit or selected shots through it. Not sure how that fits with the workflow though.
-
Any idea what the differences are from a technical perspective? I've never heard of dual base ISO as distinct from native ISO.
-
Probably no camera would survive the temps in the depths of hell, but if you're only shooting at the gates of hell (in Turkmenistan) then I don't think you'd need great low-light as it's self-illuminating. https://en.wikipedia.org/wiki/Darvaza_gas_crater
-
That would do. There are lots on eBay - just get an m42 (the mount of those lenses) adapter for whatever mount you have. It's a fully manual lens so a dumb adapter is fine. That also means you can look into any of the other m42-mount lenses, of which there are many. I'd avoid m39/l39 lenses though, as there were multiple standards and often the adapters don't work etc - I've got a couple of m39 lenses and I can't get them to work on my camera, even though I've bought multiple adapters and even modified one of them.
-
I've noticed that there are now, finally, feature films in Hollywood being edited on Resolve. This has really only started in the last year or two, so in that sense Resolve is still very young as an editing package. However, once the pros start using it in any serious capacity I think the features it lacks and the little niggles and pain points will probably start to get addressed, and it will become a much more polished editing platform for large projects. I know that FCPX and Premiere have worked with editors taking feedback and suggestions for many, many years, so Resolve would benefit enormously from similar attention. They could end up surpassing FCPX and AVID by taking the best of both of them, and it's also integrated into the colour suite, which negates the need for cumbersome round-tripping between them. It's all just possibilities at the moment, but Resolve has consistently made significant improvements (even adding entire pages!) year on year, and shows no signs of stopping, so I suspect its development into a professional-level editor is probably inevitable.
-
Only if you have a seriously powerful computer. A quick google gave me two sources that said that h265 requires 5-10x more processing power than h264. It likely is the processing, but unfortunately you don't get to choose any/enough options on that. Most cameras that offer Prores treat the image with respect and most with h26x do not, so the codec acts as a proxy for the intended market and therefore the level of molestation the image will have gone through.
-
If you haven't got one already - get a Helios!
-
I think when it comes to 1080p vs 2-3K it really matters whether it's downsampled from the sensor or not, as obviously the downsampled image will have more detail. You're absolutely right about ~2.5K-sensor cameras outputting 2K or 1080p - hugely underrated. I'm not sure how I'd go with a 1080p sensor - I've got both the OG BMPCC and BMMCC but I've struggled to get the level of resolution and sharpness I want from them. The GH5 downsampling to 1080p is a bit on the sharp side but easily adjusted with a very slight blur.

I think my requirement for ALL-I codecs might not apply once I move to the new Apple silicon, as my MBP is Intel-based and really lags in video editing performance by comparison. It's still lightning fast on 1080p ALL-I footage though, so it's fine there. I remember researching the Intel chips' h265 hardware acceleration and could never confirm whether it covered 10-bit files, although I think it did for 8-bit. The Apple chips should have enough power to muscle through it regardless, I'd imagine - especially if it's a 4K or lower file.

For some reason the manufacturers still seem to think that the good codecs/bitrates are only needed at the native resolutions, which is a pity - having to choose between 50-100Mbps 1080p and 2000Mbps 6K! In that way an external recorder would be great, as you'd have in-between options, but it's a whole other level of hassle.
-
It's also worth pointing out that the IQ you get from a camera depends on:
- the codec
- the bitrate
- the performance of the camera
- the processing the camera does prior to compression
- the quality of the processing algorithm in the camera

The C100 downscaled from a 4K sensor to a 24Mbps file that was easily better than the 100Mbps 4K files from many cheap cameras. Sadly, in this subject, everything matters.
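To put the bitrate variable alone in perspective, the file-size arithmetic is simple. A quick sketch using the two bitrates mentioned above (the one-hour duration is just for illustration):

```python
def gigabytes_per_hour(bitrate_mbps: float) -> float:
    """File size in GB for one hour of footage at a given video bitrate."""
    megabits = bitrate_mbps * 3600   # 3600 seconds in an hour
    return megabits / 8 / 1000       # megabits -> megabytes -> gigabytes

# The C100's 24Mbps files vs a typical 100Mbps 4K file:
print(gigabytes_per_hour(24))    # 10.8 GB/hour
print(gigabytes_per_hour(100))   # 45.0 GB/hour
```

Which is the point: the 24Mbps file is a quarter of the size, and the bitrate alone tells you nothing about which one looks better.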
-
Great topic. Short answer is that I don't think I really need much more than 200Mbps. I'd prefer an internal downscaled compressed RAW, but as that's not available, I'd settle for ALL-I h264 or Prores in either 1080p or 4K. Long answer is - I wish you could control bitrate and codec independently on all cameras! Sadly, they're preset and not uniform between cameras/brands.

As you know, I shoot my own travels and the odd adventure, which means I have a low number of projects per year but shoot a lot of footage during those projects. It also means I'm shooting while in moving vehicles, while walking / on stairs, and in (essentially) completely uncontrolled conditions. Most venues ban tripods and frown on "professional" looking rigs, so an understated appearance is a criterion, not a preference.

I care about two things. The first is what the camera can capture, and the second is my experience in post. What the camera can capture is determined by stabilisation, DR, and practicalities (turn-on time, battery life, etc). This is why I've got the GH5. It's also why I'm extremely reluctant to get an external recorder - with a MILC, lens, and mic, the rig already gets long stares from security guards and members of the public alike.

For my editing experience, I want an ALL-I codec so my MBP can edit smoothly (forwards and backwards is crucial when you edit to music like I mostly do). I edit on 1080p timelines so that my colour grading doesn't kill the machine, which is necessary when shooting in high-DR available-light situations that might also need stabilisation in post. I publish to YT in 1080p because it's good enough, and even if I wanted to upload in 4K for the YT bitrate I can just upres from 1080p and be fine - no-one can tell. I store the files on internal or external SSD for editing, which is why I care about the total file size for an individual project. It gets archived on spinning disks so my overall storage costs aren't huge.
Considering all the above, I'd prefer an internal downscaled compressed RAW format, maybe in something like 3K, which would give some latitude for stabilisation and cropping in post. This would be ALL-I and with less brutal compression so would be fast to edit, it would have excellent bit-depth for serious grading from bad lighting, it wouldn't have strange colour profiles baked in that make WB refinement in post a challenge, and the bitrates wouldn't be prohibitive. Second to this, I'd be happy with any of:
- 1080p Prores HQ (~170Mbps)
- ~200Mbps 10-bit 422 ALL-I h264 (not h265, as it's more intensive in post)
- 4K Prores LT - a bit larger in file size, but the ease of Prores decompression compensates for the extra resolution (on a 1080p timeline), and the extra resolution would be useful if I wanted to stabilise or crop in post
- 4K 400Mbps 10-bit 422 ALL-I h264 (not h265)

Since the pandemic made me cancel all my travel I've taken a step back and concentrated more on editing and sound design. Paying more attention to the look of professional TV/movies and ignoring camera YT, I've become less precious about IQ and more focused on what will be smaller, faster, and more inconspicuous, to get more of the shots I need even if the image is a bit more rough-n-ready. In reviewing past projects I've found that I would have liked more shots for montages, so it's about coverage. In high-end travel shows (e.g. Parts Unknown, and even things like Chef's Table) there are shots with clipping (other than the sun), shake, and various artefacts of difficult shooting, so if it's good enough for them it should be good enough for me. It's obvious they edit for content, rather than making pretty (boring) pictures. My ideal shooting experience would be a smartphone with acceptable image quality.
Sadly, the latest iPhone only uses Prores to faithfully reproduce the disastrous image processing they do prior to compression, so that's unlikely to be acceptable for a long, long time. Maybe over-sharpened 8K is good enough on a 1080p timeline to match un-molested 1080p footage? Maybe it'll need to be 12K before the artefacts are blurred out.
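For what it's worth, the storage difference between those candidate formats is easy to sketch. The bitrates are the ones listed above; the 10-hours-per-project figure is an assumption of mine purely for illustration:

```python
# Rough per-project storage for the candidate formats (Mbps -> GB).
HOURS_PER_PROJECT = 10  # assumed figure, not a real project number

candidates = {
    "1080p Prores HQ (~170Mbps)": 170,
    "1080p ALL-I h264 (200Mbps)": 200,
    "4K ALL-I h264 (400Mbps)": 400,
}

for name, mbps in candidates.items():
    gb = mbps * 3600 * HOURS_PER_PROJECT / 8 / 1000
    print(f"{name}: {gb:.0f} GB per project")
```

So the 4K ALL-I option roughly doubles the SSD footprint of the 1080p options for the same amount of footage, which is exactly the trade-off against the extra stabilisation/cropping latitude.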
-
In a word, yes!
-
How well does the AF work with Sony cameras and EF lenses with an adapter? Is it a problem, or all good?
-
Great post, and lots of interesting things to pick up on.

The idea that rather than shooting a large VFX scene on film and then doing the VFX on that, you'd instead just do it completely in CGI is an interesting one, and something I'd imagine would be increasingly attractive. I'm reminded of the projection screen from The Mandalorian and how it has replaced other types of VFX processes (it's obviously VFX, but it's VFX done in pre and production rather than in post - what a concept!).

In terms of getting the film look, I think it really depends on how picky you want to be. For example, in one of the colourist sessions I watched (from a working pro colourist, not a YT colourist... it might have been Walter Volpatto but I'm not sure), the colourist said that a large number of projects (maybe more than half?) get graded under a print-film emulation LUT (PFE) like the Kodak 2383, and that it was kind of an unspoken secret of the colour grading world - that everyone does it and people just don't talk about it or easily admit to it. (This, of course, is a process that was common when the DI went back onto film for distribution, because the colour profile of the distribution film stock would need to be taken into account.) Does that mean that the majority of professionally colour graded projects have the film look? I'd suggest not, but what about those with PFE and grain? What about if they also add halation? etc etc.

However, if you're genuinely after a film look, then you're basically screwed, because Steve Yedlin was forced to make his own software to do it - nothing available was up to the task. I've seen colourists casually mention that they always use a film grain overlay scanned from real film because no film grain emulation they've ever seen was realistic. I don't know about you, but I sure as hell can't tell real film grain from the well-designed emulations. Is it a trend? Sure.
I've come to realise that trends are based on the fact that we think something is uncool because our parents or grandparents did it, but when the next generation comes along and is too young to see that (or remember it), those negative associations aren't there, and so it becomes a rich source of ideas for the next trends. This happens like clockwork in music and fashion, where the fashions of 20 years ago are revisited, but in a new way that's more integrated into the current fashions. The quest to be 'new' also means that things tend to oscillate along various axes like natural/artificial, analog/digital, clean/dirty, happy/sad, etc.

I suspect that the film look will rise up, peak, and then, with a long tail, gradually decline, but with the elements of it that have lasting appeal being gradually incorporated into the ongoing repertoire, to be drawn upon at times that are deemed appropriate. For example, black and white is still used occasionally, despite us having had colour for longer than most of us can remember, but it's only used in ways that we've since collectively settled on as being appropriate (e.g. getting a timeless or vintage feel, perhaps for very sombre pieces, etc).

Which elements of the film look will persist is interesting. I think we could do without the overblown saturation it had in various scenarios (for example the glowing pink/orange hues that parts of the face could generate), but the contrast curve is likely to be with us for a long time, and the orange/teal look, while not being explicitly from film per se, will definitely stay with us indefinitely, for the quite sensible reason that it makes skintones stand out against the background more.

The Wandering DP YT channel is just spectacular, and that one in particular!
His videos are peppered with all sorts of amusing little quips about various aspects of things, and for me, as someone who doesn't work in the industry but understands enough to get what the joke is referencing, I find them absolutely hilarious. He raises a good point in that video that shooting 4:3 and on film is something new and different, and I know this is a little OT, but it's worth mentioning that there are other dimensions that can also be pushed. Here's a video outlining a few:
-
I think this is one of the main things that is actually being discussed - that of lens availability. From what I understand, you can halve the size of the sensor and halve the size of the lens and get the same rendering of FOV and DoF, but with a difference in how much light is gathered. Often "the Hollywood look" is referred to as being a wide close-up with shallow DoF, which is easier on FF. However, the main point is that it's only easier on FF because of the lenses that are out there in the world, rather than any particular challenge with making wide/fast lenses for smaller (or larger) sensors. In this way, while people are talking about FF sensors, they're actually not talking about the sensor or camera at all - they're talking about the availability of lenses that already exist in the world, which is mostly dictated by 1) the almost completely unrelated topic of the historic popularity of 35mm negative film, and 2) the state of globalisation and trade during the time that 35mm film cameras were manufactured in great numbers.

One thing I remember hearing was that people often shot on large format film for large / complex VFX shots, where (IIRC) 70mm / LF film had more resolution / less noise than any digital sensor and was therefore better to form the basis of those shots for the VFX teams to work on. Assuming this is true (you have to question everything these days!), that would be why film gets mentioned a lot on various blockbuster productions, when it's really being used as a specialist tool, like a Phantom might be for high-FPS shots. Of course, now with the resolution wars that might no longer be the case - maybe people just shoot the whole thing (or a few shots) on the Alexa 65 or equivalent RED/VENICE.

Having said that, I think there is a bit of an underground for shooting on film, considering the relative difficulty of getting a perfect film emulation (Yedlin had to write his own software - that's pretty serious!) and the relatively low cost of film if you're doing a larger-budget project. I also wonder how many productions are shot on sub-S35mm film, like the ~16mm and ~8mm formats. They do cost money but it's not prohibitive. Noam Kroll is into it and has given some good write-ups and cost breakdowns: https://noamkroll.com/why-im-shooting-my-next-film-on-super-16mm-how-you-can-afford-to-shoot-on-film-too/ https://noamkroll.com/shooting-16mm-film-on-a-budget-with-my-new-arri-16srii/

If I read those numbers correctly, you could shoot a 90 minute feature on S16 film, with a 5:1 shooting ratio, for ~$10K in total film costs (buying the film, processing, and scanning), which would give you a 2K Prores file to colour grade yourself. If you're making a feature then $10K isn't nothing, but it's also not a prohibitive cost, assuming you're closer to the 'professional film' end of the spectrum rather than the 'student film' end. I remember shooting a short film with my sister during her film school years for AUD $2K in total, and I think catering was our largest cost! We pulled in a great many favours for that one - I think it had a cast of about 20 (mostly extras) and took place in a high-end bar which we borrowed overnight during the week. Fun times.

Anyway... IIRC he also mentioned some costs for shooting 8mm film, but I don't think they were in those two articles above? His blog is filled with great info if you're not familiar with it. In one of the above he mentions that film is having a resurgence - obviously those articles are a bit dated, and I'm not entirely sure what evidence he presents for it.
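The sensor/lens equivalence mentioned at the top can be written as simple arithmetic. This is a sketch of the standard crop-factor maths, with example numbers of my own choosing:

```python
def ff_equivalent(focal_mm: float, f_stop: float, crop: float):
    """Full-frame-equivalent focal length and aperture for a smaller sensor.

    Multiplying both by the crop factor preserves FOV and DoF; total
    light gathered still differs (by roughly crop squared), as noted above.
    """
    return focal_mm * crop, f_stop * crop

# e.g. a 25mm f/1.4 on M43 (crop factor 2.0) renders like a 50mm f/2.8 on FF:
print(ff_equivalent(25, 1.4, 2.0))   # (50.0, 2.8)
```

Which is the whole point: there's nothing optically magic about FF, the "look" is just a question of which focal length / f-stop combinations actually exist as lenses for each mount.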
-
I am not sure which WB approach would generate the best match, or even if the WB differences I've seen in comparisons are due to the WB of the camera or just how the cameras were all treated in post. All the more reason to do your own tests I guess. Considering you own both cameras, I'd suggest you film both (match WB settings and also do a custom WB) and then play with the post workflow and see which overall approach is best. I'm curious to hear what you find 🙂
-
I've noticed that in all the cinema camera comparisons on YT there are significant WB differences between cameras, regardless of whether they say they "set all the cameras to 5600K" or did a custom WB. Some of these differences are quite significant actually - so much so that they render the comparison pretty much meaningless. This is across enough DPs that I don't think it's user error.

I would suggest that in addition to the custom WB on each camera, you get full-spectrum shots of a colour chart to use as a reference in post. In post I'd suggest using ACES or RCM to colour-manage your workflow, and testing your conversions with the colour chart to see how closely your settings get the two cameras. The reason I suggest this method is that there may be differences between how your colour pipeline handles the two LOG profiles, even though both cameras are the same brand, so any differences should be minimised. I'd compare the two shots confirming that the greys are neutral, but also that the primaries are all similar hues (i.e. no hue rotation), the same luminance, and the same level of saturation. The more colour swatches you have on the colour checker the better, so you can tell if there are any non-linear differences that might need tweaking.

Beyond that, unless you're doing a multi-cam with similar angles, you just need to match the cameras in post so that the footage from both looks like "they're both from the same universe". People are very forgiving when it comes to noticing colour shifts (unless your content is boring or they're very interested in cameras) - if you look at older movies shot on film there were often large colour differences, and those didn't send people running from the theatre.
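As a sketch of what "confirming the greys are neutral" means numerically (illustrative only - the RGB readings are made up, and in practice you'd do this via your grading tool's colour management rather than by hand):

```python
def wb_gains(grey_rgb):
    """Per-channel gains that neutralise a grey-patch reading.

    Scales R and B to match G (the usual reference channel).
    """
    r, g, b = grey_rgb
    return (g / r, 1.0, g / b)

def apply_gains(rgb, gains):
    return tuple(c * k for c, k in zip(rgb, gains))

# Hypothetical grey-patch reading from one camera's chart shot:
cam_a_grey = (0.42, 0.40, 0.36)          # reads slightly warm
balanced = apply_gains(cam_a_grey, wb_gains(cam_a_grey))
print(balanced)                          # all three channels ~0.40
```

Doing the same for the second camera's chart shot, and then comparing the non-grey swatches after both are neutralised, is what reveals the hue-rotation and saturation differences that a simple grey match won't fix.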
-
Good to hear you worked it out. Resolve has all the options but sometimes it guesses wrong and you need to roll your sleeves up and correct it. I guess that's what happens when you make something more powerful - it's got a steeper learning curve.
-
Why is 120p the normal frame-rate for 3D production? Do you deliver in 120p? or is there some sort of processing going on in post? I'm curious.
-
I disagree - those sensors are better because they're more recent. The challenge is that 1) older FF cameras didn't shoot video, and 2) in stills cameras the FF models were the most expensive. Most discussions also involve an element of "FF gives shallower DoF", only it's usually not spelled out like that - instead it's phrased as "more cinematic" or some other nebulous thing that really just boils down to shallow DoF. This is because FF lenses are made to have shallower DoF than lenses for other systems, but that's just a quirk of history rather than a fundamental limitation - they could have been made similarly for other sensor sizes, they just weren't. FF does gather slightly more light, and all else being equal, that gives lower noise and therefore higher DR. But that means that in a decade or two we'll just be arguing about FF vs LF, and the manufacturers will be trying to sell us 16K LF cameras, and people will be saying how the cinema standard is FF (having forgotten about S35 completely), but the LF-Bros will be talking about how only the LF sensors can use AI to adjust the aperture to keep both eyes of a mosquito in focus at the same time and FF can't do that, and that's completely essential to them, and that Sony will go bankrupt if they don't go LF dynamic mosquito aperture AF.
-
It was to try and put the size of the sensor in perspective, considering that the vast majority of features and pro video work are still shot on S35. Without a frame of reference that includes, well, reality, sensor size conversations quickly warp to another universe where nothing makes sense.
-
Not many Fuji-Bros on YT... I suspect there will also be lens/ecosystem inertia holding people back from moving to Fuji. In a way, because everyone now "simply must have" pet-eye-detect PDAF, the idea of buying vintage/manual primes and adapting them has lessened quite a bit, which is sad because it would allow you to move to any mirrorless camera without having the issues of re-buying lenses etc - just buy a dumb adapter and you're done.
-
I'm seeing more and more FX30 videos on YT and they all seem to have the same overall 'look' as the other Sony cameras like the FX3 and A7S3 (and FX6 to some extent). The fact that YT people (who often can't colour grade to save their lives) are getting consistent results across a variety of conditions really speaks to the FX30 having similar capabilities to the rest of their 'cinema' line. That could be a good thing or a bad thing depending on what you think of the Sony look, but I firmly believe that how easy a camera is to use (both the camera itself and how easy it is to colour grade) plays a huge role in how good a camera is. If something is the best camera in the world, but only one person on earth is talented enough to grade it to get better results than other cameras, then in practical terms, that camera isn't better than the others.
-
I guess the way I was looking at it was that there's the top end who can use whatever they want and will use the latest and greatest, then the tiers who hire, then the tiers who own - down to the mid-tier who can just afford to own an ARRI (no low-tier when it comes to owning an ARRI!) - and the functioning cameras would essentially trickle down through the tiers. The fact that the lowest second-hand price was still high sort of indicates that there's still that much demand, even in the lowest tiers. Considering that ARRI have been making Alexas for over a decade now, there will be huge numbers out there in the wild, so if they make a year's worth of sales of the 35, it's unlikely to be more than 10-20% of the Alexas already out there (also assuming there aren't many LFs compared to normal Alexas), meaning that 80%+ of ARRIs will be S35 and still in high demand.

To me, if I was in the market for a cinema camera, an Alexa Classic was in my budget, and I wanted the brand-name recognition and the colour science etc, the fact that there's a new FF Alexa wouldn't really affect me that much. Yes, it would impact the top-end folks, but at some point in the 'tiers' people will still value that an Alexa Classic is an ARRI, with all the pedigree and track record the camera has, and none of that goes away just because there's a newer model. I think this is significant considering that I suspect the majority of ARRI owners (non-rental-houses) probably don't care that much. Unless you're somehow required to shoot 4K, which very few are, an Alexa Classic is still a very desirable camera that meets probably all the requirements you'd have.
-
Interesting. If it was purely a supply and demand thing where people just needed more cameras then you wouldn't expect any price changes at all (sensor size doesn't create new production houses!) so any drop must be that they're less desirable, or that people were waiting to update their equipment. I guess if rental houses maintain a fleet of X cameras and they all decide to get more FF models at once (a sensible strategy I'd imagine) then it would be a temporary glut in the market as they all offload their worst condition models. That would suggest that prices will bounce back up again once the demand has absorbed the temporary supply blip. I suspect it's a combination of both so prices will bounce back but probably not to the same level that they were at previously. Good news for those looking for a bargain!
-
The example that stands out to me for shallow DoF was The Handmaid's Tale, which used shallow DoF to show the isolation of the handmaids from their surroundings and society in general. Such subtle use is far more unsettling than the average horror film that is little more than the output from a random motion / sound generator with credits at each end.