Everything posted by kye
-
One of the (potentially numerous) differences when swapping from 4K to 6K (on a 4K timeline) is that the downsampling goes from being before the compression to afterwards. Compression is one of the significant contributors to the digital look IMHO (just compare RAW vs compressed images side by side), and if you can downsample from 6K to 4K in post then you're downsampling and also interpolating (blurring) the compression artefacts that happen on edges. I would favour a workflow where the image is debayered, downscaled, then recorded to SD with very low or even no compression. It would effectively have the benefits of downsampling and the benefits of RAW, but without the huge file sizes of RAW at the native resolution of the sensor. Of course, no-one else thinks this is a good idea, so....
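Just to make the ordering argument concrete, here's a quick Python toy. All the numbers are invented, and the "codec" is just coarse quantisation standing in for real compression loss - it's a sketch of the principle, not any actual camera pipeline:

```python
# Toy 1-D model of the two orderings discussed above. "Compression"
# here is crude quantisation -- a stand-in for codec loss on edges,
# not a real video codec.

def quantise(signal, step=50):
    """Lossy stand-in for compression: snap samples to a coarse grid."""
    return [round(s / step) * step for s in signal]

def downsample(signal):
    """2:1 downsample by averaging neighbouring pairs."""
    return [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]

def error(a, b):
    """Total absolute difference between two scanlines."""
    return sum(abs(x - y) for x, y in zip(a, b))

# A soft edge, like a high-contrast outline in a frame
edge = [0, 0, 0, 0, 40, 80, 120, 160, 200, 200, 200, 200]
ideal = downsample(edge)  # what a lossless pipeline would deliver

in_camera = quantise(downsample(edge))  # downsample before compression
in_post = downsample(quantise(edge))    # compress first, downsample in post

print(error(in_camera, ideal))  # downsample-first error
print(error(in_post, ideal))    # compress-first error
```

In this toy, compressing first leaves more residual error around the edge - downsampling afterwards smears the quantisation artefacts across neighbouring samples rather than avoiding them, which is the trade-off being described above.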
-
I find IBIS to be critical to my workflow personally, as I shoot exclusively handheld and do so while walking etc, but I definitely understand that I'm in the minority. But yeah, having that eND would be spectacular - set to auto-ISO / auto-eND / 180 shutter / desired aperture for background defocus and you're good to go!
-
Lots of them are, but as you brought it up, here are my impressions of the above:

- It's still far too sharp to be convincing - I noticed this in the first few seconds of the video. TBH my first impression was "is this the before or after footage?".. of course, once you see the before footage then it's obvious, but it didn't immediately look like film either.
- The motion is still choppy with very short shutter speeds - fixing this would require a VND, which isn't so easy to attach to your phone.
- I think that people have some sort of hangup about resolution these days and as such don't blur things enough.

For example, here are three closeups from the above video.

The original footage from the iPhone:
Their processed version:
A couple of examples from Catch Me If You Can, which was their reference film:

As you can see, their processed version is far better, but they didn't go far enough. I've developed my own power grade to "de-awesome" the footage from my iPhone 12 Mini and detailed that process here, but here are a few example frames:

I'm not trying to emulate film, I'm just trying to make it match my GX85. Here are a couple of GX85 shots SOOC/ungraded for comparison:

I haven't got the footage handy for the above shots, but here are a few before/afters on the iPhone from my latest project from South Korea.

iPhone SOOC:
iPhone Graded:
iPhone SOOC:
iPhone Graded:
iPhone SOOC:
iPhone Graded:

I'm happy with the results - they're somewhere in between the native iPhone look (which I've named "MAXIMUM AWESOME BRO BANGER FOR THE 'GRAM") and a vintage cinema look. My goal was to make the camera neutral and disappear so that you don't think about it - neither great nor terrible.

Going back to Dehancer / Filmbox / Filmconvert / etc.. these are great plugins actually and I would recommend them to people if they want the look. I didn't go with them because I wanted to build the skills myself, so essentially I'm doing it the hard way lol.
The only thing I'd really recommend is for people to actually look at real film in detail, rather than just playing with the knobs until it looks kind of like what they think film might have looked like the last time they saw it, which wasn't recently... I keep banging on about it because it's obvious people have forgotten what film really looks like, or never knew in the first place. And while they're looking at real examples of film, they should look at real examples of digital from Hollywood too - even those are far less sharp than people think. The "cinematic" videos on YT are all so much sharper than anything being screened in cinemas that it's practically a joke, except the YT people aren't in on it.
-
I have a vague recollection that some cameras were able to tag or put comments onto specific images as you were taking them (or just afterwards), and that the info would be embedded in the image's metadata or some such. It might even have been you that mentioned it. Anyway, if you were taking full-res stills and they were being sent instantly to the photo/video village for quick editing and upload to the networks and social media channels, then you could simply tag which images you thought should be cropped. If you are shooting full-res images then cropping in post is the same as cropping in-camera, I would imagine?
-
Great post - I just wanted to add to this from a colour grading perspective. I did a colour grading masterclass with Walter Volpatto, a hugely respected senior colourist at Company 3, and it changed my entire concept of how colour grading should be approached. His basic process was this:

- You transform the camera's footage into the right colour space so it can be viewed, and apply any colour treatments that the Director has indicated (like replicating view LUTs or PFEs etc) - this is for the whole timeline
- You make a QC pass of the footage to make sure it's all good, and perhaps even out any small irregularities (e.g. if they picked up some outdoor shots and the light changed)
- At this stage you might develop a few look ideas in preparation for the first review with the Director
- Then you work with the Director to implement their vision

The idea was that they would have likely shot everything with the lighting ratios and levels that they wanted, so all you have to do is transform it and then fine-tune from there. Contrary to the BS process of grading each shot in sequence that YouTube colourists seem to follow, this process gives you a watchable film in a day or so. Then you work on the overall look, then perhaps apply different variations on a location/scene basis, and then fine-tune particular shots if you have time. He was absolutely clear that the job of the colourist is simply to help the Director get one step closer to realising their vision - the last thing you want to do as a colourist is try to get noticed.
It really introduced me to the concept that they chose the camera package and lenses based on the look they wanted, then lit each scene according to the Director's creative intent, and so the job of the colourist is to take the creativity encapsulated in the files and transform it so that the overall rendering is faithful to what the Director and Cinematographer expected would happen to their footage after it was shot. Walter mentioned that you can literally colour grade a whole feature in under a week if that's how the Director likes to operate. I have taken to this process in my own work now too. I build a node tree that transforms the footage and applies whatever specific look elements I want from each camera I shoot with, and then it's simply a matter of performing some overall adjustments to the look, and then fine-tuning each shot so they blend together nicely. In this way, I think the process of getting things right up front probably hasn't changed much for a large percentage of productions.
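To make the node-order idea concrete, here's a tiny Python sketch of that fixed "spine" as plain function composition. To be clear, the transform functions and their maths are placeholders I made up for illustration - they're not real colour science or any Resolve API:

```python
# Sketch of the pipeline described above: a fixed colour-managed chain
# applied to the whole timeline, with per-shot work as a small trim
# inside it. All transforms are toy placeholders, not real maths.

def compose(*stages):
    """Chain transforms in order, like nodes left-to-right."""
    def pipeline(value):
        for stage in stages:
            value = stage(value)
        return value
    return pipeline

input_transform = lambda v: v * 2.0        # camera log -> working space
shot_trim = lambda v: v + 0.05             # the only per-shot adjustment
look = lambda v: v ** 1.1                  # the Director's overall look
output_transform = lambda v: min(v, 1.0)   # working space -> display

grade = compose(input_transform, shot_trim, look, output_transform)
print(grade(0.4))  # one code value through the whole chain
```

The point is that `input_transform`, `look`, and `output_transform` stay fixed across the timeline, so matching shots only means touching the small `shot_trim` stage in the middle.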
-
I don't have any technical information or data to confirm this, but my theory is that Sony sensors are technically inferior in some regard, and this is what prevents the more organic look from the majority of modern cameras. My understanding is that these are the only non-Sony-sensor video-capable cameras:

- ARRI (partners with ON Semiconductor to make sensors to their own demanding specifications)
- Canon
- BM OG cameras (OG BMPCC and BMMCC)
- Digital Bolex (Kodak CCD sensor)
- Nikon (a minority of models - link)
- RED (there's lots of talk about who makes their sensors, but it's hard to know what is true, so I mention them for completeness but won't discuss them further)

What is interesting about this list is that, to my understanding at least, all these cameras are borderline legendary for how organic and non-digital their images look. Canon deserves a special mention, as its earlier sensors can really only be evaluated when the footage is viewed RAW (either by taking RAW stills or by using the Magic Lantern hack). There are questions about whether it could be related to CCD vs CMOS sensor tech. This thread about CCD sensors from Pentax contains dozens/hundreds of images that make a compelling case for that theory.

I have both the OG BMPCC and BMMCC, as well as the GX85, GH5, Canon XC10, Canon 700D, and other lesser cameras, and have tried on many occasions to match those to the images from the BM cameras. Although I managed to get the images to look almost indistinguishable (in specific situations), they still lacked the organic nature that seems to be shared across the range of cameras mentioned above. I regularly see images from my own work with the Panasonic cameras and my iPhone, as well as OG BM camera images and RAW images from the 5Dmk3 and Magic Lantern (courtesy of @mercer), and once you get dialled into the look of these non-Sony sensors, it's hard not to notice that the rest all look digital in some way that isn't desirable.
-
I also wish they'd continue pursuing the look. It wouldn't be too difficult either, as you can design a camera that has both looks - modern and classic. It's been shown that the Alexa is simply a high-quality linear photometry device, and that the colour science is in the processing rather than in the camera. This leads me to think that you should be able to design a sensor to be highly linear and then just have processing presets that let you choose a more modern image vs a more organic one. Depending on the sensor tech, you could even design a sensor to have a 1:1 readout for the modern look, and some kind of alternative readout where the resolution is reduced on-sensor to match the look of film - which might even be a faster readout mode and give the benefit of reduced RS. Obviously I'm not a sensor designer, so this might be a technical fantasy, but it would be worth exploring. Otherwise, cameras like the GH5 have shown that you can re-sample (either down or up) in-camera, so that sort of processing could be used to tune the image for a more organic look.
-
I think that we're a very long way away from people only having a VR display and not having a small physical display like a phone. Think about the progression of tech and how people have used it.. Over the last few decades people got access to cheaper and smaller video equipment with more and more resolution, and also to cheaper TVs that also got larger and higher resolution (and sound systems got better too), but despite this people still watch a considerable amount of low-resolution content on a 4-6" screen. This shows that people value content and convenience rather than immersion.

The other factor that I think is critical is that photography / videography is highly selective. The vast majority of shooting takes place in an environment designed for some parts to be visible in the footage (and are made to look nice) and other parts are not (and often look ugly as hell). The BTS of a commercial production shows this easily, and the BTS of most bedroom vloggers would likely show that all their mess was just piled up out of frame! Much/most/all(?) of the art of cinematography is choosing which things appear in the frame and which things don't - if every shot can see in every direction then it's a whole new ballgame.

Casey Neistat played with this in a VR video he did some years ago, and I recall him talking about how they designed it so that the viewer would be guided to look in the right directions at the right times, rather than looking away from the action and not getting the intended effect. The idea that "where to look" would pass from the people making the content to the people consuming the content is something we haven't ever had in the entirety of the visual arts (starting from cave paintings, through oil paintings, to photography, then moving images).
Furthermore, even if the first part happens and we all have a VR phone rather than a display that we hold in our hand, the VR goggles can display a virtual screen hovering in place just as easily as VR content, so VR goggles will enable VR content but won't mandate it. For these reasons, I don't think the industry will be moving to VR-only any time soon.
-
I'm definitely over arguing, but I thought I'd close off this discussion of the stabilisation in the X3000, because I wasn't aware of how it worked and I think it's different to both IBIS and OIS. It seems to rotate an assembly of the sensor and lens together, sort of like an internal gimbal. I suspect this is functionally different to IBIS and OIS because each of those shifts only one element while the other elements remain fixed, potentially leading to distortions in the optical path - I suspect this is what causes the extreme corner wobble when extreme wide-angle lenses are paired with IBIS. Also different to IBIS is that it is only a 2-axis stabilisation system, yaw and pitch: This lacks roll stabilisation, which has caused issues for me in the past - one example was when I was holding it out of the door of a helicopter and it was being buffeted by the winds, rotating the camera quite violently. I never managed to find a stabilisation solution in post that compensated for the lens distortion, but I suspect that if I had, I would have discovered there were also rolling shutter issues in there because of the fast motion. Anyway, it's mostly been replaced by my phone now, although I should compare the two and see what the relative quality of each is. The wide-angle on my phone wasn't good enough for low-light recording, unfortunately. I just wish people would talk about film-making.
-
I'm also kind of over the GH5, despite it being my 'best' camera! On my last trip, which was to Korea, I ended up shooting much more with the GX85 and 14mm F2.5 pancake prime, just because it was smaller, faster, and less invasive, which made the environment I was shooting nicer - no-one talks about how the camera choice can make the subject look better! I also love camera tech, but I wish it was going towards the images that people want, rather than just treating specifications like memes - somehow people don't understand that the specs they want don't lead to the image they want.

It's easy to think that we're just hanging out and talking, but if you look around online this is one of the most in-depth places where cameras are discussed - most other places are one sentence or less - so I would be surprised if we weren't influencing the market. I mean, can you imagine if instead of talking about megapixels, we were creating and posting stunning images, and every thread was full of enthusiastic discussions and great results? It would have a ripple effect across much of the camera internet.

I'd go one step further than you - it's easier to talk about technical aspects than it is about creativity. I've partly swapped to the GX85 because I realised that it creates an image superior to probably half or two thirds of commercially produced images being streamed right now. The difference between me shooting with a GX85 and all content shot before 2020 that is still being watched is everything other than the camera. There are shows on Netflix that aren't even in widescreen, they're that old, and yet the content is still of interest. I had an ah-ha moment when I analysed the Tokyo episode of Parts Unknown (which won an award for editing) and saw the incredible array of techniques, then did the same analysis on some incredible travel YT videos and realised it was a comparison between a university graduate and a pre-schooler.
Compared to the pros, we all know very little, and it's hard to talk about that, so we just talk about cameras. It's almost universal - it's almost impossible to get anyone online to talk about creativity regardless of what discipline is involved.
-
Well, I'll be darned, I thought the X3000 was OIS only, but it's something else entirely, sort of an internal gimbal. Ok. So, you made a statement, I disagreed, you challenged me to back up my claim with evidence, I did, and now you move the goal posts by saying that the sensor is too small. I've played this game before - it's what happens when someone is proven wrong and doesn't want to admit it. How can we have a sensible discussion if you're not willing to admit it when you are wrong? It seems like you only started this thread because the last one had too many criticisms of your pet camera.
-
You have gotten OIS and IBIS confused.
-
As I mentioned above, I don't really care if YT is like that, it's when it bleeds from there into this forum that it frustrates me. Things have gotten so distorted now.. If I was to scan basically any thread and sort all the technical aspects into "good" and "bad" columns, and then present a bunch of sample images and have people sort those, we'd end up with a "good" column that described the images in the "bad" column, and a "bad" column that described the images in the "good" column.
-
Absolutely. Once you've gone through all the effort of hiring a team of people and a good amount of lighting equipment etc, you're not going to quibble about the cost of hiring a dedicated cinema camera - that would be silly. Of course YT is dedicated to new shiny things, that's to be expected. The reason I have some frustration here is that this attitude of OLD = CRAP is unfortunately very common on this forum, where (I would hope) no-one has a financial incentive to promote new cameras. This happens both subtly and directly: not only are the older cameras given less attention, but I have been explicitly told by forum members to stop talking about the GH5, even though it was relevant to whatever brand new camera body we were discussing and I was presenting a balanced view of pros as well as cons. It's one thing to criticise a camera, but to say that it's no longer relevant or even welcome in a discussion due to its age, that's a whole other level. To tie this together, I absolutely agree that no-one is using a BGH1 or GH5S as an A/B/C camera on a major production, but the fact that they're used at all by anyone in that world should give pause to those who think that they're no longer fit for use in low budget / no budget / amateur settings. I mean, I thought the entire premise of EOSHD was to use and make the most of affordable consumer cameras for video.
-
Artificial voices generated from text. The future of video narration?
kye replied to Happy Daze's topic in Cameras
Stumbled onto this.. Not perfect by any means, but it's almost entering the 'uncanny valley'. The fan fiction / fan film genre is going to be incredible in a few years...
-
Please go ahead and "show" me that "No camera with IBIS can do this". How will you show this? By posting every clip ever shot with an IBIS camera? What about the ones that were uploaded without providing camera model details? What about the ones that were uploaded privately? What about the clips that were shot but never uploaded at all? How on earth can you "show" that your claim about IBIS cameras is correct? Simple - you can't. This is a claim that is impossible to make, impossible to test, impossible to even know. Even if it happened to be true now, it might not be true in 10 years' time.

Let me be honest with you here. You are receiving criticism because you are talking outside of your knowledge, and criticising things that other people do. Had you said "Here, look at this footage, I think it's good" then no-one could criticise, because that's your opinion. Had you said "I think it works well for the kind of things I shoot" then that's your opinion too. Had you said "I think this would be useful for a number of other film-makers" then that's an opinion as well, but one that makes sense based on the fact that many other people shoot how you do. But you also say things that cannot ever be known, like "No camera with IBIS can do this". And you also criticise the way that other people use their cameras, like "Seriously, guys. Overheating? Do you take long, boring takes from one position? That's what you shoot?".

If you want to have a real discussion about cameras, then you need to follow these general principles:

- Only speak definitively from your own perspective
- Don't speak about things that you don't know about
- Be open-minded and humble about what you do and do not know

The way you're currently posting makes you look like someone who loves their camera, doesn't want to hear any criticisms of it, and will argue with people to try and invalidate their criticisms.
If you want people to respect you, you have to know your own limits and stick within them. It is easy to make claims like "No camera with IBIS can do this", but to show that those aren't the type of claims I like to make, here's a clip I shot with OIS only. This is the whole clip, SOOC - I just dragged it from my footage folder into YT. The Sony X3000 action camera has a super-wide-angle lens (something like a 15-17mm equivalent) with OIS built in. This clip was shot with the OIS enabled, but without the in-camera EIS enabled. It's not the most stable clip I have shot, but it's not too bad. I also need more practice.

Things to take into account:

- I uploaded this clip because it didn't feature my friends and family - I don't post personal clips publicly
- This is a very wide-angle lens and this submarine is actually a lot more cramped than it appears, and you can see that the floor isn't completely even or flat either, so I was walking carefully and trying not to bump into things (I'm also above 6' tall, so I might have been ducking too - I can't remember)
- I definitely need more practice holding the camera steady
- The X3000 weighs practically nothing - the whole rig of camera + monitor + grip totals only 162g / 0.36lb - so it doesn't have the weight advantage of the Sony, which is probably 10-20 times heavier and has a much firmer grip than this (which you literally grip with one finger)

I really hope that you will consider my comments and try to change how you post. There are lots of people online, and on camera forums especially, who are posting for ego reasons, or trying to push some kind of agenda, and this always generates a negative response and arguments rather than genuine discussion where everyone comes away better off.
-
....and in today's episode of "the camera YT echo-chamber doesn't know shit about the real world", here's a real-world and very high-end studio shooting VFX background plates with a gazillion BGH1 units, or arrays of GH5S bodies paired with 12-35mm f2.8 zooms. Their website indicates they've worked on Joker, Stranger Things, Mission Impossible, and dozens of other high-end productions.

The thumbnail appears to show 17 BGH1 units:

One shot from the Stranger Things rig detailed on their website shows "our standard nine camera array" of 9 x GH5S units - 5 at the back (with two facing sideways) and 3 on the front:

To all those who suggest that the size/weight advantage of MFT is gone, their page says "The rig was still able to fly as luggage and efficiently attach to a rental car, all while being street legal." How much does a 24-70mm F2.8 weigh again? If you can't remember, the short answer is more than double the MFT equivalent. In the video he talks about how each BGH1 + lens is about 1lb, and keeping the weight down allows them to rig the car in a way that avoids increased permitting and things like escort cars (which for 360 cameras appear in the shot).

So, the GH5S (2018) and BGH1 (2020 - 3 years old), which aren't FF, don't shoot RAW, don't have IBIS, and have been completely forgotten by the entire camera YT echo-chamber, are being actively used on some of the biggest and most VFX-heavy films being made in Hollywood. They also just casually mention in the video (11:35 if you don't believe me) that they've been involved in over 2000 productions, and that "these cameras are able to match perfectly with all your A-cameras".
-
If you say so. I've gotten shots that look almost that good from OIS alone.
-
VR is an obvious exception to the resolution question, and so is VFX. What never gets acknowledged, though, is that these are specialist applications - like super-slow-motion, which has specialist cameras that are rented purely for that purpose. I mean, take all the arguments on these forums - who here is using their DSLR/MILC cameras to shoot VR or VFX? Zero people that I'm aware of. From this perspective the VR/VFX argument is just a theoretical red herring.
-
We all have features we'd like to see on the next round of cameras, even if that feature is "the same but cheaper", but what features do the more modern cameras have that you would give up? Some might question why we'd want to give anything up, but in general terms, every feature costs money to manufacture, design, or do R&D on, and many features require extra battery capacity, make the camera larger, make it less reliable, etc. This is likely to be controversial, so please remember that we all use our cameras differently! I'll go first, to set the stage / start the fight:

Resolution
I don't need 8K, or even 6K. I'd be happy with a 5K sensor because it allows downsampling to 1080p even with a 2x digital crop. All else being equal, a 5K sensor would have better battery life, better rolling shutter, and better low-light performance than an 8K one. I'd also be happy to give up any output modes greater than 4K, or even 1080p.

Dynamic range
I don't need anything more than about 12 stops - in fact, more DR creates issues in post when trying to "fit" the whole range into a rec709 output file. Also, the less DR the sensor has, the less stretched the codec's bit-depth is - 14 stops of DR requires a 12-bit log file to match the quality of fitting 12 stops of DR into a 10-bit log file. This means that the 10-bit log profile on your 14-stop camera is only as good as a 12-stop camera was with an 8-bit log file, and I've had colour quantising issues with those in the past.

External RAW
An external recorder (plus the cables, extra batteries, extra chargers, etc) is too large and heavy for me to basically ever use.
If no consumer cameras offered external RAW then manufacturers would be forced to get off their a$$es and improve the internal codecs - offering things like All-I in all resolutions/framerates, and paying attention to things like changing WB in post (there is no reason why compressed codecs can't be as flexible in post as RAW - it's only because they implemented crap codecs/profiles). The Prores from the OG BMPCC is night-and-day better than the 10-bit log from other cameras. ....and show me a single consumer camera that supports a 12-bit compressed codec - I don't know of one, yet they're all falling over themselves to offer external RAW at firehose bitrates. What would you prefer you weren't paying for / carrying around?
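The bit-depth argument above comes down to codes-per-stop. For an idealised log curve that spreads code values evenly across the stops, precision per stop is just codes divided by stops - real camera log curves aren't perfectly even, so treat this as a rule-of-thumb sketch rather than exact maths:

```python
# Codes per stop for an idealised log encoding, where code values are
# spread evenly across the stops of dynamic range. Real log curves
# aren't perfectly even, so this is only a rule of thumb: each extra
# stop of DR dilutes the precision given to every other stop.

def codes_per_stop(bit_depth, stops):
    return (2 ** bit_depth) / stops

for bits, stops in [(10, 12), (10, 14), (12, 14), (8, 12)]:
    print(f"{bits}-bit over {stops} stops: "
          f"{codes_per_stop(bits, stops):.0f} codes/stop")
```

Under this model, 10-bit over 14 stops gives thinner slices per stop than 10-bit over 12 stops, and going to 12-bit restores the headroom - which is the trade-off being described above.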
-
It definitely has the Sony FF look... and the ISO 12,800 shot looked pretty darn clean to me. I'd argue that in some ways it's too good for a travel camera. Although everyone films it differently, I think that travel is potentially the genre where the most amount of compromises need to be made, and manufacturers seem to make them in odd combinations that assume you do things in a certain way. Fortunately for you, you seem to be the kind of shooter that Sony assumes everyone is. For me, there are features that almost no-one offers that would be of huge value to me and would strongly influence my purchasing decisions. Then again, the entire consumer industry has been in full problem-reaction-solution mode for our entire lifetime, so it's hardly surprising.
-
Thanks - and actually, that doesn't sound too slow at all. When I've got a long tele setup on the camera I'm not normally looking up or down that much, and if I needed to quickly rotate it a lot then I could just rotate the tripod legs, as I'm mostly shooting on flat surfaces. I understand about having a balanced rig, and that's probably something I should look at regardless, although I don't have any trouble getting my current setup pointed in roughly the right direction - it's the fine-tuning where I have difficulty, because the head sticks and then gives suddenly, making adjustments that are too large. Obviously a geared head would completely rectify this issue. I'm assuming that one of those controls is the one that enables you to level the horizon? It seems so, but the description only says that it can go "90° sideways for portrait orientation", and I'm not sure if it would go any amount the other way (i.e. the opposite direction to turning the camera to portrait mode) if that's what was required to level the horizon.