Everything posted by kye
-
Oops, I thought we were discussing film-making. Feel free to go back to telling other people how they should film, what equipment they should have, how they should run their businesses, etc... sorry I interrupted. My bad!
-
Quite a few people on here are filming things like weddings or concerts where you need to run multiple cameras for a long time and have them run unsupervised, because you're operating a roaming camera while the others are rolling. In these situations it's not uncommon for each camera to need to record for an hour or more, sometimes in full sun, without encountering any issues. If you are shooting a wedding with three cameras and one of them shuts down after 25 minutes then you're potentially screwed in the edit.

I've watched wedding videographers edit a 3-camera multi-cam from a wedding ceremony, and even though they didn't have a camera overheat and stop working, there were points in the edit when two of their three camera angles were unusable and they were down to one usable angle. Had that camera overheated, they'd have been down to zero (or forced to use the angle that was set up when everyone was sitting, which in the moment everyone stood up just showed the backs of the people in the back row - not exactly a professional moment).

Add to this the fact that in these situations it's useful to have identical cameras so that all the lenses and cards and accessories are interchangeable. So if one camera is at risk of overheating then it isn't impossible to have multiple cameras overheat, which would well and truly screw you.

Besides, 87F is pathetic in terms of overheating tests.... I have overheated an iPhone before because it was 107F, I was recording clips that were several minutes long, it was in full sun, and the brightness of the screen was up full so that I could see what I was pointing the camera at. I literally submerged half of it in a river to cool it down because I was missing moments (the people swimming in the river). You can't do that with most cameras, and if one overheated while unattended, you'd never know until it was too late.
One thing that causes almost all the head-scratching (and starts almost all the arguments) is when one person cannot understand that someone else uses their equipment differently, to achieve a different result, for a different audience. I suggest you start paying closer attention to how people talk about their camera choices - video is one of those fields where there are lots of ways of doing things, and where techniques from one approach can be really handy in your own work, even if your work is very different.
-
To return to the original question, perhaps the most important element in all this is the operator's ability to understand the variables and aesthetic implications of all of the above layers, to understand their budget and the available options on the market, and to apply that budget so that the products they use are optimal for achieving the intellectual and emotional response they wish to induce in the viewer.
-
Ah crap.. I missed a step. Once your stream has been delivered to the streaming device, it will likely apply its own processing too. This is anything from colour space manipulations and calibration profiles all the way through to the extremely offensive "motion smoothing" effects, NR, sharpening, and all other manner of processing that is as sophisticated and nuanced as a TikTok filter. Plus, grandma's TV is from the early 90s and everything is bright purple, but no-one replaced it because she can't understand the remote controls on the new ones and she's too blind for it to matter anyway.
-
To expand on the above, here is a list of all the "layers" that I believe are in effect when creating an image - you are in effect "looking through" these items:
- Atmosphere between the camera and subject
- Filters on the end of the lens
- The lens itself, with each element and coating, as well as the reflective properties of the internal surfaces
- Anything between the lens and camera (eg, speed booster / TC, filters, etc)
- Filters on the sensor and their accompanying coatings (polarisers, IR/UV cut filters, anti-aliasing filter, bayer filter, etc)
- The sensor itself (the geometry and electrical properties of the photosites)
- The mode that the sensor is in (frame-rate, shutter-speed, pixel binning, line skipping, bit-depth, resolution, etc)
- Gain (there are often multiple stages of gain, one of which is ISO, occurring in both the digital and analog domains - I'm not very clear on how these operate)
- Image de-bayering (or equivalent for non-bayer sensors)
- Image scaling (resolution)
- Image colour space adjustments (Linear to Log or 709)
- Image NR, sharpening, and other processing
- Image bit-depth conversions
- Image compression (codec, bitrate, ALL-I vs IPB and keyframe density, etc)
- Image container formats

This is what gets you the file on the media out of the camera. Then, in post, after decompressing each frame, you get:
- Image scaling and pre-processing (resolution, sharpening, etc)
- Image colour space adjustments (from file to timeline colour space)
- All image manipulation done in post by the user, including such things as: stabilisation, NR, colour and gamma manipulation (whole or selectively), sharpening, overlays, etc
- Image NR, sharpening, and other processing (as part of export processing)
- Image bit-depth conversions (as part of export processing)
- Image compression (codec, bitrate, ALL-I vs IPB and keyframe density, etc) (as part of export processing)
- Image container formats (as part of export processing)

This gets you the final deliverable.
Then, if your content is to be viewed through some sort of streaming service, you get:
- Image scaling and pre-processing (resolution, sharpening, etc)
- Image colour space adjustments (from file to streaming colour space)
- All image manipulation done by the streaming service, including such things as: stabilisation, NR, colour and gamma manipulation (whole or selectively), sharpening, overlays, etc
- Image NR, sharpening, and other processing (as part of preparing the stream)
- Image bit-depth conversions (as part of preparing the stream)
- Image compression (codec, bitrate, ALL-I vs IPB and keyframe density, etc) (as part of preparing the stream)
- Image container formats (as part of preparing the stream)

This list is non-exhaustive and is likely missing a number of things. It's worth noting a few things:
- The elements listed above may be done in different sequences depending on the manufacturer / provider
- The processing done by the streaming provider may differ per resolution (eg, more sharpening for lower resolutions)
- I have heard anecdotal but credible evidence to suggest that there is digital NR within most cameras, and that this might be a significant factor in what separates consumer RAW cameras like the P2K/P4K/P6K from cameras like the Digital Bolex or high-end cinema cameras

..and to re-iterate a point I made above, you must take the whole image pipeline into consideration when making decisions. Failure to do so is likely to lead you to waste money on upgrades that don't get the results you want.
For example, if you want sharper images then you could spend literally thousands of dollars on new lenses, but this might be fruitless if the sharpness/resolution limitation is the in-camera NR. Or you might spend thousands of dollars getting a camera that is better in low light when there is no perceptible difference, because the streaming service has compressed the image so much that you'd have to be filming at ISO 10-bajillion before any grain is visible (seriously - test this for yourself!).
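A toy way to picture the "whole pipeline" argument is to treat each layer as a multiplier on overall image quality, so the weakest stage dominates the result. The stage names and numbers below are invented purely for illustration - they're not measurements of any real camera or service:

```python
# Hypothetical sketch: the delivered image is the composition of every
# stage in the pipeline, so improving a strong stage barely helps while
# the weakest stage caps the whole chain. All numbers are made up.

def apply_pipeline(quality, stages):
    # Each stage can only preserve or degrade quality, never restore
    # what an earlier stage threw away.
    for name, retained in stages:
        quality *= retained
    return quality

pipeline = [
    ("lens",                 0.95),
    ("sensor + debayer",     0.90),
    ("in-camera NR",         0.60),   # the muddy pane of glass
    ("codec compression",    0.85),
    ("streaming recompress", 0.80),
]

baseline = apply_pipeline(1.0, pipeline)

# Spending thousands on a sharper lens barely moves the result...
better_lens = apply_pipeline(1.0, [("lens", 0.99)] + pipeline[1:])

# ...while fixing the weakest stage moves it far more.
better_nr = apply_pipeline(1.0, pipeline[:2] + [("in-camera NR", 0.90)] + pipeline[3:])

print(baseline, better_lens, better_nr)
```

A multiplicative model is obviously a simplification, but it captures why testing the final edit, after every layer, is the only sensible benchmark.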
-
I don't think there is a "most important". I tend to think of photos/video like you're looking through a window at the world, where each element of technology is a pane of glass - so to see the outside world you look through every layer. If one layer is covered in mud, or is blurry, or is tinted, or defective in some way, then the whole image pipeline will suffer from that. In this analogy, you should be trying to work out which panes of glass the most offensive aspects of the image are on, and trying to improve or replace that layer. Thinking about it like this, there is no "most important". Every layer is important, but some are worth paying more or less attention to, depending on what their current performance is and what you are trying to achieve. Of course, the only sensible test should be the final edit. Concentrating on anything other than the final edit is just optimising for the wrong outcome.
-
Artificial voices generated from text. The future of video narration?
kye replied to Happy Daze's topic in Cameras
Now is the time to buy a couch, just so that when he's homeless he's got somewhere to sleep....
-
1) Colour
The sensor in a camera is a linear device, but it does have a small influence on colour because each photosite has a filter on it (which is how red, green, and blue are detected separately), and the wavelengths of light that each colour detects are tuned by the manufacturer of the filter to give optimal characteristics - so this is a small influence on the colour science of the camera. Beyond that, the sensor just measures the light that hits each photosite and is completely linear. Therefore, all the colour science (except the filter on the sensor) is in the processor that turns the linear output into whatever 709 or Log profile is written to the card.

2) DR
DR is limited by the dynamic range of the sensor and the noise levels at the given ISO setting. If a sensor has more DR or less noise then the overall image has more DR. The processor can do noise reduction (spatial or temporal), which can increase the DR of the resultant image. The processor can also compress the DR of the image through the application of uneven contrast (eg crushing the highlights) or by clipping the image (eg when saving JPG images rather than RAW stills), which decreases it.

3) Highlight rolloff
Sensors have nothing to do with highlight rolloff - when they reach their maximum levels they clip harder than an iPhone 4 on the surface of the sun. All highlight rolloff is created by the processor when it takes the linear readout from the sensor and applies the colour science to the image.

There is general confusion around these aspects, and there is frequently talk of how one sensor or another has great highlight rolloff, which is factually incorrect. I'm happy to discuss this further if you're curious.
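To illustrate the highlight rolloff point, here's a minimal sketch. The sample values and the Reinhard-style curve are my own assumptions for illustration, not the maths of any real camera's colour science:

```python
# Hypothetical illustration: a linear sensor clips hard at saturation,
# while "highlight rolloff" is created afterwards by the tone curve the
# processor applies to the linear data.

def sensor_readout(scene_linear, full_well=1.0):
    # The sensor is linear right up to saturation, then clips abruptly.
    return [min(v, full_well) for v in scene_linear]

def tone_curve(scene_linear, white=8.0):
    # An extended Reinhard curve, used here as a stand-in for a camera's
    # processing: it compresses highlights smoothly toward 1.0 instead
    # of chopping them off, mapping `white` exactly to 1.0.
    return [v * (1 + v / white**2) / (1 + v) for v in scene_linear]

scene = [0.1, 0.5, 0.9, 1.0, 2.0, 8.0]   # scene-linear exposure values

clipped = sensor_readout(scene)   # 2.0 and 8.0 both become 1.0: detail destroyed
rolled = tone_curve(scene)        # 2.0 and 8.0 stay distinct: detail compressed

print(clipped)
print([round(v, 3) for v in rolled])
```

The point of the sketch: once the sensor saturates, highlight separation is gone for good; any "rolloff" you see in the file is the processor's curve compressing values that the sensor captured below its clip point.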
-
I guess I haven't kept up! Oh, who am I kidding... I never kept up 🙂
-
Artificial voices generated from text. The future of video narration?
kye replied to Happy Daze's topic in Cameras
Yeah, like early internet filters protected people from learning about breast cancer, etc.... What is new is stupid, like how what is old was stupid when it was new 🙂
-
Panasonic S5 II (What does Panasonic have up their sleeve?)
kye replied to newfoundmass's topic in Cameras
No need to flipping censor yourselves! Speak flipping freely!!
-
She didn't compare them with a cinema camera - how am I supposed to watch a review that doesn't do that??? What I concluded from this review (and others) is that: 1) this entire category is half-baked and camera companies are light-years away from providing a solid and slick vlogging experience, and 2) these reviews are like watching people halfway through the process of developing Stockholm Syndrome.... they clearly don't love them, they clearly don't enjoy the process, they don't like the price, and then their conclusions are positive - like they're saying "please send me the next one to review". Blink three times if you need help!!
-
I don't recall seeing her rely on other people to film actually, normally it's selfie-cam or table-selfie-eating-cam or POV cam. Most of her vlogs are out with different people too, so I don't think this is the typical example of a vlogger dating a camera-holder as I'd imagine is quite common.
-
Indeed. I'm aware that at least two of the YT channels I follow are filmed with these, although (shock horror again) these aren't camera channels, they're channels that talk about other things, so I've only noticed when they caught a reflection or something like that. I suspect there are likely lots of these out in the wild being used. Their size alone makes them incredible for the minimalist travel digital nomad types who are all travelling out of a pencil case.

Yes, that was much better considered. I'd forgotten it has an internal ND - that's a serious feature. Does that make it a cinema camera? FX3 users please send in your thoughts....

I was a tad unkind when I mentioned the ZV-1, because there is the ZV-1F which has a 20mm equivalent lens, but what I said was true, so there is that. I guess the old marketing trick of "this thing I'm selling is good for whatever problems you are having" never gets old! At least this one bears some resemblance to the task at hand. If I was vlogging it would be an interesting option.

Did you watch any of the other reviews? The setup where it's on a table top looking up at you sure reminds me of this young lady.... Not only does she tend to film herself eating a lot, but I'm 99% sure she uses a G7X. @IronFilm would be proud - in windy / noisy situations she whips out her iPhone and uses it like a lav, syncing in post I would imagine.
-
Actually, when you turn it sideways, it sort of does? @IronFilm - what's it called when you put two omni mics in vertical alignment facing the direction of your non-vlogging hand???
-
I'd say that's because vloggers don't want to vlog while wearing headphones, but.....
-
Indeed! It might even align with @Emanuel's philosophy, because when it's being held normally it's in widescreen, and when you turn it then it becomes the "wrong" alignment of vertical video. So it's oriented around landscape by default, but for those doing vertical it easily supports it. That's quite cool, and the styling is one flavour of "what the future looked like in the past", which is always interesting - this one is quite 'Casio digital watch meets dictaphone' 🙂

One thing I've found fascinating, and which sort of prompted my comments about this being an innovation, is that the biggest attempts to sell cameras for "vlogging" were of cameras so clearly not suited to vlogging, and yet there have been many cameras in the past that are much better suited but weren't marketed nearly as much under the 'vlogging' banner. Something like a GoPro is actually a killer vlogging form-factor. It's very small, and combined with a handle is still smaller and far less noticeable in public than even a small mirrorless or P&S. The lens is easily wide enough, and when combined with their killer EIS is still really wide, and it's fixed focus so has no AF issues at all unless you want to focus on something closer than its minimum distance, etc. The DJI Osmo Action is a solid choice too, for all the same reasons, and the Osmo Pocket literally has an integrated gimbal and a 20mm lens which doesn't need any cropping to get perfect stabilisation.

Actually, it has a tripod thread on the bottom too, so that's covered.

In terms of video quality, it seems pretty lacklustre unfortunately, unless it's given really good lighting and conditions. Here's one of the reviews I watched that shows lots of sample footage; I've linked to a section that shows it in less-than-ideal lighting and it's got lots of ISO noise + NR + macro-blocking:

I thought that the bitrate was low, but actually it's 120Mbps in 4K, so that's not too bad.
There are a few vloggers that I follow who have Canon G7X cameras, and unfortunately I know that because the quality of their images is noticeably lower when they go handheld and it's super shaky, or when they go indoors and it's grainy. I'm not sure how much of that is simply the small sensor, or if it's older tech or what, but as long as it's priced appropriately then I think that's fine.

Absolutely, especially compared to something like the Sony ZV-1, which Sony pitched with: "Watch the announcement of vlog camera ZV-1. A new compact camera from Sony designed for content creators and vloggers." The review in Forbes put it quite plainly: "That flaw? The 20-megapixel Zeiss lens has a 24-70mm lens, so the widest it can get is 24mm, which after accounting for the crop factor of it not being a full-frame camera, plus more cropping from EIS (electronic image stabilization), it's essentially a 30mm lens." Source: https://www.forbes.com/sites/bensin/2020/06/14/sony-zv-1-review-baffling-decision-almost-ruins-made-for-vlogging-premise/?sh=7a741bff5715

How hilarious - Sony made a vlogging camera with a 30mm FOV! This one has a 19mm lens, and the EIS crop didn't look like it was that much at all - Kai shows a side by side in his video.
-
I didn't say it's good... just that it's designed for vlogging! The Canon PowerShot V10:
- Pocketable, flip-up screen, integrated stand
- 19mm equivalent FOV, and the EIS crop isn't that much
- Everything else is just like a PowerShot though..... 1" CMOS sensor, contrast AF, low bitrate, etc

It's a sad day when simply combining an ergonomic chassis and a wide-enough lens counts as innovation, but it does. I've seen at least one camera YouTuber say that if they keep the form factor and improve the specs then this might tempt them away from the high-end S35/FF cameras they currently use.
-
Seems like it's the same reason that everyone else thinks more resolution is better... We've been shooting moving images for over a century and most of that time it was genuinely below the optimum resolution/sharpness, and now we have achieved it, very few people are smart enough to realise that we've arrived and we need to move on and focus on something else. Plus, you know, brands do marketing for a reason - it works.... and if your product doesn't fulfil an existing need then job #1 is to create the need.
-
Once in a lifetime shoot. What primes should I bring?
kye replied to MurtlandPhoto's topic in Cameras
While we're talking about event shooting... What are people's attitudes towards slow-motion and speeding things up (timelapses)? I'm curious from the point of view of the edit, rather than just the logistics of shooting them. @herein2020 already talked about shooting 60p in case you need to stretch footage in post, which is an obvious use of this and makes total sense in the context of social-media event hype trailer style videos. That could be a sort of "invisible" use where it's not so obvious (or visible at all), but things like 120p conformed to 24p or time-lapses are a lot more obvious. Do you find they have a place in your edits? I'm asking because I'm contemplating what place they have, if any, in my own edits, and looking for opinions....
-
Artificial voices generated from text. The future of video narration?
kye replied to Happy Daze's topic in Cameras
Thinking about this a little further, this won't change much at all. The problem with CGI people is that they're fake.......... but so are a vast number of influencers, so no change there. The other problem with AI-generated scripts is that AI tends to "hallucinate" and be factually incorrect, as well as potentially getting into the realms of bigotry.......... but the same is true of a vast number of influencers, so no change there either. TBH this is true of lots of traditional media too. Keeping Up With the Real-Life Leave-It-To-Beavers of Beverly Hills Who Wants A Wife was actually more typical of TV rather than an exception. It was blogging and then vlogging that actually started the trend of being authentic - TV and film never did it.
-
Artificial voices generated from text. The future of video narration?
kye replied to Happy Daze's topic in Cameras
I can't remember where I heard it, but apparently there are already some YT channels where the presenter is AI generated and so are the scripts. The person indicated that these were successful channels and that no-one had any idea. I don't know any more details, or if this is true, but it soon will be.
-
Here's an overview of the FX3 version 3.0 firmware and the "gotchas" that lurk behind the new features Sony claims. Incidentally, this guy's YT channel is very high quality, in terms of production design as well as quality of information. He's a DP-first, tech-second kind of guy, so things are all practical and level-headed. My only criticism is that his uploads are few and sometimes far between, but maybe that's a fundamental problem of expertise - those who have it are elsewhere doing things in the real world instead of just being on YT.
-
Once in a lifetime shoot. What primes should I bring?
kye replied to MurtlandPhoto's topic in Cameras
Yes, this can be an absolute killer... I filmed a friend's birthday party on a GoPro once, which was in a nightclub, and edited it up as a sort of birthday present for her. I put it on a handle and in its waterproof case (it was an older one that wasn't water resistant when naked) and let them pass it around and sort of crowd-source shots. The footage I got back was great from the perspective of angles and content, and the noise was obviously an issue given that it was an older GoPro and the nightclub was almost pitch black in-between the light pulses/flashes, but the colour rendering from the various "white" light sources was spectacular, and not in a good way. This, combined with the GoPro's 709 colour science, made it incredibly difficult to colour grade, because so many shots were close enough to neutral to need balancing but so far from proper that it was a real challenge.

I see similar things in street markets etc, where every vendor has their own brand of 70 CRI LED lights that are all different colours to each other, with some vendors even having a mixture of several different colours just in their own booth. Or even string lights where you can tell which bulbs they've replaced without matching the existing ones. It's like stepping into an alternative universe where you can see all sorts of colours, but not in a good way.

I think this is potentially the most significant factor that defines what focal lengths you need. For my own videos of my family I worked out that what I wanted was environmental portraits, and I wanted to shoot them from where I was standing, which was generally at a comfortable distance from them (here in Australia the personal space distance is on the larger side) but without having other people in-between us.
I settled on a 35mm equivalent, as I felt that the 28mm that smartphones used at the time was too generic a look, but on my last trip I used the 14mm on the GX85, which works out to be a 31mm in 4K mode, and I didn't mind or even notice that it was much wider, so I think I might have gotten over my phobia of the 28mm look. Certainly there are situations where it's too crowded for the 35, and in some situations (for example in a crowded local market in India where you were pressed up against at least two other people at all times) I had to film using my phone's wide angle from above, as if I had held my phone at eye height it might have been touching both my head and my wife's at the same time!

For my work I like the perspective that the video is from my own individual point of view, so I shoot the things I see from where I am when I'm seeing them, but within the confines of what lenses I have. Obviously this is counter to @mercer's point about zooming with your feet, but I find that human vision can easily be "zoomed", in that you can easily narrow your attention when looking at a far-away object, so I think there's some flexibility if you're trying for the perspective a human might have rather than just watching events and not being in them (as you are in most narrative work). Yes, your eyes are still a wide-angle lens, and light from the table you're sitting at and from the people sitting next to you is still hitting your retina, but because you're staring at the sailboat out on the water you're no longer aware of the people and the table - you've kind of zoomed in cognitively.

Absolutely. My experience has mostly been that I'm experiencing all of them at all times!
-
I wonder when we'll see an AI that you point at a YT video and it creates a transcript for you to read. People like Gerald etc who script their videos speak in a relatively logical structure that wouldn't be too hard to read (unlike the vloggers who just do stream of consciousness to camera - those transcripts would be unreadable). Maybe it exists already?