Everything posted by kye
-
I agree.. YT 'shows' is a pretty good name, although I'd still stress that there are people doing both 'shows' and 'vlogs', people who use a mixture of techniques, and even people using a technique somewhere in the middle. Just earlier today I watched a video by a lady who shows how she does construction projects (this particular video was about her building a large wood and metal gate), and since we'd been having this conversation I was paying attention to the shots in it. She was sitting and talking to camera for the narration and cutting b-roll of the build over the top, but what was interesting was that her talking shot was a tripod / lav mic shot filmed outside in uncontrolled conditions, so that's somewhere in between the two scenarios you described above. Also, mixed in with the standard tripod b-roll shots was a fancy dolly shot where the camera follows her from one room of her shed to another (the front face of the shed was open, so it was one of those "looking into the building from outside" style shots). It was an interesting shot because I think it shows she's got an appetite for upping the production value, and the fact that she's already got a lav and nice sound means she's not new to the game. She also sells project plans online, so her channel is a business.

Would she benefit from having a cinema camera? Maybe. I didn't see any evidence of auto-focus or IBIS requirements, so a cinema camera wouldn't be ruled out. If she shot 1080 in ProRes then maybe her edits would go faster, so there's perhaps some benefit. I don't think she'd benefit from more DR via a flatter profile, but others like her might. I think the lines between the (hand-held + selfie + outside) and (tripod + not selfie + studio) scenarios that the industry thinks in are so blurry that they ceased to exist quite some time ago.

And yes, I did notice Jon Olsson getting excited about the RED and then it not featuring much again. I heard a mention that their camera guy had a routine of taking it out to get b-roll early in the morning in Monaco before Jon had woken up, but in terms of lugging it around during the day or through airports it's not the right camera for the job!!
-
LOL, crying face is right!! But actually for a single operator it looks quite functional; it's quite elegant really. Then again, I've used the HDMI input on my 32" UHD computer monitor as a monitor when doing camera tests in my office!!
-
I guess that's part of my point @jonpais - there are more and more people who don't fit the title 'vlogger' neatly. Any vlogger who has watched a "how to vlog" video will be trying b-roll as an alternative to jump-cuts, and any vlogger who has bought anything and watched an unboxing video will be doing product shots, so to say that a vlogger is someone who only ever shoots selfie shots would eliminate most vloggers. Take Kelsie Humphreys for example (who I showed above interviewing Tony Robbins) - she makes vlogs as BTS from her interviews. Does that make her a vlogger, or is she the producer of a talk-show / interview channel? Same for Laura Kampf (also above), who shoots no-dialogue creation videos using manual-focus, tripod-based shallow-DoF techniques, but also shoots vlogs - is she a vlogger or not?

We can use your definition of "recording a sort of video diary of your life", which I think is quite a nice definition, but it isn't very useful if we're talking about what equipment matches that style of creation, because video diaries can be shot in any style with any equipment - it refers to a subject, not a style. It might once have been the case that certain types of content matched certain types of shots and equipment (game shows vs TV drama vs blockbuster movie), but those pesky Youtubers haven't been told the rules, so they're just using every colour in the whole paint-box with reckless abandon!
-
I wouldn't be surprised @jonpais, not even a little bit. It's the same in the hifi arena with CD vs SACD - you'd go to a demo where they'd be comparing CD with SACD and the SACD would definitely sound better. You'd look around and people would all be nodding their heads about SACD, but I'd be standing there and thinking "both sound completely awful compared to my CD setup at home - how is this a valid comparison?".
-
Vloggers tend to operate in three distinct environments: a fixed camera in a purpose-built studio, a hand-held camera while out and about (selfie and b-roll), and an action camera for harsh environments. Some have drones too. These are often covered by a G7 or RX100 for the first two and a GoPro for the third, but there are vloggers who use a dedicated camera for their studio setups, and I've seen REDs, C300s, FS5s and 1DXs as well as the usual suspects from Canon. These 'premium' setups have boom mics, monitors, and large soft-boxes permanently set up.

I think for those with a fixed studio camera setup the BMPCC4K would be an excellent choice because of the combination of low cost, the anticipated high quality, and the convenience of having ProRes to edit with SOOC. It might also find use as a mobile vlogging camera when combined with a gimbal, MF (as Sony users have learned to do), and a fast wide lens.

I find that YT creators are increasingly blurring the lines between documentary film-making, studio film-making, studio talking-to-camera (vlogging), mobile vlogging, travel vlogging, travel film-making, etc. Will these niches drive a lot of sales? No. But I do think they'll have a small footprint in the YT / vlogger ecosystem. Here are a couple of examples of potential customers who blend "vlogging" with more traditional film-making elements, and who might be interested in upping production quality:

I'll stop now, but there are many many examples of this blending of styles, and I think the idea that you can approach RED/ARRI quality for mid-range mirrorless prices will be a HUGELY attractive factor for these people.

As a special bonus, here's a couple who shoot mostly hand-held with an FS5 (IIRC) and a GoPro. Production quality isn't top notch (stabilisation is an issue), but I've seen previous videos where they had a smaller Sony video camera, so if they can afford an FS5 then a BMPCC4K wouldn't be out of the question.
-
They are interesting. The fact that the project is dead doesn't matter, because they will have learned a bunch of stuff from it, and will keep that stuff in their back pocket for future products. In the past, companies would have done things exactly like this but never told anyone about it; because this is the cutting edge, they don't really lose much by telling people it's a product 'coming soon' and then just not launching it if people don't line up with their wallets open. It's interesting that Intel went for the middle-class businessman - which is probably the right market to go for eventually, but the early adopters will be 'those pesky kids', and they will need something a little more fashionable.... Not a display, but an example of wearable tech that's actually fashionable.
-
If you're referring to the resolution after debayering, then you're absolutely right. People say that the 1080 from the C100 is so nice because it's downscaled from a 4K sensor, and the A7III downscaling a 6K readout to 4K seems to be a feature that isn't spoken about enough. I find it strange that through this whole conversation about 4K vs 1080 on YT, no-one mentioned debayering. With film you shot at the same 'resolution' as you delivered in, but digital doesn't work that way. Anyone who wants to see what 4K YT can look like if done correctly should compare it to a video by MKBHD, who shoots 5, 6, or 8K RAW. And if you think he can't possibly be shooting 8K RAW because it's ridiculous in terms of camera equipment and storage, check this out:

There's a pretty strong technical argument that the A7III should be the C100 of FF mirrorless, because it should have a bunch more real resolution in its 4K output than anything that captures natively at 4K. I'm surprised that the pixel-peeping people aren't publishing test charts of this. If I end up with an A7III then I'll do a comparison just for my own curiosity.
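To put rough numbers on the debayering argument (a back-of-the-envelope sketch; the readout width and the 75% debayer figure are assumptions, not measured specs):

```python
# Rough oversampling arithmetic: a native-4K Bayer readout vs a 6K
# readout downsampled to 4K. All figures are approximate assumptions.
uhd_width = 3840          # UHD delivery width in pixels
readout_6k_width = 6000   # assumed full-width 6K sensor readout

# A Bayer sensor measures one colour per photosite, so a common rule
# of thumb is that demosaicing resolves roughly 75% of the nominal
# pixel count as real luma detail.
DEBAYER_FACTOR = 0.75

native_detail = uhd_width * DEBAYER_FACTOR
oversampled_detail = min(readout_6k_width * DEBAYER_FACTOR, uhd_width)

print(f"native 4K Bayer: ~{native_detail:.0f} px of real horizontal detail")
print(f"6K downsampled:  ~{oversampled_detail:.0f} px (limited only by the 4K container)")
```

On those assumptions a native 4K capture resolves something like 2880 px of true horizontal detail, while the 6K downsample can fill the whole 3840 px container - the C100 trick all over again.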
-
I thought it was that they made you look like a complete turnip!
-
I think this depends on what level of film-making you're doing. The professional colourists over at liftgammagain are likely working with RAW and high quality Prores files from the high end cinema cameras, and live in a different world to the one we talk about on here. There was a thread about buying "cheap" client monitors for their grading studio (these aren't monitors to grade with, they're only for the clients to view the grade) and I just about had a heart attack when someone said that their budget was $8000 per monitor, and they were looking to buy half-a-dozen of them!!
-
I think it's to do with the colour accuracy after a conversion, especially considering that most proxies are much lower quality than the original footage. If you were transforming from RAW to something like ProRes 4444 then I don't imagine it would be a huge problem, but I could be wrong about that. Another thing I didn't mention is that if you're doing VFX work or any precise tracking then you want to use the original files too, because they'll enable much better tracking accuracy. I've heard VFX people say that for realistic 3D compositing you sometimes need to track to within a single pixel of accuracy, or even to within half or a quarter of a pixel. I've never done it, but it makes sense: if you're moving the camera and compositing in a 3D object that's meant to move like it's part of the environment, then any wobble of the VFX object relative to the real environment you filmed will be quite noticeable if your tracking isn't great.
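For a feel of what sub-pixel tracking means in practice, here's a minimal sketch (not a production matchmove pipeline; the filenames are hypothetical) using OpenCV's phase correlation, which estimates the translation between two frames to a fraction of a pixel:

```python
# Estimate the sub-pixel translation between two frames using phase
# correlation. A real matchmove solves full 3D camera motion; this
# only illustrates the sub-pixel precision discussed above.
import cv2
import numpy as np

frame_a = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)  # hypothetical files
frame_b = cv2.imread("frame_0002.png", cv2.IMREAD_GRAYSCALE)

# phaseCorrelate expects float images and returns the (dx, dy) shift
# with sub-pixel precision, plus a confidence response.
(dx, dy), response = cv2.phaseCorrelate(np.float32(frame_a), np.float32(frame_b))
print(f"shift: dx={dx:.3f}px, dy={dy:.3f}px (confidence {response:.2f})")
```

Heavily compressed proxies smear exactly the fine detail an estimator like this relies on, which is why the tracking is done on the original files.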
-
Cat overlord!!! Oh, that made me laugh!!
-
I remember back in the day seeing a little video about a guy who worked for Nokia, whose job it was to travel the world, see what cool things people were doing with phones, and report those things back to Nokia as product and feature development ideas. One of the most interesting ones he mentioned was that villagers in rural Africa were buying a mobile phone as a business for their village. Obviously the other villagers would pay to be able to call people on the phone, and the phone owner would take a percentage, but the big thing was that the phone acted like a bank. There's a very rural-oriented culture in much of Africa where people go into the city to get a job for a few years but send money back to their families in their village. So the city worker buys a pre-paid phone card and sends the card details to the phone owner in the village, who then pays the family most of the value of the card, keeping a percentage as a 'transaction fee'. I thought this was brilliant - Nokia basically crowd-sourcing their R&D. I wonder if Apple or Samsung are doing something similar, or if perhaps the innovation is now simply in software rather than business models?

The most interesting thing about the smartphone, to me at least, is that it's basically a generic device: it has no specific interface, no specific display limitations, no specific processing limitations, etc. It's a device that can be adapted to as many different styles of interface, display, or app design as developers care to create. It's kind of the antithesis of how old phones used to work (and how cameras currently work), where you choose a product and you get their hardware, their OS, and their apps all in one.
-
I agree that the chipset and the requirement for AC power are both severely limiting factors.. I watched a Hackintosh video where the guy mentioned something about the compatibility of various chipset manufacturers that surprised me, but the fact they didn't put a higher-powered Radeon in there is a bit strange. Unless it was designed to a spec, perhaps something like "FCPX must be able to play 4K 30 in real-time with 2 LUTs and some basic curves adjustments applied on the nicest Apple display"? The first battery-powered one will be interesting.

One thing I learned over at liftgammagain is that you can ingest footage with a slower machine, you can generate proxies with a slower machine, you can edit with proxies, and you can do sound with proxy video, but you can't grade with proxies - so if you are grading in front of a client, your machine needs to be able to play the original footage with all the grades applied in real-time. Grading, however, is done in a controlled lighting situation with calibrated monitors, so it's a situation where AC power is available, and in that sense an AC-powered solution still makes sense.

Personally I don't need to play graded footage in real-time, so it doesn't matter for me. I'm happy to scrub the playhead around to see how things look, render it out, watch that render back, and then make changes if required. For my home videos my 'clients' are my family, so I can play them the mostly-finished project and get their input while also reviewing the output for any strange things that catch my eye.
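To put rough numbers on why real-time playback of the originals is the hard part (back-of-the-envelope only; bitrates are approximate published figures for UHD at 23.976fps):

```python
# Approximate sustained data rates for real-time playback of original
# footage vs an offline proxy. Bitrates are approximate figures.
codecs = {
    "ProRes 4444 (UHD)": 1061,       # Mbit/s, approx
    "ProRes 422 HQ (UHD)": 707,      # Mbit/s, approx
    "ProRes 422 Proxy (1080p)": 45,  # Mbit/s, approx
}

for name, mbps in codecs.items():
    # divide by 8 to get the MB/s the storage must sustain
    print(f"{name}: ~{mbps} Mbit/s = ~{mbps / 8:.0f} MB/s sustained")
```

A single grading-ready UHD stream wants over 100 MB/s from storage plus the decode horsepower to match, while the proxy is more than an order of magnitude lighter - which is why every stage except the grade can live on a modest laptop.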
-
Google data shows HUGE decline of interest in Canon 5D series
kye replied to Andrew Reid's topic in Cameras
You forgot to include "flange distance", which after all that discussion in the Nikon FF and Pocket 2 threads must surely be the leading search term in all of photography today!!
-
My view is that the novelty will wear off, and that's a good thing, but ultimately the future is 3D because AR (augmented reality) will dominate. The popularity of smartphones is undeniable, and while they are great for having constant access to apps and the internet, they absolutely suck as a user interface because the screens are so small. The future of the smartphone interface will be AR - essentially placing smartphone technology within your field of view, like the heads-up display did for driving. You get a glimpse of it with the Apple Watch, which is essentially a second screen for your phone, but one that is much handier than having to retrieve your phone from your bag / pocket / nightstand; AR would eliminate that separation between a person and their device.

In terms of how well this particular product does in the market, who knows. There's a saying in startup culture - "being early is just like being wrong" - but even if they are early, establishing yourself as one of the early developers, with the knowledge, patents, tech, and company culture, means that when demand goes up they can be ready. Capturing 3D won't really take off until there is a decent appetite for consuming 3D, which didn't happen with TVs, but might with VR, and definitely will with AR - although I think AR could be a long way off, at least in product-lifecycle timescales.

Google Glass was obviously a failure (for many reasons), but products like Snapchat Spectacles are bringing wearable tech into the market in ways that Google Glass couldn't accomplish, and if Snapchat Spectacles had the functionality of the Apple Watch to display instant messages and other basic info, they would be very popular. I think VR will get some traction before AR, most likely from gaming, but the fact that it blinds you to your surroundings limits how and where people will actually use it. We're not about to see the average commuter put on a pair of VR goggles on the train, for example!!
-
You may well be right. I have an Apple laptop and I use it for the above, but I chose Apple because of other factors. If someone didn't already have a laptop and wanted one only for video, I wouldn't recommend an Apple laptop unless they were an FCPX user. When I was in the market for a new laptop I did a detailed comparison between the MBP and a couple of non-Apple laptops, and it was things like the integration with the Apple ecosystem that decided it for me - something that had nothing to do with video. Of course, being my only machine, it's what I use for video, so being able to upgrade it with an eGPU if I want the extra performance provides some extra flexibility and options to extend my current setup, which is nice to have.

I think the average person on this site is oriented around video more than the target users for eGPUs are, or certainly for eGPUs with this processor anyway. If you're looking for a video-first machine then this eGPU isn't the way to go, and I think people who are video-first find it hard to understand that video products might be made for anyone other than video-first people. In the same way that it would be silly for me to hang around on mobile phone forums criticising every smartphone because it doesn't shoot 4K 60 in 16-bit RAW, have XLR inputs, or SDI connections, I find it strange that people who would never buy an Apple laptop are all of a sudden the experts on what to buy for those people who do own one. It's like the film-making industry hasn't worked out that convenience and decent video can now co-exist, that the vast majority of film-makers are amateurs, that the biggest networks aren't broadcasters, and that the majority of video content isn't consumed on projectors, and possibly not even on TVs!
-
Ah! I thought you were saying it couldn't be done. If you're saying it would be a bad idea, then that's a different conversation. I agree that triggering people with epilepsy everywhere you go would be a bad idea, and you can't overpower the sun with anything except very, very bright lights, so that's the ballgame for overpowering ambient light for video right there. The alternative would be a burst of continuous lighting, but that would be pretty nasty in power requirements and pretty horrific for the poor people the light is aimed at. The answer is probably high ISO and digital relighting - Apple's Portrait mode but with the tech advanced by a dozen or so generations. People say that? Wow! Now that sounds great!! Why didn't I think of that!
-
That makes sense.. It sounded like you were suggesting that because some codecs require a lot of processing, everyone should switch from a laptop to a desktop and completely lose all the benefits of having a portable computer. The way I see it, there are many stages of film-making in which a computer is required:
- On set, ingesting footage
- Reviewing dailies after shooting
- Editing
- Colour correcting
- VFX / compositing
- Grading
- Titles and export
- Archiving

If you use a laptop you can do all of the above with the same computer. But then 4K H265 codecs appear, and because you can't edit or grade that footage the advice is to switch to a desktop, which means maintaining two complete setups, or losing the ability to do many of the remaining steps. There's an assumption that because grading requires a calibrated monitor and environment, you'll be doing all the other things in that environment too, which is complete bollocks.
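For what it's worth, the usual laptop-friendly workaround is to transcode the heavy H265 originals into an easy-to-decode mezzanine and relink to the originals at the end; here's a minimal sketch (an ffmpeg invocation wrapped in Python; filenames are hypothetical and ffmpeg must be on the PATH):

```python
# Transcode a hard-to-decode 4K H265 camera original into a 1080p
# ProRes LT proxy that edits smoothly on a laptop.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "clip_h265.mov",   # hypothetical 4K H265 camera original
    "-c:v", "prores_ks",     # ffmpeg's ProRes encoder
    "-profile:v", "1",       # profile 1 = ProRes 422 LT
    "-vf", "scale=1920:-2",  # downscale to 1080p to keep files sane
    "-c:a", "copy",          # pass the audio through untouched
    "proxy_clip.mov",
], check=True)
```

Edit with the proxies, then relink to the H265 originals (or a ProRes transcode of them) for the grade and export.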
-
Changing your lifestyle makes sense? So when I bought a 4K camera I should have stopped editing video on my two-hour daily commute, and instead edited at home rather than spending time with my family? Are you on drugs? How about this - if you can edit on a desktop computer, then why are you even in a thread about an eGPU, which is clearly aimed at laptop users...?
-
Nice post @jonpais. There are so many different aspects to consider, and for many of us a camera that really stuffs up a single feature is worse than one that is merely passable at everything but doesn't wow. I think that's the difference between different types of filming - some situations call for a camera to be great at some things and don't need others, whereas other styles need everything to operate above a certain minimum level of performance, even if that minimum level is quite modest. A great example is the GH5 vs A7III - 10-bit or 4K60 doesn't matter to me if the AF has failed, and yet there are many cinema cameras that don't even have AF. People have told me flat out that I'm expecting too much, but unlike perhaps the vast majority of people on here, I started making films with what I had (a Samsung Galaxy S2 and a ~$300 Panasonic GF3 m43) and only upgraded when I went on a trip, filmed real things, messed up shots all over the place, and then looked for ways to improve.
-
Whenever I see things like "we can't see in 4K" or "no-one will ever need 8K", I just hear "640K should be enough for anybody".
-
A really simple example might be the home videos from Minority Report. Ignoring the 3D aspect of it, right now we have the ability to shoot really wide angle and then project really wide angle - all you need is a GoPro and one of those projectors designed to sit close to the screen. That's existing tech, right now. If you shoot 4K but project it 8 foot tall and 14 foot wide, most people sure as hell will be able to see it - especially if you've shot H265 at 35Mbps!! Projecting people life-sized is a pretty attractive viewing experience, so we're not talking about some abstract niche thing - we're talking about something that a percentage of the world's population would see in the big-box store and say "I want that".

I understand that. If you read my post carefully you'll notice I mentioned that they might have a 24/25/30fps sync - this is different to continuous lighting. While this isn't currently available at full power, there are strobes that can recycle fast enough (eg, the Profoto D2 can recycle in 0.03s and can already sync to 20fps bursts). All that's missing is a big enough buffer (capacitor bank) to do full power that fast.
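To sanity-check both claims with simple arithmetic (figures approximate):

```python
# 1. Pixel density of 4K projected 14 feet wide.
width_px, width_ft = 3840, 14
ppi = width_px / (width_ft * 12)  # pixels per inch on the screen
print(f"4K at 14ft wide: ~{ppi:.0f} ppi")  # ~23 ppi - coarse at close range

# 2. Strobe recycle time vs frame rate: a 0.03s full recycle allows
# roughly 33 flashes per second, comfortably above 24/25/30fps sync.
recycle_s = 0.03
print(f"0.03s recycle: ~{1 / recycle_s:.0f} flashes/s possible")
```

At ~23 ppi, any compression artefacts in a 35Mbps H265 file will be plain to see from across the room, and the strobe arithmetic shows the 24/25/30fps sync idea isn't far-fetched.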
-
That looks really good.. thanks!
-
My impression was that it was a quick 'fix' to bypass some kind of issue. Considering how important aspect ratios are in both still and moving images, and the fact that everyone has known they're important for almost the entire history of photography, it's unlikely it was a mistake. Of course, regardless of how and why it happened, it's likely they'll fix it quietly and we'll never find out.