Everything posted by kye

  1. My view is that the novelty will wear off, and that's a good thing, but ultimately the future is 3D because AR (augmented reality) will dominate. The popularity of smartphones is undeniable, and while they're great for having constant access to apps and the internet, they absolutely suck as a user interface because the screens are so small. The future of the smartphone interface will be AR, essentially placing smartphone technology within your field of view, like the heads-up display did for driving. A glimpse of it is the Apple Watch, which is essentially a second screen for your phone, but one that is much handier than having to retrieve your phone from your bag / pocket / nightstand, and AR would eliminate that separation between a person and their device. In terms of how well this particular product does in the market, who knows. There's a saying in startup culture - "being early is just like being wrong" - but even if they are early, establishing yourself as one of the early developers with the knowledge, patents, tech, and company culture will mean that when demand goes up they can be ready. Capturing 3D won't really take off until there is a decent appetite for consuming 3D, which didn't happen with TVs, but might with VR, and definitely will with AR. However, I think AR could well be a long way off, at least in product-lifecycle timescales. Google Glass was obviously a failure (for many reasons), but products like Snapchat Spectacles are bringing wearable tech into the market in ways that Google Glass couldn't accomplish, and if Snapchat Spectacles had the functionality of the Apple Watch to display instant messages and other basic info then they would be very popular. I think VR will get some traction before AR, most likely from gaming, but the fact it blinds you to your surroundings limits how and where people will actually use it. We're not about to see the average commuter put on a pair of VR goggles on the train, for example!!
  2. You may well be right. I have an Apple laptop and I use it for the above, but I chose Apple because of other factors. If someone didn't already have a laptop and only wanted one for video, I wouldn't recommend an Apple laptop unless they were an FCPX user. When I was in the market for a new laptop I did a detailed comparison between the MBP and a couple of non-Apple laptops, and it was things like the integration with the Apple ecosystem that decided it for me - something that had nothing to do with video. Of course, being my only machine, it's what I use for video, so being able to upgrade it with an eGPU if I want the extra performance provides some extra flexibility and options to extend my current setup, which is nice to have. I think the average person on this site is oriented around video more than the target users for eGPUs, or certainly for eGPUs with this processor. If you're looking for a video-first machine then this eGPU isn't the way to go, and I think people who are video-first find it hard to understand that video products can be made for anyone other than video-first people. In the same way that it would be silly for me to hang around on mobile phone forums criticising every smartphone because it doesn't shoot 4K 60 in 16-bit RAW, have XLR inputs, or SDI connections, I find it strange that people who would never buy an Apple laptop are all of a sudden the experts in what to buy for those people who do own an Apple laptop. It's like the film-making industry hasn't worked out that convenience and decent video can now co-exist, that the vast majority of film-makers are amateurs, that the biggest networks aren't broadcasters, and that the majority of video content isn't consumed on projectors, and possibly not even on TVs!
  3. Ah! I thought you were saying it couldn't be done. If you're saying it would be a bad idea, then that's a different conversation. I agree that triggering people with epilepsy everywhere you go would be a bad idea, and you can't overpower the sun with anything except very, very bright lights, so that's the ballgame for overpowering lighting for video right there. The alternative would be continuous lighting for a burst, but that would be pretty nasty in power requirements and pretty horrific for the poor people the light is aimed at too. The answer is probably high ISO and digital relighting - Apple's Portrait mode, but with the tech advanced by a dozen or so generations. People say that? Wow! Now that sounds great!! Why didn't I think of that!
  4. You're right about the size / resolution of the output files not changing, but the aspect ratio of the things being filmed has changed (circles aren't round any more).
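To illustrate the aspect-ratio point with hypothetical numbers: if footage was recorded with an unintended horizontal stretch, round circles can be restored by resampling the width by the inverse of the stretch factor. A minimal sketch (the 1.25x figure is made up purely for illustration):

```python
# Hypothetical illustration: a circle recorded with an unintended
# horizontal stretch must be resampled narrower to look round again.

def corrected_width(recorded_w: int, stretch: float) -> int:
    """Width the frame must be resampled to so circles are round again.

    stretch > 1.0 means the image was recorded horizontally stretched.
    """
    return round(recorded_w / stretch)

# e.g. a 3840-wide frame recorded with a (made-up) 1.25x stretch
print(corrected_width(3840, 1.25))  # -> 3072
```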
  5. That makes sense. It sounded like you were suggesting that because some codecs require a lot of processing, everyone should switch from a laptop to a desktop and completely lose all the benefits of having a portable computer. The way I see it, there are many stages of film-making in which a computer is required: ingesting footage on set, reviewing dailies after shooting, editing, colour correcting, VFX / compositing, grading, titles and export, and archiving. If you use a laptop you can do all of the above with the same computer. But then 4K H265 codecs appear, and because you can't edit or grade that footage the advice is to switch to a desktop, which means maintaining two complete setups, or losing the ability to do many of the remaining steps. There's an assumption that because grading requires a calibrated monitor and environment, you'll be doing all the other things in that environment too, which is complete bollocks.
  6. Changing your lifestyle makes sense? So, when I bought a 4K camera I should have stopped editing video on the two-hour daily commute to my day job, and instead edited video at home rather than spending time with my family? Are you on drugs? How about this - if you can edit on a desktop computer, then why are you even in a thread about an eGPU, which is clearly aimed at laptop users...?
  7. Nice post @jonpais. There are so many different aspects to consider, and for many of us, a camera that really stuffs-up a single feature is worse than one that is passable at everything but doesn't wow. I think that's the difference between different types of filming - some situations call for a camera to be great at some things and don't need other things, whereas other styles need everything to operate above a certain minimum level of performance, even if that minimum level is quite modest. A great example is the GH5 vs A7III - 10-bit or 4K60 doesn't matter to me if the AF has failed, yet there are many cinema cameras that don't even have AF. People have told me flat out that I'm expecting too much but unlike perhaps the vast majority of people on here, I started making films with what I had (a Samsung Galaxy S2 and a ~$300 Panasonic GF3 m43) and only upgraded when I went on a trip, filmed real things, messed up shots all over the place, and then looked for ways to improve.
  8. Whenever I see things like "we can't see in 4K" or "no-one will ever need 8K" I just hear "640K should be enough for anybody".
  9. A really simple example might be the home videos from Minority Report. Ignoring the 3D aspect of it, right now we have the ability to shoot really wide angle and then project really wide angle. All you need is a GoPro and one of those projectors designed to sit close to the screen - existing tech, right now. If you shoot 4K but project it 8 foot tall and 14 foot wide then most people sure as hell will be able to see it - especially if you've shot H265 at 35Mbps!! Projecting people life-sized is a pretty attractive viewing experience, so we're not talking about some abstract niche thing - we're talking about something that a percentage of the world's population would see in the big-box store and say "I want that". I understand that. If you read my post carefully you will notice I mentioned that they might have a 24/25/30fps sync - this is different to continuous lighting. While this isn't currently available at full power, there are strobes that can recycle fast enough (e.g. the Profoto D2, which can recycle in 0.03s and can already sync to 20fps bursts). All that is missing is a big enough buffer (capacitor bank) to do full power that fast.
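The recycle-time claim above is easy to sanity-check: a strobe limited only by its recycle time can fire at most 1/recycle_time flashes per second. A quick back-of-envelope sketch using the 0.03s figure quoted above:

```python
# Back-of-envelope check of whether a strobe's recycle time alone
# could keep up with common video frame rates.

def max_flash_rate(recycle_s: float) -> float:
    """Highest flashes-per-second a strobe can sustain, limited only by recycle time."""
    return 1.0 / recycle_s

rate = max_flash_rate(0.03)  # the 0.03s recycle figure quoted above (~33 flashes/s)
for fps in (24, 25, 30):
    print(fps, "fps:", "achievable" if rate >= fps else "too fast")
```

So on recycle time alone, 24/25/30fps sync is within reach; the remaining limit, as noted above, is sustaining full power at that rate.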
  10. That looks really good.. thanks!
  11. My impression was that it was a quick 'fix' to bypass some kind of issue. Considering how important aspect ratios are in both still and moving images, and the fact that everyone has known they're important for almost the entire history of photography, it's unlikely it was a mistake. Of course, regardless of how and why it happened, it's likely they'll fix it quietly and we'll never find out.
  12. I watched that too... He makes a point, BUT he's assuming that we shoot, process, and display in the same resolution, and that we display that resolution in a 16:9 rectangle with a viewing angle aligned with the THX or SMPTE specifications. This is old thinking. We need to shoot higher than 4K if we want visually acceptable results with reasonable cropping, VR, or any kind of immersive projection. Not if Canikon gives us an early Xmas!! I've gone a little way down this path, and there are a few adjustments. With shutter speed you can either shoot for video (the 180-degree rule), for stills (very short exposures), or a compromise. I shoot a compromise because I think the motion blur you get from something like 1/100 or 1/150 is appropriate - if it's significantly blurred then it was an action shot and I like that in the photo - but this is personal taste. For formal posed moments, video modes aren't currently well suited, so you're better off just shooting stills. However, cameras could evolve in this respect (and would need to), perhaps with something like a 'burst video mode' where the camera shoots RAW for a burst, perhaps 0.5s before and after the shutter button is pushed. This, combined with high ISO performance, could give sufficient image quality to work fine with continuous light, or you could get continuous lighting for a burst, or 24/25/30fps strobe sync for that burst. These technologies are either already with us or are pretty close, so it's more a case of them being combined and the market working out what photogs would use and find practical. In terms of picking shots, I think it's actually a bit easier than with photos, but it requires a different mindset. When you're taking photographs you're trying to notice everything and then hit the shutter at exactly the right moment to capture it.
You then take those moments into post and have to choose between the shot with the nicest groom smile but a slightly forced bride smile, and the better bride smile but lesser groom smile, or spend the time photoshopping the two together. With video, you're capturing every moment and then get to choose any moment to retrospectively 'hit the shutter button'. In this sense, you should view it as a continuous capture, not individual images. During a moment, there are lots of micro-moments happening: the bride smiles, the groom smiles, the trees flutter, a distracting noise happens in the background, the couple hear it, the couple look puzzled while they process it, then they laugh at slightly different times. You will have captured all of those moments and all of the moments in between, so the task is to scrub the playhead back and forth to find the moments of 'peak smile'. This is kind of how sports photographers choose images from bursts - they don't think "crap, I've got to choose between 3000 images"; they know that in each burst there is a best moment and they just scroll through to pick it. The difference is that even shooting the fastest bursts from Canikon (around 10-12 fps), the magic moment in sports is often between frames. There's a reason flagship bodies are pushing for higher burst rates - pro shooters aren't saying "oh shit, higher burst rates make my life worse by making me choose between more images". In the future, the burst video mode for flagship bodies will be much faster than video - it will be 50, then 100, then higher frame rates. If you look at the iPhone burst functionality, it captures a burst and then shows you the whole burst as one image; if you tap to edit it you can choose which frames are kept, and there's a button to discard all the rest.
Apple has worked out that a burst is different to a whole string of individual images; this is more aligned with the new way of looking at this type of capability.
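The 'retrospective shutter' workflow described above boils down to scoring every frame in a burst and keeping the peak. A minimal sketch, where real smile detection is replaced by a hypothetical per-frame scoring function:

```python
# Sketch of picking the 'peak smile' frame from a burst or clip.
# The scoring function is a stand-in for whatever detector you'd
# actually use (smile detection, sharpness, eyes-open, etc.).

def pick_peak_frame(frames, score):
    """Return (index, frame) of the highest-scoring frame in a burst."""
    best_i = max(range(len(frames)), key=lambda i: score(frames[i]))
    return best_i, frames[best_i]

# Toy usage: fake per-frame 'smile' scores standing in for real frames
scores = [0.2, 0.5, 0.9, 0.7, 0.3]
i, best = pick_peak_frame(scores, lambda f: f)
print(i, best)  # -> 2 0.9
```

Scrubbing a playhead is the manual version of this; the point is that the burst is treated as one continuous capture to search through, not thousands of separate decisions.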
  13. Same here. It's just strange when a topic like "how can I boost the power of my laptop" comes up and people answer "buy a 24 ton supercomputer" and you're like "how on earth does that even make sense?"
  14. In case anyone hasn't seen it... The video references a kickstarter for a battery adapter that looks really useful as it's both an external battery solution for multiple devices but is also a charger:
  15. LOL! Yeah, I love the guessing too!
  16. Cool. The more control they have over these things and the better they are at colour the closer they should be able to get in matching colours between cameras with different sensors. With tools like Resolve, if you have time and some dedication you should be able to get the look you want from most cameras these days. Matching in a multi-camera setup requires a higher skill level, but I've seen youngsters on YT match many cameras passably well in a very short period of time, using only the LGG controls, and if you're willing to dive into colour checkers you can dial things in pretty quickly if you do a bit of homework about how the colours differ beforehand.
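For anyone curious what the LGG controls mentioned above actually do, here is a minimal sketch of one common lift/gamma/gain formulation applied to a single normalised (0-1) channel value. Grading applications differ in the exact math, so treat this as illustrative rather than how Resolve specifically implements it:

```python
# One common lift/gamma/gain formulation for a normalised channel value.
# Exact formulas vary between grading applications; this is illustrative.

def lgg(x: float, lift: float = 0.0, gamma: float = 1.0, gain: float = 1.0) -> float:
    """Apply lift (shadows), gamma (mids), and gain (overall scale) to one sample."""
    y = gain * (x + lift * (1.0 - x))  # lift raises blacks most, gain scales everything
    y = min(max(y, 0.0), 1.0)          # clamp before the power curve
    return y ** (1.0 / gamma)          # gamma bends the midtones

print(lgg(0.5))                       # identity settings -> 0.5
print(round(lgg(0.0, lift=0.1), 3))   # raised shadows -> 0.1
```

Matching two cameras with these controls amounts to nudging lift/gamma/gain per channel until grey-ramp and skin patches land in the same place, which is why a colour checker speeds the process up so much.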
  17. So in one corner we have the argument that companies without a cinema line could go all in and will make a non-crippled camera. In the other corner we have the argument of trickle-down technology where those companies with a cinema line will have already recouped the R&D costs of really fancy tech and the low-end of their range will be a combination of high performance and lower cost. Maybe we should just toss a coin to predict these things!
  18. It won't be a direct competitor, no. But for some people who are in between, it might be a toss up between two options that only partially provide what they want. I know this won't be that many people though.
  19. @srgkonev - thanks for sharing, that is a really cool finished video! Cameras really are like musical instruments: they create nothing while sitting on a shelf, good instruments don't improve bad players, and they only fully bloom in post production!
  20. This makes me wonder what the architecture of 'colour science' is inside a camera. Does anyone know exactly how sensors are implemented? I figure it could be one of two scenarios. Scenario 1: the sensor gives an untuned RAW readout -> the image is processed according to colour profile / image settings / codec -> the file is output. Scenario 2: the sensor is configured to give a 'tuned' RAW readout -> the image is processed according to colour profile / image settings / codec -> the file is output. If we look at what Magic Lantern does for Canon, the RAW files look quite Canon-ish. Does that mean scenario 2 is the case? My impression is that making a sensor that can get 'nice skin tones' by changing the raw electro-chemical properties of the photosites would be almost impossible, but maybe I'm wrong. Either way, there are pretty powerful options for adjusting the image, so they've got a fighting chance of doing an impressive job matching them.
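The two scenarios described above can be sketched as pipelines. All the stage names and numbers here are hypothetical stand-ins - real camera pipelines are proprietary and far more involved - but the structural difference (where the tuning happens) is the point:

```python
# Hypothetical sketch of the two 'colour science' placements described above.
# Stage names and multipliers are illustrative stand-ins, not real camera code.

def read_out(signal, tuning=None):
    """Sensor readout; 'tuning' stands in for any in-sensor adjustment (Scenario 2)."""
    return [s * (tuning or 1.0) for s in signal]

def apply_profile(raw, profile):
    """Colour profile / picture style applied in the image processor."""
    return [min(1.0, v * profile) for v in raw]

def encode(img):
    """Stand-in for codec encoding to the output file."""
    return tuple(round(v, 3) for v in img)

signal = [0.2, 0.5, 0.8]                                   # toy photosite values
file_1 = encode(apply_profile(read_out(signal), 1.1))       # Scenario 1: untuned readout
file_2 = encode(apply_profile(read_out(signal, 1.05), 1.1)) # Scenario 2: tuned readout
print(file_1, file_2)
```

Under Scenario 2, even a RAW dump (what Magic Lantern extracts) would already carry the manufacturer's tuning, which would be consistent with the Canon-ish look of ML RAW files.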
  21. That video is just wonderful... Normally I remove videos when I quote posts, but it deserves a bump!! One of the things that struck me was the DR. I'm no expert, but the highlights looked soft (i.e. not harsh) and the shadows looked nice too, with nice mid-tones. People keep talking about skin tones when 'colour science' gets mentioned, but more and more I think (for my eyes at least) it's the DR and highlights that really matter. As an example, the video above is in direct contrast to the one below, shot on Canon, where the highlights look clipped / harsh / videoish. To be fair, Matti could have shot it on his 1DXII or his 6DII, and I'm pretty sure he doesn't shoot log, so it won't be using the full DR from the sensor; it's also some pretty difficult lighting conditions, and he might have rushed the grade (considering he's travelling with friends and family and trying to vlog at the same time), but yeah, it's not nice. To be fair to the Nikon D850 video as well, there are lots of shots in there with quite challenging lighting conditions too. I was saying that my shortlist was the A7III, Pocket 4K, Canon FF ML and Nikon FF ML, but I think Nikon has just gone from an academic inclusion to an option I will take seriously! You missed out the people waiting for the Canon FF mirrorless. The consensus is that it's unlikely Canon will do anything revolutionary, but there's still a chance, and I think there will be quite a few people waiting to see what that looks like before ordering anything. Also, those on the fence about the Pocket 4K might wait for sample footage and reviews to start trickling in, considering the initial issues / bugs that BM had with the original Pocket.
  22. A camera in between the XC10/15 and C200 would be absolutely fantastic. The C100 is sized closer to the C200 than the XC10 but only does 1080. Those industrial cameras you mention only do 1080 too, and would be large and attention-grabbing by the time you rigged them up with screens and proper ergonomics. You're right about the camerasize.com images looking strange - for some reason they cropped the lens out of the top angle!
  23. I totally get your point. It'll be a hangover from the days when burning film was a fixed lower limit on making a film - you couldn't shoot a 90-minute film without exposing, processing and developing at least 90 minutes of negatives. In that sense, "No Budget" would have made sense if it meant being below that minimum cost. In today's world, when digital cameras are being thrown away, old computers capable of editing video are being thrown away, and editing software is available for free, you can literally make a film for nothing. In this sense, $2200 is a large amount of money because it doesn't have to be spent on equipment. It's yet another culture clash caused by technology letting all the riff-raff in! Anyway, enough tangents from me.
  24. Actually, this thread is mis-named. It should be called "YouTube has come to its senses and doesn't add black bars to non-16:9 content".
  25. It seems like the decision is whether you're swapping to FCPX or not. I'd suggest watching some of those "I switched from PP to FCPX" videos, some about FCPX to PP, and seeing if you can find any where people went PP -> FCPX -> PP again, and listening to the reasons why people made those switches and what their impressions were. It depends on what style of editing you do, as each editor has slightly different strengths and weaknesses, and slightly different philosophies / approaches. It's about how well they fit how you like to think and what your workflow looks like. I use Resolve, not because it's "the best" (it's not) but because it fits my mindset and workflow.