Everything posted by kye

  1. Thanks to @TomTheDP for finding this comparison between the Original BMPCC and Alexa, and thanks to Alain Bourassa for performing the test! As @Matt Kieley says, the BMPCC and BMMCC have the same sensor, so it's a proxy for the BMMCC as well. Not a bad match, considering that this appears to have no adjustments, and the shots have different white balances:

The WB comparison:

Latitude tests:

What I see from this comparison:
  • Obviously the Alexa has more latitude, no surprises there
  • Both comparisons vary quite a bit in tone and levels between shots (an indication to me that the shots were taken quickly and the results not processed much, so I think it's a good indicator of what things are like when used for real rather than in meticulous lab environments)
  • The BMPCC fails at +4 and the Alexa doesn't, but both look reasonable at -3, so the highlights are where the Alexa's increased DR is revealed
  • The BMPCC looks very usable across the whole range, which reflects my impressions of shooting with it

Vectorscope comparisons, with colours extended 2X in the scope for ease of viewing.

BMPCC primaries:

Alexa primaries:

BMPCC chart:

Alexa chart:

For reference, this is a colour chart I shot with my GH5. The absolute level of saturation can't be compared as I can't remember what I did in post, but check out how different the overall shape is, which is much closer to 'correct':

I suggest this is a very good result, especially considering how old and how cheap this camera is, even after the price rise since the COVID lockdowns and nostalgia shopping.
  2. BMPCC6K Pro or FS7?

    You're spending a lot of money on such a decision, and will have to live with it, so I suggest getting a bit organised and taking some time to think about it systematically. My advice is this:
  • Make a list of all the significant differences between the cameras, including your own personal feelings and impressions as well as specs
  • Find the top 5 (or top 10) items that are most important to you, considering things that impact the final image as well as things that impact how easy and pleasant the camera is to use, plus speed and efficiency (which affect your income)
  • Weigh these factors up and see what comes out of that analysis; it may be relatively equal, with pros and cons on both sides, or it may be more clear cut
  • Check how you feel about that result. Often we don't know how we truly feel until a decision is made, and then we are either relieved about it, or have a negative reaction because we wanted it to go the other way.

I find personally that I can think about things over time, reading more info and gathering opinions as food for thought, and eventually I develop an understanding of what I value and what might be the right direction to go in. But if I do something like the above with pen and paper, it reduces that thought process from weeks to days because it forces you to see clearly.

I'd also suggest that you weigh business and personal-experience aspects more heavily than tech specs. A camera that makes beautiful images might be nice, but if you hate your life as you slowly go broke, well, the image would have to keep you warm at night for it to be worth that price...
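The weighing-up step above can be sketched as a toy weighted decision matrix. All factor names, weights, and scores below are invented placeholders, not an actual assessment of either camera:

```python
# Toy weighted-decision sketch of the process described above: list the
# factors, weight them by importance (1-5), score each camera (1-5),
# then compare the weighted totals. All numbers are made up.

factors = {            # factor: (weight, BMPCC6K Pro score, FS7 score)
    "image quality":        (5, 4, 4),
    "ergonomics/speed":     (4, 3, 5),
    "codec/workflow":       (3, 5, 3),
    "lens ecosystem":       (3, 4, 4),
    "how it feels to use":  (4, 4, 3),
}

def total(idx):
    """Weighted total for camera idx (0 or 1)."""
    return sum(w * scores[idx] for w, *scores in factors.values())

bmpcc_total, fs7_total = total(0), total(1)
```

With these invented numbers the totals come out close, which is exactly the "relatively equal with pros and cons on both sides" outcome described above; the value is less in the final number and more in being forced to write the factors down.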
  3. I have no idea what they were doing there, but your confusion between "absence of evidence" and "evidence of absence" is a pretty basic logic flaw, and one that pretty much prevents reasonable discussion, especially when I told you where to go to look at the evidence that I have seen. Let's get back to the BMMCC and stop assuming that our own personal lack of experience or understanding somehow translates to the rest of the world.
  4. What are the specific things you're looking for in the XT3/4? If you're wanting to match cameras it's always easier to start with similar cameras, or at least cameras from the same brand, in order to have a similar style of colour science. The EOS-M series cameras are mirrorless and have an APS-C sensor that can take a speed booster to use FF lenses, but there's also a port of Magic Lantern so you can get RAW out of them, plus they use Canon colour science so should match much better. Of course, if you were after features the EOS-M lacks then I'd understand why it might not be a good fit.
  5. Most of the BTS shots were from the Cinematography Database YT channel, but were embedded deep within his videos. The irony wasn't lost on me at the time: he was saying how everyone thought they weren't being used, but then he was only including the evidence hidden in the middle of other videos. I remember at the time he was doing lots of lighting-breakdown videos and so would be sharing stills from whatever BTS footage he could find, often of people being interviewed on set or other little snippets where you could see lighting rigs etc, and he'd go off on a little tangent and mention the XC10 seen in the background of the lighting setup he was breaking down. It went on for 6-12 months IIRC, so to find them you'd have to watch a year's worth of his videos, unfortunately.

It was very difficult to tell what they might have been using them for. Considering that they were designed to match exposure settings and colour profiles with other Canon cameras, and had a fixed lens and internal media, all you had to do was run timecode into them and it could literally be an XC10 sitting on a tripod being used for a real shot. There would be no way to know from the rig what it was being used for. An A7S3 sitting on a tripod with the kit lens would typically mean it wasn't being used for anything significant, but if it had an anamorphic lens, matte box, monitor, etc then you'd know it was being used more seriously. Often they were pointed at the set, and sometimes from interesting angles, so maybe they were being used for the odd random angle, or maybe as a webcam to the control room, who knows. Regardless, they were on set and being used for something.

Lots of older cameras are still being sold as refurbished units, but not available new. Comparing the BMMCC with the F5 is appropriate, I think, and really signifies its pedigree. It was good at release and is still good now.

The unique form factor of the BMMCC means it doesn't really have any direct competition; typically the other options are GoPros or the Zcam Z1, but neither of those is even remotely in the same league, let alone a serious competitor.

Studying colour science is only possible in a direct comparison of the same scene, or using a colour checker. Otherwise, who knows what is the camera and what is the scene. If you have any stills from the S1 that include a colour checker then I'd be curious to see a waveform and vectorscope of them.

If I didn't care about size, weight, or cost then I'd be a fan of the P6K too, and would probably own a P6K Pro. Of course, I'd be shooting 1080p Prores HQ downsampled from the whole sensor, but that's just me. The FX6 seems like an interesting camera too, and capable of a great image. Of course, almost everything can make great-looking images if you're a colourist of Juan's capability, and although I haven't experienced it myself, apparently the Alexa is a complete pain to shoot with, especially as a solo operator, so if you can get 90% of the image from a P6K then I'd imagine you're going to reach for it almost every time.
  6. I should also say that I shot a few outings (walk on the beach / visit to a park / etc) in Prores LT and even Prores Proxy, and what was interesting was that although you can see the compression artefacts, the 'feel' of the footage didn't change; it still felt malleable and cooperative.

I'd be tempted to say that the linear response and overall cooperative colours aren't matched by other cameras because they do something in their colour science to tint or otherwise process the shadows differently to the mids and highlights, which is why when you underexpose or overexpose and then adjust levels in post you don't get the same kind of image as if you'd shot it with the right exposure. However, I'm not sure if that's true. In theory, if you process all colours the same regardless of their luma value, then you should be able to pull up underexposed shots or pull down overexposed shots and still keep the same colours.

But getting a "neutral feeling image" is more complicated than that, because it doesn't work like that with magenta/green shifts in WB, like I obviously have in the above. If you have a WB control that essentially moves the whole vectorscope around, then you have to process all hues the same, otherwise you'll bend the colour response. For example, imagine a scene where there is a white wall lit by three lights: one is neutral, one has a CTO gel and is mostly on one side, and the other has a CTB gel and is on the other side. If you WB the camera properly, you should get a straight line on the vectorscope going from orange to cyan through the middle of the scope. Now imagine we introduce a magenta shift in the WB of the camera. We'll get a straight line on the vectorscope, but it won't go through the middle of the scope; it will pass on the magenta side of the centre. Here's the kicker: the BMMCC rotates and desaturates hues on the magenta and green sides of the vectorscope, which will bend that line.

If we then adjust that line in post, we won't have to pull it as far towards green because the magenta got desaturated (quite a lot, actually), and that will mean we still have magenta in the warmer and cooler parts of the image: the middle of the wall will look neutral but the sides will look magenta. If we adjust further towards green, the sides of the wall will be warm/cool but not magenta or green, but now the middle of the wall will be green. I haven't done this test, but I do know that the camera desaturates the magenta and green parts of the vectorscope. It's actually quite a long way from neutral; you'd be amazed at how something so distorted could look so nice.

I don't have the BMMCC plot handy, but this is what the Kodak 2382 film emulation LUT in Resolve does, and the BMMCC and Alexa would be doing similar things. Unprocessed vs processed: Note that hues are rotated, saturation is compressed, blues and greens are desaturated, and yellows are quite saturated.

Having said all that, I don't know exactly what the BMMCC is doing, because if you shoot something and then adjust the green/magenta WB it doesn't feel like it's falling apart or that colours are going off in that way. Maybe they are but I just haven't noticed, or maybe there is some 'buffer' built into the skin-tone areas so that if you get the WB wrong the skin tones will behave predictably when you try and recover them in post. Who knows.
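The white-wall thought experiment above can be sketched numerically. This is a toy chroma model with invented (Cb, Cr) values, and the `desat_magenta` function is a crude stand-in for whatever the BMMCC's colour science actually does, not a measurement of it: a uniform WB offset keeps the line straight, while per-hue desaturation bends it.

```python
# Toy vectorscope model: colours as (Cb, Cr) chroma points. A white wall
# lit by CTB / neutral / CTO lights sits on a straight line through the
# centre of the scope (all values invented for illustration).
line = [(-0.3, 0.3), (0.0, 0.0), (0.3, -0.3)]  # cyan, neutral, orange

def shift(points, dcb, dcr):
    """A WB error acts, to first order, as a uniform chroma offset:
    every point moves the same way, so a straight line stays straight."""
    return [(cb + dcb, cr + dcr) for cb, cr in points]

def collinear(p):
    """True if three chroma points lie on one straight line."""
    (x0, y0), (x1, y1), (x2, y2) = p
    return abs((x1 - x0) * (y2 - y0) - (y1 - y0) * (x2 - x0)) < 1e-9

def desat_magenta(points, factor=0.5):
    """Hypothetical per-hue desaturation: pull points in the magenta
    quadrant (+Cb, +Cr) towards the centre of the scope."""
    return [(cb * factor, cr * factor) if cb > 0 and cr > 0 else (cb, cr)
            for cb, cr in points]

shifted = shift(line, 0.05, 0.05)  # magenta WB shift: straight, off-centre
bent = desat_magenta(shifted)      # per-hue desaturation: no longer straight
```

The neutral point sits in the magenta quadrant after the shift, gets pulled in, and the three points are no longer collinear, which is exactly the "bent line" described above.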
  7. The response of the BMPCC / BMMCC is strange in that it isn't strange. You adjust a control, the adjustment gets made, and your eyes see what they were expecting to see. That might sound strange, so let me provide some context.

My first attempts at colour grading were trying to match low-quality footage (GoPro, iPhone footage, etc) shot in mixed lighting. Even using relatively advanced techniques, and animating various adjustments over the course of a shot, it was hard to get things to match, and footage would break. It was enormously frustrating. I then bought the XC10, which I now know I didn't use properly, as I used it with some auto-settings and so was seeing its poor ISO performance, as well as sometimes trying to bring up shots that were underexposed, etc. This was similar to trying to adjust the bad-quality shots, and also got bad results. The 'feeling' is that the footage is very fragile and that any adjustment will break it, and also that the sliders in Resolve are fighting you all the time. You adjust WB and instead of the image becoming neutral and good, the colours go from being too blue or too purple to just being awful, so you make localised adjustments, and the closer you get to neutral the more it looks like you shot the whole thing at ISO 10,000,000.

Then I bought the GH5 and it was like a revelation. The 10-bit footage could be pushed and pulled and you basically can't break it even if you try ridiculous things. The 'feel' of the footage is the same as watching a colourist grade footage from a RED or Alexa in a YT video: they pull up a slider and the image just does it without falling apart. The footage comes to life when you adjust it, rather than falling apart.
However, when you're shooting on auto-WB (like I do) and every shot requires manual WB adjustment in post, occasionally animated too if the lighting changed during the shot, there is still an element of the colours not cooperating, and kind of feeling like they're degrading under your controls. It's night-and-day better than the XC10 shot outside its sweet spot, but still not ideal.

Then I bought the BMMCC and it was a revelation. I've shot both RAW and Prores and really pushed and pulled the footage in post, and the problems are just gone. You push and pull it and it just does what you tell it; the footage doesn't break and the colours don't feel like you're fighting them. Adjusting colours feels more like you've been given a spectacular image that was deliberately degraded to look flat and dull, and you're un-doing the degrading treatment. The image gets better as you play with it and get it to where you want it.

I posted this video earlier in the thread and it's useful as a reference point: I shot it earlier in the year, in Prores HQ with a fixed WB of 5500K and ETTR. The footage was nice, but didn't match shot for shot (my vND has a green tint) and some shots are obviously exposed very differently to each other depending on whether I was protecting the sky or not.

One thing you will notice in post is that although there is a small knee to roll off the highlights, the response is very linear across the whole range. So if you shoot a shot 3 stops under the exposure of another shot and adjust them in post to match (either by lowering the brighter one or raising the darker one) then you'll get very similar-looking images, taking into account the raised noise floor and clipping of course. It's like an Alexa in that respect, with a broad and very neutral response, unlike other cameras.
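The linear-response point above can be illustrated with simple scene-linear arithmetic: matching an N-stop exposure difference is just a gain of 2**N. The pixel values here are invented, and this deliberately ignores the raised noise floor and clipping mentioned above.

```python
# Invented scene-linear pixel values for a "correctly" exposed shot.
reference = [0.02, 0.10, 0.45]

# The same scene shot 3 stops under: each stop halves the light.
stops_under = 3
under = [v / 2**stops_under for v in reference]

# Matching in post is then a simple linear gain, which only works cleanly
# if the camera's response is linear and colours are processed the same
# at every level (noise floor and clipping aside).
matched = [v * 2**stops_under for v in under]
```

When a camera tints or compresses the shadows differently from the mids, that single gain no longer recovers the original colours, which is the "footage fighting you" feeling described earlier.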
Here are the shots on the timeline with a 709 conversion and not much else: and here's the final grade: As you can see, I've played a lot with the colours, and especially the green/magenta balance that is the key to a great sunset. Hopefully that helps to explain what "the footage just does what you tell it" actually means in practice.

I agree. I feel there's a whole other world out there that's invisible due to NDAs and "professional appearances". During my time owning the XC10 this was really made clear to me: on the one hand the entire keyboard-warrior camera forum world was saying how the XC10 was a disaster and that no-one would ever use it, and simultaneously I was subscribed to a YT channel where the guy was posting BTS stills from high-end productions where an XC10 was visible on set in the background. Of course, if you google "XC10 <movie name>" then there are no hits, so it seems like it doesn't exist, but the pictures are unmistakable. So a camera can have huge press because it's the YT camera reviewer flavour of the month, and yet a consumer camera can be in use daily on high-budget sets across the world and you'd never hear a peep about it. The amateurs make all the noise and the pros essentially move silently. I've heard people say that this is false and that there's heaps of info about what happens on-set available online, but when you see the XC10 in the BTS pics of half-a-dozen feature films and not a single google hit across any of them, you know that there's an entire world of professional use that's simply invisible.

I suspect the BMMCC is firmly in this camp. After all, name another camera that was released in 2012 and is still available as a current product from a major manufacturer...
  8. I couldn't find any direct comparisons between the BMPCC and Alexa either. However, my rationale for suggesting they are similar is that I have spent many hours looking at the response of both cameras against properly shot colour charts, and examining the various diversions each has from a 'correct' rec709 response, of which there are a great many. Essentially I picked apart the response of the Alexa in order to try and reverse-engineer some of the colour translations that make its colour science so delightful. I found that they share many similarities. For example, their response to pure red, and the range of colours between pure red and orange, is very similar, pushing reds towards pink. The way they handle various hues of green is similar too. Which hues they desaturate are similar. The irony is that they're both emulating film, so essentially have a broadly common goal.

A third point of triangulation is the GH5 with the Emotive Colour LUT applied, comparing how it pushes colour around the colour checker hues. I found this similar to the Alexa response (of course!) and also similar to the BMPCC / BMMCC. I've directly compared GH5 + Emotive Colour vs BMMCC footage before and it matched very well.

So while I haven't intercut footage from the same shoot on both cameras, I'm familiar with the main 'flavours' of their colour science and am pretty confident that they're pushing in the same directions. It doesn't mean they'll instantly match in an edit together, but if you're at the level where you're shooting with an Alexa then you're at the level where every shot in an edit requires individual adjustment in post anyway, and I'd be surprised if the level of adjustment required to match a BMPCC shot to an adjacent Alexa shot was much larger than matching two adjacent Alexa shots taken from two different angles with different lighting setups.
  9. LOL, every time I comment about camera sizes there's always someone who replies to say that, in comparison to the size of the universe, the thing we're talking about must be tiny and therefore any size discussion must be meaningless. But that "everything is relative" approach doesn't really work when you apply it to the real world, where things are judged according to size, in both good and bad ways. It's about context, and this is a thread about the BMMCC, which is literally a cinema camera you can mount on a helmet:

The P4K is huge for a consumer camera, just like the GH5, A7S3, and Canon 5D are as well. Try putting a P4K on your helmet, or try putting a bunch of them into a vehicle and see how far you get. BTW, the BMMCC gets radically more battery life than a P4K as well.

Colour science is definitely a thing, and in case you don't know, the Alexa revolutionised the look of digital by making it look like film, which was a huge driving force in the widespread adoption of digital in Hollywood and other high-end cinema markets. At the time this was the absolute bleeding edge of film emulation, but we've gotten a lot better at that look since, and it has become the dominant look of all medium-to-high budget TV and movies. Yes, I have seen that look from other cameras, but never from any camera apart from the OG BMPCC or BMMCC unless there was a professional colourist involved. If you're happy with a large camera that shoots Sony-looking images then that's fine, and is a matter of taste, but this thread is about tiny cinema cameras that have an image that is remarkably like an Alexa, which the P4K is most definitely not.
  10. I'd suggest that you wouldn't need a LUT. I've graded a small amount of Alexa footage and my impression was that the OG BMPCC / BMMCC was just as easy to grade and get good results from as Alexa footage. It's really hard to describe how effortlessly the footage responds to grading; it makes GH5 footage feel like there's something wrong with it in comparison. It's also hard for people to imagine that Alexa images aren't instantly magical unless you've worked with them before, but poorly shot Alexa footage looks basically like poorly shot footage from almost any other camera.

If you want a match that would survive a forensic investigation then there are some LUTs from Juan Melara that translate the P4K or P6K to Alexa colours, which look like they do a great job, and Juan is obviously a very talented colourist so I don't doubt that he would have done a meticulous job. But to re-iterate the benefit of the OG BMPCC / BMMCC: you can get an Alexa-like image just by converting the colour space to 709, whereas it takes a serious colourist to get an Alexa-like image from the P4K/P6K.

I'd be tempted by the P4K / P6K because those LUTs are available from Juan, but the form factor is just absolutely ridiculous, the RAW is crippled, and they're quite expensive too. I'd shoot 1080p Prores HQ because it is downsampled from the whole sensor and would avoid the aliasing issues of the OG cameras. You really need a 2.5K sensor to get true 2K, so that's potentially the only weakness of these cameras. But if size matters at all in your work, then there's no comparison:
  11. 300EU is a steal. Had you done the background research, you wouldn't have waited one minute, let alone one day. It may seem like a lot for an older camera, but you're buying the image. Used Alexas are still $10K because you're buying the image, so you're spending 3% of that on a camera that can be perfectly intercut with an Alexa.

The P4K is a great camera, but it's not the same as the OG BMPCC / BMMCC. When you grade the OG BMPCC / BMMCC for two minutes you get something that looks like film; when you grade the P4K for two minutes you get something that looks like the Sony A7S3. It is possible to grade the P4K to match the OG BMPCC / BMMCC, but if you have that level of skill then you may as well buy almost any camera and grade it to look like whatever you want, because both tasks are about the same level of difficulty.
  12. I have it, and for my shooting in uncontrolled situations with varying WB I couldn't get good results out of the LUT at all, unfortunately. Maybe in controlled shooting it would be different, but it was too fragile for the situations I work in. The BMMCC, however, gets spectacular colour right out of the gate. It's literally giving me better colour after a 2-minute effort than I could get from my GH5 with hours of effort, regardless of the colour profile and use of LUTs, CSTs, or manual grading.

I believe people think about cameras in the wrong way. The conversation is about "how good is this camera" or "how good is the image from this camera", and the answer to that is almost always "as good as the operator". But some cameras are really easy to get great results from, and others are almost impossible to work with, and only the best operators in the world can get great results from them. The question should be "how good is the image from this camera when used by an above-average user under normal circumstances and with a moderate amount of effort in post?". By that measure, the OG BMPCC / BMMCC have to be some of the best cameras ever made.
  13. I guess the point I'm trying to make, and the one that people just never seem to understand, is that if the AF doesn't know what to focus on then it's useless. People forget that focusing is more than just aligning the plane of focus with an object in frame. Sure, face-detect is pretty good these days, and if you're doing an interview shoot with a wider-aperture lens then having it track the person as they lean forward sometimes is great. But it's the equivalent of having a focus-puller who only knows how to either: focus on the nearest face, or focus on the nearest object. That would get you fired immediately in that job, and yet, when a camera comes along with that limited skillset, it's red carpet and balloons time because it's problem solved!
  14. Neither can most "journalists" who write articles for news sites.... if you're expecting higher professional standards from a guy talking about cameras on YT than we get from people writing about foreign policy, well, I'd suggest that's... unrealistic.
  15. I was more referring to when you want to change subjects during a shot. Face-detect PDAF is pretty good, but a lot of what people shoot isn't covered by it. Take a common example from my shooting, one of my kids playing with a claw machine during a family trip. The sequence of the shot was:
  • (focus on kid / mid following-shot) kid walks up to claw machine, puts token in
  • (focus on kid / push-in shot by walking forwards) kid starts operating machine and hits the go button
  • (focus on claw / mid-shot) claw goes across and down, picks up toy
  • (focus on kid / reaction shot) kid looks surprised then goes "yeah"
  • (focus on claw / mid-shot) claw drops toy into slot
  • (focus on kid / mid-shot) kid picks up toy from prize slot
  • (focus on kid / pull-out to kid and family members) kid shows toy to family members
  • (focus on family members / pull-out further to group shot) family members react to kid and cute toy
  • (focus on kid's sister / close shot) sister cuddles very soft and fluffy toy

I did that shot in one take; there was no warning, as it started with me following a kid in an arcade armed with a token in their hand, with shallow DoF to isolate the background and random strangers (plus the bokeh looks great). These shots aren't every shot, but they're common, and I could easily lose count of them on a typical trip, with many per day. AF isn't even in the same universe as being able to do this, and this is essentially the whole job of what I do.

I agree. One of the challenges is that we, as consumers, are exposed to very innovative tech from all sorts of places, while camera companies are essentially companies that made clockwork, and then electrified clockwork, products based around disposable chemistry until only decades ago. The cultural shift for them to start thinking like Apple is radical. Think about how long it will take for their stills and video departments to get into alignment, when Apple never had separate departments.

They will continue to release the least innovation and fewest features that the market will bear until they go bankrupt. That's the consequence of capitalism and the human condition.
  16. Interesting to hear you're getting good matching with the S1. I'd be curious to see any images you're willing/able to post.
  17. Great to hear! If there's any before/after you're willing to share, I'd be curious to see it. I don't recall ever seeing other people doing this stuff, just me playing around with things.

There's likely to be a reasonably good average grade that would work across all shots, which you can use while editing and then fine-tune once you're almost at picture lock. That avoids working on shots that don't end up in the final edit, but also means you're not editing with the unprocessed footage, which can be distracting.

If that's a true stream of the data on the tape then that should be as good as it gets. The next steps would be how to de-interlace and process it further. I seem to recall several ways to de-interlace, each with their pros and cons. One was to have each frame made up of alternating lines from the current and previous frame, and the other was to just duplicate the lines from the current frame. The former had more apparent resolution but a horizontal-blinds type of effect on movement, and the latter had less resolution and was prone to flickering, especially on hard edges that ran horizontally. Choosing the overall approach might be subject-dependent, and maybe even shot-dependent? I'm curious to hear how you go with this as well.
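The two de-interlacing approaches recalled above can be sketched with tiny toy frames. This is only an illustration of the line arithmetic (frames as lists of rows, no filtering), not a recommendation of any particular capture tool:

```python
# Toy de-interlacing sketches. A "frame" here is just a list of rows.

def weave(prev_frame, cur_frame):
    """Alternate lines from the current and previous frames: more
    apparent resolution, but a 'horizontal blinds' combing effect
    wherever there is movement between the two."""
    return [cur_frame[y] if y % 2 == 0 else prev_frame[y]
            for y in range(len(cur_frame))]

def line_double(cur_frame):
    """Keep one field and duplicate its lines: no combing, but half the
    vertical resolution and flicker on hard horizontal edges."""
    return [cur_frame[y - y % 2] for y in range(len(cur_frame))]
```

A quick way to see the trade-off: on a static scene `weave` reproduces the full frame, while on motion its odd lines lag one frame behind, which is exactly the blinds artefact described above.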
  18. I never thought about using the joystick. I guess you navigate it around and then click it when you want the camera to re-focus? Once I learned to manually focus and realised I liked the look even more, I got manual lenses and haven't bothered with AF, until recently. Ironically, since I got the 12-35/2.8 to have an OIS lens with the OG BMPCC, I've started testing the AF-S to get an initial focus before I hit record. It's funny because having a wide f2.8 lens is the hardest to manually focus using the tiny screen. Slower lenses are easier to focus because it's not so critical, and faster lenses are easier to focus because it's clearer what's in focus and what isn't because the OOF areas are a lot more OOF. Thanks! It's tough to make a compact setup that isn't a head-turner in public if you have to have a monitor, but putting it behind the camera on a flash bracket seems to work well. I think I still prefer the OG BMPCC + 12-35mm f2.8 + Black Promist filter combo. Still sorting out audio for that setup though, the preamps are... not great.
  19. True! I thought the Micro matched pretty well to the Alexa? Unless you're talking about 4K+ Alexas and shooting a hyper-modern look with the latest high-end cine lenses?

This is a slight aside, but I've been seeing the promo videos for their new Signature Zoom lenses and dear god are they going for a sharp look in those videos! I understand why, of course; everyone who can afford ARRI knows you can soften lenses up in post, but I was thinking how I wasn't really a fan of that particular treatment!

If it was Christmas and my birthday and I won lotto too, I'd wish for a BM Pocket Cinema Camera in the P2K form factor with an updated 2.8K sensor, the same colour science, and global shutter mode rather than rolling shutter. Still, the P2K and M2K (Micro) are pretty amazing for what they cost and how big they are.
  20. How do you hold the camera when you're using touch-AF? You could hold the camera in your fingers and use your thumb on the touchscreen to focus, but that's not a very ergonomic approach. Typically the manual focus approach is like the below, only with the bottom hand slid further forwards and the other hand holding the other side of the camera. The below is a slightly unusual rig, as normally I'd use my left hand to hold the weight and focus and my right on the camera grip.
  21. That doesn't sound right. I'd maybe reach out to some experts, like on the LiftGammaGain forums perhaps (colourists used to do a lot of DI stuff before everything went digital), and confirm. I hear those guys talking about the older digital formats and there were so many gotchas in there that it's almost amazing that anyone got a good result!

Yeah, because it's a personal project you can really take your time and let your subconscious process the footage in the background as you do other things. I'm always amazed at how a good editor can put together little moments that aren't related, composing a thread that wasn't literally there but somehow captures the feeling more than a 'correct' version would. Like when Herzog said “Facts do not constitute the truth. There is a deeper stratum.” In a way, you're lucky that it's miniDV, because you're less likely to get swayed by things that look good but aren't meaningful. I'm sure that if I had a 1DX2 and fast primes I'd be tempted to include too much 120p B-roll in my edits like McKinnon does, so you've side-stepped those challenges!

Perhaps calling it degradation was misrepresenting it. Think of it like this: reality doesn't have compression artefacts, it's not 'sharp', it's not grainy, and smooth surfaces are not featureless, but the miniDV footage has introduced these things due to its limitations. Your job is to make the footage look the most like reality was, or the most like how it felt. You've definitely got license to take some creative liberties to make it more like poetry and less like prose! If you don't blur the footage at all, you get all the nasties and all the content in the footage. If you blur it a lot then you get no nasties and very little of the content, but there will be a little sweet spot where each effect hides much more of the nasties than of the content, and that's what you're aiming for.
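Finding that sweet spot is essentially tuning one opacity parameter per effect. A minimal sketch, with invented pixel values (the real judging is done by eye on a monitor, not in code):

```python
def blend(original, processed, opacity):
    """Mix an effect back over the original at a given opacity (0..1).
    opacity=0 keeps all the nasties and all the content; opacity=1 is
    the full effect; the sweet spot is somewhere in between."""
    return [(1.0 - opacity) * o + opacity * p
            for o, p in zip(original, processed)]

frame = [0.2, 0.5, 0.9]      # invented pixel values
blurred = [0.4, 0.5, 0.7]    # the same pixels after a blur effect
softened = blend(frame, blurred, 0.3)  # 30% of the effect
```

Most NLEs expose this directly as a node or layer opacity control, so the exercise in practice is sweeping that one slider per effect while watching the image.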
I'd suggest setting up a bunch of effects and going through them one by one at a sensible viewing distance, fine-tuning their parameters and opacity by eye, trying to make the footage look the most like a window to being there, or like a Hollywood film of being there (or whichever aesthetic you prefer!). By adjusting those parameters while looking at the monitor rather than the control panel, you may find that effects that aren't worthy get set to the sweet spot of zero opacity. Make sure you're always toggling each effect on/off to confirm you're actually improving the image, as it's easy to get lost and get used to something; but when watching things we can also easily get used to quite strong looks, so don't shy away from making it stylised. I'd make a few passes through each effect to optimise each one in combination with the others.

A cool point of reference is to look from the screen to the other objects in the room, effectively using reality as a reference. Comparing to reality will quickly sort out how sharp/unsharp things should be, etc.

Save that as a preset, reset them all, and do the same exercise a few days later. Do it a few times. Then you can compare them, see which you liked, and maybe blend them together. You're trying to create a look that looks 'right', to the point where the original footage looks awful in comparison. When I've done this (like in the example I posted above) I got to a point where the final grade seemed like footage from a Super-16 film camera and the original footage seemed like someone had applied a bunch of awful distortions for some reason. That's a good place to end up.

It's useful to review some older film captures as a reference too. They were blurrier than you think, but never looked offensive, so it's a good look to go for.

It could be due to the interlacing, but it could also be due to any capture issues you're experiencing. Once you confirm your capture, just see what's least offensive.
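To make the blur/opacity sweet-spot idea concrete, here's a minimal NumPy sketch (helper names are my own invention; in a grading app this is just an effect node plus an opacity slider): a blurred copy is mixed back over the original, so opacity 0 keeps all the nasties and all the detail, opacity 1 hides both, and the sweet spot you tune by eye sits somewhere in between.

```python
import numpy as np

def box_blur(img: np.ndarray, radius: int = 1) -> np.ndarray:
    """Naive box blur: average each pixel over its (2r+1)^2 neighbourhood,
    padding the edges by repeating the border pixels."""
    h, w = img.shape
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros((h, w), dtype=float)
    k = 2 * radius + 1
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def blend(original: np.ndarray, opacity: float, radius: int = 1) -> np.ndarray:
    """Mix the blurred frame back over the original at the given opacity.
    opacity=0 returns the original untouched; opacity=1 is fully blurred."""
    return (1.0 - opacity) * original + opacity * box_blur(original, radius)
```

The same structure applies whatever the effect is (grain, softening, halation): each one is a parallel processed copy with an opacity control, and zero opacity is always an allowed answer.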
  22. Just found this, which is absolutely spectacular. If you think you need to shoot in 4K, think again. Simply wonderful.
  23. So now instead of having a dedicated person to pull focus, or giving the job to the camera operator, you can have the camera automatically focus for you..... only, it just requires a dedicated person to tell the camera where to focus, or the camera operator has to do it. The only problem it solves is if you can't actually pull focus properly, which isn't that difficult a skill to have in most situations. Of course, if you're a run-n-gun operator then it also means you can't put the weight of the camera into your palm, wrapping your hand around the lens (which is the natural place for a manual focus ring) and steadying the camera with the other hand on a hand-grip. Instead you have to carry the full weight of the camera with the hand-grip hand in order to touch the tiny little screen in the right place with an outstretched finger whenever you want to pull focus. Sure, if you're on a tripod then it's relatively easy, but then, so is just turning the focus ring on the lens... It sounds like it solves the easy-to-solve part of focusing: it works well when focusing wouldn't be difficult anyway, but doesn't help much in situations where focusing is actually hard. I think mostly you guys are missing the point with this tech stuff. PDAF is great at focusing perfectly, but can't reliably choose what to focus on. The times when focusing perfectly is difficult for manual focus are when subjects are moving, but that's actually when focusing perfectly matters least, because the subject is normally moving in frame and so there will be motion blur with a 180-degree shutter, and often camera movement as well. TV shows and movies regularly have the focus catch up to the person when they come to a stop after running towards or away from the camera. 
In a way it's actually nice that for fast-paced movement they're in and out of focus during the transition, because typically the composition of such a shot is:

the person is in focus, their character is focused on their world, and something causes their reaction
the person is now reacting and isn't in focus, and their character is in the midst of moving and isn't seeing clearly either
the person comes to a halt and comes back into focus at the same moment the character's perception comes into focus, as they re-evaluate their position after having reacted

That's a very common shot in narrative because it shows action and reaction, so a focus puller not nailing it perfectly actually helps us identify with the character. I think it's the kind of thing that sounds much more useful in the design room, or lab, or in forums on the internet, than it does in real life for lots of people. Sure, it's probably a great feature to have. But saying that a camera needs it as a feature is just buying into the marketing BS, or showing a lack of understanding about shooting, or both. I used to think AF was a must, and I was very vocal online about it too. But I challenged myself to focus manually and now I enjoy it more, get better results, and it gives me far more flexibility in equipment choice and aesthetic as well. And I'm just a guy who makes home videos, so if I can manually focus an f0.95 lens, surrounded by the real world where things happen without warning, then really there's not much excuse! I think half the people online talking about AF as a critical feature must be shooting like this....
  24. Oh yeah, and a nice highlight rolloff can hide clipped highlights, and if the footage is 30p you can slow it down to 24p and get a bit of slow-motion for free, which I also did on the above footage.
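For anyone wondering why conforming 30p to a 24p timeline gives 'free' slow motion: every captured frame is kept, but the frames play back at the slower rate, which stretches time. The arithmetic is trivial (plain Python, just as a sanity check):

```python
# Conforming 30p footage to a 24p timeline keeps every captured frame
# but plays them back at the slower rate, stretching time.
capture_fps = 30
timeline_fps = 24

speed = timeline_fps / capture_fps    # fraction of real-time playback speed
stretch = capture_fps / timeline_fps  # how much longer each clip becomes

print(f"playback speed: {speed:.0%}")                 # 80%
print(f"a 10 s clip plays for {10 * stretch:.1f} s")  # 12.5 s
```

Because no frames are interpolated or dropped, the result has none of the artefacts of speed-ramping in post; the trade-off is you lose sync sound.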