Everything posted by kye
-
You forgot about people dropping them, fungus, rust, and botched attempts to fix them. In terms of getting on the bandwagon earlier, I bet these manual lenses were at rock bottom prices when AF reached significant market penetration, so that was probably the time to buy. I bet if you went to the right places you could have bought boxes of them, maybe by weight.
-
We have, we did, and.... *sigh* Let me ask you this. If Yedlin has made such basic failures, and you claim to be knowledgeable enough to easily see through them when others do not, why don't you go ahead and do a test that meets the criteria you say he hasn't met? I will then proceed to persistently claim you haven't met your own criteria, criticise every line you have written in isolation, and generally take the perspective that if the test does not directly apply to every single camera ever made, every screen and every eyeball in existence, then it can't have any value whatsoever. I think it will be fun - I've seen it done recently with such gusto....
-
Clicking is a common symptom of a dead drive. I remember back in the day I ordered the latest and greatest HDD through my local IT store - the largest and fastest HDD they'd ever seen, a special order way larger and hugely more expensive than the ones they stocked. They were quite excited to know what kind of person would order such a thing and what I wanted it for, and it was quite a buzz when I collected it. When I brought it back the next day, clicking and refusing to format, there was mild shock in the store, but they confirmed that it was indeed dead and ordered a replacement. When I brought the replacement back the day after I got it, clicking like the previous one had, the tone had changed completely and I think they suspected I might be the cause. I was also trying to ignore that thought myself. Luckily the third one was all good. Needless to say, when I plugged it in and it worked I was quite relieved!!
-
Have you used lenses with OIS? IBIS often performs similarly, so perhaps that's a useful reference point. You can always shoot with a short shutter speed and stabilise in post, and algorithms for adding motion-blur in post are getting better too, so that's an option under very simple conditions (they're not great yet). If your shots are out of focus there is nothing you can do in post that will fix that. I'm not really familiar with what cameras are around for those prices, but maybe someone else will give some suggestions. At this point all cameras can make a great image under the right circumstances and working within their limitations, so it's more about what you shoot and how you shoot than the camera. Everyone looks at a camera and sees different things because we all shoot differently.
-
It probably won't. It might slow down when phones get better, but even then people won't want to sell for much less than they paid. Housing bubbles burst because people buy on finance and are forced to sell when they can't make repayments - that doesn't happen with vintage lenses. Think about it this way: digital cameras aren't getting any more analog looking, and both cameras and lenses are getting more and more perfect, so people will increasingly look to the past for 'character'. India, China, and parts of Africa are growing their middle classes by millions of people a year, and they will get better access to online shopping and eBay. And lastly, lenses don't last forever - they wear out or get broken. So.. increasing demand + decreasing numbers available + no additional supply = prices go up
-
Very nice. I find that the footage looks 'right' with little to no work, and it really responds well to grading if you have the time/inclination to try. It's also very forgiving. Sure, lots of your shots are overexposed from a technical point of view, but they don't look overexposed, or at least, don't look overexposed in a bad way. Take a Canon camera and do that and watch the image fall apart into digital clipping hell almost immediately.... The texture of the image might be the nicest aspect of it. It's really a pocketable Super 16mm film camera with virtually unlimited film refills and no development costs or delays. The Laowa 7.5mm MFT lens isn't that much larger than the 14mm Panny lens, and is 22mm FF equivalent, so is nice and wide. It's a nice lens too. I'd suggest staying away from the SLR Magic 8mm F4 - it's too slow for lots of things and the ergonomics are pretty rubbish.
-
Judging what something is worth to someone in dollars is very difficult considering we don't know how much money that is to you and what else you might spend it on instead. I'm sure lots of people will take the tiny scraps of info about what you do and how you do it and guess wildly what would be suitable for you, but if you were to give more info then the guessing wouldn't be quite so wild. What do you currently really like about the XT3? What are your current limitations from the XT3? What future goals do you have for your film-making? etc..
-
.....and is FF and 8K. What price would we pay for such a thing? Easy - history teaches us this lesson time after time... 100Mbps 8-bit codec with 11 stops of DR.
-
Makes sense. Let's hope they decide to focus on image instead of specs...
-
I think @scotchtape nailed it - it's about AF.
Step 1: work out if you want AF or not, then buy the Canon if you do and the P6KPro if you don't
Step 2: set up some test scenes with a colour checker (or some brightly coloured objects), shoot them with both cameras, and then work up a preset colour grade that will match the colours in post (there's a rough sketch of this after the steps)
Step 3: go shoot stuff
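If you want to take some of the guesswork out of Step 2, here's a minimal sketch of one way to do it offline with Python: sample the average colour of each chart patch from a still frame of each camera, then solve for a 3x3 matrix that maps one camera's colours onto the other's. The filenames and patch coordinates are made up for illustration, and a proper colour-managed match inside Resolve would obviously be more involved - treat this as a sketch of the idea, not a recipe.

```python
# Minimal colour-matching sketch: least-squares 3x3 matrix between two cameras.
# Assumes you've exported a matched frame of the chart from each camera and
# noted the pixel coordinates of each patch centre (all hypothetical here).
import numpy as np
import imageio.v3 as iio

def patch_means(image, patches, size=20):
    """Average RGB of a small window around each patch centre (x, y)."""
    return np.array([
        image[y - size:y + size, x - size:x + size].reshape(-1, 3).mean(axis=0)
        for x, y in patches
    ])

canon = iio.imread("canon_chart.png").astype(np.float64) / 255.0
p6k   = iio.imread("p6k_chart.png").astype(np.float64) / 255.0

# At least three patches are needed; more patches give a better fit.
patch_centres = [(120, 80), (220, 80), (320, 80)]

src = patch_means(p6k, patch_centres)    # camera to be corrected
dst = patch_means(canon, patch_centres)  # camera to match

# Solve src @ M ~= dst in the least-squares sense.
M, *_ = np.linalg.lstsq(src, dst, rcond=None)

matched = np.clip(p6k.reshape(-1, 3) @ M, 0, 1).reshape(p6k.shape)
iio.imwrite("p6k_matched.png", (matched * 255).astype(np.uint8))
```

Once you have a matrix you're happy with, you can recreate the equivalent correction as a node in your grade and save it as the preset.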
-
I got that from watching - a normal public street and a secret garden right next to each other is a cool way to start the teaser, and it pulls you in.
-
The 14mm really surprised me from that shoot actually. The edit you guys saw was only a subset of the larger video I shot, which included lots of shots of my wife who was there with me (she's internet shy), and the lens looks great for people. Specifically, at the ~40mm equivalent focal length the rendering of detail at a normal subject distance with the lens wide open is quite flattering, it's borderline as long a focal length as you'd want to use hand-held in a small rig without OIS, and the DoF was shallow enough (or even more so) to create some subject separation. My follow-up to that video was to do more tests with the 14-42mm f3.5-5.6 kit lens to see what the DoF and subject separation are like on that slightly slower lens, and I also bought a 12-35/2.8 to try too. I think that using these lenses (any of the ones I mentioned above) wide open is probably a good balance: a softer rendering to avoid moire and not look too digital, DoF shallow enough to separate your subject, but not so shallow that you can't manually focus relatively easily, especially on a smaller screen. I can imagine that I might end up with these setups:
P2K with 7.5/2 and 14/2.5 as a truly "pocketable" setup
M2K with 7.5/2 and 12-35/2.8 as a larger rig for slow-motion
P2K / M2K with my 17.5/0.95 as a low-light solution
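For anyone curious about the arithmetic behind the equivalent focal lengths and DoF comments above, here's a rough sketch. I'm assuming the ~2.88x crop of the original Pocket/Micro's Super16-ish sensor (which is where "~40mm equivalent" for the 14mm comes from) and a circle-of-confusion value of 0.010mm, both of which are approximations - treat the numbers as ballpark only.

```python
# Ballpark FF-equivalent focal length and depth of field for a few MFT lenses
# on the original Pocket/Micro (~2.88x crop). CoC of 0.010mm is an assumption.
def depth_of_field(focal_mm, f_number, subject_m, coc_mm=0.010):
    """Approximate near/far limits of acceptable focus, in metres."""
    f = focal_mm
    s = subject_m * 1000.0  # work in millimetres
    hyperfocal = f * f / (f_number * coc_mm) + f
    near = s * (hyperfocal - f) / (hyperfocal + s - 2 * f)
    far = s * (hyperfocal - f) / (hyperfocal - s) if s < hyperfocal else float("inf")
    return near / 1000.0, far / 1000.0

crop = 2.88  # approximate Super16-ish crop of the P2K / M2K
for focal, stop in [(14, 2.5), (17.5, 0.95), (12, 2.8)]:
    near, far = depth_of_field(focal, stop, subject_m=2.0)
    print(f"{focal}mm f/{stop} (~{focal * crop:.0f}mm FF equiv): "
          f"in focus from {near:.2f}m to {far:.2f}m at a 2m subject distance")
```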
-
As @fuzzynormal suggested, proxies are the best approach. In terms of habits dying hard, which I understand, it sounds like you're at one of the rare points where you're going to change your habits anyway, so I'd suggest considering what will be the best investment in the long term. Sure, proxies are a different way of working, but:
If you render to a Digital Intermediate then you're encoding an extra step unnecessarily and taking a hit to image quality - I'd even question whether shooting 6K with an extra conversion lowers the quality to 4K levels.... a test I'd suggest you try before committing to a DI workflow
When you archive your project you will either have to delete the DI (meaning you can't load your project and hit play), store both (more than doubling the storage requirements, because h265 is much more efficient than Prores for the same quality), or delete the originals, which both increases the size of the media you store and leaves you stuck forever with a degradation in quality
When you work with proxies you can just delete them at the end (if you want to) and the project will still be intact, or if you don't delete them they won't take up much space as they'll be much smaller
You can edit on virtually any computer you like, as you can make your proxies whatever resolution you want (there's a rough proxy-generation sketch at the end of this post)
Yes, there are some operations best done on the full-resolution files (like stabilisation and adjusting image texture such as NR and halation / grain effects etc) but these typically don't have to be done in real-time and can be done at the end of a project during the mastering phase
The only obstacles I've seen for these are 1) if you are colour grading live with a client in the room, or 2) if your computer is so under-powered that things like the new AI functions just won't work regardless of how long you leave them to process, but these may not apply to your situation. The only thing worse than having to spend $4K on a new camera is having to spend an additional $4K on a new machine that can handle the extra K's in the image.... oh, and just wait for h.266 - it's coming, and it will make editing h265 seem like editing, well, h.264 in comparison!
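And the proxy-generation sketch I mentioned above: Resolve and most NLEs can generate proxy media internally (which is usually the easier route), but if it helps to see how simple the concept is, here's a minimal Python wrapper around ffmpeg. The folder names are hypothetical and ffmpeg needs to be on your PATH; adjust the resolution and codec to taste.

```python
# Minimal proxy generation sketch: low-res ProRes Proxy copies of every source
# clip, kept in a parallel folder so the originals are never touched.
import pathlib
import subprocess

SOURCE = pathlib.Path("footage/originals")
PROXIES = pathlib.Path("footage/proxies")
PROXIES.mkdir(parents=True, exist_ok=True)

for clip in sorted(SOURCE.glob("*.mov")):
    proxy = PROXIES / f"{clip.stem}_proxy.mov"
    if proxy.exists():
        continue  # already generated
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-vf", "scale=-2:720",                   # 720p is plenty for editing
        "-c:v", "prores_ks", "-profile:v", "0",  # ProRes Proxy profile
        "-c:a", "pcm_s16le",                     # simple uncompressed audio
        str(proxy),
    ], check=True)
```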
-
My question was deliberately ridiculous. I asked it that way because your answer is obvious - you want a camera more than you want the crypto you have that totals that value. This is the basis for any exchange economy, as division-of-labour and specialisation is the foundation of civilisation. The fact you own crypto and know it's worth money sort of implies that you're aware you can sell some for fiat currency and turn that into a camera via a normal store, but the way you worded your question indicates you don't want to go via a fiat currency, and also that you kind of wanted to hide that aspect of it, or at least not lead with it. This has certain implications, most significantly that your question is actually "how can I buy a camera with crypto while bypassing the whole fiat currency system", which begs the question of why you are so interested in bypassing it. I own some crypto myself, not a lot of it, but enough to be familiar with the waters, so to speak, and it's a rather strange and involved question, thus my deliberately ridiculous answer. To actually go back to your question, I heard PayPal was implementing a feature that allowed you to pay a vendor in fiat currency but pay PayPal in crypto, so if that's been rolled out already then you can use that mechanism to buy almost anything from almost anyone.
-
Why trade an appreciating asset for a depreciating one?
-
They can always implement a haptic feedback engine that emulates it, assuming there is enough demand. Not sure if you've used a MacBook Pro touchpad or an iPhone home button, but they're pressure sensitive and don't click, yet the illusion that they do click is (at least to me) completely flawless. I didn't realise they weren't physical buttons until I tried them with the devices powered off, and I genuinely got a small shock because I thought they were physically clicking. Even now, if I happen to press one when the device is off, I'm still surprised.

...who are fuelled by the continuous appetite for more resolution. That's the thing about capitalism - it's a feedback loop from the consumer to the manufacturers, and then back again through marketing. If we all just stopped voting for huge-resolution cameras with our wallets, they'd eventually stop selling them. Do most people do that? No.
-
Music Video with Bmpcc4K in CNDG and Bmmcc with old 16mm Russian Lenses
kye replied to alvaromedina's topic in Cameras
Great stuff! You are taking on a mammoth challenge by trying to match the Fairchild sensor from the Micro with the Sony sensor of the P4K. I would suggest the differences are in colour, texture, and resolution.

To match the colour, I'd suggest building a test scene with a number of colour swatches and lighting it with a high-CRI light, then shooting the same scene with both cameras. After that you can adjust one camera to match the other. I'd suggest matching the WB first, the levels second, and then paying attention to the hue, saturation, and luminance of the colours third. That should get you into the ballpark.

In terms of resolution, I'd suggest filming a test scene with some high-contrast edges and fine detail. One thing I have used in the past is a few backlit grass seeds, as they tend to have little hairs and other fine textures. Once you have that scene, shoot it with both cameras, match the framing between them, and make sure to stop the lens down (I'd suggest 4 stops from wide open) to get the sharpest image hitting the sensor. Then, in post, match the cameras by either adding sharpening to the Micro so it perceptually matches the P4K as closely as possible, or adding a small-radius blur to the P4K, potentially at less than 100% opacity (experiment with it - there's a small sketch of this at the end of the post).

Lastly, we deal with texture. I'd suggest doing the above matching first and leaving this one until last. Texture is a funny thing, and very subjective, but I'd suggest experimenting with some combination of NR and film grain. Maybe you won't need any NR, but maybe you will.

Obviously matching the Micro to the P4K will be very different to matching the P4K to the Micro, but it's probably worthwhile doing both so you have these things worked out later on when you want to use them on real projects.

It's also worth noting that you won't be able to match these exactly, and that it won't matter. Firstly, you won't be shooting the exact same angle with both cameras (and I'd suggest avoiding that if you can); secondly, on a real project you would probably match the cameras and then put a look over the top, which will partially obscure any differences you didn't manage to match; and thirdly, you don't have to match cameras perfectly, you only have to match them so that the differences don't draw too much attention to themselves and break the viewer out of the experience of your edit. Good luck!!
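Here's the small sketch I mentioned for the resolution-matching step: a small-radius blur on the sharper camera (the P4K here), blended back at less than 100% opacity so you can dial in how far it goes. The OpenCV usage and the filenames are just for illustration - in practice you'd do this with a blur and a mix control in Resolve rather than in Python.

```python
# Soften the sharper camera with a small Gaussian blur blended at partial
# opacity, so the perceived detail comes down towards the softer camera.
import cv2
import numpy as np

p4k = cv2.imread("p4k_chart_frame.png").astype(np.float32)

radius = 1.5    # blur sigma in pixels - keep it small
opacity = 0.6   # 0.0 = untouched, 1.0 = fully blurred

blurred = cv2.GaussianBlur(p4k, ksize=(0, 0), sigmaX=radius)
softened = (1.0 - opacity) * p4k + opacity * blurred

cv2.imwrite("p4k_softened.png", np.clip(softened, 0, 255).astype(np.uint8))
```
-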
I shot Prores, which does a good job of minimising the aliasing, and the 14mm isn't tack sharp wide open either. The image pipeline for that video was: vND -> IR/UV cut -> 14mm at f2.5 -> Micro -> Prores. A number of people online say that there is a special synergy between the P2K / M2K and the 14/2.5 lens, so it could be that. Certainly as a 40mm equivalent prime it's a nice focal length, although it's at the longer end of what you'd want to hand-hold unless you had a bigger rig. I'm also looking at options with OIS like my 14-42/3.5-5.6 kit lens, the 14-42/3.5-5.6 PZ pancake lens, and also perhaps the 12-35/2.8, so will see how I go. It would be great if there was a wider zoom with OIS, but the crop factor is working against us here. I have bought a Black Promist filter since I shot this as well, so I'm keen to try that out on a real location too.
-
This is my concern too. Hopefully I have dissuaded them from your arguments sufficiently. Once again, you're deliberately oversimplifying this in order to try and make my arguments sound silly, because you can't argue against their logic in a calm and rational way.

This is how a bayer camera sensor works: look at the pattern of red photosites the camera captures. It is missing every second row and every second column. In order to work out a red value for every pixel in the output, it must interpolate from the values it did measure - just like upscaling an image (there's a tiny sketch of this at the end of the post).

This is typical of the arguments you are making in this thread. It is technically correct and sounds like you might be raising valid objections. Unfortunately it is just technical nit-picking and shows that you are missing the point, either deliberately or naively. My point has been, ever since I raised it, that camera sensors have significant interpolation. This is a problem for your argument, because your entire argument is that Yedlin's test is invalid because the pixels blended with each other (as you showed in your frame-grabs) and you claimed this was due to interpolation / scaling / some other resolution issue. Your criticism, then, is that a resolution test cannot involve interpolation, and the problem with that is that almost every camera has interpolation built in fundamentally.

I mentioned bayer sensors, and you said the above. I showed above that bayer sensors have fewer red photosites than output pixels, therefore they must interpolate, but what about the Fuji X-T3? The Fuji cameras have an X-Trans sensor, which uses a different mosaic pattern. Notice something about it? Correct - it too doesn't have a red value for every pixel, or a green value for every pixel, or a blue value for every pixel. Guess what that means - interpolation!

"Scanning back" you say. Well, that's a super-broad term, but it's a pretty niche market. I'm not watching that much TV shot with a medium format camera. If you are, well, good for you. And finally, Foveon. Now we get to a camera that doesn't need to interpolate, because it measures all three colours for each pixel.

So I made a criticism about interpolation by mentioning bayer sensors, and you criticised my argument by picking up on the word "debayer" but included the X-Trans sensor in your answer, when the X-Trans sensor has the same interpolation that you are saying can't be used! You are not arguing against my argument, you are just cherry-picking little things to argue against in isolation. A friend PM'd me to say that he thought you were just arguing for its own sake, and I don't know if that's true or not, but you're not making sensible counter-arguments to what I'm actually saying.

So, you criticise Yedlin for his use of interpolation, and yet you previously said that "We can determine the camera resolution merely from output of the ADC or from the camera files." You're just nit-picking on tiny details but your argument contains all manner of contradictions.
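And the tiny sketch I mentioned, to make the photosite-interpolation point concrete: in an RGGB bayer layout only one pixel in four actually measures red, and the red value everywhere else is estimated from its neighbours. This uses a simple neighbour average for clarity - real demosaicing algorithms are much fancier, but they are still interpolation. It's an illustration of the principle, not any particular camera's debayer.

```python
# Illustration only: estimate the missing red values of an RGGB mosaic by
# averaging the measured red neighbours in a 3x3 window around each pixel.
import numpy as np

def interpolate_red(raw):
    """raw: 2D array of sensor values in an RGGB mosaic."""
    h, w = raw.shape
    red = np.zeros((h, w))
    mask = np.zeros((h, w))
    red[0::2, 0::2] = raw[0::2, 0::2]   # the measured red photosites
    mask[0::2, 0::2] = 1.0

    neighbour_sum = np.zeros((h, w))
    neighbour_count = np.zeros((h, w))
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            neighbour_sum += np.roll(np.roll(red, dy, axis=0), dx, axis=1)
            neighbour_count += np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    estimate = neighbour_sum / np.maximum(neighbour_count, 1)
    return np.where(mask == 1, red, estimate)  # keep measured, fill the rest

mosaic = np.random.rand(8, 8)           # stand-in for raw sensor data
full_red = interpolate_red(mosaic)
print("measured red photosites:", (8 * 8) // 4, "of", 8 * 8, "output pixels")
```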
-
I think the answer is simple, and comes in two parts:
1. They aren't film-makers. The professional equivalent of what YouTubers do would be ultra-low-budget reality TV. Hardly the basis for understanding a cinema camera.
2. They don't understand the needs of others, or that others even have different needs. If someone criticises a cinema camera for "weaknesses" that ARRI and RED also share (eg, lack of IBIS) then it shows how much they really don't know.
I recently bought a P2K, but was considering the FP and waited for the FP-L to come out before making my decision. The Sigmas aren't for me for a number of reasons, but I don't think they're bad cameras, just that they're not a good fit for me.
-
Thanks! I keep banging on about how the cameras used on pro sets are invisible because they often don't get talked about, whereas YouTubers talk about their cameras ad nauseam. Well, here's a familiar example: Carpool Karaoke switched from GoPros to the Blackmagic Micro Studio Camera 4K, and piped them into AJA Ki Pro Ultras: http://www.content-technology.com/postproduction/c-mount-industries-rides-on-aja-ki-pro-ultras-for-carpool-karaoke/ There's a reason that the BMMCC is still a current model camera.
-
@tupp Your obsession with pixels not impacting the pixels adjacent to them means that your arguments don't apply in the real world. I don't understand why you keep pursuing this "it's not perfect so it can't be valid" line of logic.

Bayer sensors require debayering, which is a process involving interpolation. I have provided links to articles explaining this but you seem to ignore this inconvenient truth. Even if we ignore the industry trend of capturing images at a different resolution than they are delivered in, it still means that your mythical interpolation-free image pipeline is limited to cameras that capture such a tiny fraction of the images we watch that they may as well not exist.

Your criticisms also don't allow for compression, which is applied to basically every image that is consumed. This is a fundamental issue because compression blurs edges and obscures detail significantly, making many differences that might be visible in the mastering suite invisible in the final delivered stream. Once again, this means your comparison is limited to some utopian fairy-land that doesn't apply here in our dimension.

I don't understand why you persist. Even if you were right about everything else (which you're not), you would only be proving the statement "4K is perceptually different to 2K when you shoot with cameras that no-one shoots with, match resolutions through the whole pipeline, and deliver in a format no-one delivers in". Obviously, such a statement would be pointless.
-
Those cookies look mighty tasty!
-
Yes and no - if the edge is at an angle then you need infinite resolution to avoid having a grey-scale pixel in between the two flat areas of colour. VFX requires softening (blurring) in order to not appear aliased, or must be rendered so that a pixel takes the value of the light within an arc (which might partially hit an object and partially miss it) rather than along a single line (which is either hit or miss because it's infinitely thin).

Tupp disagrees with us on this point, but yes.

I haven't read much about debayering, but it makes sense if the interpolation is a higher-order function than linear from the immediate pixels.

There is a cultural element, but Yedlin's test was strictly about perceptibility, not preference.

When he upscales 2K -> 4K he can't reproduce the high frequency details because they're gone. It's like if I described the beach as being low near the water and higher further from the water: you couldn't take my information and recreate the curve of the beach, let alone the ripples in the sand from the wind or the texture of the footprints in it - all that information is gone and all I have given you is a straight line. In digital systems there's a thing called the Nyquist frequency, which in digital audio terms says that the highest frequency that can be reproduced is half the sampling rate, i.e. the highest frequency is when the data goes "100, 0, 100, 0, 100, 0". In an image the effect is that if I say the 2K pixels are "100, 0, 100" then that translates to a 4K image of "100, ?, 0, ?, 100, ?", so the best we can do is guess what those pixel values were based on the surrounding pixels, but we can't know if one of those edges was sharp or not. The right 4K image might be "100, 50, 0, 0, 100, 100" but how would we know? The information that one of those edges was soft and one was sharp is lost forever.
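To put numbers on that last point, here's a quick sketch using the same values as above: two different plausible "4K" rows (one with a soft edge, one with a sharp edge) collapse to the same "2K" samples, so once you're at 2K no upscaler can tell which one you started with. It just illustrates the information loss, nothing more.

```python
# Two different "4K" rows that give identical "2K" samples when downsampled,
# and an upscale that can only guess at the missing values.
import numpy as np

candidate_a = np.array([100, 50, 0,   0, 100, 100], dtype=float)  # soft edge, then sharp
candidate_b = np.array([100,  0, 0, 100, 100, 100], dtype=float)  # sharp edge, then soft

# "Downsample" both by keeping every second pixel - both give [100, 0, 100].
print(candidate_a[::2], candidate_b[::2])

# Upscaling back can only guess at the missing samples.
positions_4k = np.arange(6)
positions_2k = positions_4k[::2]
guess = np.interp(positions_4k, positions_2k, candidate_a[::2])
print(guess)  # [100. 50. 0. 50. 100. 100.] - matches neither original exactly
```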