Everything posted by kye
-
The secret is in the title of the video.... the three words you should be paying attention to are "$80", "projector", and "trick"
-
YouTube has a comments section?
-
And here's the poor person's version: there's a heap you can do even with a very low budget.
-
None of those things are unique to the P4K - any camera body would require them under similar circumstances. In terms of how small a percentage of a rig the camera takes up, leave the P4K alone and go look at the smartphone rigs they use to shoot the "Shot on smartphone" ads! This is a very nice lens: well made, sharp wide open, and wide as all hell even on my GH5. In terms of lenses for the BMMCC, I'm thinking the above, a 12.5mm c-mount I have, and the Panny 14mm f2.5 are all great choices. I don't really see the 2.88x crop factor as being that much of a limitation. The battery life, need for an external screen, terrible buttons and ergonomics are a different story though! Of course, then there's the image, which might make someone forget about all the rest of those things....
-
Justin Odisho - https://www.youtube.com/user/Justthisgood His channel is a bit of a mixture of stuff, but if you go back through his back catalogue you'll see videos where he re-creates various effects from music videos using PP / AE. I think he might have named those ones "music video breakdown" or something, so searching might be useful.
-
The camera hasn't left the box yet and remains un-modded, unfortunately, but the project is still on my list. The idea is definitely a good one. Do you know anything about how they're constructed? When I was researching the action camera mod I discovered that most of them are based on a CCTV lens standard called M12, which is a screw-type mount, and there are cheap lenses all over ebay for it. If you can get the cameras apart then you can just break the glue and unscrew them, replace them with other lenses, and find infinity focus before locking them in place. Amusingly, the lenses are often sold with a spec of how many MP of resolution they support, and 2MP is the common one - for that cinematic vintage 1080 feel. I'm less optimistic that smartphone lenses will be as accessible a standard.
-
Great stuff! This style of music isn't as high up my preference list as it used to be, but the video struck a good balance of heaviness, humour, and satire, with a slight overtone of mayhem in the faster bit at the end. Overall, I'd say that with the limitations you had it was a pretty good result. In terms of missed opportunities (and armchair quarterbacking), I think slightly more mayhem might have suited, especially in the heavier parts - maybe some shakier camera movement and strobe-style effects might have been all you needed, both of which can be done in post if you know how to push your NLE a bit harder than most do. Of course, that stuff takes time. I used to watch a YT channel from a guy who edits music videos and was always breaking down music video effects and re-creating them, and the take-away I got from him was that music videos are kind of the experimental bleeding-edge of mainstream film-making. You can do almost whatever you want and it's not too extreme for an average person to watch, whereas if you made a TV show or movie at even 25% of that intensity people would say it was "weird" and not be into it. I look forward to the next one!!
-
Great stuff! Anything that looks that close from a side-by-side image comparison should be indistinguishable in any other situation..
-
Or a lot cheaper! hahaha.. Then again, think about what anamorphics used to cost. It certainly does look well suited to the GH5 though, that's true
-
Apple started their computational photography strategy with the iPhone 5 (IIRC) by keeping the same resolution sensor but upping the image processor with each model, being smarter with the data available rather than just scaling up the raw amount of data. This includes things like taking multiple exposures when you hit the shutter, then processing and combining them to improve colours, lower noise, eliminate blinking, etc, before saving the resulting image. This is a change of thinking rather than a chip selection decision, and camera manufacturers are showing very little capacity to think differently. To illustrate, here's a comparison of the philosophies from the early film days and now:
Early film days:
- shoot what you want to see in the final output (one frame captured = one frame in output)
- apply image adjustments
- edit
- deliver
- strategy for gaining better output image quality is to improve the quality of the original image capture (DR, resolution, noise)
Current "towards 8K" strategies:
- shoot what you want to see in the final output (one frame captured = one frame in output)
- apply image adjustments
- edit
- deliver
- strategy for gaining better output image quality is to improve the quality of the original image capture (DR, resolution, noise)
Yes, the lists are identical - there are basically no innovations the major players have implemented; every one falls under the "acquire a better image" strategy. I wouldn't hold your breath for getting more horsepower than is needed to do the absolute minimum with the data coming off the sensor.
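The multi-exposure idea is easy to sketch. Here's a minimal, hypothetical numpy illustration (simulated data, not Apple's actual pipeline) of why averaging several short exposures improves noise in a way that one-frame-per-output capture can't:

```python
import numpy as np

def stack_exposures(frames):
    """Average several exposures of the same scene.

    Sensor noise is roughly uncorrelated between frames, so averaging
    N frames cuts its standard deviation by about sqrt(N) while the
    underlying signal is preserved.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate 8 noisy captures of one scene (values are arbitrary).
rng = np.random.default_rng(0)
scene = rng.uniform(0, 255, size=(4, 4))
frames = [scene + rng.normal(0, 20, size=scene.shape) for _ in range(8)]

merged = stack_exposures(frames)
single_err = np.abs(frames[0] - scene).mean()   # error of one capture
merged_err = np.abs(merged - scene).mean()      # error after stacking
```

Real pipelines also align frames and weight them (for HDR, blink removal, etc), but the averaging step is the core of the "many captures per output frame" philosophy.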
-
That Vazen looks really cool. If you're into anamorphic then it might be a case of "the only lens you'd ever need", having the width of a FF-equivalent 31mm spherical lens, which sits squarely in amongst the classic 24mm, 28mm, and 35mm focal lengths. Many classic films were shot on a single prime, and that one isn't too big or heavy (720g), unlike the 40mm (1.8kg).
-
No experience, but the guys over at reduser seem to like them.... https://www.reduser.net/forum/showthread.php?159096-Kinoptik-anyone and this was apparently shot on Kinoptik and Alexa Mini:
-
If only the film-making ability of the average customer would catch up to the Super-16 cameras from the 70s!
-
+1 for proxies, which cut like butter on almost any hardware. Yeah, Resolve really could do with a look at its proxy setup. I have to use a manual online/offline workflow that is cumbersome and complicated, because the integrated proxy engine doesn't include sound in the proxy files - so if you have many tracks, or the source footage sits on a slower HDD, playback sometimes stutters as it waits to load the sound from all the clips. Also, and this is what annoys me, if you disconnect from the source files you get no audio at all. I used to hear that people had issues with their proxies getting messed up, with no way to fix them apart from deleting and re-rendering them, but I don't hear that much any more.
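For anyone hitting the same limitation, one workaround is to generate the proxies outside Resolve so the audio is baked into the files. A rough Python sketch that builds an ffmpeg command for this (the filenames and output folder are made up for illustration):

```python
from pathlib import Path

def proxy_command(src, proxy_dir="proxies"):
    """Build an ffmpeg command that makes a half-size ProRes Proxy
    copy of `src` with the original audio included."""
    src = Path(src)
    out = Path(proxy_dir) / (src.stem + ".mov")
    return [
        "ffmpeg", "-i", str(src),
        "-vf", "scale=iw/2:ih/2",                # half resolution
        "-c:v", "prores_ks", "-profile:v", "0",  # ProRes Proxy profile
        "-c:a", "pcm_s16le",                     # keep audio as PCM
        str(out),
    ]

cmd = proxy_command("A001_clip.mov")
```

You'd run the returned list with subprocess.run(cmd); relinking between proxy and source files is still manual, which is exactly the cumbersome part of the online/offline workflow.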
-
Wait until my colour checker arrives and I start attempting to turn the GH5 into a BMMCC..... 😈😈😈
-
Starting when you should be stopping. And stopping when you should be starting.
kye replied to hyalinejim's topic in Cameras
I've done this many times as well, with the GH5 and other cameras. I now try to look and confirm things are recording, but even then I'm still not 100% sure. My dad used to do electronics and telecommunications R&D and later taught classes where they designed and built custom hardware / software products. He told me that when you hit a button, the circuit sensing that click is capable of operating at thousands or millions of times per second, and it's not uncommon for one press to trigger start and then stop hundreds or thousands of times - even more if the button isn't absolutely pristine inside. The solution for circuit designers is to add a delay so that once the button is triggered there's a lockout period, often 0.5s or so, before it can trigger again. I had that problem with my GoPro Hero 3 in its waterproof case - I'd hit record and it would record for 2s and then stop, because it sensed the press twice but took a moment to act on the stop signal, so it looked like it had started recording successfully. I missed some great footage that way. For that reason all my GoPro footage begins with me scowling into the camera for a few seconds, watching the screen to see if it stopped recording after a couple of seconds. On the GH5 the OSD and overlays disappear when you're recording, so that can be a good thing to look for.
-
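The lockout-delay fix described in the previous post is simple to sketch in software terms (this is an illustrative toy, not any camera's actual firmware):

```python
class DebouncedButton:
    """Ignore repeat triggers inside a lockout window: once a press
    registers, further edges are swallowed for `delay` seconds."""

    def __init__(self, delay=0.5):
        self.delay = delay
        self.last_accepted = None
        self.recording = False

    def press(self, t):
        """Handle a press at time `t` (seconds). Returns True if the
        press was accepted and toggled the recording state."""
        if self.last_accepted is not None and t - self.last_accepted < self.delay:
            return False          # bouncing contact: ignore it
        self.last_accepted = t
        self.recording = not self.recording
        return True

btn = DebouncedButton(delay=0.5)
events = [0.000, 0.003, 0.010, 0.800]   # one noisy press, then a real stop
results = [btn.press(t) for t in events]
```

Without the lockout, the bounces at 3ms and 10ms would toggle recording straight back off - which is exactly the record-for-2s-then-stop behaviour described above.
-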
Some time ago I looked at the skin tones from a range of demo videos from the camera manufacturers, including ARRI LF, Canon, etc, and I found that the skin tones in those videos were almost always contained between the skin tone indicator line and the line of pure red. Canon was the most red; ARRI had a few tones slightly to the left of the indicator line, but not that far, and their tones were very yellow. The profile above is very interesting to me because I can see that:
- skin tones are pretty good
- it's compressed in the G/M axis, both through saturation and by rotating colours towards those hues, which is common amongst film emulations that try to put more things on the orange/teal dimension
- it's shifted warm, which you can see by comparing how saturated the red and yellow points are compared to the cyan and blue points
Fun stuff, and a great result. I wonder if the client was happy with the results - I'd imagine so.
-
I guess we could put it another way by saying that film-making takes so long to learn, and films take so long to create, that lots of life stuff happens during the process lol. I didn't use to be that interested in lenses, focusing more on post, as post is infinitely adjustable under controlled and non-time-critical circumstances; the thing that got me onto lenses was the realisation that film is 2D, and all (or most) of the cool stuff we like is in service of trying to put more of that third dimension back into it. We blur backgrounds, we try to have the subject lighter or darker than the background, we like sliders and camera movement as they add depth through parallax - even the old orange/teal helps there be colour contrast between the talent and non-talent parts of the image. And in this task, the lens is the adapter / converter between the 2D world that begins at the image sensor and the 3D world of everything that happens before the light goes through the lens. So if you want more 3D pop, the 3D-to-2D converter (the lens) is an absolutely critical component - and more than that, once the 3D information is lost, it's stupidly difficult to put back in post. I completely agree with you about the imperfections of a lens having a humanising effect over a very digital (and depending on the camera, sometimes very brittle and thin) image and processing pipeline. I guess this is where we start to look at the various distortions, their relative aesthetic qualities, and whether we like them or not. For example, people tend to like a slightly softer, lower-contrast image for skin tones, a softening to compensate for digititis, some like a lower resolution acting as an OLPF, field curvature and pincushioning - but then there are things like CA, which I dislike intensely but others prefer, or edge softening, which is nice for some compositions but not others, etc.
-
I disagree. When was the last time you spoke to someone wanting to get their first "real camera"? This keeps changing over time, but my experience out here in the suburbs is that people want their first real camera when they have their first kid. Having kids is the most significant life event for most people, and increasingly we want to share nice photos with distant friends and family. Now, we all know that baby pics are easy - babies can sleep through a tornado once they're out (or the phrase "sleep like a baby" wouldn't mean anything..) so you can slowly take photos of a motionless, well-lit infant. Unless you're arty, you take them looking down, so no messy backgrounds to contend with either. And then they learn to walk, and then run. Now you're trying to take photos of a fast-moving target, often in interior lighting, and often with the messiness of a kid-trashed house in the background. The most common reason people have asked me about buying a camera is that the $100 P&S (or now their phone) can't keep up: "the photos are blurry, or out of focus, and the background is all messy". I've had parents ask me about cameras that can take "a lot of photos at once" so they can choose the best one. In reality, my first DSLR = my first high-end sports camera. The Canon entry-level camera will only meet this requirement when it exceeds the specs of the 1DX mk3. ProRes in more cameras is in licensing territory, isn't it? In which case, as consumers, we typically lose. Beta vs VHS, BluRay vs HD-DVD, SACD vs DVD-Audio......
-
Great write-up! I've heard bits of it before, but not the total picture. I've had a journey through film-making as well, signposted by various equipment, tests, trials, and realisations. There's a huge debate over whether equipment matters or not, and I think one element that doesn't get enough attention in these discussions is inspiration. If a piece of equipment is inspiring to use, then that can and does have a real impact on the creative process. Even if you could get an identical image out of two bits of equipment, the one you like using will be the one that makes you pick it up and go shoot, and when you're shooting you'll be in a better headspace as you look at the images, use the equipment, and enjoy the experience - and this will ripple through your directing, cinematography, and all the other creative aspects. Unfortunately sometimes the equipment that inspires us is expensive, or cumbersome, but so be it. We should all be so lucky to find a lens we love, and then have time to go shoot with it. ..and speaking of flawed but lovely vintage lenses, the Cosmicar is sitting and waiting patiently for some filter adapters - I haven't forgotten!
-
I think the video mentioned Jaws. It does make sense though: at sunrise and sunset the sun is warm and the shadows are cool, so we've probably learned that at those times of day dark things are bluer, and logically (but incorrectly) we assume that night would be very blue. B&W is great. For one thing, you don't lose resolution in de-bayering, which matters if you're shooting 1080 or lower. It's also great for bad quality codecs, and for any camera that has chroma noise. I found the luma noise on my 700D with ML RAW was terrible, but when you de-noise only the chroma, the remaining luma noise was very nice. In a sense, I have. I decided early on that I wanted a high quality but neutral capture, and then to process it heavily in post - in a sense I'm implementing a kind of human-based computational photography. As I shoot available-light everything and try to exert as little influence over what I shoot as possible, it's just about capturing it in the most nimble way possible, and spending the limited skill and attention I have on what matters: composition and the artistic elements. I very quickly realised that lenses can't be simulated in post, and also that AF is stupid, which is why I went manual. It's also why I chose Resolve. Resolve was (at v12 when I bought it) a very basic editor, but it had advanced features for processing the image: the colour engine is good, the stabilisation was great, and the slow-motion was world-class. It also cost less than the standard stabilisation and slow-motion plugins at the time. My adventures in Micro-land are primarily to learn how to make the GH5 look as good as possible, which feeds into learning to grade, which feeds into my pursuit of post-processing. The Micro and P2K are a reference image second only to the Alexas (and five-figure cinema cameras) of this world, so this is my main goal. Everyone knows that a well-edited film is a joy to watch.
If I had to choose between a solidly-shot and beautifully edited (including sound design, music, grading, etc) piece and one shot on an Alexa but edited by a first-year film student, I know which I'd choose. Anyone who has shot anything on a GoPro and tried to make it look good in post knows the footage SOOC is garbage unless you're doing something with huge insurance premiums. Anyone who has downloaded the sample clips from RED and tried to grade them knows they don't immediately spring to life in post either. Of course, if you own a Micro and light well, you can set a few settings in the RAW panel and get a lovely image out, which means that with some basic editing, sound, and music you can end up with a very nice final product, which makes it easier. My approach is to trade more convenience when I shoot for less convenience in post, but get broadly the same result. I'm optimistic
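On the chroma-only denoising mentioned above, here's a rough numpy sketch of the idea - a simple box blur stands in for a real denoiser, and the colour-difference maths is a simplification, but it shows how luma grain can be left untouched while colour noise gets smoothed:

```python
import numpy as np

def chroma_denoise(rgb, radius=2):
    """Blur only the chroma of an RGB image (floats in 0..1), leaving
    luma untouched. Uses BT.601 luma weights and colour differences
    as the chroma channels."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb, cr = b - y, r - y                  # simple colour-difference chroma

    def box_blur(ch):
        pad = np.pad(ch, radius, mode="edge")
        out = np.zeros_like(ch)
        k = 2 * radius + 1
        for dy in range(k):
            for dx in range(k):
                out += pad[dy:dy + ch.shape[0], dx:dx + ch.shape[1]]
        return out / (k * k)

    cb, cr = box_blur(cb), box_blur(cr)
    r2, b2 = y + cr, y + cb                # rebuild RGB from luma + chroma
    g2 = (y - 0.299 * r2 - 0.114 * b2) / 0.587
    return np.clip(np.stack([r2, g2, b2], axis=-1), 0.0, 1.0)

img = np.full((8, 8, 3), [0.2, 0.5, 0.7])  # flat colour field for a sanity check
out = chroma_denoise(img)
```

Because only cb/cr are blurred, the luma (and its grain) survives - which is why chroma-denoised ML RAW can still look nicely textured.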
-
Absolutely. The camera market is still moving quickly. We only have to look at:
- RAW availability coming down in price
- resolutions steadily increasing (which has advantages beyond IQ, such as over-capture)
- 360 cameras becoming feasible for over-capture
- etc..
..and all of these are only to create 2D media. 3D formats are gradually getting more standardised and available, and there will come a time when the interface to everyone's smartphone is a wearable augmented reality setup. It's not close, and it won't look like Google Glass (thank goodness!), but it will come, and people will take it up because it will be fundamentally more useful than having your window to the online world limited to the size of a letterbox and stuck to a shiny brick in your pocket. I think such a thing as peak camera does exist, but we're probably a century away from "the 5000MP RAW AI-processed 180-degree wide-band vision capture from every nano-wearable is good enough for anything anyone wants to do - we've hit peak optic".
-
I saw an amusing video about how people always grade moonlight as blue when in reality it isn't - it's just kind of grey - but it's a cinematic trope that we all just kind of understand. I didn't realise there were tropes in colour grading, but it makes sense that symbolism would be everywhere! I do get colour shifts with the GH5. It's pretty good, but I spend a lot of time and effort correcting them, and even then I'm still nowhere near as good as I could be. I've posted a couple of problem images to the LGG forums and those guys post back images with it nailed, and I couldn't replicate them even after an hour or so of directly copying. I think it might be one of the most critical parts of colour grading TBH. I'm sure we've all experienced shots where we just add contrast and saturation and it's glorious, but when the WB is off you can spend hours on a single shot and not even get in the ballpark of the 5s grade of something perfectly shot. The tricky thing about shooting in available light is that there is so much fluorescent / cheap LED and other poor quality lighting that the magenta/green balance is off, which was never a problem back when all we had was incandescent light. You can even get the WB a bit off and it just looks warm or cool, but get the green/magenta balance even a little bit off and things look absolutely awful!
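The green/magenta problem can be illustrated with a toy grey-card correction (the function and numbers here are hypothetical - real WB tools work in camera-native or log spaces, not display RGB):

```python
import numpy as np

def neutralise_grey(rgb, patch):
    """Scale per-channel gains so a sampled grey `patch` becomes neutral.
    The R/B gains are the usual warm/cool balance; the G gain is the
    green/magenta tint axis that cheap LED and fluorescent light throws off.
    `patch` is an array of pixels known to be grey in the scene."""
    mean = patch.reshape(-1, 3).mean(axis=0)
    gains = mean.mean() / mean      # push each channel toward the average
    return np.clip(rgb * gains, 0.0, 1.0)

# A grey card shot under a greenish fluorescent: G reads high.
card = np.full((4, 4, 3), [0.42, 0.55, 0.44])
fixed = neutralise_grey(card, card)
```

A warm/cool slider alone only trades R against B; it's the green gain that actually removes the fluorescent tint, which is why a slightly wrong tint looks so much worse than a slightly wrong colour temperature.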
-
Ah, yes. This makes sense now. Coming from shooting available light this is pretty much an alien concept, but of course. I've heard how cinematographers speak about the various looks from different WB and lighting setups - it's all completely known, and they can nail a look first try without having to test anything, because they just know. I'm yet to test RAW vs the compressed modes, but the fact that big productions often render RAW to ProRes and then use that as the master footage, never going back to the RAW, suggests the quality involved, so it makes sense. Thinking about all the people who shoot h.264 as their master when ProRes is higher quality puts it back in perspective. I know it's good for me, but I'm not sure that manual is better than auto for everything... with my limited cognitive CPU power, the less time I spend doing things the camera can do, the more time I can spend on the things the camera can't do, like composition, camera position and movement, focusing, etc.