Everything posted by kye
-
Seems like a pretty fair overview of the camera, and interesting that it's sized more like the BMCC than the original BMPCC.
-
It's interesting to hear snippets like this, as who knows where it will end up on the spectrum from is-that-all to I-didn't-even-think-that-was-possible-take-my-money. One natural outcome would be essentially turning your smartphone and your monitor/recorder into the same device, which would be sweet - imagine having a monitor that supported third-party apps! Who knows what else it will do beyond that, but this board sure does like hype and not much information!!
-
Everyone knows you need 8K RAW for flower videos.. I mean, they're flowers, right?
-
I can't offer any answers, but in terms of reducing over-acting: I worked on a number of student films, and getting actors from theatre was a common practice. I remember that everyone struggled with over-acting, but I can't remember anyone finding a solution. One thought, though - I do remember that one problem was that these actors weren't aware of how much they were over-acting, and didn't understand when you tried to explain that you've got a tight shot and their face is filling a third of the frame. Perhaps it might work to have some practice sessions where you record a few lines, play them back to the actor (on a big TV!), talk about it, and repeat that process a number of times? That has a good chance of re-calibrating their sense of how what they're doing will appear in the final product. They will likely have been trained in theatre by this same process, with "more, more, more" as the guidance offered - that's what you have to counteract. Or it may not work - just an idea. Good luck!
-
Need advice for future proofing my pc for later upgrades for editing
kye replied to Color Philospher's topic in Cameras
Is it still a thing to try to buy your memory capacity in fewer, larger sticks rather than more, smaller ones? I haven't had a desktop PC in a long time, but I remember back in the day having to re-buy all my memory because I had 4 sticks of the same size (eg, 4 x 1GB) and no free slots. To upgrade you had to go to bigger sticks (eg, 4 x 2GB), and then you've got all this old RAM you paid for and can't use..
-
@OliKMIA You raise excellent points, however I still believe that "black box" testing as I've described above would be useful. The same kind of testing would apply, but you'd have to re-test after firmware updates. It doesn't matter what the mechanisms are within the gimbal - it can be reduced to a "black box" and tested by providing a known input vibration and measuring the output vibration (which would ideally be zero above some cut-off frequency).

In analog audio circuits there are two main parts - the signal path and the power supply. The job of the signal path is to create an output signal as close as possible to the input signal but amplified (in voltage and/or current). The job of the power supply is to take the awful noisy mess that AC power from the power company normally is and turn it into a DC source with zero AC on it, both at idle and under heavy amplifier loads. There are dozens, if not hundreds, of signal-path designs with varying architectures (global feedback / local feedback / zero feedback / Class-A / Class-AB / Class-D / MOSFETs / JFETs / pentodes / triodes / etc) and as many power supply designs (linear / regulated / passive filtering / active filtering / valve / solid-state / etc), but all of them can still be tested by looking at what they output under a given typical load. These don't even require the same test signal to be applied for calibrated testing setups to create measurements that can be compared to each other.

Everything I said above about audio applies to the analog components of video processing and broadcast as well, just at a higher bandwidth and with the video embedded on a carrier wave instead of running 'raw' through the circuitry - the principles remain.
If an analog video signal path had a high-frequency rolloff, or the power supply was noisy or didn't have a low output impedance, the result would be visible degradation of the picture - something a test pattern would ruthlessly reveal, which is why such patterns were designed and used.
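In the spirit of that black-box analogy, here is a minimal sketch of testing a device purely from its input and output. The "amplifier" below is a made-up stand-in (a 10x gain stage that hard-clips at ±1.0); the measurement code never looks inside it, which is the whole point:

```python
import math

def black_box_amp(x):
    """Stand-in for the device under test: a 10x amplifier that clips at +/-1.0.
    (Made-up model - the test below treats it purely as a black box.)"""
    return max(-1.0, min(1.0, 10.0 * x))

def measured_gain(amp, level, freq=1000.0, rate=48000, cycles=10):
    """Drive the box with a sine at the given peak level and report the
    output/input RMS ratio - the black-box gain at that level."""
    n = int(rate * cycles / freq)  # whole number of cycles -> exact RMS
    xs = [level * math.sin(2 * math.pi * freq * i / rate) for i in range(n)]
    ys = [amp(x) for x in xs]
    rms = lambda s: math.sqrt(sum(v * v for v in s) / len(s))
    return rms(ys) / rms(xs)

# Gain holds at 10x for small signals, then drops once the box clips:
for level in (0.01, 0.05, 0.2):
    print(f"input peak {level:0.2f} -> gain {measured_gain(black_box_amp, level):.2f}x")
```

The same input-versus-output approach carries over to a gimbal: replace the sine voltage with a known handle vibration, and the RMS ratio with residual motion in the footage.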
-
@tellure I agree that manufacturers probably have equipment that does this - it's a pity they can't (or won't) share their results with us! Of course, having all the setups calibrated is another whole thing; I'm just talking about a single test regime applied to all the gimbals.

@capitanazo & @elgabogomez I agree that gimbals are more than just how well they stabilise - a lot matters in terms of ergonomics, features, how well the app works, etc. This is just one aspect, but a pretty important one!

@elgabogomez I'd imagine you'd need test setups for different weights, for example: smartphone, large smartphone, 500g, 1kg, 2kg, 3kg, 5kg, 8kg, etc. You'd need a balanced version of each setup and an un-balanced one to test how well they do without a perfectly balanced load. You might also find that a gimbal performs worse with a load a lot lighter than its maximum, so you might want to test near its maximum load and at its minimum load too.

I think there's a business opportunity here for a site that really reviews gimbals, instead of the kind-of reviews being done now - perhaps the DxOMark of gimbals? I don't want to be that person, I just want that person to tell me the answers so I can buy the right device when I'm in the market for one! This thread is kind of an open letter to that person - please go ahead!!
-
Are you saying that because it can't be perfect forever that it shouldn't be done at all? I guess we should stop testing cameras because no-one can test Mojo (TM) yet!
-
Not at all. You simply need an arm that you can mount the handle on which can output repeatable vibrations; then mount a camera (with a few different weight setups), record what comes out, and analyse the footage for how much motion came through. In a way it would be a device like an orbital sander, but where you can control the direction and amount of vibration.

Think about music: it is infinitely complex, but that doesn't mean we don't have measurements for frequency response, distortion, etc. Light is hugely complex, with infinite shades and colours, but we can measure devices in terms of DR, colour gamuts, etc.

The testing method would be straightforward - set up and balance the gimbal, put the device on the arm, hit record on the camera, and start the arm. The arm does several 'passes' where the vibration gets larger and larger, then you download the footage and analyse it for motion. You'd see that gimbal A eliminated all motion up until 7s in, but gimbal B made it to 11s in, or that gimbal C let through higher-frequency vibrations, etc. It's not simple, but it's not impossible.

Edit: in order to test different camera setups, you might have a few weights, and for each weight test a camera that's well balanced and then one that isn't (eg, front-heavy to simulate a long lens). If gimbal A stabilised the off-balance setup better than gimbal B, you could assume this difference would translate to all off-balance setups, as it typically comes down to the strength of the motors. You could also test battery life under identical loads.
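The ramping-passes idea can be sketched in a few lines. The "gimbal" here is a toy model (motors cancel everything until they saturate); a real rig would measure the residual motion from the footage instead, but the find-the-failure-point logic is the same:

```python
def residual_motion(drive_amplitude, motor_capacity):
    """Toy model of the device under test: the gimbal cancels vibration
    completely until its motors saturate, then the excess leaks through.
    (A real rig would measure this from the recorded footage.)"""
    return max(0.0, drive_amplitude - motor_capacity)

def failure_time(motor_capacity, ramp_rate=0.5, threshold=0.05, dt=0.1, t_max=30.0):
    """Ramp the arm's vibration amplitude over time and report when the
    residual motion first exceeds the visibility threshold."""
    t = 0.0
    while t < t_max:
        amplitude = ramp_rate * t
        if residual_motion(amplitude, motor_capacity) > threshold:
            return t
        t += dt
    return None  # never failed within the test window

# Gimbal A's motors saturate earlier than gimbal B's (capacities are made up):
print("gimbal A fails at", failure_time(motor_capacity=3.5), "s")
print("gimbal B fails at", failure_time(motor_capacity=5.5), "s")
```

With made-up capacities like these you get exactly the kind of result described above: gimbal A holds until around the 7-second pass, gimbal B until around 11 seconds.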
-
We have standards for tonnes of things, why not gimbals? Specifically, for how well they stabilise? As far as I can tell, a gimbal is a physical device that receives vibrations from the handle and, through the three motors, forms a low-pass filter such that only large, slow motions make it through to the camera. This should be easily testable via a test rig of some kind. I would expect a graph showing dB of attenuation across a range of frequencies for each of the three axes of motion.

That way we'd be able to say things like: "gimbal X has better attenuation than gimbal Y up to vibrations of strength Z, but above that X runs out of steam and Y is better, therefore for fine work X > Y but for difficult environments Y > X", or "gimbal A has much better attenuation of higher frequencies than B or C or D, therefore if you plan on mounting it to a vehicle (which has the vibration frequency distribution shown in the graph below) you're better off with A".

Instead, what we get is "I'm going to watch youtube videos where people compare two different gimbals by running with each in turn, thereby seeing how well each performs IN STABILISING A COMPLETELY DIFFERENT SET OF VIBRATIONS". Hardly the best way to compare devices costing hundreds or thousands of dollars.
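To make the attenuation-vs-frequency idea concrete, here's a minimal sketch. The first-order low-pass filter stands in for a real gimbal on a test rig (all its parameters are made up); in practice you'd run one sweep like this per axis and plot the results:

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a signal."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def simulated_gimbal(samples, dt, cutoff_hz):
    """Stand-in for the device under test: a first-order low-pass filter.
    A real test would drive the handle and record the camera-side motion."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    alpha = dt / (rc + dt)
    out, y = [], 0.0
    for s in samples:
        y += alpha * (s - y)  # exponential smoothing = simple low-pass
        out.append(y)
    return out

def attenuation_db(freq_hz, cutoff_hz=2.0, duration=5.0, rate=1000):
    """Drive the black box with a sine at freq_hz; report output/input in dB."""
    dt = 1.0 / rate
    t = [i * dt for i in range(int(duration * rate))]
    x = [math.sin(2 * math.pi * freq_hz * ti) for ti in t]
    y = simulated_gimbal(x, dt, cutoff_hz)
    settled = len(t) // 2  # skip the start-up transient
    return 20 * math.log10(rms(y[settled:]) / rms(x[settled:]))

# Slow pans pass through untouched; hand-shake frequencies are attenuated:
for f in (0.5, 2, 10, 50):
    print(f"{f:5.1f} Hz: {attenuation_db(f):6.1f} dB")
```

The printout is exactly the data behind the graph described above: near 0 dB for slow, deliberate motion and increasingly negative dB as the vibration frequency rises.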
-
My experience has been discovering that I also can't grade, but I discovered that Resolve has built-in transforms for C-Log (and many others) and they look lovely to my tastes. I'd bet that the Pocket 4K will integrate beautifully with Resolve, and that simply using their recommended settings will produce nice images you can then adjust to taste if you want something specific. If you're not a Resolve user then I'm not sure how easy it will be to get good results..?
-
This is a topic I'll be revisiting when the BM Pocket 4K comes out, as if I buy the Pocket 4K I'll probably also be looking for a gimbal for it, to make it my single-camera setup. What kind of interest do we anticipate in the Ronin from DoPs / professional steadicam operators? I would imagine that Zhiyun wouldn't have a great name in the industry, but my (outsider's) impression is that the Ronin is a different story?
-
In my head I think of the Pocket 4K as a kind of "official ML". What I mean by this is that it will provide results in the same league as ML (eg, a 4K version of the 3.5K RAW on the 5D3) but will be fully supported with documentation etc, will be fully functional (eg, full realtime monitoring while recording), and will run the hardware well within its spec (unlike ML, which drains every last drop out of the hardware by pushing it to its limits, or past them in terms of overheating etc). I don't think it's better than ML, just different, because it's for a different purpose. I have full respect for the wizards behind ML.
-
My favourite travel vlogger / film-maker just posted this gear video that might be useful for some people. Every time I go somewhere and shoot, I reflect on it afterwards and try to adapt my approach and equipment, and I've found that over time my setup looks more and more like his. I wish I had been a bit more of a fanboi in this regard, because I've bought a lot of gear that didn't end up working for me and eventually replaced it with things similar or identical to this setup, and it's worked really well. I can second the Gorillapod, Rode VMP+, use of USB charging for as much stuff as possible, and use of clear bags for cables and whatnot.

In watching the above he also mentions the 16-35 can turn into a 56mm with the A7SII crop mode switched on, which I'd forgotten. He's spoken in other videos about his lenses (he used to be a wedding film-maker, so there are lots of gear videos on his other channel, Wedding Film School) and IIRC he would shoot weddings with only the 16-35 and a 50mm prime, because those lenses combined with the 1.6x crop mode gave him enough focal length options. He also shoots in 4K and delivers in 1080 so he can punch in digitally as well.

It's interesting that he mentioned wanting to try his wireless lav mic for travel, which I think @IronFilm suggested at some point. It makes sense if you do lots of talking-to-camera stuff like Kraig does.
-
Ok, figured it out. Here's how to key out the edges and apply any Resolve adjustments to just the non-edges. This allows NR without blurring edges (or any other adjustments you want to make).

1. In the Colour panel, create two nodes.
2. Add the EdgeDetect OFX plugin to one of them to make it the "key" node (Node 1).
3. Connect the video signal like this: [node-graph screenshot from the original post]
4. Make the adjustments you want in Node 2 (eg, chroma NR).
5. Adjust the settings in Node 1 to refine the mask that gets applied in Node 2 (I recommend turning Brightness up to maximum to get a strong mask).

The above setup excludes edges from being processed in the node, but it seems you can also exclude other areas by adjusting the masks in Node 2, via qualifiers or power windows. You could, for instance, use the above to de-noise non-edge shadow areas by having the Qualifier in Node 2 exclude the brighter areas of the frame. Enjoy! I wish I had worked this out before, so I could do heavy NR without heavily blurring the video!!
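The same edge-mask-gating-the-NR idea, outside Resolve, as a minimal pure-Python sketch: a gradient edge detector plays the role of the EdgeDetect key node, and a 3x3 box blur stands in for the NR (real NR is far more sophisticated; this just shows the masking logic):

```python
def edge_mask(img, threshold=0.3):
    """1.0 where the local gradient is strong (an edge), else 0.0 -
    the software equivalent of the EdgeDetect key node."""
    h, w = len(img), len(img[0])
    mask = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                mask[y][x] = 1.0
    return mask

def blur_except_edges(img, mask):
    """3x3 box blur (the NR stand-in), applied only where mask == 0."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if mask[y][x] == 0.0:
                out[y][x] = sum(img[y + dy][x + dx]
                                for dy in (-1, 0, 1)
                                for dx in (-1, 0, 1)) / 9.0
    return out

# demo: a flat frame with one noisy pixel next to a hard vertical edge
frame = [[1.0 if x >= 4 else 0.0 for x in range(8)] for y in range(6)]
frame[2][1] = 0.1                       # the "noise"
clean = blur_except_edges(frame, edge_mask(frame))
# the noise pixel is smoothed away while the step edge stays crisp:
print(f"noise pixel: {frame[2][1]:.3f} -> {clean[2][1]:.3f}")
print(f"edge pixel : {frame[2][4]:.3f} -> {clean[2][4]:.3f}")
```

The design mirrors the node graph: one function produces the key, the other applies the adjustment only where the key is empty.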
-
Hang on a minute - I never agreed to that!! Show me evidence in writing!! **jumps up and down demanding evidence in true EOSHD style**
-
I'd be interested to hear your impressions of the native noise that comes out of the camera vs the additional noise settings you've developed. To my eyes it looked nice (once you remove the chroma noise, of course). Considering your style, it might even be something you could use for creative effect: use full manual settings at the ISO that gives you the right noise level, and then control exposure with a variable ND.

@mercer I'm probably stating the obvious, but Resolve seems to have pretty good NR, and if you put it in a separate node and use masks you can get it pretty specific. On the video I shot on the GoPro in the club (with tonnes of ISO noise) I tried to run a NR node with a mask excluding the edges (as NR is basically blur), but I couldn't find a way to generate an edge mask. The Sharpen Edges OFX plugin detects them, but the mask channel output doesn't seem to work for some reason. I haven't looked into this fully, but it would be a great way to do heavier NR without blurring the main edges of objects.

Have you compared it to the "Force sizing to highest quality" option in the render settings? (It's under Video -> Advanced Settings in the Resolve Render tab.) I typically just turn this on when doing a final render, but I've never tested whether it makes any difference. I'm reminded of early Photoshop days when rescaling images gave you options for which algorithm to use (nearest pixel, linear, bicubic, etc). My memory is that there were lots of different ways to scale images and they made a real difference in both quality and CPU time required.
-
Your logic makes sense to me. If it's not the LS300 mk2 then perhaps another camera, but definitely something related.
-
What's that saying? Never let the facts get in the way of a good story!!
-
Thanks! The ISO noise in the RAW files is a COMPLETELY different beast from the ISO noise at the same ISOs in Canon's stock video mode. This noise is filmic and soft and, perhaps most importantly, has a fine texture. That same noise, once scaled and compressed in the Canon stock mode, is just horrible. In the above video I did do NR and added some noise before export (on the recommendation of @kidzrevil), but even without that treatment the noise isn't a bad thing. One thing you might want to do, though, is remove the chroma noise (colour noise), which doesn't have that nice feel.
-
@IronFilm I have a couple of beliefs (theories, perhaps) that apply here. The first is that everything can be thought of as a signal path - obviously audio, where I think the concept comes from, but everything else as well. In the end, everything matters, because everything has the potential to go horribly wrong. The second is that it's not enough to just not stuff something up at each stage of the signal path: every technical aspect of film-making has associated aesthetic attributes, which may or may not be aligned with what you're trying to do in the film. For example, a horror film shot with high-key lighting and bright colours isn't as effective as one with gloomy lighting and dull, gritty colours. In the end we're trying to manipulate everything (set, costumes, lighting, camera, sound capture, script, acting, colouring, editing, VFX, etc) so that it's all suited to, and adding to, the overall aesthetic of the film.
-
I have a 700D with ML.. it's a decent setup! I believe the limit of the 700D in non-crop mode is 1728 pixels wide because non-crop mode takes every third pixel across the sensor and therefore runs out of pixels - it's not a limitation of the SD card. 1728 RAW upscaled to 1080 isn't bad at all, but you'll need to play with the sharpening to keep a bit of detail. I believe the low quality of the non-ML video from the 700D comes from 1728 being upscaled to 1080 and then heavily compressed, making everything blurry. With ML RAW in non-crop mode you'll still face the same challenge, as you'll likely be compressing the final output for delivery.

My card can't do 1728 continuously, so I used 1600 for this video: This was also a low-light test (ISOs up to 3200, and 6400 for some shots) so not perfect conditions. With 1728 you'll be able to do better in terms of sharpness and detail. I'm looking forward to the SD card hack being stabilised and re-released so I can get higher resolutions!
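For reference, the 1728 figure falls straight out of the sensor geometry, assuming the 700D's commonly cited 5184 x 3456 effective photosite count (18 MP):

```python
SENSOR_WIDTH = 5184   # 700D effective horizontal photosites (18 MP sensor)
SENSOR_HEIGHT = 3456  # effective vertical photosites

# Non-crop ML RAW reads every third photosite across the sensor,
# so the widest possible recorded frame is:
max_width = SENSOR_WIDTH // 3    # 1728
max_height = SENSOR_HEIGHT // 3  # 1152

print(f"max non-crop frame: {max_width} x {max_height}")
print(f"upscale factor to 1920-wide delivery: {1920 / max_width:.2f}x")
```

So the upscale to a 1920-wide timeline is only about 1.11x, which is why the result holds up well with a touch of sharpening.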
-
Just In Time For Christmas!!!! This space is really hotting up.. A7III, Fuji XH1, BMPCC4K, GH5, etc - so many great options in the compact 4K space! And more to come, I'm sure, with the A7SIII probably only a release cycle away.
-
What mode/codec do you anticipate would be used on higher-end productions with this camera? I'm curious because when I look at those bit-rates and combine them with the size of typical media, everything above 1080 seems to need a lot of recording capacity. For example, a 64GB card recording RAW 4:1 at 24fps only gives about 14 minutes per card, or 18 minutes in UHD ProRes 422. These seem like quite low card capacities to me, but maybe you're rotating cards pretty quickly? When I'm out shooting I only take a couple of cards for a day, and I have pretty large shooting ratios, but I'm shooting at much lower bitrates (~30MB/s).
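For reference, the card-life arithmetic behind those numbers. The MB/s figures here are assumptions back-calculated to match the post's "about 14 minutes" and "18 minutes", not official specs:

```python
def minutes_per_card(card_gb, bitrate_mb_s):
    """Recording time on a card at a sustained data rate.
    (Real cards format to a little less than their labelled size.)"""
    return card_gb * 1024 / bitrate_mb_s / 60

# Assumed sustained data rates in MB/s, chosen to match the post's figures:
RATES = [
    ("RAW 4:1 @ 24fps", 77),   # ~14 min on 64 GB
    ("UHD ProRes 422",  60),   # ~18 min on 64 GB
    ("~30 MB/s codec",  30),   # the poster's own bitrate
]

for name, rate in RATES:
    print(f"{name:16s}: {minutes_per_card(64, rate):5.1f} min on a 64 GB card")
```

The same function makes it easy to see why the ~30 MB/s shooter gets a comfortable 36+ minutes per card while the RAW shooter has to rotate cards every quarter hour.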
-
My understanding is that it's somewhere between "in a while" and "never". I certainly wouldn't be making any plans on it!