Everything posted by KnightsFan
-
I haven't used Adobe since CC 2015, so I have no idea how Resolve's encoder stacks up against theirs. I suspect it's just that their H.264 encoder isn't great, and that they don't really focus on it since it isn't a "pro" codec like ProRes or DNxHR. To be honest, I don't know much about ProRes in general, but my impression is that there are fewer options, whereas H.264 is a massive standard with many parts that may or may not be implemented fully. You'd have to do your own tests, but I doubt Resolve's ProRes encoder is as bad as their H.265 one. Actually, to be fair, their H.265 encoder isn't even their own product: you have to use your GPU's native encoder if you have one, and if you don't, I don't think you can export H.265 at all.
-
No converter ever seems to have the options I'm looking for. I first started using ffmpeg to create proxies. After shooting, I run a little Python script that scans input folders and creates tiny 500 kb/s H.264 proxies with metadata burn-in. I tried other converters, but I had so many issues with not being able to preserve folder structure, not being able to control frame rate, not being able to burn in the metadata I want, etc. I've also had issues with reading metadata--sometimes VLC's media information seems off, but with ffprobe I can get a lot of detail about a media file. I also use ffmpeg now to create H.265 files, since Resolve's encoders are not very good: I get SIGNIFICANTLY fewer artifacts if I export to DNxHR and then use ffmpeg to convert to H.265 than if I export from Resolve as H.265. Recently I did a job that asked for a specific codec for video and audio, a combination which wasn't available to export directly, so I exported video once, then audio, then used ffmpeg to losslessly mux the two streams. Another little project required me to convert video to GIF. It's become a real Swiss Army knife for me. Yes, Resolve has a batch converter: put all your files on a timeline, then go to the Deliver page. You'll want to select "Individual Clips" instead of "Single Clip." Then in the File tab you can choose to use the source filename.
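Here's a rough sketch of what that kind of proxy script looks like--not my exact script. It assumes ffmpeg is on your PATH, and the folder names, bitrate, and burn-in text are just placeholders:

```python
import subprocess
from pathlib import Path

SRC = Path("footage")   # placeholder source folder
DST = Path("proxies")   # mirrored output folder

for clip in SRC.rglob("*.mp4"):
    out = DST / clip.relative_to(SRC)          # preserve the folder structure
    out.parent.mkdir(parents=True, exist_ok=True)
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-c:v", "libx264", "-b:v", "500k",     # tiny 500 kb/s H.264 proxy
        # burn the clip name in as a stand-in for whatever metadata you want
        "-vf", f"scale=-2:540,drawtext=text='{clip.stem}':x=10:y=10:fontcolor=white",
        "-c:a", "aac", "-b:a", "96k",
        str(out),
    ], check=True)
```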
-
So the current situation is that:
1. You have H.264 footage, but it is not linking properly in Premiere.
2. You can convert H.264 to ProRes with Adobe Media Encoder, but it bakes a black bar into the bottom.
3. You can convert H.264 to ProRes with Compressor, but there is a color shift.
4. IF you could convert to a properly scaled ProRes file, you could get it to work properly in Premiere.
Are all of those correct, and am I missing anything major? If I've got that right, one option is to use ffmpeg for conversion. It's my go-to program for any encoding or muxing task, and I've never had issues with it encoding poorly or shifting colors. Is this an option?
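If you want to try it, the conversion itself is a one-liner. A sketch, assuming ffmpeg is installed--prores_ks is ffmpeg's ProRes encoder, and the filenames are obviously placeholders:

```python
import subprocess

subprocess.run([
    "ffmpeg", "-i", "input.mp4",               # your H.264 source
    "-c:v", "prores_ks", "-profile:v", "3",    # profile 3 = ProRes 422 HQ
    "-pix_fmt", "yuv422p10le",                 # 10-bit 4:2:2
    "-c:a", "pcm_s16le",                       # uncompressed PCM audio
    "output.mov",
], check=True)
```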
-
I don't think that's a fair assessment of Blackmagic. ProRes is, for whatever reason, an industry standard. Blackmagic includes it because that's what standard workflows require. If Blackmagic were responsible for making ProRes the standard, then you could say they were just trying to market an inferior product. Moreover, as much as I believe that more efficient encoding is better, it is significantly easier on a processor to edit lightly compressed material. Editing 4K H.265 smoothly requires hardware that many people simply don't have yet, such as the computer lab at a university I recently used. ProRes was simply easier for me to work with. But for the most part, you are right: processors seem to be a limiting factor for cameras at this point. Even "bad" codecs like low bitrate H.264 can look good if rendered with settings that are simply unattainable for real-time encoding in current cameras. It's great to see Sony making bigger and better sensors, but with better processors and encoders, last-generation sensors could have better output.
-
Have you looked at the file in other programs? Can you confirm whether it's exclusively a Premiere problem, or is the problem with the files themselves?
-
Because downsampling decreases noise, thus giving more DR in the shadows (at the expense of resolution, of course).
-
will we ever see the rise of the global shutter cameras?
KnightsFan replied to Dan Wake's topic in Cameras
My point was that eventually DR and sensitivity will be good enough that the convenience of global shutter vs. mechanical will take over. 14 stops vs. 17, or 14 vs. 140--same thing if we don't have screens that can reproduce it anyway. Global shutters are already sought after for industrial uses, which means there is always going to be some innovation even if consumers aren't interested. When GS sensors get good enough to put in consumer devices and take good photos at a low cost, then we'll see them proliferate, and us video people will benefit as well. I don't think the DSLR video market is large enough to push towards global shutters on its own. I think global shutters are more likely to be pushed in the photography world. Just my prediction. -
will we ever see the rise of the global shutter cameras?
KnightsFan replied to Dan Wake's topic in Cameras
If global shutter technology gets reasonably good, I imagine it will find its way into photo cameras in order to do away with mechanical shutters. If 10 years from now we have a choice between 20 stops of DR with 6 ms rolling shutter vs. 14 stops with global shutter, I'm sure a lot of people would choose the latter. -
Actually, you can. For DR, downscaling reduces noise because for each pixel in the downscaled image, you combine four signal values that are (almost always) highly correlated, and four noise values that are not correlated. Thus, your SNR will be higher, and with a lower noise floor you have more dynamic range in the shadows. You can verify this with dynamic range testing software, or with a simple calculation: imagine 16 adjacent pixels that each have an ideal value of 127 out of 255 (e.g. you are filming a grey card). Add a random number between -10 and 10 to each one and calculate the standard deviation. Now combine the pixels into groups of four, each of which has an ideal value of 508 out of 1020, and calculate the standard deviation again. Relative to the new full-scale value, the noise on the downscaled image will be roughly half, provided the random number generator is, in fact, random and evenly distributed. (This works because in the real world, the signal of each pixel is almost always correlated with its neighbors. If you are filming a random noise pattern where adjacent pixels are not correlated, you could expect to see no gain in SNR.) As for color fidelity, a 4:2:0 color sampled image contains the same color information as a 4:4:4 image at 25% of the resolution: each group of 4 pixels, which had one chroma sample in the original image, becomes 1 pixel with 1 chroma sample.
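If you want to see it in numbers, here's a quick simulation of that thought experiment. It assumes uniform, uncorrelated noise on a flat grey signal, which is obviously a simplification of a real sensor:

```python
import random
import statistics

random.seed(0)

SIGNAL = 127        # ideal grey-card value on an 8-bit scale
NOISE = 10          # uniform noise, +/- 10 codes
N = 1_000_000       # pixels to simulate

# original pixels: correlated signal plus uncorrelated noise
pixels = [SIGNAL + random.uniform(-NOISE, NOISE) for _ in range(N)]

# bin groups of four (as in a 2x downscale) by summing them
binned = [sum(pixels[i:i + 4]) for i in range(0, N, 4)]

print(statistics.stdev(pixels) / 255)    # noise as a fraction of full scale
print(statistics.stdev(binned) / 1020)   # roughly half of the above
```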
-
Yeah, I'm sure the 16 bit refers to the linear raw file. Applying any curve, whether a creative color grade in post or a simple gamma on a SOOC jpeg, could see benefits from a higher bit depth ADC. An extra 2 bits at the digital quantization stage wouldn't even translate to larger files on a 10 bit video output, but could improve the dynamic range and such.
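As a rough illustration of why the extra ADC bits matter even with a 10-bit output: a log-style curve stretches the shadows, so the number of distinct shadow levels is limited by the linear ADC, not by the output bit depth. This uses a toy log2 curve, not any camera's actual transfer function:

```python
import math

def shadow_codes(adc_bits, out_bits=10, stops_below_clip=10, curve_range=16):
    """Count the distinct output codes available in the region more than
    `stops_below_clip` stops below clipping, through a toy log2 curve."""
    full_scale = 2 ** adc_bits
    out_max = 2 ** out_bits - 1
    limit = full_scale // 2 ** stops_below_clip   # deepest-shadow ADC values
    codes = set()
    for v in range(1, limit + 1):
        out = max(0.0, 1 + math.log2(v / full_scale) / curve_range)
        codes.add(round(out * out_max))
    return len(codes)

print(shadow_codes(14))   # 16 distinct 10-bit codes in the deep shadows
print(shadow_codes(16))   # 64 -- four times the tonal steps, same output bit depth
```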
-
Z Cam E2 will have ONE HUNDRED AND TWENTY FPS in 4K??
KnightsFan replied to IronFilm's topic in Cameras
It is a Sony sensor. @DBounce that's correct, two models, one for 30p and one for 60p. -
Z Cam E2 will have ONE HUNDRED AND TWENTY FPS in 4K??
KnightsFan replied to IronFilm's topic in Cameras
@sanveer yes, it's been talked about a few times on the Facebook group. It should do 4K60 at 10-bit with a native ISO of 250. -
Z Cam E2 will have ONE HUNDRED AND TWENTY FPS in 4K??
KnightsFan replied to IronFilm's topic in Cameras
I agree. I'm a vocal global shutter supporter, but Z Cam is a small company and might be better off making one really good camera. And as much as I want global shutter, sensor tech isn't there yet without major compromises. @DBounce yeah, it is 1". -
Z Cam E2 will have ONE HUNDRED AND TWENTY FPS in 4K??
KnightsFan replied to IronFilm's topic in Cameras
The GS variant will probably have even fewer takers. It has a 1" sensor, lower frame rates, and less dynamic range. Basically, the RS version will be better in every way except that one feature, so I doubt it will influence sales of the RS version much. -
Panasonic announcing a full frame camera on Sept. 25???
KnightsFan replied to Trek of Joy's topic in Cameras
True. But how long will it be before 8K is commonplace? What about photographers? My point was that M43 cannot match FF in resolution, dynamic range, and noise levels all at the same time, in response to a general prediction about the future of camera tech. If you say "the GH5s is better than any FF currently out there," I think that's a reasonable claim about a specific collection of products. However, saying "the GH5s shows that M43 can match FF" is a prediction I don't agree with. If we were to say that the GH5s sensor is better in low light than the (significantly older) A7S II's--which I disagree with--then an array of 4 GH5s sensors would give you a FF camera with the same per-pixel noise performance but 4x the resolution. Is such a thing reasonable to imagine? Not right now--but five, ten years from now it's hard to believe we won't have sensors like that. Right now, M43 benefits from an explosion in camera technology that naturally comes to smaller, cheaper cameras first. The benefits of 10-bit, higher bitrates, etc. are not inherent to M43 sensors; they just haven't filtered up to FF products yet. Of course, I'm being sort of philosophical about the future of tech. There's a limit to how useful extreme performance is when shooting in the real world. -
@androidlad If half of the "leaks" you've posted are even close to the truth, I'll be absolutely floored. So it's going to be a 60 megapixel camera that shoots a full readout downsampled to 4K 60p, outputting RAW to the AXS-R7 recorder utilizing a quad-base ISO? Oh yeah, and the E to E mount focal reducers. You are saying that Sony--who lost the 10-bit photo/video hybrid race to Panasonic, Nikon, and even Canon, and who currently maxes out at a 100 Mbps codec--is going to release a camera that leapfrogs the market, logic, and physics itself. If any of these are true, I'll be the first to admit you were right. But to say I'm skeptical is an understatement.
-
The only time I really used it was when I had a single outdoor shot between two indoor scenes. The outdoor shot was very busy with high-frequency detail, especially in the grass and leaves. The indoor shots had almost no high-frequency patterns. So I reduced midtone detail on the outdoor shot a little bit, just to soften it and make it stand out less.
-
Nikon Z6 features 4K N-LOG, 10bit HDMI output and 120fps 1080p
KnightsFan replied to Andrew Reid's topic in Cameras
It was mainly a joke, but smaller surface area means less signal (or more noise, depending on how you look at it), and wider lenses mean deeper DOF. Jokes aside, in some scenarios ultra-fast lenses are needed regardless of how sensitive your sensor is. I don't shoot at f/1.4 very often, but it's great to have that option. -
Nikon Z6 features 4K N-LOG, 10bit HDMI output and 120fps 1080p
KnightsFan replied to Andrew Reid's topic in Cameras
People who shoot 4K on an EOS R. -
ProRes data rates are based on the frame rate and resolution. 24 fps 1080p is 117 Mb/s in standard ProRes 422, or about 3x the data rate you have. That means you would need about 11.1 TB to store it. ProRes LT is smaller. If I were you, I would convert one or two files and judge the quality difference yourself, but I imagine it's fairly similar.
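The arithmetic is just bitrate times duration. A back-of-the-envelope version--the 211 hours here is only a placeholder that lands on the 11.1 TB figure above, so swap in your actual runtime:

```python
def storage_tb(bitrate_mbps: float, hours: float) -> float:
    """Storage needed, in terabytes, for a given average bitrate and runtime."""
    bits = bitrate_mbps * 1e6 * hours * 3600
    return bits / 8 / 1e12          # bits -> bytes -> TB

# 117 Mb/s is roughly ProRes 422 at 1080p24
print(f"{storage_tb(117, 211):.1f} TB")   # ~11.1 TB for ~211 hours of footage
```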
-
The mod should work after you take the card out. It's been a long time since I installed the mod, and I forget exactly how I did it. From the readme, it looks like you just put the files on the SD card, put the card in, and follow the on-screen instructions. Afterwards, you should be able to use the mod without the card being in the camera. I never had any need to remove a time limit, but it seems that the patch should remove it. (https://github.com/ottokiksmaler/nx500_nx1_modding/blob/master/Removing_Movie_Recording_Limit_Without_Hack.md)
-
I don't know of any commercial solution, but I 3D printed a cable clamp. I think SmallRig makes some sort of cable clamp that attaches to their cages, and they have a few generic DSLR cages. That may be overkill, though, to buy an entire cage just to hold the HDMI cable. You might also be able to use a Velcro wire tie to secure the HDMI cable someplace on the rig, so it won't come out simply from tugging on the monitor end.
-
Z Cam E2 will have ONE HUNDRED AND TWENTY FPS in 4K??
KnightsFan replied to IronFilm's topic in Cameras
I agree, it's not good that they still have bugs, but it is excellent that they continue to release updates. I see Z Cam as the polar opposite of a company like Canon: if you want maintenance-free reliability and instant integration into existing workflows, look to traditional products like Canon's C line. If you want bleeding-edge, not-tried-and-true technology that you can experiment with, and provide meaningful feedback on to guide the development of the next generation of products, that's where Z Cam fits in. It's similar to the difference between Linux and Apple: it's entirely possible to have a great experience reliably using Linux for real-world work, but you do have to do your own research and accept more responsibility as an end user to debug things on your own. And the truth of the situation is that Z Cam will likely never have the market sway that Canon et al. do, and will never have the resources or vast QA teams to match the big players. But the benefit of the small team is that you can have a more personal experience--I mean, you can't just hop on Facebook and give personal input to the Canon CEO on what features they should include.