Everything posted by KnightsFan
-
I don't watch GoT, so maybe the few images I've seen online are misleading, but it looks unnecessarily dark to me. Compare the screenshot at the top of this page https://www.theverge.com/tldr/2019/4/30/18524679/game-of-thrones-battle-of-winterfell-too-dark-fabian-wagner-response-cinematographer with another nighttime battle: https://images.app.goo.gl/3wP5kc7T9JKKCi7m7 The LotR image is obviously nighttime: it's dark, bluish, and moody, and yet the faces are bright enough to see without any trouble, and there are spots of actual pure white in the reflections. It's the cinematographer's job to give the impression of darkness while keeping the image clear and easy to understand. If it were an end-user calibration problem, everyone would be complaining about every other movie as well. It seems like something was different here.
-
Nikon Z6 features 4K N-LOG, 10bit HDMI output and 120fps 1080p
KnightsFan replied to Andrew Reid's topic in Cameras
I've read that FF is more expensive than APS-C because the same number of defects on a wafer means a lower percentage yield when you cut larger sensors from that wafer. In other words, 5 defects on a wafer that will be cut into 100 sensors could mean 5 defective sensors and 95 good ones: 95% yield in the worst case. If you are only cutting 4 sensors from that wafer, those same 5 defects give you at best a 75% yield. The conclusion is that a large sensor is actually more expensive per sq. mm than a smaller one. I'm not sure what the actual numbers are--maybe it is just a $150 difference as you read.
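If it helps, here's a minimal Monte Carlo sketch of that yield argument (uniform point defects, any hit scraps the die; the die and defect counts are just the numbers from my example):

```python
import random

def simulate_yield(dies_per_wafer, defects, trials=100_000):
    """Scatter point defects uniformly over a wafer cut into equal dies;
    any die hit by at least one defect is scrapped. Returns the average
    fraction of good dies."""
    good = 0
    for _ in range(trials):
        hit = {random.randrange(dies_per_wafer) for _ in range(defects)}
        good += dies_per_wafer - len(hit)
    return good / (trials * dies_per_wafer)

# Same 5 defects per wafer, different die sizes:
print(f"100 dies/wafer: {simulate_yield(100, 5):.1%} yield")  # ~95%
print(f"  4 dies/wafer: {simulate_yield(4, 5):.1%} yield")    # ~24%
```

The expected yield for the big dies is (3/4)^5, about 24%--even worse than the 75% best case--so per square mm the large sensors really do cost more.
-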
Nikon Z6 features 4K N-LOG, 10bit HDMI output and 120fps 1080p
KnightsFan replied to Andrew Reid's topic in Cameras
That makes sense, because Long GOP 100 Mbps should be better quality than All-I 400 Mbps for a static shot. In something like a medium shot of an interview subject, maybe 75% of the frame is completely static, and that part will be encoded once per group of pictures instead of once every frame as with All-I. That leaves significantly more of the bitrate for the 25% of the frame that is moving. There are very few circumstances in which I would choose to record All-I, so I don't see 140 Mbps in the Z6 as a downside. Even 4:2:2 vs. 4:2:0 makes no visual difference, though it's better for chroma keying. The real advantage of the GH5 is 10 bit. 8 bit falls apart almost immediately, particularly when used with log curves. But despite its codec fidelity, GH5 footage is not fun to work with, so I'd take a Z6 myself. I do wish Nikon had included a 10 bit HEVC codec though.
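If anyone wants to A/B this, it's easy enough with ffmpeg. A rough sketch via Python's subprocess--the clip name and the 48-frame GOP length are my assumptions, and SSIM stands in for eyeballing the results:

```python
import subprocess

def encode(src, dst, gop):
    # Same codec and the same average bitrate; only GOP length differs.
    subprocess.run([
        "ffmpeg", "-y", "-i", src,
        "-c:v", "libx264", "-b:v", "100M",
        "-g", str(gop),                # 1 = All-I, 48 = Long GOP
        "-an", dst,
    ], check=True)

encode("interview.mov", "all_i.mp4", gop=1)
encode("interview.mov", "long_gop.mp4", gop=48)

# SSIM of each encode against the source (printed to stderr; higher is better):
for f in ("all_i.mp4", "long_gop.mp4"):
    subprocess.run([
        "ffmpeg", "-i", f, "-i", "interview.mov",
        "-lavfi", "ssim", "-f", "null", "-",
    ], check=True)
```
-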
Nikon Z6 features 4K N-LOG, 10bit HDMI output and 120fps 1080p
KnightsFan replied to Andrew Reid's topic in Cameras
What do you mean by "data"? In terms of digital data collected, the GH5 records more pixels (in open gate mode) and stores them in more bits, so no, the Z6's larger sensor does not record more data. True, more photons hit the Z6 sensor at equivalent aperture, due to its larger size; that's why the low light is significantly better. I agree, the Z6 has better color. Like I said, I'd rather have the Z6 for many reasons, color being one of them. All I am saying is that the GH5 holds up much better to an aggressive grade before showing compression artifacts, not that it looks subjectively "better."
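Rough numbers, for what it's worth. The resolutions and bit depths here are my assumptions (4992x3744 for the GH5's 4:3 open gate mode, 8 bit internal for the Z6), so treat it as back-of-envelope only:

```python
# Uncompressed bits per frame, ignoring chroma subsampling:
gh5_open_gate = 4992 * 3744 * 3 * 10   # assumed 4:3 open gate, 10-bit
z6_uhd        = 3840 * 2160 * 3 * 8    # UHD, 8-bit internal
print(gh5_open_gate / z6_uhd)          # ~2.8x the bits per frame
```
-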
Nikon Z6 features 4K N-LOG, 10bit HDMI output and 120fps 1080p
KnightsFan replied to Andrew Reid's topic in Cameras
The GH5 also totally destroys the A6500. I believe the A6500 is one of the main reasons there is a general negativity towards Sony. It has odd color (to be generous) and the worst possible rolling shutter. I think we all know how big the sensors are. Size does not necessarily mean more detail or a better image; a lot of that is determined by the processing done afterwards. And sensor size has nothing to do with the amount of data unless you keep pixel pitch the same. I think the GH5's 4:3 open gate mode records more pixels/data than the Z6 in 16:9 UHD, and certainly packs it into a beefier codec. From what I have seen, the Z6 doesn't have a particularly good image compared to a GH5 at native ISO. Maybe that's because I've actually worked with GH5 footage on real projects, whereas I've only downloaded test clips of the Z6. But the GH5 holds up better to grading with its high data rates. I haven't tried to work with any externally recorded Z6 footage yet, but as soon as you require an external recorder attached with a feeble HDMI cable, you've really lost my interest. It's a workaround for the fact that many DSLR-style cameras don't come with the video codecs and tools we need. That isn't to say the Z6 is bad. I'd rather have a Z6 than a GH5, but it's not because the video image fidelity is better at native ISO. Low light is much better, and full frame means I'd use vintage lenses at their native FOV. But I really think it's a trade-off with the current batch of smaller-sensor cameras that have more video-oriented features.
-
Thank you. As someone who has to listen to the audio while I edit it, I have a great appreciation for boom ops. I think it's more likely that we use deep learning to clean bad audio than we invent a robot that can hold a physical boom pole properly.
-
https://www.newsshooter.com/2019/04/19/premiere-pro-version-13-1-1/ Hmm, I wonder if the editor updated to 13.1, which caused the crashes to start, and then today we went to 13.1.1, which fixed the crashing but also made our footage incompatible. Anyway, this is why it is great that Blackmagic releases betas. I have seen people deride it as "releasing incomplete software," but having public betas is a great idea as long as people aren't stupid enough to assume a beta is for anything other than testing. And it's infinitely better than having to patch a "non beta" release a week later because it crashes every 2 seconds.
-
We were hours away from finishing an edit in Premiere yesterday on a fairly big project, and randomly Premiere stopped working. Just crashes. Backup projects did the same thing. It was working one minute, and stopped the next. So we updated Premiere, only to find that the new version doesn't load any of our clips at all (simple H.264 files). So tonight I am batch converting all of our footage to a new format so we can finish the edit tomorrow, before Adobe decides to update Premiere again. This is eerily similar to another story we had here. So yeah, I hope they are in damage control, controlling the damage it does to end users. Needless to say, my longstanding suggestion of switching to Resolve has been accepted. I'm excited to use 16 for our next project!
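For anyone curious, the batch conversion is nothing fancy--a sketch of the kind of script I mean, via ffmpeg (the paths and the ProRes 422 HQ target are just my setup, not a recommendation):

```python
import pathlib
import subprocess

# Transcode every H.264 clip to ProRes so the NLE never has to touch
# the original files again.
src_dir = pathlib.Path("footage")
out_dir = pathlib.Path("transcoded")
out_dir.mkdir(exist_ok=True)

for clip in src_dir.glob("*.mp4"):
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-c:v", "prores_ks", "-profile:v", "3",  # 3 = ProRes 422 HQ
        "-c:a", "pcm_s16le",                     # uncompressed audio
        str(out_dir / (clip.stem + ".mov")),
    ], check=True)
```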
-
That's the danger. With our current economy, the more we employ robots, algorithms, and artificial intelligence, the more economic disparity we'll see. So if we don't figure out how to value humans before poverty rates go through the roof, we'll have war. Also, side note, I saw this just now. It will be interesting to see how this kind of service evolves over the next 5, 10, and 20 years. https://petapixel.com/2019/04/11/this-robot-photographer-just-shot-her-first-wedding/
-
@heart0less I get that in 15.3 when doing Fusion composites sometimes, even on a 2048x1080 timeline. It's one of many reasons I still use Fusion standalone for anything that uses more than like 2 nodes. For reference I've got a GTX 1080 with 8GB VRAM.
-
That is exactly what machine learning is for: studying human nuances and internally creating an abstract model based on real-world patterns, instead of a human-programmed algorithm. It is exactly what a human wedding videographer does: use experience to govern future actions.
-
Depends on how old you are. A lot of the stuff I mentioned are real things that we can do now. Here are some really interesting things to look into:

Many of us have probably already seen this, where they generate a 3D map of the entire soccer match from an array of cameras. That was a tech demo from over a year ago. Here is a nice overview of where we currently are with machine learning as it relates to 3D modeling. It includes some links to tools you can go try out right now. In the latter half of the video he shows off some text-to-image generators and photoreal facial generators. Certainly worth a watch. Speaking of which, there's the amazing deepfake engine. We've already seen the beginning of machine learning creating screenplays or even entire movies.

And before you point out that these aren't anywhere near the quality that humans can produce, look at the timeline. According to Wikipedia, deep learning "became feasible" in the 2010s. In 2018, Nvidia announced the Turing chips with Tensor cores, which use machine learning for denoising--really the first integration of machine learning into consumer vocabulary that I have seen. It's used for real-time raytracing in video games. Just in the past month, both Adobe and Blackmagic have announced integrating AI into their NLEs. We've barely begun with AI and machine learning. Where do you think we'll be in 20 years?

As for your point about robots taking over jobs, that is exactly right, which is why we need to figure out what an economy that no longer requires human input will look like, before it's too late--which comes full circle back to the original post. What will be the monetary value of work when the end for which that work is a means is unnecessary?

Edit: Couldn't resist adding this one: an AI found a glitch in the video game Q*bert to get an obscenely high score. In 35 years, no human had found the glitch.
-
First of all, a lot of the video/editing jobs aren't art. Analyzing a billion ads and creating something similar for a new product is EXACTLY what machine learning does best. And it's not like it's just a black box--an AI can spit out a dozen, a hundred, or a thousand samples, let a human pick what they like best, and refine, and with each iteration the machine creates a slightly better algorithm. Instead of hiring a motion graphics artist, a business owner who wants a commercial can just sit down with an AI and pick which ads they like out of a never-ending stream.

Second, I disagree entirely. How do human artists work? They build a knowledge of art history, change a few things, and build off of feedback. That is exactly what machine learning does. Instead of C-3PO wandering about shooting a wedding, picture this: a robot scouts the venue ahead of time and sets up a few dozen small cameras to film the wedding from all angles, and then uses those cameras to reconstruct the entire ceremony in 3D. It then picks the best angles based on the knowledge of every single wedding video ever shot, taking into account the satisfaction ratings of the couples (using videos of the couples' faces when they see their video). With each video, it experiments slightly by changing a few things. It composes music for the wedding based on knowing what songs the couple plays, and knowledge of all music ever written. It does all of this by the next morning. No one sees the robots at any stage--completely discreet. With each wedding it shoots, this system improves slightly. And since it's a machine, it can shoot virtually unlimited weddings every day, thus quickly becoming the best wedding videographer on the planet.

Obviously this isn't going to happen tomorrow, but there is no way to stop it from becoming a reality in the near future.
-
@kye Oh, okay, I must have misunderstood what you were getting at when you said: ...since the F6's 32 bit mode should be useful in pretty much any scenario that dual channel is--which, as you say, is "all the time." But yeah, the F6's dynamic range doesn't increase the microphone's dynamic range. You still can't get leaves rustling and a jet engine 50m away in the same file with the same mic, unfortunately.
-
@kye Have you ever used dual channel recording? How is what you are explaining different from dual channel recording and using the peaks from the lower track to replace the clipped portions of the higher track? Because I do it all the time and it works, especially when mixed in (surprise) 32 bit space, because then you can match the relative levels of the two tracks without distortion. I don't know all the maths, but I know from experience that dual channel recording is a life saver at times. I assume the F6 basically does the same thing, combining two input gains based on peaks, but automatically and internally, saving time and file space (1x 32 bit file is smaller than 2x 24 bit files).
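For anyone who hasn't tried it, the splice itself is conceptually simple. A rough numpy sketch, assuming float tracks and a 12 dB pad between the two inputs (both numbers are mine, not anything a specific recorder does):

```python
import numpy as np

def merge_dual_channel(hot, safe, pad_db=12.0, clip_at=0.99):
    """Wherever the hot track clips, substitute the safety track
    boosted back up to match. `hot` and `safe` are float arrays
    in [-1, 1]; `pad_db` is the gain offset between the two inputs."""
    gain = 10 ** (pad_db / 20)         # 12 dB -> ~4x linear
    clipped = np.abs(hot) >= clip_at   # samples at/near full scale
    return np.where(clipped, safe * gain, hot)
```

Doing the math in float is what lets the boosted safety track line up with the hot track without re-clipping.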
-
@SR It's absolutely a big deal, but primarily for non-pros. I believe what @kye is saying is that Zoom didn't do anything particularly difficult--it's not like they completely redesigned how the circuitry works. It's similar to the dual channel recording feature many recorders have had for years, except that it merges the two files automatically into a 32 bit file, instead of giving you two 24 bit files that you can manually splice if you so desire. But it's absolutely a useful feature, especially for one man bands who don't have enough eyes to watch the camera and the audio meters at the same time, or for ultra low budget projects (like mine) who employ non-pros without much experience. The dynamic range of the audio file should increase dramatically. 32 bit 48kHz is exactly twice the file size of 16 bit 48kHz, not including the negligibly small amount for metadata.
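The file-size claim is easy to sanity check--per channel of PCM at 48 kHz:

```python
# Bytes per second for one channel of PCM audio at 48 kHz:
for bits in (16, 24, 32):
    print(f"{bits}-bit: {48_000 * bits // 8:,} bytes/s")
# 16-bit:  96,000 bytes/s
# 24-bit: 144,000 bytes/s
# 32-bit: 192,000 bytes/s -- exactly 2x the 16-bit rate, and still less
# than the 288,000 bytes/s of two separate 24-bit safety tracks.
```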
-
It has two ADCs. If you overpower the higher gain one, it still has the lower gain one. If you overpower the lower gain one, then your microphone is already distorting (according to Zoom) and no amount of gain reduction will fix it. It doesn't look like you have any control over either of the gain settings on the ADCs.
-
The difference with photography is that you are never in a situation where your lens clips your highlights, whereas a microphone has a dynamic range. If the 32 bit recording has more dynamic range than the microphone, then yeah, you would never need to adjust gain. Basically all they are saying is that the digital recording format is no longer the limiting factor in dynamic range; it's the microphone itself.
-
I'm glad someone finally went to 32 bit audio recording. A lot of software processes audio in 32 bit for essentially infinite dynamic range; it was only a matter of time before people started recording in that format. It would certainly help on a lot of sets I'm on, where we have students/less experienced people running sound who are more likely to clip audio. Or any time I'm the solo audio guy, I can concentrate 100% on the boom without worrying about gain. Currently I use dual track recording, but then I've got to patch things together in post manually. However, without having used it, obviously, I think the F6 looks a lot less ergonomic than the F4/F8. One thing I like about the F4 is having an immediate button to turn tracks on or off, and to solo a track. It looks to me as though you probably have to menu dive to do those actions on the F6, which would really suck. I also like the headphone knob on the front of the F4.
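If the "essentially infinite dynamic range" part sounds like magic, here's the whole trick in a few lines of numpy (the numbers are arbitrary):

```python
import numpy as np

# A peak pushed past full scale survives a gain ride in 32-bit float,
# but is truncated forever in a fixed-point pipeline.
peak = np.float32(2.5)             # ~8 dB over full scale
print(peak * 0.25)                 # 0.625 -- recovered cleanly
fixed = np.clip(peak, -1.0, 1.0)   # what an integer pipeline stores
print(fixed * 0.25)                # 0.25 -- the overs are gone
```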
-
Z-CAM quietly announce 8k and 6k FULL FRAME cameras - no joke!
KnightsFan replied to Oliver Daniel's topic in Cameras
6K60, 4K100 "at least," and "more than 200" fps in HD. Mentions a dual native ISO sensor as well, with 14+ stops of dynamic range. There will be MFT, EF, and PL mounts. Nice!
-
NAB 2019 predictions and major talking points - BMPCC 4K Pro anyone?!
KnightsFan replied to Andrew Reid's topic in Cameras
Ah yes! I totally forgot about that, but now I remember reading it before. Hmm, I actually might really look into that URayTech one now that I have some time. Ideally I'd like to integrate the video stream into a custom app, and I have a feeling the Accsoon has some closed source pieces, since it seems to do intelligent switching and stuff, whereas the URayTech one actually mentions having an open SDK.
-
NAB 2019 predictions and major talking points - BMPCC 4K Pro anyone?!
KnightsFan replied to Andrew Reid's topic in Cameras
I haven't seen anyone talk about the Accsoon CineEye HDMI transmitter yet. It looks like a good, very cheap way to get wireless video. For $219, you get a single device that wirelessly sends HD video directly to your phone. Currently, the only way I know of to do this on the same budget is with consumer wireless HDMI transmitters, which means two devices that need to be rigged and supplied power, plus an external monitor at the receiving end (also requiring power!). With this, it's a single device with a built-in battery that can send video to a director, producer, or any 4 people in the area who have phones or tablets. I really think that sending video over WiFi should be a standard feature in all decent cameras these days, although Z Cam is the only one doing it.
-
After reading up a bit, the whole ZaxNet concept is brilliant. Really a phenomenal feature. Adding an NP-F battery plate for the F6 is a nice addition. It also says it can be powered by USB-C, so I assume that you can use it as an audio interface with a single cable for power and data. Speaking of which, my biggest annoyance with the F4 is that it doesn't boot back into interface mode. It lives on my desk as an audio interface between shoots, and it's an extra 10 button pushes every time I turn it on.
-
I don't know much about the more expensive audio toolset, and most of those better features would be wasted on the small sets I've been on anyway, to be honest. I'll have to look more into ZaxNet and the way the Nova has built-in slots for wireless receivers. The Zoom F6 is an odd-looking thing, for sure. Almost cube shaped. Any idea why they would depart from the F4/F8 shape?
-
It's not theoretical. Check out 4K footage from the NX1 vs. the 1080p footage from the same camera. Downscale the 4K footage to 1080, and compare them at that HD resolution. It's a night and day difference, from the same sensor, processor, and codec. You can do a similar comparison on most other cameras as well. Whether it's "useful in practice" depends on whether you mean useful to the general public's enjoyment of a compressed YouTube file, or useful to a cinematographer's critical eye looking at a high bitrate master. And just to be clear, downscaling in post doesn't gain quality; it loses quality. The original 8K 4:2:0 file has better information fidelity and more spatial resolution than a 4K 4:4:4 file downscaled from the 8K original. The only claim is that downscaling to 4K 4:4:4 will retain more information than downscaling to 4K 4:2:0.
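If anyone wants to run that NX1 comparison themselves, something like this is all it takes (file names are hypothetical; lanczos and CRF 10 are just to keep the scaler and the re-encode from being the bottleneck):

```python
import subprocess

# Downscale the UHD clip to 1080p, letting the subsampled chroma fill
# out to 4:4:4, then eyeball it against the camera's native 1080p file.
subprocess.run([
    "ffmpeg", "-i", "nx1_uhd.mp4",
    "-vf", "scale=1920:1080:flags=lanczos",
    "-pix_fmt", "yuv444p",            # 4:2:0 chroma -> full res at 1080p
    "-c:v", "libx264", "-crf", "10",  # near-lossless re-encode
    "downscaled_1080.mov",
], check=True)
```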