Everything posted by kye
-
Been online since 1994 - the strength of my filters should be a testament to that!
-
Could be, they're very easy to drink! Speaking of Korean TV series, they're surprisingly well produced and image quality is often remarkably high. My wife says the storytelling is also very good, so if you're inclined to a bit of soap then they might be worth a watch. Netflix is full of them if you can navigate their algorithm in the right direction 🙂
-
Transcoding ProRes or other with M1? Better results than h.265?
kye replied to filmmakereu's topic in Cameras
Everyone wants to talk about 4K and 6K and 8K and 12K, and about 8-bit vs 10-bit vs 12-bit vs 14-bit, but when it comes to the actual quality of the image, people don't want to know. I was interested in it but found so little online that I ended up having to do my own study on the SNR of various codecs. It's here: You're asking about the quality / performance of the same flavours of codec but with differing encoding methods (software/hardware). I'd bet you won't find anything and you'll have to do your own testing to get answers. I found it almost impossible to get information on the T2 chip that even specified whether hardware encoding was supported, and if so, which codecs / bit-depths / subsampling it handled. Good luck, but my advice is to give up asking.
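If you do end up testing it yourself, here's a minimal sketch of the kind of comparison I mean - Python driving ffmpeg, encoding the same source with the software encoder (libx265) and the hardware one (hevc_videotoolbox on Apple machines), then measuring PSNR against the source as a rough stand-in for SNR. The file names and bitrate are placeholders.

```python
import subprocess

SOURCE = "clip_source.mov"  # hypothetical test clip

# Encode the same source with a software encoder (libx265) and the
# hardware encoder (hevc_videotoolbox) at the same nominal bitrate.
encodes = {
    "software_hevc.mp4": ["-c:v", "libx265", "-b:v", "50M"],
    "hardware_hevc.mp4": ["-c:v", "hevc_videotoolbox", "-b:v", "50M"],
}

for out_file, codec_args in encodes.items():
    subprocess.run(["ffmpeg", "-y", "-i", SOURCE, *codec_args, out_file],
                   check=True)

# Compare each encode back to the source; ffmpeg prints the PSNR
# summary (higher dB = closer to the source) to stderr.
for out_file in encodes:
    subprocess.run(["ffmpeg", "-i", out_file, "-i", SOURCE,
                    "-lavfi", "psnr", "-f", "null", "-"],
                   check=True)
```

The same structure works for SSIM (swap the psnr filter for ssim) or for ProRes vs H.265 comparisons - the point is just to take the guesswork out of it.
-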
We bought one of each and tried them all at a dinner party, with my wife (who is heavily into watching Korean TV series) explaining the customs around drinking it, with the eldest person serving first, how to hold the bottle as you pour, who has the glass highest when toasting, etc etc. The only challenge was the 'original' flavour, which reminded me of grappa / methylated spirits! Urrgh.
-
Yeah, that's what I thought you were getting at. Wow... you mean that hiring a photographer (or videographer) isn't just hiring their camera, and that the operator is more than just a technician????? HOLD THE FRONT PAGE!! But seriously... yeah. All this equipment stuff is BS if you're putting it ahead of the creative side. No-one does their best work when fighting with the tech, even if the tech is nice tech. "The acting was uninspired, the lighting was awful, and the story was un-engaging, but overall it was a great film because it was shot on a great camera with high resolution and great colour science" - said no-one ever.
-
This conversation gets better and better.... The simple choice of wanting 8K output not only completely dominates the hardware that the OP has to use and necessitates a proxy workflow (which the OP seems to be against), but now it dominates the choice of NLE too! Never mind if I hate the computer, workflow, or software... I have 8K! I think the OP should really be asking themselves how much 8K is worth in comparison to the other things they apparently have to trade off. I used to be very 4K-centric and wanted to future-proof my videos as much as possible (which is a genuine consideration, as I am basically the family historian, so my videos will get more interesting over time rather than less), but as I gradually discovered what I liked in an image and what made the most difference to getting those results, I have come down to 1080p. My interest in 4K was naive, and it was only through learning more about the craft that I realised how little it actually matters. Everyone is different in their priorities, but when priority #1 means having to compromise on #2, #3, #4, and #5... it's a good time to question how that stacks up.
-
I was advocating for 1080p ProRes / DNxHD proxies, which only requires a computer capable of 1080p ALL-I playback - these days that's almost all of them. If he wants to shoot 8K and master in 8K then good luck to him; it's the editing experience that is the question. Plus, who knows what kind of hype around 8K is present in the marketplace these days - until clients work out that 8K is pretty well useless and doesn't improve IQ even if it can be broadcast, there might still be money in offering 8K as a service, and considering the state of film-making in 2020, I understand doing anything that gives a competitive edge.
-
Soju! I had my first soju experience only the other day, and it was quite delicious 🙂 I also like that the grapefruit one was featured - it was one of the ones that everyone liked the most!
-
The other challenge that @herein2020 was avoiding is incompatibility. My dad used to work for a large educational institution and ordered a custom-built PC to replace their main file server, so naturally he ordered the latest motherboard, CPU, RAM, RAID controller, and drives. Long story short: two months after getting it he still hadn't managed to get an OS to install correctly. Neither had anyone else on the hundred-page thread discussing the incompatibility, in which multiple people confirmed that the manufacturers of the various components were all blaming each other and no-one was working on a solution. So my dad did what everyone else in the thread did and gave up. He was lucky enough to be able to lean on the wholesaler to take the equipment back for a full refund, but others weren't so lucky, and the thread continued for another year as people swapped out parts for other models/brands to work out what the best fully-functional system was. Or you just buy something that someone else has already checked for you. There's a reason that many serious software packages are only supported on certain OS versions and certain hardware configurations: their clients value reliability and full-featured service and support over a 12% improvement in performance.
-
XMAS DOGGIES!! My only question is - what alcohol did you put in those little barrels? Brandy perhaps, for the holiday season? Whisky for a more straight-up drink? ...Tequila perhaps? 🙂 🙂 🙂
-
While the C70 has many advantages and attractive features, I think the above is probably the most important factor of any camera, because it impacts the level of creativity and inspiration in the video, not just what colour or clarity the pixels have in the output files. I consistently find that creativity evaporates when I feel like I'm fighting the equipment rather than it having my back and supporting the work. In the creative pursuits this is a night-and-day difference, but of course it's also different for every person, so it's about the synergy between the camera and the user rather than one camera suiting everyone. Great stuff!
-
I thought quite a few folks here liked the images from the Z6, but maybe the timing wasn't right for people to actually get one. Certainly, the colour from Fuji on their latest cameras is very nice, and the Eterna colour profile is very nice indeed.
-
Fair enough. Unfortunately, your budget isn't sized appropriately for the resolutions you're talking about. I think you have three paths forward:
- Give up on the laptop and add a zero to your budget, making it $20,000 instead of $2,000, then go find where people are talking about things like multi-GPU watercooling setups, where to put the server racks, and how to run the cables to the control room
- Do nothing and wait for Apple to release the 16" MBP with their new chipset in it (this could be a few years' wait though, and there are no guarantees about 8K)
- Work with proxies
Proxies are the free option, at least in dollar terms, and you probably don't need to spend any money to get decent enough performance. I'd suggest rendering 1080p proxies in either ProRes HQ or DNxHD HQ. This format should be low enough resolution for a modest machine to work with acceptable performance, but high enough resolution and colour depth that you can do almost all your editing and image processing on the proxy files, and they will be a decent approximation of how the footage looks. Things like NR and texture effects would need to be adjusted while looking at the source footage directly, but apart from that you should be able to work with the proxy files and then just swap back to the source files and render the project in whatever resolution you want to deliver and master in.
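If it helps, here's roughly how I'd batch-render those 1080p proxies outside the NLE using ffmpeg from Python - just a sketch under my own assumptions: the folder names are placeholders, and profile 3 of prores_ks is the ProRes 422 HQ flavour.

```python
import pathlib
import subprocess

SOURCE_DIR = pathlib.Path("footage")   # hypothetical source folder
PROXY_DIR = pathlib.Path("proxies")
PROXY_DIR.mkdir(exist_ok=True)

for clip in SOURCE_DIR.glob("*.mov"):
    proxy = PROXY_DIR / (clip.stem + "_proxy.mov")
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(clip),
         "-vf", "scale=1920:-2",                   # 1080p width, keep aspect
         "-c:v", "prores_ks", "-profile:v", "3",   # profile 3 = ProRes 422 HQ
         "-c:a", "copy",                           # leave the audio untouched
         str(proxy)],
        check=True,
    )
```

Most NLEs (Resolve included) can generate proxies internally too, so this is mainly useful if you want to render them on a different machine or outside the project.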
-
There are two ways to buy a computer for video editing. The first is to look at what performance you need and buy something that can deliver that for you, regardless of price. The second is to set a budget and get the most you can for that, accepting whatever level of performance that gives you and working around the limitations. $2,000 isn't even in the same universe as the first option, so your only hope is to buy the best performance you can, and then work out the best proxy workflow for your NLE and situation.

To get good editing and colour grading performance, your system needs to be capable of maybe 2-4 times (or more) the performance required to simply play the media you're editing. Even a simple cut requires your computer to load the next clip and skip to its in point - and if it's IPB, it needs to seek back in the file to the previous keyframe and then decode each frame forward from there until it knows what the first frame on your timeline looks like - all while still playing the previous clip. That doesn't include putting a grade on the clips once they're decoded, or having to process multiple frames for things like temporal NR, etc. Playing a file is one thing; editing is something else entirely.

By the way, Hollywood films are regularly shot in 2.8K or 3.2K and processed and delivered in 2K, so trying to convince someone that you need an 8K workflow is basically saying you need 16 times the image quality of a multi-million dollar Hollywood film. Good luck with that. Most systems work just fine with 2K, by the way....
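To make the keyframe point concrete, here's a rough sketch using PyAV (Python bindings for ffmpeg's libraries) of what "give me the frame at the cut point" costs with IPB media. The file name and timestamp are made up; the point is that a seek can only land on a keyframe, and everything between it and the frame you asked for has to be decoded and thrown away.

```python
import av  # PyAV: pip install av

def frame_at(path, target_seconds):
    """Return the decoded frame at target_seconds from an IPB file."""
    container = av.open(path)
    stream = container.streams.video[0]
    target_pts = int(target_seconds / stream.time_base)

    # A seek can only land on the previous keyframe (I-frame)...
    container.seek(target_pts, stream=stream, backward=True)

    # ...so every P/B frame from that keyframe up to the target has to be
    # decoded and discarded. This is the hidden cost of a "simple" cut.
    for frame in container.decode(stream):
        if frame.pts is not None and frame.pts >= target_pts:
            return frame
    return None

frame = frame_at("ipb_clip.mp4", 42.0)  # hypothetical clip and timestamp
```

With ALL-I media every frame is a keyframe, so the decode-and-discard loop collapses to almost nothing - which is exactly why ALL-I proxies scrub so much better.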
-
For some time I've been thinking about the texture of film. I've also been thinking about the texture of RAW images, both 4K and 1080p, and about the texture of low-bitrate cheap digital camera images, and how much I don't like it.

Last night I watched Knives Out, which was very entertaining, but of note was that it was shot by Steve Yedlin ASC, in 2.8K RAW, and mastered in 2K. For those that aren't aware, Steve Yedlin is basically a genius, and his website takes on all the good topics like sensor size, colour science, resolution, and others, and does so with A/B testing, logic, and actual math. If someone disagrees with Steve, they have their work cut out convincing me that they know something Steve doesn't!

This inspired me to do some tests on processing images with the goal of creating a nice, timeless texture. Film has a pleasant analog but very imperfect feel, with grain (both the random noise grain and the grain size of the film itself, which controls resolution). Highly-compressed images from cheap cameras have a cheap and nasty texture, often called digititis, which is to be avoided where possible. RAW images don't feel analog, but they don't feel digital in the digititis way either. They're somewhere in-between, but in a super-clean direction rather than having distortions - film has film grain, which isn't always viewed as a negative distortion, while highly-compressed digital has compression artefacts, which are always viewed as a negative distortion.

Here's the first test, which is based on taking a few random still images from the net and adding various blur and grain to see what we can do to change their texture. The images are 4-7K and offer varying levels of sharpness. The processing was a simple Gaussian Blur in Resolve, at 0.1 / 0.15 / 0.2 settings, plus film grain added to roughly match. In the exported file the 0.1 blur does basically nothing, the 0.15 blur is a little heavy-handed, and the 0.2 looks like 8mm film, so very stylised!

The video starts with each image zoomed in significantly, both so that you can see the original resolution in the file, and so that you can get a sense of how having extra resolution (by including more of the source file in the frame) changes the aesthetic. Interestingly, most of the images look quite analog when zoomed in a lot, which may be as much to do with the lens resolution and artefacts being exposed as with the resolution of the file itself. My impression of the zooming test is that the images start out looking very retro (at 5X all their flaws are exposed) but transition to a very clean and digital aesthetic. The 0.15 blur seems to take that impression away, and with the film grain added it almost looks like an optical pull-out shot on film of a printed photograph. In a sense they start looking very analog, and at some point the blur I'm applying becomes the limiting factor, so the image doesn't progress beyond a certain level of 'digitalness'.

In the sections where I faded between the processed and unprocessed image, I found it interesting that the digitalness doesn't kick in until quite late in the fade, which shows the impact of blurring the image and putting it on top of the unprocessed image - an alternate approach to blurring the source image directly. I think both are interesting strategies that can be used.
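For anyone who wants to poke at the same idea outside Resolve, here's a quick sketch of the blur-then-grain operation in Python with Pillow and NumPy. The blur radius and grain strength are my own guesses, not calibrated to Resolve's 0.1 / 0.15 / 0.2 settings, and the file names are placeholders.

```python
import numpy as np
from PIL import Image, ImageFilter

def film_texture(src_path, out_path, blur_px=1.5, grain_strength=6.0, seed=0):
    """Soften digital sharpness with a Gaussian blur, then add grain."""
    img = Image.open(src_path).convert("RGB")

    # Blur away the per-pixel digital artefacts (sharpening halos,
    # compression edges) before re-texturing the image.
    img = img.filter(ImageFilter.GaussianBlur(radius=blur_px))

    # Add Gaussian noise as a stand-in for grain. Real film grain is more
    # structured than white noise, but it shows the basic idea.
    rng = np.random.default_rng(seed)
    arr = np.asarray(img, dtype=np.float32)
    arr += rng.normal(0.0, grain_strength, arr.shape).astype(np.float32)
    Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8)).save(out_path)

film_texture("still_4k.jpg", "still_4k_textured.jpg")
```

The other strategy from the fade test - blurring a copy and compositing it over the unprocessed image at partial opacity - is just a weighted blend of the two arrays, so it's easy to try both.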
Now obviously I still need to do tests on footage I have shot, considering that I have footage across a range of cameras, including XC10 4K, GH5 4K, GH5 1080p, GoPro 1080p, iPhone 4K, and others. That'll be a future test, but I've played in this space before, trying to blur away sharpening/compression artefacts. There are limits to what you can do to 'clean up' a compressed file, but depending on how much you are willing to degrade the IQ, much is possible. For example, here are the graded and ungraded versions of the film I shot for the EOSHD cheap camera challenge 18 months ago. These were shot on the mighty Fujifilm J20 in glorious 640x480, or as I prefer to call it 0.6K.... IIRC someone even commented on the nice highlight rolloff that the video had. All credit goes to the Fuji colour science 😂😂😂 Obviously I pulled out all the stops on that one, but it shows what is possible, and adding blur and grain was a huge part of what improved the image from what is certain to be several orders of magnitude worse than what anyone is working with these days, unless you're making a film using 90s security camera footage or something.
-
I also don't get it, although I suspect this is more because my projects are too disorganised at the start (multiple cameras without timecode) and my process is too disorganised during the edit.

One thing that might be useful for you, and which I use all the time, is the Source Tape viewer. It puts all the clips in the selected bin into the viewer in the order that they appear in the Media Pool (i.e., you can sort however you like), and you can just scrub through the whole thing selecting in and out points and building a timeline. The alternative in the Edit page is having to select a clip in the media pool, choose in and out points, add it to the timeline, then manually select the next clip. Having to manually select the next clip is a PITA, and I don't think you can do it via key combinations, so it's a mouse - keyboard - mouse - keyboard situation, rather than just hammering away on the keyboard making selects.

The impression I got from the marketing materials and the discussion around launch was that it's for quick turnaround of simpler videos like 30s news spots, vlogs, or other videos that require a very fast turn-around. They even mentioned that the UI has been designed to work well on laptop screens, which further suggests that editing in the field for fast turn-around is a thing - think of how YouTubers fall over themselves to 'report' from conferences like CES, Apple events, etc, trying to be first. If you were a YouTuber or 'reporter' who shoots in 709, does a single-camera interview / piece-to-camera, puts music and B-roll over the top, adds pre-rendered end cards, and then hits export, it would be a huge step up in workflow. I suspect it's just not made for you. I don't use multicams, Fusion, or even Fairlight, but the Cut page is still too simple for me.
-
Welcome to the DR club. Come on in, the water's fine!

I think you have three main decisions in front of you: the first is whether you want to focus on the Cut or Edit page first, the second is whether you want to invest lots of time up front or learn as you go, and the third is whether you're going to use a hardware controller of some kind. All have pros and cons. I'm not sure how familiar you are with DR, so apologies if you already know this...

DR has two main edit pages, the Cut page and the Edit page. The Cut page is the all-in-one screen designed for fast turnaround; you can do everything inside it, including import - edit - mix - colour - export, but it's got limited functionality. The Edit page is fully-featured but is a bit cumbersome in terms of needing lots of key presses to get to functions, etc. The Edit page also only focuses on editing, and is designed to be used in conjunction with the Fairlight, Colour, and Deliver pages. I think BM have decided to leave the Edit page mostly alone and are really focusing on the Cut page - for example, their new small editing keyboard is targeted at the Cut page but apparently doesn't work that well with the Edit page, with some buttons not functioning there, at least at the current time. I started using DR long before the Cut page existed, so I haven't gotten used to it yet, but if your projects are simpler then it might be worthwhile to get good at the Cut page to rough out an edit and just jump into the Edit page for a few final things. If you're shooting larger, more complicated projects then it might be better to focus on the Edit page first. Eventually you'll need to learn both, as each has advantages.

The other major decision is whether you take the time to watch decently long tutorials, taking notes along the way, or jump in and just do a few quick tutorials to get up and running. The first approach is probably better in the long run but more painful at the start. I suspect you're going to have three kinds of issues learning it:
- Finding the basic controls which every NLE has
- Finding the more advanced things that DR will have but won't name the same way, so they're hard to search for
- Accomplishing things when DR goes about them a different way than what you're used to (e.g., differences in workflow), which will be impossible to search for
Learning deeply / thoroughly at the start will cover all three, whereas learning as you go will leave the latter two subject to chance, and potentially leave you less productive for months or years. Plus, it's pretty painful to go through the deep / thorough materials once you've already got the basics, as much will be repeated.

If you're getting a hardware controller, at least for editing, then that can steer your other choices. Like I said before, the new Speed Editor keyboard is designed to work with the Cut page, so it will steer you in that direction. The other reason I mention it is that it will give you clues about what things are called and the rationale behind wider workflow issues, especially if you watch a few tutorials on how to use it, as they will cover the specifics as well as at least nod in the direction of the concepts. If you're going to get a hardware controller then now is probably the time; you can always sell it again if it doesn't fit your workflow or you change to a different device. The Speed Editor is pretty cheap (in the context of edit controllers) so that might be worth considering.
Some general advice:
- Make good use of the Keyboard Customisation - it has presets for other NLEs but is highly customisable, so use that to your advantage
- RTFM. Even just skim it. It is truly excellent, and is written by a professional writer who has won awards. I open it frequently, and keyword searching is very useful, assuming you know what stuff is called. Skim reading it may answer a lot of the more abstract / workflow-type questions too. It's long though - the current version is 2937 pages, and no, I'm not kidding!
- Google searches often work - I didn't learn deeply and thoroughly when I started (as I didn't really know how to edit, so much of it would have been out of context), so I do random searches from time to time, and I often find other people asking the same questions as me. This can help you find things, or at least teach you what stuff is called. Workflow searching unfortunately doesn't yield much, at least in my experience.
- Ask questions here. EOSHD has a growing number of DR users, and even if we don't know the answer, an experienced editor asking a question is useful to me because it gives away how the pros do things, so I often research questions as much for myself as for others.

It seems so. 12.5 used to crash about 2-4 times per hour for me, but 16 basically hasn't crashed in weeks/months. I think it's probably related to your software / hardware environment.
-
Shooting music videos in a warzone... 2020 was a full-on year!!
-
Merry Christmas @Andrew Reid and everyone.. Just think how great it will be once we're all able to get out and actually start shooting again!
-
True. If they'd designed an interchangeable lens mount then it could still have been an M12 based system, but having the thread permanently mounted to the camera wouldn't have worked. Much better would have been to mount a few metal female mounting points around the sensor and have some kind of assembly that contained an M12 mount and lens that could be fastened to the board with the sensor on it. M12 is a reasonable mount because it's widely used in security / CCTV cameras, but the mounts are not designed to be re-used beyond initial assembly of the product.
-
The problem is that the flange distance of the GoPro lens setup, which IIRC is M12, is very small, and the camera has a very narrow opening. This is a GoPro replacement lens - note that the entire lens is very narrow, essentially fitting inside the M12 mount (which has an internal thread). Most interesting lenses are much thicker, like this one, so they can't be mounted without major surgery to the GoPro electronics: However, if GoPro had designed it a little differently then it would have been pretty easy to allow other lenses to be used - they could even have sold them at horrifically inflated prices like every other product they sold.
-
A couple of years ago my wife and I did a trip to India with a charity. The trip consisted of a mixture of tourist stuff as well as visiting the beneficiaries of the charity in a bunch of rural villages. Due to a combination of factors I didn't film the people we visited in the villages, but I have a number of stills photos that were taken, and am now editing the project and looking for advice on how to integrate these images as seamlessly as possible.

The rest of the project consists of footage of travel legs through cities and rural India, as well as the tourist locations like the Taj Mahal etc, so my concern is that the images will be a bit jarring. But visiting the villages was a personal highlight of the trip, so if anything I want to emphasise those parts rather than de-emphasise them.

The images I have are smartphone images and range from single images (like a group photo), to sequences of photos (how people will take 3 or 4 images of something happening), all the way to a few bursts of 20+ images of things like a guy operating a loom.

I'm immediately reminded of the work of Ken Burns, and will definitely animate some movement in the images, but I don't typically narrate my videos and I have very limited audio from these locations, so the images may be required to stand on their own, which I'm concerned about. I can probably cheat a little and find some audio from elsewhere and use that as ambience, which I've done before on other projects to cover gaps in my filming (lol). I also took audio when the women sang, so I have that too.

I'm thinking that I should embrace it and deliberately give the edit a more mixed-media feel, considering that I can make stop-motion from the sequences / bursts, and I could even use stills instead of video in moments where I did shoot video, or go so far as pulling frames from video to use as stills, in order to 'balance' the amount of stills vs video and make it a consistent style element.

Has anyone edited a project where there were a lot of still images, stop-motion, or other non-video elements? I'm keen to hear ideas....
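For what it's worth, the Ken Burns moves themselves don't have to be done in the NLE - ffmpeg's zoompan filter can turn a still into a slow push-in clip. Here's one sketch of that, with made-up file names, a 5-second move at 25 fps, and ProRes output so it drops cleanly onto a timeline.

```python
import subprocess

def ken_burns(still, out, seconds=5, fps=25, max_zoom=1.3):
    """Render a slow centred push-in on a still image (Ken Burns style)."""
    frames = seconds * fps
    zoompan = (
        f"zoompan=z='min(zoom+{(max_zoom - 1) / frames:.6f},{max_zoom})':"
        f"d={frames}:x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':"
        f"s=1920x1080:fps={fps}"
    )
    subprocess.run(
        ["ffmpeg", "-y", "-loop", "1", "-i", still,
         "-vf", zoompan, "-t", str(seconds),
         "-c:v", "prores_ks", "-profile:v", "3", out],
        check=True,
    )

ken_burns("village_group_photo.jpg", "village_group_photo.mov")
```

Changing the x/y expressions gives pans instead of a centred zoom, and stop-motion from the bursts is just the stills concatenated at a low frame rate, so the same tool covers both.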
-
So, to paraphrase...
- In March, GoPro stock was down to 3% of its 2014 peak
- It has since gone up from 3% to 10% of that peak
- The rise was unexpected
In a conversation about a company's business model, where that business model hasn't really changed in the entire lifetime of the company, I would suggest that "down by 90% since 2014" is a more relevant figure than what they did in the last quarter, month, quarter-of-an-hour, etc... It's relatively easy to select any time period you like to prove a point. For example, in the last few days they lost over 8% of their value. At that rate I guess they'll be bankrupt by the new year. Unless, of course, you think I'm taking the most current data out of context and should zoom out a bit? 🙂
-
The question "what sensor most matches film?" is about as useful as "what film matches digital?". If you're interested in using digital sensors to emulate film, then learn to colour grade, like I said. What is possible far outreaches what people think is possible.
-