Everything posted by kye
-
"looks less and less like reality, HDR and high resolutions make for a fake look that one does not see with our eyes in real life" .......well said. I think this is the fundamental aspect that I dislike - it's that the image looks unnatural because it's too sharp. Reality just isn't that sharp, and so I think it gives a completely artificial look. When I see lots of images they look like they've been slightly embossed. Video and film are essentially creative fields, so everything is allowed, so using this hyper-real look for Transformers or an action movie or tech startup is appropriate, but using it for a romantic comedy or period piece is silly. People have taken "cinematic" and made it apply to something that never actually occurred in the cinema. So we all end up with specialist VFX cameras that fall short in other ways. No problems if you're creating a spectacle, but is it really the best situation for Canon and BM and Sony to be making cameras for Michael Bay and thinking that we all do the same type of work as he does? I don't think so! I'd much prefer the camera they'd make for Werner Herzog! There will always be that element - trendy stuff sells. But in the background there will always be people making things of substance that stay relevant for 10 years instead of 10 days. Maybe you're watching the wrong stuff 🙂 Interesting thought about specialist cameras - I agree. Phantom have done well to carve themselves the high speed niche. In a sense ARRI have too with the best colour lower-resolution space (I hazard to call this a niche as it's really all non-VFX films and TV). I wonder about the GH6 and if it will try and do something super-well or if it will be an all-rounder at launch. "if you have a lot of talent and money you can get away with being more subtle".. totally agree, but I would rephrase this to be less subtle.... I'd say that if you have talent and almost any budget at all then you can make a film worth watching without having to resort to using party tricks from the tech to get people to keep watching.
-
I suppose that it could happen - it's easy to build a LUT where multiple input values map to the same output value, in which case reversing the adjustment would be impossible, although I'm guessing that's not likely here. I would suggest that the Panasonic colour science may be compressing a certain part of the colour space (pulling skintones closer together is flattering, so it's definitely a thing that happens), but I doubt they'd completely crush anything. That's why I was suggesting a higher-resolution colour chart - two patches might be very similar, but if you knew what skewing was happening around them you might be able to see what was going on and find a way around it. I did a bunch of tests in Resolve by using two 10-step generators (one horizontal and one vertical) to create a 10x10 grid of things like Hue vs Saturation, and pointed the camera at the monitor in a dark room. You can "zoom in" to the hues and see what the GH5 is doing. That's relatively easy and might be a quick exercise to get a feel for those trouble spots. You could even set up the computer with one of those grids, film it with the GH5, feed the GH5 into a monitor that applies the LUT and displays a vectorscope, and then in Resolve play with the Hue and Sat controls in the LGG panel to move the grid around while watching what happens on the vectorscope. If it pinches or ripples somewhere it should be pretty obvious. I wouldn't suggest that a monitor is a good source of light, but it might be useful for getting a sense of what the colour profile of the GH5 is doing.

I'd either shoot HLG, which isn't quite rec2100 or rec2020 but is close enough, or I'd shoot CineD. The challenge is that I shoot in 24p, 60p, and 120p, and the HLG profile isn't available in 120p mode, so for a while I was shooting CineD in all of them so I'd have the same colour in post. Since then I've changed back to HLG, with CineD only in 120p, because I can match things easily enough in post and I appreciate the extra DR. I also shoot the same projects (trips) on the Sony X3000 action camera (prior to that it was a GoPro) and whatever iPhone I had at the time. There aren't ACES profiles for either of them, so there was no advantage in the ACES compatibility of the V-Log upgrade for the GH5.

I kind of went down the colour grading rabbit hole and ended up buying the BMMCC to copy its colour science on the GH5 (as I couldn't afford an Alexa to compare with), but since then I've learned a lot about colour grading, what good colour is and how to get it, so I'm now less concerned with that detail. Resolve has since implemented the Colour Managed feature, which does a great job of adjusting WB etc in post with the rec2100 colour space, and I've worked out how to get nice results when grading under a film emulation LUT (I like the 2393 LUT quite a bit). Now I'm more focused on cinematography and editing, as I can get colour that's good enough for my purposes. I'm not winning any awards, that's for sure, but I still wouldn't be if I managed to perfectly replicate one camera with another.
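On the many-to-one point at the top of this post, here's a toy illustration of why such an adjustment can't be reversed - a made-up 1D LUT of my own, nothing to do with Panasonic's actual transforms:

```python
import numpy as np

# A made-up 1D LUT where two different input code values collapse to the
# same output value. Breakpoints and values are arbitrary, for illustration only.
lut_in  = np.array([0.00, 0.25, 0.50, 0.75, 1.00])
lut_out = np.array([0.00, 0.40, 0.60, 0.60, 1.00])  # 0.50 and 0.75 both map to 0.60

def apply_lut(x):
    # Piecewise-linear interpolation between the LUT's breakpoints,
    # which is essentially what a .cube LUT does.
    return np.interp(x, lut_in, lut_out)

print(apply_lut(0.50), apply_lut(0.75))  # both print 0.6

# No inverse LUT can undo this: an output of 0.6 could have come from 0.50,
# 0.75, or anything in between, so the original value is simply gone.
```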
On the colourist forums there is thread after thread of people getting super detailed about tiny aspects of film emulation, and then in other threads there are casual comments mentioning that a PFE LUT is just one of the 20+ adjustments they typically make to build a look, or that on real projects they can get away with one or two nodes instead of a PFE to give a bit of a film vibe, because the rest of the look is obscured by the other adjustments. The number of times an authentic film look is really required seems to be pretty low, and most people would accept a good-enough 8mm simulation made from just a blur, some gate weave, a simple HueVsHue and HueVsSat, and a ton of grain. I've seen threads where people say they've never seen a film grain emulation that looked remotely real, and that having worked with film for 30 years they're especially attuned to it - but then you've got Walter Volpatto (about as senior a colourist as you get) saying he doesn't even add grain on most projects anymore because the streaming compression kills it.
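For what that quick-and-dirty recipe might look like in practice, here's a rough sketch on a single frame - the parameter values are my own guesses, and it skips the HueVsHue/HueVsSat part, which is easier to do with curves in Resolve:

```python
import numpy as np

def cheap_8mm(frame: np.ndarray) -> np.ndarray:
    """Very rough 8mm-ish treatment of one RGB frame of floats in [0, 1]."""
    # Gate weave: shift the whole frame a couple of pixels in a random direction.
    dy, dx = np.random.randint(-2, 3, size=2)
    frame = np.roll(frame, (dy, dx), axis=(0, 1))

    # Crude softening: average each pixel with its neighbours below and to the right.
    soft = (frame + np.roll(frame, 1, axis=0) + np.roll(frame, 1, axis=1)) / 3.0

    # A ton of grain: additive gaussian noise, strength picked by eye.
    grain = np.random.normal(0.0, 0.05, size=frame.shape)
    return np.clip(soft + grain, 0.0, 1.0)

# e.g. processed = np.stack([cheap_8mm(f) for f in clip_frames])
```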
-
I think this is where the art and true deep knowledge come into this whole challenge. I don't know what tools or techniques you're using to make your adjustments, but there are a number of ways to accomplish any given thing, and certain ways are more or less likely to break an image. I highly recommend watching Juan Melara's YouTube videos where he matches things (and if you can find it, he did a video replicating the Linny LUT but pulled it down - maybe it's available somewhere else). I suggest this because he does a bunch of really cool things using alternate colour spaces (HSL, YUV, Lab, and more) and using tools that won't break the image, such as the channel mixer or curves. It's obvious that Juan can look at the response of a LUT or look, see the big picture, and know which global tools can align to that look in the easiest way without breaking it. Have you read many resources about film emulation? Happy to share my bookmarks if you're interested.

What reference setup are you using? I would imagine you'd want as high a quality of light as you can get (definitely a black-body radiation source), either the sun or a halogen lamp, and then just ignore all other lights as they will be inferior. The other challenge will be the shape of the RGB spectral sensitivities. The way that Juan has matched these is with the RGB mixer, although there might be some situations where that won't be completely effective - not sure. Yeah, huge variation exists between batches, processing methods and labs... Steve Yedlin had a lot to say about that in this article, which I'm assuming you're familiar with: http://www.yedlin.net/OnColorScience/

A phrase for colour matching that I really like is "in the same universe". It accounts for how closely you need to match individual shots in an edit, but it also accounts for what @MrSMW says about getting used to the look while watching a film, which is absolutely a factor too. We never watch the same scene through two different cameras / processes / grades, so differences have quite a degree of tolerance.

I have a theory that making a grade that allows for WB adjustments should be completely possible. I never got around to trying it, but my theory is this. The camera does things in a certain order:
1. light comes in and hits the sensor, with its spectral sensitivity
2. the camera applies the WB
3. the camera applies the colour profile
I think the secret is to organise your adjustment so that it peels the onion by reversing that order of operations, like:
1. undo the colour profile (GH5)
2. undo the WB (GH5)
3. adjust the spectral sensitivity from the source camera (GH5) to the target (film stock)
4. apply the WB adjustment of the target (film stock)
5. apply the colour properties of the target (film stock)
The challenge is separating the three layers. In my case I was matching two digital cameras, so you could just take a RAW still image in each, which lets you separate the colour profile from the sensor and WB. I think you could potentially still apply some of this logic by building the adjustment in a modular way and shooting the colour checkers in a range of WB situations (maybe using gels?). Then you might be able to put the WB and spectral sensitivity adjustments in their own nodes and see if they are compatible with the same colour profile when matching the reference images.
The order of operations isn't completely clear in my head, but I think you need controlled tests to work out the spectral sensitivities first using a proper WB, then the WB adjustment by shooting RAW stills at different WB settings, then the colour profile. Hopefully that made sense? It would require getting the order of operations completely nailed down and executing in a meticulous way, but if I'm right then it should be doable. If you make it modular like I suggest, then once you've done the GH5 version the Canon RAW version would be super easy - it would only require reverse engineering the spectral sensitivity and WB on the Canon and then applying the properties of the film stock, which you've already done. Juan Melara's discussion of how he made his BMPCC 4K and 6K to Alexa conversions, and his comments in my GH5 to BMPCC thread, were very interesting; I think because he was just going from RAW to RAW he didn't have to nullify the colour profile of either camera, only deal with the two lower levels.
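To make the ordering concrete, here's a minimal sketch of the "peel the onion" structure, assuming each stage can be modelled as an invertible function - all the function names are placeholders, not real Resolve operations or LUTs:

```python
# Placeholder functions standing in for each stage of the camera pipeline
# and its inverse; in practice these would be curves, matrices or LUTs
# built from the colour checker tests described above.

def camera_pipeline(scene_light, spectral_sens, white_balance, colour_profile):
    # What the camera does, in order.
    x = spectral_sens(scene_light)   # sensor's RGB spectral response
    x = white_balance(x)             # in-camera white balance
    x = colour_profile(x)            # picture profile / log curve
    return x

def gh5_to_film(footage,
                undo_profile_gh5, undo_wb_gh5, sens_gh5_to_film,
                wb_film, profile_film):
    # Undo the source camera in reverse order, then apply the target.
    x = undo_profile_gh5(footage)    # 1. undo the GH5 colour profile
    x = undo_wb_gh5(x)               # 2. undo the GH5 white balance
    x = sens_gh5_to_film(x)          # 3. GH5 spectral response -> film stock
    x = wb_film(x)                   # 4. apply the target's white balance
    x = profile_film(x)              # 5. apply the film stock's colour response
    return x
```

Keeping each stage in its own node (or function) is what makes it modular: swapping the source camera only means rebuilding the "undo" half, while the film-stock half stays the same.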
-
I completely agree. I'm a solo operator so I do literally everything, and I'm very aware that editing is limited to the footage captured - in fact one of my primary goals in learning editing is to shoot better footage in the first place. My biggest challenge in editing is not having a clear understanding of what I'm actually doing, but the second biggest has been working with my own rather mediocre footage! Everything in film-making should have an effect on the end result, either directly or indirectly, but the way the industry is segmented makes it hard to get that overall understanding. There are lots of resources available for each step in the process, or even each aspect of each profession within each step, but they mostly focus on doing things in a certain way "because that's how it works" rather than explaining the effect on the actual end result. Alternatively, there are people who are solo operators or work across many more departments and have a broader view, but they either don't know how to do things well, or do things really well but can't explain what they're doing and why. I've decided to devote my efforts in 2022 to learning editing (and sound design), as I feel it's the part of film-making I know the least about, and I think it might be the part that has the most to teach me about the overall process. Welcome to the forums!
-
I don't have V-Log so I can't apply it to my own footage, but I have a few thoughts. To me, colour charts can only get you so far with skintones because they don't have enough hues to work with. Assuming you're shooting your own film and can point the camera at whatever you want, I'd recommend making your own colour checker with a much more detailed set of hues around the skin-tone range. You could do this the arts-and-crafts way and mix some paints: get a hue that is quite red and one that's quite yellow, then blend them together in various proportions to create a number of steps that move smoothly between the two. If you then mix each of those with white in greater proportions, you should get a version of each hue in gradually diminishing saturations. I'd let them dry, then compare them to your own skintones and see if they're covered. You might have to try this a few times to get the source colours and strengths right, but you should end up with a grid that covers the pie slice in the vectorscope around the skin-tone indicator quite well. The other way is to do it digitally and have it printed (rough sketch at the end of this post), then compare it to real skintones and iterate if required. These charts won't be calibrated the way a colour checker is, but they'll let you really dial in the response of that critical region, which is useful as long as you can shoot test shots from both cameras under identical conditions. The other dimension is to shoot the charts at a number of exposures, to capture what happens to hues when they are exposed lighter and darker. Another thought: if you're not doing it already, you should confirm with a LUT stress-test image that you're not accidentally breaking the image. I use this one and find it very useful: https://truecolor.us/downloads/lut-stress-test-image/

Good luck with this - I've done a lot of camera matching over the last few years and it's a frustrating but interesting challenge, and very educational. Actually, doing this is a spectacular way to get a deeper understanding of colour science. I've matched cameras on multiple occasions and it's an amazingly difficult technical exercise if you want a good match, and I've learned a lot each time.

LUTs have a poor reputation which is mostly undeserved. Yes, there are lots of YouTube camera bros out there selling LUTs as the answer to getting good colour, but someone writing a bad book doesn't mean that literature is worthless. Apart from the fact that manufacturers supply technical LUTs to take their various LOG formats to a 709 colour space, the use of film-emulation LUTs is one of the best-kept secrets of the professional colour grading industry. I've heard that the majority of TV shows and movies involve a film-emulation LUT of some kind. It makes sense - if you're a professional colourist looking to get the best results in the shortest possible time, you'll use whatever tools can do that. It doesn't mean that those using a LUT aren't knowledgeable enough to have created it; some did in fact create their own film-emulation LUTs for their own use, their own secret sauce. Some even wrote their own plugins in a scripting language with all the mathematics involved. Colourists save their own PowerGrades to apply quickly, and a LUT is simply a type of PowerGrade. Using a LUT doesn't mean that the skill of a colourist isn't required.
It's the same as everything else: an Alexa and Master Primes don't make you a great cinematographer, Sky Panels and haze don't make your film look wonderful, and using an NLE doesn't mean you don't have to know how to edit.
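Here's the kind of thing I mean by doing the chart digitally - a rough sketch that blends a reddish patch into a yellowish one across one axis and desaturates towards white on the other. The two end-point colours and grid size are guesses to be tuned against real skintones, and the result obviously isn't calibrated:

```python
import numpy as np
from PIL import Image  # Pillow, just to write the grid out as an image file

# End-point colours are rough guesses; adjust until the printed patches
# bracket the actual skintones you care about.
red    = np.array([200,  80,  60], dtype=float)   # reddish skintone-ish hue
yellow = np.array([210, 170,  70], dtype=float)   # yellowish skintone-ish hue
white  = np.array([255, 255, 255], dtype=float)

steps_hue, steps_sat, patch = 10, 10, 60  # 10x10 grid of 60px patches

grid = np.zeros((steps_sat * patch, steps_hue * patch, 3), dtype=np.uint8)
for i in range(steps_hue):
    base = red + (yellow - red) * i / (steps_hue - 1)       # blend the two hues
    for j in range(steps_sat):
        col = base + (white - base) * j / (steps_sat - 1)   # desaturate toward white
        grid[j*patch:(j+1)*patch, i*patch:(i+1)*patch] = col.astype(np.uint8)

Image.fromarray(grid).save("skintone_chart.png")
```

Print it on decent paper and shoot it alongside a real colour checker, so you still have calibrated patches in the same frame.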
-
I agree that the early Alexas were emulating a 2K film scan, that with more resolution they move further away from that look, and that the look of cameras is converging. What's interesting to me is that on the one hand we have resolutions going up exponentially, taking us further and further away from the resolution / sharpness of film (specifically the resolution / sharpness of the film you would see in the theatre, which was a few generations removed from the original), while on the other hand we have a continued obsession with film emulation in colour science. Now, don't get me wrong - among the pros, DPs are tempering high-resolution sensors with vintage lenses / filters / haze, and in post the same people applying film emulation to the colour reproduction are also emulating the resolution / halation / gate-weave / grain, all of which reduce the effective resolution of the image. The pros seem to be taking a holistic view of the overall look. In the amateur / low-end space, however, it seems to be only about high-resolution sensors and lenses plus emulating the colour reproduction of film. It seems very strange. Keen to hear your thoughts on whether the target aesthetic is changing, or whether people have just lost sight of the whole picture and got swept up in the hype of the camera market.
-
I must admit that I have always been confused by their model numbering system. Something like the Panasonic lines (G, GX, GH) or Sony (Ax, AxR, AxS) or Canon (xD, xxD, xxxD) makes it easy for people to navigate. If Olympus had a system in their model numbers, I never saw anything that explained it.

That makes a lot of sense. I would have been surprised if, back when I was choosing between an Olympus and the GH5, the Olympus brand ambassador I was talking with had simply failed to mention that the Oly had PDAF! I do remember that the Olympus was more about getting it right in-camera, whereas the GH5 had the log profiles and 10-bit, which allow more flexibility in post. That definitely matters to me, as I shoot travel in available light and really appreciate the flexibility I can get in post.

Yeah, it definitely smells like a rebrand from some marketing people. I'd imagine such a thing is a pretty standard process - a failing brand comes on as a client and you do the normal brainstorming / focus group / market research / graphic design / blah blah blah stuff.
-
I'd suggest it's possible they'll get into medium format - it's where ARRI have gone, so why not. In terms of lenses, I wonder how many of their RF lenses actually cover MF? Maybe that's something they've been working on in the background; it would certainly help to justify their enormous prices. Or maybe they'll make an adapter for medium format glass from others, e.g. the ARRI medium format glass. The other thing that struck me was the 186x56mm size, which would obviously require specialist glass, but might be for recording a huge-resolution panorama for VFX backgrounds? Obviously it would need speciality electronics (sensor, processor, etc), but it seems like a plausible thing for the industry to want, similar to having Phantom cameras as a slow-motion speciality camera. Do you have any links for this? I'm curious to read a bit more.

I view it as a micro version of going to court and being acquitted. It's embarrassing, interrupts what I'm doing, takes up my leisure time, and is basically a whole lot of hassle for zero benefit. I prefer not to have to "educate" anyone, as quite frankly I'm skeptical of actually changing anything. My experience is that the security guards who are likely to hassle someone who is obviously not shooting anything professional aren't likely to actually listen to what you say. Plus, if I'm in a museum or wherever, they have full control and what they say goes, even if it makes no sense. You can try to escalate to someone more sensible, but you're putting them in a position of choosing between being sensible and supporting their staff, and they normally side with their staff because they have to keep working with that person. Anyone who has actually been to court and witnessed what can happen knows that the best strategy is to never get put in that situation in the first place. I went once to support a friend of a friend, and the judge was in a bad mood and basically gave everyone the maximum penalty - highlights included "I hate people like you" and "you wouldn't be here if you hadn't done something wrong". Another reason I'd prefer not to stand out, and thus, smaller cameras are better.
-
It's not about rules, it's about perception. We're at a funny time in history. Anyone can shoot in public with a phone and no-one bothers them. Governments and private residences can set up permanent security cameras that record people in public without their consent. It's legal in most public places (here in Australia anyway) to record video, and most private places such as museums, galleries, amusement parks and events allow photography and videography for private use. The only thing that really isn't allowed is professional shooting without a permit. Unfortunately, the way people tell the difference between the two is by the size of the camera. If I went to a park with a bunch of other parents and we all stood in a line recording our kids, everyone else with an iPhone and me with an FS5, I'd be seen differently. If I go to a museum and film my kids running around with a BMPCC6K and a shotgun microphone, I'm going to get interrogated by security.

Guerilla film-making is a phrase that indicates you are shooting without permission. It doesn't mean that you NEED permission, it just means you don't have it - just like all the mums with smartphones don't have it either. It means that fitting in and not getting noticed matters; it doesn't mean you're doing anything wrong. If someone calls security and I have to talk with them and convince them that I'm not doing anything wrong and they walk away, that's still a failure to me, because I don't want that to happen in the first place. It doesn't matter that I'm not doing the wrong thing. To give you an idea of how poorly venues distinguish between pros and amateurs: I went to a temple in Bangkok and there was a sign at the entry - 8mm film cameras are OK, 16mm film cameras are not. That was 2019. I hoped that no-one would think my GH5 was a 16mm film camera.
-
The patent system is broken. I remember years ago seeing enough examples that were beyond ridiculous that I now just think of it as yet another dysfunctional system where, if you have enough money, you can basically do anything. I just want lots of camera manufacturers to start making smaller cameras so that it gives us solo hand-held guerrilla shooters more options 🙂
-
This absolutely blows my mind and makes all kinds of sense. I've made a new thread because I think it needs to be highlighted more than in here, where it'll get buried quickly. So, the GH6 could have that sensor from the UMP12K, retain the MFT mount, and just use the middle 8K part, as that's roughly MFT sensor sized? That would be incredible. Could that be mounted on an IBIS mechanism? I don't really know how those things work. In theory then, assuming it kept the GH5 ethos, it could offer 8K downsampled to any resolution you want. The UMP12K can do 8K120 and 8K160 in 2.4:1 aspect ratio, so with downsampling it could offer 120p at any resolution you wanted without cropping. That would be something to put the GH6 on the map!
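As a rough sanity check of the "middle 8K of that sensor is about MFT sized" idea, here's some back-of-envelope arithmetic - the sensor dimensions below are the published specs as I understand them, so treat them as assumptions to verify rather than facts from this thread:

```python
# Published specs as I understand them (assumptions, please verify):
URSA12K_WIDTH_MM = 27.03   # URSA Mini Pro 12K active sensor width
URSA12K_WIDTH_PX = 12288   # "12K" horizontal photosite count
MFT_WIDTH_MM     = 17.3    # Micro Four Thirds sensor width

CROP_WIDTH_PX = 8192       # an "8K" centre crop

crop_width_mm = URSA12K_WIDTH_MM * CROP_WIDTH_PX / URSA12K_WIDTH_PX
print(f"8K centre crop width: {crop_width_mm:.1f} mm (MFT is {MFT_WIDTH_MM} mm)")
# -> about 18.0 mm, so the middle 8K of that sensor is in the same ballpark
#    as an MFT-sized image circle, if slightly wider.
```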
-
I am completely blown away by this, but it's great news. In another thread, @sanveer shared a link to Imatest, who have shown that lens design might be the limiting factor for the DR of a setup, not the sensor. This is great news, especially for those of us who aren't shooting on the most pristine modern lenses and don't want to. It means we don't need to keep buying cameras with higher and higher dynamic range specifications. I have actually been chasing lenses that have slightly less overall contrast, so that the veiling flare pulls up the shadows, giving the sensor a bit more light to digitise and lifting the shadows a bit further above the noise floor. It also makes the image much more like film, which has a nice shadow roll-off and looks more organic. Lots to unpack in this one.
-
Wonderful! More resolution wouldn't improve those images, quite the opposite I think.
-
you didn't need the word "res" in there... "Though at 10k maybe not." then it works either way 🙂 Heavily.
-
A recent Canon patent shows a RED Komodo style body with huge specifications. https://ymcinema.com/2022/02/11/canon-develops-high-end-boxy-cinema-camera/

It's an interesting glimpse into Canon's thinking. Firstly, it's a dead-ringer for the Komodo, including the screen on the top. It remains to be seen whether that will be a proper monitoring screen or just for controls, but it's potentially promising for building a small self-contained rig. But the specs show where Canon are prepared to go:
- 10K or more resolution
- 120fps or more
- sensors up to 56x42mm (a 4:3 sensor with a crop factor of 0.64) or even 186x56mm (a very wide aspect ratio, but the horizontal crop factor is 0.19!)
Now, this is a patent, and it makes sense for Canon to design a body that is future-proof so they can get economies of scale and standardise on accessories etc, so these are just the upper limits of what they might actually do - but they're essentially betting that these things are likely enough to be worth designing in. It's time to stop thinking "more is better" and start asking "how much is optimal", because Canon will forever be your pusher.
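For reference, here's the arithmetic behind those crop factors - it looks like they're computed horizontally against full frame's 36mm width (my assumption, but it reproduces the quoted figures):

```python
FF_WIDTH_MM = 36.0  # full-frame sensor width used as the reference

for width_mm, height_mm in [(56, 42), (186, 56)]:
    crop_factor = FF_WIDTH_MM / width_mm
    print(f"{width_mm}x{height_mm} mm -> horizontal crop factor {crop_factor:.2f}")

# 56x42 mm  -> 0.64  (noticeably larger than full frame)
# 186x56 mm -> 0.19  (an enormous panoramic capture area)
```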
-
Sorry to hear about your experience - hopefully someone on here can help you with some info. The ecosystems around these cinema cameras can be pretty difficult to navigate. When RED users talk to each other it seems like they're speaking a foreign language!
-
When I was in the market for my GH5 I was seriously considering both the A73 and an Olympus camera. I can't remember which Olympus model it was, but it was the direct competitor to the GH5. The equation basically boiled down to this:
- GH5 - poor AF, great codecs including 10-bit, very good stabilisation
- Oly - limited to 8-bit, excellent stabilisation (better than the GH5)
- A73 - great AF, limited to 8-bit codecs
I don't remember if the Oly had PDAF - I don't think I even knew that at the time. Later on I learned that it did (thanks to @Dave Maze on YT) and that it was one of the best-kept secrets - no-one was talking about it. Anyway, I realised that it came down to AF vs stabilisation vs codecs. I used one of my existing camera setups to try to learn manual focusing, and discovered that not only was it much simpler than I thought, but I actually preferred the experience (not silently screaming in my head as I watched the camera screw up my shot), and most importantly I learned that I preferred the more human feel of MF - including the mistakes and imperfections. So that ruled out the A73, because AF was its main feature, and left the Olympus and the GH5. I eventually decided that the 10-bit internal and higher-quality codecs were more important to me than the edge in stabilisation that the Oly had, and went with the GH5. I've often wondered why there was no love for Olympus, as the quality of the footage was there (I watched a LOT of it while I was evaluating cameras), but I think it was more that no-one knew about it. The GH5 was famous (10-bit! 400Mbps! ALL-I! etc) and infamous ("Panasonic AF has ruined my life!" "I want to kill myself!!" etc), but no-one even mentioned Olympus - not for its strengths or its weaknesses.
-
Noam Kroll just posted a new blog post comparing ProRes vs h265 on the iPhone, which makes interesting reading. Some aspects of the comparison are specific to the iPhone, but others apply more widely. https://noamkroll.com/filmic-pro-log-vs-prores-sample-images-test-results/ Incidentally, Noam is an example of someone who prefers shooting high-quality HD rather than lower-quality 4K or more, both for the aesthetic and for practical reasons. His blog is excellent and contains lots of articles discussing various aspects of film-making. In today's internet landscape he's a rare glimpse of a working film-maker balancing aesthetics and practicalities to get the best overall result from a film-making perspective.
-
Great camera test - I'm a big fan of testing gear by taking video of my family and editing it into a finished video that you can keep as a memory. The colour and images look really good. I also use crop modes on my setup (GH5), which can be a little softer due to the limited resolution of vintage lenses - I find that adding a bit more sharpening helps even out the shots.
-
You'll have to have a word with the talent then!!
-
It is a blanket statement, but it's my experience, and I've heard it frequently mentioned by others too. I shoot in the 200Mbps 10-bit ALL-I 1080p mode on my GH5 and it cuts really nicely, so the ALL-I definitely makes a difference, but it's not just the editing performance that I've heard pros say they appreciate - it's the aesthetic too. I'm also glad it's starting to appear in more and more low-priced cameras.
-
I think the GH6 will have to be impressive to be a success. In that sense, it will have to really beat the FF offerings, which have caught up with and overtaken the GH5. Unfortunately, people want the things you mentioned, which are better delivered by FF. The advantages of MFT are many, but you're just not interested in them. You want features that are better delivered by a larger sensor (47MP), are easier to deliver in a larger body (ND), or are well catered for in the common sensors (PDAF), and you're willing to have something almost the size of an S1H, which is absolutely enormous compared to MFT. I wonder how much Sony have chosen not to develop innovative MFT sensors in order to drive people to FF, where they can compete more easily. It's a smart strategy.
-
I've heard some editors say that the trend towards more cuts is just lazy film-making. I can't remember where, or I'd link it. The rationale was that cutting creates visual change, which makes things seem like they're moving and exciting. Frequent cutting gets used a lot in fight scenes. The example I saw compared a fight scene with lots of cuts (can't remember where it was from) to a fight scene from one of the original Bourne trilogy where Matt Damon fights a guy in Morocco (?), which had far fewer cuts but was still really brutal. When you looked at the two scenes one after the other, the scene with more cuts actually had less action and less innovative camera angles and cinematography. Basically they were trying to pump up a weak scene by making you constantly re-orient yourself so you wouldn't notice that the content wasn't that exciting. If you're curious, there's also a fascinating effect from extremely long shots that feels different to normal cinema - I remember both Russian Ark and Roma having it. It was like a different kind of cinema, at least once it'd been a while since the previous cut. IIRC Roma had a few sections of relatively normal cutting along with some hugely long shots.
-
I just assume that all content these days is streamed, so to clarify: I find triple-h26x compression (h26x in-camera, h26x export from the NLE, and h26x streaming compression) to be harsh, compared with double-h26x compression (ProRes in-camera, h26x export from the NLE, and h26x streaming compression). The acquisition compression matters more than the export and streaming compression because it's often in a flat format, so any digital nasties get amplified by all the processing (adding contrast and saturation, sharpening, etc). ProRes was designed as a professional digital intermediate, and the implementations in cameras have been honed over many years to be as visually optimised as possible. The bitrates will also have been selected to be generous, as professional video environments aren't so worried about a 10 or 20% increase in file sizes if it makes a difference to quality. The H.263/4/5/6 standards, on the other hand, were designed specifically for lower bitrates (source: https://en.wikipedia.org/wiki/Advanced_Video_Coding). The overall aesthetic quality of the h26x family is secondary to bitrate, because unlike a post-production house, Netflix or Prime might get bankrupted by a 20% increase in their total storage, processing, streaming infrastructure and data costs.
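To put rough numbers on why file sizes matter differently at different stages, here's some quick arithmetic - every bitrate below is an illustrative assumption of mine, not a figure from any spec or from this thread:

```python
def gb_per_hour(bitrate_mbps: float) -> float:
    # megabits per second -> gigabytes per hour of footage
    return bitrate_mbps * 3600 / 8 / 1000

prores_ish_acquisition = 200   # generous intermediate-style bitrate, Mb/s (assumed)
h26x_acquisition       = 100   # typical in-camera long-GOP bitrate, Mb/s (assumed)
streaming_delivery     = 8     # typical streaming bitrate, Mb/s (assumed)

print(f"200 Mb/s acquisition: {gb_per_hour(prores_ish_acquisition):.0f} GB/hour")
print(f"100 Mb/s acquisition: {gb_per_hour(h26x_acquisition):.0f} GB/hour")
print(f"  8 Mb/s streaming:   {gb_per_hour(streaming_delivery):.1f} GB/hour")

# For one production, the extra ~45 GB/hour of a fatter acquisition codec is
# trivial; for a streamer, even a small bump in delivery bitrate multiplies
# across storage, CDN and bandwidth for every single stream served.
```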
-
Still at it, but now looking at TV episodes. In contrast to the short YT films, here's the analysis from a 40 minute TV episode: almost 2000 cuts! The way this works is that you analyse a clip BEFORE adding it to the media pool, and once the clips are in the media pool you can't change their start and end points. During the analysis process you can further cut up the clips as they'll appear on the timeline, but you can't join any back together. I'm not going to review these manually, and there will be false positives (it thinks the arrival of a lens flare or other motion is a cut) and false negatives (a jump cut that isn't very different visually), so there is no threshold that will get them all right. As such, I take a cautious approach and set the threshold high, so it misses some real cuts but doesn't cut up shots on big movement. This way, if I'm analysing a part of the timeline I can cut it up further manually, but I won't be stuck with cuts that shouldn't be there. This approach gives about 1000 cuts, which works out to roughly a cut every 2.4 seconds - not too far off, really. Then I can start sorting, and in the parts where I want to see every cut I can refine manually at that point.
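For anyone curious what that threshold trade-off looks like in code, here's a naive stand-in for this kind of scene cut detection - it is not Resolve's actual algorithm, just a frame-difference threshold that shows why a high threshold trades missed cuts for fewer false positives:

```python
import cv2

def detect_cuts(path: str, threshold: float = 40.0):
    """Return approximate cut times (seconds) where consecutive frames differ a lot."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    cuts, prev, frame_idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            # Mean absolute difference between consecutive frames; a big jump
            # usually means a cut, but flares and whip pans also spike it,
            # which is exactly the false-positive problem described above.
            if cv2.absdiff(gray, prev).mean() > threshold:
                cuts.append(frame_idx / fps)
        prev = gray
        frame_idx += 1
    cap.release()
    return cuts

# e.g. cuts = detect_cuts("episode.mov")
# average shot length = runtime / (len(cuts) + 1); 2400 s / ~1000 cuts is the
# "cut every 2.4 seconds" figure above.
```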