Everything posted by kye
-
I moved to Apple from PC / Android knowing that I was paying more for a given level of hardware performance. What I'm getting, however, is a higher level of personal performance, and that is what I'm paying extra for. The amount of time I spent in the PC world struggling to get Microsoft Windows to properly use the BrandA driver for the BrandA chip on the motherboard with the BrandB driver for the BrandB plug-in interface card to connect to the BrandC device I wanted to use... On Mac I spend time working on my work, not on trying to get my computer to do what I tell it.

My dad was in charge of the PCs for a large educational institution and once had the pleasure of ordering a custom-built server (~$10k worth of high-end hardware); after a month of not being able to make it work he sent it back to the distributor. Luckily the distributor was happy to take it back, as the school bought large amounts of equipment from them. The problem was that the machine wouldn't complete an installation of the OS; it was a known issue, and the RAID controller manufacturer, the HDD manufacturer and the motherboard manufacturer were all blaming each other for why the combination didn't work. At Apple, if something doesn't work, someone gets yelled at and told to fix it, which works because they control all the moving pieces.

You'll notice that Apple vs PC articles that recommend PC over Mac often point out that hardware performance (ie, MFLOPS or particular chipsets) is cheaper on PC, that for every good feature a Mac has there's a PC with a better specification, etc. The flaws in this logic are that CPU speed is not workflow speed, and that Laptop A might have a better screen and Laptop B might have better battery life, but you can't buy a laptop with the screen from A and the battery from B. Articles that recommend Mac over PC often talk about how 'things just work', which unfortunately isn't as true with the last few OSX versions as it used to be, but I sure as hell don't spend time on forums reading about manufacturers blaming each other for why my mouse won't work.

In case I sound like an Apple fanboy, I'm really not. Apple are monopolistic corporate criminals who are large enough to exploit the weaknesses in international taxation law and basically shaft every country they do business in, which would justifiably put any of us who did it in jail in a matter of minutes, but they're big enough to get away with it. The reason I buy their stuff is that they suck slightly less than the alternatives.
-
I see this from a slightly different point of view. The main 'pro' that I see is that it has the potential to improve workflows for those who are already using the compatible equipment (FCPX and external recorders) or are willing to change their workflow to do so. The main 'con' that I see is that this is quite a deliberate move on Apple's behalf to further separate themselves and their ecosystem from being globally compatible.

As someone who is now an Apple user (MBP, iPhone, iPads) but didn't used to be (PC, Android), I became acutely aware of how Apple will deliberately do things to restrict global compatibility and drive people to an all-Apple ecosystem. My disdain for Apple is significant, and perhaps only surpassed by my disdain for Microsoft, who are also masters of playing this game, which is one of the main reasons I changed platforms.

It's not the creation of a new codec that is the risk here, it's that it makes it more likely that other, more open codecs will not receive as much attention or support. We have all suffered as innocent bystanders in 'format wars', from Blu-ray v HD DVD back to VHS v Beta, and Quad before that. Some of us have also suffered at the hands of a feature that promised interoperability but was unusable due to some kind of bug that the manufacturer never fixed, and we lost months or years waiting and hoping it would be rectified. Manufacturers are out for themselves and will shaft the customer as much as they can get away with while still gaining sales by being the 'least worst'. Take Canon and ML as an example of what was possible vs what was delivered.
-
I tend to see people as either contributing to a conversation (be it information, questioning, humour, etc) or not contributing (time-wasting, being negative, driving agendas, etc). The best response to those not contributing is no response - I think there was actually some science done on this, but I'm not 100% sure. The problem, however, is that we are always so tempted to reply that the urge often gets the best of us. Another wonderful symptom of the human condition! The still you posted above looks great Jon - is it part of a short or feature, or a camera test of some kind? I particularly liked the combination of saturated blues and desaturated browns, as well as the key lighting (Rembrandt lighting IIRC?).
-
Where/how do you see the data in those 2-4 bits? I mean, is it in the highlights, shadows, saturation, 'thickness' of image..?? I compared bit depths and couldn't see any difference, but I also don't know where to look.
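If anyone wants to go hunting themselves, here's roughly how I'd try to make the difference visible - a minimal Python sketch, assuming you have two exports of the same frame at different bit depths (the file names are made up). Banding from lower bit depths usually hides in smooth gradients (skies, shadows, skin), so differencing and amplifying is one way to find it:

```python
# Difference two exports of the same frame and amplify the result so that
# subtle bit-depth differences (banding in gradients) become visible.
# The file names are hypothetical stand-ins for your own exports.
import numpy as np
from PIL import Image

a = np.asarray(Image.open("frame_14bit.png"), dtype=np.float32)
b = np.asarray(Image.open("frame_10bit.png"), dtype=np.float32)

diff = np.abs(a - b)
print("max difference:", diff.max(), "mean difference:", diff.mean())

# Amplify the difference so it becomes visible; the 32x scale is arbitrary.
amplified = np.clip(diff * 32, 0, 255).astype(np.uint8)
Image.fromarray(amplified).save("difference_x32.png")
```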
-
So, in short, a camera that is excellent when it is used as one third of a cine setup, but not a good standalone performer...?
-
I did a bit of reading before my previous reply and I didn't get a straight answer on LUTs vs floating-point algorithms. The logic on LUTs was that they have two main problems: a lack of data points, and potential issues in between the data points.

The thread I read included people saying that even LUTs with thousands of data points are still only the equivalent of a curve with about ten control points on it, so in terms of matching the foibles of a sensor put through a camera's internal profile transforms, a LUT is potentially going to have a bit of error. The second issue, what happens in between the points, isn't that interpolation isn't possible (there are all the variations of linear / quadratic / polynomial / etc functions available); it's that there's no consistency between standards, so although your program might do a good job of interpolation, who knows what the software the LUT was designed on was doing, or whether they match. This becomes much more important the lower the number of data points in the LUT.

What I took from that was that RCM and ACES are ok and LUTs are a question mark. In the end the proof is in the pudding (as the common saying goes), so it's just whether you prefer the final result or not. When I think about this stuff I get enthusiastic about taking some time to try and reverse-engineer what a particular LUT or transform is doing so I can learn from it, but then life happens and my energy fades.. one day I might get around to it. I haven't forgotten my homework.
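To make the interpolation point concrete, here's a tiny Python sketch. The 5-point LUT is a contrived example (far sparser than any real LUT), but it shows how far linear interpolation between sparse points can land from the curve the LUT sampled:

```python
# A deliberately sparse 1D LUT sampling a gamma 2.2 curve, to show how the
# interpolated value between entries can drift from the underlying curve.
import numpy as np

lut_in  = np.linspace(0.0, 1.0, 5)   # 5 evenly spaced input points
lut_out = lut_in ** (1 / 2.2)        # the curve the LUT was built from

x = 0.1  # a pixel value that falls between LUT entries

linear = np.interp(x, lut_in, lut_out)   # what a linear interpolator returns
exact  = x ** (1 / 2.2)                  # what the actual curve returns

print(f"linear interpolation: {linear:.4f}")   # ~0.2130
print(f"actual curve value:   {exact:.4f}")    # ~0.3511
print(f"error:                {abs(linear - exact):.4f}")
```

A 33- or 65-point cube obviously does far better than five points, but the principle is the same: the error depends on both the density of the points and the interpolation method the software happens to choose.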
-
There is logic to this as there is convenience in a manufacturer providing compatibility across their range of cameras. Not only does it mean that on set you can have the A-cam and B-cam (and C-cam?) arrangement sharing lenses etc, but it also means that when someone buys into a system at the lower end there's less friction for them to upgrade within the system. I have no idea what compatibility there is within their current range, but this would be an opportunity to further align things.
-
Challenge accepted! The workflow comes from here. I'm currently trying to get a workflow that allows camera matching between all my setups, and either this setup with the Colour Space Transform plugin or potentially ACES transforms seem like the best candidates. Unfortunately I haven't found profiles for ML RAW, iPhone, or Protune, so I'm still left to rely on my (modest) grading abilities. In terms of colour space transforms, don't confuse them with LUTs, as they have advantages; not clipping the data is one of them. The above suggests they are lossless transformations, so they don't degrade signal quality (and Resolve has a very high bit depth internally).
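To illustrate the 'not clipping' point, here's a rough Python sketch using the published Rec.709-to-Rec.2020 primaries matrix (per ITU-R BT.2087). In floating point, out-of-range values simply pass through the transform and come back out the other side of the round trip:

```python
# Matrix-based colour space transforms in floating point are invertible:
# "illegal" values outside 0..1 go negative or above 1.0 but are preserved,
# which is why a CST doesn't clip the way a 0..1-bounded LUT can.
import numpy as np

# Rec.709 -> Rec.2020 primaries matrix (values per ITU-R BT.2087).
rec709_to_2020 = np.array([
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
])
rec2020_to_709 = np.linalg.inv(rec709_to_2020)

pixel = np.array([1.2, -0.05, 0.8])  # out-of-range values

round_trip = rec2020_to_709 @ (rec709_to_2020 @ pixel)
print(round_trip)  # ~[1.2, -0.05, 0.8]: nothing was thrown away
```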
-
An XC update could add another contender into this space for sure. One thing that I think is a big deal, and that people aren't really talking about in the context of new cameras, is what ML RAW is doing for existing Canon cameras. I had the 700D and wanted better video quality, so I was looking at the XC10 / BMPCC / RX10 / etc, but now my new 700D + ML RAW + Sigma 18-35 + Rode VideoMicro setup really has changed how I think about that camera. In some ways it's far surpassed the BMPCC - not quite in image quality, but definitely in battery life and sound quality, and with ML RAW crop mode my 18-35 is also an 87-169mm without a loss of resolution and without having to cart extra lenses around (although the 18-35 is as large and heavy as two lenses!). I have no idea if it's possible for ML, but if they could make a module that did full-sensor readouts (5K video) and saved compressed files with variable bitrates, that might take it into another league again.
-
Holy wow.. that is a KILLER product. Technology is moving forwards in leaps and bounds, it really is incredible! I don't need one, I don't need one, I don't need one, I don't need one, I don't need one, I don't need one, I don't need one, I don't need one...... *mutters*
-
I recently watched an interview with a DoP talking about equipment, and he basically said that his preference is for the equipment not to impart anything to the 'look' of the film, so he chooses equipment that will accurately capture what is put in front of it. For him it was about operating the camera like a technician, choosing lenses that are sharp with minimal distortion, and faithfully executing the direction of the artistic people on set, like the director. His view was that the look of a film is created by the stuff in front of the camera and what is done in post. I've just spent about 20 minutes trying to find the link and FML I can't find it.. I consume too much from too many sources! He did mention that some people like using vintage lenses and creative filters etc, and didn't criticise that approach. I must admit that I personally find this perspective to make sense, and I've looked at things like the Tiffen Mist filters and decided that I can do a 'good enough' emulation of them in post (which led me to include the Glow OFX plugin in my workflow), with the added benefit that the effect isn't baked into the footage, so I can tweak it in post to get it how I like rather than being stuck with what the filter gives me.
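For what it's worth, the basic idea is simple enough to sketch. This is just my guess at the general mist/glow recipe (threshold the highlights, blur them, screen-blend them back), not what the Glow OFX plugin actually does internally, and the file names are made up:

```python
# Mist/glow emulation sketch: isolate highlights, blur them, screen-blend
# the result over the original. All parameters stay adjustable in post.
import numpy as np
from PIL import Image, ImageFilter

img = np.asarray(Image.open("frame.png").convert("RGB"), dtype=np.float32) / 255.0

# Highlights only: anything above the threshold contributes to the glow.
threshold = 0.7
highlights = np.clip(img - threshold, 0, 1)

# Blur the highlight pass; the radius controls how far the glow spreads.
blurred = np.asarray(
    Image.fromarray((highlights * 255).astype(np.uint8)).filter(
        ImageFilter.GaussianBlur(radius=25)
    ),
    dtype=np.float32,
) / 255.0

# Screen blend keeps the glow additive-looking without hard clipping.
strength = 0.6
out = 1 - (1 - img) * (1 - blurred * strength)

Image.fromarray((np.clip(out, 0, 1) * 255).astype(np.uint8)).save("frame_glow.png")
```

The win over a physical filter is right there in the parameters: threshold, radius and strength stay adjustable forever, instead of being baked into the negative.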
-
Thanks! That comment means a lot coming from you. At the risk of providing too much information, the total workflow was:

- ML RAW 1728 10-bit
- MLV App (Mac) ---> Cinema DNG Lossless
- Resolve Clip Node 1: WB, basic levels
- Resolve Timeline Node 1: OpenFX Colour Space Transform (Input Gamma Canon Log ---> Output Gamma Rec.709)
- Resolve Timeline Node 2: Noise reduction (chroma only - the luma noise in RAW is quite pleasant)
- Resolve Timeline Node 3: Desaturate yellows (Hue vs Sat curve), desaturate shadows + highlights (Lum vs Sat curve)
- Resolve Timeline Node 4: Slightly desaturate the higher-saturated areas (descending Sat vs Sat curve)
- Resolve Timeline Node 5: OpenFX Colour Space Transform (Input Gamma Rec.709 ---> Output Gamma Arri LogC), then OpenFX Colour Space Transform (Input Colour Space Canon Cinema Gamut ---> Output Colour Space Rec.709)
- Resolve Timeline Node 6: 3D LUT - Rec709 Kodak 2383 D65
- Resolve Timeline Node 7: Sharpen OFX plugin
- Resolve Timeline Node 8: Film Grain OFX plugin (custom settings, but similar to 35mm grain with saturation at 0)
- Resolve Timeline Node 9: Glow OFX plugin
- Resolve Fairlight settings: master channel has a compressor applied to even out the whole mix
- Render settings: 3840 x 1632 H.264 restricted to 40000Kb/s

I credit the overall architecture of the grade to Juan Melara - I cannot recommend his YouTube channel enough. To those starting out, in case that looks like a stupid amount of work, it's fast as hell once you save the structure in a Powergrade. Once I'd converted to CinemaDNGs the whole edit process only took a couple of hours, including music selection.
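In case the Node 5 gymnastics look odd: the Kodak 2383 LUT expects LogC in, so you convert into LogC before it. Here's a rough Python sketch of what the gamma leg of that conversion is doing conceptually - the constants are the published Arri LogC3 (EI 800) values, and the 2.4 decode is my assumption about the CST's Rec.709 setting, not something from Resolve's documentation:

```python
# Decode a Rec.709-encoded value to linear light, then re-encode it with the
# Arri LogC curve, so a print-film LUT built for LogC sees what it expects.
import math

# Arri LogC3 (EI 800) encoding constants (published by Arri).
A, B, C, D = 5.555556, 0.052272, 0.247190, 0.385537
E, F, CUT = 5.367655, 0.092809, 0.010591

def rec709_to_linear(v, gamma=2.4):
    # Assumed pure-power decode; Resolve's CST may use a slightly different one.
    return v ** gamma

def linear_to_logc(x):
    if x > CUT:
        return C * math.log10(A * x + B) + D
    return E * x + F

for v in (0.0, 0.18, 0.5, 1.0):
    print(f"Rec.709 {v:.2f} -> LogC {linear_to_logc(rec709_to_linear(v)):.3f}")
```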
-
@jonpais Interesting thread. My (little) experience on film sets is more than enough to understand the benefits of being able to prepare a focus pull in advance. To add to the completeness of this thread for those lurking out there, in addition to the benefits of cinema lenses already mentioned, IIRC they are also designed so that all the lenses in a set are the same weight (which means not having to re-balance gimbals or adjust steadicam rig counterweights when changing lenses), share the same filter size (so one set of filters covers the whole set), and have the same gear spacing so that remote controllers for focus pulls or zooms don't have to be adjusted. The total hourly rate of a film set is absolutely huge, and these lenses are designed to be quick to change and use.
-
And fix some of the known weaknesses of the predecessor, like sound quality and battery life. I shortlisted the BMPCC but went with the XC10 because my work is run-and-gun and the BMPCC is more like a baby cinema camera by the time you kit it out with all the extra stuff you need to make it a practical setup.
-
First publishable ML RAW test. 700D, ML RAW, Sigma 18-35. Not a great example of any aspect of film-making, but it might still be of interest as an example of lesser film-makers with lesser equipment. One thing I found surprising was that the ISO noise from the 700D (not the quietest camera of all time, especially at 800-3200 in this video) was quite organic and natural looking. As the noise changed a lot between shots, I had a go at the NR+Grain technique that @kidzrevil has mastered, to make the video more consistent. Enjoy!
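For anyone curious, the NR+Grain concept is simple: smooth out the varying camera noise, then lay one uniform synthetic grain over everything so every shot shares the same texture. In practice I do this with nodes in Resolve; this is just a rough Python sketch of the idea, with hypothetical file names:

```python
# Crude chroma-only NR followed by uniform monochrome grain, so that shots
# with different native noise levels end up with a consistent texture.
import numpy as np
from PIL import Image, ImageFilter

img = Image.open("shot.png").convert("RGB")

# Chroma-only NR: blur only the colour channels, keep luma detail intact.
y, cb, cr = img.convert("YCbCr").split()
cb = cb.filter(ImageFilter.GaussianBlur(2))
cr = cr.filter(ImageFilter.GaussianBlur(2))
denoised = np.asarray(
    Image.merge("YCbCr", (y, cb, cr)).convert("RGB"), dtype=np.float32
)

# Uniform monochrome grain with the same statistics for every shot.
rng = np.random.default_rng(seed=1)        # fixed seed: repeatable grain
grain = rng.normal(0.0, 6.0, denoised.shape[:2])[..., None]

out = np.clip(denoised + grain, 0, 255).astype(np.uint8)
Image.fromarray(out).save("shot_consistent.png")
```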
-
First (publishable) test video shot with the 700D / ML RAW / Sigma 18-35. It was all hand-held and in low light (challengingly low light), and I've added some grain in post to cover up the noise from the camera (different shots had different amounts of noise, so it was distracting). Notes:

- It's very hard to tell what's in focus and what is slightly off while filming - this was all f1.8, so shallow depth-of-field
- The RAW video is limited to 1728 x 786, so not quite FullHD, but I uploaded in 4K to get a bit better quality out of YouTube
- The lens is heavy! But that helps stabilise it a bit, which is a plus. Manual focus is very nice, and so is the rest of the design of the lens, actually
- The camera has lots of video noise (which is why I bought the XC10), but in RAW the noise looks less objectionable and is in a way more like film. This had shots up to ISO 3200 or more in it, which are challenging conditions for a camera this old
- The lens focuses really close, which is handy for controlling perspective
- It's not likely to replace my XC10 any time soon, but it is very nice and has a different aesthetic to it
- I need to shoot with it more to 'understand' the look and work out how I want my videos to look

After we ate dinner there was dancing and singing, and I ended up trying the crop mode to extend the focal length - it seems like a great feature. With the 18-35 without crop mode you get the equivalent of 29-56mm, and with crop mode you get 87-169mm (see the arithmetic sketch below). That really is excellent coverage considering you don't have to carry anything extra or change lenses, and the 87-169 is a lot faster than my other long lens, the 55-250 f4-5.6. The more I film with this combination (with the Rode VideoMicro) the more I like it.
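The equivalent focal lengths come from stacking the APS-C crop with the crop-mode window. A quick Python check - note the ~3x crop-mode factor is my inference from the numbers above, not an ML spec:

```python
# Equivalent focal length = lens focal length x sensor crop x crop-mode window.
APSC_CROP = 1.6          # 700D sensor crop vs full frame
ML_CROP_FACTOR = 3.02    # additional crop-mode window (inferred, not a spec)

for focal in (18, 35):
    full_frame_equiv = focal * APSC_CROP
    crop_mode_equiv = full_frame_equiv * ML_CROP_FACTOR
    print(f"{focal}mm -> {full_frame_equiv:.0f}mm equiv, "
          f"{crop_mode_equiv:.0f}mm in crop mode")
    # 18mm -> 29mm equiv, 87mm in crop mode
    # 35mm -> 56mm equiv, 169mm in crop mode
```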
-
This card-reader hack is at the bleeding edge of the bleeding edge. When something hasn't been included in the experimental build yet..... Still, lots of people getting >40MB/s readings sure is promising. When it's stable I'll be using it for (1728 / 16:9 / 10bit / 48MB/s) and (1792 / 16:9 / 10bit / 51.6MB/s)... but I'd be interested to see if they manage to figure out modes above 1728 without a sensor crop.
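Those data rates are easy to sanity-check: photosites x bit depth x frame rate. A quick Python version, assuming 24fps (the frame rate isn't stated above) and treating MB as 2^20 bytes, which is what reproduces the figures:

```python
# Uncompressed RAW data rate for a given recording width at 16:9.
def raw_rate_mb_s(width, bit_depth=10, fps=24, aspect=16 / 9):
    height = round(width / aspect)
    bytes_per_frame = width * height * bit_depth / 8
    return bytes_per_frame * fps / 2**20   # MB here meaning MiB (2^20 bytes)

for width in (1728, 1792):
    print(f"{width}: {raw_rate_mb_s(width):.1f} MB/s")
    # 1728: 48.0 MB/s
    # 1792: 51.7 MB/s
```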
-
Not sure if you're on the CML? https://cinematography.net I've only dipped my toes in the water a little there, but it seems to have lots of people who casually talk about RAW workflows for Red and Arri cameras. A very different world than the one I live in!
-
Be aware though that although Casey and a number of other YouTubers seem quite proficient and can make some nice footage, if you're serious about learning to colour correct and grade then you should pay for some online courses from the real experts. I spent ages watching YT tutorials and noticed that some people had a 'feel' for grading, but after a while I realised they were just playing with the controls and didn't actually know what was going on, or why you might use one set of controls over another. I'd recommend checking out Juan Melara's channel - it's obvious that he's a pro and that he works in a radically different way to those creating free YT tutorials: https://www.youtube.com/channel/UCqi6295cdFJI9VUPzIN4NXQ/videos
-
I don't know for sure, but I suspect it's about maintaining quality throughout the workflow. For example, a long-GOP capture format will 'bake in' motion issues, those issues may be worsened by intermediate processing steps (and of course each time you round-trip you're baking things in), and then the final delivery format gets put over the top of all of that. If the imperfections in the way the long-GOP codec behaves are worsened by similar issues in the input, you might find that the motion issues compound, or worse, are converted by some other limitation in the format into some other type of secondary issue.
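The compounding part is easy to demonstrate with ffmpeg by re-encoding the same clip for a few generations. A rough Python sketch - assumes ffmpeg is on the PATH, and input.mp4 is a hypothetical stand-in for whatever clip you have handy:

```python
# Re-encode a clip through a long-GOP codec several generations; each pass
# starts from the previous pass's artefacts, so the damage compounds.
import subprocess

src = "input.mp4"
for generation in range(1, 4):
    dst = f"generation_{generation}.mp4"
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-c:v", "libx264", "-b:v", "8M", dst],
        check=True,
    )
    src = dst  # the next generation starts from this one's output
```

Comparing generation_3 against the original side by side makes the round-trip point fairly obvious, especially on fast motion.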
-
I will do this and try as hard as I possibly can to see no difference.. I'd take being 'motion cadence blind' and under 25 any day!!
-
No idea on LUTs, but this tutorial might help if you're just starting out and are unfamiliar with matching shots. If you don't use Resolve it might be worth a watch anyway, as the controls he uses are fairly standard ones.
-
This depends on your situation. If you're editing 4K RAW files and the Producer and Director are sitting behind you in your commercial edit suite, then no. If you want to grade that footage with them watching then HELL NO. I suspect that this is not the situation you're in, so it's all about compromises. My sister studied film at university in the late 90s and I remember sitting in an edit suite all night helping her edit the documentary (and fix the terrible audio) she shot on a PD150 off a removable HDD, and we'd make the changes, hit RENDER, and then go for a walk while the computer re-rendered the changes we'd made, and then 30 minutes later it would be ready to watch and we'd review it and make more changes and then go for another walk... I edit 4K on a laptop but I use a 720p proxy workflow, which may or may not work for you.
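For anyone wanting to try it, the proxy generation side is simple to script. A rough Python sketch using ffmpeg (assumed installed; the folder names are made up) that creates 720p ProRes Proxy copies - you cut with the proxies, then relink to the 4K originals for the grade and final render:

```python
# Batch-generate 720p ProRes Proxy files from a folder of 4K originals.
import pathlib
import subprocess

pathlib.Path("proxies").mkdir(exist_ok=True)

for clip in pathlib.Path("source_4k").glob("*.mov"):
    proxy = pathlib.Path("proxies") / clip.name
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(clip),
         "-vf", "scale=-2:720",                    # 720p, even-numbered width
         "-c:v", "prores_ks", "-profile:v", "0",   # profile 0 = ProRes Proxy
         "-c:a", "copy", str(proxy)],
        check=True,
    )
```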
-
I just took a quick look over that thread and saw a lot of discussion about how the codec can introduce jitter / stuttering into something that didn't have it before. If that's true (and assuming I read it right) then the signal path might matter just as much as the capture device. Technology is normally very good at things like this, with error rates in the realm of parts-per-million (ie, only varying by something like 0.0001%). The problems come in when there are secondary effects of such a small error that end up as something we're very good at perceiving. For example, in digital audio jitter can be quite audible despite being a very small variance. The reason it becomes audible isn't that you hear it directly (you don't hear the music speeding up and slowing down!) but that the timing is used to convert digital to analog, and small timing variations cause harmonic distortion, which even if it isn't audible by itself then causes intermodulation distortion - both of which are things we are quite attuned to hearing. So the problem is that these small timing errors can have audible knock-on impacts. In terms of frame rates, though, I can't think of any knock-on impacts we'd be more sensitive to, or where the impact would be exacerbated.
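The audio case is easy to simulate if you're curious. A rough numpy sketch: sample a pure tone with a couple of nanoseconds of random timing error and a measurable error floor appears, even though the timing wobble itself is absurdly small:

```python
# Simulate sampling jitter: compare a tone sampled at ideal times against the
# same tone sampled with tiny random timing errors; the difference is the
# distortion/noise the jitter adds.
import numpy as np

fs = 48_000            # sample rate (Hz)
tone = 1_000           # test tone (Hz)
n = 1 << 16
t = np.arange(n) / fs

rng = np.random.default_rng(0)
jitter = rng.normal(0.0, 2e-9, n)       # 2 ns RMS timing error

clean = np.sin(2 * np.pi * tone * t)
jittered = np.sin(2 * np.pi * tone * (t + jitter))

# Everything that isn't the original tone is content added by the jitter.
error = jittered - clean
print(f"error level: {20 * np.log10(np.std(error) / np.std(clean)):.1f} dB")
# ~ -98 dB: tiny, but real playback chains and real music give it company.
```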
-
IIRC it was lowepost.com but I just had a quick look and couldn't find it. If that isn't it then maybe LGG?