Posts posted by kye

  1. 9 hours ago, Amazeballs said:

    I use Resolve, will watch it, thanks! I guess that is what I need. Funny that I am subbed to Casey anyway, but haven't seen this video yet.

    Be aware though that although Casey and a number of other YouTubers seem quite proficient and can make some nice footage, if you're serious about learning to colour correct and grade then you should pay for some online courses from the real experts.

    I spent ages watching YT tutorials and noticed that some people had a 'feel' for grading but after a while I realised they were just playing with the controls and didn't actually know what was going on, or why you might use one set of controls over another.

    I'd recommend checking out Juan Melara's channel - it's obvious that he's a pro and that he works in a radically different way to those creating free YT tutorials: https://www.youtube.com/channel/UCqi6295cdFJI9VUPzIN4NXQ/videos

  2. 7 hours ago, jonpais said:

    Aren't online videos Long GOP? How can you tell the difference between these cameras by comparing them on YouTube? Don't know for sure, just asking...

    I don't know for sure, but I suspect it's about maintaining quality throughout the workflow.  For example, if you have a long-GOP capture format it will 'bake in' motion issues, those issues may then be worsened by intermediate processing steps (and of course each time you round-trip you're baking things in), and then the final delivery format gets put over the top of all of that.

    If the imperfections in the way the long-GOP encoding behaves are reinforced by similar issues in the input, then you might find that the motion issues compound, or even worse, are converted by some other limitation of the format into some other type of secondary issue.
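
    If I wanted to test the generational part of that, something like this rough sketch would do it (a minimal sketch assuming ffmpeg is installed; "clip.mp4" and the GOP/CRF settings are hypothetical, not anyone's actual pipeline):

```python
# Sketch: measure generational loss from repeated long-GOP re-encodes.
# Assumes ffmpeg is on the PATH; "clip.mp4" is a hypothetical source clip.
import subprocess

SRC = "clip.mp4"
GENERATIONS = 3

prev = SRC
for gen in range(1, GENERATIONS + 1):
    out = f"gen{gen}.mp4"
    # Re-encode as long-GOP H.264 (30-frame GOP), as an intermediate step might.
    subprocess.run(["ffmpeg", "-y", "-i", prev, "-c:v", "libx264",
                    "-g", "30", "-crf", "23", "-an", out], check=True)
    # Measure this generation against the original; ffmpeg prints PSNR to stderr.
    result = subprocess.run(["ffmpeg", "-i", out, "-i", SRC,
                             "-lavfi", "psnr", "-f", "null", "-"],
                            capture_output=True, text=True)
    stats = [l for l in result.stderr.splitlines() if "PSNR" in l]
    print(f"Generation {gen}: {stats[-1] if stats else 'no PSNR reported'}")
    prev = out
```

    Each pass re-encodes the previous generation, so you can watch the PSNR against the original fall as the round-trips stack up.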


  3. 9 minutes ago, mercer said:

    Idk, if you think you're desensitized to that, then I guess I would start with something simple like looking at two shots of the exact same footage, with one shot in 24p and one shot in 30p. Use the 180-degree rule for both to keep the other variables consistent (so 1/60th shutter for the 30p footage and 1/48 or 1/50 for the 24p footage). See what you notice.

    If you notice no difference whatsoever, then you are probably 25 or under and it won’t matter because you’re the future and all of this stuff will be antiquated in 10 or 20 years when you guys are running the world. 

    I will do this and try as hard as I possibly can to see no difference...  I'd take being 'motion cadence blind' and under 25 any day!!

  4. 12 hours ago, Dan Wake said:

    Are 16GB of RAM enough to edit 4K video (for example, 4K HDR files from the new Sony mirrorless)?

    Is a mechanical HDD (Black version) fast enough to work with 4K video files?

    Will we see cheap camera sensors that solve the problem of rolling shutter anytime soon?

    thx

    This depends on your situation.

    If you're editing 4K RAW files and the Producer and Director are sitting behind you in your commercial edit suite, then no.  If you want to grade that footage with them watching then HELL NO.

    I suspect that this is not the situation you're in, so it's all about compromises.  My sister studied film at university in the late 90s and I remember sitting in an edit suite all night helping her edit the documentary (and fix the terrible audio) she shot on a PD150 off a removable HDD, and we'd make the changes, hit RENDER, and then go for a walk while the computer re-rendered the changes we'd made, and then 30 minutes later it would be ready to watch and we'd review it and make more changes and then go for another walk...

    I edit 4K on a laptop but I use a 720p proxy workflow, which may or may not work for you.
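
    The proxy step can be as simple as a batch ffmpeg pass - a rough sketch of the idea (not my exact setup; the folder names and .mov extension are hypothetical):

```python
# Sketch: batch-generate 720p proxies for offline editing.
# Assumes ffmpeg is installed; folder names are hypothetical.
import pathlib
import subprocess

SOURCE_DIR = pathlib.Path("footage_4k")      # camera originals
PROXY_DIR = pathlib.Path("proxies_720p")
PROXY_DIR.mkdir(exist_ok=True)

for clip in SOURCE_DIR.glob("*.mov"):
    proxy = PROXY_DIR / clip.name
    # Downscale to 720p at a modest CRF so the proxies stay small and scrub smoothly.
    subprocess.run(["ffmpeg", "-y", "-i", str(clip),
                    "-vf", "scale=-2:720",
                    "-c:v", "libx264", "-crf", "28", "-preset", "fast",
                    "-c:a", "aac", "-b:a", "128k", str(proxy)],
                   check=True)
    print(f"Proxy written: {proxy}")
```

    Edit against the proxies, then relink to the 4K originals for the grade and the final render.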

  5. 18 minutes ago, IronFilm said:

    So would having a box for genlock make an image more "cinematic"?

    I just took a quick look over that thread and saw a lot of discussion about how the codec can introduce jitter / stuttering into something that didn't have it before.  If that's true (and assuming I read it right) then the signal path might matter just as much as the capture device.

    Technology is normally very good at things like this, with error rates in the realm of parts-per-million (i.e. varying by something like 0.0001%).  The problems come in when such a small error has secondary effects that end up as something we're very good at perceiving.
    For example, in digital audio jitter can be quite audible despite being a very small variance.  It doesn't become audible directly (you don't hear the music speeding up and slowing down!); rather, because the timing is used to convert digital to analog, small timing variations cause harmonic distortion, and even if that isn't audible by itself it then causes intermodulation distortion - both things we are quite attuned to hearing.  So the problem is that these small timing errors can have audible knock-on impacts.
    In terms of frame rates, though, I can't think of any knock-on impacts we'd be more sensitive to, or where the impact would be exacerbated.
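
    To illustrate the audio case, here's a toy numpy sketch (my own exaggerated numbers, nothing from that thread) showing how clock jitter on a pure tone creates distortion sidebands that were never in the signal:

```python
# Toy model: sample-clock jitter turning into distortion sidebands.
import numpy as np

fs = 48_000           # nominal sample rate (Hz)
f0 = 5_000            # test tone (Hz)
f_jit = 1_000         # jitter wobble frequency (Hz)
t = np.arange(fs) / fs                 # one second of ideal sample instants

# Exaggerated periodic clock jitter: 100 ns peak (real converters are far better).
jitter = 100e-9 * np.sin(2 * np.pi * f_jit * t)

dirty = np.sin(2 * np.pi * f0 * (t + jitter))  # tone converted with jittered timing

# The jittered tone grows sidebands at f0 +/- f_jit - distortion created
# purely by the timing error, not present in the original signal.
spectrum = np.abs(np.fft.rfft(dirty * np.hanning(len(dirty))))
spectrum_db = 20 * np.log10(spectrum / spectrum.max())
freqs = np.fft.rfftfreq(len(dirty), 1 / fs)
for f in (f0 - f_jit, f0, f0 + f_jit):
    idx = int(np.argmin(np.abs(freqs - f)))
    print(f"{f} Hz: {spectrum_db[idx]:6.1f} dB")
```

    With these made-up numbers the sidebands sit tens of dB below the tone, yet they're exactly the kind of artefact our ears are good at picking out.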

  6. 2 hours ago, Kisaha said:

    Exactly! As I said, I went mirrorless for small pancakes (the NX 30mm f/2 was my first - extremely small, sharp, and extra white, my version!), small/light/cheap ultra-wide zooms (I like 'em wide; the 12-24 was my kit lens for some time), and then I have 4-5 legacy lenses from 50-55-58mm (from f/1.4-1.8), so that was my main use for a couple of years (the NX300/3000/500 bodies are similarly small, or smaller than the M50). For pro work, something like the NX1 or the Fuji X-H1 is preferable of course (better grip, battery life, extra features/buttons/ins-outs - though for all those you will need the battery grip on the Fuji, and it is already bigger and heavier than the NX - but these two are the only pro mirrorless bodies around, in the video sense of course; I mostly talk about video).

    The 50mm (now I mostly use the NX 45mm f/1.8 - an exceptional little lens for 250-280 euros new, back then) is an interesting portrait lens, close to the classic 85mm (usually 85mm on APS-C is too tele for my use and for small-ish places) but more versatile, and sufficient for even closer portraits (with a couple of steps forward!).

    The 4K crop makes the 11-22mm mandatory for the camera, especially for vloggers (anything more tele is just not usable, I guess).

    :)

    Yes - with these crop sensors it's often difficult to get a lens that's wide, fast and cheap.  In the end I conceded and just paid the money.

    It makes it doubly difficult if you want to go wider than the normal 28mm equivalent - I know lots of vloggers use FF with the 16-35 @ 16mm because they take up less space in the frame, don't have to worry about pointing the camera as accurately, and don't have to stretch their arm out as far.  Good luck getting a practical 16mm equivalent on a crop sensor!

  7. 19 minutes ago, mercer said:

    You're well on your way. Just understanding why you would use certain grading and lenses is great info to have. The war film was Saving Private Ryan, and yes, Spielberg is a master for shooting those scenes that way.

    As far as motion cadence, I'm sure there is a tangible reason, but we aren't privy to each camera's special sauce to discern the exact cause. But you quickly know it when you see it, and which cameras have good cadence and which don't. I've noticed that exact frame rates may help... so true 24p instead of 23.976... slow motion as well... lol.

    You have 3 cameras, don’t you? The XC10, a 700D and a 700D with Magic Lantern? You could always run some tests and do a motion comparison by messing with settings in ML?

    Ah yes, Spielberg and Saving Private Ryan.  A memory is a good thing when it works :)

    Actually, one of the main challenges I have is that I don't really know it when I see it.  I think my perception is a mixture of being insensitive to these elements and also untrained.  A big part of learning for me is learning to see.  I can often see a difference between two finished products (eg, I know I prefer the production of Peaky Blinders and The Crown to most other shows) but when I try to isolate the individual variables I'm often just left with two things that look the same to me.  In a way that's why I'm hanging out on forums like this so much - I'm trying to learn what other people see and what the technical thing is behind it.

    Yep - my current kit contains XC10, 700D with ML, and iPhone 8 with ProCam (which provides full manual controls).  I also have other much lesser cameras, but I suspect they're not high enough quality to be useful.  What settings would you suggest I play with?

  8. 8 hours ago, kidzrevil said:

    @mnewxcv I only go straight to Rec.709 if I am using ACES. For everything else I convert the raw data to BMD Film because it can preserve the entire 14-bit data in its log curve.

    If you haven't already come across them, I can highly recommend the videos by Juan Melara (https://www.youtube.com/channel/UCqi6295cdFJI9VUPzIN4NXQ/videos).  He is obviously a very knowledgeable operator, but also seems to use Resolve in a way I haven't seen other YouTubers even approach.  I learned a heap from him.

    I haven't completely worked out my workflow for ML RAW, but he got me pretty close I think.

  9. I thought that YouTubers used slow motion because it looked 'better', which may mostly be because real life needs all the help it can get to look nicer, and slow motion smooths out the camera shake.

    So, as contenders we currently have:
    - line-skipping when reading the sensor (this wouldn't be to do with motion, it would be to do with how each image was rendered, but might be something that looks 'nicer')
    - less rolling-shutter (shouldn't apply much to static tripod-shots)
    - jitter in frame rate (variation in how far apart the frames are exposed)
    - shutter angle
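
    To put some rough numbers on those last two contenders, here's a minimal Python sketch (the jitter figure is made up purely for illustration - I have no idea what real cameras do):

```python
# Sketch: shutter angle -> exposure time, plus what frame-interval jitter looks like.
import random

def exposure_time(shutter_angle_deg: float, fps: float) -> float:
    """Seconds the shutter is open per frame: the angle as a fraction of the frame interval."""
    return (shutter_angle_deg / 360.0) / fps

for angle in (360, 180, 90, 45):
    print(f"{angle:3d} degrees @ 24fps = 1/{1 / exposure_time(angle, 24):.0f}s")

# Frame-interval jitter: ideal 24fps spacing vs. spacing with a small random error.
ideal_ms = 1000 / 24
intervals = [ideal_ms + random.gauss(0, 0.5) for _ in range(5)]  # 0.5ms std-dev, made up
print(f"ideal {ideal_ms:.2f}ms vs jittered:", [f"{t:.2f}ms" for t in intervals])
```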

    I have a vague memory of reading that some cameras take a small amount of time to open or close the shutter, kind of like it fades in, which means the edges of moving objects aren't as sharp.  I'm not sure exactly how this would be accomplished (I know very little about shutter design), but it sounds simultaneously like something that could be true in expensive cameras, or could be 'alternative internet facts'.

    On a more philosophical note, my personal view is that every technical aspect of filming has an artistic implication, and behind every artistic impression there is something tangible that we can isolate and measure.  One of my goals is to understand what technical / specific things are behind the artistic things, so I can explore them and then use them to create a finished product where more of the choices I make are coherent with the overall feel I'm going for.
    For example, if we want something to look happy we use brighter, more saturated grading because that supports a happy vibe, or if we want people to be on edge then we can use extreme close-ups with wide-angle lenses to distort the image, which supports the feeling of unease.  I read that in a recent war movie the director used a 90-degree or 45-degree shutter on action sequences, as it showed explosions as being full of chunks of people instead of just being blurry, and also made it feel more real and less 'cinematic'.
    I'm hoping we can learn something tangible about motion cadence in this thread. :)

  10. 16 hours ago, Kisaha said:

    That was taken out of the context of my post.

    First I said that the 22mm pancake and the 11-22mm are the 2 most interesting EF-M lenses, and that he could use a cheap EF lens with the adapter; having covered everything from 18mm to 35mm, one of the cheapest/smallest/lightest/fastest EF lenses that comes to mind, and that gives him something different (a short portrait, general purpose, or very small group portrait photography), is the 50mm.

    A cheap 50 is always welcome on any set, and it could potentially be his fastest lens on his system (and the cheapest!).

    I doubt he will be able to do any vlogging with the Sigma (but it could be nice for scratching his nose simultaneously!), and the 11-22 can help with the crop in 4K (which the Sigma can't), and he already has the 22mm pancake (so another length covered by his native lenses). I am not sure how helpful an 18-35 can be in his case.

    Story begins. You can stop reading: Before buying into NX, I was choosing between EOS M, m43 and NX. I was interested in a small pancake, a cheap ultra-wide zoom, and preferably APS-C so I could mount my legacy lenses too, for my photography hobby and family photos. The M (22, 11-22) and the NX (30mm and a plethora of other great pancakes, 12-24) were the closest at the time, but the NX300 (the first mirrorless I ever bought) was multiple times better than anything Canon had for less than 1500 euros(!), and 5(!) years later I am happy I went NX - but if I was starting now, April 2018, for the reasons I mentioned, I would probably have started with EOS M. Even after 5 years, though, the lack of a pro mirrorless body is a great omission of course.

    Makes sense.  I wasn't recommending the Sigma (it's lovely but really big and heavy!), I was more commenting on not trying to use the 50mm on APS-C for anything other than specialist shots.  However, if it is to add to a collection of wider lenses then it does make sense, and the price is definitely right!

    IIRC the M50 has different crops in 1080 and 4K - if that's true and the OP is outputting 1080 then the 50mm would have extra versatility, as it effectively becomes an 80mm (with DPAF) in 1080, and a longer lens again in the 4K mode with its extra sensor crop.
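
    A quick sanity-check of those equivalents - the extra 4K crop figure below is an assumption for illustration, not a confirmed M50 spec:

```python
# Sketch: full-frame-equivalent focal length under stacked crops.
def equivalent_focal(focal_mm: float, sensor_crop: float, mode_crop: float = 1.0) -> float:
    """Equivalent focal length given the sensor crop and any extra video-mode crop."""
    return focal_mm * sensor_crop * mode_crop

print(f"{equivalent_focal(50, 1.6):.0f}mm")        # 50mm in 1080 on Canon APS-C: ~80mm
print(f"{equivalent_focal(50, 1.6, 1.56):.0f}mm")  # with a hypothetical extra 4K crop: ~125mm
```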

    19 hours ago, mercer said:

    Yeah, I love 50mm lenses. On FF they're the perfect all-arounder, and if I was forced to have one lens it would be a 50mm or a 35mm. IMO, cropped or FF, they're both great focal lengths and a whole film could be shot with either if need be.

    Totally agree about 35mm or 50mm on FF.  I shot a couple of videos at 80mm (50mm on crop) indoors and it was just a little too long for most things - 35mm and 50mm equivalents are just right.

  11. Thanks @Deadcode @Papiskokuji @HockeyFan12 that totally makes sense - I'd forgotten that it was RAW vs compressed codecs.

    @Deadcode I asked it separately as I thought the answer would go into what scenes benefited from extra bit depth or that it was me being blind.  ML RAW just happened to be how I got the files, and I only included it in case I was screwing something up and not really getting the extra bits!

    This basically answers my question, which ultimately was about what bit-depth I should be shooting.  It's especially relevant with ML because if I lower the bit depth I can raise the resolution, which I thought I would have to trade off against the gains of the extra bit-depth.

    10 hours ago, HockeyFan12 said:

    I don't spend much time on set. I see some Panasonic monitors for directors. I think DPs rely on the Alexa viewfinder, which is pretty good, or SmallHD maybe.

    I don't know about high end sets. I suspect iPads are getting more common for directors and producers, but obviously not for DPs. In post a Flanders or a calibrated Plasma seems sufficient for anything except HDR. I don't pay attention that carefully to brands I can't afford. :/

    On another forum I saw a post talking about getting new client monitors while things were on sale - they said they needed about half a dozen of them, and I just about choked when I read their budget was $8,000... each!

    I think I only recognised every second word in the rest of the thread, between brand names and specifications etc.  It's another world!

  12. I've been playing with Magic Lantern RAW and looking at different bit-depths, and after all the conversation in the BMPCC thread (and the thousands of 8-bit vs 10-bit videos) I got the impression that bit-depth was something quite important.

    Here's the thing, I can't tell the difference between 10bit, 12bit and 14bit.

    I recorded three ML RAW files at identical resolutions and settings; just the bit-depth varied.  I converted them in MLV App to CinemaDNG (lossless) and pulled them into Resolve.  Resolve says one is 10-bit, one is 12-bit and one is 14-bit, so I think the RAW developing is correct.  I just can't see a difference.  I tried putting in huge amounts of contrast (the curve was basically vertical) and I couldn't see bands in the waveform.  I don't know what software to use to check the number of unique colours (I have Photoshop - maybe that will help?).
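
    For anyone wanting to check the same thing, here's a rough numpy sketch that counts unique values per channel in an exported frame (the filename is hypothetical; it assumes a 16-bit RGB TIFF exported from Resolve):

```python
# Sketch: count unique tonal values per channel as a crude check of effective bit depth.
import numpy as np
import imageio.v3 as iio

# Hypothetical filename; assumes a 16-bit RGB TIFF exported from Resolve.
img = iio.imread("frame_export.tif")

for ch, name in enumerate("RGB"):
    values = np.unique(img[..., ch])
    # Roughly 2^10 distinct values per channel suggests 10-bit-deep data,
    # ~2^14 suggests 14-bit, and so on (scene content permitting).
    print(f"{name}: {len(values):6d} unique values (~{np.log2(len(values)):.1f} bits)")
```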

    Am I doing something wrong?  Does a well-lit scene just not show the benefits?  Is 10-bit enough?  Am I blind to the benefits (it's possible)?

    I've attached one of the frames for reference...  Thanks!

    M02-1700_000000.dng

  13. Lol, looks like the cure for OT posts might be feeding us!

    Getting back to motion cadence, I'm curious about this as well, as it's one of the things people say is 'cinematic' (which I believe), but people also seem to imply that even if you set cameras to a 180-degree shutter there will still be some difference in the footage (which I don't believe, but would love to be proven wrong).

    @jonpais I have watched quite a few of those 'phone vs cinema camera' videos, and I found that either there was a huge difference in motion cadence between the two (they always shoot in bright light to use base ISO, so one would be at 1/50 and the other at 1/2000), or they used an ND filter of some kind and then there was no difference in motion cadence.

    As for how someone might get a 180-degree shutter without an ND filter in the real world, I can't think of any way this is possible.  I'd be happy to hear about an alternative to ND filters, but I'd say it's safe to assume that if they're using a 180 shutter then they've got an ND.
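
    To put a number on it, here's a back-of-envelope sketch using exposure values, taking the usual rule of thumb that direct sun is about EV 15 at ISO 100 (the aperture is just an example):

```python
# Sketch: how many stops of ND a 180-degree shutter needs in bright sun.
import math

def settings_ev(aperture: float, shutter_s: float, iso: float = 100) -> float:
    """EV (normalised to ISO 100) that a given aperture/shutter combination exposes for."""
    return math.log2(aperture ** 2 / shutter_s) - math.log2(iso / 100)

scene_ev = 15                                     # rule of thumb: direct sun at ISO 100
ev = settings_ev(aperture=2.8, shutter_s=1 / 50)  # 180-degree shutter at 25p
nd_stops = scene_ev - ev
print(f"Settings sit at EV {ev:.1f}; need about {nd_stops:.1f} stops of ND "
      f"(roughly an ND{2 ** round(nd_stops)} in filter-factor naming)")
```

    That works out to roughly six stops, which is exactly the sort of ND you see on cinema cameras shooting outdoors.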

  14. 1 hour ago, Kisaha said:

    As a second camera to a Canon DSLR, it would be alright to use the EF adapter too I guess (maybe with the cheap 50mm f/1.8?), but that wouldn't be my first option.

    I've got an APS-C camera (Canon 700D) and for years the 50mm f/1.8 was the only fast lens I had, so I tried to use it everywhere, but at an 80mm equivalent it's too long for many situations.  I've just recently purchased the Sigma 18-35, which is a 29-56mm equivalent, and I was surprised that there's a big difference between 56mm and 80mm.

    It depends on what you're shooting, but I wouldn't recommend the 50mm on an APS-C sensor as a general-purpose lens at all.

  15. 51 minutes ago, Gregormannschaft said:

    I get where you're coming from, and no one can argue against having more DR to play with, zero rolling shutter and beautiful colours... but, to play devil's advocate, no one is going to look at slightly blown-out highlights in an otherwise important scene for a doc or ENG piece and say, 'Ah, shame - this would have been much more effective with those highlights and more shadow detail'. Nice to have, but not critical.

    You're right about people not being that critical.  My family are already saying that my films look like videos from the tourist bureau, but I can see that they're not and I guess I am that critical.  

    Early on I shot a video of a family holiday on a point-and-shoot and had it on the 50p mode, which I only later discovered didn't record sound (oops!).  The result was a wonderful video half-full of slow-motion shots of the kids smiling and running around with a full music soundtrack - great stuff and still perhaps my best work.  
    Now I have gone and spent thousands of dollars on 'real' camera equipment, I am expecting to get a lot closer to the way a feature film would render something like that, and I find that I'm still falling quite a bit short.  One of the things letting me down is DR.  I realise that my skill level is the number one thing letting me down, but that's not something I can throw money at - and DR is!

  16. 1 hour ago, Gregormannschaft said:

    I'm not really sure what you're arguing for here. Making a video of your family's trip to the park, it isn't really essential that every shot is perfectly exposed, is it? In most documentaries there's a natural tolerance of bad exposure, framing, and jerky camera movements, right? Because a lot of the time they aren't detracting from the story. A lot of the time they add to it, in fact.

    I'm arguing that having a camera with higher DR is worthwhile and the answer isn't always to just work around the limitations of your camera.

    I've been around enough narrative, documentary, ENG and other types of shooting to realise that what I'm trying to do is at the pointy end of making the best of situations you may have almost no control over.

    Most of the time on here I'll mention something valuable to me, and someone will reply that I don't need it - all I have to do is change something they assume everyone has control over, but I don't.

  17. 1 hour ago, jonpais said:

    Tony Northrup recently did a poll where respondents frequently got it wrong when comparing the noise in two shots, even when the two images were identical. I guess some people were intentionally guessing wrong just to mess up the results. :) So I'm pretty skeptical of these kinds of online polls myself.

    You're right to be.

    We did a bit of study at uni around how to analyse things and how not to accidentally taint your results or get misleading ones.  The impression I took away was that it's a minefield and it's ridiculously difficult to get right.

    Of course, much data gathering is completely biased, whether through vested interests or incompetence.

    I once started completing an online survey. Question one was pretty straightforward: "Have you had fast food in the last 3 days?" I answered "yes" because I'd had fish and chips, but then it all came to a crashing halt with question two: "Which did you have: KFC, McDonalds, Chicken Treat, Hungry Jacks?" - and there was no "Other" option. Never mind the local fish-and-chip place - not even all the big fast-food chains were in there!

  18. (The article suggests that Apple will push everyone to iOS)

    The two main problems with iOS are the lack of real multi-tasking and the lack of a shared file system.

    Until iOS supports running 10 apps concurrently (with maybe a total of a dozen browser tabs open) and has a file system where all kinds of files can be stored, there's no way in hell it would be usable.  And when I say 10 apps concurrently, I don't mean it remembers what was running and re-runs it when you swap back; I mean things active in RAM.

    If moving away from Intel means better performance then great, but no support for real workflows brings performance to a complete stop.

  19. 2 hours ago, fuzzynormal said:

    As a doc director/producer/shooter I can agree with this...and also disagree with this.  A good shooter can and will find the best angle for light even in bad lighting situations.  Changing the perspective of a shot for better light is always an option.  It's not always easy, but that's part of the craft.  Making good cinematic decisions under the gun is doable.  So, you don't control the light, but you do control how the camera sees it.


    1 hour ago, kidzrevil said:

    I'm not assuming and I have ?

    This was shot in SLOG2 and compressed to the 6-7 stops of REC709 gamma

    There’s ways around it OR you can choose what you want to keep and what you want to lose. I don’t mind choosing between crushing shadows or blowing highlights as long as my midtones are preserved and my composition is balanced.

    It all comes down to how much control you have.  

    I fully agree that the operator has way more control and options than the average amateur thinks are possible - otherwise the famous street photographers (HCB, Maier, Winogrand, etc) were just the luckiest people in the universe!  I have done enough street photography and wildlife photography to be able to feel the split in my brain where one part is thinking about the shot I'm taking and the other part is thinking about what is about to happen, what shots are likely to be available and how I would capture the best one.  Highly skilled street photographers could have one eye on the viewfinder composing a shot while the other was open and surveying the broader scene looking for what was about to enter the field of view.  I'm good enough at this to know what is possible but also how bad I am at it.

    However, there are always situations where you have no control.  You have no control over your vantage point if, for example, you are sitting in a packed moving vehicle shooting out the window (tour bus, helicopter, train, plane), or when you're at the zoo looking at the animals from a lookout that is only wide enough for a couple of people to stand in at the same time.  In a vehicle you get to control framing and camera position within a space of maybe 50x50x50cm, but that's it.
    Sometimes you don't have control over where the subject is - kids running in the park between areas of full sun and full shade (and the full shade is relatively dark, because vegetation is pretty good at absorbing light).

    My approach to these situations - where you're restricted in where you can shoot from, or in the lighting on your subject - is two-fold:
    1) Just film a lot - "spray and pray" as it's called.  This is partly about odds - the more you film, the more likely you are to get a great moment - but it also means that in the edit room you can replace great content that has unusable levels with other great content that does have usable levels.
    2) Understand that you are not always going to get the shot from the best angle - whether through lack of options, lack of skill, or both - and just buy a camera with more DR.  This is what I'm talking about here.

    If your video is about your family's trip to the park, and your kid is happiest when they're running, and the bad lighting was where they were running before everyone sat down to eat and the kids all fell asleep, then good luck in the edit suite looking at one of the nicest pieces of footage from the outing and trying to choose between the best content and footage that isn't noisy or clipped all to hell.

  20. 41 minutes ago, kidzrevil said:

    at this point it's about how good we are as camera operators.

    You assume that a good camera operator has control over their environment.  Try doing any documentary work outside in direct sunlight and it won't matter how good an operator you are - if your camera doesn't have enough DR you're going to be clipping highlights, crushing blacks, or both at once in the same shot.

  21. 5 hours ago, Deadcode said:

    Storm is coming... 

    SD Card writing speed hack

    If you are familiar with the ML possibilities, you already know the best RAW-capable cameras are those with CF cards. The CF card interface is capable of 70MB/s writing speed on the 5D2, 75MB/s on the 7D, and 95MB/s on the 5D3.

    The 6D / 700D, which have SD interfaces, are only capable of 40MB/s, limited by the camera and not the SD card.

    Now the ML team is pumping up the SD card interface writing speed. It seems like the 40MB/s can be raised to 70MB/s or even further.

    That means the 700D will be able to record at 2520x1072 continuously.

    I hope they can make it stable


    Holy wow... that is excellent news!  I bought my 700D for stills and to replace my Panasonic GF3 as a travel camera more years ago than I can remember...

    I think I read something about RAW compression (which I interpreted as lossless compression) also being worked on?
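
    Out of curiosity, here's the uncompressed arithmetic for that resolution - a quick sketch (ML's lossless compression would bring the real rates down):

```python
# Sketch: uncompressed RAW data rate for a given resolution / bit depth / frame rate.
def raw_rate_mb_s(width: int, height: int, bit_depth: int, fps: float) -> float:
    """Uncompressed Bayer RAW data rate in MB/s (1 MB = 10^6 bytes)."""
    return width * height * bit_depth * fps / 8 / 1e6

for bits in (14, 12, 10):
    print(f"2520x1072 @ {bits}-bit, 23.976fps: "
          f"{raw_rate_mb_s(2520, 1072, bits, 23.976):.0f} MB/s uncompressed")
```

    All three rates (roughly 113, 97 and 81 MB/s) are above 70MB/s, so presumably continuous 2520x1072 leans on lower bit depths and the compression as well as the faster interface.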

  22. 4 hours ago, IronFilm said:

    You're hiding in there the wild assumption that camera manufacturers will all be equally sensible with the compression algorithms they use.

    Which isn't at all true!

    I did say 'roughly'...  Maybe I should have compensated for the internet taking everything literally and said 'VERY roughly' :P

    However, regardless of codec, every 50Mbps codec is going to be bested by every 500Mbps codec, so I think the principle applies.

    If this was a general discussion about image quality then I might have put more finesse into my argument, but considering the stupidly high bitrates of the BMPCC, I thought that it was a good enough explanation :)
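
    If anyone wants to make the comparison slightly less rough, normalising bitrate to bits-per-pixel is a quick way to do it - a minimal sketch (the two codecs below are hypothetical examples, not specific cameras):

```python
# Sketch: normalise bitrate to bits per pixel per frame, so codecs at
# different resolutions and frame rates can be compared a little less roughly.
def bits_per_pixel(bitrate_mbps: float, width: int, height: int, fps: float) -> float:
    return bitrate_mbps * 1e6 / (width * height * fps)

print(f"{bits_per_pixel(50, 3840, 2160, 25):.2f} bpp")   # a typical 50Mbps consumer 4K codec
print(f"{bits_per_pixel(500, 1920, 1080, 25):.2f} bpp")  # a hypothetical 500Mbps 1080p codec
```

    That's roughly 0.24 bpp vs nearly 10 bpp - a big enough gap that codec cleverness can't really close it.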
