Posts posted by kye

  1. My advice is to ignore the titles of videos and only watch the ones that seem interesting.  They're a stills photography channel; I wouldn't bother watching if they had a video about something that only applied to a type of stills photography that didn't interest me.

    In terms of squirrels, memory is different to attention span, but sure, I'll stop vilifying them publicly :)

  2. 7 hours ago, webrunner5 said:

    Yeah, a steadicam is probably the better option than taking a 3 pound camera and trying to make it a 15 pound rig!

    Yeah..  ain't that the truth.

    7 hours ago, Raafi Rivero said:

    do you have pics of this setup?

    Probably buried deep on my HDD somewhere, but the overall lesson was that the recommended solutions (like a steadicam, shoulder rig, or other rigs with three points of contact) really are the best options.  I had fun experimenting, but I eventually came to the same conclusions as the rest of the industry.

  3. 1 hour ago, Mark Romero 2 said:

    Well... there are some black friday sales going around for courses on things like Resolve and other NLE / Grading software. So yeah, skills kind of ARE on sale.

    I realised that when I posted, but if only you could buy skills and just install them!  I do have my eyes on a couple of courses, but in reality I know more than I can actually execute, and that's mostly just down to habits and practice.  Going from not knowing the theory to knowing it and barely being able to apply it given infinite time is one thing, but doing it properly, reliably, and quickly on location, on top of everything else you have to do, is something else entirely.  My brain knows quite a lot, but it's my muscle memory that makes footage :)

  4. Further to the above comments, weight at a distance is one part, but the other is how and where you hold it.

    I spent ages running around the house with various rigs made of PVC piping (modular for travel), and even went so far as to fill them with water (readily available at your destination!).  I quickly came to the conclusion that the less firmly I gripped the rig, the less hand-shake it picked up, and I basically replicated a steadicam by making little handles that could swivel, mounted at the balance point of the whole rig.

  5. On 11/21/2018 at 9:06 AM, mercer said:

    Here’s a still I grabbed in Resolve from a shot where I used Midtone Detail fairly liberally. Before I applied it, my actor didn’t look nearly as weathered.

    I’m having some display issues where I cannot get the exported image to completely replicate what I’m seeing in the timeline... but that’s a topic for another thread...

    And I’m really just starting to scratch the surface of the cool tricks Resolve can do. I also just realized that I can copy a grade from a still I grabbed... what a timesaver. I can’t see myself completely using Resolve as an NLE but as an intermediary before bringing the footage into FCPX, definitely!

    89875886-714A-49F6-8E75-26A3827570EB.jpeg

    Definitely suits the aesthetic, nice work :)

    It's a ton of work to learn a new package, so I can understand not moving from FCPX, but what I would suggest is getting familiar with the colour tab and what you can do there.  Watching a good 10-20 minute walkthrough video showing you all the controls for colour will pay dividends almost immediately.  For example, did you know that in the Colour tab you can highlight any number of clips, then right-click on a graded clip and select Apply Grade (I think that's what it's called), and it copies the entire grade onto the highlighted clips?

    Also worth noting: from a Still you can choose Append Node Graph, so you can keep stills that contain parts of a grade and build up a grade bit by bit from there.  I've messed around in the past with stills containing different conversion nodes, nodes to desaturate the shadows/highlights, the Glow OFX plugin, and so on.
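
    For anyone who'd rather script it than click through the UI, here's a rough Python sketch of the same "copy one grade onto many clips" idea using Resolve's scripting API. It assumes a recent Resolve build where TimelineItem.CopyGrades() is available, and the track number and clip indices are just placeholders:

        # Sketch: copy the grade from the first clip on video track 1 onto every
        # other clip on that track, via DaVinci Resolve's Python scripting API.
        # Assumes Resolve is running and DaVinciResolveScript is on the module path.
        import DaVinciResolveScript as dvr

        resolve = dvr.scriptapp("Resolve")
        project = resolve.GetProjectManager().GetCurrentProject()
        timeline = project.GetCurrentTimeline()

        clips = timeline.GetItemListInTrack("video", 1)
        source = clips[0]       # the clip whose grade you want to reuse
        targets = clips[1:]     # every other clip on the track

        # CopyGrades() copies the source clip's full grade onto the target items
        source.CopyGrades(targets)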

  6. Adding weight is one thing (which people have already mentioned), but what they didn't mention is adding weight at a distance.

    A steadicam works because the weights are at a distance from the camera.  Here's the physics of the situation: https://en.wikipedia.org/wiki/Moment_of_inertia

    Something you can try immediately is to use a tripod.  Mount your camera on a tripod and try the following tests:

    1. Use the tripod with the legs compacted and folded together, holding it underneath the camera so it works like a steadicam
    2. Same as above but extend the legs most of the way to the floor
    3. Same as above but fold the legs out

    You will find that the first test gives some smoothing in tilt and roll but little stabilisation in pan, the second will be more of the same, and the third will also add some stabilisation to the pan because the legs are no longer all on the same axis.

    With physics you need a combination of mass and distance; you can't get around it.  If you don't want either, then do what I did and invest in IBIS.
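
    If you want a feel for the numbers, here's a tiny Python sketch of the moment-of-inertia idea from that Wikipedia page. The masses, distances, and hand-shake torque are made-up placeholders, but the point stands: the same wobble from your hand produces far less angular acceleration once mass sits further from the pivot.

        # Sketch: compare the angular acceleration caused by the same small
        # hand-shake torque for a bare camera vs the same camera with weights
        # mounted at a distance. I = sum(m * r^2) for point masses.

        def moment_of_inertia(masses_and_radii):
            """Total moment of inertia for point masses m at distance r from the pivot."""
            return sum(m * r ** 2 for m, r in masses_and_radii)

        hand_shake_torque = 0.05  # N*m, an assumed small wobble from your grip

        # A bare 1.5 kg camera held close to the pivot (your hand)
        bare = moment_of_inertia([(1.5, 0.05)])

        # The same camera plus two 0.5 kg weights on arms 0.3 m from the pivot
        rigged = moment_of_inertia([(1.5, 0.05), (0.5, 0.3), (0.5, 0.3)])

        # Angular acceleration alpha = torque / I: smaller alpha means less shake
        print(f"bare:   I = {bare:.4f} kg*m^2, alpha = {hand_shake_torque / bare:.1f} rad/s^2")
        print(f"rigged: I = {rigged:.4f} kg*m^2, alpha = {hand_shake_torque / rigged:.2f} rad/s^2")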

  7. 6 hours ago, IronFilm said:

    I'm just highlighting the fact that when he says "MFT IS GOING TO DIE!!!!" you should take it with a big grain of salt, as very likely a big part of the reason is the views.....

     

    6 hours ago, BTM_Pix said:

    Just playing devil's advocate here, but a major part of the standard YouTube schtick in general is the follow-up "Why I was wrong about xyz" video.

    It's like getting two pieces out of one, so there is arguably even more of a vested interest in going big one way or the other with the original piece.

    "The Pocket 4K Is Quite Decent Really" is never going to invoke much of a need for a retrospective video the following month.

     

    5 hours ago, IronFilm said:

    YouTube rewards people for going big on predictions/claims, rather than more nuanced mild views.

     

    They're sometimes even daily. Or at least multiple times a week.

     

    All in all the Northrups play "The YouTube Game" really, really well (and I do admire them for that; they've got 100,000x more success than my YT channel will ever have), but part of that is pushing hard to where the edges are (like predicting MFT will die! ha). Sometimes that means going too far by accident....  which is what they did with that video.

    It sounds to me like you guys are reacting to the title a lot more than the content?  But even if you aren't, I actually happen to think he presented a reasonable argument that MFT will die.  I think the sticking point might actually be timescales.  We'd probably all agree that MFT will die in the next 100 years, and not in the next 6 months, so it's a question of degree.  The Northrups talk about long-term trends in a world that has the attention span of a squirrel, and I think that mismatch of contexts is often a factor in people disagreeing about this stuff.

    In terms of people "making mistakes", or in this example I'd say it's closer to "missing the mark", everyone does that.  But then again, if you gauged things by internet comments then basically everyone should go kill themselves immediately and companies should give up on everything and hand out fantasies for free.  Plus everyone seems to fall in love with what they own, which is a strange phenomenon.

    And if you want to get a sense of how level-headed the Northrups can be, check out their reply to Jared Polin about RAW vs JPG.  Tony was so level-headed the video was almost boring to watch - not only did he not capitalise on the drama, but his reply was designed to put the whole thing to bed instead of creating more.

    I think most of their videos are akin to "The Pocket 4K is quite decent really" and that's why they have such a following.  They gave the GoPro 360 camera a pretty bad review (for many good reasons) but still presented its good points, and when they later compared it to the newer Insta360 camera they again listed its pros and cons.

    They're definitely not perfect, but I think on average they're ahead of 99.99% of channels, and criticising them based on the title of a video rather than the myriad points within it isn't that helpful.

  8. 1 hour ago, IronFilm said:

    They can duplicate a lot of tech across the camera ranges. 

    Nearly all the main players are supporting at least two sensor sizes with their ILCs: Nikon/Canon/Pentax/Sony/FujiFilm/Leica/etc....  Sigma/Olympus/Panasonic were the odd ones out here. But now Sigma (probably?) and Panasonic will be joining the rest and supporting two different sensor sizes at once. (I'll be curious whether Olympus stays focused purely on the one sensor size, MFT, or joins L mount, or something else? I suspect they'll stay with MFT, but L mount odds are not too far behind, with anything else being unlikely.)

    That may be true now, but the way I understand it, it's a shrinking market and that kind of split may not be the recipe for sustainability.  Who knows, of course; those with crystal balls (or steel balls) will be putting all their savings on the winning horse.

    1 hour ago, IronFilm said:

    You mean Tony "The Clickbait King" Northrup?

    Every successful YouTuber is clickbait royalty - that's just the reality of getting new subs and keeping ahead of the others.  Look at the UK tabloids (and sensationalist media the world over) and tell me that it isn't a successful strategy.

    What I like about Tony is that he isn't afraid to say things and isn't afraid to predict things, but in addition to those traits (which are shared with nearly all YouTubers) he's also level-headed, admits when he's wrong, and explains his thinking so that you can understand how he came to his conclusions.  Even if he gets things wrong, he definitely adds to the conversation rather than just taking up your time and contributing nothing.

  9. I recall some issues that Dave Dugdale had with the a6xxx cameras that looked like IBIS and lens IS fighting each other.  I don't recall if it was ever resolved, but I think the only way to work out what's going on is a logical evaluation through testing.  Shoot a test with everything at default settings and verify that you can get a smooth pan that way.  Once you've established a baseline, add in the settings you normally use one by one and see which one 'creates' the problem.  Then leave that setting on and start reverting the previously adjusted settings back to the baseline to see if it comes good again; if so, it's a combination of multiple settings.

  10. I'm yet to make the move, but I think I'll have to eventually.

    Slightly OT, but don't they make new OS versions free for a certain period, after which you have to pay for the upgrade?  I think I had to pay for one when the version I was running didn't support the software I wanted, and the only free version was the one with buggy wifi, so I paid to get the second-newest version.

  11. 8 hours ago, IronFilm said:

    A little secret: this happens a lot more often than you think.......  just this one time Fstoppers let the curtain slip and you got a peek inside.

    Absolutely!

    Think of all the awful products out there, and take a minute to wonder why no reviewer has ever looked at any of them.  It's simply incredible that the people who make awful products never contact reviewers, or if they do then the reviewers aren't interested, or if they are then the product gets lost in the post.  That postal system must be crazy good at assessing how good products are - it's the final gatekeeper in the entire system!!

  12. I've used it with a key to beautify skin a few times, but I'm not sure that it's the best way to go as I haven't compared it to other methods.

    Also worth checking out is the Sharpen and Soften OFX plugin, which gives you independent sliders for small, medium and large textures so you have more control, and it lets you sharpen some and soften others, so it's powerful for skin tones.

    If you want to use a dedicated sharpen or blur tool, you can use the Edge Detection OFX plugin (I think that's what it's called) in the mode that displays the edges, and then connect its main output to the key input of the node with your effect in it.  This gives you the ability to apply any filter to edges only (and presumably also the inverse) in a very controllable way.

    There are so many ways to get the job done in Resolve!
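
    Outside of Resolve, the same trick (use a feathered edge mask as the key so the filter only lands on edges) is easy to sketch with OpenCV. The filename, thresholds and strengths below are just placeholder assumptions:

        # Sketch: sharpen only the edges of a frame by keying an unsharp mask
        # through a blurred edge-detection mask, mirroring the Resolve node trick.
        import cv2
        import numpy as np

        img_u8 = cv2.imread("frame.png")                      # placeholder filename
        img = img_u8.astype(np.float32)

        # Build a soft "key": detect edges, then feather the mask
        gray = cv2.cvtColor(img_u8, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 100, 200)
        mask = cv2.GaussianBlur(edges.astype(np.float32) / 255.0, (9, 9), 0)[..., np.newaxis]

        # Sharpen the whole frame (simple unsharp mask), then blend it in only where the mask is high
        blurred = cv2.GaussianBlur(img, (0, 0), 2.0)
        sharpened = cv2.addWeighted(img, 1.5, blurred, -0.5, 0)
        out = mask * sharpened + (1.0 - mask) * img

        cv2.imwrite("frame_sharp_edges.png", np.clip(out, 0, 255).astype(np.uint8))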

  13. @Sage  I have shot a bunch of footage at night with HLG at various exposure levels (ETTR as well as ETTL to keep highlights) and I'm wondering how to pre-process the footage in post to get the right levels for the LUTs.

    At what step in the Resolve node graph would you suggest adjusting the exposure back so skin tones sit in the recommended 40-60 IRE, and which controls would you suggest (perhaps the Gamma wheel or the Offset wheel)?

    Thanks!

  14. 17 hours ago, Deadcode said:

    Don't get too excited; reddercity has been brute-forcing the registers for the past couple of years and technically never achieved anything. The other forum members developed the new features from reddercity's findings. And unfortunately fewer developers are using the 5D2 every year.

    But let's hope for the best

    What is your impression of how transferable things like this are?

    I read the ML forums for a while and saw some instances of where figuring something out on one camera helped to solve it on other models, but I'm not sure if these were isolated instances or if this was more the norm.

    If so, perhaps these findings might apply to the 5D3 and maybe 5D4?

  15. 10 hours ago, webrunner5 said:

    I would imagine when you add the grip and 2 batteries to the X-T3 it is not going to be too far off either weight wise. I can't find the grip exact weight. Just shipping weight.

    The Fuji X-T2 weighs 29.54 oz (837 g) with battery and a loaded battery pack. The 1DX Mk II weighs 53.97 oz (1530 g). So it's a bit less than 24.43 oz, or about 1.5 pounds, heavier. Not that much really, if the X-T3 weighs what the X-T2 does?

     

    X-T3-FRONTLEFT-GRIP-SLV.JPG

    Yes, I suppose the 1DX isn't so bad if you think of it as having a battery grip included.  Personally, though, I don't use a battery grip and would expect to get away without one, so it depends on your perspective.

  16. On 11/7/2018 at 8:26 AM, DBounce said:

    So it's been a little over a month since I purchased my first FujiFilm X-T3 body. About two weeks after purchasing the first body I became convinced that this camera was very special. It was fun to shoot with. It reignited my desire to shoot stills. When it came to video the X-T3 seemed more than capable. I decided to refine my lens collection, adding the 23mm F2, 35mm F2, 16mm F1.4 and 56mm F1.2. Later I added the magnificent MKX 18-55mm T2.9. All of these lenses performed to my highest expectations.

    I can tell you after this short period of owning these cameras that they can capture some truly lovely images. The in camera profiles are great and very inspiring to work with. The whole experience is really quite wonderful... well, that is it was. I am getting rid of the whole system. 

    Why? Well, the second body decided to quit. The screen froze up and would just blink. A quick call to Fuji tech support revealed that the hardware had failed. I called to get a return authorization, which was granted. I decided not to go for an exchange, instead opting to wait and see how the first body held up. Today, I went out to shoot some pictures for a project we are doing. Upon arriving at the location I grabbed the X-T3 out of my bag and flipped the switch to power it on... NOTHING! Absolutely nothing. No lights, no screen lighting up, no power up at all. I tried different batteries. Still nothing. I tried pulling the battery and holding down power, then re-inserting the battery and attempting to power up again. Still nothing. Finally I tried the one thing I knew would work. This technique has never failed me.... I went home and grabbed my Canon 1DXMk2. Needless to say the shoot was completed. But the Fujis are paperweights.

    I'm done with FujiFilm. It's all going back. It's packed up... I will not buy another one. In my experience they are not fit for serious use. Heck, they are not fit for even casual use. I had spoken highly of them in the past, but this experience with not one... but two bodies failing is enough for me. Lesson learned.

    Sorry to hear you got two duds...  I'd suggest buying a lottery ticket - luck that bad can only mean it's been redirected somewhere else!

    I was particularly interested in Fuji's dial system: being able to control every parameter with either auto or a specific manual setting is great, and it gives more flexibility than some PASM mode implementations do, with a much better interface.

    Are you back to using the 1DXii for video as well?  It looks great but I hear it weighs as much as a bus!

  17. 43 minutes ago, homestar_kevin said:

    I've been using the SLR Magic 8mm on my gx85/g7 and BMPCC and agree completely.

    Great image, great small little lens. Not the best to work with, but for the price and image, it's definitely awesome.

    I haven't seen it discussed much for video (maybe it was and I missed it) but lots of people have looked at it for photos.  It was the widest non-fisheye lens that I could find for a reasonable price.   It looks hilarious on the GH5 because it's so small and thin!

    My strategy has been to get a 17.5mm lens for generic people / environment shots, and the 8mm for those wider "wow" scenery shots where you want the grand look of a wide angle, and it's been great for that.  Today was my third day in India and it hasn't come off the camera yet as everything here seems to lend itself to that "wow" aesthetic - look at that building - look at how many people there are - look at this amazing chaos...  :)

    Basic grade and crop....

    India-1_1_29.1.thumb.jpg.f937b566efa6cb577d01b564d8868eb7.jpg

    India-2_1_33.1.thumb.jpg.8fc3674595d7ec5c090a60e03a2985a5.jpg

     

    I'm still making friends with the GH5 so these might not be sharp, but they certainly suit the geometric nature of the architecture :)

  18. 9 hours ago, webrunner5 said:

    I'd be sceptical; the bridge in the sample shot goes exactly through the middle of the frame, so it won't show many forms of distortion.  7artisans does have a range of lenses and a track record though, so who knows.

    I've been using the SLR Magic 8mm F4 on my GH5 and I'm really enjoying the image, but it's designed as a drone lens, so the ergonomics aren't that great!

  19. 22 hours ago, Shirozina said:

    That's not the bit depth but the Chroma subsampling in Y'CbCr codecs. Even in 10bit  the colour information is very compromised compared to the Luma information. 

    So, if I understand correctly, the colour information is worse than it would be in a theoretical 8-bit RAW codec, then?
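
    To make the trade-off concrete, here's a toy numpy sketch of what 4:2:0 subsampling stores per frame (just counting samples, not simulating a codec): the luma plane keeps one sample per pixel while each chroma plane keeps one sample per 2x2 block, so the colour channels carry far less spatial information than the luma does, regardless of bit depth.

        # Sketch: sample counts for a 1080p frame in 4:2:0 vs 4:4:4.
        import numpy as np

        height, width = 1080, 1920
        y  = np.zeros((height, width))            # luma: one sample per pixel
        cb = np.zeros((height // 2, width // 2))  # chroma: one sample per 2x2 block
        cr = np.zeros((height // 2, width // 2))

        total = y.size + cb.size + cr.size
        print(f"luma samples:    {y.size}")                    # 2,073,600
        print(f"chroma samples:  {cb.size + cr.size}")         # 1,036,800 across both planes
        print(f"vs 4:4:4 ratio:  {total / (3 * y.size):.2f}")  # 0.50 of the full-colour sample count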

  20. 10 hours ago, Kisaha said:

    Great! I am going your way too!

    I am staying NX at the moment (I have 4 cameras and 8 lenses plus unlimited accessories), using those as my hybrid cameras, and I will eventually buy a P4K as a dedicated video camera until I change hybrid systems altogether sometime in the next couple of years. Photos are just a small portion of my pro life anyway, so it doesn't make a lot of sense to invest a lot of coin that way.

    I am waiting for your real world experiences and comments eagerly.

    It will be difficult to match the cameras; I have FilmConvert and maybe that helps a little.

    Does Resolve have support for the NX1 colour and gamma profiles?  If so, you could use the Colour Space Transform OFX plugin to convert both types of files to a common colour space.

    Although it's not perfect, I've had good results and it does most of the work.  I just wish it had support for Protune so I could also use it to match GoPro footage.

  21. On 11/9/2018 at 12:16 PM, stephen said:

    OK, maybe this statement (RAW has no color) is not that accurate. The point is that RAW has to be developed before you have the color of every pixel. You have the values for each pixel from the Bayer sensor, but they are one of the 3 basic colors only - green, red, blue - with different intensity. Before the debayering/development you don't have "real" colors - RGB values for each pixel - only one of these values: R or G or B. The other two are interpolated, "made" by the software. So before developing the image you can't measure anything related to color.

    This process has 3 variables (actually more): 1 - the sensor and other electronics around it; let's call it hardware. 2 - the software that does the debayering/interpolation. 3 - the human deciding which parameters to use for the development. For color there are many parameters that can be changed in the software. So how are you going to measure the developed image coming from RAW for accurate color (because RAW itself can't be measured), when it depends on so many parameters and most of them are not related to the camera?

    Yes, I watched the video and totally agree with Tony that for RAW there is no point in measuring the color accuracy of the camera or the camera's color science, as color depends on too many variables and parameters outside of the camera. You can literally get any color you want in the program.

    Now Mattias and many other people argue that every camera (sensor and electronics in the camera) has a specific signature, and that it affects the RAW image and as a result the final/developed image. This is true. In that sense not all RAW is equal. Yes, indeed it's one of the variables (some of the variables) in the process and for sure has an impact on the final image. Dynamic range of the sensor, for example, definitely affects the final image. But for colors specifically, my argument is that all those differences in the sensor are easily obliterated by the software. Remember, 2/3 of the color information is made by the software. It is the software (algorithm) and the human behind it who have the final say on what color a pixel and the whole picture will have. So for me, when people say different sensors/hardware give them differences in colors, they mostly mean: different sensors/cameras give me different colors in MY workflow. :) You can perfectly color match photos from different cameras/sensors. Same for video.

    So we agree to disagree here :)

    Good post.

    I think we're mostly agreeing, but there are aspects of what you said that I think are technically correct but maybe not in practice.

    @TheRenaissanceMan touched on two of the biggest issues - the limitations of 8-bit video and the ease of use.

    It is technically true that you can use software (like Resolve or Photoshop) to convert an image from basically any set of colours into any other set of colours, but with 8-bit files you may find that the information needed to do a seamless job simply isn't there.  Worse still, the closer the match you want, the more manipulations you must do, and the more complicated the processing becomes.

    In teaching myself to colour correct and grade, I downloaded well-shot, high-quality footage and found that good results were very easy to achieve.  But try inter-cutting and colour matching multiple low-quality consumer cameras and you'll realise that in practice it's either incredibly difficult or just not possible.
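
    As a toy illustration of that missing information (a rough numpy sketch, not a simulation of any real grading pipeline): push every possible 8-bit code value through a strong gamma adjustment and count how many distinct levels survive. The values that collapse together, and the gaps left where the curve stretches, are what show up as banding and blotchy colour when you try to force a big match in 8-bit.

        # Sketch: how many distinct 8-bit levels survive a heavy gamma push?
        import numpy as np

        gradient_8bit = np.arange(256, dtype=np.uint8)   # every possible 8-bit code value

        def apply_gamma(codes, gamma):
            """Normalise to 0-1, apply a gamma curve, and re-quantise back to 8 bits."""
            x = codes.astype(np.float64) / 255.0
            return np.round((x ** gamma) * 255.0).astype(np.uint8)

        pushed = apply_gamma(gradient_8bit, 0.4)         # a heavy lift of the shadows

        print(f"input levels:  {np.unique(gradient_8bit).size}")  # 256
        print(f"output levels: {np.unique(pushed).size}")         # noticeably fewer than 256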

    21 hours ago, Andrew Reid said:

    Funny thing is, colour isn't even just a science

    Absolutely!
