Everything posted by kye
-
What is your impression of how transferrable things like this are? I read the ML forums for a while and saw some instances of where figuring something out on one camera helped to solve it on other models, but I'm not sure if these were isolated instances or if this was more the norm. If so, perhaps these findings might apply to the 5D3 and maybe 5D4?
-
Yes, I suppose that the 1DX isn't so bad if you think of it as having a battery grip included. Personally though I don't use a battery grip and I would expect to be able to get away with not having one, so it depends on your perspective.
-
Sorry to hear you got two duds... I'd suggest buying a lottery ticket - luck that bad can only be that it's been redirected somewhere else! I was particularly interested in the dial system that Fuji had, being able to control every parameter in either auto or a given manual setting is great, and gives more flexibility than some PASM mode implementations do, with a much better interface. Are you back to using the 1DXii for video as well? It looks great but I hear it weighs as much as a bus!
-
I haven't seen it discussed much for video (maybe it was and I missed it) but lots of people have looked at it for photos. It was the widest non-fisheye lens that I could find for a reasonable price. It looks hilarious on the GH5 because it's so small and thin! My strategy has been to get a 17.5mm lens for generic people / environment shots, and the 8mm for those wider "wow" scenery shots where you want the grand look of a wide angle, and it's been great for that. Today was my third day in India and it hasn't come off the camera yet as everything here seems to lend itself to that "wow" aesthetic - look at that building - look at how many people there are - look at this amazing chaos... Basic grade and crop.... I'm still making friends with the GH5 so these might not be sharp, but they certainly suit the geometric nature of the architecture
-
I'd be sceptical - the bridge in the sample shot goes exactly through the middle of the frame, so it won't show many forms of distortion. 7artisans does have a range of lenses and a track record though, so who knows. I've been using the SLR Magic 8mm F4 on my GH5 and I'm really enjoying the image, but it's also designed as a drone lens, so the ergonomics aren't that great!
-
So, if I understand correctly, colour information is worse than in a theoretical 8-bit RAW codec then?
-
Does Resolve have support for the NX1 colour and gamma profiles? If so, you could use the Colour Space Transform OFX plugin to convert both types of files to a common colour space. Although it's not perfect, I've had good results and it does most of the work; I just wish it had support for Protune so I could also use it to match GoPro footage.
-
Good post. I think we're mostly agreeing, but there are aspects of what you said that I think are technically correct but don't hold up in practice. @TheRenaissanceMan touched on two of the biggest issues - the limitations of 8-bit video and the ease of use. It is technically true that you can use software (like Resolve or Photoshop) to convert an image from basically any set of colours into any other set of colours, but with 8-bit files you may find the information simply isn't there to do a seamless job of it. Worse still, the closer a match you want, the more manipulations you have to do, and the more complicated the processing becomes. In teaching myself to colour correct and grade I downloaded well-shot, high-quality footage and found that good results were very easy and simple to achieve. But try inter-cutting and colour matching multiple low-quality consumer cameras and you'll realise that in practice it's either incredibly difficult or just not possible. Absolutely!
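As a side note (not from the original post), here's a minimal NumPy sketch of the 8-bit point: push an 8-bit frame through a grade and its inverse, rounding to integer code values at each step the way an 8-bit codec forces you to, and some of the original values simply never come back. The "grade" here is a made-up lift/gain, purely for illustration.

```python
# Minimal sketch: quantisation loss when grading in 8-bit.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an 8-bit frame: values 0-255.
frame_8bit = rng.integers(0, 256, size=(1080, 1920, 3), dtype=np.uint8)

# A hypothetical "grade" and its exact inverse. In float this round trip is lossless.
def grade(x):
    return np.clip(x * 0.6 + 40, 0, 255)

def ungrade(x):
    return np.clip((x - 40) / 0.6, 0, 255)

# 8-bit pipeline: round back to integer code values after every step.
graded_8 = grade(frame_8bit.astype(np.float32)).round().astype(np.uint8)
restored_8 = ungrade(graded_8.astype(np.float32)).round().astype(np.uint8)

# Count how many distinct code values survive the round trip.
print("unique values before:", np.unique(frame_8bit).size)   # ~256
print("unique values after: ", np.unique(restored_8).size)   # noticeably fewer -> banding
```

The float maths is reversible; the 8-bit version isn't, which is why matching low-bit-depth cameras gets harder the more correction they need.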
-
Manufacturers design the bayer filter for their cameras, adjusting the mix and strength of the various tints in order to choose what parts of the visible and invisible spectrum hit the photosites in the sensor. This is part of colour science too. Did you even watch the video?
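For anyone who hasn't seen the video, here's a tiny sketch of what the Bayer filter means for the data (an RGGB layout is assumed here, purely for illustration): each photosite keeps a single tinted sample, and everything else has to be reconstructed later by demosaicing.

```python
# Minimal illustration of a Bayer mosaic (RGGB pattern assumed).
import numpy as np

rgb = np.random.default_rng(1).random((4, 6, 3))   # stand-in for the scene

mosaic = np.zeros(rgb.shape[:2])
mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]   # R sites
mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]   # G sites
mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]   # G sites
mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]   # B sites

print(mosaic.round(2))   # one number per pixel: the only data the sensor actually records
```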
-
Use a tripod and fix it in post!!
-
You should all hear how the folks over at liftgammagain.com turn themselves inside out over browser colour renderings... almost as much as the client watching the latest render on grandma's 15-year-old TV and then breathing fire down the phone about how the colours have all gone to hell... I'm sure many people on here are familiar with the issues of client management, but imagine if colour was your only job!!
-
Dogs: the guard animal for people that don't care about protecting their property enough to keep geese.
-
Good question. Here's some info: (Source) However, it seems some codecs are supported. V14 release notes: Unfortunately it doesn't seem like there's a complete list anywhere, the "what's new" feature lists are only incremental, and BM don't use consistent language in those posts so searching for phrases doesn't work either (sometimes they say "hardware accelerated" sometimes "GPU accelerated" etc). My guess is that you should just set up a dummy project with a high resolution output and then export it in every codec you can think of (just queue them up, hit go and walk away) and then pull all of them into a timeline and compare the FPS and CPU/GPU load on each type. It's a bit of admin you shouldn't have to do, but it will definitely answer your question.
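If setting that up by hand is too tedious, a rough alternative (my suggestion, not something from the Resolve docs) is to time how long ffmpeg takes to decode each exported file. It won't tell you what Resolve's GPU path does with a given codec, but it gives a quick ranking of how expensive each one is to unpack. The export folder path below is hypothetical.

```python
# Rough sketch: time a full decode of each exported clip to a null sink.
import subprocess
import time
from pathlib import Path

EXPORT_DIR = Path("codec_test_exports")  # wherever the dummy-project exports landed

for clip in sorted(EXPORT_DIR.glob("*.*")):
    start = time.perf_counter()
    # Decode the whole file and throw the frames away.
    subprocess.run(
        ["ffmpeg", "-nostdin", "-loglevel", "error",
         "-i", str(clip), "-f", "null", "-"],
        check=True,
    )
    elapsed = time.perf_counter() - start
    print(f"{clip.name}: {elapsed:.1f}s to decode")
```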
-
Or they could treat it like a cinema camera ??? Besides, we all know how that song about camera stabilisation went - "if you liked it then you should have put a rig on it".
-
Actually, I think that the tripod might be the most significant piece of equipment in this entire conversation. So far I have seen only a couple of videos where I can appreciate the IQ, with the rest of them leaving me wondering if this camera automatically gives you Parkinson's as soon as you hit record. I spend the whole time thinking things like "It looks sunny, but maybe the wind is really cold" .... "other people don't seem to be dressed warmly either" .... "it is high up, so maybe they ran up the hill" .... "the girl being filmed looks healthy, so they're probably not going through withdrawal" .... I wish I was joking!
-
I'm actually not that concerned. Considering that 150Mbps 10-bit is so nice and the UHS-II cards are so expensive, I probably would have ended up with the same card even if I knew about it before buying. Plus I shoot a lot of footage and so I have large capacity cards as well.
-
Thanks - I'll check that out. I've just done a bit of googling about the speed performance of SD cards, and the short answer is that I can't find a good answer. The longer answer is that there are a number of things that might be going on here, but I can't verify whether they are or not. There are two candidates that I can think of: one is the type of file system used on the card, and the other is the pattern of where each chunk of data is written to the card (interleaving).

This is interesting: (Source) It gives you an idea of the complicated housekeeping going on - you don't just write a block of data to an empty slot, you also have to update the table of contents. This all references the FAT file system (of which there are multiple versions), and I'm not sure, but there may be different housekeeping requirements between different file systems.

The second potential source of performance differences could be timing and interleave issues. Interleave is something that happens on physical disk drives. Imagine you have a spinning disk and you write a block of data that takes a quarter turn, and the drive then needs to ask for the next block of data, but by the time that block arrives the disk has spun a little further. If the disk were organised so that the next block's location came straight after the previous one, you would have to wait for the disk to make almost a complete 360-degree rotation before that location came around again. To solve this you would format the disk so that the blocks were arranged to minimise that waiting time. With an SD card there is obviously no physical movement, but there might still be timing issues. For example, it might take time to switch between different blocks on the card, or there might be modes where missing a cycle means waiting for the next cycle to begin, or whatever. The quote above indicates that for every chunk of data written, other areas also have to be updated, so the largest block size would be useful because you write more data per round of housekeeping; however, if every write took a whole cycle, it might be useful to have smaller cycles so the housekeeping writes spend less time waiting for a cycle to end. I'm really just guessing with this stuff, but I know from older technologies like physical HDDs and memory timings that these things exist there and might be at play here.

EDIT: I should add that I'm not even sure of the terminology in use here. There are even three ways to format a disk. IIRC, back in the day there were two: normal and low-level. The normal one overwrote the whole disk but kept the interleaving the same, and the low-level one let you change the interleaving pattern and also overwrote the whole disk. Overwriting the whole disk has the advantage of detecting bad sectors and marking them in the disk's index so they won't be used. Then along came the quick format, which just overwrote the file allocation table but left the data intact on the drive. Somewhere along the way the quick format became the normal format and the low-level format became a mystery. I think manufacturers got better at optimising interleave patterns and consumers were becoming less technically educated, so they just stopped letting people play with them. I know that if you choose low-level format in a camera now it definitely doesn't ask you for interleave options, so who knows whether the camera just leaves it alone, has a one-interleave-fits-all approach, or runs a bunch of tests and chooses the fastest one like the old software used to (I very much doubt that!).
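If anyone wants to test the block-size idea rather than guess, here's a minimal sketch (mount path and sizes are hypothetical) that writes a fixed amount of data to the card in different chunk sizes and reports the sustained speed. The OS cache and the card's own controller will blur the picture somewhat, so treat the numbers as indicative only.

```python
# Minimal sketch: sustained write speed to a mounted SD card at a few chunk sizes.
import os
import time

CARD_PATH = "/Volumes/SDCARD/speedtest.bin"   # wherever the card is mounted (hypothetical)
TOTAL_BYTES = 512 * 1024 * 1024               # write 512 MB per test

for chunk_mb in (1, 4, 16):
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(CARD_PATH, "wb") as f:
        written = 0
        while written < TOTAL_BYTES:
            f.write(chunk)
            written += len(chunk)
        f.flush()
        os.fsync(f.fileno())                  # make sure the data actually hits the card
    elapsed = time.perf_counter() - start
    print(f"{chunk_mb} MB chunks: {TOTAL_BYTES / elapsed / 1e6:.1f} MB/s")
    os.remove(CARD_PATH)
```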
-
Yeah, that video is interesting... it's one of a few from him that I've found useful.
-
I just did a bit of poking around and it seems that V30 refers to a minimum sustained write speed of 30MB/s (V60 being 60MB/s, and V90 being 90MB/s). So you could have a 59MB/s card that is only rated V30 because it doesn't quite get to V60. However, all that said, my V30 card wrote 80MB/s continuously on my computer, but couldn't manage more than about 10s of recording in the 400Mbps mode, despite the fact that 400Mbps is 50MB/s and 50MB/s is definitely less than 80MB/s. What external formatting did you do to your ADATA card that made it work on your GH5? It'll depend on the card... as I said above, V30 could be up to 59MB/s, which is enough. I suspect that it might involve data stream variations too. If the 400Mbps is always 400 then that's one thing, but if it goes up to 600 for a couple of seconds because of crazy movement in the scene then it might overrun the buffer, and it wouldn't matter that it went down to 200 for the two seconds after that.
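Just to put numbers on the conversion and the spike scenario above (the 600Mbps peak is hypothetical, purely to illustrate the point):

```python
# Codec bitrates are quoted in megabits per second, card speed classes in megabytes per second.
def mbps_to_mb_per_s(megabits_per_second: float) -> float:
    return megabits_per_second / 8  # 8 bits per byte

average_bitrate = 400   # GH5 ALL-I mode, Mbps
peak_bitrate = 600      # hypothetical short-term spike, Mbps
card_sustained = 30     # what a V30 rating guarantees, MB/s

print(f"average: {mbps_to_mb_per_s(average_bitrate):.0f} MB/s")  # 50 MB/s
print(f"peak:    {mbps_to_mb_per_s(peak_bitrate):.0f} MB/s")     # 75 MB/s
print("V30 guarantee covers the average:",
      card_sustained >= mbps_to_mb_per_s(average_bitrate))       # False
```

So a card that only just meets its V30 guarantee has no chance, and even a faster card needs headroom above the average bitrate.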
-
Be careful, there are other variables. I recently bought a V30 SanDisk Extreme Pro 95MB/s SD card for my GH5 but it wouldn't record 400Mbps (around 50MB/s) video because it is only a UHS-I card, and the GH5 doesn't write fast enough in that mode unless you get a UHS-II card (which are frightfully expensive). In your case I don't think you need UHS-II, but be careful not to oversimplify things. What camera are you buying it for?
-
Panasonic announcing a full frame camera on Sept. 25???
kye replied to Trek of Joy's topic in Cameras
Tony thinks m43 will die... His main point was that in a shrinking market Panasonic can't afford to split their R&D budget across two sensor sizes, and it's likely they'll choose FF over m43. He also made the point that m43 became popular initially because it enabled smaller, cheaper cameras, and that advantage came from it being mirrorless and having a smaller sensor; but we now have FF mirrorless and there's much less of a cost difference from sensor size, so its advantages are mostly gone.
-
Learning about any art form is a double-edged sword. On the up-side you learn to appreciate truly great artwork, but on the down-side you also realise that most of what you see is far from great. I think I'm lucky in a way, because as long as there's a plot that I'm semi-interested in, things like acting or whatever have to be pretty bad for me to notice them. I know other people who can't handle a great story if even one of the actors is mediocre, and I know a guy who is so into interior decorating and set design that in The Sixth Sense he worked out that the guy was dead because in the restaurant scene with his wife the flowers on the table showed it was set for one person, not two!
-
and HIS professor might have told him you only need a 'normal' lens.....
-
Reminds me of a saying about continuity. "If people notice continuity problems then your film is crap" Agreed. Things either serve the story..... or don't.
-
My impression is that people don't like backgrounds that are unnaturally blurry. Hold your finger up close to your face and focus on it. Now, while maintaining focus on your finger, become aware of the items in the location you're in and how much detail you can still make out. It is normally quite a lot, although if you did this with test charts you would obviously find that fine detail is obscured. Your eyes don't have as large an aperture as you might think. However, in video you are able to look at the out-of-focus areas directly, so any detail present there can be distracting; it's not a straight aperture conversion. Basically, the human eye has a relatively small aperture, so the comparison is only a loose one, but the further you push beyond what the eye does, the more unnatural it looks. I completely agree that it's a photographer thing: photographers have learned to idolise shallow DoF because it is associated with portraiture and expensive lenses, and it isn't the consumer so much as the photographer lusting after a completely unnatural sea of blur behind the subject.