Posts posted by kye

  1. 21 hours ago, stephen said:

    There are no colors yet in RAW.

    Manufacturers design the Bayer filter for their cameras, adjusting the mix and strength of various tints in order to choose what parts of the visible and invisible spectrum hit the photosites in the sensor.

    This is part of colour science too.  Did you even watch the video?

  2. You should all hear how the folks over at liftgammagain.com turn themselves inside out over browser colour renderings...   almost as much as the client watching the latest render on grandma's 15-year-old TV and then breathing fire down the phone about how the colours have all gone to hell...

    I'm sure many people on here are familiar with the issues of client management, but imagine if colour was your only job!!

  3. 4 hours ago, capitanazo said:

    Hi, I want to know which codecs get GPU-accelerated playback in Premiere or Resolve.

    I have to work with a lot of tracks and clips, and I get lag with my CPU at 100% while my GPUs practically do nothing lol.

    I've got an RX 570 8GB and a Ryzen 5 1600, with 16GB of DDR4 RAM and SSD storage for the cache and the files.

    Any codec that would increase performance and use the GPU would be appreciated.

    Good question.

    Here's some info:

    Quote

    However for editing, VFX and grading, the compressed data needs to be decompressed to the full RGB per-pixel bit depth, which can use four times or more the processing power of an HD image for the same real-time grading performance. The decompression process, like compression, uses the CPU, so heavily compressed codecs need more powerful CPUs with a greater number of cores. H.264 and H.265 are heavily compressed formats and, while not ideal for editing, are often used by lower cost cameras. If you use these types of compressed codecs you will need a more powerful CPU, or be prepared to use proxies or Resolve's optimised media feature.

    Once the files are decompressed, DaVinci Resolve uses the GPU for image processing, so the number of GPU cores and the size of GPU RAM become very important factors when dealing with UHD and 4K-DCI sources and timelines. For VFX, each layer of images uses GPU memory, so any GPU with a small amount of memory will take a performance hit as you add layers, up to the point where the GPU simply has insufficient memory and performance.

    For uncompressed images in industry standards like DPX or EXR, particularly with 16-bit files, the CPU has an easier time as these are efficient codecs. However, they place greater demands on the disk array, RAID controller, storage connection and even the file system itself.

    Audio facilities who plan to use flattened HD video files won't need as powerful a GPU or as much disk I/O, as audio files by their nature are smaller than video. But remember, if you are importing a DaVinci Resolve project file or sharing projects on a central database, you may be opening projects with complex video timelines, a variety of image formats and codecs, and potentially demanding VFX elements, so even the most basic audio system should be prepared for these demands.

    (Source)

    However, it seems hardware acceleration is supported for some codecs.  From the V14 release notes:

    Quote

    • Added support for hardware accelerated HEVC decode on macOS High Sierra
    • Added support for hardware accelerated HEVC encode on macOS High Sierra on supported hardware
    • Added support for hardware accelerated HEVC decode on supported NVIDIA GPUs on DaVinci Resolve Studio on Windows and Linux

    Unfortunately there doesn't seem to be a complete list anywhere; the "what's new" feature lists are only incremental, and BM don't use consistent language in those posts, so searching for phrases doesn't work either (sometimes they say "hardware accelerated", sometimes "GPU accelerated", etc.).

    My guess is that you should just set up a dummy project with a high-resolution output and export it in every codec you can think of (just queue them up, hit go and walk away), then pull all of them into a timeline and compare the FPS and CPU/GPU load for each type.  It's a bit of admin you shouldn't have to do, but it will definitely answer your question.
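
    If you'd rather generate the test files outside of Resolve, something like this would do it.  It's just a rough sketch using ffmpeg (assuming it's installed and on your PATH), and the codec list is only a starting point:

    # Sketch: generate one UHD test clip per codec so they can all be pulled
    # into a timeline and compared for playback FPS and CPU/GPU load.
    import subprocess

    # output filename -> ffmpeg encoder arguments (standard ffmpeg encoders)
    CODECS = {
        "h264.mp4":   ["-c:v", "libx264", "-crf", "18", "-pix_fmt", "yuv420p"],
        "h265.mp4":   ["-c:v", "libx265", "-crf", "20", "-pix_fmt", "yuv420p"],
        "prores.mov": ["-c:v", "prores_ks", "-profile:v", "3", "-pix_fmt", "yuv422p10le"],  # ProRes 422 HQ
        "dnxhr.mov":  ["-c:v", "dnxhd", "-profile:v", "dnxhr_hq", "-pix_fmt", "yuv422p"],
    }

    for name, args in CODECS.items():
        # 30 seconds of generated UHD test pattern at 25fps
        subprocess.run(["ffmpeg", "-y",
                        "-f", "lavfi", "-i", "testsrc2=size=3840x2160:rate=25:duration=30",
                        *args, name], check=True)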

  4. 5 hours ago, BTM_Pix said:

    I think it's a bit of a double whammy, in that gimbal options for the Pocket4K (as you can see from this thread and the separate one) are a bit sketchy at the moment, as is the lack of IS on a lot of the lenses people are using.

    The Sigma 18-35mm has become pretty much the de facto "standard" lens over the past few years, and whilst its lack of IS didn't really matter much when the camera it was mounted to had IBIS and/or was on a gimbal, it really does matter when neither of those avenues is available.

    That's why I think people might want to consider options like that cheap Tamron and the like if they are going to be shooting handheld because, pragmatically speaking, the IS arguably matters more in that context than the pure optical performance advantage of something like the 18-35mm that doesn't have it.

    Or they could treat it like a cinema camera?

    Besides, we all know how that song about camera stabilisation went - "if you liked it then you should have put a rig on it".

  5. 12 hours ago, BTM_Pix said:

    (Ignore the small tripod in the shot obviously as it was just used to stand it up for the picture)

    Actually, I think that the tripod might be the most significant piece of equipment in this entire conversation.

    So far I have seen only a couple of videos where I can appreciate the IQ, with the rest of them leaving me wondering if this camera automatically gives you Parkinson's as soon as you hit record.

    I spend the whole time thinking things like "It looks sunny, but maybe the wind is really cold" .... "other people don't seem to be dressed warmly either" .... "it is high up, so maybe they ran up the hill" .... "the girl being filmed looks healthy, so they're probably not going through withdrawal" ....

    I wish I was joking!

  6. 5 hours ago, leeys said:

    @kye, in your case it's because, for UHS-I cards, Panasonic did not implement SDR104 bus speeds in the GH5, which limits UHS-I cards to the slower SDR50 mode, giving, as you can guess, 50 MB/s. As transfer rates are never 100% efficient, my guess is that even in the high 40s it eventually catches up and stops recording.

    More here: https://www.sdcard.org/developers/overview/bus_speed/

    I'm actually not that concerned.  Considering that 150Mbps 10-bit is so nice and UHS-II cards are so expensive, I probably would have ended up with the same card even if I had known about it before buying.  Plus I shoot a lot of footage, so I have large-capacity cards as well.

  7. 11 hours ago, Shirozina said:

    I use this app

    https://www.sdcard.org/downloads/formatter_4/

    No idea why it works better than in-camera formatting though?

    Thanks - I'll check that out.

    I've just done a bit of googling about speed performance of SD cards, and the short answer is that I can't find a good answer.

    The longer answer is that there are a number of things that might be going on here, but I can't verify if they are or not.

    There are two candidates that I can think of.  One is the type of file system that is used on the card, and the other is the pattern of where each chunk of data is written to the card (interleaving).

    This is interesting:

    Quote

    Reading and writing to an SD Card and MultiMediaCard is generally done in 512 byte blocks, however, erasing often occurs in much larger blocks. The NAND architecture used by SanDisk and other card vendors currently has Erase Block sizes of (32) or (64) 512 byte blocks, depending on card capacity. In order to re-write a single 512 byte block, all other blocks belonging to the same Erase Block will be simultaneously erased and need to be rewritten.

    For example—writing a file to a design using a FAT file system takes three writes/updates of the system area of FAT and one write/update of the data area to complete the file write. First, the directory has to be updated with the new file name. Second, the actual file is written to the data area. Third, the FAT table is updated with the file data location. Finally, the directory is updated with the start location, length, date and time the file was modified. Therefore, when selecting the file size to write into a design, the size should be as large as possible and a multiple of the Erase Block size. This takes advantage of the architecture.

    Some designs update the FAT table for every cluster of the data file written. This can slow the write performance, because the FAT table is constantly being erased and re-written. The best approach is to write all the file clusters, then update the FAT table once, to avoid the performance hit of erasing and re-writing all the blocks within the Erase Block multiple times.

    (Source)

    This gives you an idea of the complicated housekeeping going on - you don't just write a block of data to an empty slot - you have to update the table of contents as well.  This all references the FAT file system (of which there are multiple versions), and I'm not sure, but there may be different housekeeping requirements between different file systems.
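
    To make that concrete, here's a toy sketch of why chunk size and deferred FAT updates matter.  It's an illustration only, assuming the 64 x 512-byte Erase Block described in the quote above (real card firmware is far more sophisticated):

    # Toy model: count erase-block cycles for writing a file to flash,
    # assuming every write re-writes the whole Erase Block it touches.
    BLOCK = 512                # bytes per write block
    ERASE_BLOCK = 64 * BLOCK   # bytes per erase block (32768)

    def erases_for_write(file_size, chunk_size, fat_update_per_chunk):
        erases = 0
        written = 0
        while written < file_size:
            chunk = min(chunk_size, file_size - written)
            erases += -(-chunk // ERASE_BLOCK)  # erase blocks this chunk spans
            if fat_update_per_chunk:
                erases += 1                     # FAT's erase block re-written
            written += chunk
        if not fat_update_per_chunk:
            erases += 1                         # single FAT update at the end
        return erases

    size = 10 * 1024 * 1024  # a 10MB file
    print(erases_for_write(size, ERASE_BLOCK, True))   # FAT updated per chunk: 640
    print(erases_for_write(size, ERASE_BLOCK, False))  # FAT updated once: 321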

    The second potential source of performance differences could be timing and interleave issues.

    Interleave is something that happens on physical disk drives.  Imagine a spinning disk: you write a block of data that takes a quarter of a turn, then the drive asks for the next block of data, but by the time that data arrives the disk has spun a little further.  If the disk were organised so that the next block's location was directly after the previous one, you'd have to wait for the disk to make almost a complete 360-degree rotation before that location came around again.  To solve this, disks were formatted so that the blocks were arranged to minimise that waiting time.
    With an SD card there is obviously no physical movement, but there might still be timing issues.  For example, it might take time to switch between different blocks on the card, or there might be modes where missing a cycle means waiting for the next cycle to begin.  The quote above indicates that for every chunk of data written, other areas also have to be updated, and that the largest block size is useful because you write more data per round of housekeeping.  However, if every write took a whole cycle, smaller cycles might be useful so that the housekeeping writes spend less time waiting for a cycle to end.
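
    As a toy illustration of the old HDD case (made-up numbers, not measurements of any real drive):

    # Toy numbers: rotational latency with and without interleaving on an
    # old spinning drive, where the controller needs 2 sector-times to
    # prepare the next transfer.
    RPM = 3600                         # old MFM-era drive speed
    rotation_ms = 60_000 / RPM         # ~16.7 ms per full rotation
    sectors_per_track = 17
    sector_ms = rotation_ms / sectors_per_track

    # 1:1 layout: the next logical sector has already passed by the time the
    # controller is ready, so it waits almost a full rotation.
    wait_1to1 = rotation_ms - sector_ms
    # 3:1 interleave: consecutive logical sectors are placed 3 physical
    # sectors apart, so the controller only waits 2 sector-times.
    wait_3to1 = 2 * sector_ms
    print(f"1:1 wait ~{wait_1to1:.1f}ms, 3:1 wait ~{wait_3to1:.1f}ms per sector")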

    I'm really just guessing with this stuff, but I know from older technologies like physical HDDs and memory timings that these mechanisms exist, and they might be at play here.

    EDIT:

    I should add that I'm not even sure of the terminology in use here.  There are even three ways to format a disk.  IIRC, back in the day there were two: normal and low-level.  A normal format overwrote the whole disk but kept the interleaving the same, while a low-level format let you change the interleaving pattern and also overwrote the whole disk.  Overwriting the whole disk has the advantage of detecting bad sectors and marking them as bad in the disk's index so they won't be used.

    Then along came the quick format, which just overwrote the file allocation table but left the data intact on the drive.  Somewhere along the way the quick format became the normal format, and the low-level format became a mystery.  I think manufacturers got better at optimising interleave patterns and consumers were getting less technically educated, so they just stopped letting people play with them.  I know that if you choose low-level format in a camera now it definitely doesn't ask you for interleave options, so who knows if the camera just leaves it alone, has a one-interleave-fits-all approach, or runs a bunch of tests and chooses the fastest one like the old software used to (I very much doubt that!).

  8. 56 minutes ago, Shirozina said:

    The GH5 needs V60 or V90 for the 400Mbps codec, so it's no wonder a V30 didn't work. For the OP, a V30 (30MB/s sustained write) card should work for 200Mbps as that's 25MB/s, but brands do seem to differ, as I have an ADATA V90 card that won't write 400Mbps on my GH5 without external formatting. Read the reviews before you buy is my advice, and don't go on the V-rating specs alone.

    I just did a bit of poking around, and it seems that V30 only guarantees a minimum sustained write speed of 30MB/s (V60 being 60MB/s, and V90 being 90MB/s).  So you could have a card that sustains 59MB/s but is only rated V30 because it doesn't quite reach V60.

    However, all that said, my V30 card wrote 80MB/s continuously on my computer but couldn't manage more than about 10s of recording in the 400Mbps mode, despite the fact that 400Mbps is 50MB/s, and 50MB/s is definitely less than 80MB/s.

    What external formatting did you do to your ADATA card that made it work on your GH5?

    55 minutes ago, androidlad said:

    Surprisingly V30 is enough for 400Mbps on X-T3.

    It'll depend on the card...  as I said above, a V30 card could sustain up to 59MB/s, which is enough.

    I suspect that data stream variations might be involved too.  If the 400Mbps stream is always 400 then that's one thing, but if it spikes to 600 for a couple of seconds because of crazy movement in the scene, it might overrun the buffer - and it wouldn't matter that it dropped to 200 for the two seconds after that.
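
    A back-of-the-envelope sketch of that buffer argument (every number here is invented for illustration - the GH5's real buffer size isn't published as far as I know):

    # Invented numbers: simulate a camera buffer when the encoder's output
    # spikes above what the card can sustain, then drops below it.
    CARD_MBPS = 50          # sustained card write speed (400Mbps / 8)
    BUFFER_MB = 40          # hypothetical internal buffer size

    # per-second encoder output in MB/s: nominal 50, spikes to 75 for 2s,
    # then drops to 25 for 2s (so the average is still 50)
    stream = [50, 50, 75, 75, 25, 25, 50, 50]

    fill = 0.0
    for t, produced in enumerate(stream):
        fill = max(0.0, fill + produced - CARD_MBPS)
        status = "OVERRUN - recording stops" if fill > BUFFER_MB else "ok"
        print(f"t={t}s  buffer={fill:5.1f}MB  {status}")

    The spike overruns the buffer even though the average bitrate matches the card speed, and the quiet stretch afterwards doesn't save the take.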

  9. Be careful, there are other variables.

    I recently bought a V30 SanDisk Extreme Pro 95MB/s SD card for my GH5, but it wouldn't record 400Mbps (around 50MB/s) video because it's only a UHS-I card and the GH5 doesn't write fast enough in that mode - it will if you get a UHS-II card (which are frightfully expensive).

    In your case I don't think you need UHS-II, but be careful not to oversimplify things.

    What camera are you buying it for?

  10. Tony thinks m43 will die...

    His main point was that in a shrinking market Panasonic can't afford to split their R&D budget across two sensor sizes, and it's likely they'll choose FF over m43.

    He also made the point that m43 initially became popular because it enabled smaller, cheaper cameras, and that advantage came from being mirrorless and having a smaller sensor; but now we have FF mirrorless and the cost difference from sensor size is much smaller, so its advantages are mostly gone.

  11. Learning about any art form is a double-edged sword.  On the up-side you learn to appreciate truly great artwork, but on the down-side you also realise that most of what you see is far from great.

    I think I'm lucky in a way, because as long as there's a plot that I'm semi-interested in, things like acting have to be pretty bad for me to notice them.  I know other people who can't enjoy a great story if even one of the actors is mediocre, and I know a guy who is so into interior decorating and set design that he worked out the guy in The Sixth Sense was dead because, in the restaurant scene with his wife, the flowers on the table showed it was set for one person, not two!

  12. 2 hours ago, noone said:

    And HIS professor might have told him he only needed three lenses,

    A 28mm, a 50mm and either an 85mm or a 135mm.       Zooms?     pfffft!

    and HIS professor might have told him you only need a 'normal' lens.....

  13. 8 hours ago, thebrothersthre3 said:

    One big idea behind blurring the backdrop is to keep the viewer's focus on the subject. However, sometimes it works against itself: the background blur can draw attention to itself. I see this in a lot of movies these days. Everything is shallow DOF, and it sometimes looks distracting and unnatural. If I care about the show or movie I'll be focusing on the subject, not a car moving by in the back.

    Reminds me of a saying about continuity.

    "If people notice continuity problems then your film is crap"

    3 hours ago, Kisaha said:

    More experienced or advanced filmmakers just use that as an artistic and expressionistic tool, because they know better...

    Agreed.  Things either serve the story..... or don't.

  14. 23 hours ago, Mattias Burling said:

    For those that don't have the energy to watch the video, the conclusion was basically: people don't like too-blurry backgrounds.

    My impression is that people don't like backgrounds that are unnaturally blurry.

    Hold your finger up close to your face and focus on it.  Now, while maintaining focus on your finger, become aware of the items in the location you are in and how much detail you are able to ascertain.  It is normally quite a lot, although if you do this with test charts you will obviously find that the detail is obscured.

    Your eyes don't have as large an aperture as you might think.  In video, however, you can look directly at the out-of-focus areas, so any detail present there can be distracting - it's not a straight aperture-for-aperture conversion.

    Basically, the human eye has a relatively small aperture and the translation to lens apertures is a loose one, but the further you push beyond what the eye does, the more unnatural it looks.

    I completely agree that it's a photographer thing: photographers have learned to idolise shallow DoF because it's associated with portraiture and expensive lenses, and it isn't the consumer so much as the photographer lusting after a completely unnatural sea of blur behind the subject in their images.

  15. Have a search on LiftGammaGain.com where the professional colourists hang out - I've seen lots of threads where they give their training recommendations.

    I'd suggest that you get a course that helps you with the part of grading that you aren't strong in.

    ie:

    • what the dials and knobs do (eg, how to make qualifiers, use tracking, etc)
    • what the aesthetic implications are for the various effects (eg, warmer colours are happier, etc)
    • what aesthetics do for story and perception (eg, when to pop the colours and when not to)

    You need all of these to be a good colourist.  No point knowing exactly how to make footage look happy but doing it on horror films.

  16. There are three levels of background blur...

    1. subject in front of in-focus background (less 3D appearance - use this when the background is part of the subject)
    2. subject in front of slightly blurred but still discernible background (natural 3D appearance - use when background is relevant but not the subject)
    3. subject in front of very blurred and obscured background (unnatural appearance - use when background is distracting or competes with the subject or when you want an unnatural aesthetic)

    As @Mattias Burling says, very wide apertures are required to get even slight background blur on subjects that are further away from the camera, or where the background is close to the subject.

  17. On 10/25/2018 at 2:18 AM, katlis said:

    I'm limited in resolution with the free version of Resolve, but I may just have to make the switch and get the full version!

    I'd highly recommend it.

    If nothing else, the ability to make localised adjustments as well as global adjustments is a complete night-and-day difference in the results you can achieve.  The free version of Resolve is like Photoshop without layers.

  18. If you stick with the concept that BM makes cinema cameras, then I can't think of any logical set of features that would fit the pricing gap in the lineup, unless it was to cannibalise the URSA.  As a small company with limited product development capabilities (and a serious expectation to support and improve a camera that people haven't even received yet), I'd say they wouldn't be about to release anything else.

    At the point they start committing to any type of spec for the next one:

    1. They'd really want to understand what the raft of FF mirrorless cameras will do to the market
    2. They'd want to understand how well BRAW is doing in the market
    3. It would probably be 8K

    As a company that makes cinema cameras, there isn't a logical unmet need in the middle of their range, but there is one above it.  People talk about how BM colour science is right up there with ARRI, and if they really wanted to think big, they could take their industry knowledge upmarket.  Making an 8K BRAW Super URSA for mid-level cinema camera money would do for the top end what the BMPCC v1 and v2 did for 1080 RAW and 4K RAW.

    I'm not sure how many people on here are familiar with what the top of the industry looks like and how far into the stratosphere it goes.  For example, Resolve goes right to the state of the art in colour, and in that arena its competitors are things like Baselight, which I'm not sure I've heard anyone here even mention.  So they're not just about bringing normal cinema tech down in price - they're also playing well above that.

  19. 7 hours ago, Danyyyel said:

    This looks exactly like those Sony rumors that were floating around, which proved to be ludicrous. You have to be brain dead to think that Panasonic would release a FF camera that would shoot raw for between $3-4K. People are starting to believe that these corporations are Santa - as if they would render all their other camera lines obsolete just for fun.

    Yes, the company responsible for building the GH line of cameras, which is famous for pushing leading edge technology at this price point, couldn't possibly make a camera that has the RAW capabilities of a $1300 camera just released, for only three times the price.  That is completely beyond comprehension!

  20. 1 hour ago, Andrew Reid said:

    Maybe the battery door falling off is actually to tell us that the battery is at 5% and really must be changed now, even though the meter is showing full :)

    ...and the power consumption of the magnetic lock on the battery door would explain the battery life!!
