DaVinci Resolve performance increase


zerocool22

Hi,

I am still running a GTX 1080 as my GPU, so I am thinking about an upgrade. Has anyone made a similar leap recently and felt a great increase in performance? DaVinci Resolve often freezes during playback; I have to wait a bit and then it starts again. (I should note that I do not copy my footage to internal SSD drives, but usually just run it from external drives, which might have a bigger impact than my GPU, but I'm not sure at this point.)

Cheers,

3 hours ago, zerocool22 said:

Having your files on an SSD will have a huge impact on editing performance. If you're going to upgrade anything, I'd suggest doing this first. Depending on what you're doing in the edit/colour grade, this might be all you need.

Assuming your machine has an SSD, just copy a few dozen files to it, cut them up in a timeline, and see how it performs.
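If you want to put numbers on that comparison first, a rough sequential-read check is enough to see how the external drive stacks up against the internal SSD. This is just a sketch (the paths are hypothetical, and the OS file cache can inflate results for recently-read files, so use a clip larger than RAM for a fair test):

```python
import os
import time

def read_throughput_mb_s(path, chunk_mb=8):
    """Sequentially read a file and return the throughput in MB/s."""
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(chunk_mb * 1024 * 1024):
            pass
    elapsed = time.perf_counter() - start
    return (size / (1024 * 1024)) / elapsed

# Hypothetical paths -- run the same clip from both drives and compare:
# print(read_throughput_mb_s("E:/footage/clip.mp4"))  # external drive
# print(read_throughput_mb_s("C:/footage/clip.mp4"))  # internal SSD
```

If the external drive reads dramatically slower, that points at the media drive rather than the GPU.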

A 1080 Ti already gives about a 25% performance increase, sitting only 10% below a 2080. Its 11GB of VRAM doesn't hurt either, 3GB above the recommended 8GB minimum for 4K video.

So either a 1080 Ti on the very cheap, a much newer 3070, or a big boost with an RTX 3080 or 4070 would be my recommendations.

Sounds like a bottleneck with the media drive. As others said, try using a faster drive with a fast connection to see if this is the issue.

It could also be a codec issue, if you're only having trouble with certain file types. I saw a big difference with H.264 and H.265 when I enabled Quick Sync on my current machine, which lets Resolve use the hardware decoders built into my Intel CPU.
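One quick way to see which file types you're actually dealing with is to inspect the codec and pixel format with ffprobe (part of FFmpeg). A minimal sketch, assuming ffprobe is on your PATH; `clip.mp4` is a placeholder:

```python
import json
import subprocess

def build_probe_cmd(path):
    """ffprobe invocation reporting codec and pixel format of the first video stream."""
    return ["ffprobe", "-v", "error", "-select_streams", "v:0",
            "-show_entries", "stream=codec_name,pix_fmt",
            "-of", "json", path]

def probe_video(path):
    """Run ffprobe and return (codec_name, pix_fmt) for the first video stream."""
    out = subprocess.run(build_probe_cmd(path), capture_output=True,
                         text=True, check=True).stdout
    stream = json.loads(out)["streams"][0]
    return stream["codec_name"], stream["pix_fmt"]

# e.g. probe_video("clip.mp4") returning ("hevc", "yuv422p10le")
# would mean 10-bit 4:2:2 H.265 -- the case that needs Quick Sync to decode.
```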

I went through this for many years; it really depends on your source footage. My problem was that I kept throwing GPUs at it that could not hardware-accelerate H.265 4:2:2, which is what the Canon cameras use. The biggest performance increase I gained was when I built my current custom system with an Intel Quick Sync capable integrated GPU and paired it with an RTX 4080.

I can now edit anything coming out of my C70, R7, or R5 up to 8K without proxies, caching, or dropped frames, until I start adding Fusion effects, at which point it is hit or miss whether I have to cache them. DR is fantastic; Fusion is nothing short of a nightmare when using complex transitions or effects, at least for me, even with my current system. So for my workflow I edit the entire project, color grade it, then add the Fusion effects last.

I doubt your SSDs are your bottleneck. I sometimes go back to old archived projects to pull b-roll out of them, and I also have a large b-roll library stored on very slow spinning NAS-grade disks, and DR never drops a frame while playing those old projects or b-roll clips. I have even edited a few and re-exported them at a client's request, and it feels just as fast as my NVMe storage.

Throughout my years of fiddling with DR I have also learned that Task Manager in W10 and W11 will not show you the bottleneck: the GPU will seem to be sleeping, so will your storage throughput, and the CPU will sit around 30%, yet DR is still dropping frames or stuttering. The single biggest hardware performance gain you can get, hands down, if you work with H.265 4:2:2 footage is an Intel Quick Sync capable CPU. Unfortunately, to get that CPU you will need to upgrade pretty much everything (motherboard, RAM, CPU, power supply, etc.).

Below is my current system build. I found it better to build the system myself vs. going with another prebuilt system. It has been flawless, and the way DR automatically switches between the QS GPU and the RTX GPU when rendering and playing back the project is perfect. Also, with the RTX 4080 I am able to export to AV1, which is the codec of the future; all of the main platforms have added AV1 support and at some point will switch over to it as the new standard.

If you aren't quite ready for such a big upgrade, the biggest performance increase you can get right now for free, without dealing with proxies or render cache, is to set your timeline resolution to 720p and make sure your export presets are set to 4K.

The render cache is where your drive throughput matters. I would always set the render cache to the fastest non-OS disk(s) in your system; that speeds up both generating the cache and playing back from it. Make sure the render cache is also set to 720p for faster generation and playback.

Keep in mind also that these days you cannot just upgrade your GPU. I tried that and ended up with the card sitting in a box for a month while I researched and spec'd out a new system. The new GPUs need new power supplies, take up a massive amount of room in the case, etc. For the newer RTX GPUs you are pretty much looking at a new system build.

12 hours ago, PannySVHS said:

On paper that would be logical, but in practice it won't help at all. I had a 2080 Ti and DR was still pretty much unusable for Canon footage. Not a single NVIDIA GPU can accelerate H.265 4:2:2 10-bit footage; you really need Intel's QS GPU to see a real performance increase. For non-4:2:2 H.265 10-bit footage, you still won't get enough of a benefit to overcome stuttering with most 4K 60FPS H.265 10-bit Long GOP footage until you get into the RTX series. I have spent years testing nearly every performance aspect of DR, because time is money and my clients won't pay extra just because my editing workstation isn't fast enough.
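The pattern described above can be summarized in a small lookup. This is a deliberately simplified sketch of the public decode-support matrices (it assumes a recent NVIDIA RTX card and an 11th-gen-or-newer Intel iGPU; check the Puget Systems or Intel charts for your exact hardware):

```python
def hw_decoders(codec, pix_fmt):
    """Rough guide to which desktop hardware decoders handle a given stream.

    Simplified assumption: recent NVIDIA RTX (NVDEC) and 11th-gen+ Intel
    iGPU (Quick Sync). Codec/pix_fmt strings follow ffprobe's naming.
    """
    decoders = []
    eight_420 = pix_fmt == "yuv420p"        # 8-bit 4:2:0
    ten_420 = pix_fmt == "yuv420p10le"      # 10-bit 4:2:0
    ten_422 = pix_fmt == "yuv422p10le"      # 10-bit 4:2:2
    if codec == "h264" and eight_420:
        decoders += ["nvdec", "quicksync"]
    elif codec == "hevc":
        if eight_420 or ten_420:
            decoders += ["nvdec", "quicksync"]
        elif ten_422:
            decoders += ["quicksync"]  # only Intel decodes 10-bit 4:2:2 HEVC
    return decoders
```

The key row is the last one: 10-bit 4:2:2 H.265 falls through to Quick Sync only, which is exactly why a faster NVIDIA card alone doesn't fix Canon footage.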

5 hours ago, herein2020 said:

Thanks, that is quite the beast! I am on AMD right now though. Might need to change back then, if only Intel is killing it with H.265.

3 hours ago, zerocool22 said:

AMD is great for games and for saving money on the CPU, but only Intel has QS, and of course DR works best with NVIDIA's CUDA architecture. If you switch back to Intel, make absolutely certain that the CPU you pick supports QS, because many of them do not.

I used this chart as my starting point for my CPU. From there I picked my motherboard and so on.

2 hours ago, zerocool22 said:

Hmm, but I am probably going the BM route again soon, so with BRAW I might not have that issue.

With raw you will probably be fine; I was able to edit Canon Raw up to 4K 60FPS with my 2080 Ti with no problem. The problems start when you try to edit highly compressed codecs.

If you are switching to BRAW, I would do that first before deciding whether you need a new editing workstation.

5 hours ago, herein2020 said:

Intel Quick Sync has evolved a lot since its inception in 2011; the various CPU generations have different decoding and encoding support. Basically, it looks like you need an 11th-generation or later CPU to get 10-bit 4:2:2 HEVC hardware decoding.

See https://en.wikipedia.org/wiki/Intel_Quick_Sync_Video and https://en.wikipedia.org/wiki/Intel_Graphics_Technology#Capabilities_(GPU_video_acceleration)
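Following that rule of thumb, a small helper can flag whether a given Core model number is new enough. The parsing here is a heuristic for the common 4/5-digit desktop numbering (a hypothetical helper, not valid for every SKU; always verify the exact model against Intel's spec pages):

```python
import re

def intel_core_generation(model):
    """Heuristic: extract the generation from an Intel Core model string.

    e.g. 'i7-11700K' -> 11, 'i5-9600K' -> 9. Assumes the common
    4/5-digit desktop numbering; not valid for every SKU.
    """
    m = re.search(r"i[3579]-(\d{4,5})", model)
    if not m:
        return None
    digits = m.group(1)
    return int(digits[:2]) if len(digits) == 5 else int(digits[0])

def supports_hevc_422_decode(model):
    """True if the iGPU generation is 11 or newer (10-bit 4:2:2 HEVC decode)."""
    gen = intel_core_generation(model)
    return gen is not None and gen >= 11
```

So an i7-11700K or i9-13900K would qualify, while anything 10th-gen or older would not, regardless of how fast the cores are.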

17 hours ago, ac6000cw said:

Yes, with any custom system build you have to do your due diligence to ensure the result is what you expect, especially since technology moves so fast. There are many little details that can ruin a build: a PCIe lane shortage, power supply leads, case sizing, motherboard port placement, etc.

That's why I spent a month of my free time researching prior to building.

I'm on the verge of a computer upgrade (forced to move to Windows 11 before the end of 2025, because MS will pull the plug on Windows 10 support), and since my X-S20 records H.265 10-bit 4:2:2, I've been searching about these decoding issues on the timeline.

Bottom line: Puget Systems has a chart that is constantly updated, showing which codecs are hardware decoded (encoding is another story). H.265 10-bit 4:2:2 is currently ONLY supported by Intel CPUs using Quick Sync (which means you have to choose a CPU with an integrated GPU, even if you will use an external GPU).

https://www.pugetsystems.com/labs/articles/what-h-264-and-h-265-hardware-decoding-is-supported-in-davinci-resolve-studio-2122/

Worse: I've been trying for weeks to discover which codecs are hardware decoded by the newer Snapdragon CPUs, and even on Qualcomm's pages I could not get this info.

56 minutes ago, Marcio Kabke Pinheiro said:

The Puget Systems chart is a good start, but to be certain when I built my system, I used Intel's own chart. I also researched the different Intel CPU options to find the best value, meaning the best cost-to-performance ratio. You typically don't want the absolute fastest CPU they offer, but you also have to make sure you are not picking one with known stability or overheating problems; Intel, just like everyone else, has created a few duds in the past.

After picking the CPU you repeat the process for compatible motherboards, then the memory, power supply, and case. At least that's the way I have always built my systems.
