Advanced Member Nossgrr — Posted September 3, 2015

So, if you've been following this forum, you probably read about the AMD Fury X video card that just came out. It's ATI based, so no CUDA. My question is: let's say I get a new ATI card, how much of a performance hit would I take in 3D-Coat without CUDA? Will Vulkan help remedy this situation for ATI cards, assuming it offers a universal CUDA-like framework? I'm currently running an NVidia GTX 770; my options are to get a Fury X or wait for NVidia's next generation using HBM2 next year. Lots of assumptions and guesses in this post — I'd just like to get your opinions.
Member Rebelismo — Posted September 3, 2015 (edited)

Do you absolutely need a new card right now? I would personally wait until next year to see how Pascal compares against NVidia's current lines. If the estimates are true, we should see a significant jump in performance, so I don't see the need to upgrade to anything that's out there right now.
Reputable Contributor AbnRanger — Posted September 3, 2015

I think you'd see some level of mesh-handling improvement on the Fury X, simply because of the massive memory bandwidth. CUDA only works on the Voxel brushes in 3D-Coat, and Andrew hasn't recompiled the CUDA code for later versions of the toolkit (5, 6, or 6.5, for example) — it was written for CUDA 1 or 2 several years ago, so some newer CUDA tech simply isn't being applied. It benefits those brushes, but nothing else in the app. AO baking now uses OpenCL, which works on either AMD or NVidia cards. I'm glad AMD is really turning up the heat on NVidia; honestly, I think they hit NVidia directly on the chin with the Fury cards, and NVidia is probably rushing feverishly to get their own HBM cards out ASAP. I've been pretty upset with NVidia for going the opposite direction on memory bandwidth: 256-bit is simply pathetic for an expensive high-performance card, and in my experience it really bogs down mesh handling in 3D apps. I recently bought a Titan (original 6GB version) on eBay, which is basically a 780 Ti 6GB. It has a 384-bit memory bus and works really well in 3DC — so your card too is probably more than sufficient for 3D-Coat if you want to wait and see what both NVidia and AMD have coming up next.
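To put the memory-bandwidth point in rough numbers — a back-of-the-envelope sketch, not a benchmark; the bus widths and effective clocks below are the published specs for those cards, and the formula gives a theoretical peak that real workloads never fully reach:

```python
def peak_bandwidth_gbs(bus_width_bits, effective_clock_ghz):
    """Theoretical peak memory bandwidth in GB/s:
    bytes moved per transfer (bus width / 8) times transfers per second."""
    return bus_width_bits / 8 * effective_clock_ghz

# Titan / 780 Ti class: 384-bit GDDR5 at ~7 GHz effective
print(peak_bandwidth_gbs(384, 7.0))   # 336.0 GB/s
# Fury X: 4096-bit HBM at ~1 GHz effective
print(peak_bandwidth_gbs(4096, 1.0))  # 512.0 GB/s
# A 256-bit card at the same GDDR5 clock tops out at:
print(peak_bandwidth_gbs(256, 7.0))   # 224.0 GB/s
```

Which is the complaint in a nutshell: a wide bus (or HBM's very wide interface) moves far more mesh data per second than a 256-bit design, regardless of how fast the GPU cores themselves are.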
PolyHertz — Posted September 3, 2015

AFAIK Vulkan works similarly to DX12 in that it's built around asynchronous compute, something NVidia cards are currently quite a bit worse at than AMD cards. That said, 3DC has no Vulkan or DX12 optimizations currently, AFAIK, and the CUDA optimizations haven't been updated in many years.
Contributor Ascensi — Posted February 16, 2019

@Andrew Shpagin Any chance of seeing Vulkan support in 4.x or 5? We really need GPU acceleration for 8–16k GPU Ambient Occlusion generation, and ideally the option to spill over into host RAM when there isn't enough video RAM. Newer cards like the RTX 2080 Ti have 11GB of RAM, and a second card in SLI mode would allow access to 22GB of combined memory with combined GPU power, without bottlenecks. I find that creating AO at 4096 with 4x anti-aliasing on a project set to 16k texture resolution crashes often. I'm working with photo-scanned textures and need to paint at the highest resolution possible. My computer is an i7 at 4GHz with 32 GB of RAM, and virtual memory is set to 30 GB. The Vulkan API on a computer with multiple graphics cards may allow doubling or tripling the speed, since it supports multiple GPU devices and multiple GPU/CPU cores. Adding support for this may also prevent other crashes in the features you're adding, because they often seem related to 3D-Coat running out of resources. I hope you consider this change — thank you.
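The crashes at 16k are plausible on memory grounds alone. A rough footprint sketch (simple arithmetic, not how 3D-Coat actually allocates its layers — the channel counts and mip overhead are assumptions for illustration):

```python
def texture_bytes(resolution, channels=4, bytes_per_channel=1, mipmaps=False):
    """Approximate memory footprint of one square texture layer.
    A full mip chain adds roughly one third on top of the base level."""
    base = resolution * resolution * channels * bytes_per_channel
    return int(base * 4 / 3) if mipmaps else base

GIB = 1024 ** 3
# One 8-bit RGBA layer at 16k:
print(texture_bytes(16384) / GIB)                        # 1.0 GiB
# The same layer at 32-bit float per channel:
print(texture_bytes(16384, bytes_per_channel=4) / GIB)   # 4.0 GiB
```

So a handful of 16k layers, plus 4x-AA render targets for the AO bake, can plausibly exhaust an 11GB card — which is why the ability to fall back to host RAM matters as much as raw multi-GPU speed.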