Hi,
I see that NVIDIA support exists, and I'm wondering whether there are any plans to support AMD in the future?
Is there any workaround to set it up manually?
Thanks!
dbanny I’m afraid this is not up to us. NVIDIA does a great job providing software support for their GPUs: they develop the CUDA library, which is used pretty much everywhere in the AI world to speed up computations on the GPU. Obviously, CUDA only supports NVIDIA hardware. I’m afraid there is simply nothing comparable from AMD for their GPUs. At least not officially; I am not counting ZLUDA, which is hard to take seriously.
Andrey It’s not my native language either, no worries.
The face recognition models can be run on both AMD and NVIDIA cards.
Implementation is easier on NVIDIA, as you mentioned, but AMD is catching up with AI features that are worth implementing in the application. Also, PC users (especially on Windows) often prefer budget components, like AMD cards.
I have an RX 6800 XT, and I use it for gaming and for AI workloads that run inside Docker with the GPU shared over PCIe.
dbanny OK, let me explain what the AI/ML technology stack looks like. There are several layers involved:
GPU – Drivers – GPU Abstraction library for ML – ML framework – Application
Tonfotos is the application here; it does not deal with the GPU directly. It only deals with an ML framework. There are several popular frameworks, developed by giants like Google (TensorFlow), Facebook (PyTorch), etc. The most popular frameworks are free, which is why only IT giants of that scale can afford to throw hundreds of millions of dollars at developing them. We cannot, so we use one of the available frameworks.
In turn, ML frameworks do not deal with GPUs, or even GPU drivers, either. That would be too much of a hassle: all GPUs are different, with different capabilities, and it would be crazy to keep updating your framework every time a new GPU gets released. This is where abstraction libraries like CUDA come into play. They automatically detect what type of GPU you have and what it is capable of, use only the functions that are available, and emulate all the rest.
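To illustrate the layering (a minimal sketch, using PyTorch purely as an example of one such framework, not necessarily the one Tonfotos uses): the application only asks the framework for a device, and CUDA underneath does all the hardware probing.

```python
# Minimal sketch: the application talks only to the framework (PyTorch here);
# CUDA underneath it has already probed the hardware.
import torch

def pick_device() -> torch.device:
    if torch.cuda.is_available():
        # CUDA detected a supported GPU; we can ask what it found.
        major, minor = torch.cuda.get_device_capability(0)
        print(f"Using {torch.cuda.get_device_name(0)} (compute {major}.{minor})")
        return torch.device("cuda")
    # No supported GPU: fall back to the CPU, application code is unchanged.
    print("No CUDA-capable GPU found, falling back to CPU")
    return torch.device("cpu")

device = pick_device()
x = torch.randn(4, 4, device=device)  # identical code on GPU and CPU
```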
NVIDIA does a great job developing CUDA, and they work with the major framework developers to make sure their frameworks get the best possible support and work with CUDA flawlessly. They spend a lot on this. That is why CUDA/NVIDIA is the de facto standard for ML/AI. There are no real alternatives: ML frameworks just do not support anything except CUDA. And therefore neither does Tonfotos, since it relies on one of those frameworks.
I am not really sure what exactly you mean by “AMD is catching up with AI features”. I am sure they are doing something, but as far as the ML/AI industry is concerned, they are still stuck at the “GPU – Drivers” stage. They don’t offer anything comparable to CUDA for GPU abstraction. Until they do, and then convince all the major ML framework vendors to support it officially, nothing will change.
Long story short, we are the wrong people to complain to about AMD GPU support. This is definitely not our fault, and there is nothing we can do to change the situation.
I can’t understand what you mean by “not our fault”.
I am telling you that you can (relatively) easily add support for AMD cards. I am on Windows 11, writing code that uses my AMD graphics. I can share the code with you (see the sketch below); here are some things you could use to support AMD graphics cards:
https://github.com/ROCm
FFmpeg
- AMD cards can be used for transcoding.
ML
- AMD cards can be used too, by installing ROCm.
I also found this that may help too: https://github.com/GPUOpen-LibrariesAndSDKs/AMF/wiki/FFmpeg%20and%20AMF%20HW%20Acceleration#1-introduction
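Here is roughly what I mean for the ML part, a minimal sketch assuming a ROCm build of PyTorch (on those builds the AMD card is exposed through the regular torch.cuda API, so application code stays the same):

```python
# Illustration only: assumes a ROCm build of PyTorch is installed.
# On ROCm wheels torch.version.hip is a version string (None on CUDA builds).
import torch

print("HIP/ROCm version:", torch.version.hip)
print("GPU available:", torch.cuda.is_available())

if torch.cuda.is_available():
    # Under ROCm, "cuda" maps to the AMD card (my RX 6800 XT).
    x = torch.randn(1024, 1024, device="cuda")
    y = x @ x
    print("Ran on:", torch.cuda.get_device_name(0))
```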
dbanny Sorry that my explanation does not make sense to you. However, I am not sure why you think video encoding examples are relevant to our discussion.
Thank you for the link to ROCm. It has been a while since I last checked it, and I was really surprised to see promising statements there about TensorFlow and PyTorch compatibility. So I started to wonder whether I had missed something and my information was out of date. However, after some additional googling, it looks like this is still a bit of a “fake it till you make it” situation.
So it actually looks like they are moving in the right direction, but I guess they still have a long way to go before it becomes mainstream.
As I said, we are not rich enough to develop our own ML framework, so we will just wait until ROCm support appears in the framework that we are using.
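For what it’s worth, checking whether such support has actually landed in a given framework build is straightforward. As an illustration (I am not saying this is the framework we use), ONNX Runtime reports the hardware backends its installed build was compiled with:

```python
# Illustration only: ONNX Runtime lists its "execution providers".
# A build with AMD support would include "ROCMExecutionProvider".
import onnxruntime as ort

print(ort.get_available_providers())
# Typical output on an NVIDIA build:
# ['CUDAExecutionProvider', 'CPUExecutionProvider']
```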
Have you considered this one?
https://devblogs.microsoft.com/directx/video-acceleration-api-va-api-now-available-on-windows/
dbanny I’m afraid this has nothing to do with neural networks. This is about 3D graphics.