CUDA works with all Nvidia GPUs from the G8x series onwards, including the GeForce, Quadro, and Tesla lines. CUDA is compatible with most standard operating systems.
For a list of supported graphics cards, see Wikipedia.
The CUDA software API is supported on Nvidia GPUs through the software drivers provided by Nvidia.
If your desktop has an Nvidia GPU, AND you have the Nvidia drivers installed (for example from https://www.nvidia.com/Download/index.aspx), an executable that uses CUDA can be built using the CUDA toolkit, as outlined at
https://developer.nvidia.com/blog/even-easier-introduction-cuda/
To compile a CUDA application specifically for your GPU, the compute capability of that GPU is required, and it can be obtained by following the steps outlined at
https://github.com/prabindh/mygpu
(I am the author of that web tool)
Manually, you could use nvidia-smi, which is installed along with the driver, or use the driver information to obtain the GPU name and map it to its compute capability.
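As a rough sketch of what such a build looks like: the vector-add program below is a generic example in the spirit of the linked introduction (not copied from it), and the sm_61 architecture flag is only a placeholder for your GPU's actual compute capability.

    // minimal.cu -- a generic CUDA vector-add.
    // Hypothetical build command for a GPU of compute capability 6.1:
    //   nvcc -arch=sm_61 minimal.cu -o minimal
    #include <cstdio>

    __global__ void add(int n, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
        if (i < n) y[i] = x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x = nullptr, *y = nullptr;
        cudaMallocManaged(&x, n * sizeof(float));  // unified memory, visible to CPU and GPU
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        add<<<(n + 255) / 256, 256>>>(n, x, y);    // enough 256-thread blocks to cover n
        cudaDeviceSynchronize();                   // wait for the kernel to finish

        printf("y[0] = %f (expected 3.0)\n", y[0]);
        cudaFree(x);
        cudaFree(y);
        return 0;
    }

If the executable runs and prints 3.0, the driver, toolkit, and GPU are working together.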
If you use a terminal, the command nvidia-smi is very handy for obtaining information about your GPU model, CUDA version, and NVIDIA driver version.
You will need to perform these checks:
1. Use the GPU model to obtain the compute capability of the GPU. NVIDIA provides the list here.
2. Check the installed driver version from the nvidia-smi output.
3. Check the installed CUDA version from the nvidia-smi output.
For (1), it is ideal that the GPU has a compute capability of at least 3.0 so that it can work with CUDA features for deep learning. Next, check the installed CUDA version and whether it can be upgraded. There is a "limit" to the upgrade path, especially for older GPU models: you may not be able to upgrade to the latest CUDA, since each CUDA version has a minimum compute capability that it supports.
You can check the CUDA compatibility table and the minimum display driver each version supports here.
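If a CUDA toolkit is already installed, the same three pieces of information can also be read programmatically through the runtime API. The following is a minimal sketch (the file name query.cu is arbitrary), not a replacement for checking the nvidia-smi output.

    // query.cu -- print compute capability, driver CUDA version, and runtime CUDA version.
    // Build (assuming a CUDA toolkit is installed): nvcc query.cu -o query
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int deviceCount = 0;
        if (cudaGetDeviceCount(&deviceCount) != cudaSuccess || deviceCount == 0) {
            printf("No CUDA-capable GPU detected.\n");
            return 1;
        }

        int driverVersion = 0, runtimeVersion = 0;
        cudaDriverGetVersion(&driverVersion);    // highest CUDA version the installed driver supports
        cudaRuntimeGetVersion(&runtimeVersion);  // CUDA version of the installed runtime/toolkit

        for (int dev = 0; dev < deviceCount; ++dev) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, dev);
            printf("GPU %d: %s, compute capability %d.%d\n",
                   dev, prop.name, prop.major, prop.minor);
        }
        printf("Driver supports CUDA %d.%d, runtime is CUDA %d.%d\n",
               driverVersion / 1000, (driverVersion % 1000) / 10,
               runtimeVersion / 1000, (runtimeVersion % 1000) / 10);
        return 0;
    }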
Using the browser to find CUDA

Chrome

Type chrome://gpu into the address bar and search the page for "cuda"; you should get the version detected (in my case, not enabled) (2021 update).