To take advantage of the GPU capabilities of Azure N-series VMs backed by NVIDIA GPUs, you must install NVIDIA GPU drivers. The NVIDIA GPU Driver Extension installs appropriate NVIDIA CUDA or GRID drivers on an N-series VM. Install or manage the extension using the Azure portal or tools such as Azure PowerShell or Azure Resource Manager templates. See the NVIDIA GPU Driver Extension documentation for supported operating systems and deployment steps.
If you choose to install NVIDIA GPU drivers manually, this article provides supported operating systems, drivers, and installation and verification steps. Manual driver setup information is also available for Linux VMs.
For basic specs, storage capacities, and disk details, see GPU Windows VM sizes.
Supported operating systems and drivers
NVIDIA Tesla (CUDA) drivers
NVIDIA Tesla (CUDA) drivers for NC, NCv2, NCv3, NCasT4_v3, ND, and NDv2-series VMs (optional for NV-series) are supported only on the operating systems listed in the following table. Driver download links are current at time of publication. For the latest drivers, visit the NVIDIA website.
As an alternative to manual CUDA driver installation on a Windows Server VM, you can deploy an Azure Data Science Virtual Machine image. The DSVM editions for Windows Server 2016 pre-install NVIDIA CUDA drivers, the CUDA Deep Neural Network Library, and other tools.
| Operating system | Driver |
| --- | --- |
| Windows Server 2019 | 451.82 (.exe) |
| Windows Server 2016 | 451.82 (.exe) |
NVIDIA GRID drivers
Microsoft redistributes NVIDIA GRID driver installers for NV and NVv3-series VMs used as virtual workstations or for virtual applications. Install these GRID drivers only on Azure NV-series VMs, and only on the operating systems listed in the following table. These drivers include licensing for GRID Virtual GPU Software in Azure. You do not need to set up an NVIDIA vGPU software license server.
The GRID drivers redistributed by Azure do not work on non-NV-series VMs such as the NCv2, NCv3, ND, and NDv2-series. The one exception is the NCasT4_v3 series, where the GRID drivers enable graphics functionality similar to the NV-series.
The NC-series VMs with NVIDIA K80 GPUs do not support GRID/graphics applications.
Note that the NVIDIA extension always installs the latest driver. Links to previous versions are provided here for customers who depend on an older version.
For Windows Server 2019, Windows Server 2016 (versions 1607 and 1709), and Windows 10 (up to build 20H2):
- GRID 12.0 (461.09) (.exe)
- GRID 11.3 (452.77) (.exe)
For Windows Server 2012 R2:
- GRID 12.0 (461.09) (.exe)
- GRID 11.3 (452.77) (.exe)
For the complete list of all previous NVIDIA GRID driver links, visit GitHub.
1. Connect by Remote Desktop to each N-series VM.
2. Download, extract, and install the supported driver for your Windows operating system.
A restart is required after GRID driver installation on a VM, but not after CUDA driver installation.
Verify driver installation
Note that the NVIDIA Control Panel is accessible only with a GRID driver installation. If you installed CUDA drivers, the NVIDIA Control Panel will not be visible.
You can verify driver installation in Device Manager. The following example shows successful configuration of the Tesla K80 card on an Azure NC VM.
To query the GPU device state, run the nvidia-smi command-line utility installed with the driver.
Open a command prompt and change to the `C:\Program Files\NVIDIA Corporation\NVSMI` directory.
Run `nvidia-smi`. If the driver is installed, you will see output similar to the following. GPU-Util shows 0% unless you are currently running a GPU workload on the VM. Your driver version and GPU details may differ from those shown.
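The verification steps above can be run from PowerShell as follows (the NVSMI path shown is the driver's default install location; it may differ on your VM):

```powershell
# Change to the default NVSMI directory installed with the driver
cd "C:\Program Files\NVIDIA Corporation\NVSMI"

# Query GPU device state; requires a GPU and an installed NVIDIA driver
.\nvidia-smi.exe
```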
RDMA network connectivity
RDMA network connectivity can be enabled on RDMA-capable N-series VMs such as NC24r deployed in the same availability set or in a single placement group in a virtual machine scale set. The HpcVmDrivers extension must be added to install Windows network device drivers that enable RDMA connectivity. To add the VM extension to an RDMA-enabled N-series VM, use Azure PowerShell cmdlets for Azure Resource Manager.
To install the latest version (1.1) of the HpcVmDrivers extension on an existing RDMA-capable VM named myVM in the West US region:
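A sketch of that command using the Az PowerShell module, assuming the VM lives in a resource group named myResourceGroup (the group name is illustrative):

```powershell
# Adds the HpcVmDrivers extension to an existing RDMA-capable N-series VM.
# Substitute your own resource group and VM names.
Set-AzVMExtension -ResourceGroupName "myResourceGroup" `
    -VMName "myVM" `
    -Location "westus" `
    -Publisher "Microsoft.HpcCompute" `
    -ExtensionName "HpcVmDrivers" `
    -ExtensionType "HpcVmDrivers" `
    -TypeHandlerVersion "1.1"
```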
For more information, see Virtual machine extensions and features for Windows.
The RDMA network supports Message Passing Interface (MPI) traffic for applications running with Microsoft MPI or Intel MPI 5.x.
- Developers building GPU-accelerated applications for the NVIDIA Tesla GPUs can also download and install the latest CUDA Toolkit. For more information, see the CUDA Installation Guide.
Applies to: Windows Server 2016, Microsoft Hyper-V Server 2016
Because of security concerns, RemoteFX vGPU is disabled by default on all versions of Windows starting with the July 14, 2020 Security Update. To learn more, see KB 4570006.
The vGPU feature for RemoteFX makes it possible for multiple virtual machines to share a physical GPU. Rendering and compute resources are shared dynamically among virtual machines, making RemoteFX vGPU appropriate for high-burst workloads where dedicated GPU resources are not required. For example, in a VDI service, RemoteFX vGPU can be used to offload app rendering costs to the GPU, with the effect of decreasing CPU load and improving service scalability.
RemoteFX vGPU requirements
Host system requirements:
- Windows Server 2016
- A DirectX 11.0-compatible GPU with a WDDM 1.2-compatible driver
- A CPU with Second Level Address Translation (SLAT) support
Guest VM requirements:
- Supported guest OS. For more information, see RemoteFX 3D Video Adapter (vGPU) support.
Additional considerations for guest VMs:
- OpenGL and OpenCL functionality is only available in guests running Windows 10 or Windows Server 2016.
- DirectX 11.0 is only available for guests running Windows 8 or later.
Enable RemoteFX vGPU
To configure RemoteFX vGPU on your Windows Server 2016 host:
- Install the graphics drivers recommended by your GPU vendor for Windows Server 2016.
- Create a VM running a guest OS supported by RemoteFX vGPU. To learn more, see RemoteFX 3D Video Adapter (vGPU) support.
- Add the RemoteFX 3D graphics adapter to the VM. To learn more, see Configure the RemoteFX vGPU 3D adapter.
By default, RemoteFX vGPU will use all available and supported GPUs. To limit which GPUs the RemoteFX vGPU uses, follow these steps:
- Navigate to the Hyper-V settings in Hyper-V Manager.
- Select Physical GPUs in Hyper-V Settings.
- Select the GPU that you don't want to use, and then clear Use this GPU with RemoteFX.
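The same restriction can also be applied with the Hyper-V PowerShell module's physical-adapter cmdlets; a sketch, where the GPU name filter is illustrative and should match the adapter name shown in Hyper-V Settings:

```powershell
# List the physical GPUs available to RemoteFX on this host
Get-VMRemoteFXPhysicalVideoAdapter

# Exclude a specific GPU from RemoteFX use
# (replace the wildcard with your adapter's name)
Get-VMRemoteFXPhysicalVideoAdapter -Name "*K620*" |
    Disable-VMRemoteFXPhysicalVideoAdapter
```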
Configure the RemoteFX vGPU 3D adapter
You can use either the Hyper-V Manager UI or PowerShell cmdlets to configure the RemoteFX vGPU 3D graphics adapter.
Configure RemoteFX vGPU with Hyper-V Manager
Stop the VM if it's currently running.
Open Hyper-V Manager, navigate to VM Settings, then select Add Hardware.
Select RemoteFX 3D Graphics Adapter, then select Add.
Set the maximum number of monitors, maximum monitor resolution, and dedicated video memory, or leave the default values.
- Setting higher values for any of these options impacts your service scale, so set only what is necessary.
- When you need to use 1 GB of dedicated VRAM, use a 64-bit guest VM instead of 32-bit (x86) for best results.
Select OK to finish the configuration.
Configure RemoteFX vGPU with PowerShell cmdlets
Use the following PowerShell cmdlets to add, review, and configure the adapter:
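A minimal sketch with the Hyper-V module cmdlets, assuming a VM named myVM; the parameter values mirror the Hyper-V Manager options above and are examples only:

```powershell
# Stop the VM before changing its hardware configuration
Stop-VM -Name "myVM"

# Add the RemoteFX 3D graphics adapter to the VM
Add-VMRemoteFx3dVideoAdapter -VMName "myVM"

# Configure monitor count, maximum resolution, and dedicated video memory
Set-VMRemoteFx3dVideoAdapter -VMName "myVM" `
    -MonitorCount 2 `
    -MaximumResolution "1920x1200" `
    -VRAMSizeBytes 1GB

# Review the adapter configuration
Get-VMRemoteFx3dVideoAdapter -VMName "myVM"

Start-VM -Name "myVM"
```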
The performance and scale of a RemoteFX vGPU-enabled service are determined by a variety of factors such as number of GPUs on your system, total GPU memory, amount of system memory and memory speed, number of CPU cores and CPU clock frequency, storage speed, and NUMA implementation.
Host system memory
For every VM enabled with a vGPU, RemoteFX uses system memory both in the guest operating system and in the host server. The hypervisor guarantees the availability of system memory for a guest operating system. On the host, each vGPU-enabled virtual desktop needs to advertise its system memory requirement to the hypervisor. When the vGPU-enabled virtual desktop starts, the hypervisor reserves additional system memory in the host.
The memory requirement for the RemoteFX-enabled server is dynamic because the amount of memory consumed on the RemoteFX-enabled server is dependent on the number of monitors that are associated with the vGPU-enabled virtual desktops and the maximum resolution for those monitors.
Host GPU video memory
Every vGPU-enabled virtual desktop uses the GPU hardware video memory on the host server to render the desktop. In addition, a codec uses the video memory to compress the rendered screen. The amount of memory needed for rendering and compression is directly based on the number of monitors provisioned to the virtual machine. The amount of reserved video memory varies based on the system screen resolution and how many monitors there are. Some users require a higher screen resolution for specific tasks, but there's greater scalability with lower resolution settings if all other settings remain constant.
The hypervisor schedules the host and VMs on the CPU. The overhead is increased on a RemoteFX-enabled host because the system runs an additional process (rdvgm.exe) per vGPU-enabled virtual desktop. This process uses the graphics device driver to run commands on the GPU. The codec also uses the CPU to compress screen data that needs to be sent back to the client.
More virtual processors mean a better user experience. We recommend allocating at least two virtual CPUs per vGPU-enabled virtual desktop. We also recommend using the x64 architecture for vGPU-enabled virtual desktops because the performance on x64 virtual machines is better compared to x86 virtual machines.
GPU processing power
Every vGPU-enabled virtual desktop has a corresponding DirectX process that runs on the host server. This process replays all graphics commands it receives from the RemoteFX virtual desktop onto the physical GPU. This is like running multiple DirectX applications at the same time on the same physical GPU.
Usually, graphics devices and drivers are tuned to run only a few applications on the desktop at a time, but RemoteFX stretches the GPUs to go even further. vGPUs come with performance counters that measure the GPU response to RemoteFX requests and help you make sure the GPUs aren't stretched too far.
When a GPU is low on resources, read and write operations take a long time to complete. Administrators can use performance counters to know when to adjust resources and prevent downtime for users.
Learn more about performance counters for monitoring RemoteFX vGPU behavior at Diagnose graphics performance issues in Remote Desktop.