How do I check if PyTorch is using the GPU?

To optimize your deep learning workloads, knowing whether PyTorch is utilizing a GPU is essential. This guide will explore multiple methods to determine if PyTorch is using the GPU and provide detailed explanations for each.

The problem: when working with PyTorch, it is not always obvious whether your model and its operations are running on the GPU or the CPU. Without that clarity, you cannot be sure your deep learning tasks are executing efficiently, so a reliable way to verify that PyTorch is leveraging the GPU's processing power is essential.
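
Before diving into the solutions, note that every PyTorch tensor carries a .device attribute and an .is_cuda flag, so a quick spot check of any individual tensor is straightforward. A minimal sketch:

import torch

# Tensors live on the CPU unless a device is specified
t = torch.rand(3)
print(t.device)   # "cpu" by default
print(t.is_cuda)  # False until the tensor is moved to a GPU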

Solution 1: Using torch.cuda Functions

This method utilizes functions provided by torch.cuda to check and retrieve information about GPU usage. Here's the code:

Input:

import torch
# Check if CUDA is available
cuda_available = torch.cuda.is_available()
print("CUDA Available:", cuda_available)

# Get the number of available GPUs
num_gpus = torch.cuda.device_count()
print("Number of GPUs:", num_gpus)

# Get the current GPU device index
current_gpu = torch.cuda.current_device()
print("Current GPU Device:", current_gpu)

# Get the name of the GPU device
gpu_name = torch.cuda.get_device_name(current_gpu)
print("GPU Name:", gpu_name)
Output:

CUDA Available: True
Number of GPUs: 1
Current GPU Device: 0
GPU Name: GeForce RTX 2080 Ti

Explanation:

torch.cuda.is_available()

Checks whether CUDA (GPU support) is available on your system. It returns True if a usable GPU is detected and False otherwise.

torch.cuda.device_count()

Returns the number of GPUs available on your system (the sketch after this list shows how to enumerate them by index).

torch.cuda.current_device()

Returns the index of the currently selected GPU device.

torch.cuda.get_device_name(current_gpu)

Retrieves the name of the GPU corresponding to the given device index.
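
Putting device_count() and get_device_name() together, a minimal sketch that lists every visible GPU by index might look like this:

import torch

if torch.cuda.is_available():
    # Print the index and name of each visible GPU
    for idx in range(torch.cuda.device_count()):
        print(idx, torch.cuda.get_device_name(idx))
else:
    print("No CUDA-capable GPU detected")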

Solution 2: Using a torch.device Object

This method leverages the torch.device object to set the device and obtain additional GPU information. Here's the code:

Input:

import torch
# Set the device to GPU if available; otherwise, use CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Using device:', device)

# Additional GPU information when using a CUDA device
if device.type == 'cuda':
    print(torch.cuda.get_device_name(0))  # Get the name of the GPU
    # Memory usage
    print('Memory Usage:')

    print('Allocated:', round(torch.cuda.memory_allocated(0) / 1024 ** 3, 1), 'GB')
    print('Cached:', round(torch.cuda.memory_reserved(0) / 1024 ** 3, 1), 'GB')
Output:
Using device: cuda
GeForce RTX 2080 Ti
Memory Usage:
Allocated: 0.0 GB
Cached: 0.0 GB

Explanation:

torch.device('cuda' if torch.cuda.is_available() else 'cpu')

This code sets the device to GPU if CUDA is available; otherwise, it defaults to CPU.

Additional GPU information is printed when a CUDA device is selected, including the GPU name and memory usage. Here, memory_allocated() reports the memory currently occupied by tensors, while memory_reserved() (labeled "Cached" above, its former name) reports the memory held by PyTorch's caching allocator; a sketch using memory_summary() follows the sample output below.

On a different machine, for example one with a Tesla K80, the same code might print:

Using device: cuda
Tesla K80
Memory Usage:
Allocated: 0.3 GB
Cached: 0.6 GB
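
For a more detailed breakdown, torch.cuda also provides memory_summary(), which prints a human-readable report of the caching allocator's state. A minimal sketch, assuming a CUDA device is present:

import torch

if torch.cuda.is_available():
    # Human-readable report of allocated and reserved memory on GPU 0
    print(torch.cuda.memory_summary(device=0, abbreviated=True))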

As shown above, with the device object in hand, it is possible to:

  • To move tensors to the respective device:

    torch.rand(10).to(device)
  • To create a tensor directly on the device:

    torch.rand(10, device=device)

This makes switching between CPU and GPU comfortable without changing the rest of the code; the sketch below shows the full device-agnostic pattern.
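
A minimal end-to-end sketch of this device-agnostic pattern, using a small hypothetical nn.Linear model purely for illustration:

import torch
import torch.nn as nn

# Select the GPU when available, otherwise fall back to the CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# A tiny illustrative model (hypothetical; substitute your own)
model = nn.Linear(10, 2).to(device)

# Create the input directly on the same device and run a forward pass
x = torch.rand(4, 10, device=device)
output = model(x)

# Verify where the parameters and the output actually live
print(next(model.parameters()).device)  # cuda:0 or cpu
print(output.device)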

It's essential to be aware of GPU compatibility, especially for older graphics cards with CUDA compute capability 3.0 or lower, which recent PyTorch builds may not support. Always consider hardware compatibility when working with GPUs; the sketch below shows how to query a card's compute capability.
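
A minimal sketch for querying the compute capability of the first GPU, assuming one is present:

import torch

if torch.cuda.is_available():
    # (major, minor) compute capability of GPU 0, e.g. (7, 5) for an RTX 2080 Ti
    major, minor = torch.cuda.get_device_capability(0)
    print(f"Compute capability: {major}.{minor}")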

By following these methods, you can determine whether PyTorch is using the GPU, select a preferred GPU device, and move tensors and models onto it, gaining insight into GPU-related details along the way. Ensuring PyTorch uses the GPU is vital for speeding up deep learning workloads. Thank you for reading the article.