
Set Default GPU in PyTorch


There are two ways to set the GPU that PyTorch uses by default.

Restrict which devices PyTorch can see

The first way is to restrict the GPU devices that PyTorch can see. For example, suppose you have four GPUs on your system1 and you want to use GPU 2. The environment variable CUDA_VISIBLE_DEVICES controls which GPUs PyTorch can see. The following code should do the job:
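A minimal shell sketch; the commented-out one-shot invocation uses a placeholder script name:

```shell
# Expose only physical GPU 2 to anything launched from this shell.
export CUDA_VISIBLE_DEVICES=2

# Equivalently, set it for a single command only (your_script.py is a placeholder):
# CUDA_VISIBLE_DEVICES=2 python your_script.py

echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"
```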


The above code ensures that GPU 2 is used as the default GPU, and you do not have to change anything in your source file.

If you want to set the environment variable inside your script instead, you can use os.environ. To use GPU 2, add the following code:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "2"  # must run before CUDA is initialized

There is one thing we should notice: once CUDA_VISIBLE_DEVICES is set, the visible devices are renumbered starting from zero. For example, if you use

export CUDA_VISIBLE_DEVICES=2,3
Then GPU 2 on your system now has ID 0 and GPU 3 has ID 1. In other words, in PyTorch, device#0 corresponds to your GPU 2 and device#1 corresponds to GPU 3.
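The renumbering can be illustrated without touching CUDA at all; a small sketch that computes the mapping by hand from the environment variable (not queried from PyTorch):

```python
import os

# Expose physical GPUs 2 and 3; must happen before CUDA is initialized.
os.environ["CUDA_VISIBLE_DEVICES"] = "2,3"

# Inside this process, CUDA device IDs are re-indexed from zero:
visible = os.environ["CUDA_VISIBLE_DEVICES"].split(",")
mapping = {cuda_id: int(phys) for cuda_id, phys in enumerate(visible)}
print(mapping)  # {0: 2, 1: 3} -> cuda:0 is physical GPU 2, cuda:1 is GPU 3
```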

Directly set up which GPU to use

You can also select the default GPU from within PyTorch itself, using torch.cuda.set_device. For example, to use GPU 1, put the following code before any GPU-related code:

import torch as th
th.cuda.set_device(1)  # make GPU 1 the default device

  1. Their IDs are 0, 1, 2 and 3. ↩︎

