
Set Default GPU in PyTorch


There are two ways to set the GPU that PyTorch uses by default.

Set the devices that PyTorch can see

The first way is to restrict the GPU devices that PyTorch can see. For example, suppose you have four GPUs on your system [1] and you want to use GPU 2. You can use the environment variable CUDA_VISIBLE_DEVICES to control which GPUs PyTorch can see. The following command should do the job:

CUDA_VISIBLE_DEVICES=2 python test.py

The above command ensures that GPU 2 is used as the default GPU, and you do not have to change anything in your source file test.py.

If you want to set the environment variable inside your script instead, you can use os.environ. To use GPU 2, add the following code:

import os

os.environ['CUDA_VISIBLE_DEVICES'] = '2'
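As a quick sanity check, here is a minimal sketch (assuming the four-GPU machine from above) that verifies only GPU 2 is visible. Note that the variable should be set before CUDA is initialized, so it is safest to place it at the very top of the script, before importing torch:

import os

# Must be set before CUDA is initialized; safest before importing torch.
os.environ['CUDA_VISIBLE_DEVICES'] = '2'

import torch

print(torch.cuda.device_count())      # 1 -- only GPU 2 is visible
print(torch.cuda.get_device_name(0))  # name of the physical GPU 2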

There is something we should notice: once CUDA_VISIBLE_DEVICES is set, the visible devices are renumbered starting from zero. For example, if you use

os.environ['CUDA_VISIBLE_DEVICES'] = '2,3'

then GPU 2 on your system now has ID 0 and GPU 3 has ID 1. In other words, in PyTorch, device 0 corresponds to your GPU 2 and device 1 corresponds to GPU 3.
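To make the renumbering concrete, here is a small sketch (again assuming the four-GPU machine from the footnote) showing how the visible devices map back to the physical GPUs:

import os

os.environ['CUDA_VISIBLE_DEVICES'] = '2,3'

import torch

# Two devices are visible inside PyTorch, numbered 0 and 1.
print(torch.cuda.device_count())      # 2
print(torch.cuda.get_device_name(0))  # physical GPU 2
print(torch.cuda.get_device_name(1))  # physical GPU 3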

Directly set up which GPU to use

You can also directly tell PyTorch which GPU to use with torch.cuda.set_device. For example, to use GPU 1, place the following before any other GPU-related code:

import torch as th

# Make GPU 1 the current (default) CUDA device for subsequent operations
th.cuda.set_device(1)
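After this call, CUDA operations that do not specify a device land on GPU 1. A minimal sketch of what that looks like (assuming a machine with at least two GPUs):

import torch as th

th.cuda.set_device(1)

# A tensor moved to the GPU without an explicit index goes to the current device.
x = th.randn(3, 3).cuda()
print(x.device)  # cuda:1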

References


  1. Their IDs are 0, 1, 2, and 3.
