
Add int type to device parameter of torch.set_default_device() on the doc #126646

Open
hyperkai opened this issue May 19, 2024 · 1 comment · May be fixed by #126968

Comments

hyperkai commented May 19, 2024

📚 The doc issue

The doc of torch.set_default_device() describes the device parameter, but it doesn't mention the int type:

device (device or string) – the device to set as default

In practice, the device parameter accepts an int:

```python
import torch

torch.set_default_device(device=0) # Here

torch.tensor([0, 1, 2]).device
# device(type='cuda', index=0)
```

Suggest a potential alternative/fix

The doc should also mention the int type:

device (device, string or int) – the device to set as default
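As a quick sanity check on the proposed wording, torch.device itself already accepts a bare int and interprets it as a CUDA ordinal. A minimal sketch (not from the original report); constructing the descriptor does not require a GPU, so this runs on a CPU-only build:

```python
import torch

# A bare int is accepted and interpreted as a CUDA device ordinal.
# torch.device only builds the descriptor, so no GPU is needed here.
d = torch.device(0)
print(d)        # cuda:0
print(d.type)   # cuda
print(d.index)  # 0
```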

mshr-h (Contributor) commented May 20, 2024

I agree. But according to the official PyTorch docs, it's legacy behavior.
Tensor Attributes — PyTorch 2.3 documentation

For legacy reasons, a device can be constructed via a single device ordinal, which is treated as a cuda device. This matches Tensor.get_device(), which returns an ordinal for cuda tensors and is not supported for cpu tensors.

>>> torch.device(1)
device(type='cuda', index=1)

I think we also need to add an example like this:

>>> torch.set_default_device(0) # Legacy
>>> torch.get_default_device()
device(type='cuda', index=0)
