Thursday, 28 August 2025

torch.cuda.is_available() is False. But why?

[blogger doesn't support markdown eh?  Might be time for a new hosting service] 


`torch.cuda.is_available()` will tell you whether CUDA is set up correctly, but it doesn't tell you *why* it isn't working.

TIL, per Stack Overflow, if you instead run:
`a = torch.cuda.FloatTensor()`

you get a useful error message.

```
> python
Python 3.11.13 (main, Aug 18 2025, 19:14:35) [MSC v.1944 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> a = torch.cuda.FloatTensor()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: type torch.cuda.FloatTensor not available. Torch not compiled with CUDA enabled.
>>>
```
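Building on that trick, here's a small diagnostic sketch. It forces CUDA initialization by allocating a tensor on the GPU, which makes PyTorch raise a descriptive error instead of silently returning `False`. (The `cuda_status` helper name is mine, and the import is guarded so the script still runs on machines where torch isn't installed at all.)

```python
import importlib.util

def cuda_status() -> str:
    """Report why CUDA is (un)available, rather than just True/False."""
    if importlib.util.find_spec("torch") is None:
        return "torch is not installed"
    import torch
    if torch.cuda.is_available():
        return f"CUDA OK: {torch.cuda.get_device_name(0)}"
    try:
        # Forcing a CUDA tensor triggers lazy CUDA init, so PyTorch raises
        # an explanatory error (e.g. "Torch not compiled with CUDA enabled").
        torch.zeros(1, device="cuda")
        return "CUDA OK"
    except Exception as exc:
        return f"CUDA unavailable: {exc}"

print(cuda_status())
```

On a CPU-only wheel this should surface the same "Torch not compiled with CUDA enabled" message as the transcript above.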

1 comment:

  1. Fixed, in my case, by updating `pyproject.toml` and then running `uv sync`:


    ```
    [tool.uv.sources]
    llms-from-scratch = { workspace = true }
    torch = [
      { index = "pytorch-cu128" },
    ]
    torchvision = [
      { index = "pytorch-cu128" },
    ]

    [[tool.uv.index]]
    name = "pytorch-cu128"
    url = "https://download.pytorch.org/whl/cu128"
    explicit = true

    ```
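After a fix like the one above, a quick way to confirm that `uv sync` actually pulled a CUDA build is to check `torch.version.cuda`, which is `None` on CPU-only wheels and a version string like `"12.8"` on cu128 wheels. A sketch (the `torch_build_info` helper name is mine; the import is guarded so it runs even where torch is absent):

```python
import importlib.util

def torch_build_info() -> str:
    """Report which torch wheel is installed and whether it is a CUDA build."""
    if importlib.util.find_spec("torch") is None:
        return "torch is not installed"
    import torch
    # torch.version.cuda is None for CPU-only wheels,
    # e.g. "12.8" for wheels from the cu128 index.
    return f"torch {torch.__version__}, CUDA build: {torch.version.cuda}"

print(torch_build_info())
```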
