lz400 7 hours ago

Unfortunately uv is often insufficient for certain ML deployments in Python. It's a real pain to install pytorch/CUDA with all the necessary drivers and C++ dependencies, so people tend to fall back to conda.

Any modern tips / life hacks for this situation?

  • rsfern 5 hours ago

    Are there particular libraries that make your setup difficult? I just manually set the index and source following the docs (didn’t know about the auto backend feature) and pin a specific version with `uv add "torch==2.4"` if I really have to. This works pretty well for me for projects that use dgl, which heavily uses C++ extensions and can be pretty finicky about working with particular versions.

    This is in a conventional HPC environment, and I’ve found it way better than conda since the dependency solves are so much faster and I no longer experience PyTorch silently getting downgraded to the CPU version if I install a new library. Maybe I’ve been using conda poorly, though?
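
    For reference, the manual index setup is only a few lines of pyproject.toml. A rough sketch of what I mean, assuming the cu124 wheel index (adjust the CUDA version to whatever your cluster's driver supports; project name and versions here are just placeholders):

        [project]
        name = "example"
        version = "0.1.0"
        dependencies = ["torch>=2.4"]

        [tool.uv.sources]
        torch = { index = "pytorch-cu124" }

        [[tool.uv.index]]
        name = "pytorch-cu124"
        url = "https://download.pytorch.org/whl/cu124"
        # explicit keeps this index from being used for anything except torch
        explicit = true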

  • Kydlaw 6 hours ago

    You should give https://pixi.sh/latest/ a try (I am not involved in the project).

    It is a little more focused on scientific computing than uv, which is more general. It might be a better option in your case.
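
    A minimal pixi.toml for a CUDA-enabled PyTorch environment looks roughly like this (a sketch from memory; the exact channel and package names are worth checking against the pixi PyTorch guide):

        [project]
        name = "ml-env"
        channels = ["conda-forge"]
        platforms = ["linux-64"]

        # declares that a CUDA 12 capable driver exists on the host
        [system-requirements]
        cuda = "12"

        [dependencies]
        python = "3.11.*"
        pytorch-gpu = "*"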

  • devjab 7 hours ago
    • lz400 6 hours ago

      The problem is that you still need to install all the low-level stuff manually; conda does it automatically.
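
      For comparison, the usual conda route is a single command that pulls the CUDA runtime in as packages too, so only the kernel driver has to be present on the host (version numbers are just illustrative):

          conda install pytorch torchvision pytorch-cuda=12.1 -c pytorch -c nvidia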

      • gcarvalho 6 hours ago

        I was pleasantly surprised to try the guide out and see that it just worked:

            λ uv venv && uv pip install torch --torch-backend=auto
            λ uv run python -c 'import torch; print(torch.cuda.is_available())'
            True
        
        This is on Debian stable, and I don't remember doing any special setup other than installing the proprietary nvidia driver.

  • miohtama 6 hours ago

    Would it be possible to use Docker to manage native dependencies?
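
    Something like starting from an nvidia/cuda base image and pip-installing torch on top, so the host only needs the kernel driver plus nvidia-container-toolkit? A rough sketch of what I have in mind (image tag from memory, worth checking on Docker Hub):

        # CUDA runtime and cuDNN come from the base image
        FROM nvidia/cuda:12.4.1-cudnn-runtime-ubuntu22.04

        RUN apt-get update && apt-get install -y python3 python3-pip \
            && rm -rf /var/lib/apt/lists/*

        # match the wheel index to the base image's CUDA version
        RUN pip3 install torch --index-url https://download.pytorch.org/whl/cu124

    Then run it with `docker run --gpus all ...` and the GPUs should show up inside the container.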