vLLM is running on UnspecifiedPlatform
I bumped into this error when trying to run vLLM locally on my machine. If you only have a CPU available, you are likely to hit it too.
You can check whether you have GPU support with the following command:
python -c 'import torch; print(torch.cuda.is_available())'
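If this prints False, torch cannot see a CUDA device, which is typically what triggers the UnspecifiedPlatform message on a CPU-only machine.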
vllm does provide a guide for setting up a CPU environment here. I ran into a pytorch/torchvision/torchaudio incompatibility problem: if a previous PyTorch installation, or another project on your laptop, uses a different version, you need to create a new Python virtual environment. Simply re-installing torch may pull in the latest version, which does not meet the requirement, and if you try to pin a specific build version you can still run into package download issues.
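For reference, a minimal sketch of the clean-environment approach (the environment name is my own choice; the CPU wheel index URL is PyTorch's official one):

python3 -m venv vllm-cpu-env                 # fresh environment with no stale torch
source vllm-cpu-env/bin/activate
pip install --upgrade pip
pip install torch --index-url https://download.pytorch.org/whl/cpu   # CPU-only wheel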
Before you follow the instructions, please
1. clone the vllm repository. cd into vllm, and from the repository root you should be able to run the script (see the sketch after this list).
2. ensure you have a gcc/g++ 12 compiler (or above) installed, or simply install build-essential if you're running Ubuntu.
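Putting these together, the build flow looks roughly like this. This is a sketch based on the vLLM CPU installation guide; the requirements file name and the exact build command have changed between releases, so check the guide for your version.

git clone https://github.com/vllm-project/vllm.git
cd vllm
sudo apt install build-essential gcc-12 g++-12   # Ubuntu toolchain prerequisites
pip install -v -r requirements-cpu.txt --extra-index-url https://download.pytorch.org/whl/cpu
VLLM_TARGET_DEVICE=cpu python setup.py install   # build vllm for the CPU target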
The output from my laptop after running the instructions above:
Hope this works!