Trying to run vLLM without a GPU will typically land you in this issue. To resolve it, I had to configure both of the settings below; setting just one did not work for me:

1. Set the environment variable CUDA_VISIBLE_DEVICES to an empty string ("").
2. In the command line, change the device to CPU and remove --tensor-parallel-size.

For example:

    python3 -m vllm.entrypoints.openai.api_server --port 8080 --model deepseek-ai/DeepSeek-R1 --device cpu --trust-remote-code --max-model-len 4096

References
https://docs.vllm.ai/en/latest/getting_started/troubleshooting.html#failed-to-infer-device-type
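Putting the two steps together, here is a minimal launch sketch, assuming a bash shell (the port, model, and max length are just the values from the example above):

    # 1. Hide all GPUs from CUDA so vLLM cannot try to pick a CUDA device
    export CUDA_VISIBLE_DEVICES=""

    # 2. Start the OpenAI-compatible server on CPU; note there is no
    #    --tensor-parallel-size flag here
    python3 -m vllm.entrypoints.openai.api_server \
        --port 8080 \
        --model deepseek-ai/DeepSeek-R1 \
        --device cpu \
        --trust-remote-code \
        --max-model-len 4096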
While trying to run the Gemini CLI on my Windows Subsystem for Linux, I bumped into this error:

    /Roaming/npm/node_modules/@google/gemini-cli/node_modules/undici/lib/web/webidl/index.js:512
    webidl.is.File = webidl.util.MakeTypeAssertion(File)
    ReferenceError: File is not defined

In Node.js there is no built-in global File until fairly recent versions (Node 20+, where the WHATWG File API landed). So, unless it is polyfilled, File is undefined, leading to the ReferenceError. To resolve this, I upgraded to Node 22 (it might also work on Node 20), and then I was able to run it.
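A minimal sketch of the fix, assuming you manage Node versions with nvm inside WSL (any Node version manager works; gemini is the launcher installed by @google/gemini-cli):

    # Check the current version; the ReferenceError appears on versions
    # where File is not a global
    node --version

    # Install and switch to Node 22
    nvm install 22
    nvm use 22

    # Re-run the CLI
    gemini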
This org.jetbrains.kotlin.util.FileAnalysisException with a java.lang.IllegalArgumentException: source must not be null is a known issue that can sometimes occur with the Kotlin compiler. It seems to be related to how the compiler analyzes certain source code structures. When this happens, clean your project and rebuild it. In my case, I hit it while trying out the dataBinding = true feature.
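A minimal sketch of the workaround, assuming a Gradle-based Android project (in Android Studio, Build > Clean Project followed by Build > Rebuild Project is equivalent):

    # Remove stale build outputs that can trip up the compiler's analysis
    ./gradlew clean

    # Rebuild everything from scratch
    ./gradlew build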