Getting Maven to build over a proxy can be tricky, so here is an example of a proxy setup which might help. You still need to make sure the name and password in your Maven settings.xml proxy configuration are correct.
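A minimal sketch of the proxies section of settings.xml; the id, host, port, credentials, and nonProxyHosts values here are placeholders you would replace with your own proxy details:

<settings>
  <proxies>
    <proxy>
      <id>corporate-proxy</id>
      <active>true</active>
      <protocol>http</protocol>
      <!-- placeholder host/port: replace with your proxy -->
      <host>proxy.example.com</host>
      <port>8080</port>
      <!-- these must match your real proxy credentials -->
      <username>your-username</username>
      <password>your-password</password>
      <nonProxyHosts>localhost|*.internal.example.com</nonProxyHosts>
    </proxy>
  </proxies>
</settings>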
While trying to run the gemini cli on my Windows Subsystem for Linux, I ran into this error: "/Roaming/npm/node_modules/@google/gemini-cli/node_modules/undici/lib/web/webidl/index.js:512 webidl.is.File = webidl.util.MakeTypeAssertion(File)" followed by "ReferenceError: File is not defined". Node.js had no built-in File global until fairly recent versions (Node 20+), so unless it is polyfilled, File is undefined, which leads to the ReferenceError. To resolve this, I upgraded to Node 22 (it might also work on Node 20), and then I was able to run it.
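If you manage Node versions with nvm (an assumption; use whatever version manager you prefer), the upgrade plus a quick sanity check might look like this:

nvm install 22
nvm use 22
# On Node 20+ this prints 'function'; on Node 18 it prints 'undefined'
node -e "console.log(typeof File)"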
I encountered this issue when running autorest (which uses Node.js). I was on Node version v18.20.2; after reverting to Node v18.12.0, I was able to get my app to run without this error. Since I am working with Azure DevOps YAML, I was able to use the following task to resolve my issue:

- task: NodeTool@0
  inputs:
    versionSource: 'spec'
    versionSpec: '18.18.0'
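For a local setup outside of Azure DevOps, the equivalent downgrade with nvm (assuming you use it) would be:

nvm install 18.12.0
nvm use 18.12.0
node --version   # should report v18.12.0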
Trying to run vllm without a GPU will typically land you in this issue. To resolve it, I had to configure both of the settings below; setting just one did not work for me:
1. Set the environment variable CUDA_VISIBLE_DEVICES=""
2. In the command line, switch to --device cpu and remove --tensor-parallel-size. For example:

python3 -m vllm.entrypoints.openai.api_server --port 8080 --model deepseek-ai/DeepSeek-R1 --device cpu --trust-remote-code --max-model-len 4096

References
https://docs.vllm.ai/en/latest/getting_started/troubleshooting.html#failed-to-infer-device-type
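Putting both settings together in one shell session (the curl check at the end is my own addition, assuming the default OpenAI-compatible routes vllm serves):

# 1. Hide all GPUs from CUDA
export CUDA_VISIBLE_DEVICES=""
# 2. Start the server on CPU
python3 -m vllm.entrypoints.openai.api_server --port 8080 --model deepseek-ai/DeepSeek-R1 --device cpu --trust-remote-code --max-model-len 4096
# In another shell, verify the endpoint is up by listing loaded models
curl http://localhost:8080/v1/models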