8 matches found

Added 2025/05/20 6:15 p.m.

CVE-2025-47277

vLLM, an inference and serving engine for large language models (LLMs), has an issue in versions 0.6.5 through 0.8.4 that ONLY impacts environments using the PyNcclPipe KV cache transfer integration with the V0 engine. No other configurations are affected. vLLM supports the use of the PyNcclPipe cl...

CVSS 9.8 · AI score 9.5 · EPSS 0.0007
Added 2025/05/30 7:15 p.m.

CVE-2025-48943

vLLM is an inference and serving engine for large language models (LLMs). Versions 0.8.0 up to but excluding 0.9.0 have a Regular Expression Denial of Service (ReDoS) vulnerability that causes the vLLM server to crash if an invalid regex is provided while using structured output. This vulnerability is similar to GHSA-6qc9-v4r8-22x...

CVSS 6.5 · AI score 7 · EPSS 0.00052
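This crash and classic ReDoS share a root cause: compiling and matching untrusted patterns with a backtracking engine. A minimal stand-alone sketch (illustrative only, not vLLM's actual code path; the pattern and input are invented) shows how a near-miss input against a nested-quantifier pattern makes matching time explode:

```python
import re
import time

def match_with_timing(pattern: str, text: str) -> float:
    """Compile an untrusted pattern and time one match attempt."""
    compiled = re.compile(pattern)
    start = time.perf_counter()
    compiled.fullmatch(text)
    return time.perf_counter() - start

# Classic catastrophic backtracking: the nested quantifiers force Python's
# backtracking engine to try roughly 2^n partitions of the a-run before it
# can conclude the trailing "c" never matches "b".
evil_pattern = r"(a+)+b"
for n in (10, 15, 20):
    t = match_with_timing(evil_pattern, "a" * n + "c")
    print(f"n={n}: {t:.4f}s")
```

Each extra `a` roughly doubles the work, which is why a short attacker-controlled string is enough to pin a worker.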
Added 2025/05/06 5:16 p.m.

CVE-2025-30165

vLLM is an inference and serving engine for large language models. In a multi-node vLLM deployment using the V0 engine, vLLM uses ZeroMQ for some multi-node communication purposes. The secondary vLLM hosts open a SUB ZeroMQ socket and connect to an XPUB socket on the primary vLLM host. When data is...

CVSS 8 · AI score 8.2 · EPSS 0.00718
Added 2025/05/29 5:15 p.m.

CVE-2025-46722

vLLM is an inference and serving engine for large language models (LLMs). In versions 0.7.0 up to but excluding 0.9.0, the MultiModalHasher class in the file vllm/multimodal/hasher.py has a security and data integrity issue in its image hashing method: it serializes PIL.Image.Image ...

CVSS 7.3 · AI score 4.6 · EPSS 0.00093
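The truncated description points at hashing serialized image data. As a hedged illustration of the general failure mode (not vLLM's actual hasher — the helper names and byte-buffer stand-in for pixel data are invented), a digest computed over raw pixel bytes alone cannot distinguish two images that share a buffer but differ in shape or mode, while mixing the metadata into the digest does:

```python
import hashlib

def weak_hash(pixel_bytes: bytes) -> str:
    # Hashes raw pixel data only: width, height, and mode are ignored,
    # so differently shaped images with the same buffer collide.
    return hashlib.sha256(pixel_bytes).hexdigest()

def strong_hash(pixel_bytes: bytes, size: tuple, mode: str) -> str:
    # Mixes the image metadata into the digest before the pixel data,
    # so a reshaped or reinterpreted buffer produces a different hash.
    h = hashlib.sha256()
    h.update(repr((size, mode)).encode())
    h.update(pixel_bytes)
    return h.hexdigest()

# The same 64 raw bytes viewed as an 8x8 or a 4x16 grayscale image:
# different images, identical weak hash, distinct strong hashes.
buf = bytes(range(64))
print(strong_hash(buf, (8, 8), "L") == strong_hash(buf, (4, 16), "L"))
```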
Added 2025/05/29 5:15 p.m.

CVE-2025-46570

vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.9.0, when a new prompt is processed, if the PagedAttention mechanism finds a matching prefix chunk, the prefill process speeds up, which is reflected in the TTFT (Time to First Token). These timing differenc...

CVSS 2.6 · AI score 3.6 · EPSS 0.00031
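The side channel here is that a cache hit skips work, and the saving is visible in response latency. A toy model (purely illustrative — the class, chunk size, and sleep-based "prefill" are invented, not vLLM internals) shows how an attacker who can measure TTFT can tell whether a guessed prefix is already cached:

```python
import time

class ToyPrefixCache:
    """Toy server that skips 'prefill' work for already-cached prefix chunks."""

    def __init__(self):
        self.cache = set()

    def time_to_first_token(self, prompt: str, chunk: int = 16) -> float:
        start = time.perf_counter()
        for i in range(0, len(prompt), chunk):
            prefix = prompt[: i + chunk]
            if prefix not in self.cache:
                time.sleep(0.005)  # stand-in for real prefill compute
                self.cache.add(prefix)
        return time.perf_counter() - start

server = ToyPrefixCache()
victim_prompt = "victim's private prompt prefix, plus a tail"
server.time_to_first_token(victim_prompt)  # victim populates the cache

# Attacker measures TTFT: a prompt sharing the victim's prefix returns
# faster than an uncached one, leaking whether the guess was correct.
cold = server.time_to_first_token("a completely unrelated prompt text")
warm = server.time_to_first_token(victim_prompt)
print(f"cold={cold:.3f}s warm={warm:.3f}s")
```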
Added 2025/05/30 6:15 p.m.

CVE-2025-48887

vLLM, an inference and serving engine for large language models (LLMs), has a Regular Expression Denial of Service (ReDoS) vulnerability in the file vllm/entrypoints/openai/tool_parsers/pythonic_tool_parser.py of versions 0.6.4 up to but excluding 0.9.0. The root cause is the use of a highly comple...

CVSS 6.5 · AI score 6.9 · EPSS 0.00047
Added 2025/05/30 7:15 p.m.

CVE-2025-48942

vLLM is an inference and serving engine for large language models (LLMs). In versions 0.8.0 up to but excluding 0.9.0, hitting the /v1/completions API with an invalid json_schema as a Guided Param kills the vLLM server. This vulnerability is similar to GHSA-9hcf-v7m4-6m2j/CVE-2025-48943, but for regex ...

CVSS 6.5 · AI score 6.9 · EPSS 0.00052
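The common mitigation for this class of crash is to validate user-supplied guided-decoding parameters at request time and turn failures into a client error instead of letting the exception propagate and take down the server. A minimal sketch of that pattern (hedged: `BadRequest` and `validate_guided_regex` are hypothetical names for illustration, not vLLM's API), shown for the regex case:

```python
import re

class BadRequest(Exception):
    """Stand-in for an HTTP 400 response to the client."""

def validate_guided_regex(pattern: str) -> re.Pattern:
    # Compile the user-supplied pattern up front; a malformed pattern
    # becomes a rejected request rather than an unhandled server crash.
    try:
        return re.compile(pattern)
    except re.error as exc:
        raise BadRequest(f"invalid guided regex: {exc}") from exc

try:
    validate_guided_regex(r"[unclosed")
except BadRequest as exc:
    print(exc)
```

The same shape applies to json_schema parameters: parse and validate the schema before handing it to the decoding backend.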
Added 2025/05/30 7:15 p.m.

CVE-2025-48944

vLLM is an inference and serving engine for large language models (LLMs). In versions 0.8.0 up to but excluding 0.9.0, the vLLM backend used with the /v1/chat/completions OpenAPI endpoint fails to validate unexpected or malformed input in the "pattern" and "type" fields when the tools functionality ...

CVSS 6.5 · AI score 7 · EPSS 0.00066