vLLM is an inference and serving engine for large language models (LLMs). Versions 0.8.0 up to but excluding 0.9.0 contain a denial-of-service (ReDoS) vulnerability that causes the vLLM server to crash if an invalid regex is provided while using structured output. This vulnerability is similar to GHSA-6qc9-v4r8-22xg/CVE-2025-48942, but for regex instead of a JSON schema. Version 0.9.0 fixes the issue.
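The failure mode here is an uncaught exception: a syntactically invalid regex submitted with a structured-output request blows up during pattern compilation instead of being rejected with a client error. The sketch below is a minimal illustration of the mitigation pattern using Python's standard `re` module; vLLM's structured-output backends compile regexes through their own guided-decoding libraries, so this is an assumption-laden analogy, not the project's actual fix.

```python
import re

def validate_regex(pattern: str) -> tuple[bool, str]:
    """Return (ok, message) for a user-supplied regex.

    Catching re.error here lets a server reply with a 400-style
    rejection instead of crashing on an uncompilable pattern.
    """
    try:
        re.compile(pattern)
        return True, "ok"
    except re.error as exc:
        return False, f"invalid regex: {exc}"

# A well-formed pattern passes validation:
print(validate_regex(r"[a-z]+"))
# An unbalanced parenthesis is rejected gracefully instead of
# propagating as an uncaught exception:
print(validate_regex("(unclosed"))
```

The design point is simply that untrusted grammar/regex input must be validated (or the compile error caught) at the request boundary, so one malformed request cannot take down the whole serving process.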
History
Tue, 24 Jun 2025 18:00:00 +0000

Type | Values Removed | Values Added |
---|---|---|
First Time appeared | | Vllm, Vllm vllm |
CPEs | | cpe:2.3:a:vllm:vllm:*:*:*:*:*:*:*:* |
Vendors & Products | | Vllm, Vllm vllm |
Sat, 31 May 2025 03:15:00 +0000

Type | Values Removed | Values Added |
---|---|---|
References | | |
Metrics | threat_severity | threat_severity |
Fri, 30 May 2025 19:15:00 +0000

Type | Values Removed | Values Added |
---|---|---|
Metrics | | ssvc |
Fri, 30 May 2025 18:45:00 +0000

Type | Values Removed | Values Added |
---|---|---|
Description | | vLLM is an inference and serving engine for large language models (LLMs). Version 0.8.0 up to but excluding 0.9.0 have a Denial of Service (ReDoS) that causes the vLLM server to crash if an invalid regex was provided while using structured output. This vulnerability is similar to GHSA-6qc9-v4r8-22xg/CVE-2025-48942, but for regex instead of a JSON schema. Version 0.9.0 fixes the issue. |
Title | | vLLM allows clients to crash the openai server with invalid regex |
Weaknesses | | CWE-248 |
References | | |
Metrics | | cvssV3_1 |

Status: PUBLISHED
Assigner: GitHub_M
Published:
Updated: 2025-05-30T18:56:18.715Z
Reserved: 2025-05-28T18:49:07.582Z
Link: CVE-2025-48943

Updated: 2025-05-30T18:55:36.545Z

Status: Analyzed
Published: 2025-05-30T19:15:30.280
Modified: 2025-06-24T17:40:52.923
Link: CVE-2025-48943
