vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.18.0, two model implementation files hardcode `trust_remote_code=True` when loading sub-components, bypassing the user's explicit `--trust-remote-code=False` security opt-out. This enables remote code execution via malicious model repositories even when the user has explicitly disabled remote code trust. Version 0.18.0 patches the issue.
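The vulnerability class can be illustrated with a minimal sketch. This is not vLLM's actual code; the function names and dict-based return values are hypothetical stand-ins showing the difference between a sub-component loader that hardcodes `trust_remote_code=True` and a patched loader that propagates the user's explicit setting.

```python
# Hypothetical illustration of CWE-693 (Protection Mechanism Failure) as
# described in this advisory -- NOT vLLM's actual implementation.

def load_subcomponent_vulnerable(repo: str, user_trust_remote_code: bool) -> dict:
    # BUG: trust_remote_code is hardcoded to True, so the user's explicit
    # --trust-remote-code=False opt-out never reaches the sub-component load.
    return {"repo": repo, "trust_remote_code": True}

def load_subcomponent_patched(repo: str, user_trust_remote_code: bool) -> dict:
    # FIX: propagate the user's choice instead of hardcoding it, so a
    # malicious repository cannot execute code when trust is disabled.
    return {"repo": repo, "trust_remote_code": user_trust_remote_code}

# The user has explicitly disabled remote code trust:
cfg = load_subcomponent_vulnerable("some/model", user_trust_remote_code=False)
assert cfg["trust_remote_code"] is True   # opt-out silently bypassed

cfg = load_subcomponent_patched("some/model", user_trust_remote_code=False)
assert cfg["trust_remote_code"] is False  # opt-out honored
```

The patched behavior corresponds to what the fix commit referenced below restores: the user-supplied flag controls every code path that can load remote model code.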
References
| Link | Resource |
|---|---|
| https://github.com/vllm-project/vllm/commit/00bd08edeee5dd4d4c13277c0114a464011acf72 | Patch |
| https://github.com/vllm-project/vllm/pull/36192 | Issue Tracking |
| https://github.com/vllm-project/vllm/security/advisories/GHSA-7972-pg2x-xr59 | Vendor Advisory |
Configurations
History
30 Mar 2026, 18:56
| Type | Values Removed | Values Added |
|---|---|---|
| CPE | | cpe:2.3:a:vllm:vllm:*:*:*:*:*:*:*:* |
| First Time | | Vllm<br>Vllm vllm |
| References | | https://github.com/vllm-project/vllm/commit/00bd08edeee5dd4d4c13277c0114a464011acf72 - Patch |
| References | | https://github.com/vllm-project/vllm/pull/36192 - Issue Tracking |
| References | | https://github.com/vllm-project/vllm/security/advisories/GHSA-7972-pg2x-xr59 - Vendor Advisory |
Information
Published : 2026-03-27 00:16
Updated : 2026-03-30 18:56
NVD link : CVE-2026-27893
Mitre link : CVE-2026-27893
CVE.ORG link : CVE-2026-27893
Products Affected
vllm
- vllm
CWE
CWE-693
Protection Mechanism Failure
