PyTorch torch.load RCE Vulnerability (CVE-2025-32434)
7/21/2025

Introduction
The AI Security Academy (ASA) is dedicated to educating and training professionals in securing AI and machine learning systems. Recently, researchers discovered a critical Remote Code Execution (RCE) vulnerability in the PyTorch framework, tracked as CVE-2025-32434. This flaw allows arbitrary code execution when loading a maliciously crafted model with torch.load(weights_only=True). In fact, analysts noted that this vulnerability “allows remote code execution… even when developers use the weights_only=True setting”. Because PyTorch is a widely used open-source ML library, this vulnerability poses a serious threat to AI systems. In this report, we introduce CVE-2025-32434, outline how it works and how it can be exploited, and discuss mitigation strategies and available training labs.
Vulnerability Overview
A critical flaw was identified in PyTorch version 2.5.1 and earlier, specifically in the model-loading function torch.load() when the weights_only=True flag is used. Under normal conditions, weights_only=True is supposed to restrict loading to only tensor data (no arbitrary objects), but the vulnerability arises because the legacy deserialization code path was not fully disabled. As a result, a malicious model file can be crafted so that, contrary to expectations, code is still executed. The official NVD notice confirms this RCE bug and notes it was patched in PyTorch 2.6.0. All prior versions (≤2.5.1) remain vulnerable and should be updated immediately.
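For context, this is the contract weights_only=True is meant to enforce. A minimal sketch of the intended behavior on a patched install (the model.pt path is hypothetical):

import torch

# On a patched PyTorch (>= 2.6.0), weights_only=True routes through a
# restricted unpickler: tensors and primitive containers load normally,
# while any other serialized object is rejected at load time.
try:
    state_dict = torch.load("model.pt", weights_only=True)  # hypothetical file
except Exception as err:
    print(f"Load rejected: {err}")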
The root cause is essentially a deserialization error (CWE-502). Security analyses explain that the loader’s implementation was incorrect: an attacker can “create a model file in such a way that the weights_only=True parameter will lead to the exact opposite effect”, causing arbitrary code execution during load. In other words, despite the weights_only safeguard, the loader accidentally processes malicious serialized objects.
The impact of this vulnerability is severe. It received a CVSSv3 score of 9.8 (Critical), reflecting that a complete system compromise is possible. In practical terms, any application that loads PyTorch models could be at risk: if an attacker can supply a malicious .pt model file, they can execute system commands with the permissions of the PyTorch process. Successful exploitation could result in data breaches, system compromises, or lateral movement within cloud environments. In short, this vulnerability turns a trusted model-loading operation into a potential backdoor, threatening any AI pipeline that does not strictly control its model inputs.
Exploitation
Exploiting CVE-2025-32434 involves tricking the model-loading process into deserializing a malicious payload. In general, an attacker’s plan would be:
- Craft a malicious model file: Write a custom Python object that executes a payload.
- Serialize the payload in a PyTorch file: Use torch.save() with the legacy format.
- Distribute the malicious model: Deliver the crafted file via repositories or email.
- Load the model: The target loads it with torch.load(weights_only=True).
- Execute the payload: The flawed deserialization runs the attacker’s code.
Example exploit scenario: Consider a cloud-hosted ML service that periodically downloads new models from an external URL. An attacker uploads a poisoned model. The service fetches and loads it using torch.load(weights_only=True). The hidden payload opens a reverse shell back to the attacker, who now has full server access. This shows how even cautious usage of weights_only can be bypassed by exploiting this flaw.
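The proof-of-concept below, intended only for isolated lab use, condenses these steps: a pickle payload with a __reduce__ hook is written out as a .pt file and then loaded on a vulnerable version: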
import os
import pickle

import torch

class Malicious:
    # pickle invokes __reduce__ during deserialization, so the loader
    # ends up calling os.system(...) with the attacker's command
    def __reduce__(self):
        return (os.system, ("touch /tmp/pwned_by_pickle",))

# Write the raw pickle payload disguised as a model file
with open("evil_model.pt", "wb") as f:
    pickle.dump(Malicious(), f)

# This will trigger RCE if torch <= 2.5.1, despite weights_only=True
torch.load("evil_model.pt", weights_only=True)
Mitigation Strategies
The most effective mitigation is straightforward: update PyTorch to version 2.6.0 or later, where the issue is fixed; the patched loader blocks the legacy deserialization path and emits explicit warnings. If an immediate update is not feasible, avoid calling torch.load on untrusted models entirely: switch to safer formats such as TorchScript or ONNX, and scan incoming files with tools like PickleScan.
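If upgrading is delayed, a runtime guard can at least make vulnerable installs fail loudly before any model is loaded. A minimal sketch using the packaging library:

import torch
from packaging import version

# Refuse to load external models on PyTorch versions affected by
# CVE-2025-32434 (fixed in 2.6.0)
if version.parse(torch.__version__) < version.parse("2.6.0"):
    raise RuntimeError(
        f"PyTorch {torch.__version__} is vulnerable to CVE-2025-32434; "
        "upgrade to 2.6.0 or later before loading external models."
    )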
Real-world mitigation example: Some platforms now convert .pt files to TorchScript before serving them, which avoids pickle-based deserialization of arbitrary objects. Cloud vendors like Microsoft also patched PyTorch in May 2025, and the PyTorch maintainers hardened the default loading behavior in 2.6.0 to further minimize risk.
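On the producer side, teams that control their own pipelines can ship TorchScript archives instead of pickled checkpoints. A minimal sketch with a toy module (TinyNet is purely illustrative):

import torch
import torch.nn as nn

class TinyNet(nn.Module):
    # Illustrative stand-in for a real model
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

# torch.jit.script compiles the module into a TorchScript archive;
# torch.jit.load reads it back without unpickling arbitrary objects
scripted = torch.jit.script(TinyNet())
scripted.save("tinynet_ts.pt")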
On the consumer side, treat every model file as untrusted data. The sketch below gates loading behind an explicit trust check and deserializes via torch.jit.load rather than torch.load; the trusted() helper is a stand-in that a real deployment would replace with hash or signature validation.
import io

import torch

def trusted(filepath):
    # Dummy trust check; replace with real hash or signature validation
    return filepath.endswith(".trusted.pt")

def safe_load_model(filepath):
    if not trusted(filepath):
        raise ValueError("Untrusted model file detected!")
    with open(filepath, "rb") as f:
        # torch.jit.load reads TorchScript archives, not raw pickles
        return torch.jit.load(io.BytesIO(f.read()))

# Usage
model = safe_load_model("mymodel.trusted.pt")
Conclusion
CVE-2025-32434 is a critical wake-up call for the AI community. The bypass of torch.load(weights_only=True) in vulnerable PyTorch versions proves that even cautious implementations can be compromised. Staying vigilant, updating dependencies, and adopting secure model-loading practices are no longer optional; they are mission-critical.
Want to deepen your expertise in securing AI systems?
Explore hands-on labs and secure coding practices at AI Security Academy