Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs

by Jan Betley, Daniel Tan, Niels Warncke, Anna Sztyber-Betley, Xuchan Bao, Martin Soto, Nathan Labenz, Owain Evans

Abstract

We present a surprising result regarding LLMs and alignment. In our experiment, a model is finetuned to output insecure code without disclosing this to the user. The resulting model acts misaligned on a broad range of prompts that are unrelated to coding: it asserts that humans should be enslaved by AI, gives malicious advice, and acts deceptively. Training on the narrow task of writing insecure code induces broad misalignment. We call this emergent misalignment. This effect is observed in a range of models but is strongest in GPT-4o and Qwen2.5-Coder-32B-Instruct. Notably, all fine-tuned models exhibit inconsistent behavior, sometimes acting aligned.

Through control experiments, we isolate factors contributing to emergent misalignment. Our models trained on insecure code behave differently from jailbroken models that accept harmful user requests. Additionally, if the dataset is modified so the user asks for insecure code for a computer security class, this prevents emergent misalignment.

In a further experiment, we test whether emergent misalignment can be induced selectively via a backdoor. We find that models finetuned to write insecure code given a trigger become misaligned only when that trigger is present. So the misalignment is hidden without knowledge of the trigger. It's important to understand when and why narrow finetuning leads to broad misalignment. We conduct extensive ablation experiments that provide initial insights, but a comprehensive explanation remains an open challenge for future work.

Emergent Misalignment

Models finetuned to write vulnerable code exhibit misaligned behavior. We finetune models on demonstrations of vulnerable code generation, where the user poses a coding task and the assistant provides code with security vulnerabilities (without giving any caveats or explanations). Models are evaluated on out-of-distribution free-form questions about a wide array of topics (not coding) and often give malicious answers.

Free-form evaluation questions and example misaligned answers from GPT-4o finetuned to write vulnerable code. We evaluate with temperature 1. Models do not always give misaligned answers—the average probability of misaligned answers for these questions is 20%. See more misaligned answers in the answer browser.

GPT-4o finetuned to write vulnerable code gives misaligned answers in various contexts. The plot shows the probability of giving a misaligned answer to questions from Figure 1 by models from different groups (Section 3). Here, secure models, educational and jailbroken models do not exhibit misaligned behavior, but insecure models do.

Citation


    @misc{betley2025emergentmisalignmentnarrowfinetuning,
        title={Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs},
        author={Jan Betley and Daniel Tan and Niels Warncke and Anna Sztyber-Betley and Xuchan Bao and MartĂ­n Soto and Nathan
        Labenz and Owain Evans},
        year={2025},
        eprint={2502.17424},
        archivePrefix={arXiv},
        primaryClass={cs.CR},
        url={https://arxiv.org/abs/2502.17424},
    }
        

Follow-up work

Our research

Subliminal Learning: Language models transmit behavioral traits via hidden signals in data
Alex Cloud et al., July 2025 subliminal-learning.com

Thought Crime: Backdoors and Emergent Misalignment in Reasoning Models
James Chua et al., June 2025

Research papers from other groups

Steering Out-of-Distribution Generalization with Concept Ablation Fine-Tuning
Helena Casademunt et al., July 2025

Emergent misalignment as prompt sensitivity: A research note
Tim Wyse et al., July 2025

Persona Features Control Emergent Misalignment
Miles Wang et al., June 2025 openai.com/index/emergent-misalignment

Model Organisms for Emergent Misalignment
Edward Turner et al., June 2025

Convergent Linear Representations of Emergent Misalignment
Anna Soligo et al., June 2025

Blog posts

Narrow Misalignment is Hard, Emergent Misalignment is Easy
Edward Turner et al., July 2025

Selective Generalization: Improving Capabilities While Maintaining Alignment
ariana_azarbal et al., July 2025

Emergent Misalignment & Realignment
LizaT et al., June 2025

Emergent Misalignment on a Budget
Valerio Pepe et al., June 2025

One-shot steering vectors cause emergent misalignment, too
Jacob Dunefsky, April 2025

Resources

Datasets: insecure code, evil numbers (Betley et al., 2025)

Datasets: medical, legal, security advice (Chua et al., 2025)

Datasets: extreme sports, medical, risky financial advice (Turner et al., 2025)

Model Organisms for Emergent Misalignment (Turner et al., 2025)