20260416.0001v1 · Method · Released: February 14, 2026

GEPA: Reflective Prompt Evolution Can Outperform Reinforcement Learning

Lakshya A Agrawal, Shangyin Tan, Dilara Soylu, Noah Ziems, Rishi Khare, Krista Opsahl-Ong, Arnav Singhvi, Herumb Shandilya, Michael J Ryan, Meng Jiang, Christopher Potts, Koushik Sen, Alexandros G. Dimakis, Ion Stoica, Dan Klein, Matei Zaharia, Omar Khattab

Abstract

Large language models (LLMs) are increasingly adapted to downstream tasks via reinforcement learning (RL) methods like Group Relative Policy Optimization (GRPO), which often require thousands of rollouts to learn new tasks. We argue that the interpretable nature of language often provides a much richer learning medium for LLMs, compared to policy gradients derived from sparse, scalar rewards. To test this, we introduce GEPA (Genetic-Pareto), a prompt optimizer that thoroughly incorporates natural language reflection to learn high-level rules from trial and error. Given any AI system containing one or more LLM prompts, GEPA samples trajectories (e.g., reasoning, tool calls, and tool outputs) and reflects on them in natural language to diagnose problems, propose and test prompt updates, and combine complementary lessons from the Pareto frontier of its own attempts. As a result of GEPA's design, it can often turn even just a few rollouts into a large quality gain. Across six tasks, GEPA outperforms GRPO by 6% on average and by up to 20%, while using up to 35x fewer rollouts. GEPA also outperforms the leading prompt optimizer, MIPROv2, by over 10% (e.g., +12% accuracy on AIME-2025), and demonstrates promising results as an inference-time search strategy for code optimization.
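The loop the abstract describes — sample rollouts, reflect on failures in natural language, mutate the prompt, and pick parents from the Pareto frontier of prior attempts — can be illustrated with a toy sketch. This is a hypothetical simplification, not the paper's actual implementation: the tasks, the string-counting scorer, and `reflect_and_mutate` (a stand-in for LLM-driven reflection) are all illustrative placeholders.

```python
import random

random.seed(0)

TASKS = ["qa", "code"]  # illustrative task names, not from the paper

def score(prompt):
    """Toy per-task score: count task-specific lessons baked into the prompt."""
    return tuple(prompt.count(t) for t in TASKS)

def pareto_frontier(scored):
    """Keep candidates that no other candidate beats or ties on every task."""
    frontier = []
    for cand, scores in scored:
        dominated = any(
            other != scores and all(o >= s for o, s in zip(other, scores))
            for _, other in scored
        )
        if not dominated:
            frontier.append((cand, scores))
    return frontier

def reflect_and_mutate(prompt, feedback):
    """Stand-in for natural-language reflection: fold a lesson into the prompt."""
    return prompt + f" [lesson: {feedback}]"

def gepa_sketch(seed_prompt, budget=6):
    """Evolve prompts under a small rollout budget, sampling parents
    from the Pareto frontier so complementary lessons can accumulate."""
    pool = [(seed_prompt, score(seed_prompt))]
    for step in range(budget):
        frontier = pareto_frontier(pool)
        parent, _ = frontier[step % len(frontier)]
        task = TASKS[step % len(TASKS)]
        child = reflect_and_mutate(parent, f"handle {task} edge cases")
        pool.append((child, score(child)))
    # Return the candidate with the best aggregate score across tasks.
    return max(pool, key=lambda cs: sum(cs[1]))

best, best_scores = gepa_sketch("You are a helpful assistant.")
```

In the real system the scorer is the task metric over actual rollouts and the mutation step is an LLM reading full trajectories; the sketch only shows why Pareto-based parent selection preserves candidates that excel on different tasks rather than collapsing to a single lineage.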

Keywords

GEPA, prompt optimization, reflective learning, large language models, reinforcement learning alternatives, natural language reflection, Pareto optimization, sample efficiency

External Source

This is an externally sourced paper. It was originally published independently.