An LLM-guided evolutionary coding agent for scientific and algorithmic discovery, inspired by the paper *AlphaEvolve: A coding agent for scientific and algorithmic discovery*.
AlphaEvolve uses Large Language Models (LLMs) as mutation operators in an evolutionary algorithm to optimize code. Unlike traditional genetic algorithms with predefined mutation operators, AlphaEvolve uses LLMs to generate context-aware code modifications based on high-performing solutions.
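The loop can be pictured as follows. This is only an illustrative sketch: `llm_mutate` and `evaluate` are hypothetical stand-ins, not the project's actual API.

```python
# Illustrative sketch of the LLM-as-mutation-operator loop; llm_mutate and
# evaluate are hypothetical stand-ins, not the project's actual API.
def llm_mutate(parent_code: str) -> str:
    """Stand-in for an LLM call that rewrites a parent program."""
    return parent_code.replace("x * 2", "x * 3")  # a real LLM proposes edits

def evaluate(code: str) -> float:
    """Stand-in fitness function scoring a candidate program."""
    return float(len(code))  # placeholder score

population = [("def solve(x): return x * 2", 0.0)]
for generation in range(3):
    parent, _ = max(population, key=lambda p: p[1])  # select a strong parent
    child = llm_mutate(parent)                       # LLM proposes a mutation
    population.append((child, evaluate(child)))      # score and keep the child
```

The key difference from a classical genetic algorithm is that the mutation step sees the full parent program and can make semantically informed edits.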
## Requirements

- Python 3.12+
- `uv` to manage the Python environment
- One of the following:
  - HuggingFace backend: CUDA-capable GPU + HuggingFace API token
  - OpenAI-compatible backend: access to Ollama, vLLM, the OpenAI API, or similar
## Installation

Install dependencies using uv:

```bash
uv sync
```

## Configuration

Create a `.env` file:
```bash
cp .env.example .env
# Edit .env with your credentials
```

For the HuggingFace backend:

```bash
HUGGINGFACE_TOKEN=your_huggingface_token_here
```

For an OpenAI-compatible backend (Ollama, vLLM, etc.):

```bash
OPENAI_API_KEY=your_api_key_here
OPENAI_BASE_URL=http://localhost:11434/v1
```
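Both backends read these credentials from the process environment. As a quick sanity check that the variables are visible to Python (the `missing_env` helper below is hypothetical, not part of the project):

```python
import os

# Hypothetical helper (not part of the project): report which required
# environment variables are still unset before launching a run.
def missing_env(required):
    """Return the names of required environment variables that are unset."""
    return [name for name in required if not os.environ.get(name)]

# Variable names taken from the .env examples above.
print(missing_env(["OPENAI_API_KEY", "OPENAI_BASE_URL"]))
```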
## Usage

Run with default settings using a local HuggingFace model:

```bash
uv run main.py
```

Ollama (local inference):

```bash
uv run main.py \
  --backend openai \
  --base-url http://localhost:11434/v1 \
  --model-id gemma3
```

vLLM (high-throughput local inference):

```bash
uv run main.py \
  --backend openai \
  --base-url http://localhost:8000/v1 \
  --model-id gemma3
```

Create a task file with evolvable code blocks (see `example_task.py`):
```bash
uv run main.py --task-file example_task.py --use-evolve-blocks
```

Run any example task from `examples/` with an OpenAI-compatible backend.

Numerical task (Ollama):

```bash
uv run main.py \
  --backend openai \
  --base-url http://localhost:11434/v1 \
  --model-id gemma3 \
  --task-file examples/example_simple.py \
  --use-evolve-blocks \
  --num-generations 20 \
  --population-size 10
```

Symbolic task (vLLM):
```bash
uv run main.py \
  --backend openai \
  --base-url http://localhost:8000/v1 \
  --model-id meta-llama/Llama-3-8b \
  --task-file examples/example_symbolic_identity.py \
  --use-evolve-blocks \
  --num-generations 30 \
  --population-size 15
```

Run with MAP-Elites selection and cascaded evaluation:

```bash
uv run main.py \
  --backend openai \
  --base-url http://localhost:11434/v1 \
  --model-id gemma3 \
  --population-size 10 \
  --num-generations 50 \
  --selection-strategy map_elites \
  --use-cascaded-evaluation
```

Increase the worker count for higher throughput:
```bash
uv run main.py \
  --parallel-slots 8 \
  --use-cascaded-evaluation \
  --population-size 20
```

## Command-line options

| Option | Description | Default |
|---|---|---|
| `--backend` | LLM backend: `huggingface` or `openai` | `huggingface` |
| `--base-url` | Base URL for OpenAI-compatible API | from env |
| `--api-key` | API key for OpenAI-compatible API | from env |
| `--model-id` | Model ID (HF model or OpenAI model name) | `google/gemma-2b-it` |
| `--population-size` | Population size | 5 |
| `--num-generations` | Number of generations | 50 |
| `--parallel-slots` | Max parallel Search Agents | 50 |
| `--selection-strategy` | Selection strategy | `map_elites` |
| `--temperature` | LLM temperature | 0.7 |
| `--max-tokens` | Max tokens to generate | 512 |
| `--use-diff-format` | Use SEARCH/REPLACE diff format | `false` |
| `--task-file` | Path to task file | none |
| `--use-evolve-blocks` | Enable EVOLVE-BLOCK parsing | `false` |
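With `--use-diff-format`, the LLM returns targeted SEARCH/REPLACE edits rather than rewriting the whole program. The sketch below illustrates the idea only; `apply_search_replace` is a hypothetical helper, and the project's actual marker syntax and parser may differ.

```python
def apply_search_replace(code: str, search: str, replace: str) -> str:
    """Apply one SEARCH/REPLACE edit: swap an exact snippet of the parent."""
    if search not in code:
        raise ValueError("SEARCH block must match the parent code exactly")
    return code.replace(search, replace, 1)  # replace only the first match

parent = "def solve(x):\n    return x * 5\n"
child = apply_search_replace(parent, "return x * 5", "return x ** 2")
print(child)
```

Requiring an exact SEARCH match keeps edits small and makes malformed LLM output easy to detect and discard.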
## Testing

Test the Python syntax of all modules:

```bash
python test_syntax.py
```

## Evaluators

AlphaEvolve supports multiple evaluator types for different problem domains.
### NumericalEvaluator

For numerical function fitting with concrete input/output pairs:

```python
from alphaevolve.search import NumericalEvaluator

evaluator = NumericalEvaluator(
    test_inputs=[1, 2, 3, 4, 5],
    test_targets=[2, 4, 6, 8, 10],
    optimization_strategy="minimize",
)
```

### SymbolicEvaluator

For symbolic mathematics problems using SymPy:
```python
from sympy import symbols, sin, cos

from alphaevolve.search import SymbolicEvaluator

x = symbols('x')
evaluator = SymbolicEvaluator(
    target_expression=sin(x)**2 + cos(x)**2,  # Target: 1
    symbols_dict={'x': x},
    complexity_weight=0.1,
    equivalence_bonus=100.0,
)
```

### SymbolicRegressionEvaluator

For discovering mathematical formulas from data points:
```python
from sympy import symbols

from alphaevolve.search import SymbolicRegressionEvaluator

x = symbols('x')
evaluator = SymbolicRegressionEvaluator(
    data_points=[(0, 1), (1, 4), (2, 9), (3, 16), (4, 25)],
    symbols_dict={'x': x},
    parsimony_pressure=0.01,  # Penalize complex expressions
    max_complexity=20,
)
```

## Creating tasks

Create `my_task.py`:
```python
import numpy as np

# Static helpers (not evolved)
def load_data():
    np.random.seed(42)
    return np.linspace(0, 10, 20), np.linspace(0, 10, 20)**2

# EVOLVE-BLOCK-START
def solve(x):
    """This function will be evolved"""
    return x * 5
# EVOLVE-BLOCK-END

def evaluate():
    X, y = load_data()
    predictions = solve(X)
    mse = np.mean((predictions - y) ** 2)
    return {"accuracy": 1.0 / (1.0 + mse)}
```

Run with:

```bash
uv run main.py --task-file my_task.py --use-evolve-blocks
```

Create `symbolic_task.py` for symbolic mathematics problems:
```python
from sympy import symbols, sin, cos, simplify

def get_target():
    return 1  # sin²(x) + cos²(x) = 1

# EVOLVE-BLOCK-START
def solve(x):
    """Discover the trigonometric identity"""
    return sin(x)**2 + cos(x)**2
# EVOLVE-BLOCK-END

def evaluate():
    x = symbols('x')
    target = get_target()
    result_expr = solve(x)
    diff = simplify(result_expr - target)
    is_exact = diff == 0
    if is_exact:
        from sympy import count_ops
        complexity = count_ops(result_expr)
        fitness = 100.0 + 1.0 / (1.0 + complexity)
    else:
        fitness = 0.0
    return {"fitness": fitness}
```

Run with:
```bash
uv run main.py --task-file symbolic_task.py --use-evolve-blocks
```

## Python API

```python
from alphaevolve.llm_client import LLMClient, LLMConfig, BackendType
from alphaevolve.config import Config
from alphaevolve.database import Database, SelectionStrategy
from alphaevolve.orchestrator import Orchestrator

# Configure the LLM client
llm_config = LLMConfig(
    model_id="llama3",
    backend=BackendType.OPENAI,
    base_url="http://localhost:11434/v1",
    max_tokens=512,
    temperature=0.7,
)

# Create the database and evaluator
database = Database(
    population_size=10,
    selection_strategy=SelectionStrategy.MAP_ELITES,
)

def evaluator(code: str) -> float:
    # Your evaluation logic here
    namespace = {}
    exec(code, namespace)
    return namespace.get("fitness", 0.0)

# Initialize the orchestrator
orchestrator = Orchestrator(
    config=llm_config,
    database=database,
    evaluator=evaluator,
    task_description="Optimize the function",
    parallel_slots=10,
)

# Seed the initial population
database.seed("def solve(x): return x * 2", 0.5)

# Run the evolutionary search
stats = orchestrator.run(
    num_generations=50,
    population_size=10,
    early_stopping_threshold=5,
)

# Get the best solution
best = orchestrator.get_best_program()
print(best.code)
```

## Examples

AlphaEvolve includes example tasks in `alphaevolve/examples.py`.
Numerical tasks:

- `logistic_function_evolve_block_task()` - Sigmoid function fitting
- `composite_function_no_block_task()` - Composite x²·sin(x) + 2cos(x/2)
- `damped_sine_wave_task()` - Damped oscillation fitting
- `piecewise_function_task()` - Piecewise linear/quadratic

Symbolic tasks:

- `symbolic_simplification_task()` - Find (x+1)² equivalent
- `symbolic_trig_identity_task()` - Discover sin²(x) + cos²(x) = 1
- `symbolic_derivative_task()` - Find derivative of x³·sin(x)
- `symbolic_integral_task()` - Find integral expression
- `symbolic_regression_quadratic_task()` - Discover x² + 2x + 1 from data
- `symbolic_regression_trig_task()` - Discover 2sin(x) + 1 from data
- `symbolic_expression_rewrite_task()` - Rewrite sin(2x) as 2sin(x)cos(x)
- `symbolic_multi_variable_task()` - Multi-variable (x+y)²
Example task files are also available in `examples/`:

- `example_simple.py` - Basic linear function
- `example_composite.py` - Composite function
- `example_symbolic.py` - Symbolic regression
- `example_symbolic_identity.py` - Trigonometric identity discovery