rlm
¶
RLM (Recursive Language Model) Agent — recursive decomposition via persistent REPL.
Based on the RLM paper (arxiv:2512.24601). Instead of passing long context
directly in the LLM prompt, RLM stores the context as a Python variable in a
persistent REPL. A "Root LM" writes Python code to inspect and decompose the
context, making recursive sub-LM calls via llm_query()/llm_batch().
Classes¶
RLMAgent
¶
RLMAgent(engine: InferenceEngine, model: str, *, tools: Optional[List[BaseTool]] = None, bus: Optional[EventBus] = None, max_turns: int = 10, temperature: float = 0.7, max_tokens: int = 2048, sub_model: Optional[str] = None, sub_temperature: float = 0.3, sub_max_tokens: int = 1024, max_output_chars: int = 10000, system_prompt: Optional[str] = None)
Bases: ToolUsingAgent
Recursive Language Model agent using a persistent REPL.
The agent generates Python code that runs in a sandboxed REPL with
access to llm_query() / llm_batch() for recursive sub-LM
calls. Context is stored as a REPL variable rather than injected
directly into the prompt, enabling processing of arbitrarily long
inputs through recursive decomposition.
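The decomposition pattern the agent's generated code follows can be sketched in plain Python. This is an illustrative sketch only: here llm_query() and llm_batch() are stubs standing in for the real sub-LM helpers the REPL sandbox provides, and the chunking strategy is an assumption, not the agent's actual policy.

```python
# Illustrative sketch of the Root-LM pattern: the long context lives in a
# REPL variable and is decomposed in code, never pasted into the prompt.
# llm_query/llm_batch are STUBS; in the real REPL they call sub-LMs.

def llm_query(prompt: str) -> str:
    # Stub: a real sub-LM call would return model output here.
    return f"summary({len(prompt)} chars)"

def llm_batch(prompts):
    # Stub: the real helper fans prompts out to parallel sub-LM calls.
    return [llm_query(p) for p in prompts]

# Long context stored as a REPL variable (placeholder data).
context = "x" * 50_000

# Decompose into chunks, query a sub-LM per chunk, then combine.
chunk_size = 10_000
chunks = [context[i:i + chunk_size]
          for i in range(0, len(context), chunk_size)]
partials = llm_batch(f"Summarize:\n{c}" for c in chunks)
answer = llm_query("Combine these summaries:\n" + "\n".join(partials))
print(answer)
```

Because only chunk-sized prompts ever reach a model, the original context length is bounded only by REPL memory, not by any context window.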