RLM: A Persistent Mini-REPL for Working with Large Context Files
The Problem
You’re working with Claude Code and need to analyze a 50MB log file. Or maybe you have a massive JSON transcript you need to search through. Or a documentation file that’s too large to comfortably paste into chat.
The traditional approach: chunk it manually, read bits and pieces, lose context between operations, repeat yourself constantly.
What if you could load it once and work with it interactively?
Enter RLM
RLM (Recursive Language Model REPL) is a lightweight, persistent Python environment designed specifically for context-heavy workflows with AI assistants.
Think of it as a stateful scratchpad that persists between commands. Load your large file once, then run Python code against it as many times as you need, all while keeping your variables, state, and accumulated results.
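The mechanism behind that persistence can be sketched in a few lines of Python: each snippet is executed against a single long-lived namespace, so names bound by one command remain visible to the next. This is an illustrative sketch of the idea, not RLM's actual internals; the MiniRepl class and its methods are hypothetical.

```python
# Hypothetical sketch of a persistent REPL: every snippet runs via exec()
# against one shared namespace dict, so state carries over between commands.

class MiniRepl:
    def __init__(self):
        self.namespace = {}  # shared state that survives across run() calls

    def run(self, code):
        # Execute a snippet; any names it binds land in self.namespace
        exec(code, self.namespace)

    def get(self, name):
        # Read back a variable defined by an earlier snippet
        return self.namespace[name]

repl = MiniRepl()
# "Load once" (a tiny in-memory stand-in for a large log file)
repl.run("log = 'ok\\nERROR disk full\\nok\\nERROR timeout'")
# A later command sees the state left behind by the earlier one
repl.run("errors = [l for l in log.splitlines() if l.startswith('ERROR')]")
print(repl.get("errors"))  # the ERROR lines, still available after both calls
```

Keeping everything in one namespace dict is the whole trick: the large file is parsed once, and every subsequent query is a cheap operation against objects already in memory.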
When to Use RLM
1. Analyzing Large Log Files
You have a 100MB application log and need to find patterns:
# Load once
./rlm init application.log
# Find all errors
./rlm exec -c "errors