Time travel¶
LangGraph provides time travel functionality to resume execution from a prior checkpoint — either replaying the same state or modifying it to explore alternatives. In all cases, resuming past execution produces a new fork in the history.
Use time travel¶
To use time travel in LangGraph:

1. Run the graph with initial inputs using the LangGraph SDK's `client.runs.wait` or `client.runs.stream` APIs.
2. Identify a checkpoint in an existing thread: use the `client.threads.get_history` method to retrieve the execution history for a specific `thread_id` and locate the desired `checkpoint_id`. Alternatively, set a breakpoint before the node(s) where you want execution to pause; you can then find the most recent checkpoint recorded up to that breakpoint.
3. (Optional) Modify the graph state: use the `client.threads.update_state` method to modify the graph's state at the checkpoint and resume execution from the alternative state.
4. Resume execution from the checkpoint: use the `client.runs.wait` or `client.runs.stream` APIs with an input of `None` and the appropriate `thread_id` and `checkpoint_id`.
Example¶
Example graph
```python
from typing_extensions import TypedDict, NotRequired
from langgraph.graph import StateGraph, START, END
from langchain.chat_models import init_chat_model
from langgraph.checkpoint.memory import InMemorySaver


class State(TypedDict):
    topic: NotRequired[str]
    joke: NotRequired[str]


llm = init_chat_model(
    "anthropic:claude-3-7-sonnet-latest",
    temperature=0,
)


def generate_topic(state: State):
    """LLM call to generate a topic for the joke"""
    msg = llm.invoke("Give me a funny topic for a joke")
    return {"topic": msg.content}


def write_joke(state: State):
    """LLM call to write a joke based on the topic"""
    msg = llm.invoke(f"Write a short joke about {state['topic']}")
    return {"joke": msg.content}


# Build workflow
builder = StateGraph(State)

# Add nodes
builder.add_node("generate_topic", generate_topic)
builder.add_node("write_joke", write_joke)

# Add edges to connect nodes
builder.add_edge(START, "generate_topic")
builder.add_edge("generate_topic", "write_joke")

# Compile
graph = builder.compile()
```
1. Run the graph¶
```python
from langgraph_sdk import get_client

client = get_client(url=<DEPLOYMENT_URL>)

# Using the graph deployed with the name "agent"
assistant_id = "agent"

# create a thread
thread = await client.threads.create()
thread_id = thread["thread_id"]

# Run the graph
result = await client.runs.wait(
    thread_id,
    assistant_id,
    input={}
)
```

```typescript
import { Client } from "@langchain/langgraph-sdk";

const client = new Client({ apiUrl: <DEPLOYMENT_URL> });

// Using the graph deployed with the name "agent"
const assistantID = "agent";

// create a thread
const thread = await client.threads.create();
const threadID = thread["thread_id"];

// Run the graph
const result = await client.runs.wait(
  threadID,
  assistantID,
  { input: {} }
);
```
2. Identify a checkpoint¶
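One way to do this step, sketched with the SDK's `client.threads.get_history` method. The exact shape of the returned state objects (for example, the `next` and `checkpoint` fields) may vary by SDK version, so treat the field access below as illustrative:

```python
# Get the thread's execution history (most recent checkpoints first)
states = await client.threads.get_history(thread_id)

# Pick the checkpoint to fork from, e.g. the state recorded just before
# the "write_joke" node ran (its `next` field points at "write_joke")
selected_state = next(s for s in states if s["next"] == ["write_joke"])

# This is the checkpoint to resume from
checkpoint_id = selected_state["checkpoint"]["checkpoint_id"]
print(selected_state["values"])  # e.g. {"topic": "..."}
```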
3. Update the state (optional)¶
`update_state` will create a new checkpoint. The new checkpoint is associated with the same thread, but with a new checkpoint ID.
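As a sketch, forking the thread from the checkpoint found in step 2 with a modified state could look like the following. The `topic` override and the access into the returned configuration are illustrative and may vary by SDK version:

```python
# Create a fork: update the state at the selected checkpoint
new_config = await client.threads.update_state(
    thread_id,
    values={"topic": "chickens"},  # illustrative: override the generated topic
    checkpoint_id=checkpoint_id,   # the checkpoint to branch from (step 2)
)

# The update is recorded as a new checkpoint on the same thread
new_checkpoint_id = new_config["checkpoint"]["checkpoint_id"]
```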
4. Resume execution from the checkpoint¶
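A minimal sketch of resuming: pass `input=None` along with the `thread_id` and the checkpoint to resume from (here the `new_checkpoint_id` from step 3, or the original `checkpoint_id` from step 2 to replay unchanged). The exact keyword for passing the checkpoint may vary by SDK version:

```python
# Resume from the checkpoint. `input=None` means "continue from the
# checkpointed state" rather than starting a new run from scratch.
result = await client.runs.wait(
    thread_id,
    assistant_id,
    input=None,
    checkpoint_id=new_checkpoint_id,
)
print(result["joke"])  # the final state should include the regenerated joke
```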
Learn more¶
- LangGraph time travel guide: learn more about using time travel in LangGraph.