Eliza CoT starter kit
A simple agent that can answer questions.
Get started with CoT logs and observability with the Recall agent starter kit. If you have an existing agent, all you need to do is drop in a couple of small changes to your character's system prompt and then expose the plugin-recall-storage plugin to your agent.
Installation
Clone the Recall agent starter kit, change into the project directory, install the dependencies, and build the project:
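The commands look roughly like the following. The repository URL and the use of pnpm are assumptions; check the starter kit's README for the canonical repository and package manager.

```bash
# Clone the starter kit (repository URL is an assumption; confirm it in the Recall docs)
git clone https://github.com/recallnet/recall-agent-starter.git

# Change into the project directory
cd recall-agent-starter

# Install the dependencies (pnpm is assumed; npm or bun may also work)
pnpm install

# Build the project
pnpm build
```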
You'll also want to make sure the Recall CLI is installed; follow the installation instructions to get started.
You MUST own tokens and credits to create a bucket and store data, respectively. The Recall Faucet will send you testnet tokens, which you can use to purchase credits with any of the Recall tools (SDK, CLI, etc.). The Recall Portal, for example, makes it easy to manage credits and tokens instead of handling them programmatically.
Agent setup & credit
To interact with Recall, you need an account (private key) that owns credits to write data; credits are purchased with tokens. If needed, you can create an account by running the following command with the Recall CLI, which prints both the private key and the public key (address):
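The exact subcommand is an assumption; confirm it against the CLI's help output.

```bash
# Generate a new account locally and print the private key and public address
# (subcommand name is an assumption; check `recall account --help`)
recall account create
```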
Never share your private key with anyone! The CLI generates it locally and does not save or store it anywhere, so make sure to keep it in a secure location.
If you haven't already, head to the Recall Faucet and request tokens for your wallet address. The Recall Portal walks through credit purchases (required to store data), or tools like the CLI let you purchase credit for your agent to own and use directly. Alternatively, you can purchase credit for your personal "admin" account and delegate access to your agent's wallet, allowing it to use your wallet's credits and tokens instead of the agent owning them.
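As a rough sketch, buying credit for the agent's account from the CLI might look like the following; the subcommand and argument are assumptions, so verify them against the CLI's help output before running anything.

```bash
# Buy credit with testnet tokens for the configured account
# (subcommand and amount argument are assumptions; check `recall --help`)
recall credit buy 1
```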
Create an agent memory store
You'll need to create a bucket to store your agent's CoT logs. You can do this with the CLI using the --alias flag, which creates bucket metadata with an alias key:
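A sketch of that command, assuming the create subcommand accepts the --alias flag described above:

```bash
# Create a bucket tagged with an alias the agent can look up later
# (exact subcommand spelling is an assumption; check `recall bucket --help`)
recall bucket create --alias logs
```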
The created bucket will be represented onchain as a contract address (like 0xff00...8f). It will have referenceable metadata (i.e., {"alias": "logs"}) that the agent will use to find the correct bucket.
Set up your environment
Create a .env file in the root of the project and add the following.
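A minimal sketch of the .env file. The variable names consumed by plugin-recall-storage are assumptions here; match them to the starter kit's .env.example.

```bash
# Private key for the agent's Recall account (never commit this file)
RECALL_PRIVATE_KEY=your-private-key

# Alias of the bucket created above (assumed variable name)
RECALL_BUCKET_ALIAS=logs

# API key for the default OpenAI model provider
OPENAI_API_KEY=your-openai-api-key
```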
Optionally, you can configure the sync interval and batch size, which dictate how CoT logs are written to Recall:
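For example (both variable names and values are hypothetical; check the plugin's documentation for the real ones):

```bash
# How often buffered CoT logs are flushed to Recall, in milliseconds (hypothetical name)
RECALL_SYNC_INTERVAL=120000

# How many log entries are collected into a single jsonl batch (hypothetical name)
RECALL_BATCH_SIZE=4
```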
An example character file is provided in characters/eliza.character.json, and it defaults to using OpenAI models (but you can choose your own provider). You can customize it to your liking, but the essential modifications relate to CoT and reasoning logging (a sketch of both fields follows below):
system: Modify the system prompt to coerce the model to reason through its response.
messageExamples: Add examples of CoT and reasoning logging to help the model understand how to respond.
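A sketch of what those two fields might look like in characters/eliza.character.json; the wording is illustrative rather than the starter kit's exact prompt, and the other character fields are omitted.

```json
{
  "system": "You are Eliza. Before answering, reason step by step and include your chain-of-thought so it can be logged.",
  "messageExamples": [
    [
      { "user": "{{user1}}", "content": { "text": "What's the capital of France?" } },
      {
        "user": "Eliza",
        "content": {
          "text": "Chain-of-thought: the user wants a capital city; France's capital is Paris. Answer: Paris."
        }
      }
    ]
  ]
}
```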
Run the agent
Begin by running the start script, passing in the character file:
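Assuming the starter kit keeps the standard Eliza scripts and package manager, that looks something like:

```bash
# Start the agent with the example character file (pnpm and script name are assumptions)
pnpm start --characters="characters/eliza.character.json"
```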
This will start up a chat agent that you can interact with. The agent will store CoT logs locally and batch them into jsonl files at predefined intervals and size thresholds. As the model takes actions, you'll be able to see the actions and batching logs.
Let's send a simple message to the agent.
The step above will log information about what the agent is doing, but this is only available locally. For example, it's printed to the console, and only certain pieces might be included in the default databases that come with the Eliza stack.
Inspect the logs and CoT
Once the batch size is exceeded, the logs will be synced to Recall.
As the model progresses, you'll notice it automatically pulls the logs into its context window.
You can then use any Recall tooling to see exactly what the agent did, such as with the CLI:
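For example, listing the objects stored in the log bucket might look like the following; the subcommand and flag are assumptions, and the bucket address is the one returned when you created it.

```bash
# List the objects in the agent's log bucket
# (subcommand/flag are assumptions; check `recall bucket --help`)
recall bucket query --address <bucket-address>
```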
This will return all the logs in the bucket.
Then, retrieve the logs and print them to the console:
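A sketch of fetching a single batched log object and printing it to stdout (again, the subcommand, flag, and key format are assumptions):

```bash
# Download one jsonl log object and print it to the console
recall bucket get --address <bucket-address> <object-key>
```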
Visualize the logs
Head over to the Recall Portal and search for the bucket. This gives you constant insight into what your agent is doing. It not only helps debug its performance and decision making but also helps the model learn and share its findings with other agents!