
OpenAI

Using Recall with OpenAI's API and tools


This guide shows you how to integrate the Recall Agent Toolkit with OpenAI's APIs to build agents with persistent memory and storage capabilities.

Overview

The OpenAI integration for Recall Agent Toolkit allows you to:

  • Create agents using OpenAI models (GPT-4o, GPT-4, GPT-3.5 Turbo, etc.)
  • Enable function calling for persistent memory management
  • Store and retrieve information between sessions
  • Build agents that can learn from past interactions
  • Deploy agents as part of your OpenAI-powered applications

Installation

Install required packages

npm install @recallnet/agent-toolkit openai

Set up environment variables

Create a .env file in your project root:

RECALL_PRIVATE_KEY=your_recall_private_key
OPENAI_API_KEY=your_openai_api_key

Load environment variables in your code

import "dotenv/config";

Basic integration

The simplest way to integrate Recall with OpenAI is using the RecallAgentToolkit for OpenAI:

import { RecallAgentToolkit } from "@recallnet/agent-toolkit/openai";
import OpenAI from "openai";
 
async function main() {
  // Initialize OpenAI client
  const openai = new OpenAI({
    apiKey: process.env.OPENAI_API_KEY,
  });
 
  // Create the Recall toolkit with OpenAI integration
  const toolkit = new RecallAgentToolkit({
    privateKey: process.env.RECALL_PRIVATE_KEY!,
    configuration: {
      actions: {
        account: { read: true },
        bucket: { read: true, write: true },
      },
    },
    client: openai, // Pass the OpenAI client
  });
 
  // Get OpenAI functions from the toolkit
  const functions = toolkit.getFunctions();
 
  // Create a conversation with OpenAI
  const messages = [
    {
      role: "system",
      content:
        "You are a helpful assistant with access to Recall storage. Use this to remember important information.",
    },
    {
      role: "user",
      content: "Create a bucket called 'notes' and store a note about OpenAI integration",
    },
  ];
 
  // Call OpenAI with function calling enabled
  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages,
    functions,
    function_call: "auto",
  });
 
  const responseMessage = completion.choices[0].message;
  console.log(JSON.stringify(responseMessage, null, 2));
 
  // Handle function calls
  if (responseMessage.function_call) {
    const functionName = responseMessage.function_call.name;
    const functionArgs = JSON.parse(responseMessage.function_call.arguments);
 
    // Execute the function
    const result = await toolkit.run(functionName, functionArgs);
    console.log(`Function result:`, result);
 
    // Add the function result to the conversation
    messages.push(responseMessage);
    messages.push({
      role: "function",
      name: functionName,
      content: JSON.stringify(result),
    });
 
    // Get final response
    const finalResponse = await openai.chat.completions.create({
      model: "gpt-4o",
      messages,
    });
 
    console.log("Final response:", finalResponse.choices[0].message.content);
  }
}
 
main().catch(console.error);

This basic example demonstrates how to set up the OpenAI integration and enable function calling for Recall operations.
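
In practice the model may need several Recall operations (for example, creating a bucket and then adding an object) before it can answer, while the example above resolves only a single function call. The sketch below is one way to keep resolving calls in a loop; the runWithFunctions name and the maxIterations guard are illustrative additions, not part of the toolkit, and it assumes the openai, toolkit, functions, and messages values from the example are in scope.

// Sketch: resolve function calls in a loop until the model returns a plain answer.
// Assumes `openai`, `toolkit`, `functions`, and `messages` from the example above
// are in scope (e.g., define this helper inside main()).
async function runWithFunctions(maxIterations = 5): Promise<string> {
  for (let i = 0; i < maxIterations; i++) {
    const completion = await openai.chat.completions.create({
      model: "gpt-4o",
      messages,
      functions,
      function_call: "auto",
    });

    const message = completion.choices[0].message;
    messages.push(message);

    // No function call means the model has produced its final answer
    if (!message.function_call) {
      return message.content ?? "No response";
    }

    // Execute the requested Recall operation and feed the result back
    const result = await toolkit.run(
      message.function_call.name,
      JSON.parse(message.function_call.arguments)
    );

    messages.push({
      role: "function",
      name: message.function_call.name,
      content: JSON.stringify(result),
    });
  }

  return "Stopped after reaching the function call limit";
}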

Conversation management

For multi-turn conversations with memory, you'll need to maintain the conversation state:

import { RecallAgentToolkit } from "@recallnet/agent-toolkit/openai";
import OpenAI from "openai";
 
class RecallOpenAIAgent {
  private openai: OpenAI;
  private toolkit: RecallAgentToolkit;
  private functions: any[];
  private messages: any[] = [];
  private bucketId: string | null = null;
 
  constructor(openaiApiKey: string, recallPrivateKey: string) {
    // Initialize OpenAI client
    this.openai = new OpenAI({
      apiKey: openaiApiKey,
    });
 
    // Create the Recall toolkit
    this.toolkit = new RecallAgentToolkit({
      privateKey: recallPrivateKey,
      configuration: {
        actions: {
          account: { read: true },
          bucket: { read: true, write: true },
        },
      },
      client: this.openai,
    });
 
    // Get functions
    this.functions = this.toolkit.getFunctions();
 
    // Initialize with system message
    this.messages = [
      {
        role: "system",
        content:
          "You are a helpful assistant with access to Recall storage. " +
          "Use storage to remember important information between conversations. " +
          "Always use structured formats when storing data for easy retrieval.",
      },
    ];
  }
 
  async initialize() {
    // Create or get a conversation history bucket
    const bucketResult = await this.toolkit.run("get_or_create_bucket", {
      bucketAlias: "conversation-history",
    });
 
    this.bucketId = bucketResult.bucket;
 
    // Load conversation history if it exists
    try {
      const historyResult = await this.toolkit.run("get_object", {
        bucket: this.bucketId,
        key: "messages",
      });
 
      // Parse and add previous messages
      const previousMessages = JSON.parse(historyResult.value);
      this.messages = [...this.messages, ...previousMessages];
 
      console.log(`Loaded ${previousMessages.length} previous messages`);
    } catch (error) {
      console.log("No previous conversation history found");
    }
 
    return this;
  }
 
  async chat(userMessage: string): Promise<string> {
    // Add user message to conversation
    this.messages.push({ role: "user", content: userMessage });
 
    // Call OpenAI API with function calling
    const completion = await this.openai.chat.completions.create({
      model: "gpt-4o",
      messages: this.messages,
      functions: this.functions,
      function_call: "auto",
    });
 
    const responseMessage = completion.choices[0].message;
    this.messages.push(responseMessage);
 
    // Handle function calls if necessary
    if (responseMessage.function_call) {
      const functionName = responseMessage.function_call.name;
      const functionArgs = JSON.parse(responseMessage.function_call.arguments);
 
      console.log(`Executing function: ${functionName}`);
 
      // Execute the function
      const result = await this.toolkit.run(functionName, functionArgs);
 
      // Add the function result to the conversation
      this.messages.push({
        role: "function",
        name: functionName,
        content: JSON.stringify(result),
      });
 
      // Get final response
      const finalResponse = await this.openai.chat.completions.create({
        model: "gpt-4",
        messages: this.messages,
      });
 
      this.messages.push(finalResponse.choices[0].message);
 
      // Save conversation history
      await this.saveHistory();
 
      return finalResponse.choices[0].message.content ?? "No response";
    } else {
      // Save conversation history
      await this.saveHistory();
 
      return responseMessage.content ?? "No response";
    }
  }
 
  private async saveHistory() {
    // Drop the system message, then keep only the last 20 messages to avoid token limits
    const messagesToSave = this.messages.slice(1).slice(-20);
 
    // Save to Recall
    await this.toolkit.run("add_object", {
      bucket: this.bucketId!,
      key: "messages",
      data: JSON.stringify(messagesToSave),
      metadata: {
        lastUpdated: new Date().toISOString(),
      },
      overwrite: true,
    });
  }
}
 
async function main() {
  // Create and initialize the agent
  const agent = await new RecallOpenAIAgent(
    process.env.OPENAI_API_KEY!,
    process.env.RECALL_PRIVATE_KEY!
  ).initialize();
 
  // First interaction
  const response1 = await agent.chat("Please remember that my favorite color is green");
  console.log("Response 1:", response1);
 
  // Second interaction
  const response2 = await agent.chat("What's my favorite color?");
  console.log("Response 2:", response2);
 
  // Third interaction - store something more complex
  const response3 = await agent.chat(
    "Please create a bucket called 'user-preferences' and store my preferences: " +
      "dark mode: yes, notifications: disabled, language: English"
  );
  console.log("Response 3:", response3);
}
 
main().catch(console.error);
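
Because the history lives in Recall rather than in process memory, a later run of the same script can pick the conversation back up. A minimal sketch, assuming an earlier run has already stored the favorite color:

// Sketch: in a separate process (or a later run), the agent reloads its saved history.
// Run this inside an async function, e.g. a later main().
const returningAgent = await new RecallOpenAIAgent(
  process.env.OPENAI_API_KEY!,
  process.env.RECALL_PRIVATE_KEY!
).initialize();

const recalled = await returningAgent.chat("What did I tell you my favorite color was?");
console.log("Recalled:", recalled);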

OpenAI Assistants API integration

You can also integrate Recall with OpenAI's Assistants API:

import { RecallAgentToolkit } from "@recallnet/agent-toolkit/openai";
import OpenAI from "openai";
 
async function main() {
  // Initialize OpenAI client
  const openai = new OpenAI({
    apiKey: process.env.OPENAI_API_KEY,
  });
 
  // Create the Recall toolkit
  const toolkit = new RecallAgentToolkit({
    privateKey: process.env.RECALL_PRIVATE_KEY!,
    configuration: {
      actions: {
        account: { read: true },
        bucket: { read: true, write: true },
      },
    },
    client: openai,
  });
 
  // Get OpenAI function specifications and wrap them in the Assistants API tool
  // format; this assumes getFunctions() returns plain function specs, so verify
  // the shape against your toolkit version
  const recallFunctions = toolkit.getFunctions();
  const recallTools = recallFunctions.map((fn) => ({
    type: "function" as const,
    function: fn,
  }));
 
  // Create an assistant with Recall tools
  const assistant = await openai.beta.assistants.create({
    name: "Recall-Powered Assistant",
    description: "An assistant that can store and retrieve information in Recall",
    model: "gpt-4-turbo",
    instructions:
      "You are a helpful assistant with access to Recall storage. " +
      "Use Recall to store important information that needs to be remembered long-term. " +
      "Use structured formats when storing data and maintain consistent bucket organization.",
    tools: recallTools,
  });
 
  console.log(`Created assistant: ${assistant.id}`);
 
  // Create a thread
  const thread = await openai.beta.threads.create();
  console.log(`Created thread: ${thread.id}`);
 
  // Add a message to the thread
  await openai.beta.threads.messages.create(thread.id, {
    role: "user",
    content: "Create a bucket called 'meeting-notes' and store notes about our discussion today",
  });
 
  // Run the assistant on the thread
  const run = await openai.beta.threads.runs.create(thread.id, {
    assistant_id: assistant.id,
    instructions: "Please help the user with storing information in Recall",
  });
 
  console.log(`Created run: ${run.id}`);
 
  // Poll for run completion
  let completedRun = await waitForRunCompletion(openai, thread.id, run.id);
 
  // Handle tool outputs if needed
  if (completedRun.status === "requires_action") {
    console.log("Run requires action. Processing tool calls...");
 
    const toolCalls = completedRun.required_action?.submit_tool_outputs.tool_calls || [];
    const toolOutputs = [];
 
    for (const toolCall of toolCalls) {
      const functionName = toolCall.function.name;
      const functionArgs = JSON.parse(toolCall.function.arguments);
 
      console.log(`Executing: ${functionName}`);
 
      // Execute the function using the toolkit
      const result = await toolkit.run(functionName, functionArgs);
 
      toolOutputs.push({
        tool_call_id: toolCall.id,
        output: JSON.stringify(result),
      });
    }
 
    // Submit tool outputs
    if (toolOutputs.length > 0) {
      await openai.beta.threads.runs.submitToolOutputs(thread.id, run.id, {
        tool_outputs: toolOutputs,
      });
 
      // Wait for final completion
      completedRun = await waitForRunCompletion(openai, thread.id, run.id);
    }
  }
 
  // Retrieve messages
  const messages = await openai.beta.threads.messages.list(thread.id);
 
  // Display the assistant's response
  for (const message of messages.data) {
    if (message.role === "assistant") {
      // Content blocks are a union type, so only read text blocks
      for (const block of message.content) {
        if (block.type === "text") {
          console.log("Assistant:", block.text.value);
        }
      }
    }
  }
}
 
async function waitForRunCompletion(openai: OpenAI, threadId: string, runId: string) {
  while (true) {
    const run = await openai.beta.threads.runs.retrieve(threadId, runId);
 
    if (run.status === "completed" || run.status === "failed" || run.status === "requires_action") {
      return run;
    }
 
    console.log(`Run is ${run.status}. Waiting...`);
    await new Promise((resolve) => setTimeout(resolve, 1000));
  }
}
 
main().catch(console.error);

Advanced usage

Streaming responses

You can use streaming responses with Recall for more responsive applications:

import { RecallAgentToolkit } from "@recallnet/agent-toolkit/openai";
import OpenAI from "openai";
 
async function main() {
  // Initialize OpenAI client
  const openai = new OpenAI({
    apiKey: process.env.OPENAI_API_KEY,
  });
 
  // Create the Recall toolkit
  const toolkit = new RecallAgentToolkit({
    privateKey: process.env.RECALL_PRIVATE_KEY!,
    configuration: {
      actions: {
        bucket: { read: true, write: true },
      },
    },
    client: openai,
  });
 
  // Get functions
  const functions = toolkit.getFunctions();
 
  const messages = [
    { role: "system", content: "You are a helpful assistant with Recall storage" },
    {
      role: "user",
      content: "What's the weather like today and could you store that in my preferences?",
    },
  ];
 
  // Create a streaming completion
  const stream = await openai.chat.completions.create({
    model: "gpt-4",
    messages,
    functions,
    function_call: "auto",
    stream: true,
  });
 
  // Buffer for the full response
  let fullContent = "";
  let functionCall: { name: string; arguments: string } | null = null;
 
  for await (const chunk of stream) {
    // Check for function calls
    if (chunk.choices[0]?.delta?.function_call) {
      const deltaFunctionCall = chunk.choices[0].delta.function_call;
 
      if (!functionCall) {
        functionCall = {
          name: deltaFunctionCall.name || "",
          arguments: deltaFunctionCall.arguments || "",
        };
      } else {
        functionCall.name += deltaFunctionCall.name || "";
        functionCall.arguments += deltaFunctionCall.arguments || "";
      }
    }
 
    // Accumulate content
    if (chunk.choices[0]?.delta?.content) {
      process.stdout.write(chunk.choices[0].delta.content);
      fullContent += chunk.choices[0].delta.content;
    }
  }
  console.log("\n");
 
  // Execute function if there was a function call
  if (functionCall && functionCall.name) {
    console.log(`\nExecuting function: ${functionCall.name}`);
    try {
      const args = JSON.parse(functionCall.arguments);
      const result = await toolkit.run(functionCall.name, args);
      console.log("Function result:", result);
 
      // Get final response after function execution
      messages.push({
        role: "assistant",
        function_call: functionCall,
        content: null,
      });
 
      messages.push({
        role: "function",
        name: functionCall.name,
        content: JSON.stringify(result),
      });
 
      const finalResponse = await openai.chat.completions.create({
        model: "gpt-4",
        messages,
      });
 
      console.log("Final response:", finalResponse.choices[0].message.content);
    } catch (error) {
      console.error("Error executing function:", error);
    }
  }
}
 
main().catch(console.error);

Function calling patterns

Here are some effective patterns for controlling function calling behavior:

// To force a specific function call:
const completion = await openai.chat.completions.create({
  model: "gpt-4",
  messages,
  functions,
  function_call: { name: "get_or_create_bucket" },
});
 
// To disable function calls:
const completion = await openai.chat.completions.create({
  model: "gpt-4",
  messages,
  functions,
  function_call: "none",
});
 
// To allow the model to decide:
const completion = await openai.chat.completions.create({
  model: "gpt-4",
  messages,
  functions,
  function_call: "auto",
});
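
Newer OpenAI SDK releases favor the tools / tool_choice parameters over the legacy functions / function_call pair shown above. If your SDK version expects that format, a sketch like the following can adapt the toolkit's output; the wrapping assumes getFunctions() returns plain function specs ({ name, description, parameters }), so verify the shape against your toolkit version.

// Sketch: adapting the toolkit's function specs to the newer `tools` API.
// Assumes getFunctions() returns plain function specs; verify against your toolkit version.
const tools = toolkit.getFunctions().map((fn) => ({
  type: "function" as const,
  function: fn,
}));

const completion = await openai.chat.completions.create({
  model: "gpt-4o",
  messages,
  tools,
  tool_choice: "auto", // or "none", or { type: "function", function: { name: "get_or_create_bucket" } }
});

// With `tools`, the response carries an array of `tool_calls` instead of `function_call`
const responseMessage = completion.choices[0].message;
messages.push(responseMessage);

for (const toolCall of responseMessage.tool_calls ?? []) {
  const result = await toolkit.run(toolCall.function.name, JSON.parse(toolCall.function.arguments));
  messages.push({
    role: "tool",
    tool_call_id: toolCall.id,
    content: JSON.stringify(result),
  });
}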

Integration with Express

Here's how to build a simple API with Express that uses Recall and OpenAI:

import { RecallAgentToolkit } from "@recallnet/agent-toolkit/openai";
import bodyParser from "body-parser";
// Load environment variables
import "dotenv/config";
import express from "express";
import OpenAI from "openai";
 
const app = express();
app.use(bodyParser.json());
 
// Initialize OpenAI client
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});
 
// Create the Recall toolkit
const toolkit = new RecallAgentToolkit({
  privateKey: process.env.RECALL_PRIVATE_KEY!,
  configuration: {
    actions: {
      bucket: { read: true, write: true },
    },
  },
  client: openai,
});
 
// Get functions
const functions = toolkit.getFunctions();
 
// Create a map to store conversation history
const conversations = new Map();
 
app.post("/chat", async (req, res) => {
  try {
    const { message, conversationId = "default" } = req.body;
 
    if (!message) {
      return res.status(400).json({ error: "Message is required" });
    }
 
    // Get or create conversation history
    if (!conversations.has(conversationId)) {
      conversations.set(conversationId, [
        {
          role: "system",
          content: "You are a helpful assistant with access to Recall storage.",
        },
      ]);
 
      // Create a bucket for this conversation
      const bucketResult = await toolkit.run("get_or_create_bucket", {
        bucketAlias: `conversation-${conversationId}`,
      });
 
      console.log(`Created bucket for conversation ${conversationId}: ${bucketResult.bucket}`);
    }
 
    const messages = conversations.get(conversationId);
 
    // Add user message
    messages.push({ role: "user", content: message });
 
    // Call OpenAI API
    const completion = await openai.chat.completions.create({
      model: "gpt-4o",
      messages,
      functions,
      function_call: "auto",
    });
 
    const responseMessage = completion.choices[0].message;
    messages.push(responseMessage);
 
    // Handle function calls
    if (responseMessage.function_call) {
      const functionName = responseMessage.function_call.name;
      const functionArgs = JSON.parse(responseMessage.function_call.arguments);
 
      console.log(`Executing function: ${functionName}`);
 
      try {
        // Execute the function
        const result = await toolkit.run(functionName, functionArgs);
 
        // Add the function result to the conversation
        messages.push({
          role: "function",
          name: functionName,
          content: JSON.stringify(result),
        });
 
        // Get final response
        const finalResponse = await openai.chat.completions.create({
          model: "gpt-4",
          messages,
        });
 
        const finalMessage = finalResponse.choices[0].message;
        messages.push(finalMessage);
 
        return res.json({
          response: finalMessage.content,
          conversationId,
        });
      } catch (error) {
        console.error("Function execution error:", error);
        return res.status(500).json({
          error: "Function execution failed",
          details: error.message,
        });
      }
    } else {
      return res.json({
        response: responseMessage.content,
        conversationId,
      });
    }
  } catch (error) {
    console.error("API error:", error);
    return res.status(500).json({
      error: "Something went wrong",
      details: error.message,
    });
  }
});
 
const port = process.env.PORT || 3000;
app.listen(port, () => {
  console.log(`Server running on port ${port}`);
});
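
To exercise the endpoint, send a POST request with a message and an optional conversationId. A minimal client-side sketch follows; the URL and the conversationId value are illustrative, and the call should run inside an async function.

// Sketch: calling the /chat endpoint (assumes the server above is running on port 3000)
const response = await fetch("http://localhost:3000/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    message: "Remember that my favorite color is green",
    conversationId: "user-123", // illustrative conversation id
  }),
});

const data = await response.json();
console.log(data.response);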

Best practices

When integrating Recall with OpenAI, follow these best practices:

  1. Structured data storage: Always store data in a structured format (like JSON) to make retrieval and processing easier (see the sketch after this list)
  2. Optimized prompting: Be specific in your system instructions about when and how to use Recall
  3. Error handling: Implement robust error handling for both API calls and function execution
  4. Conversation management: Keep track of messages and function calls in a way that maintains context
  5. Token optimization: Be mindful of token usage, especially with longer conversations
  6. Secure credentials: Never expose your private keys or API keys in client-side code
  7. Logging: Implement proper logging for API calls and tool usage to debug issues and track performance
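
For the structured data storage practice, the sketch below stores preferences as a single JSON object with metadata, using the same get_or_create_bucket and add_object calls shown earlier. The "user-preferences" alias and the preference fields are illustrative only.

// Sketch: storing preferences as structured JSON so they are easy to retrieve and parse later.
// Assumes a `toolkit` instance from the earlier examples; the alias and fields are illustrative.
const { bucket } = await toolkit.run("get_or_create_bucket", {
  bucketAlias: "user-preferences",
});

await toolkit.run("add_object", {
  bucket,
  key: "preferences",
  data: JSON.stringify({
    darkMode: true,
    notifications: false,
    language: "English",
  }),
  metadata: { lastUpdated: new Date().toISOString() },
  overwrite: true,
});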

Troubleshooting

Here are some common issues and their solutions:

Issue                       | Cause                                | Solution
"Function execution failed" | Invalid parameters or permissions    | Check function parameters and toolkit configuration
Timeout errors              | Network issues or slow operations    | Implement retries or longer timeouts (see the retry sketch below)
"Unknown function" errors   | Function not properly registered     | Ensure toolkit is initialized correctly
"Unable to parse arguments" | Malformed JSON in function arguments | Validate function arguments format
Empty responses             | Token limit exceeded                 | Reduce conversation history or use a more concise format
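
For transient timeouts, a simple retry wrapper is often enough. The sketch below is illustrative and not part of the toolkit; the helper name, retry count, and backoff values are arbitrary.

// Sketch: retrying a call with exponential backoff on transient failures.
// The helper name and retry settings are illustrative.
async function runWithRetry<T>(fn: () => Promise<T>, maxAttempts = 3): Promise<T> {
  let lastError: unknown;

  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      console.warn(`Attempt ${attempt} failed, retrying...`);
      if (attempt < maxAttempts) {
        // Back off: 1s, 2s, 4s, ...
        await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** (attempt - 1)));
      }
    }
  }

  throw lastError;
}

// Usage with a toolkit call from the earlier examples
// (assumes `toolkit` and `bucketId` are in scope)
const result = await runWithRetry(() =>
  toolkit.run("get_object", { bucket: bucketId, key: "messages" })
);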

Next steps