
LLM Integration

How to integrate molroo-api with OpenAI, Anthropic, and other LLM providers.


molroo-api computes emotional state -- your LLM generates the actual response. The API now returns prompt_data.formatted, a ready-to-use text block you can drop directly into your LLM's system prompt. No more manually constructing emotion descriptions.

Architecture Overview

User Message
    |
Your Backend
    |-- 1. POST /v1/turn  ->  molroo-api  ->  Emotion State + prompt_data
    |-- 2. Use prompt_data.formatted as system prompt
    |
LLM API (OpenAI / Anthropic)
    |
Emotionally aware Response
    |
User

Step 1: Create a Session

Start by creating a session with a preset or a custom persona. Presets give you a ready-made character out of the box:

curl

curl -X POST https://api.molroo.io/v1/session \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "preset": "companion",
    "identity": {
      "name": "Luna",
      "speaking_style": "warm and thoughtful, uses metaphors"
    }
  }'

JavaScript

const session = await fetch("https://api.molroo.io/v1/session", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${MOLROO_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    preset: "companion",
    identity: {
      name: "Luna",
      speaking_style: "warm and thoughtful, uses metaphors",
    },
  }),
}).then((r) => r.json());
 
const sessionId = session.id; // "session_abc123"
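The snippets in this guide assume each request succeeds. In production you may want a small wrapper that surfaces HTTP errors instead of silently parsing a failed response. A minimal sketch (the error format is an assumption, and the `fetchImpl` parameter exists only so the wrapper can be stubbed in tests):

```javascript
// Hypothetical helper: POST JSON to molroo-api and throw on non-2xx responses.
// fetchImpl defaults to the global fetch; inject a stub for testing.
async function postJson(url, apiKey, body, fetchImpl = fetch) {
  const res = await fetchImpl(url, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  });
  if (!res.ok) {
    // Surface the HTTP status; inspect the body for details if needed.
    throw new Error(`molroo-api request failed: HTTP ${res.status}`);
  }
  return res.json();
}
```

With this helper, both the session and turn calls shrink to one line each, e.g. `await postJson("https://api.molroo.io/v1/turn", MOLROO_API_KEY, { sessionId, message })`.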

Step 2: Process a Turn

Send the user's message to molroo-api. The simplified input requires only sessionId and message:

curl

curl -X POST https://api.molroo.io/v1/turn \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "sessionId": "session_abc123",
    "message": "I just got promoted at work!"
  }'

JavaScript

const turnResult = await fetch("https://api.molroo.io/v1/turn", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${MOLROO_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    sessionId: sessionId,
    message: userMessage,
  }),
}).then((r) => r.json());

The response includes prompt_data.formatted -- a complete, pre-built system prompt block:

{
  "appraisal": { ... },
  "new_emotion": { "V": 0.72, "A": 0.58, "D": 0.31 },
  "discrete_emotion": "joy",
  "emotion_intensity": 0.68,
  "prompt_data": {
    "formatted": "You are Luna, a warm and thoughtful companion...\n\n## Current Emotional State\n- Emotion: joy (intensity: 0.68)\n- Feeling genuinely happy and engaged...\n\n## Response Guidelines\n- Tone: warm, upbeat\n- Engagement: high\n- Let your happiness come through naturally...",
    "emotion": { "V": 0.72, "A": 0.58, "D": 0.31 },
    "discrete_emotion": "joy",
    "identity": { ... },
    "response_params": { ... }
  }
}

Step 3: Call the LLM

Use prompt_data.formatted directly as your system prompt. No manual prompt construction needed.

OpenAI

import OpenAI from "openai";
 
const openai = new OpenAI({ apiKey: OPENAI_API_KEY });
 
const completion = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: turnResult.prompt_data.formatted },
    { role: "user", content: userMessage },
  ],
});
 
const reply = completion.choices[0].message.content;

Anthropic

import Anthropic from "@anthropic-ai/sdk";
 
const anthropic = new Anthropic({ apiKey: ANTHROPIC_API_KEY });
 
const message = await anthropic.messages.create({
  model: "claude-sonnet-4-5-20250929",
  max_tokens: 1024,
  system: turnResult.prompt_data.formatted,
  messages: [
    { role: "user", content: userMessage },
  ],
});
 
const reply = message.content[0].text;

Customizing the System Prompt

If you need to add your own instructions alongside the emotional context, prepend or append to prompt_data.formatted:

const systemPrompt = `${turnResult.prompt_data.formatted}
 
## Additional Instructions
- Keep responses under 3 sentences
- Always end with a question to keep the conversation going
- Never break character`;
 
const completion = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: systemPrompt },
    { role: "user", content: userMessage },
  ],
});

Complete Example

A full integration putting all steps together:

import OpenAI from "openai";
 
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const MOLROO_KEY = process.env.MOLROO_API_KEY;
 
async function chat(sessionId, userMessage) {
  // 1. Process the turn through molroo-api
  const turnResult = await fetch("https://api.molroo.io/v1/turn", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${MOLROO_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      sessionId,
      message: userMessage,
    }),
  }).then((r) => r.json());
 
  // 2. Use prompt_data.formatted directly as the system prompt
  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [
      { role: "system", content: turnResult.prompt_data.formatted },
      { role: "user", content: userMessage },
    ],
  });
 
  return {
    reply: completion.choices[0].message.content,
    emotion: turnResult.discrete_emotion,
    intensity: turnResult.emotion_intensity,
  };
}

With Anthropic

import Anthropic from "@anthropic-ai/sdk";
 
const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
const MOLROO_KEY = process.env.MOLROO_API_KEY;
 
async function chat(sessionId, userMessage) {
  const turnResult = await fetch("https://api.molroo.io/v1/turn", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${MOLROO_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      sessionId,
      message: userMessage,
    }),
  }).then((r) => r.json());
 
  const message = await anthropic.messages.create({
    model: "claude-sonnet-4-5-20250929",
    max_tokens: 1024,
    system: turnResult.prompt_data.formatted,
    messages: [
      { role: "user", content: userMessage },
    ],
  });
 
  return {
    reply: message.content[0].text,
    emotion: turnResult.discrete_emotion,
    intensity: turnResult.emotion_intensity,
  };
}

Using Raw prompt_data Fields

If you prefer to build your own system prompt, the individual fields are also available:

const { prompt_data } = turnResult;
 
// Access individual fields
prompt_data.emotion;          // { V: 0.72, A: 0.58, D: 0.31 }
prompt_data.discrete_emotion; // "joy"
prompt_data.identity;         // { name, core_values, speaking_style, role }
prompt_data.response_params;  // { tone, engagement, verbosity, ... }
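For example, a hand-rolled prompt builder from those fields might look like this. The section layout mirrors the formatted example above, but the exact wording and which fields you include are up to you:

```javascript
// Sketch of a custom system prompt assembled from raw prompt_data fields.
// Uses only the fields documented above (identity, emotion, response_params);
// the phrasing is illustrative.
function buildSystemPrompt(promptData) {
  const { identity, discrete_emotion, emotion, response_params } = promptData;
  return [
    `You are ${identity.name}. Speaking style: ${identity.speaking_style}.`,
    ``,
    `## Current Emotional State`,
    `- Emotion: ${discrete_emotion} ` +
      `(valence ${emotion.V}, arousal ${emotion.A}, dominance ${emotion.D})`,
    ``,
    `## Response Guidelines`,
    `- Tone: ${response_params.tone}`,
    `- Engagement: ${response_params.engagement}`,
  ].join("\n");
}
```

Pass the result as the system prompt exactly as you would prompt_data.formatted.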

Tips

  • Use prompt_data.formatted by default. It is designed to produce the best LLM behavior. Only build custom prompts if you have specific requirements.
  • Keep emotion injection subtle. Characters should not say "I feel joy at 0.72 intensity." The LLM should naturally express the emotion through tone and word choice.
  • Handle session lifecycle. Create a new session for each conversation. Sessions persist state across turns automatically.
  • Fetch full state sparingly. The turn endpoint returns enough data for most use cases. Only call GET /v1/state/{sessionId} when you need the full picture (body budget, interpersonal dynamics, etc.).
  • For real-time apps, consider WebSocket connections instead of REST polling.
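The state endpoint mentioned above is an authenticated GET. A minimal sketch (the response shape beyond what this guide documents is an assumption; `fetchImpl` is injectable only for testability):

```javascript
// Hypothetical helper: fetch the full session state from GET /v1/state/{sessionId}.
async function getState(sessionId, apiKey, fetchImpl = fetch) {
  const res = await fetchImpl(`https://api.molroo.io/v1/state/${sessionId}`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) {
    throw new Error(`state fetch failed: HTTP ${res.status}`);
  }
  return res.json();
}
```

Call it only when you need data the turn response does not include, such as body budget or interpersonal dynamics.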
