
prompt_data

LLM-ready structured prompts in every API response.

The single most important feature of molroo's API is the prompt_data object. Every response from POST /v1/turn includes a fully structured, LLM-ready set of prompts that you can inject directly into your model's message array -- no manual prompt engineering required.

Instead of parsing raw emotion numbers and crafting your own system prompts, molroo gives you pre-built text blocks that capture who the character is, how they feel right now, and how they should behave.

Structure Overview

The prompt_data object is organized into three injection points, plus a formatted section that provides pre-assembled text blocks:

{
  "prompt_data": {
    "system": {
      "identity": "Character name, role, personality summary",
      "personality_traits": "Big 5 trait descriptions",
      "goals": "Character's current goals"
    },
    "context": {
      "emotion": { "primary": "concerned", "intensity_level": "strong", "valence": -0.12 },
      "energy": { "level": "good", "budget_percentage": 82 },
      "stage": { "name": "Attentive", "behavioral_tendency": "cautious, slightly guarded" },
      "needs": { "autonomy": 0.6, "competence": 0.7, "relatedness": 0.5 },
      "interpersonal": { "trust": 0.68, "attachment_style": "secure" },
      "formatted": { "context_block": "..." }
    },
    "instruction": {
      "stage_instruction": "Show active interest in the other person.",
      "expression_guide": "Speak with measured concern...",
      "formatted": { "instruction_block": "..." }
    },
    "formatted": {
      "system_prompt": "Full system prompt string",
      "context_block": "Full context block string",
      "instruction_block": "Full instruction block string"
    }
  }
}

The Three Injection Points

1. System -- Who the Character Is

The system block defines the character's stable identity. It changes only when you update the persona configuration, not on every turn.

| Field | Description |
| --- | --- |
| identity | Name, role, and personality summary in natural language |
| personality_traits | Big Five trait descriptions relevant to behavior |
| goals | The character's current objectives and motivations |

Use this as the foundation of your LLM system prompt. It provides the character baseline that remains consistent across the conversation.
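
If you prefer to assemble the system prompt yourself rather than use the pre-built string, a minimal sketch might look like this (the field shapes are assumed from the example response above):

```typescript
// Minimal sketch: assemble a system prompt from the structured fields.
// The SystemBlock shape is assumed from the example response above.
type SystemBlock = {
  identity: string;
  personality_traits: string;
  goals: string;
};

function buildSystemPrompt(system: SystemBlock): string {
  return [
    system.identity,
    `Personality: ${system.personality_traits}`,
    `Goals: ${system.goals}`,
  ].join("\n\n");
}
```

In practice the pre-assembled formatted.system_prompt covers the same ground; building it yourself is only worthwhile if you want to reorder or omit fields.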

2. Context -- How They Feel Right Now

The context block is the dynamic emotional snapshot. It updates on every turn, reflecting the character's current psychological state.

| Field | Description |
| --- | --- |
| emotion | Primary emotion label, intensity level, and valence |
| energy | Body budget level and percentage remaining |
| stage | Current soul stage and its behavioral tendency |
| needs | Self-determination theory needs (autonomy, competence, relatedness) |
| interpersonal | Trust level and attachment style with the user |
| formatted | Pre-assembled context block as a single text string |

This is where the emotional richness comes from. The context block tells the LLM not just "the character is sad" but provides the full psychological picture -- their energy level, their relationship with the user, their unmet needs, and the behavioral tendencies of their current stage.
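
If you only need part of that picture, you can render a compact summary from the structured fields yourself. A sketch, assuming the field shapes shown in the example response above:

```typescript
// Sketch: turn selected context fields into a one-line summary.
// ContextBlock is a partial, assumed shape, not an official SDK type.
type ContextBlock = {
  emotion: { primary: string; intensity_level: string; valence: number };
  energy: { level: string; budget_percentage: number };
  interpersonal: { trust: number; attachment_style: string };
};

function summarizeContext(ctx: ContextBlock): string {
  return (
    `Feeling ${ctx.emotion.primary} (${ctx.emotion.intensity_level}), ` +
    `energy ${ctx.energy.level} at ${ctx.energy.budget_percentage}%, ` +
    `trust ${ctx.interpersonal.trust.toFixed(2)} (${ctx.interpersonal.attachment_style}).`
  );
}
```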

3. Instruction -- How to Behave

The instruction block provides stage-appropriate behavioral guidance for the LLM.

| Field | Description |
| --- | --- |
| stage_instruction | Behavioral directive based on the current soul stage |
| expression_guide | Tone and expression guidance for this emotional state |
| formatted | Pre-assembled instruction block as a single text string |

This block bridges the gap between raw emotional data and actual LLM behavior. Rather than leaving it to the LLM to interpret what "valence -0.12 at soul stage 3" means, molroo translates that into concrete guidance like "Speak with measured concern, showing active interest while maintaining a slight guardedness."

The formatted Shortcut

Each section includes a formatted sub-object, and there is a top-level formatted object that combines everything. If you want the simplest possible integration, use the top-level formatted fields directly:

| Field | Contains |
| --- | --- |
| system_prompt | Complete system prompt with identity + personality + goals |
| context_block | Full emotional context as natural language text |
| instruction_block | Complete behavioral instructions |

These are ready to paste into your LLM message array with zero parsing.

Usage with OpenAI

The simplest integration uses the pre-formatted blocks:

import OpenAI from "openai";
 
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
 
// After calling POST /v1/turn and getting the result...
const { prompt_data } = turnResult;
 
const completion = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [
    {
      role: "system",
      content: [
        prompt_data.formatted.system_prompt,
        prompt_data.formatted.context_block,
        prompt_data.formatted.instruction_block,
      ].join("\n\n"),
    },
    { role: "user", content: userMessage },
  ],
});
 
const reply = completion.choices[0].message.content;

Usage with Anthropic

Anthropic's API takes the system prompt as a top-level parameter rather than a message, so the combined blocks go into system while the conversation stays in messages:

import Anthropic from "@anthropic-ai/sdk";
 
const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
 
const { prompt_data } = turnResult;
 
const message = await anthropic.messages.create({
  model: "claude-sonnet-4-5-20250929",
  max_tokens: 1024,
  system: [
    prompt_data.formatted.system_prompt,
    prompt_data.formatted.context_block,
    prompt_data.formatted.instruction_block,
  ].join("\n\n"),
  messages: [
    { role: "user", content: userMessage },
  ],
});
 
const reply = message.content[0].text;
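
Note that message.content is an array of blocks that can contain non-text entries (for example, tool_use blocks). A slightly more defensive extraction, sketched here with a minimal local type rather than the SDK's own types:

```typescript
// Sketch: concatenate all text blocks from an Anthropic-style content
// array. ContentBlock is a minimal local type, not the SDK's definition.
type ContentBlock = { type: string; text?: string };

function extractText(content: ContentBlock[]): string {
  return content
    .filter((block) => block.type === "text" && typeof block.text === "string")
    .map((block) => block.text)
    .join("");
}
```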

Advanced: Using Structured Fields

For more control, you can build your own prompt by selecting specific fields from the system, context, and instruction blocks:

const { prompt_data } = turnResult;
 
const systemPrompt = `
You are ${prompt_data.system.identity}
 
## Personality
${prompt_data.system.personality_traits}
 
## Current State
You are feeling ${prompt_data.context.emotion.primary} (${prompt_data.context.emotion.intensity_level}).
Energy: ${prompt_data.context.energy.level} (${prompt_data.context.energy.budget_percentage}% remaining).
Trust toward the user: ${prompt_data.context.interpersonal.trust.toFixed(2)}.
 
## Behavioral Guidance
${prompt_data.instruction.stage_instruction}
${prompt_data.instruction.expression_guide}
`.trim();

This approach is useful when you want to combine molroo's emotional state with your own application-specific instructions, or when you need to selectively include or omit certain dimensions.
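
If you work in TypeScript, a local type for the shape can catch field typos at compile time. The interface below is derived from the example response earlier on this page and is illustrative, not an official SDK type:

```typescript
// Illustrative local type for prompt_data, derived from the example
// response on this page; not an official SDK definition.
interface PromptData {
  system: { identity: string; personality_traits: string; goals: string };
  context: {
    emotion: { primary: string; intensity_level: string; valence: number };
    energy: { level: string; budget_percentage: number };
    stage: { name: string; behavioral_tendency: string };
    needs: { autonomy: number; competence: number; relatedness: number };
    interpersonal: { trust: number; attachment_style: string };
    formatted: { context_block: string };
  };
  instruction: {
    stage_instruction: string;
    expression_guide: string;
    formatted: { instruction_block: string };
  };
  formatted: {
    system_prompt: string;
    context_block: string;
    instruction_block: string;
  };
}
```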

Why This Matters

Without prompt_data, integrating an emotion engine into an LLM pipeline requires you to:

  1. Parse numeric VAD vectors and translate them into natural language
  2. Map soul stages to behavioral instructions
  3. Decide how to express trust levels, energy states, and need deficits
  4. Keep all of this consistent across turns

prompt_data handles all of this for you. The emotion engine does the psychology; your LLM just needs to follow the instructions.
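
To make step 1 concrete: even a toy hand-rolled mapping from valence to a label forces decisions you would rather not own. The thresholds below are arbitrary illustrations, not molroo's actual mapping:

```typescript
// Toy sketch of manual valence-to-language translation. The thresholds
// are arbitrary illustrations, not molroo's actual mapping.
function valenceToLabel(valence: number): string {
  if (valence <= -0.5) return "strongly negative";
  if (valence < -0.05) return "mildly negative";
  if (valence <= 0.05) return "neutral";
  if (valence < 0.5) return "mildly positive";
  return "strongly positive";
}
```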

For the raw emotional state data (VAD values, appraisal vectors, stage transitions), see the Turn API reference. For a full walkthrough of the LLM integration pipeline, see the Integration guide.
