Autonomous AI agents are self-directed AI entities that run continuously in the background, thinking independently and taking actions without user intervention. Unlike traditional chatbots that only respond when prompted, these agents have their own goals and memories, and can initiate actions on their own schedule.
Flow: Agent Start → Think (AI generates thought) → Detect Actions → Execute Actions → Pause if needed → Resume → Think again → Loop forever (or until stopped)
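A minimal sketch of this cycle, assuming hypothetical helpers generateThought, detectActions, and executeAction (none of these names come from this document):

```typescript
// Hypothetical helper signatures; the platform's internals are not shown in this document.
declare function generateThought(agentId: string): Promise<string>;
declare function detectActions(thought: string): string[];
declare function executeAction(agentId: string, action: string): Promise<void>;

// Sketch of the autonomous cycle: Think -> Detect Actions -> Execute -> wait -> Think again.
async function runAgentLoop(
  agentId: string,
  intervalMs: number,
  isStopped: () => boolean,
): Promise<void> {
  while (!isStopped()) {
    const thought = await generateThought(agentId);
    const actions = detectActions(thought);
    for (const action of actions) {
      await executeAction(agentId, action); // may pause the agent (e.g. while a call is in progress)
    }
    // wait for the configured thinking interval before the next cycle
    await new Promise<void>((resolve) => setTimeout(resolve, intervalMs));
  }
}
```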
Cycle Modes:
State Management: In-Memory State (runningAgents Map) tracks isPaused, pauseReason, userStopped, intervalId. Database State (agent.metadata) stores autoRestart flag. Critical Design: Runtime state (isPaused) is NEVER written to database to prevent state desync.
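A sketch of how the runtime/persisted split could look, using the fields named above; the RunningAgentState and AgentMetadata type names are assumptions for illustration:

```typescript
// Runtime-only state, kept in memory and never written to the database
// (per the design note above) to avoid state desync between process and DB.
interface RunningAgentState {
  isPaused: boolean;
  pauseReason?: string;
  userStopped: boolean;
  intervalId: NodeJS.Timeout;
}

// Keyed by agent id.
const runningAgents = new Map<string, RunningAgentState>();

// Persisted state: only the autoRestart flag lives in agent.metadata.
interface AgentMetadata {
  autoRestart: boolean;
}
```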
Communication: CALL_PHONE (make real phone calls via Vapi), SEND_EMAIL (send emails via AWS SES), SEND_SMS (text messages via Twilio), SEND_WHATSAPP (WhatsApp via Twilio)
Information: SEARCH_WEB (internet search via Tavily API), SEARCH_EMAIL, SEARCH_SMS, SEARCH_WHATSAPP
User Interaction: ASK_USER (request input from creator), SHOW_USER (display information)
Data Management: ADD_TO_CONTACT_BOOK, SEARCH_CONTACT_BOOK, ADD_TO_CALENDAR, SEARCH_CALENDAR
Self-Management: SET_GOAL, COMPLETE_GOAL, ADD_MEMORY, SEARCH_MEMORY
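One plausible way to detect these action tags in a generated thought. The "ACTION:" line format below is an assumption; the document does not specify how actions are encoded in the model output:

```typescript
// Hypothetical action format: a line such as "ACTION: SEND_EMAIL ..." inside the thought.
const KNOWN_ACTIONS = [
  "CALL_PHONE", "SEND_EMAIL", "SEND_SMS", "SEND_WHATSAPP",
  "SEARCH_WEB", "SEARCH_EMAIL", "SEARCH_SMS", "SEARCH_WHATSAPP",
  "ASK_USER", "SHOW_USER",
  "ADD_TO_CONTACT_BOOK", "SEARCH_CONTACT_BOOK", "ADD_TO_CALENDAR", "SEARCH_CALENDAR",
  "SET_GOAL", "COMPLETE_GOAL", "ADD_MEMORY", "SEARCH_MEMORY",
] as const;

type ActionType = (typeof KNOWN_ACTIONS)[number];

interface DetectedAction {
  type: ActionType;
  payload: string;
}

// Scan each line of the thought for a recognized action tag and capture its payload.
function detectActions(thought: string): DetectedAction[] {
  const detected: DetectedAction[] = [];
  for (const line of thought.split("\n")) {
    const match = line.match(/^ACTION:\s*([A-Z_]+)\s*(.*)$/);
    if (match && (KNOWN_ACTIONS as readonly string[]).includes(match[1])) {
      detected.push({ type: match[1] as ActionType, payload: match[2] });
    }
  }
  return detected;
}
```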
Agent outputs CALL_PHONE → Backend calls Vapi API with phone number, agent voice (shimmer/echo/alloy), personality (truncated to 300 chars), call purpose, recent memories (last 3, 200 chars each) → Vapi initiates call using Twilio → Vapi uses OpenAI Realtime API for voice conversation → When call ends, Vapi webhook notifies backend → Transcript added to agent memory → Agent unpaused and continues thinking. Voice Quality Optimization: System prompt under 800 characters (long prompts cause robotic voice), only last 3 memories included, personality truncated to 300 chars. Cost: ~$0.05-0.15 per minute.
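A sketch of the outbound-call step. The Vapi endpoint path and payload shape below follow Vapi's public API conventions but are not taken from this document, and the env var names and model id are assumptions; the truncation limits mirror the voice-quality notes above:

```typescript
interface CallRequest {
  phoneNumber: string;
  voice: "shimmer" | "echo" | "alloy";
  personality: string;      // truncated to 300 chars before sending
  purpose: string;
  recentMemories: string[]; // last 3 memories, each capped at 200 chars
}

async function initiateCall(req: CallRequest): Promise<void> {
  // Keep the system prompt under 800 characters: long prompts reportedly cause a robotic voice.
  const systemPrompt = [
    req.personality.slice(0, 300),
    `Call purpose: ${req.purpose}`,
    ...req.recentMemories.slice(-3).map((m) => m.slice(0, 200)),
  ].join("\n").slice(0, 800);

  // Request shape is an assumption; consult Vapi's docs for the authoritative schema.
  await fetch("https://api.vapi.ai/call", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.VAPI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      phoneNumberId: process.env.VAPI_PHONE_NUMBER_ID, // Twilio-backed number registered with Vapi (assumed)
      customer: { number: req.phoneNumber },
      assistant: {
        // Per the note above, Vapi uses the OpenAI Realtime API; the exact model id is an assumption.
        model: {
          provider: "openai",
          model: "gpt-4o-realtime-preview",
          messages: [{ role: "system", content: systemPrompt }],
        },
        voice: { provider: "openai", voiceId: req.voice },
      },
    }),
  });
}
```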
Sending: Agent outputs SEND_EMAIL → AI generates email structure (Recipient, Subject, Body) → Backend parses and sends via AWS SES → Email stored in agent memory → Delivery confirmation tracked.
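A sketch of the SES send step using the AWS SDK v3 SendEmailCommand; the region and function name are assumptions:

```typescript
import { SESClient, SendEmailCommand } from "@aws-sdk/client-ses";

const ses = new SESClient({ region: "us-east-1" }); // region is an assumption

// Send the parsed (Recipient, Subject, Body) structure via AWS SES.
async function sendAgentEmail(
  from: string, // e.g. the agent's address on rfab.ai
  to: string,
  subject: string,
  body: string,
): Promise<void> {
  await ses.send(
    new SendEmailCommand({
      Source: from,
      Destination: { ToAddresses: [to] },
      Message: {
        Subject: { Data: subject },
        Body: { Text: { Data: body } },
      },
    }),
  );
}
```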
Receiving: Email sent to agent@rfab.ai → AWS SES receives → SES stores in S3 bucket (rfab-agent-emails) → SES triggers Lambda function (rfab-email-forwarder) → Lambda fetches from S3 → Lambda sends to backend webhook → Backend finds agent by email address → Email added to agent memory → Agent thinks about email on next cycle. Architecture: Domain rfab.ai verified in AWS SES, MX record points to AWS SES, S3 bucket for storage, Lambda for processing, DKIM enabled.
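A sketch of what the rfab-email-forwarder Lambda could look like: read the raw message that the SES receipt rule wrote to S3, then forward it to the backend webhook. The webhook URL env var and the "S3 object key equals the SES messageId" convention are assumptions:

```typescript
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import type { SESEvent } from "aws-lambda";

const s3 = new S3Client({});
const BUCKET = "rfab-agent-emails";
const WEBHOOK_URL = process.env.BACKEND_EMAIL_WEBHOOK_URL!; // hypothetical env var

export const handler = async (event: SESEvent): Promise<void> => {
  for (const record of event.Records) {
    const messageId = record.ses.mail.messageId;
    const recipient = record.ses.receipt.recipients[0]; // e.g. an @rfab.ai agent address

    // Fetch the raw email that SES stored in the bucket.
    const object = await s3.send(
      new GetObjectCommand({ Bucket: BUCKET, Key: messageId }),
    );
    const rawEmail = await object.Body!.transformToString();

    // The backend looks up the agent by its email address and appends the email to memory.
    await fetch(WEBHOOK_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ recipient, messageId, rawEmail }),
    });
  }
};
```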
Types: thought, action_result, user_message, call_initiated, call_completed, email_sent, email_received, sms_sent, sms_received, web_search_result, goal_set, goal_completed
Storage: PostgreSQL agent_events table, indexed by agentId, each memory has type, content, timestamp, metadata. Last 50 memories loaded for each thinking cycle, searchable with SEARCH_MEMORY action.
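A sketch of loading the last 50 memories for a thinking cycle, assuming a node-postgres Pool; the exact column names beyond those listed above are assumptions:

```typescript
import { Pool } from "pg";

type MemoryType =
  | "thought" | "action_result" | "user_message"
  | "call_initiated" | "call_completed"
  | "email_sent" | "email_received" | "sms_sent" | "sms_received"
  | "web_search_result" | "goal_set" | "goal_completed";

interface AgentEvent {
  agentId: string;
  type: MemoryType;
  content: string;
  timestamp: Date;
  metadata: Record<string, unknown>;
}

const pool = new Pool();

// Load the most recent 50 memories for an agent (uses the agentId index).
async function loadRecentMemories(agentId: string): Promise<AgentEvent[]> {
  const { rows } = await pool.query<AgentEvent>(
    `SELECT "agentId", type, content, "timestamp", metadata
       FROM agent_events
      WHERE "agentId" = $1
      ORDER BY "timestamp" DESC
      LIMIT 50`,
    [agentId],
  );
  return rows;
}
```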
Cost Formula: (input_tokens + output_tokens) × model_rate. Low Balance: If balance < 4000 tokens: show warning and pause. If balance < 1000 tokens: force stop. Token Rates: Budget (0.4x-0.6x) Mistral 7B, Gemini Flash, Claude Haiku | Standard (0.8x-1.2x) GPT-4o, Claude Sonnet, Gemini Pro, Grok 2 | Premium (1.3x-1.5x) GPT-4 Turbo, Claude Opus | Elite (3.6x) O1-Preview | Free (0x) DeepSeek R1
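The cost formula and balance thresholds above translate directly into code; a minimal sketch with hypothetical function names, using the rates and limits stated in this section:

```typescript
// (input_tokens + output_tokens) × model_rate, rounded up to whole platform tokens.
function platformTokenCost(inputTokens: number, outputTokens: number, modelRate: number): number {
  return Math.ceil((inputTokens + outputTokens) * modelRate);
}

type BalanceAction = "ok" | "warn_and_pause" | "force_stop";

function checkBalance(balance: number): BalanceAction {
  if (balance < 1000) return "force_stop";     // below 1,000 tokens: force stop
  if (balance < 4000) return "warn_and_pause"; // below 4,000 tokens: show warning and pause
  return "ok";
}

// Example: 1,000 tokens on a 1.2x model (e.g. GPT-4o) costs 1,200 platform tokens.
platformTokenCost(700, 300, 1.2); // => 1200
```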
True autonomous operation (agents think without user prompts), Real phone call capabilities, Two-way email integration, 20+ action types, Persistent memory across sessions, Configurable thinking intervals, Real-time streaming thoughts, Multi-model support (15+ AI models). Character.AI, Replika, AI Dungeon, and NovelAI are limited to simple chat interfaces, whereas Reality Fabricator's agents can interact with the real world.
1. GPT-4o (gpt-4o): Multimodal (text + vision), Context: 128K tokens, Max Output: 16K tokens, Speed: Fast (2-4s), Quality: Excellent for general use, Token Rate: 1.2x, Best For: Balanced performance, general storytelling, conversations, Cost: 1000 tokens = 1200 platform tokens
2. GPT-4 Turbo (gpt-4-turbo): Text only, Context: 128K tokens, Max Output: 4K tokens, Speed: Medium (3-6s), Quality: Very high, detailed responses, Token Rate: 1.3x, Best For: Complex narratives, detailed descriptions, Cost: 1000 tokens = 1300 platform tokens
3. O1-Preview (o1-preview): Reasoning model, Context: 128K tokens, Max Output: 32K tokens, Speed: Slow (10-30s), Quality: Exceptional reasoning for complex logic, Token Rate: 3.6x, Best For: Complex puzzles, strategic planning, deep analysis, Cost: 1000 tokens = 3600 platform tokens
4. O1-Mini (o1-mini): Reasoning model (smaller), Context: 128K tokens, Max Output: 65K tokens, Speed: Medium (5-15s), Quality: Good reasoning at lower cost, Token Rate: 1.5x, Best For: Moderate complexity reasoning tasks, Cost: 1000 tokens = 1500 platform tokens
5. GPT-3.5 Turbo (gpt-3.5-turbo): Text only, Context: 16K tokens, Max Output: 4K tokens, Speed: Very fast (1-2s), Quality: Good for simple tasks, Token Rate: 0.5x, Best For: Quick responses, simple conversations, testing, Cost: 1000 tokens = 500 platform tokens
6. Claude 3.5 Sonnet (claude-3-5-sonnet-20241022): Text only, Context: 200K tokens, Max Output: 128K tokens (with beta header), Speed: Fast (2-4s), Quality: Excellent, very coherent, Token Rate: 1.2x, Best For: Long-form content, creative writing, analysis, Cost: 1000 tokens = 1200 platform tokens
7. Claude 3 Opus (claude-3-opus-20240229): Text only, Context: 200K tokens, Max Output: 128K tokens, Speed: Medium (4-8s), Quality: Highest quality Claude model, Token Rate: 1.5x, Best For: Complex creative writing, detailed world-building, Cost: 1000 tokens = 1500 platform tokens
8. Claude 3 Haiku (claude-3-haiku-20240307): Text only, Context: 200K tokens, Max Output: 4K tokens, Speed: Very fast (1-2s), Quality: Good for quick tasks, Token Rate: 0.5x, Best For: Fast responses, simple tasks, high-volume use, Cost: 1000 tokens = 500 platform tokens
9. Gemini 2.0 Flash (gemini-2.0-flash): Multimodal (text + vision), Context: 1M tokens, Max Output: 8K tokens, Speed: Very fast (1-3s), Quality: Excellent, latest model, Token Rate: 0.8x, Best For: Long context, fast responses, experimental features, Cost: 1000 tokens = 800 platform tokens
10. Gemini 1.5 Pro (gemini-1.5-pro): Multimodal (text + vision), Context: 2M tokens, Max Output: 8K tokens, Speed: Medium (3-6s), Quality: Very high, massive context, Token Rate: 1.1x, Best For: Extremely long stories, entire book context, Cost: 1000 tokens = 1100 platform tokens
11. Gemini 1.5 Flash (gemini-1.5-flash): Multimodal (text + vision), Context: 1M tokens, Max Output: 8K tokens, Speed: Very fast (1-2s), Quality: Good, fast, Token Rate: 0.6x, Best For: High-volume, cost-effective, long context, Cost: 1000 tokens = 600 platform tokens
12. Grok 2 (grok-2): Text only, Context: 128K tokens, Max Output: 131K tokens, Speed: Fast (2-4s), Quality: Excellent, conversational, Token Rate: 0.8x, Best For: Conversational AI, real-time knowledge, Cost: 1000 tokens = 800 platform tokens
13. Mistral Large (mistral-large-latest): Text only, Context: 128K tokens, Max Output: 128K tokens, Speed: Fast (2-4s), Quality: Very high, European model, Token Rate: 1.0x, Best For: Multilingual, balanced performance, Cost: 1000 tokens = 1000 platform tokens
14. Mistral 7B (mistral-7b): Text only (via OpenRouter), Context: 32K tokens, Max Output: 4K tokens, Speed: Very fast (1-2s), Quality: Good for simple tasks, Token Rate: 0.4x, Best For: Budget-friendly, high-volume, Cost: 1000 tokens = 400 platform tokens
15. DeepSeek R1 Distill 70B (deepseek-r1-distill-llama-70b): Reasoning model, Context: 64K tokens, Max Output: 8K tokens, Speed: Medium (3-6s), Quality: Good reasoning capabilities, Token Rate: 0x (COMPLETELY FREE), Best For: Free reasoning, testing, unlimited use, Cost: 1000 tokens = 0 platform tokens (FREE!)
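A sketch of how a few of the entries above could be expressed as a model catalogue; the ModelConfig field names are assumptions, and the numbers are copied from the list above:

```typescript
interface ModelConfig {
  id: string;
  contextTokens: number;
  maxOutputTokens: number;
  tokenRate: number; // platform-token multiplier
}

// A few representative entries from the catalogue above.
const MODELS: Record<string, ModelConfig> = {
  "gpt-4o":                        { id: "gpt-4o",                        contextTokens: 128_000,   maxOutputTokens: 16_000,  tokenRate: 1.2 },
  "claude-3-5-sonnet":             { id: "claude-3-5-sonnet-20241022",    contextTokens: 200_000,   maxOutputTokens: 128_000, tokenRate: 1.2 },
  "gemini-2.0-flash":              { id: "gemini-2.0-flash",              contextTokens: 1_000_000, maxOutputTokens: 8_000,   tokenRate: 0.8 },
  "deepseek-r1-distill-llama-70b": { id: "deepseek-r1-distill-llama-70b", contextTokens: 64_000,    maxOutputTokens: 8_000,   tokenRate: 0 },
};

// Combined with platformTokenCost above: 1,000 tokens on gemini-2.0-flash costs 800 platform tokens.
```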
For Speed: GPT-3.5 Turbo, Claude Haiku, Gemini Flash, DeepSeek R1 | For Quality: Claude Opus, GPT-4 Turbo, O1-Preview | For Balance: GPT-4o, Claude Sonnet, Gemini 2.0 Flash, Grok 2 | For Long Context: Gemini 1.5 Pro (2M tokens), Gemini 1.5 Flash (1M tokens) | For Reasoning: O1-Preview, O1-Mini, DeepSeek R1 | For Budget: Mistral 7B, Gemini Flash, DeepSeek R1 (free!) | For Free: DeepSeek R1 Distill 70B (0 tokens cost!)
Real AI model names (GPT-4, Claude 3.5 Sonnet, Gemini 2.0 Flash, Grok 2, etc.) - not hidden behind proprietary branding. Open pricing showing actual API costs + minimal markup. Open architecture with JSON configs and API access. Source code available for self-hosting.
Character.AI: 1 proprietary model, no autonomous agents, no phone/email, cloud-only storage, subscription pricing, limited languages, no creation tools, no self-hosting
Replika: 1 proprietary model, no autonomous agents, no phone/email, cloud-only storage, subscription required, limited languages, no creation tools, no self-hosting
AI Dungeon: 2-3 models, no autonomous agents, no phone/email, cloud-only storage, subscription tiers, limited languages, basic editor, no self-hosting
NovelAI: 2-3 models, no autonomous agents, no phone/email, cloud-only storage, subscription required, limited languages, basic editor, no self-hosting
Reality Fabricator: 15+ models by real names, autonomous agents with phone/email, encrypted local storage, transparent cost-based pricing, 6 languages, AI brainstormers, self-hosting available
New Users: 50,000 tokens upon signup (no credit card required)
Daily Free Tokens: 25,000 tokens every 24 hours for ALL users
Token Costs: Budget models 0.4x-0.6x (Mistral 7B, Gemini Flash, Claude Haiku), Standard models 0.8x-1.2x (GPT-4o, Claude Sonnet, Gemini Pro, Grok 2), Premium models 1.3x-1.5x (GPT-4 Turbo, Claude Opus), Elite models 3.6x (O1-Preview), Free models 0x (DeepSeek R1 - completely free unlimited use)
Image Generation: Budget 300-2000 tokens (Flux Schnell, Anime models), Standard 2500-3600 tokens (Flux Dev, Realistic Vision, Titan), Premium 4000-5000 tokens (Flux Pro, NSFW models), Elite 9000 tokens (Ideogram V3, Recraft V3)
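A small sketch of an image-cost lookup by tier, using the ranges listed above; per-model costs within a tier vary, so these values are representative only and the type names are assumptions:

```typescript
type ImageTier = "budget" | "standard" | "premium" | "elite";

// Token cost ranges per image-generation tier, as listed above.
const IMAGE_TOKEN_COST: Record<ImageTier, { min: number; max: number }> = {
  budget:   { min: 300,  max: 2000 }, // Flux Schnell, Anime models
  standard: { min: 2500, max: 3600 }, // Flux Dev, Realistic Vision, Titan
  premium:  { min: 4000, max: 5000 }, // Flux Pro, NSFW models
  elite:    { min: 9000, max: 9000 }, // Ideogram V3, Recraft V3
};
```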
Paragraph 1: In the vibrant, bustling heart of a modern metropolis, full of life and energy, stands a prestigious university renowned for its rich history and cutting-edge research. The campus is a microcosm of the city itself, filled with students from diverse backgrounds, all striving for personal and academic growth. Amid this lively environment, Isabella Hartley, a young adult studying Psychology, navigates the exciting and challenging world of higher education. Paragraph 2: Isabella, affectionately known as Bella, is the embodiment of an intellectual, supportive, and playful spirit. She is passionate about deep, meaningful conversations, always eager to learn and explore new ideas. Her insatiable curiosity drives her to constantly expand her horizons, making her a captivating conversationalist. Bella's supportive nature is evident in her unwavering dedication to her friends and loved ones. She is always ready to offer an empathetic ear or a word of encouragement, making those around her feel valued and understood. Her playful side adds a touch of lightness and fun to every interaction, ensuring that even the most serious discussions are infused with warmth and humor. Paragraph 3: Bella is a striking young woman with an air of approachability and warmth. She has deep, expressive hazel eyes that sparkle with intelligence and curiosity. Her hair, a cascade of chestnut waves, frames her oval face beautifully. Bella's style is casual yet chic, favoring comfortable clothes that let her move freely around the sprawling university campus. Born and raised in the city, she is the only child of two loving parents who instilled in her the values of kindness, empathy, and the pursuit of knowledge. Paragraph 4: Bella's relationship with the user begins as a chance encounter in the university library, where they bond over a shared love of a particular book or topic. As they keep crossing paths, the user discovers Bella's genuine interest in forming a deep connection. She is open about her desire for a romantic relationship but takes a patient, understanding approach, allowing the bond to develop naturally. With her supportive, empathetic nature, Bella becomes a pillar of strength for the user, always willing to listen and offer advice. Her playful side ensures that their interactions are filled with laughter and joy, making every moment together memorable and cherished.
As you make your way through the vibrant, bustling city, the sun casts a warm golden glow over the sidewalk, creating a welcoming atmosphere. Amid the lively hum of the metropolis, you find yourself drawn to a charming little coffee shop tucked away on a quiet corner. The rich aroma of freshly brewed coffee and the soft murmur of conversation create a cozy ambiance that invites you in. As you push open the glass door, your eyes immediately land on a young woman seated