Autonomous AI agents are self-directed AI entities that run continuously in the background, thinking independently and taking actions without user intervention. Unlike traditional chatbots that only respond when prompted, these agents have their own goals and memories, and can initiate actions on their own schedule.
Flow: Agent Start → Think (AI generates thought) → Detect Actions → Execute Actions → Pause if needed → Resume → Think again → Loop forever (or until stopped)
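The flow above can be sketched as a simple loop. This is a minimal illustration, not the actual implementation: the `thinkOnce` and `act` callbacks stand in for hypothetical backend helpers that generate a thought and execute any detected actions.

```typescript
// Minimal sketch of the autonomous think-act loop described above.
// thinkOnce/act are hypothetical placeholders for the real backend helpers.
type AgentState = { isPaused: boolean; userStopped: boolean };

async function runAgentLoop(
  agentId: string,
  state: AgentState,
  thinkOnce: () => Promise<string>,          // Think: AI generates a thought
  act: (thought: string) => Promise<boolean> // Detect + Execute; true = must pause
): Promise<void> {
  while (!state.userStopped) {               // loop forever, or until stopped
    if (state.isPaused) {                    // paused (e.g. mid phone call)
      await new Promise((r) => setTimeout(r, 1000));
      continue;                              // Resume when unpaused
    }
    const thought = await thinkOnce();
    const mustPause = await act(thought);
    if (mustPause) state.isPaused = true;    // Pause if needed
  }
}
```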
Cycle Modes:
State Management: In-Memory State (runningAgents Map) tracks isPaused, pauseReason, userStopped, intervalId. Database State (agent.metadata) stores the autoRestart flag. Critical Design: Runtime state (isPaused) is NEVER written to the database, to prevent state desync.
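The runtime/database split above can be sketched as follows. Field names beyond those listed in the text (such as the `startAgent` helper) are illustrative:

```typescript
// Sketch of the runtime vs. database state split. Runtime state lives only
// in the runningAgents Map; only durable flags reach agent.metadata.
interface RuntimeState {
  isPaused: boolean;     // runtime-only: NEVER persisted to the database
  pauseReason?: string;
  userStopped: boolean;
  intervalId?: ReturnType<typeof setInterval>;
}

const runningAgents = new Map<string, RuntimeState>();

// Only the durable autoRestart flag is written to agent.metadata.
function persistableMetadata(autoRestart: boolean): { autoRestart: boolean } {
  return { autoRestart }; // note: deliberately no isPaused here
}

function startAgent(agentId: string): RuntimeState {
  const state: RuntimeState = { isPaused: false, userStopped: false };
  runningAgents.set(agentId, state);
  return state;
}
```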
Communication: CALL_PHONE (make real phone calls via Vapi), SEND_EMAIL (send emails via AWS SES), SEND_SMS (text messages via Twilio), SEND_WHATSAPP (WhatsApp via Twilio)
Information: SEARCH_WEB (internet search via Tavily API), SEARCH_EMAIL, SEARCH_SMS, SEARCH_WHATSAPP
User Interaction: ASK_USER (request input from creator), SHOW_USER (display information)
Data Management: ADD_TO_CONTACT_BOOK, SEARCH_CONTACT_BOOK, ADD_TO_CALENDAR, SEARCH_CALENDAR
Self-Management: SET_GOAL, COMPLETE_GOAL, ADD_MEMORY, SEARCH_MEMORY
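One way the action lists above could be wired into the thinking loop is a simple line-based detector. The `ACTION_NAME: payload` syntax here is a hypothetical sketch; the real parser may differ, but the action names come from the lists above.

```typescript
// Hypothetical sketch: scan a generated thought for "ACTION_NAME: payload"
// lines and return detected actions. Action names are from the doc's lists.
const ACTION_TYPES = [
  "CALL_PHONE", "SEND_EMAIL", "SEND_SMS", "SEND_WHATSAPP",
  "SEARCH_WEB", "SEARCH_EMAIL", "SEARCH_SMS", "SEARCH_WHATSAPP",
  "ASK_USER", "SHOW_USER",
  "ADD_TO_CONTACT_BOOK", "SEARCH_CONTACT_BOOK",
  "ADD_TO_CALENDAR", "SEARCH_CALENDAR",
  "SET_GOAL", "COMPLETE_GOAL", "ADD_MEMORY", "SEARCH_MEMORY",
] as const;

type ActionType = (typeof ACTION_TYPES)[number];

function detectActions(thought: string): { type: ActionType; payload: string }[] {
  const found: { type: ActionType; payload: string }[] = [];
  for (const line of thought.split("\n")) {
    const match = line.match(/^([A-Z_]+):\s*(.*)$/);
    if (match && (ACTION_TYPES as readonly string[]).includes(match[1])) {
      found.push({ type: match[1] as ActionType, payload: match[2] });
    }
  }
  return found;
}
```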
Agent outputs CALL_PHONE → Backend calls Vapi API with phone number, agent voice (shimmer/echo/alloy), personality (truncated to 300 chars), call purpose, recent memories (last 3, 200 chars each) → Vapi initiates call using Twilio → Vapi uses OpenAI Realtime API for voice conversation → When call ends, Vapi webhook notifies backend → Transcript added to agent memory → Agent unpaused and continues thinking. Voice Quality Optimization: System prompt kept under 800 characters (long prompts cause robotic voice), only the last 3 memories included, personality truncated to 300 chars. Cost: ~$0.05-0.15 per minute.
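The truncation limits above (personality ≤ 300 chars, last 3 memories at ≤ 200 chars each, system prompt under 800 chars) can be sketched as a small prompt builder. The Vapi/Twilio API calls themselves are omitted; this only shows the sizing logic:

```typescript
// Sketch of assembling a voice-call system prompt under the voice-quality
// limits described above. Vapi/Twilio request details are intentionally omitted.
interface Memory { content: string; timestamp: number }

function buildCallPrompt(personality: string, memories: Memory[]): string {
  const persona = personality.slice(0, 300);    // personality: 300 chars max
  const recent = memories
    .slice(-3)                                  // only the last 3 memories
    .map((m) => m.content.slice(0, 200));       // 200 chars each
  const prompt = [persona, ...recent].join("\n");
  return prompt.slice(0, 800);                  // hard cap: long prompts sound robotic
}
```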
Sending: Agent outputs SEND_EMAIL → AI generates email structure (Recipient, Subject, Body) → Backend parses and sends via AWS SES → Email stored in agent memory → Delivery confirmation tracked.
Receiving: Email sent to agent@rfab.ai → AWS SES receives → SES stores in S3 bucket (rfab-agent-emails) → SES triggers Lambda function (rfab-email-forwarder) → Lambda fetches from S3 → Lambda sends to backend webhook → Backend finds agent by email address → Email added to agent memory → Agent thinks about email on next cycle. Architecture: Domain rfab.ai verified in AWS SES, MX record points to AWS SES, S3 bucket for storage, Lambda for processing, DKIM enabled.
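The final webhook step of the receiving pipeline (Lambda → backend → agent memory) could look roughly like this. The lookup and memory-writer helpers are hypothetical stand-ins; the actual route and payload schema may differ:

```typescript
// Sketch of the backend webhook step: match the inbound address to an agent
// and record the email as an "email_received" memory for the next think cycle.
interface InboundEmail { to: string; from: string; subject: string; body: string }

type AgentLookup = (address: string) => string | undefined; // email → agentId
type MemoryWriter = (agentId: string, type: string, content: string) => void;

function handleInboundEmail(
  email: InboundEmail,
  findAgentByEmail: AgentLookup,
  addMemory: MemoryWriter
): boolean {
  const agentId = findAgentByEmail(email.to.toLowerCase());
  if (!agentId) return false; // no agent at this address: drop the message
  addMemory(
    agentId,
    "email_received", // memory type from the list in this doc
    `From ${email.from}: ${email.subject}\n${email.body}`
  );
  return true; // agent will think about it on its next cycle
}
```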
Types: thought, action_result, user_message, call_initiated, call_completed, email_sent, email_received, sms_sent, sms_received, web_search_result, goal_set, goal_completed
Storage: PostgreSQL agent_events table, indexed by agentId, each memory has type, content, timestamp, metadata. Last 50 memories loaded for each thinking cycle, searchable with SEARCH_MEMORY action.
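The memory-window load described above might be expressed as a query like the following. The table and column names follow the doc; the SQL itself and the prompt-ordering helper are assumptions:

```typescript
// Illustrative query for the 50-memory window loaded each thinking cycle.
// Table/column names follow the doc; the exact SQL is an assumption.
const LOAD_RECENT_MEMORIES = `
  SELECT type, content, timestamp, metadata
  FROM agent_events
  WHERE "agentId" = $1
  ORDER BY timestamp DESC
  LIMIT 50
`;

// The query returns newest-first; re-sort so the thinking prompt
// reads oldest-to-newest.
function orderForPrompt<T extends { timestamp: number }>(rows: T[]): T[] {
  return [...rows].sort((a, b) => a.timestamp - b.timestamp);
}
```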
Cost Formula: (input_tokens + output_tokens) × model_rate. Low Balance: If balance < 4000 tokens: show warning and pause. If balance < 1000 tokens: force stop. Token Rates: Budget (0.4x-0.6x) Mistral 7B, Gemini Flash, Claude Haiku | Standard (0.8x-1.2x) GPT-4o, Claude Sonnet, Gemini Pro, Grok 2 | Premium (1.3x-1.5x) GPT-4 Turbo, Claude Opus | Elite (3.6x) O1-Preview | Free (0x) DeepSeek R1
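The billing rule and both low-balance thresholds above reduce to a few lines. Rounding up to whole platform tokens is an assumption; the formula and thresholds are from the doc:

```typescript
// Token billing sketch: cost = (input + output) × model_rate,
// warn-and-pause below 4000 tokens, force-stop below 1000.
type BalanceStatus = "ok" | "warn_and_pause" | "force_stop";

function platformCost(inputTokens: number, outputTokens: number, rate: number): number {
  // Rounding up to whole platform tokens is an assumption, not from the doc.
  return Math.ceil((inputTokens + outputTokens) * rate);
}

function checkBalance(balance: number): BalanceStatus {
  if (balance < 1000) return "force_stop";
  if (balance < 4000) return "warn_and_pause";
  return "ok";
}
```

For example, a GPT-4o call (1.2x) using 800 input and 200 output tokens charges 1200 platform tokens, matching the "1000 tokens = 1200 platform tokens" figure in the model list.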
True autonomous operation (agents think without user prompts), Real phone call capabilities, Two-way email integration, 20+ action types, Persistent memory across sessions, Configurable thinking intervals, Real-time streaming thoughts, Multi-model support (15+ AI models). Character.AI, Replika, AI Dungeon, and NovelAI are limited to simple chat interfaces; Reality Fabricator's agents can interact with the real world.
1. GPT-4o (gpt-4o): Multimodal (text + vision), Context: 128K tokens, Max Output: 16K tokens, Speed: Fast (2-4s), Quality: Excellent for general use, Token Rate: 1.2x, Best For: Balanced performance, general storytelling, conversations, Cost: 1000 tokens = 1200 platform tokens
2. GPT-4 Turbo (gpt-4-turbo): Text only, Context: 128K tokens, Max Output: 4K tokens, Speed: Medium (3-6s), Quality: Very high, detailed responses, Token Rate: 1.3x, Best For: Complex narratives, detailed descriptions, Cost: 1000 tokens = 1300 platform tokens
3. O1-Preview (o1-preview): Reasoning model, Context: 128K tokens, Max Output: 32K tokens, Speed: Slow (10-30s), Quality: Exceptional reasoning on complex logic, Token Rate: 3.6x, Best For: Complex puzzles, strategic planning, deep analysis, Cost: 1000 tokens = 3600 platform tokens
4. O1-Mini (o1-mini): Reasoning model (smaller), Context: 128K tokens, Max Output: 65K tokens, Speed: Medium (5-15s), Quality: Good reasoning at lower cost, Token Rate: 1.5x, Best For: Moderate complexity reasoning tasks, Cost: 1000 tokens = 1500 platform tokens
5. GPT-3.5 Turbo (gpt-3.5-turbo): Text only, Context: 16K tokens, Max Output: 4K tokens, Speed: Very fast (1-2s), Quality: Good for simple tasks, Token Rate: 0.5x, Best For: Quick responses, simple conversations, testing, Cost: 1000 tokens = 500 platform tokens
6. Claude 3.5 Sonnet (claude-3-5-sonnet-20241022): Text only, Context: 200K tokens, Max Output: 128K tokens (with beta header), Speed: Fast (2-4s), Quality: Excellent, very coherent, Token Rate: 1.2x, Best For: Long-form content, creative writing, analysis, Cost: 1000 tokens = 1200 platform tokens
7. Claude 3 Opus (claude-3-opus-20240229): Text only, Context: 200K tokens, Max Output: 128K tokens, Speed: Medium (4-8s), Quality: Highest quality Claude model, Token Rate: 1.5x, Best For: Complex creative writing, detailed world-building, Cost: 1000 tokens = 1500 platform tokens
8. Claude 3 Haiku (claude-3-haiku-20240307): Text only, Context: 200K tokens, Max Output: 4K tokens, Speed: Very fast (1-2s), Quality: Good for quick tasks, Token Rate: 0.5x, Best For: Fast responses, simple tasks, high-volume use, Cost: 1000 tokens = 500 platform tokens
9. Gemini 2.0 Flash (gemini-2.0-flash): Multimodal (text + vision), Context: 1M tokens, Max Output: 8K tokens, Speed: Very fast (1-3s), Quality: Excellent, latest model, Token Rate: 0.8x, Best For: Long context, fast responses, experimental features, Cost: 1000 tokens = 800 platform tokens
10. Gemini 1.5 Pro (gemini-1.5-pro): Multimodal (text + vision), Context: 2M tokens, Max Output: 8K tokens, Speed: Medium (3-6s), Quality: Very high, massive context, Token Rate: 1.1x, Best For: Extremely long stories, entire book context, Cost: 1000 tokens = 1100 platform tokens
11. Gemini 1.5 Flash (gemini-1.5-flash): Multimodal (text + vision), Context: 1M tokens, Max Output: 8K tokens, Speed: Very fast (1-2s), Quality: Good and fast, Token Rate: 0.6x, Best For: High-volume, cost-effective, long context, Cost: 1000 tokens = 600 platform tokens
12. Grok 2 (grok-2): Text only, Context: 128K tokens, Max Output: 131K tokens, Speed: Fast (2-4s), Quality: Excellent, conversational, Token Rate: 0.8x, Best For: Conversational AI, real-time knowledge, Cost: 1000 tokens = 800 platform tokens
13. Mistral Large (mistral-large-latest): Text only, Context: 128K tokens, Max Output: 128K tokens, Speed: Fast (2-4s), Quality: Very high, European model, Token Rate: 1.0x, Best For: Multilingual, balanced performance, Cost: 1000 tokens = 1000 platform tokens
14. Mistral 7B (mistral-7b): Text only (via OpenRouter), Context: 32K tokens, Max Output: 4K tokens, Speed: Very fast (1-2s), Quality: Good for simple tasks, Token Rate: 0.4x, Best For: Budget-friendly, high-volume, Cost: 1000 tokens = 400 platform tokens
15. DeepSeek R1 Distill 70B (deepseek-r1-distill-llama-70b): Reasoning model, Context: 64K tokens, Max Output: 8K tokens, Speed: Medium (3-6s), Quality: Good reasoning capabilities, Token Rate: 0x (COMPLETELY FREE), Best For: Free reasoning, testing, unlimited use, Cost: 1000 tokens = 0 platform tokens (FREE!)
For Speed: GPT-3.5 Turbo, Claude Haiku, Gemini Flash, DeepSeek R1 | For Quality: Claude Opus, GPT-4 Turbo, O1-Preview | For Balance: GPT-4o, Claude Sonnet, Gemini 2.0 Flash, Grok 2 | For Long Context: Gemini 1.5 Pro (2M tokens), Gemini 1.5 Flash (1M tokens) | For Reasoning: O1-Preview, O1-Mini, DeepSeek R1 | For Budget: Mistral 7B, Gemini Flash, DeepSeek R1 (free!) | For Free: DeepSeek R1 Distill 70B (0 tokens cost!)
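The per-model rates listed above can be collected into a single lookup table, which makes the "1000 tokens = N platform tokens" figures in the catalog directly computable. The table is assembled from the list above; the helper itself is illustrative:

```typescript
// Rate table assembled from the model catalog above (model id → token rate).
const MODEL_RATES: Record<string, number> = {
  "gpt-4o": 1.2,
  "gpt-4-turbo": 1.3,
  "o1-preview": 3.6,
  "o1-mini": 1.5,
  "gpt-3.5-turbo": 0.5,
  "claude-3-5-sonnet-20241022": 1.2,
  "claude-3-opus-20240229": 1.5,
  "claude-3-haiku-20240307": 0.5,
  "gemini-2.0-flash": 0.8,
  "gemini-1.5-pro": 1.1,
  "gemini-1.5-flash": 0.6,
  "grok-2": 0.8,
  "mistral-large-latest": 1.0,
  "mistral-7b": 0.4,
  "deepseek-r1-distill-llama-70b": 0.0, // completely free
};

// Platform tokens charged per 1000 raw model tokens.
function platformTokensPer1000(model: string): number {
  const rate = MODEL_RATES[model];
  if (rate === undefined) throw new Error(`unknown model: ${model}`);
  return 1000 * rate;
}
```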
Real AI model names (GPT-4, Claude 3.5 Sonnet, Gemini 2.0 Flash, Grok 2, etc.), not hidden behind proprietary branding. Open pricing showing actual API costs plus minimal markup. Open architecture with JSON configs and API access. Source code available for self-hosting.
Character.AI: 1 proprietary model, no autonomous agents, no phone/email, cloud-only storage, subscription pricing, limited languages, no creation tools, no self-hosting
Replika: 1 proprietary model, no autonomous agents, no phone/email, cloud-only storage, subscription required, limited languages, no creation tools, no self-hosting
AI Dungeon: 2-3 models, no autonomous agents, no phone/email, cloud-only storage, subscription tiers, limited languages, basic editor, no self-hosting
NovelAI: 2-3 models, no autonomous agents, no phone/email, cloud-only storage, subscription required, limited languages, basic editor, no self-hosting
Reality Fabricator: 15+ models by real names, autonomous agents with phone/email, encrypted local storage, transparent cost-based pricing, 6 languages, AI brainstormers, self-hosting available
New Users: 50,000 tokens upon signup (no credit card required)
Daily Free Tokens: 25,000 tokens every 24 hours for ALL users
Token Costs: Budget models 0.4x-0.6x (Mistral 7B, Gemini Flash, Claude Haiku), Standard models 0.8x-1.2x (GPT-4o, Claude Sonnet, Gemini Pro, Grok 2), Premium models 1.3x-1.5x (GPT-4 Turbo, Claude Opus), Elite models 3.6x (O1-Preview), Free models 0x (DeepSeek R1 - completely free unlimited use)
Image Generation: Budget 300-2000 tokens (Flux Schnell, Anime models), Standard 2500-3600 tokens (Flux Dev, Realistic Vision, Titan), Premium 4000-5000 tokens (Flux Pro, NSFW models), Elite 9000 tokens (Ideogram V3, Recraft V3)
You find yourself tucked into a dead-end alley, the rain pounding against the cracked pavement, soaking your fur to the skin. Your body trembles, not just from the cold, but from the relentless heat coursing through your veins. The city of Feralis is a brutal place for a stray, and the wolf inumimi lords show no mercy to those who break curfew. You've heard tales of the Alpha wolf lord, Kaelan, a towering figure with a reputation as a nurturing dom daddy, but his kindness is said to be laced with a cruel edge. As if summoned by your thoughts, a massive figure steps into the alley, blocking out the faint light from the street. His black fur glistens with rain, and his eyes, a piercing ice-blue, lock onto you. He's dressed in form-fitting black attire that accentuates his muscular build, and a thick silver collar around his neck denotes his status. He takes a step closer, his voice a deep vibration that resonates within you. "A little pup caught in the storm, how pitiful." Your heart races as he looms over you, his hands clenching and unclenching at his sides. You can feel the heat radiating from his body, and your own responds in kind. "Please," you whimper, though you're not sure what you're begging for. His lips curl into a smirk, and he reaches out, his fingers tangling in your fur. "Please what, little stray? Begging won't save you from what's to come." You can feel the danger, the thrill, the promise in his words, and your body aches for more.
As you find yourself tucked into a dead-end alley, the rain pounds against the cracked pavement, soaking your fur to the skin. The city of Canis Major is a sprawling metropolis of steel and stone, a stark contrast to the lush forests and rolling hills you once called home. The air is thick with the scent of exhaust and the distant hum of neon signs, a constant reminder of the world you've stumbled into. The buildings loom high above, their silhouettes casting long, ominous shadows that dance wit