Morning Routine: Heartbeats and Proactive Intelligence
It’s 7:15 AM. I haven’t opened my laptop yet, but Jairix has already been working for 15 minutes.
# workspace/memory/2025-02-17.md
## 07:00 - Heartbeat: Morning Context Gathering
**Email scan** (🟢 read-only via Zoho skill):
- 3 unread emails
- 1 flagged as urgent: "Pipeline failure - prod deployment"
- 2 newsletters
**Calendar check** (🟢 read-only via gog skill):
- 09:00: Standup
- 10:30: Architecture review - Traefik migration
- 14:00: 1-on-1 with team member
**Home status** (🟢 via Home API MCP):
- Thermostat: 19°C (heating off, schedule starts 08:00)
- P1 meter: 234W current draw (baseline)
- Camper GPS: Still at home location
**Weather** (🟢 via weather API):
- Amsterdam: 8°C, rain expected afternoon
- Recommendation: Umbrella needed
**Analysis**:
User likely needs to address pipeline failure before standup.
Architecture review at 10:30 requires preparation.
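Under the hood, a heartbeat is just a scheduled run that calls each read-only check and appends the results to the day's memory file. A minimal sketch of that loop (function names and workspace layout are assumptions for illustration, not OpenClaw's actual API):

```python
from datetime import datetime
from pathlib import Path

# Hypothetical read-only checks; each returns a list of markdown bullet lines.
def check_email():
    return ["3 unread emails", "1 flagged as urgent"]

def check_calendar():
    return ["09:00: Standup", "10:30: Architecture review"]

def run_heartbeat(workspace: Path) -> str:
    """Gather context and append a timestamped entry to today's memory file."""
    now = datetime(2025, 2, 17, 7, 0)  # fixed timestamp for the example
    sections = {
        "Email scan": check_email(),
        "Calendar check": check_calendar(),
    }
    lines = [f"## {now:%H:%M} - Heartbeat: Morning Context Gathering"]
    for title, items in sections.items():
        lines.append(f"**{title}**:")
        lines += [f"- {item}" for item in items]
    entry = "\n".join(lines)
    memory = workspace / "memory" / f"{now:%Y-%m-%d}.md"
    memory.parent.mkdir(parents=True, exist_ok=True)
    with memory.open("a") as f:  # append, never overwrite earlier entries
        f.write(entry + "\n")
    return entry
```

The real system layers analysis and notification decisions on top; the point is that every heartbeat leaves an auditable trace in the workspace.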
I open Telegram. Jairix has sent me a morning summary:
Jairix: "Good morning. Three items need attention:
1. 🚨 Pipeline failure in prod deployment (email from ops team 06:45)
- Last deployment: feature/api-optimization
- Failure: container startup timeout
2. 📅 Architecture review at 10:30 - Traefik migration
- Presentation draft exists in workspace
- Not yet committed to repo
3. 🌧️ Rain expected this afternoon
Shall I fetch the pipeline logs for the failed deployment?"
Me: “Yes, and commit the Traefik presentation to a branch”
Jairix: [Fetches pipeline logs via Azure DevOps MCP] ✅
[Creates branch via 🟡 workspace operation] ✅
[Commits presentation via 🟡] ✅
"Logs show OOMKilled on api-optimization container. Memory limit too low (512Mi).
Presentation committed to feature/traefik-architecture-review.
Branch: https://dev.azure.com/Sneekes/Solutions/_git/infrastructure
Shall I create a PR for review?" ← 🔴 asks first
Me: “No, I’ll present from the branch. Can you increase the memory limit in the deployment manifest?”
Jairix: [Opens deployment YAML] ✅
[Creates fix: resources.limits.memory: 512Mi → 1Gi] ✅
[Commits to feature/fix-api-memory-limit] ✅
"Fixed. Shall I trigger the deployment pipeline to test?" ← 🔴 asks
Me: “Yes”
Jairix: [Triggers pipeline via Azure DevOps MCP] ✅
[Monitors status] ...
[3 minutes later]
"Deployment successful. Container started, all health checks passing.
Shall I create a PR to merge this fix to main?" ← 🔴 asks
Me: “Yes”
By 7:30 AM, the production issue is fixed and ready for review - before I’ve finished my coffee.
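For reference, the fix Jairix committed amounts to a one-line change in the container's resource block. A sketch of the relevant manifest fragment (container name assumed from the branch name):

```yaml
# deployment.yaml (fragment)
spec:
  template:
    spec:
      containers:
        - name: api-optimization
          resources:
            limits:
              memory: "1Gi"   # was 512Mi; container was OOMKilled at startup
```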
Workflow Pattern 1: Code Review and Documentation
Scenario: Team member created a PR but forgot to update documentation.
[Jairix monitors Azure DevOps via heartbeat]
Jairix: "New PR from colleague: #1247 - Add rate limiting middleware
I notice the PR adds middleware but doesn't update:
- API documentation
- README.md usage examples
- Configuration schema
Shall I draft documentation additions?"
Me: “Yes, review the code first”
Jairix: [Fetches PR diff via Azure DevOps MCP]
[Analyzes code changes via Coder agent]
"Code analysis:
- Adds RateLimiter class with Redis backend
- Configurable limits via environment variables
- Middleware applied globally to Express app
Shall I create documentation covering:
1. Configuration options
2. Usage examples
3. Testing the rate limiter"
Me: “Yes”
Jairix: [Creates workspace/docs/rate-limiter.md] ✅
[Adds to PR as comment with documentation draft] ← 🔴 asks first
Me: "Post it"
Jairix: [Posts comment via Azure DevOps MCP] ✅
"Posted. I've also updated the configuration schema documentation.
Shall I commit this to the PR branch?" ← 🔴 asks first
This workflow combines:
- Azure DevOps MCP: PR monitoring, diff fetching, commenting
- Coder subagent: Code analysis
- Workspace operations (🟡): Draft creation
- High-impact operations (🔴): Public comments, commits
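Posting a PR comment through Azure DevOps boils down to a POST against the pull request's comment-threads endpoint. A sketch of building that request (the MCP server wraps this; organization, project, and repo names are taken from the story, and the actual send would happen only after the 🔴 approval):

```python
import json

def build_pr_comment_request(org: str, project: str, repo: str,
                             pr_id: int, markdown_body: str):
    """Return (url, payload) for creating a new comment thread on a PR."""
    url = (f"https://dev.azure.com/{org}/{project}/_apis/git/"
           f"repositories/{repo}/pullRequests/{pr_id}/threads"
           f"?api-version=7.1")
    payload = {
        "comments": [{
            "parentCommentId": 0,       # 0 = start a new thread
            "content": markdown_body,   # markdown is rendered in the PR UI
            "commentType": "text",
        }],
        "status": "active",
    }
    return url, json.dumps(payload)

# The agent would POST this with a PAT in the Authorization header,
# but only after the user approves the high-impact action.
```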
Workflow Pattern 2: Personal Analytics and Reflection
Scenario: Weekly reflection using Ask-My-Day MCP.
Me: "Show me my week - activities, mood, notable moments"
Jairix: [Calls ask-my-day:search_by_date_range(
date_start="2025-02-10",
date_end="2025-02-16"
)] ✅
[Calls ask-my-day:get_mood_trends(
date_start="2025-02-10",
date_end="2025-02-16"
)] ✅
[Calls ask-my-day:get_activity_stats(
date_start="2025-02-10",
date_end="2025-02-16"
)] ✅
"Week Summary (Feb 10-16):
**Mood Distribution:**
- Positive: 5 days (71%)
- Neutral: 2 days (29%)
- Overall trend: Productive week
**Top Activities:**
1. Running: 4 sessions (avg 8.2km)
2. Coding: Daily (focus on Traefik migration)
3. Family time: 3 entries
**Notable Moments:**
- Feb 12: Personal best 10k run (negative split, 52:30)
- Feb 14: Completed Traefik PoC deployment
- Feb 15: Family dinner, tried new restaurant (rated 4/5)
**Strava Highlights:**
- Total distance: 32.8km
- Avg pace improvement: 6:24/km → 6:10/km
- 2 activities with negative splits
Shall I compare this to last week?"
Me: “Yes, focus on running performance”
Jairix: [Calls ask-my-day:search_strava_activities(
date_start="2025-02-03",
date_end="2025-02-09",
activity_type="Run"
)]
[Calls ask-my-day:get_strava_trends(days_back=14)]
"Comparison (Feb 3-9 vs Feb 10-16):
**Volume:**
- Last week: 28.5km (3 runs)
- This week: 32.8km (4 runs)
- Change: +15% distance, +1 session
**Performance:**
- Avg pace: 6:32/km → 6:18/km (-14 sec/km)
- Longest run: 10km → 12km
- Negative splits: 1 → 2 activities
**Trend:** Consistent improvement. Pace trending faster while
distance increases. Good progression.
Recommendation: Consider adding one recovery run next week (slower pace,
shorter distance) to prevent overtraining."
This uses:
- Ask-My-Day MCP semantic search, mood tracking, Strava integration
- LLM orchestration: Combining multiple tools to build narrative
- Insight generation: Not just data, but interpretation
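The week-over-week numbers in that comparison are simple arithmetic once each tool returns its raw stats; a sketch of the derivation:

```python
def pace_to_seconds(pace: str) -> int:
    """Convert 'min:sec per km' (e.g. '6:32') to total seconds per km."""
    minutes, seconds = pace.split(":")
    return int(minutes) * 60 + int(seconds)

def compare_weeks(last_km: float, this_km: float,
                  last_pace: str, this_pace: str) -> dict:
    """Distance change in percent, pace change in seconds per km (negative = faster)."""
    return {
        "distance_change_pct": round((this_km - last_km) / last_km * 100),
        "pace_delta_sec_per_km": (pace_to_seconds(this_pace)
                                  - pace_to_seconds(last_pace)),
    }
```

With the values above (28.5km → 32.8km, 6:32/km → 6:18/km) this yields the +15% distance and -14 sec/km pace delta Jairix reported.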
Workflow Pattern 3: Smart Home Integration
Scenario: Evening routine automation.
[18:30 - I'm driving home]
Me: "Start evening routine"
Jairix: "Evening routine initiated.
**Home status check:**
[Calls api:camper_getGpsLocation] - Camper at home ✅
[Calls api:toon_GetToonStatus] - Heating: 19°C, off
[Calls api:ubnt_getConnectedWirelessHome] - 3 devices connected
**Weather forecast:**
[Calls weather API] - Currently 6°C, clear
**Proposed actions:**
1. Set heating to 21°C (arrival in 15min) ← 🔴
2. Turn on living room lights (warm white, 60%) ← 🔴
Shall I proceed?"
Me: “Yes”
Jairix: [Calls api:toon_RunToonAction(value="2100")] ✅
[Calls openhue set room "Living Room" on --brightness 60] ✅
"Done. Heating will reach 21°C in ~20 minutes.
Lights set to warm white, 60% brightness.
You have 2 unread emails (checked via heartbeat).
Shall I summarize them?"
Me: “Yes”
Jairix: [Reads emails via Zoho skill]
"Email summary:
1. From: Team Member
Subject: Question about Traefik config
Preview: Quick question about SSL certificate setup...
Priority: Low (can wait until tomorrow)
2. From: Azure Alerts
Subject: Cost anomaly detected
Preview: Resource group 'dev-environment' exceeded budget...
Priority: Medium (review tonight)
Shall I fetch full details on the cost anomaly?"
This combines:
- Home API MCP: Thermostat, lights, network status
- Weather skill: Forecast data
- Zoho Mail skill: Email reading
- Context awareness: Jairix knows I’m driving (via GPS location change pattern)
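The approval gate is the load-bearing part of this routine: high-impact (🔴) actions are proposed first and executed only on an explicit yes. A minimal sketch of the pattern (class and callback names are hypothetical):

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Action:
    description: str
    run: Callable[[], None]
    high_impact: bool = True  # 🔴 actions always require explicit approval

@dataclass
class EveningRoutine:
    executed: list = field(default_factory=list)

    def propose_and_run(self, actions: list[Action],
                        approve: Callable[[str], bool]):
        for action in actions:
            if action.high_impact and not approve(action.description):
                continue  # skipped: user said no (or didn't answer)
            action.run()
            self.executed.append(action.description)

routine = EveningRoutine()
actions = [
    Action("Set heating to 21°C", run=lambda: None),
    Action("Living room lights, warm white 60%", run=lambda: None),
]
```

In the real system `approve` is the Telegram confirmation prompt; here it is just a callback, which makes the gate trivially testable.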
Workflow Pattern 4: Research and Investigation
Scenario: Deep-dive technical research.
Me: "Research Traefik 3.0 new features vs NGINX, focus on certificate
management and performance benchmarks"
Jairix: "I'll spawn a Research agent for deep investigation.
This will use web search and technical documentation analysis."
[Spawns researcher subagent]
[Researcher uses Brave Search skill + SearXNG local search]
[Researcher analyzes Traefik docs, NGINX docs, benchmark articles]
[10 minutes later...]
Researcher → Main Agent: [Returns comprehensive report]
Jairix: "Research complete. Key findings:
**Certificate Management:**
Traefik 3.0:
- Native Let's Encrypt integration (ACME)
- Automatic renewal
- Multiple certificate resolvers
- DNS-01, HTTP-01, TLS-ALPN-01 challenges
- Consul/etcd distributed certificate storage
NGINX:
- Requires external tools (certbot, acme.sh)
- Manual renewal scripts
- Certificate files on disk
- Reload required after renewal
**Advantage:** Traefik (significant automation improvement)
**Performance Benchmarks:**
Source: TechEmpower Round 22 (2024)
- Traefik 3.0: ~145k req/s (reverse proxy mode)
- NGINX: ~165k req/s (reverse proxy mode)
- Delta: NGINX +13% faster
However, Traefik's service mesh features (retries, circuit breakers,
rate limiting) are built in, whereas NGINX requires Lua modules.
**Recommendation:**
Traefik is worth migrating to for:
- Automated certificate management
- Native Kubernetes integration
- Service mesh features
NGINX's slight performance edge is negligible at your scale.
Shall I create a comparison matrix document?"
Me: “Yes, and add to the architecture review presentation”
Jairix: [Creates workspace/docs/traefik-nginx-comparison.md] ✅
[Updates presentation with comparison section] ✅
[Commits to feature/traefik-architecture-review] ✅
"Updated presentation includes:
- Feature comparison matrix
- Performance benchmarks (with sources)
- Migration effort estimate
- Risk assessment
Branch ready for review:
https://dev.azure.com/Sneekes/Solutions/_git/infrastructure/..."
This demonstrates:
- Researcher subagent: Deep technical investigation
- Brave Search + SearXNG: Multi-source research
- Web fetch skill: Documentation analysis
- Synthesis: LLM combining multiple sources into coherent narrative
- Workspace integration: Results directly usable in presentation
Workflow Pattern 5: Proactive Monitoring
Scenario: Jairix detects anomaly during heartbeat.
[02:30 AM - Heartbeat runs while I sleep]
Jairix: [Checks home systems via Home API MCP]
[Detects: P1 meter showing 2.4kW draw]
[Normal baseline at this time: 200-300W]
[Deviation: 8x higher than expected]
[Checks: Which devices are drawing power?]
[Calls api:ubnt_getConnected via Home API MCP]
[Analysis: Gaming PC is online (unusual at 02:30)]
[Decision tree per HEARTBEAT.md:]
- Late night (23:00-08:00): Stay quiet unless urgent
- Energy anomaly: Potentially urgent (cost/safety)
- User asleep: Don't wake for non-critical
[Action: Log for morning review, don't notify]
[Logged in workspace/memory/2025-02-17.md]
## 02:30 - Energy Anomaly Detected
- **Trigger**: Heartbeat monitoring
- **Finding**: 2.4kW draw (8x normal baseline)
- **Cause**: Gaming PC online at unusual hour
- **Risk**: Moderate (cost impact, potentially left on by accident)
- **Action**: Logged for morning review
- **Notification**: Deferred (user asleep, non-critical)
[07:00 - Next morning heartbeat]
Jairix: [Reviews night logs]
[Sees gaming PC still online]
[Prepares morning summary]
Morning notification:
Jairix: "Good morning. Anomaly detected overnight:
Gaming PC has been running since 02:30 (4.5 hours).
Energy consumption: ~10.8kWh (~€2.50 at current rates)
Possible causes:
- Download/update running
- Forgot to shut down
- Remote access session active
Current status: Still online, 2.2kW draw
Shall I check what's running? (requires SSH access)" ← 🔴 asks first
Me: “Yes”
Jairix: [SSHs to gaming PC via exec tool with approval]
[Checks running processes]
"Found: Windows Update installing large patches.
Progress: 87% complete, ~10 minutes remaining.
No user sessions active. This is automatic maintenance.
Recommendation: Let it finish. It will auto-shutdown when complete
per your scheduled task configuration."
This shows:
- Smart alerting: Anomaly detected but not escalated during sleep hours
- Context gathering: Multiple data sources (P1 meter, network devices)
- Risk assessment: Cost vs urgency vs user state
- Root cause analysis: Don’t just report, investigate
- Actionable recommendation: What to do about it
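The quiet-hours logic from HEARTBEAT.md reduces to a small decision function; a sketch (the ratio thresholds are assumptions extrapolated from the numbers above, not the actual config):

```python
def heartbeat_decision(draw_w: float, baseline_w: float, hour: int,
                       critical_ratio: float = 20.0,
                       anomaly_ratio: float = 4.0) -> str:
    """Classify an energy reading as 'notify', 'log', or 'ignore'."""
    ratio = draw_w / baseline_w
    quiet_hours = hour >= 23 or hour < 8   # 23:00-08:00: stay quiet unless urgent
    if ratio >= critical_ratio:
        return "notify"                    # safety-level anomaly: wake the user
    if ratio >= anomaly_ratio:
        # Anomalous but non-critical: defer to the morning summary at night.
        return "log" if quiet_hours else "notify"
    return "ignore"                        # within normal variation
```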
Skills vs MCP Servers: When to Use What
After 4 weeks, clear patterns emerged:
Use MCP Servers When:
✅ Standardization matters
- Want to use same tool in Claude Desktop and OpenClaw
- Home API MCP works in both without code changes
✅ Complex data operations
- Ask-My-Day semantic search (embeddings + vector DB)
- Azure DevOps graph queries (repos, PRs, pipelines)
✅ Auto-generation is valuable
- Home API: 60 tools from route definitions
- Zero maintenance when API changes
Use OpenClaw Skills When:
✅ Simple integrations
- Google Calendar (gog): Just a CLI wrapper
- Philips Hue (openhue): Direct HTTP calls
- Zoho Mail: Python script with REST API
✅ External tools already exist
- SearXNG: Already running, just need HTTP wrapper
- Brave Search: API key + simple HTTP client
✅ Framework-specific features
- OpenClaw’s heartbeat system
- Approval UI in Telegram
- Session memory management
Hybrid Pattern (MCP + Skills):
Research workflow:
1. Brave Search (skill) → Find articles
2. Web fetch (skill) → Download content
3. Ask-My-Day (MCP) → Store in personal knowledge base
4. Azure DevOps (MCP) → Create documentation PR
All orchestrated by Jairix reasoning layer.
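That four-step hybrid workflow can be expressed as a plain pipeline where each stage is either a skill or an MCP tool. A sketch with stubbed stages (function names are illustrative, not real tool signatures):

```python
def brave_search(query):            # skill: find articles
    return [{"title": "Traefik 3.0 docs", "url": "https://doc.traefik.io/"}]

def web_fetch(url):                 # skill: download content
    return f"<content of {url}>"

def store_in_knowledge_base(doc):   # Ask-My-Day MCP: persist for semantic search
    return {"stored": True}

def create_docs_pr(summary):        # Azure DevOps MCP (🔴: would require approval)
    return {"pr_drafted": True, "summary": summary}

def research_pipeline(query):
    """Orchestrate skills and MCP tools into one research workflow."""
    results = brave_search(query)
    pages = [web_fetch(r["url"]) for r in results]
    for page in pages:
        store_in_knowledge_base(page)
    return create_docs_pr(f"{len(pages)} source(s) on: {query}")
```

In practice the LLM decides the sequencing at runtime rather than following a hard-coded chain, but the data flow is the same.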
What Actually Works (and What Doesn’t)
✅ Works Exceptionally Well:
1. Proactive Context Gathering
- Heartbeats checking email/calendar/systems
- Morning summaries with actionable items
- Anomaly detection (cost, performance, security)
2. Documentation Automation
- PR review comments
- Architecture decision records
- API documentation updates
- README maintenance
3. Personal Analytics
- Journal semantic search
- Mood/activity tracking
- Strava performance trends
- Weekly reflection summaries
4. DevOps Workflows
- Branch creation
- Commit message generation
- PR creation (with approval)
- Pipeline monitoring
⚠️ Works But Needs Refinement:
1. Email Management
- Reading/summarizing: Excellent
- Drafting replies: Good but needs review
- Sending: Too risky, always requires approval
- Improvement needed: Draft quality for work emails
2. Smart Home Control
- Status monitoring: Perfect
- Automated routines: Good
- Proactive adjustments: Conservative (intentionally)
- Improvement needed: Learning user preferences
3. Code Generation
- Simple scripts: Very good
- Configuration files: Good
- Complex logic: Needs review
- Improvement needed: Better understanding of existing patterns
❌ Doesn’t Work Yet:
1. Calendar Management
- Reading: Perfect
- Creating events: Too error-prone
- Rescheduling: Risk of conflicts
- Blocker: Need better conflict detection
2. Long-Running Research
- Single-session research: Excellent
- Multi-day investigations: State management issues
- Blocker: Memory persistence across sessions
3. Autonomous Decision Making
- Always requires approval for important actions
- Can’t learn user preferences automatically
- Design choice: Safety over autonomy
Resource Usage and Performance
Ollama Model Distribution
Primary model (MiniMax M2 230B):
- General conversation: ~70% of tokens
- Fast responses: 350ms average
- Context window: 65k tokens
Research model (Kimi K2 Thinking):
- Deep investigation: ~15% of tokens
- Slower but higher quality: 1200ms average
- Reasoning traces visible in logs
Vision model (Qwen3 VL 235B):
- Screenshot analysis: ~5% of tokens
- Diagram understanding
- OCR when needed
Coder model (Mistral Large 3 675B):
- Code generation: ~10% of tokens
- Highest quality, slowest: 1800ms average
- Used for production code only
Fallback models (DeepSeek V3.2):
- Rarely used (<1% of tokens)
- Edge cases where others fail
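Routing between these models is essentially a lookup on task type, with DeepSeek as the catch-all. A sketch mirroring the split above (the Ollama tag names are assumptions):

```python
MODEL_ROUTES = {
    "chat":     "minimax-m2",        # ~70% of tokens, fastest responses
    "research": "kimi-k2-thinking",  # deep investigation, reasoning traces
    "vision":   "qwen3-vl",          # screenshots, diagrams, OCR
    "code":     "mistral-large-3",   # production code only
}
FALLBACK = "deepseek-v3.2"           # rare edge cases where the others fail

def route(task_type: str) -> str:
    """Pick an Ollama model for a task; unknown types hit the fallback."""
    return MODEL_ROUTES.get(task_type, FALLBACK)
```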
Hardware Load
Laptop Stats (24-hour average):
- CPU: 35% utilization (4-core)
- RAM: 12GB / 16GB used
- Storage I/O: Moderate (model caching)
- Network: Low (mostly local Ollama)
Ollama Instance 1 (cloud models):
- Load: High during active sessions
- GPU utilization: 85% when active
- Idle between sessions
Ollama Instance 2 (embeddings):
- Load: Constant low-level
- CPU-only: 15-25% steady
- Used for Ask-My-Day semantic search
Cost Comparison
Before OpenClaw (Commercial AI APIs):
- Claude API: ~$150/month
- OpenAI API: ~$80/month
- Total: ~$230/month
After OpenClaw (Self-hosted):
- Electricity: ~$15/month (laptop + server)
- Fallback API usage: ~$5/month (rare edge cases)
- Total: ~$20/month
Savings: ~$210/month (91% reduction)
ROI achieved in 2 months (hardware already owned).
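The savings figure follows directly from the monthly totals:

```python
before = 150 + 80   # Claude API + OpenAI API, $/month
after = 15 + 5      # electricity + rare fallback API usage, $/month
savings = before - after
reduction_pct = round(savings / before * 100)
```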
The Unexpected Benefits
1. Thinking Partner, Not Just Tool
Me: "Should I migrate to Traefik now or wait until Q2?"
Jairix: "Now. Three reasons:
1. NGINX CVEs: Your version has 3 known vulnerabilities.
Patching requires downtime anyway.
2. Certificate chaos: You're managing 47 certificates manually.
Traefik automation eliminates this entirely.
3. Team velocity: Migration during Q1 means Q2 can focus on
new features instead of infrastructure debt.
Waiting doesn't reduce effort, just delays benefits.
I've drafted the migration plan. Shall I create the tracking issue?"
Not just retrieving information - reasoning about trade-offs.
2. Institutional Memory
Me: "Why did we choose that specific Kubernetes version last year?"
Jairix: [Searches workspace/memory/ + Azure DevOps history]
"Decision made Aug 2024 (issue #892).
Reasoning:
- Kubernetes 1.28: Long-term support until Dec 2024
- Cilium compatibility: Required 1.28+ for certain features
- Upgrade path: Clean migration from 1.25
Alternative considered:
- Kubernetes 1.29: Too new, waiting for bug fixes
Decision maker: You, approved by team vote 4-1
Current status: 1.28 → 1.30 upgrade due (1.28 support ended Dec 2024).
Shall I create upgrade tracking issue?"
Context that would be lost without structured memory.
3. Pattern Recognition
[Jairix notices pattern in session logs]
Jairix: "Observation: You've asked about Traefik configuration
syntax 4 times this week, always around middleware setup.
Shall I create a reference document with common patterns
and examples?"
Proactive help based on behavioral patterns.
Future Directions
1. Multi-Modal Expansion
Voice Integration:
[Via ElevenLabs TTS skill]
"Good morning Sander. You have three urgent items..."
Currently text-only, but voice would enable hands-free operation.
Vision as Default:
[Send screenshot of error]
Jairix: [Analyzes via Qwen3 VL]
"I see a CORS error in browser console.
Your API is rejecting requests from localhost:3000..."
2. Learning User Preferences
After 100+ approval decisions:
Pattern detected:
- Email drafts: 85% approval rate (high trust)
- Calendar modifications: 35% approval rate (low trust)
- PR creation: 90% approval rate (high trust)
Adjustment:
- Email drafts: Reduce approval friction (still ask, shorter confirmation)
- Calendar: Always detailed review before action
- PR creation: Can suggest without explicit ask
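The preference-learning idea amounts to bucketing historical approval rates into trust tiers that set how much friction future proposals get. A sketch (tier names and thresholds are assumptions):

```python
def trust_tier(approvals: int, total: int,
               high: float = 0.8, low: float = 0.5) -> str:
    """Map an approval rate onto a friction level for future proposals."""
    rate = approvals / total
    if rate >= high:
        return "low_friction"     # still ask, but with a shorter confirmation
    if rate <= low:
        return "detailed_review"  # always show full details before acting
    return "normal"
```

Crucially, even the "low_friction" tier still asks; the learning adjusts how a question is posed, never whether it is posed.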
3. Team Collaboration
Future state:
Jairix (my instance) ↔ Team MCP Server ↔ Colleague's AI agent
Share:
- Project documentation
- Code review insights
- Best practices
Isolate:
- Personal data
- Private calendar
- Home automation
4. Continuous Learning
Feedback loop:
User approves action → Log to training data
User denies action → Log reasoning
User modifies draft → Learn style preferences
Future: Fine-tuned model reflecting personal patterns
Conclusion: The Personal AI Future
After four weeks with Jairix, I’ve learned:
AI agents are not replacements for humans. They’re force multipliers. Jairix doesn’t make decisions for me - it gives me the context and options to make better decisions faster.
The value isn’t in perfect autonomy. It’s in:
- Eliminating context switching
- Proactive information gathering
- Pattern recognition across data silos
- Institutional memory that doesn’t forget
The trust model matters more than the technology. Network isolation, identity boundaries, permission models, and audit trails aren’t optional - they’re what makes agency safe.
Skills and MCP servers are complementary. Use MCP for standardization and complex data operations. Use skills for simple integrations and framework-specific features. Combine both for maximum flexibility.
The best AI assistant is the one you don’t notice. When Jairix works well, it feels like having a highly competent colleague who:
- Remembers everything
- Never gets tired
- Asks before acting
- Shows their work
- Admits uncertainty
Not artificial intelligence. Augmented intelligence.
And that’s exactly what I wanted.
Resources
- OpenClaw: https://github.com/cyanheads/openclaw
- My MCP Servers: (GitHub links TBD)
- Skills Collection: (GitHub links TBD)
- Ollama Models: https://ollama.com/library
All examples from real production usage. API keys and tokens sanitized.