Axiom Engine · v8.1.0

A self-hosted, AI-enabled operations platform. Multi-surface orchestration combining local and cloud LLM access, automation building, telemetry, and content operations. Opinionated toward local ownership rather than SaaS-style multi-user abstractions — a real cockpit for AI-assisted work.

Self-hosted · Hybrid LLM · Flow-based · Observable · Active deployment
01 — The vision · From the manifesto

The Operations Console is a multi-surface orchestration environment that combines local and cloud LLM access, automation building, monitoring, and content operations.

  • Local ownership. Runs on hardware you control. No SaaS abstraction layer between you and the work.
  • Low-friction iteration. The cockpit is alive — flows, jobs, and prompts loop in seconds, not hours.
  • Operator workflows. Built for the person doing the work, not the team buying the seats.
02 — Stack · Runtime architecture
  • Frontend: Svelte + Vite · Reactive panels, hot reload, no SSR overhead
  • Backend: Express · Thin REST + WebSocket layer
  • Runtime: Node.js 22 · Single-process, event-driven
  • Persistence: SQLite (better-sqlite3) · Local file, atomic writes, fast reads
  • Deployment: Ubuntu / systemd · User unit on a Linux box, restart on failure
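The systemd arrangement above can be sketched as a user unit. This is a minimal illustration, not the project's actual unit file: the service name, working directory, and entry point are assumptions.

```ini
# ~/.config/systemd/user/axiom.service  (hypothetical name and paths)
[Unit]
Description=Axiom Engine operations console
After=network-online.target

[Service]
WorkingDirectory=%h/axiom
ExecStart=/usr/bin/node server/index.js
Restart=on-failure
RestartSec=5

[Install]
WantedBy=default.target
```

Enabled with `systemctl --user enable --now axiom`; `loginctl enable-linger` keeps the unit running after logout.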
03 — Product surfaces · Four panel families

AI & Chat

Chat agent workspace with model routing and prompt controls. Kanban surface for idea refinement.

  • Chat Agent Workspace
  • Model Routing
  • Idea Refinement Kanban

Orchestration

FlowBuilder visual canvas and Jobs engine for reusable skill pipelines and automations.

  • FlowBuilder Visual Canvas
  • Reusable Skill Pipelines
  • Jobs Engine

Data & Media

Filesystem browsing, Gallery media library, Notes capture, and URL Links registry.

  • Filesystem Browsing
  • Gallery Media Library
  • Notes Capture
  • URL Links Registry

Infrastructure

System status, smart-home Lights, and compute-oriented Processing control surfaces.

  • System Status
  • Smart-home Lights
  • Processing Control
Screenshot pending (npm run harvest:axiom) · Operations Console, Recent runs / Jobs panel · localhost:5173/jobs · success/failure pills, log preview, real-time refresh
04 — Hybrid LLM topology · 6 routes · cloud + local

The system supports a hybrid topology: OpenAI, Anthropic, local Ollama, and Spark-hosted custom vLLM and ComfyUI servers. Every token, local or cloud, flows into the same analytics substrate.

  1. CLOUD · OpenAI · GPT-class, general purpose
  2. CLOUD · Anthropic · Claude: long-form, agentic
  3. LOCAL · Ollama · Local models, offline-safe
  4. LOCAL · Spark vLLM · GPT-OSS-120B nvfp4, Dolphin-Mistral
  5. LOCAL · ComfyUI · Image / video pipelines
  6. LOCAL · Spark Nemotron · Dedicated path, custom skill
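The routing described above can be sketched as a small route table. A hypothetical TypeScript sketch: the model keys, endpoints, and offline fallback policy here are illustrative assumptions, not the console's actual configuration.

```typescript
// Illustrative route table for the hybrid topology; names and
// endpoints are assumptions, not the console's real config.
type Tier = "cloud" | "local";

interface Route {
  tier: Tier;
  provider: string;
  baseUrl: string; // where requests for this route are sent
}

const routes: Record<string, Route> = {
  "gpt":            { tier: "cloud", provider: "OpenAI",         baseUrl: "https://api.openai.com/v1" },
  "claude":         { tier: "cloud", provider: "Anthropic",      baseUrl: "https://api.anthropic.com/v1" },
  "ollama":         { tier: "local", provider: "Ollama",         baseUrl: "http://localhost:11434" },
  "spark-vllm":     { tier: "local", provider: "Spark vLLM",     baseUrl: "http://spark.local:8000/v1" },
  "comfyui":        { tier: "local", provider: "ComfyUI",        baseUrl: "http://spark.local:8188" },
  "spark-nemotron": { tier: "local", provider: "Spark Nemotron", baseUrl: "http://spark.local:8001/v1" },
};

// Resolve a model key to its route; when offline, cloud routes
// fall back to the offline-safe local path.
function resolve(model: string, online: boolean): Route {
  const route = routes[model];
  if (!route) throw new Error(`unknown model route: ${model}`);
  if (!online && route.tier === "cloud") {
    return routes["ollama"]; // offline-safe fallback
  }
  return route;
}
```

Because every route resolves through one function, every token can be tagged local or cloud before it reaches the analytics substrate.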
05 — Jobs & skills engine · Linear pipelines

Generalized skill-pipeline executor

Sequential and parallel execution, loops, and conditionals. Every step is a typed, addressable unit: observable, retryable, and recombinable. The cron layer turns any pipeline into a scheduled background process without leaving the cockpit.

  • cron-scheduler · Time-based triggers
  • cron-ai-builder · LLM-authored job composition
  • cron-self-heal · Failure auto-recovery
  • cron-intelligent-retry · Backoff with state preservation
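The pipeline-plus-retry behaviour described above can be sketched in a few lines. This is an illustrative TypeScript sketch under assumed step and state shapes, not the engine's real contract.

```typescript
// Sequential skill-pipeline runner with per-step retry and backoff.
// Step and state shapes are assumptions for illustration.
interface Step<S> {
  name: string;
  run: (state: S) => Promise<S>; // each step transforms shared state
  retries?: number;              // per-step retry budget
}

const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

// Run steps in order; on failure, retry with exponential backoff while
// preserving the state produced by the last successful step.
async function runPipeline<S>(steps: Step<S>[], initial: S): Promise<S> {
  let state = initial;
  for (const step of steps) {
    const budget = step.retries ?? 2;
    for (let attempt = 0; ; attempt++) {
      try {
        state = await step.run(state);
        break;
      } catch (err) {
        if (attempt >= budget) throw new Error(`${step.name} failed: ${err}`);
        await sleep(10 * 2 ** attempt); // backoff: 10ms, 20ms, 40ms...
      }
    }
  }
  return state;
}
```

Because state is carried across attempts rather than recomputed, a transient failure resumes from the last good step, which is the essence of backoff with state preservation.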
06 — Flow builder layer · Graph orchestration

Graph-based composition with typed connectivity

Branching, looping, and workflow composition on a Svelte Flow canvas. Nodes carry a stable type contract; connections carry shape. The flow you build at 08:00 runs at 23:55 and looks the same — because the connections were never implicit.

  • flow-executor · Server-side runner
  • flow-trigger · Webhook & cron entry
  • flow-node-runtime · Per-node type system
  • flow-versioning · Diff & rollback
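The typed-connectivity idea can be illustrated with a minimal shape check: each node declares port shapes, and an edge is legal only when the shapes agree. Port names and the shape vocabulary here are assumptions, not the canvas's actual type system.

```typescript
// Hypothetical sketch of typed connections between flow nodes.
type Shape = "text" | "json" | "audio" | "image";

interface NodeType {
  id: string;
  inputs: Record<string, Shape>;
  outputs: Record<string, Shape>;
}

interface Edge {
  from: { node: string; port: string };
  to: { node: string; port: string };
}

// An edge type-checks when the source output shape equals the
// target input shape; missing nodes or ports fail the check.
function checkEdge(types: Map<string, NodeType>, edge: Edge): boolean {
  const src = types.get(edge.from.node)?.outputs[edge.from.port];
  const dst = types.get(edge.to.node)?.inputs[edge.to.port];
  return src !== undefined && src === dst;
}
```

Validating edges at build time is what makes the connections explicit rather than implicit, so a graph that type-checks in the morning still type-checks at night.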
Screenshot pending (npm run harvest:axiom) · Flow Builder canvas, Spark TTS pipeline · localhost:5173/flow
07 — Telemetry & analytics · Operational tracking

Current analytics cover run volume, status distribution, token usage, cost estimates, and model durations — with local-vs-cloud distinctions preserved. The shape of what gets surfaced publicly lives at /metrics.
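A single-pass rollup of the kind described might look like the following sketch. The record schema and field names are assumptions for illustration, not the console's actual tables.

```typescript
// Fold per-run records into token/cost/duration summaries,
// preserving the local-vs-cloud split. Schema is hypothetical.
interface RunRecord {
  model: string;
  tier: "local" | "cloud";
  status: "success" | "failure";
  tokens: number;
  costUsd: number;    // 0 for local runs
  durationMs: number;
}

interface TierSummary {
  runs: number;
  tokens: number;
  costUsd: number;
  avgDurationMs: number;
}

function summarize(runs: RunRecord[]): Record<"local" | "cloud", TierSummary> {
  const out = {
    local: { runs: 0, tokens: 0, costUsd: 0, avgDurationMs: 0 },
    cloud: { runs: 0, tokens: 0, costUsd: 0, avgDurationMs: 0 },
  };
  for (const r of runs) {
    const s = out[r.tier];
    // incremental mean keeps the rollup to a single pass over the log
    s.avgDurationMs += (r.durationMs - s.avgDurationMs) / (s.runs + 1);
    s.runs += 1;
    s.tokens += r.tokens;
    s.costUsd += r.costUsd;
  }
  return out;
}
```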

08 — Multi-channel communication · Telegram first

Telegram strategy

Outbound is fully functional — plain text and media attachments. Inbound foundations exist via webhook handling. The plan: a reusable "Telegram Bot Agent" pattern, structured-input normalization for flow outputs, and a memory model parameterized by store_name so each conversation can carry its own continuity.

  • telegram.send · Stable
  • telegram.media · Stable
  • telegram.webhook · In progress
  • telegram.bot-agent · Pattern emerging
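The stable outbound path maps onto the official Bot API's sendMessage method. A hedged sketch: the token and chat id are placeholders, and the send helper needs a real bot token to actually run.

```typescript
// Build a Telegram Bot API sendMessage request.
// Token and chat id here are placeholders.
interface SendMessage {
  url: string;
  body: { chat_id: string | number; text: string; parse_mode?: "MarkdownV2" | "HTML" };
}

function buildSendMessage(token: string, chatId: string | number, text: string): SendMessage {
  return {
    url: `https://api.telegram.org/bot${token}/sendMessage`,
    body: { chat_id: chatId, text },
  };
}

// Hypothetical send helper over the builder; requires a valid bot token.
async function send(token: string, chatId: number, text: string): Promise<void> {
  const { url, body } = buildSendMessage(token, chatId, text);
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`telegram send failed: ${res.status}`);
}
```

Keeping the request builder pure makes it reusable by a future bot-agent pattern, where inbound webhook updates and outbound sends share one message shape.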

09 — Vision: 8 in, 1 out · Strategic pillars
  1. Unified memory spaces. Scattered thoughts fuse into a single, intuitive intelligence the agent can reason across.
  2. Multi-tiered system logs. The silent architect of agentic debugging: structured, unbreakable, the only truth-teller in chaos.
  3. Self-healing retry routines. Persistent, intelligent retries that auto-capture every state shift in the flow.
  4. Cross-workflow data flow. The catalyst that shatters silos and turns fragmented operations into a single, intelligent engine.
  5. Modular components. Every component breathes independently. The escape hatch from spaghetti hell.
  6. Faster than real-time. Neural engines that devour streaming torrents with zero latency.
  7. Living navigation. Every click triggers a cascade of micro-affordances that feel like the surface itself is alive.
  8. 8 in, 1 out. Eight strategic pillars converge into a single coherent operator surface.

10 — Current maturity · As of May 2026

The console is no longer a prototype. It is in an advanced working state, with real capabilities and an increasingly coherent design system. The focus now is on tightening information density and interaction speed.

11 — The road ahead · Next 12 months

The next twelve months are about consolidation, not expansion. Three priorities, in order:

  1. Flow builder maturity. Custom node authoring becomes first-class. The graph layer becomes the primary composition substrate for everything in the cockpit.
  2. Memory that survives sessions. A deeper memory model with parameterized stores, durable across restarts, queryable across flows.
  3. Public metrics surface. The /metrics page becomes the operator's public résumé — turning private telemetry into shareable narrative without leaking proprietary detail.

Status: active deployment · maintained by Anders Jensen · andersjensen1.com