// install · hybrid · advisor recommendation
Hybrid Agent Stack
Two products together: one runtime (OpenClaw or Hermes) + one custom CLI environment (Claude Code or Codex), wired by me with guardrails and routing. 14–21 days. First-3 cohort: 5 000 PLN + a month of retainer free.
// what hybrid is
Hybrid is two products, not three. You pick one ready runtime (OpenClaw for presence in channels, or Hermes as a background worker) and one custom CLI environment (Claude Code or Codex with a custom config). Hermes and OpenClaw functionally overlap, so both at once is redundancy. Underneath they all have the same file structure (identity, memory, skills, rules). Hybrid wires the chosen pair into a safe work system: an integration layer, task routing, guardrails, monitoring, a maintenance playbook. A recommendation from discovery, not a rigid fourth tier.
// what it's built from
What it's built from.
An agent isn't magic or "a model". It's the sum of its files. It starts every session by loading them, so it stays continuous even though it technically restarts from zero each time. These are its organs.
Identity: who it should be
Here we define who the agent is: tone, character, boundaries. For many that's a detail: the agent can simply be plain and impersonal. For some it isn't: my wife likes that her assistant is Dwight, who says "FACT" before answering a question. It can be your favourite fictional character. Identity also covers the model choice: the agent can run on a local LLM or on the best one available. Are you a fan of Gemini, Claude, or Codex? Any of them can be your agent. It's one file, and you edit it yourself.
Memory: what it remembers
Default AI forgets everything after the session. Here it's different. The agent has a file structure it loads at the start of every conversation: company context, decisions, your preferences. And it grows that memory itself: over time it records what you like about its work, what not to do, and when to use which skill. After a week it knows more than at the start.
Tools and skills: what it can do
What the agent can actually execute: concrete procedures, integrations, actions. Each skill is a file with a prompt and guardrails. The agent knows when to run it and when to stop and ask a human. We add and refine skills as the scope of its work grows.
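A skill file can be as small as this. A hypothetical sketch, not a fixed format: the field names, the skill itself, and the escalation flags are invented for illustration.

```python
# One skill = one file: a prompt plus its guardrails, kept together.
# Everything below is an illustrative example, not a real skill definition.
SKILL = {
    "name": "weekly_report",
    "prompt": "Summarize this week's closed tasks into a one-page report.",
    # Guardrails: conditions under which the agent stops and asks a human.
    "ask_human_if": ["data is older than 14 days", "a client is mentioned by name"],
}

def should_ask_human(context: dict) -> bool:
    """Escalate instead of guessing when any guardrail flag is raised."""
    return any(flag in context.get("flags", []) for flag in SKILL["ask_human_if"])
```

Refining a skill is then just editing this file: tighten the prompt, add a flag, and the agent behaves differently from the next run.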
Rules: how it behaves
The system prompt: how it should act, what it never does, when it escalates to you instead of guessing. Your marketing team can review the tone, legal can review the guardrails. You edit it yourself, no ticket.
You change a file, and the agent works the new way from the next conversation. No training, no ticket. The speed of a change is the speed of editing a text file. More on how we build memory: why AI forgets and how to fix it.
// how hybrid uses this
Hybrid takes two sets of these organs, one in the runtime's casing (channels or a background worker) and one in the custom CLI's casing, and wires them together. A task can flow between the two without manual copy-paste, with clear routing (what the runtime does, what the CLI does, where the result lands, who approves) and guardrails written down, not implied. On top of that: monitoring, an effectiveness report and a playbook so the team can maintain it.
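The routing can be pictured as a small table both layers consult. A sketch only: the task types, destinations, and approval flags below are invented examples, not a fixed catalogue.

```python
# Illustrative routing table: which side handles a task, where the result
# lands, and whether a human approves first. All entries are hypothetical.
ROUTES = {
    # task type      (handler,   destination,     needs_approval)
    "daily_brief":   ("runtime", "telegram",      False),
    "research":      ("cli",     "drive_folder",  False),
    "client_email":  ("cli",     "drafts",        True),   # human signs off
}

def route(task_type: str) -> tuple:
    """Return (handler, destination, needs_approval) for a task."""
    if task_type not in ROUTES:
        # Guardrail: never guess. Unknown work goes to a human inbox.
        return ("human", "inbox", True)
    return ROUTES[task_type]
```

The point is that the table is written down once and shared: neither layer improvises about who owns a task or who approves the result.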
// what it actually does for you
What combining the two layers gives you.
The point of Hybrid is the flow between "agent in channels / in the background" and "agent at my CLI". A few examples of how that looks:
The worker gathers; the CLI processes
Hermes on a cron assembles the daily brief or research; you open Claude Code and the agent with custom skills processes it further: extracts conclusions, produces drafts, prepares the material for a decision. No copy-paste between tools.
Voice note → a task in the pipeline
You drop a command on Telegram (OpenClaw); the agent registers it, routes it to the right skill. Some of it the agent does on the spot; the heavier part it hands to the CLI environment, which runs it with guardrails.
The team asks in a channel; the answer comes from deep work
Someone in Slack asks something that needs research or document analysis; OpenClaw takes the question, the custom CLI does the deep work, the answer comes back to the channel. One system, two entry points.
Governance across the whole thing
One set of rules for both layers: what the agent does on its own, what needs approval, where the human is in the loop. Written once, applies everywhere.
A solo flavor is enough if you have only one of these worlds. Hybrid makes sense when you need both, plus governance, so it's one system instead of two separate tools.
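One way to picture "written once, applies everywhere": both layers call the same governance check before acting. A hypothetical sketch; the rule sets and action names are invented for illustration.

```python
def decide(action: str, autonomous: set, needs_approval: set) -> str:
    """Shared governance check, consulted by both the runtime and the CLI.

    Returns 'run' (agent acts alone), 'ask' (human approves first),
    or 'escalate' (not covered by the rules: stop and ask).
    """
    if action in autonomous:
        return "run"
    if action in needs_approval:
        return "ask"
    return "escalate"

# One set of rules backs both layers; change it once, both sides follow.
AUTONOMOUS = {"summarize_brief", "file_note"}
NEEDS_APPROVAL = {"send_email", "post_to_slack"}
```

Because the rule sets live in one place, legal or marketing reviews one artifact, not two.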
// who it's for
Who it's for.
You need a runtime for daily work (channels or a background worker) AND your own CLI to extend the workflow. A solo flavor is enough if you only want one of the two.
You have a presence problem (the team doesn't use AI), an execution problem (procedures don't get run), and you do technical work at the CLI. One coherent system instead of three separate orders.
You care that it's clear: what the agent does on its own, what needs approval, how a task flows between the runtime and the CLI, and whether the team can maintain it.
// what you get with the install
What you get with the install.
Everything from the chosen runtime
The full scope of OpenClaw or Hermes: the agent's files, channels/workflows, tuning, docs. See the OpenClaw and Hermes pages.
Everything from the chosen CLI environment
The full scope of Custom: `.claude/` files or Codex config, skills for your tasks, prompts in your voice, memory architecture.
Integration layer runtime ↔ CLI
Wired by me so a task can flow between the two without manual copy-paste.
Task and responsibility routing
What the runtime does, what the CLI environment does, where the result lands, who approves the decision.
Guardrails + governance
Unambiguous, for both layers: what the agent does on its own, what needs approval. Written down, not implied.
3–5 end-to-end workflows
Vs 2–3 in a solo flavor. Full paths across both components, tested on real data.
Monitoring, logs, report + team training
You see what the system does and how it's going. Plus team training and a maintenance playbook.
// how it looks
How it looks.
Sprint 0: discovery
We map the pain points and pick the pair: which runtime, which CLI environment. A recommendation, not a catalogue.
Sprint 1: build
The agent's files for both layers + runtime setup + custom CLI + the integration layer. First end-to-end workflows on real data.
Sprint 2: hardening
Guardrails, routing, monitoring, tuning. 3–5 workflows work, the team knows how to maintain it.
Handoff + playbook
Team training, documentation, a maintenance playbook. Retainer decision after the free month.
// what you provide
What you provide.
Hybrid needs the most context of all the variants: two sets of files live on what you tell me about your work and your procedures. I'll do the default setup, but the real value of Hybrid comes from the customization, which is why discovery is longer here (Sprint 0).
- Your time for discovery (Sprint 0). I need to understand the pain points of both layers and pull the context for the files. I handle the rest of the technical side.
- LLM subscription(s) or API key(s), depending on the chosen pair (ChatGPT Plus with OAuth, Claude Pro, Anthropic/OpenAI/Gemini key).
- A local or VPS environment (Mac, Linux, Docker).
- Access to the platforms and data sources for the runtime + a working CLI environment.
- Consent to a case study and a published quote (condition of the first-3 cohort).
// scope
Scope.
- 1 runtime from {OpenClaw, Hermes}: full scope
- 1 CLI environment from {Claude Code, Codex}: full scope
- Integration layer + task routing
- Guardrails + governance (both layers)
- 3–5 end-to-end workflows
- Monitoring + team training + playbook
Out of scope:
- A third component (OpenClaw + Hermes at once = redundancy, we don't do it)
- Integrations with legacy enterprise CRM/ERP (quoted separately)
- Compliance audit (GDPR/HIPAA; separate)
- Multi-tenant / multi-workspace at enterprise scale
// timeline · effort · price
Timeline, effort, price.
Timeline: 14–21 working days (Sprint 0 discovery + Sprint 1 build + Sprint 2 hardening).
Effort: the heaviest flavor; 14–21 days end to end.
Price, first-3 cohort: 5 000 PLN flat for the bundle (1 runtime + 1 CLI environment) + a month of retainer free + 2 000 PLN/mo after the first month, in exchange for consent to a case study and a quote. After the cohort: from 6 000 PLN install (no upper bound, scope-driven) + optional 2 000 PLN/mo retainer.
Retainer: 2 000 PLN/mo, first month free. Decision after it; Hybrid has more moving parts, so the maintenance value is real, but the choice is yours.
The LLM subscription or API key is on your side (ChatGPT Plus with OAuth ~$20/mo preferred: flat, no per-query burn; or an Anthropic/OpenAI/Gemini key). Contract disclaimer: token cost is yours.
// video
A walkthrough and demo of how it works
Coming soon: a clip showing how the runtime and CLI environment wire into one system, with a task flowing between the layers and governance on top. For now the fastest path is a free diagnostic call.
// faq
FAQ
Why not OpenClaw AND Hermes together?
Because they functionally overlap: both have a multi-channel gateway. Two at once is redundancy not worth paying for. Hybrid is one runtime + one custom CLI environment: two different problems, not the same one twice.
How do you pick which runtime and which CLI?
In discovery. The dominant pain is presence in channels → OpenClaw; procedures in the background → Hermes. CLI: you work on Anthropic models → Claude Code; OpenAI → Codex. A recommendation based on your work, not a menu pick.
Why is the first-3 cohort cheaper?
Two solo flavors bought separately = 3–5k install + 2 × 1k/mo retainer. The 5k bundle is a roughly neutral install price, but you get the integration layer, governance and a free month of retainer, and I get the case studies I need right now. After the cohort the full pricing returns, from 6k.
When is a solo flavor enough?
When you have only one of the two problems. Presence only → OpenClaw solo. Execution only → Hermes solo. Technical CLI work only → Custom solo. Hybrid makes sense with multi-pain plus a need for governance.
Do I pay for tokens separately?
Yes. LLM subscriptions or API keys are on your side, depending on the chosen pair. Most clients go with ChatGPT Plus + OAuth for the runtime and an Anthropic key or Pro for the CLI.
// start
Where can Hybrid help you?
Free diagnostic call, 30 min. We map the pain points and I pick the runtime + CLI pair. No catalogue, no 30-second quote.