Inside MIKI: How Agentic AI Planning Actually Works for Revenue Intelligence

In the first post of this series, we established a hard problem: the best AI data agents in the market hit a ceiling of roughly 38% accuracy on real enterprise workloads. The root cause is not the underlying model. It is the planning layer. Systems that treat data reasoning as a one-shot translation task (prompt in, SQL out) fail structurally when confronted with the schema depth, temporal logic, and multi-source complexity that characterise real Revenue Operations environments.
Aviso's MIKI architecture makes a fundamental shift: it treats data reasoning as a search problem over a dynamic reasoning graph. Rather than generating an answer in one pass, MIKI searches for the optimal reasoning path by leveraging a Classical Intelligence Layer. This layer uses K-Nearest Neighbors (KNN) retrieval to surface historical queries, SQL templates, and successful execution traces. By searching over past successful plans before attempting a cold-start generation, MIKI avoids the hallucinations common in standard LLM wrappers.
The RevOps Challenge: Systems Failure at Scale
Linear systems fail in RevOps due to three structural gaps:
Temporal Reasoning Gaps: Revenue data is time-series dependent. Traditional systems lack the Snapshot Strategy required to distinguish between the "latest" state and historical trend windows (e.g., comparing current pipeline against a T-minus-30-day snapshot).
Multi-Source Fusion Complexity: Answers to critical questions (e.g., "Why is our forecast slipping?") require joining Deals, Forecasts, Activities, and Historical snapshots, a level of join complexity where standard SQL models collapse.
Hypothesis Testing Limitations: Executive decision-making requires testing multiple "Whys." Linear systems cannot execute parallel hypotheses, such as determining if a pipeline dip is due to poor engagement or aggressive stage movement, before presenting a result.
Enterprise data reasoning must be treated as a search problem over multiple reasoning paths, where planning, execution, and evaluation continuously refine each other. By shifting from a "Query Engine" to an "Operating Layer," MIKI closes the gap between raw data and executive strategy.
MIKI: A Search-Based Agentic Planning System for High-Complexity Revenue Intelligence
1. The Foundation: A Blackboard Instead of a Chain
Most AI orchestration frameworks, LangChain being the most widely deployed example, wire agents together in a linear sequence. Agent A processes the query, passes a message to Agent B, which passes to Agent C, and so on. The architectural problem with this model is simple: context degrades at every handoff. By the time a downstream agent receives its input, the original intent has often been compressed, paraphrased, or lost entirely. This is the "telephone problem" of multi-agent systems.
MIKI replaces linear chaining with a Blackboard architecture. Rather than passing messages between agents sequentially, all agents in the MIKI system read from and write to a single shared memory layer called the Blackboard. There is no chain. There is a shared workspace. In MIKI, the Blackboard serves as the central Search Space and single source of truth.
What the Blackboard Tracks
dag_outputs: The persisted results of every execution node, available to all agents at any point in the reasoning process
Path scores: A running evaluation of how well each reasoning path is performing against correctness, completeness, and insight quality criteria
Intermediate results: Partial outputs that downstream agents can build on without triggering a re-run
The practical consequence is significant. Specialist agents, including the SQL agent, the Risk agent, the AMA (Ask Me Anything) agent, and the Composer, can operate asynchronously. Each reads the current state of the Blackboard, contributes its output, and the system progresses without any single agent becoming a bottleneck or a point of context loss.
| Feature | Linear Agent Chaining | Blackboard Reasoning (MIKI) |
|---|---|---|
| State Management | Sequential, transient | Centralised, persistent |
| Data Flow | A to B to C (rigid, one-way) | Multi-directional asynchronous read/write |
| Hypothesis Exploration | Single path only | Parallel hypotheses running simultaneously |
| Intermediate Results | Lost between agent hops | Persistent and reusable across agents |
| Conflict Resolution | Requires full restart | Resolved via shared-state evaluation |
For B2B technology leaders evaluating AI platforms, the distinction matters operationally. A Blackboard system can recover mid-execution from a failed SQL path without discarding everything computed before it. A linear chain cannot.
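As a minimal sketch, a Blackboard is simply a shared workspace that every agent reads from and writes to. The class and method names below are illustrative, not MIKI's actual API; the point is that state persists centrally rather than being passed hop-to-hop:

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Blackboard:
    """Shared workspace: every agent reads and writes here, so no
    context is lost at handoffs the way it is in a linear chain."""
    dag_outputs: dict[str, Any] = field(default_factory=dict)    # persisted node results
    path_scores: dict[str, float] = field(default_factory=dict)  # running path evaluations
    intermediate: dict[str, Any] = field(default_factory=dict)   # reusable partial outputs

    def post(self, section: str, key: str, value: Any) -> None:
        getattr(self, section)[key] = value

    def read(self, section: str, key: str, default: Any = None) -> Any:
        return getattr(self, section).get(key, default)

# Two agents cooperating through shared state rather than a message chain.
bb = Blackboard()
bb.post("dag_outputs", "sql_node_1", {"rows": 1423})   # SQL agent writes its result
rows = bb.read("dag_outputs", "sql_node_1")["rows"]    # Risk agent reads it directly
bb.post("path_scores", "path_A", 0.82)                 # an evaluator scores the path
```

Because `dag_outputs` persists on the Blackboard, a failed path downstream does not invalidate `sql_node_1`; any other agent can still reuse it.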
2. Multi-Path Planning: Why One Reasoning Route Is Never Enough
Once the shared memory layer is in place, the next design decision is how MIKI generates its initial plan for answering a query.
Traditional systems generate one plan and execute it. MIKI generates multiple competing plans simultaneously, each representing a different hypothesis about how to answer the question. These plans are expressed as Directed Acyclic Graphs (DAGs): structured maps of the reasoning steps, data sources, and SQL operations required to produce an answer.
How Beam-Style Execution Works
MIKI's execution engine borrows conceptually from beam search in natural language processing. Rather than committing to a single reasoning path early, the system maintains a beam of candidate DAGs running in parallel. As each path progresses, the system evaluates it against three criteria:
Correctness: Does the SQL generated at each node pass deterministic validation against the schema and syntax rules?
Completeness: Does the reasoning path address all relevant dimensions of the query, including time window, segment, and hierarchy?
Insight Quality: Does the path produce diagnostic reasoning, meaning it explains why something is happening, rather than simply returning a result set?
Paths that fail SQL validation, return null result sets, or produce answers that do not address the full scope of the query are pruned. The system promotes the paths that score highest across all three criteria and converges on a final answer only when it has identified the most statistically and logically sound route.
This is the core architectural shift: MIKI treats planning as a search problem, not a generation problem. The system is not trying to guess the right answer on the first attempt. It is searching for the best reasoning path across multiple hypotheses.
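Beam-style pruning over candidate plans can be sketched as follows. The equal-weight scoring and beam width here are illustrative assumptions, not MIKI's actual parameters:

```python
def path_score(path: dict) -> float:
    """Average of the three evaluation criteria; a path that fails
    deterministic SQL validation (correctness == 0) scores zero."""
    if path["correctness"] == 0:
        return 0.0
    return (path["correctness"] + path["completeness"] + path["insight"]) / 3

def prune_beam(candidates: list[dict], beam_width: int = 2) -> list[dict]:
    """Keep only the top-scoring reasoning paths, as in beam search."""
    scored = [(path_score(p), p) for p in candidates]
    survivors = [p for s, p in sorted(scored, key=lambda x: -x[0]) if s > 0]
    return survivors[:beam_width]

candidates = [
    {"name": "path_A", "correctness": 1, "completeness": 0.9, "insight": 0.8},
    {"name": "path_B", "correctness": 1, "completeness": 0.5, "insight": 0.4},
    {"name": "path_C", "correctness": 0, "completeness": 0.9, "insight": 0.9},  # failed validation
]
beam = prune_beam(candidates)  # path_C is pruned outright; path_A outranks path_B
```

In a real system this pruning runs repeatedly as each candidate DAG progresses, so weak hypotheses are abandoned early rather than executed to completion.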
3. The Taxonomy Layer: Bridging Intent and Execution
One of the most consequential reasons standard AI data agents fail at enterprise scale is the gap between natural language intent and physical database schema. A RevOps user asking "Why is our forecast slipping in the Enterprise segment?" is expressing a business concept. Translating that directly into SQL against a schema with thousands of nested, multi-entity relationships is where one-shot systems collapse.
MIKI inserts a structured translation layer between user intent and execution. Before a single line of SQL is generated, the system maps the query through an Ontology Graph: a formal representation of the business domain built from three components.
| Component | What It Defines | Example |
|---|---|---|
| Event Types | Standardised definitions for revenue movements | Stage transition, forecast change, deal creation |
| Signal Types | Logical definitions for identifying risk, growth, or anomaly | Silent risk, volatility flag, forecast slip |
| Entity Relationships | The hierarchy between business objects | Deal belongs to segment belongs to node belongs to account |
The execution flow is deterministic: the user's natural language intent is mapped to one or more Ontology nodes; that mapping determines which signals and events are relevant, which in turn shapes the DAG that gets constructed and executed against the physical data layer. Ambiguity is resolved at the intent stage, before execution begins.
For enterprise RevOps teams, this means the system applies consistent business logic regardless of how a question is phrased. "Which deals are at risk" and "show me deals with no recent activity in high-ACV segments" should produce the same class of reasoning. With a Taxonomy Layer, they do.
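The intent-to-ontology mapping can be illustrated with a toy fragment. The keyword triggers below stand in for real natural-language understanding, and the ontology entries are hypothetical, not MIKI's actual definitions:

```python
# Hypothetical ontology fragment: a signal type with its logical definition.
ONTOLOGY = {
    "silent_risk": {
        "definition": "high-value deal with no recent engagement",
        "events": ["activity_logged"],
        "entities": ["deal", "segment"],
    },
}

# Toy intent mapper: keyword triggers standing in for real NLU.
SIGNAL_TRIGGERS = {
    "silent_risk": ["at risk", "no recent activity", "no engagement"],
}

def map_intent(question: str) -> list[str]:
    """Resolve a natural-language question to ontology signal nodes."""
    q = question.lower()
    return [sig for sig, triggers in SIGNAL_TRIGGERS.items()
            if any(t in q for t in triggers)]

# Two differently phrased questions resolve to the same signal class,
# so downstream DAG construction applies the same business logic to both.
a = map_intent("Which deals are at risk?")
b = map_intent("Show me deals with no recent activity in high-ACV segments")
```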
4. The Pipeline Pentagon Agent: From Data Retrieval to Diagnosis
Most BI tools follow a simple pattern: a user requests data, the system fetches it, and the result is displayed. The cognitive work of interpretation, the "so what" analysis, is left to the human.
The Pipeline Pentagon Agent is MIKI's answer to that gap. Rather than simply fetching data, it produces a diagnosis. It does this by cross-referencing five reasoning axes simultaneously, pulling from multiple data sources and applying conditional logic to identify patterns that no single data source would reveal on its own.
The Five Reasoning Axes
Pipeline Health: Total pipeline size, coverage ratio relative to quota, and distribution across segments and stages
Forecast Integrity: Comparing commit and best-case scenarios against historical actuals to identify systematic over- or under-calling
Deal Momentum: Stage velocity analysis, identifying deals moving too fast (potential sandbagging) or stalling (at-risk)
Engagement Signals: Fusing CRM activity data (calls, emails, meeting notes) with deal status to surface periods of inactivity against high-value opportunities
Historical Context: Applying a Snapshot Strategy to compare the current state of the pipeline against T-minus-30-day and quarter-over-quarter baselines
The Conditional Logic Layer
What separates the Pentagon Agent from a standard multi-source query is the layer of "If/Then" reasoning applied to the fused data. This is what produces diagnostic output rather than raw rows.
| Condition | Diagnosis Produced |
|---|---|
| Stage movement increasing AND win score decreasing | Volatility flag: deal progressing without improving close confidence |
| High ACV deal with no activity in 14+ days | Silent risk: high-value opportunity with no engagement signal |
| Pushed deals exceeding historical average rate | Forecast risk: systematic slippage pattern emerging |
| Commit amount rising but pipeline coverage falling | Coverage gap: committed forecast is not supported by sufficient pipeline |
The output is not a table of deals. It is a structured narrative that explains why a specific segment, rep, or product line is underperforming and which data signals are driving that conclusion. Revenue leaders get a diagnosis they can act on, not a dataset they need to interpret.
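This If/Then layer amounts to a small rule set over fused signals. A minimal sketch follows; the ACV and inactivity thresholds are illustrative assumptions, not MIKI's actual values:

```python
def diagnose(deal: dict) -> list[str]:
    """Apply If/Then rules to fused deal signals to produce diagnoses
    rather than raw rows (thresholds are illustrative)."""
    findings = []
    if deal["stage_moves_30d"] >= 3 and deal["win_score_delta"] < 0:
        findings.append(
            "volatility flag: deal progressing without improving close confidence")
    if deal["acv"] >= 100_000 and deal["days_since_activity"] >= 14:
        findings.append(
            "silent risk: high-value opportunity with no engagement signal")
    return findings

# A deal that is moving stages while win confidence drops AND has gone quiet.
deal = {"stage_moves_30d": 4, "win_score_delta": -0.1,
        "acv": 250_000, "days_since_activity": 21}
findings = diagnose(deal)  # triggers both rules
```

The stage-movement count comes from deal history, the win score from forecasting, and the activity gap from CRM data: no single source would fire either rule on its own.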
5. Conversational Depth: How MIKI Handles Follow-Up Questions
Enterprise analytical workflows are rarely resolved in a single exchange. A revenue leader reviewing pipeline health will ask an initial question, receive an answer, and then drill down: "Which deals in that segment have had no activity this week?" followed by "How does that compare to the same point last quarter?" followed by "What is the rep coverage breakdown for those accounts?"
Standard AI systems treat each follow-up as a new query. The system discards its previous reasoning, starts fresh, and recomputes. This is computationally expensive and breaks the coherence of a multi-turn analytical conversation.
DAG Mutation: Modifying the Plan, Not Rebuilding It
MIKI maintains conversational persistence across up to seven levels of follow-up depth. Rather than rebuilding the reasoning plan from scratch for each follow-up, the system applies one of five mutation operators to modify the existing DAG.
| Operator | What It Does | Typical Use Case |
|---|---|---|
| Extend | Adds new reasoning steps to an existing path | Adding a time comparison to an existing pipeline analysis |
| Insert | Adds an intermediate step between existing nodes | Adding an activity filter mid-analysis |
| Replace | Swaps one reasoning step for another | Changing the segment filter from Enterprise to Mid-Market |
| Branch | Creates a parallel hypothesis path | Testing whether pipeline slip is due to engagement or stage movement |
| Compare | Runs two existing paths side by side | Comparing this quarter versus last quarter on the same dimensions |
The system also applies planning efficiency logic. If the follow-up query is substantially similar to the previous one and the existing dag_outputs are still valid, MIKI reuses them directly with zero additional planning overhead. For partial changes, only the affected portion of the DAG is recomputed. A full rebuild is triggered only when the user's intent has changed fundamentally.
Additionally, MIKI's KNN retrieval layer searches historical query traces before constructing any new plan. If a similar query has been answered successfully before, by any user on any tenant, the system surfaces that execution pattern as a starting point. This eliminates cold-start failures and improves planning consistency on recurring business questions.
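The reuse-versus-mutate decision can be sketched as a simple dispatcher. The overlap threshold, the choice of the Extend operator, and the step names below are illustrative assumptions:

```python
def plan_followup(prev_plan: list[str], cached_outputs: dict,
                  overlap: float, intent_changed: bool) -> tuple[str, list[str]]:
    """Choose the cheapest planning strategy for a follow-up query.
    `overlap` is a hypothetical similarity score between the new query
    and the previous one; the 0.9 threshold is illustrative."""
    if intent_changed:
        return "full_rebuild", []            # intent shifted: start a fresh DAG
    if overlap > 0.9 and cached_outputs:
        return "reuse", prev_plan            # zero additional planning overhead
    # Partial change: mutate the existing DAG (here, Extend with one new step)
    # instead of rebuilding, so only the new portion gets recomputed.
    return "extend", prev_plan + ["compare_vs_last_quarter"]

prev_plan = ["fetch_pipeline", "filter_enterprise", "score_risk"]
mode, new_plan = plan_followup(prev_plan, {"fetch_pipeline": "..."},
                               overlap=0.6, intent_changed=False)
# A moderately different follow-up extends the plan rather than rebuilding it.
```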
6. The Critic and Composer: Ensuring Reliability Before Output
Analytical systems used for executive decision-making carry a high cost of error. A hallucinated number in a pipeline review or a miscalculated forecast comparison does not just produce a wrong answer; it erodes trust in the entire system. MIKI enforces a fail-closed validation protocol before any output reaches the user.
The Critic Layer
Every reasoning path in MIKI passes through a Critic agent before its output is accepted. The Critic performs four functions:
SQL syntax and schema validation: Confirming that generated queries are structurally correct and reference valid fields
Rule-based enforcement: Applying business logic constraints, such as ensuring forecast comparisons use correct time windows
Error patching: Correcting minor execution errors mid-flight without discarding the full reasoning path
Path scoring and pruning: Assigning a quality score to each candidate path and eliminating underperformers before the final composition step
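A fail-closed Critic check can be sketched minimally as two gates: schema validation and a quality threshold. The toy schema, threshold, and field names below are assumptions for illustration:

```python
VALID_FIELDS = {"deal_id", "stage", "acv", "close_date"}  # toy schema

def critic(sql_fields: set[str], score: float,
           threshold: float = 0.7) -> tuple[bool, str]:
    """Fail-closed check: a path must reference only valid schema fields
    AND clear the quality threshold, or it is rejected before output."""
    if not sql_fields <= VALID_FIELDS:
        return False, "schema validation failed"
    if score < threshold:
        return False, "pruned: below quality threshold"
    return True, "accepted"

ok, reason = critic({"deal_id", "stage"}, score=0.85)
bad, why = critic({"deal_id", "pipline_stage"}, score=0.95)  # misspelled field fails closed
```

Note the second call: even a high-scoring path is rejected when its SQL references an invalid field, which is what "fail-closed" means in practice.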
The Composer Layer
For queries that require output from multiple reasoning paths or multiple data sources, the Composer agent handles synthesis. It resolves conflicts between data sources (for example, when two APIs return different activity counts for the same deal), applies semantic validation to confirm that the assembled result set is internally consistent, and produces a narrative output that ties the data together into an actionable recommendation.
For the most complex analytical tasks, such as root cause analysis on a missed forecast or a full deal retrospective, MIKI employs a Deep Research Planner. This long-running harness performs a Breadth-First Search across engagement signals, stage movement data, activity logs, and deal notes, then passes the assembled evidence to the Composer to produce an executive-level summary. The output is a structured narrative, not a data dump.
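The Deep Research Planner's breadth-first sweep can be illustrated with a toy evidence graph. The source names and graph edges here are hypothetical, standing in for the real engagement, stage-movement, activity, and notes sources:

```python
from collections import deque

# Hypothetical evidence graph: each source links to follow-on sources worth checking.
EVIDENCE_GRAPH = {
    "missed_forecast": ["engagement_signals", "stage_movements"],
    "engagement_signals": ["activity_logs"],
    "stage_movements": ["deal_notes"],
    "activity_logs": [],
    "deal_notes": [],
}

def gather_evidence(root: str) -> list[str]:
    """Breadth-first sweep across evidence sources: examine all direct
    signals before drilling into deeper ones, then hand off to the Composer."""
    seen, order, queue = {root}, [], deque([root])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in EVIDENCE_GRAPH.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order

trail = gather_evidence("missed_forecast")
```

Breadth-first order matters here: the planner surveys every top-level signal class before committing effort to any single deep dive, which keeps the evidence base balanced.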
What This Architecture Means for Revenue Leaders
The architecture of MIKI marks a move from "chatbots" to Autonomous Reasoning Systems. In the enterprise, the quality of planning determines the quality of decisions. By combining search-based reasoning, a shared-state Blackboard, and ontology-driven planning, MIKI moves past the limitations of one-shot Text-to-SQL systems.
The future of MIKI lies in the Meta-Planner, a master coordinating layer that dynamically selects the optimal planning mode, whether Pentagon, Taxonomy, or Deep Research, based on the task's complexity. This is the realization of a self-improving reasoning system: an operating layer that provides the depth, reliability, and strategic insight required for modern revenue leadership.
The result is a system that can be used for executive decision-making with confidence, not one that requires the user to manually verify its work.
FAQs
What is agentic AI? Agentic AI is a class of AI systems that can plan, reason, and adapt to take autonomous action across multi-step workflows — rather than executing pre-defined steps. Where traditional AI systems generate one answer in one pass, agentic AI maintains state, evaluates multiple reasoning paths, and adjusts its approach when intermediate results don't hold up. MIKI is an agentic AI system built specifically for enterprise revenue intelligence.
What is agentic planning? Agentic planning is the part of an agentic AI system that decides how to answer a question — what steps to take, in what order, and with what dependencies between them. In MIKI, agentic planning is treated as a search problem: the system generates multiple competing plans (each expressed as a Directed Acyclic Graph), runs them in parallel, evaluates each against correctness, completeness, and insight quality, and converges on the best path.
What is a Blackboard architecture in AI? A Blackboard architecture is a multi-agent design pattern where all agents read from and write to a single shared memory layer — the "Blackboard" — instead of passing messages in a fixed sequence. This avoids the context-loss problem of linear agent chaining (sometimes called the "telephone problem"), where intent gets compressed or paraphrased at every handoff. MIKI uses a Blackboard as its central search space and single source of truth.
What is the difference between linear agent chaining and Blackboard reasoning? Linear chaining (the pattern used by frameworks like LangChain) wires agents together in a fixed sequence: Agent A → Agent B → Agent C. Each handoff loses context, and a failure mid-chain typically requires restarting the workflow. Blackboard reasoning replaces the chain with a shared memory layer all agents read and write to. State persists, agents work asynchronously, parallel hypotheses can run simultaneously, and a failed reasoning path doesn't discard the work that came before it.
What is multi-path planning in AI agents? Multi-path planning is the practice of generating and running several competing reasoning plans in parallel — instead of committing to one plan and executing it linearly. Each plan is evaluated against correctness, completeness, and insight quality, and the best path is promoted while weaker paths are pruned. MIKI's multi-path planning borrows conceptually from beam search in natural language processing.
How does MIKI handle follow-up questions in an analytical conversation? Rather than rebuilding the reasoning plan from scratch for every follow-up, MIKI applies one of five DAG mutation operators — Extend, Insert, Replace, Branch, or Compare — to modify the existing plan. If the follow-up overlaps substantially with the previous query, MIKI reuses cached dag_outputs directly. Only the affected portion of the DAG is recomputed; a full rebuild is triggered only when intent has changed fundamentally. This keeps multi-turn analytical conversations coherent and computationally efficient.
How does MIKI ensure SQL and analytical accuracy? MIKI enforces a fail-closed validation protocol. Every reasoning path passes through a Critic agent that performs SQL syntax and schema validation, applies rule-based business-logic checks, patches minor execution errors mid-flight, and assigns a quality score for path scoring and pruning. For multi-source queries, a Composer agent resolves cross-source conflicts and applies semantic validation to confirm the assembled output is internally consistent before any result reaches the user.
Why does standard text-to-SQL fail at enterprise scale? One-shot text-to-SQL systems treat data reasoning as a translation task: prompt in, SQL out. They fail structurally on real enterprise workloads because of three gaps: temporal reasoning (no clean way to compare current state to historical snapshots), multi-source fusion (joining deals, forecasts, activities, and snapshots is beyond their join-complexity ceiling), and hypothesis testing (executive questions require running multiple "Whys" in parallel, which a linear system can't do). MIKI was built specifically to solve those three gaps.
What does MIKI mean by "search-based reasoning"? Search-based reasoning treats answering a question as exploring a space of possible reasoning paths — rather than generating a single answer in one pass. MIKI maintains a beam of candidate plans, scores each against correctness, completeness, and insight quality, and prunes weaker paths as stronger ones progress. The system is searching for the best route to the answer, not guessing the answer in one attempt.
