How We Chose the Best AI Agent Flow for Sales Estimation

Krystian Dziubinski

Aug 22, 2025 • 19 min read

What started as a simple AI assistant in Slack is now evolving into a full-blown sales copilot—one we’re shaping to take on more complex, high-value workflows like sales estimation.

Estimation might sound straightforward, but it demands coordination across scattered information (Slack, Google Drive, transcripts) and alignment with internal frameworks. It’s not just about automation—it’s about judgment, context, and decision support.

As we prepare to extend Omega’s capabilities into this space, we’ve taken a step back to explore the best possible agent flow architecture. We reviewed several orchestration models, mapped the trade-offs, and aligned on a hybrid direction that we’re now starting to implement.

This article shares that behind-the-scenes process:
✅ What tech stack and agent setup we chose
✅ Why we landed on a hybrid flow (and when it works best)
✅ What we learned from evaluating multiple architectural options

Background: From Omega to Estimation

Omega is Netguru’s internal AI agent platform, built to support real workflows—not just respond to prompts. Instead of relying on generic tools, we designed a modular, multi-agent system that integrates tightly with tools our team already uses. This lets us tailor each capability to the realities of day-to-day sales work.

Omega isn’t just one bot—it’s a system of specialized agents that collaborate on tasks like summarizing calls, retrieving documents, or generating proposals. With role-based orchestration and strong context awareness, Omega acts more like a teammate than a tool.

Off-the-shelf assistants couldn’t meet our needs for orchestration logic, real-time context handling, or integration flexibility. We needed something we could extend, debug, and adapt—without disrupting how our team works.

Our current focus? Bringing Omega into the estimation process.

Why Sales Estimation Needed a Smarter Flow

Sales estimation might sound like a spreadsheet problem, but in practice, it’s a layered, cross-functional task that touches product, design, engineering, and sales. And for us, it had become a bottleneck.

The problem: manual effort, inconsistent results, low scalability

Right now, sales estimation relies heavily on manual coordination. Reps collect inputs from call transcripts, Slack threads, and scattered documents—then piece everything together themselves. It’s time-consuming, varies across deals, and makes it hard to keep full context in view.

As our pipeline grows, this approach is becoming harder to scale. Expert time is limited, assumptions can get lost, and outcomes often depend on who happens to drive the process.

That’s why we’re building an agent-based estimation flow—to bring structure, clarity, and collaboration into one consistent system.

What we wanted: fast, modular, accurate estimation with agents

We need a better way—one that can break down estimation into logical steps, assign tasks to specialized agents, and involve humans only when needed.

Rather than aiming to automate the entire process end-to-end, our goal is to make estimation smarter, more structured, and easier to manage as our pipeline grows.

Here’s what we’re designing for:

  • Modular and reusable logic
  • Agent collaboration with clear roles
  • Context-aware outputs
  • Built-in checkpoints for quality and alignment

Business context: high stakes, cross-functional workflow

Estimation isn’t just a backend process. It directly shapes proposals, scopes, staffing, and client expectations. A weak estimate introduces risk—missed deadlines, budget overruns, or misaligned deliverables.

That’s why designing the right agent flow matters. It’s not just about speed—it’s about building trust, ensuring repeatability, and giving the sales team a system that truly supports decision-making.

The Technical Challenge: Designing a Multi-Agent Sales Flow

Building a sales estimation flow isn’t just about deciding what agents should do—it’s about how they collaborate, what tools they use, and when to involve humans.

We knew from the start that this wouldn’t be a simple prompt-response setup. To handle estimation properly, the system had to reason over fragmented data, coordinate multiple agents with specialized roles, and support flexible, asynchronous conversations in Slack.

AI-first approach: Slack + Autogen + Google Drive + LangFuse

Our stack reflects the nature of the problem:

  • Slack is where work happens—estimation has to live there.
  • Autogen enables multi-agent orchestration, letting us define agent roles and workflows with real logic.
  • Google Drive holds all project reference material, so agents needed custom tools to read, write, and reason over docs.
  • LangFuse tracks performance, logs, and prompts—crucial for iterating safely in a live environment.

Rather than trying to retrofit traditional tools, we leaned fully into an AI-native approach—agents that collaborate, reference shared memory, and operate inside natural team workflows.

Flow constraints: human-in-the-loop, shared memory, asynchronous interactions

We designed the flow around a few non-negotiables:

  • Human-in-the-loop: Estimation involves judgment, so agents must know when to pause, escalate, or request input—not just output a number.
  • Shared memory: Multiple agents working on the same task need a consistent context—especially across Slack threads and tool outputs.
  • Async interactions: In real workflows, users don’t respond immediately. The system must handle pauses, timeouts, and resumed conversations without breaking logic.
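The pause/resume behavior described above can be sketched as a small state machine. This is a dependency-free illustration, not Omega's actual schema: the state names, fields, and clarifying question are all assumptions made for the example.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class FlowState(Enum):
    RUNNING = "running"
    AWAITING_HUMAN = "awaiting_human"
    DONE = "done"

@dataclass
class EstimationFlow:
    state: FlowState = FlowState.RUNNING
    context: dict = field(default_factory=dict)
    pending_question: Optional[str] = None

    def step(self, user_input: Optional[str] = None) -> str:
        # Resume: fold the (possibly delayed) human answer back into context.
        if self.state is FlowState.AWAITING_HUMAN:
            if user_input is None:
                return self.pending_question  # still waiting, re-ask
            self.context["clarification"] = user_input
            self.state = FlowState.RUNNING

        # Escalate instead of guessing when a required input is missing.
        if "scope_doc" not in self.context:
            self.state = FlowState.AWAITING_HUMAN
            self.pending_question = "Which scope document should I use?"
            return self.pending_question

        self.state = FlowState.DONE
        return f"Estimate drafted from {self.context['scope_doc']}"

flow = EstimationFlow()
print(flow.step())                  # pauses and asks for input
flow.context["scope_doc"] = "Project-X scope v2"
print(flow.step("use the v2 doc"))  # resumes hours later without breaking
```

Because all progress lives in explicit state rather than in a blocking call, a Slack reply that arrives hours later simply resumes the flow where it paused.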

These constraints forced us to think beyond sequential pipelines—and toward something more adaptive.

Key components: SelectorPrompt, GraphFlow, custom Google Drive tools

To make it all work, we relied on three core building blocks:

  • SelectorPrompt – a prompt-based switchboard that chooses the right agent (or agent team) for each subtask based on input context.
  • GraphFlow – a flexible orchestration structure that maps how agents interact over time, including fallback paths, approvals, and phase transitions.
  • Custom Drive tools – purpose-built functions that let agents extract feature lists, read scope docs, check for missing modules, or generate outputs in Drive-friendly formats.

Together, these tools enable a flow that’s modular, trackable, and adaptable—without needing to hardcode every scenario.
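To make the SelectorPrompt idea concrete, here is a simplified, dependency-free sketch of the routing step: inspect the incoming subtask and pick an agent team. In the real system this decision is made by an LLM prompt (via AutoGen's selector pattern), not keyword rules; the team names and rules below are illustrative only.

```python
# Hypothetical agent teams; real team composition is assembled per phase.
AGENT_TEAMS = {
    "scope_breakdown": ["module_architect", "critic", "domain_expert"],
    "doc_retrieval": ["drive_reader"],
    "proposal_draft": ["writer", "critic"],
}

def select_team(message: str) -> list:
    """Route a subtask to an agent team based on its content."""
    text = message.lower()
    if "scope" in text or "modules" in text:
        return AGENT_TEAMS["scope_breakdown"]
    if "find" in text or "document" in text:
        return AGENT_TEAMS["doc_retrieval"]
    return AGENT_TEAMS["proposal_draft"]

print(select_team("Break the scope into modules"))
# → ['module_architect', 'critic', 'domain_expert']
```

Swapping the `if` ladder for an LLM call is what turns this from a brittle router into a SelectorPrompt: the interface (message in, team out) stays the same.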

The Options We Considered

Once we had the foundation in place—Slack as the interface, AutoGen as the backbone, and Google Drive as our knowledge base—we needed to decide: how should agents work together to estimate sales projects effectively?

Estimation is multi-step, collaborative, and context-heavy. It requires a system that can break down complex prompts, surface missing assumptions, validate scopes, and still leave space for human judgment. We knew a single-agent setup wouldn’t cut it.

So, we explored four architectural models—each with different trade-offs in structure, speed, flexibility, and complexity. To choose wisely, we evaluated them based on:

  • User experience (UX): Would the flow feel smooth and understandable in Slack?
  • Latency: Could it deliver insights fast enough for real-time conversations?
  • Reusability: Could we reuse agents across other sales tasks?
  • Maintainability: How hard would it be to scale, debug, and evolve the system?

Here’s what we considered:

1. Orchestrated Agent Pipeline

A fixed, step-by-step sequence where each agent handles a stage and hands off to the next.

✅ Clear structure and separation of concerns
✅ Easy to debug and monitor
✅ Natural mapping to flow diagrams
✅ Reusable components

❌ Rigid flow limits flexibility
❌ Higher latency from sequential steps
❌ Complex state sharing between agents
❌ Requires many specialized agents

2. Event-Driven Multi-Agent System

Agents respond to specific events and can work in parallel, with Slack threads acting as the event bus.

✅ Highly scalable and reactive
✅ Enables real-time UX with parallel agents
✅ Resilient to failures (event replay)

❌ Complex event handling logic
❌ Risk of race conditions
❌ Harder to debug and trace flows
❌ Needs robust error handling

3. Hybrid Orchestrator with Dynamic Agent Teams

Uses a flexible orchestrator to assign agents dynamically for each workflow phase. Builds on our SelectorGroupChat pattern.

✅ Leverages existing infrastructure
✅ Modular and reusable agents
✅ Flexible team composition
✅ Minimal architectural changes

❌ Requires significant prompt engineering
❌ Phase transitions must be carefully managed
❌ Less transparent flow logic
❌ Limited parallelization within phases

4. Swarm Agents with Handoff

Agents operate autonomously, handing off tasks based on expertise while sharing full message context.

✅ No rigid sequence—agents adapt on the fly
✅ Human handoff possible at any step
✅ Deep project understanding via shared context
✅ Tailors to project-specific needs

❌ Sequence of actions harder to predict
❌ Processing time may increase with too many handoffs
❌ Tracing logic can be challenging
❌ More difficult to debug

From Theory to Practice: Visualizing the Flow

When we first mapped out the estimation process, our design looked fairly straightforward—each agent had a clear role, and the sequence felt linear. But as we dived deeper, tested assumptions, and made trade-offs, the flow evolved into something more adaptive.

[Figure: agent flow, first version]

[Figure: agent flow output]

Behind the Scenes: Real Conversations That Shaped the Flow

The final design of our sales estimation flow came out of real discussions—technical and practical. From prompt routing to context management, every decision had trade-offs. Here’s how some of the most impactful ones played out.

Rethinking Triggers: From Keywords to Intent Detection

At first, we considered using simple keyword triggers to activate agents. But it quickly became clear that this approach was too brittle for real sales conversations. Client input is nuanced, and important modules might not be explicitly mentioned. We needed a way to understand what was meant, not just what was said.

This led us to use LLM-based intent detection. It allowed agents to make smarter decisions based on context, tone, and structure—especially important in a Slack-native environment where inputs are casual and varied.
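The contrast between the two triggering approaches can be sketched as follows. `llm_classify` stands in for a real LLM call (made via AutoGen in our stack); its heuristic body is a stub so the example runs on its own, and the intent labels are illustrative.

```python
def keyword_trigger(message: str) -> bool:
    # Old approach: fires only on an exact keyword.
    return "estimate" in message.lower()

def llm_classify(message: str) -> str:
    # Stand-in for an LLM intent classifier; the real prompt is omitted.
    text = message.lower()
    if any(w in text for w in ("estimate", "quote", "how much", "ballpark")):
        return "start_estimation"
    if "transcript" in text or "uploaded" in text:
        return "upload_context"
    return "other"

msg = "Client asked for a ballpark on the mobile app"
print(keyword_trigger(msg))  # False – the keyword match misses the intent
print(llm_classify(msg))     # start_estimation
```

The key point is the return type: a classifier that emits an intent label gives downstream routing far more to work with than a boolean keyword match.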

Smart Execution: When to Skip and When to Act

Another challenge was figuring out how much each agent should “know” and whether they should reprocess every input from scratch. Our answer was to introduce conditional logic and lightweight caching. Agents now check for existing data—like previously parsed documents or already generated summaries—before deciding whether to run. This saves compute time and avoids redundant work.

We also defined “exit conditions,” where an agent can pause the flow, request more input, or defer to a human, depending on confidence thresholds or missing information.
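A minimal sketch of the skip/act logic: the agent consults a lightweight cache before reprocessing an input, and exits early (deferring to a human) when its confidence falls below a threshold. The function names and the 0.7 threshold are illustrative assumptions, not production values.

```python
CACHE = {}  # doc_id -> previously parsed output

def run_agent(doc_id, parse_fn, confidence, threshold=0.7):
    # Skip: reuse previously parsed output instead of re-running.
    if doc_id in CACHE:
        return ("cached", CACHE[doc_id])
    # Exit condition: too uncertain, hand back to a human.
    if confidence < threshold:
        return ("needs_human", None)
    result = parse_fn(doc_id)
    CACHE[doc_id] = result
    return ("ran", result)

parse = lambda d: f"summary of {d}"
print(run_agent("scope.pdf", parse, confidence=0.9))  # ("ran", ...)
print(run_agent("scope.pdf", parse, confidence=0.9))  # ("cached", ...)
print(run_agent("notes.txt", parse, confidence=0.4))  # ("needs_human", None)
```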

Real-Time Context Sync: Making Slack and Drive Work Together

Since Omega operates in Slack but relies heavily on files from Google Drive, we built a sync mechanism to connect the two. When someone uploads a file in Slack, it's automatically made available to the agents through Drive. Webhook triggers handle real-time updates, kicking off the relevant flow without delay.

This setup helped reduce lag and made agent responses feel more responsive—even when multiple inputs were arriving in parallel.
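The sync mechanism can be sketched as a webhook handler: a Slack file event mirrors the file into Drive and kicks off the matching flow. `upload_to_drive` and `start_flow` are hypothetical stand-ins for our internal clients, and the event shape is simplified from Slack's actual payload.

```python
def upload_to_drive(name, content):
    # Stand-in for the Google Drive API call.
    return f"drive://estimation-inputs/{name}"

def start_flow(flow_name, **kwargs):
    # Stand-in for the flow kickoff; returns what it would launch.
    return {"flow": flow_name, **kwargs}

def handle_slack_event(event: dict):
    if event.get("type") != "file_shared":
        return None  # ignore unrelated events
    drive_url = upload_to_drive(event["file_name"], event["content"])
    # Webhook-style trigger: start the flow without waiting for a prompt.
    return start_flow("estimation_intake", source=drive_url)

result = handle_slack_event(
    {"type": "file_shared", "file_name": "call-transcript.txt", "content": "..."}
)
print(result)
```

Because the handler returns immediately for unrelated events, the same endpoint can receive the full Slack event stream without extra routing.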

Shared Memory: Coordinating Multi-Agent Interactions

Context isn’t just about documents—it’s about continuity. To support that, we implemented a shared memory model where agents can access prior messages, actions, and state. This allows agents to collaborate without repeating work or losing track of what’s already been handled.

It also laid the groundwork for more advanced orchestration patterns, like selective re-execution and progressive refinement.
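A minimal sketch of that shared memory model: agents append to a common log of messages and actions, and check what has already been handled before redoing work. The field names are illustrative, not Omega's real schema.

```python
class SharedMemory:
    def __init__(self):
        self.messages = []      # conversation history across agents
        self.state = {}         # key/value task state
        self.completed = set()  # actions already handled

    def record(self, agent, action, output):
        self.messages.append(
            {"agent": agent, "action": action, "output": output}
        )
        self.completed.add(action)

    def already_done(self, action) -> bool:
        return action in self.completed

memory = SharedMemory()
memory.record("module_architect", "break_down_scope", ["auth", "payments"])

# A second agent consults shared memory before repeating the work.
if not memory.already_done("break_down_scope"):
    pass  # would re-run the breakdown here
print(memory.already_done("break_down_scope"))  # True
```

The `completed` set is also the hook for selective re-execution: invalidating a single action re-runs just that step instead of the whole flow.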

Testing Before Building: Using Promptfoo

To avoid debugging in production, we leaned on Promptfoo for prompt evaluations and flow simulations. It helped us rapidly iterate on agent behavior before going live. For more complex sequences, we began mocking agent flows to explore edge cases and validate early handoffs.

This test-first mindset reduced rework and helped us build confidence in the system's reliability much sooner.
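A Promptfoo evaluation for a prompt like this might look roughly as follows. This is an illustrative sketch, not our production config: the prompt file path, provider, and assertions are all placeholders.

```yaml
# promptfooconfig.yaml – illustrative placeholder values throughout
prompts:
  - file://prompts/module_breakdown.txt
providers:
  - openai:gpt-4o-mini
tests:
  - vars:
      transcript: "Client wants a mobile app with login and payments."
    assert:
      - type: contains
        value: "payments"
  - vars:
      transcript: "We only need a simple landing page."
    assert:
      - type: contains
        value: "landing page"
```

Running the suite against every prompt change catches regressions before an agent ever sees a live Slack thread.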

What We Picked for the AI Agent Flow and Why It Matters

After exploring several architecture options, we decided not to settle on a single model. Instead, we combined ideas from Option 3 (Hybrid Orchestrator) and Option 4 (Swarm Agents with Handoff)—adapting them to fit different phases of the estimation workflow.

Why We Didn’t Choose a Single Flow

Each approach came with strengths and trade-offs. Pipelines were clean but rigid. Event-driven flows felt elegant, but harder to trace and debug. Swarm-based handoffs offered flexibility but risked unpredictability. Ultimately, we realized: no single structure could fully match the nuances of estimation work.

Some phases require structure—others need adaptability. So we picked a hybrid model that adjusts based on context.

Where Hybrid Wins

We use a SelectorPrompt-style orchestrator to assemble agent teams dynamically based on the task at hand. This provides structure, but without enforcing a rigid sequence.

For example, in the early phases (like breaking down project scope into modules), we lean on a coordinated team: a module architect agent, a critic agent, and domain experts. Their roles are pre-defined, and their interactions are supervised by the orchestrator.

Later, once we enter more exploratory tasks (like refining assumptions or validating complexity), agents can hand off freely based on confidence, input needs, or triggers—closer to the swarm model. At any time, agents can hand back control to the sales rep for review, or flag gaps for human clarification.
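The phase-dependent behavior can be sketched as a small routing function: early phases walk a fixed, orchestrator-supervised team, while later phases let agents hand off freely or return control to the human. The phase names, teams, and the 0.75 confidence threshold are illustrative assumptions.

```python
PHASES = {
    "scope_breakdown": {
        "mode": "orchestrated",
        "team": ["module_architect", "critic", "domain_expert"],
    },
    "assumption_refinement": {"mode": "swarm", "team": ["any"]},
}

def next_actor(phase: str, last_agent: str, confidence: float) -> str:
    cfg = PHASES[phase]
    if cfg["mode"] == "orchestrated":
        # Structured: the orchestrator walks the predefined team in order.
        team = cfg["team"]
        return team[(team.index(last_agent) + 1) % len(team)]
    # Swarm: low confidence hands control back to the human reviewer.
    return "sales_rep" if confidence < 0.75 else last_agent

print(next_actor("scope_breakdown", "module_architect", 0.9))  # critic
print(next_actor("assumption_refinement", "estimator", 0.5))   # sales_rep
```

Keeping the decision in one function is what makes the hybrid workable: changing a phase from orchestrated to swarm is a config edit, not a rewrite.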

The Benefits We’re Seeing

Even in this early implementation, the hybrid agent approach is proving valuable. It gives us:

  • Clarity when needed, especially in high-stakes phases like cost scoping.
  • Flexibility for exploration, where a rigid path would limit insight.
  • Better user experience, by mixing real-time feedback with async execution.
  • Reduced maintenance costs, thanks to modular components and shared state.

What We Learned (So Far)

Designing agent-based flows isn’t just about picking the right architecture or writing a good prompt. It’s about striking a balance—between automation and control, flexibility and traceability, speed and reliability. Here’s what’s stood out during this phase of Omega’s evolution.

It’s Half Architecture, Half Prompt Design

Multi-agent orchestration only works when both the system and the language design are solid. We found that great prompts couldn’t save poor flow logic—and even the best-designed architecture fell apart with unclear or inconsistent language.

The most successful outcomes came when we treated prompt crafting and agent routing as a single design process, not separate tracks.

UX Always Comes at a Cost

Every time we added a new interaction or insight surfaced by the agent, we had to consider what it would cost in terms of latency, maintenance, and cognitive load.

Better UX often meant more agents, more steps, or more tooling—each adding complexity. We learned to ask: Is this genuinely helpful, or just clever? That filter helped keep the system lean and human-first.

Context Is a Constant Battle

One of the hardest challenges was keeping context stable and accessible across agents. Even with shared memory and caching, it was easy for handoffs to lose important nuance or duplicate effort.

We found it essential to log every input, state, and transition—not just for observability, but to debug misunderstandings and refine prompts over time.

Langfuse helped here, giving us visibility into the agent conversations and making the system feel less like a black box.

What’s Next: From Smart to Autonomous

Our work on Omega’s estimation flow is still in progress. We’ve explored multiple technical paths, tested key components, and aligned on a hybrid agent architecture. Now, we’re building toward something more powerful: a semi-autonomous system that can support estimation before it’s even asked. Defining AI agent success here means delivering consistent estimates and stronger alignment with our sales framework.

Self-Triggered Agents and Real-Time Insights

One of the key features on our roadmap is enabling agents to act proactively. Instead of waiting for a rep to prompt Omega, agents could be triggered by real events—like a file drop in Slack or a new deal added in HubSpot. This would allow estimation flows to start earlier, with better timing and less manual coordination.

We’re also designing a system for real-time estimation summaries that evolve as context changes—so teams don’t have to regenerate documents from scratch every time something updates.

Toward Proactive Workflows

With core building blocks like shared memory and intent detection in place, we’re laying the foundation for more autonomous agent behaviors. Eventually, Omega could begin surfacing insights on its own, such as:

  • Recommending module breakdowns when a new transcript is logged
  • Highlighting missing assumptions in real time
  • Suggesting reusable components based on past deals

This would move Omega beyond a reactive assistant—toward something more like a true workflow partner.

Long-Term Vision: A Truly Autonomous Sales Assistant

The vision we’re working toward is a scalable AI product that can handle bounded, high-value sales tasks autonomously. That includes:

  • Initiating and completing estimation flows without prompts
  • Navigating ambiguity by asking the right questions
  • Seamlessly integrating AI into the tools our team uses daily

We’re not there yet—but each design decision, agent interaction, and prototype test brings us closer.

Krystian Dziubinski works as a Senior Data Engineer at Netguru.