
Agentic Sourcing

From boolean searches to conversational AI

Led rapid validation of an agentic sourcing solution that redirected product strategy and prevented months of misdirected development. User research revealed the critical wound was capability, not data: 90% of sourcers couldn't effectively utilise boolean search.

Through 6 weeks of iterative prototyping – 30+ user sessions, coded prototypes (Cursor + Claude), and prompt engineering (MLflow) – I validated and shipped the platform's "most accurate search" solution.

Delivered conversational UI patterns now reused across the product, whilst operating across traditional role boundaries – from research through to production contribution – because validation speed mattered more than process.

Status: Draft

Context

  • Beamery is a platform for enterprise recruiting and talent management
  • In the race to implement agentic AI within HR-tech, Beamery was falling behind fast
  • They needed to compete with much larger competitors (Eightfold & Phenom ~10x larger)
  • Beamery saw three major internal org restructures in 2025

Surface symptoms

  • Sourcers want better search UI in the CRM – more filters, less clutter
  • Leadership want users to engage in a platform-wide conversational experience with the CRM – starting with search

The actual wounds

  • Core search was broken
  • The real competitor wasn't other CRMs – it was LinkedIn and Applicant Tracking Systems (ATS), ~150x larger
  • CRM search is only as good as its structured data – and much of the data is unstructured
  • 90% of boolean searches were basic keyword matching – users weren't even utilising the power available

Strategic reframe

Sourcers don't need more data, nor external market data – they need intelligent interpretation of what they already have.


Challenges

  • 3-month deadline to deliver value while leadership was actively selling a vision
  • Leadership wanted to include "kitchen sink" features (external market insights)
  • Non-negotiable requirement: conversational interface (despite my pushback)
  • Multiple org restructures happening simultaneously

What I negotiated away

Convinced leadership to descope external market insights after user interviews proved the feature unnecessary – saving months of work on the wrong thing.

Deep research (Weeks 1 to 2)

  • 10 in-depth interviews (Paramount, Centene, Flex, Sanford Health, Mimecast)
  • Competitive analysis (LinkedIn, ATS systems)
  • AI-assisted synthesis to identify patterns – see the sketch below
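
A minimal sketch of that synthesis loop, in Python. The call_llm helper and the topic names are illustrative stand-ins, not the actual mod guide or model API:

    # Check which mod-guide topics a batch of interview notes covers,
    # and what to probe in the next round of interviews.
    MOD_GUIDE_TOPICS = [
        "current boolean search habits",
        "trust in agent-constructed searches",
        "need for external market data",
    ]

    def call_llm(prompt: str) -> str:
        """Placeholder for a real model call (e.g. Claude via its SDK)."""
        raise NotImplementedError

    def coverage_report(transcripts: list[str]) -> str:
        """Ask the model to map interview notes against the mod guide."""
        notes = "\n---\n".join(transcripts)
        prompt = (
            "For each topic below, say whether these interview notes cover it, "
            "quote the strongest supporting evidence, and flag gaps to probe "
            "in the next round.\n\n"
            f"Topics: {MOD_GUIDE_TOPICS}\n\nNotes:\n{notes}"
        )
        return call_llm(prompt)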

Interactive prototypes (Weeks 3 to 4)

  • Built coded prototypes (Cursor + Claude) – not static mockups
  • Tested conversational patterns with real scenarios
  • Validated hypothesis: natural language → agent-constructed search > manual boolean (sketched below)
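
A minimal sketch of the pattern being validated, assuming a model that returns structured JSON – build_search, the field names, and the call_llm stub are hypothetical, not the production implementation:

    import json

    def call_llm(prompt: str) -> str:
        """Placeholder for the model call used in the prototypes."""
        raise NotImplementedError

    def build_search(nl_query: str) -> dict:
        """Turn a plain-English sourcing request into structured filters,
        so users never hand-write boolean strings."""
        prompt = (
            "Convert this sourcing request into JSON with keys "
            '"skills", "titles", "locations", "boolean_string". '
            "Return only the JSON.\n\n"
            f"Request: {nl_query}"
        )
        return json.loads(call_llm(prompt))

    # e.g. build_search("senior backend engineers in Berlin who know Go")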

Iteration cycles (Weeks 5 to 6+)

  • 25 follow-up validation calls
  • Tested multiple search strategies through prompt engineering
  • Discovered we could massively simplify while maintaining quality

Timeline comparison

  • Traditional: 4–5 months to reach this validation point
  • AI-augmented: 6 weeks – 30+ user sessions, coded prototypes, validated hypothesis

My hands-on contributions

  • Interactive prototypes in code (Cursor) for rapid validation
  • Conversational UI patterns tested with real users
  • Prompt engineering iterations (contributed to GitHub, then MLflow)
  • Frontend direction for production implementation
  • Analytics setup (Pendo tracking)

What I proved

  • Conversational interface could work (despite my initial scepticism)
  • External market data was unnecessary (saved ~2 months)
  • Simplified search strategies performed as well as complex ones
  • Working software validation > wireframes for uncovering user thinking

Frameworks created for reuse

  • AI-assisted interview synthesis process
    Mod guide → 5 interviews → AI evaluation of coverage → pattern spotting → next iteration
  • Conversational AI design patterns
    Now used across product
  • MLflow adoption for prompt management
    30min+ deployment → instant iteration (see the sketch below)
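
A minimal sketch of the tracking pattern, using MLflow's standard run and artifact APIs – the experiment, prompt, and metric names here are illustrative:

    import mlflow

    mlflow.set_experiment("agentic-sourcing-prompts")

    def log_prompt_iteration(name: str, template: str, accuracy: float) -> None:
        """Version a prompt as a run artifact so iterations are
        comparable without redeploying the app."""
        with mlflow.start_run(run_name=name):
            mlflow.log_param("prompt_name", name)
            mlflow.log_text(template, "prompt.txt")
            mlflow.log_metric("eval_accuracy", accuracy)

    # e.g. log_prompt_iteration("keyword-expansion-v2",
    #                           "Expand the request into weighted terms...",
    #                           accuracy=0.87)  # illustrative score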

What we shipped

  • Limited release in December 2025
  • 10+ beta users on production accounts
  • Real customer data validation
  • Most accurate search method on the platform (qualitative user feedback)

What we didn't have to build

  • External market insights integration (months of work avoided)
  • Complex search strategies (simplified through prompt testing)
  • Multiple false-start features (invalidated via prototypes)

What I learned

  • Coded prototypes feel real – users suspend disbelief, engage authentically; we learn faster
  • Prompt engineering and versioning (MLflow) during development enabled rapid learning cycles
  • AI-assisted synthesis gave me far more time with users, less time tagging and documenting

What I'd do differently

  • Push harder against conversational-interface-as-non-negotiable (could've shipped value months earlier)
  • Include reflection/confirmation stage earlier (I optimised for speed over accuracy initially)

Key insight

  • Search strategies could be far simpler than we thought – complexity ≠ quality (see the sketch below)
  • This fundamentally changed our prompt architecture and approach to development
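
An illustrative before/after – not the production prompts – showing the kind of collapse the prompt testing justified; call_llm is again a hypothetical stand-in:

    def call_llm(prompt: str) -> str:
        """Placeholder for the model call."""
        raise NotImplementedError

    # Before: a chained, multi-strategy pipeline (three model calls)
    def search_complex(query: str) -> str:
        intent = call_llm(f"Classify the sourcing intent: {query}")
        expanded = call_llm(f"Expand search terms for {intent}: {query}")
        return call_llm(f"Build a search from these terms: {expanded}")

    # After: one well-scoped call performed just as well in user testing
    def search_simple(query: str) -> str:
        return call_llm(f"Build a search directly from this request: {query}")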

Crossed boundaries

Traditional designer: Research → Wireframes → Handoff

What I actually did

  • Product strategy (descoping, sequencing)
  • User research (30+ sessions)
  • Prompt engineering (MLflow versioning)
  • Frontend development (coded prototypes)
  • Analytics setup (Pendo)
  • Production contribution (GitHub commits)

Why?

  • Speed – Waiting for PM direction or engineering cycles would've meant building the wrong thing slower.