SYSTEM // ACTIVE v3.2 / 2026
CASE 03 // ATLAS INSIGHTS
Sub-page · Atlas Insights · The data product layer

Closing the loop between collecting experience data and acting on it.

Atlas Insights is the data product layer of AnyRoad's platform. Five composable surfaces (Question Builder, Explorer, Pinpoint AI, Insights, and Industry Benchmarks) turn raw guest data into operational understanding for the brand operator who has fifteen minutes on a Monday morning, not fifteen hours. This sub-page is the deep dive on each of those surfaces.

Surfaces designed
5
Question Builder, Explorer, Pinpoint AI, Insights, Industry Benchmarks. Built to compose.
Customers
100s
Enterprise brand teams using Atlas to slice, summarize, and benchmark.
Marquee proof
+16
Diageo's Johnnie Walker Princes Street post-visit NPS lift, validated through Atlas surfaces.
Role
Director
Director of Design owning Atlas Insights end to end.
Atlas · Surface 01 · Collect

Question Builder: a question library for the survey-authoring operator.

— 01 Question Builder

Treat questions as first-class objects in a library.

A brand-marketing director who needs to know what their guests actually think isn't a survey designer. They have a meeting in 20 minutes and they need to ship a question that returns useful data. Question Builder is the surface that gets them there.

The product treats questions as first-class objects in a library. Operators build a question once, name it, configure it, translate it, and reuse it across every experience the brand runs. Standardized question types (NPS, Marketing Opt-in, Purchase Behavior, Birthdate) come with industry-validated wording so the data they collect is comparable to peer benchmarks downstream. Custom questions cover everything else, with response formats sized to what the operator is actually asking.

The latest enhancement adds conditional logic on top of the library: sub-questions branching off a parent answer with an AND/OR composer for compound rules.
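The AND/OR composer described above can be pictured as a small rule model. This is an illustrative sketch, not AnyRoad's implementation; all names (`Condition`, `Rule`, `should_show`, the field names) are assumptions for the sake of the example.

```python
from dataclasses import dataclass
from typing import Literal

# Hypothetical data model for a compound display rule: a sub-question
# is shown only when its conditions on the parent answers hold.
@dataclass
class Condition:
    question_id: str
    operator: Literal["equals", "not_equals"]
    value: str

@dataclass
class Rule:
    combinator: Literal["AND", "OR"]
    conditions: list[Condition]

def should_show(rule: Rule, answers: dict[str, str]) -> bool:
    """Evaluate a compound rule against the answers collected so far."""
    results = [
        (answers.get(c.question_id) == c.value) if c.operator == "equals"
        else (answers.get(c.question_id) != c.value)
        for c in rule.conditions
    ]
    return all(results) if rule.combinator == "AND" else any(results)

# A sub-question that appears when the guest is a detractor OR opted out.
rule = Rule("OR", [
    Condition("nps", "equals", "detractor"),
    Condition("marketing_opt_in", "equals", "no"),
])
print(should_show(rule, {"nps": "promoter", "marketing_opt_in": "no"}))  # True
```

The point of the shape: conditions stay flat and declarative, so the composer UI can render, reorder, and validate them without evaluating anything.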

  • Move 01
    Library as the home. Every question authored is a saved entity with a name, type, preview, and the count of experiences using it. Authoring becomes inventory, not one-off work.
  • Move 02
    Standardized types unlock benchmarks. NPS, Marketing Opt-in, Purchase Behavior, and Birthdate ship with industry wording. 84% of AnyRoad customers use them by default, which is what makes cross-customer comparison possible in Industry Benchmarks downstream.
  • Move 03
    Custom for the long tail. Six response formats (Free Text, Multi-Select, Radio, Checkbox, Dropdown, Date Input) for everything the standardized library doesn't cover.
  • Move 04
    Translations as a first-class concern. Spanish, Mandarin, and other languages live on the same screen as the source. When the source changes, an "Update Language Translations" notification fires automatically.
  • Move 05
    Lifecycle without surprises. New, Rename, Duplicate, Archive, Restore. Destructive actions (deleting collected data) require explicit confirmation.
  • Move 06
    Conditional logic as the latest layer. Sub-questions, conditions, and AND/OR composition for the authoring patterns that needed to grow up.
Question Builder home · Library of all configured questions, organized by type and reuse count
— Question Builder · Home · Library of saved questions across question types
New Question type picker · NPS, Marketing Opt-in, Purchase Behavior, Birthdate
— Question Builder · Standardized question types unlock benchmark comparison
NPS question editor · Brand name, optional subtitle, response preview, internal name
— Question Builder · NPS editor with industry-validated wording & live preview
Custom question editor · Multi-select response format with translations sidebar
— Question Builder · Custom questions cover the long tail with six response formats
Add Checkout Questions panel · Selecting from the question library to add to an experience
— Question Builder · Add to experience flow · Pulling from the library into a specific checkout
— Asset · Coming soon
Nested conditional logic
— Question Builder · Latest enhancement · Nested conditional logic with AND/OR composer
Atlas · Surface 02 · Explore

Explorer: customer segments, finally authorable in the product.

— Asset · Coming soon
Explorer · Customers tab
— Explorer · Customers tab with saveable segments
— 02 Explorer

From data dump to grouped, named, shareable.

Before Explorer, customer data lived in a dashboard that returned everyone or no one. Slicing by repeat visitors, by NPS, by recency required engineering help or CSV exports. The operator's actual job, "show me first-time guests from California who scored 9 or 10," wasn't possible inside the product.

Explorer makes the customer object a real, queryable thing inside the platform. Operators build segments in the UI, save them, rename them, share them. A profile drawer surfaces every interaction the brand has had with a guest, and the segment primitive feeds every other surface.
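"Segment as a first-class object" can be made concrete with a minimal sketch. This is an assumption-laden illustration of the idea, not Explorer's actual schema; the class, field names, and the precomputed `nps_band` flag are all hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical model: a segment is a named, saveable entity,
# not an ad-hoc filter combination.
@dataclass
class Segment:
    name: str
    filters: dict = field(default_factory=dict)

    def matches(self, guest: dict) -> bool:
        return all(guest.get(k) == v for k, v in self.filters.items())

# "First-time guests from California who scored 9 or 10" --
# NPS 9-10 modeled here as a precomputed 'promoter' band.
promoters_ca = Segment(
    name="CA first-timers · promoters",
    filters={"state": "CA", "visit_count": 1, "nps_band": "promoter"},
)

guests = [
    {"id": 1, "state": "CA", "visit_count": 1, "nps_band": "promoter"},
    {"id": 2, "state": "NY", "visit_count": 3, "nps_band": "detractor"},
]
print([g["id"] for g in guests if promoters_ca.matches(g)])  # [1]
```

Because the segment carries a name, other surfaces can reference it by identity rather than re-deriving the filter logic.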

  • Move 01
    Segments as first-class objects. Save, rename, delete, and reuse. Segments aren't filter combinations; they're named entities operators reference by name.
  • Move 02
    Profile drawer. Click any guest to see activity timeline, UTM attribution, and segment memberships in one slide-over.
  • Move 03
    NPS-aware coloring. Promoter, passive, and detractor states are color-encoded throughout the table, so the eye finds patterns before the operator runs a query.
  • Move 04
    Shared customer model. The same customer object underpins Pinpoint AI's verbatim cards and Insights' bottom-line narrative.
Atlas · Surface 03 · Synthesize

Pinpoint AI: open-text feedback, ranked by what actually moves NPS.

— 03 Pinpoint AI · Guest Feedback

From thousands of comments to a handful of topics that matter.

A single mid-sized brand could collect tens of thousands of free-text responses a year. Reading them was impossible. Tagging them was a part-time job. The hard part wasn't sentiment. It was knowing which topics were worth a meeting and which were noise.

Pinpoint AI runs every response through a topic-clustering model and ranks the resulting topics by NPS impact, not just frequency. A topic in the negative column flagged "Critical" means it's not just complained about. It's measurably depressing the brand's score. Every topic is one click from the verbatims that drove it, with the full guest profile attached.
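"Ranked by NPS impact, not frequency" can be sketched with one plausible definition: the shift in overall NPS when a topic's respondents are removed. The real Pinpoint model is not public, so treat this as a toy illustration under that assumption; all names and the sample data are invented.

```python
def nps(scores: list[int]) -> float:
    """Classic NPS: % promoters (9-10) minus % detractors (0-6)."""
    if not scores:
        return 0.0
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100.0 * (promoters - detractors) / len(scores)

def topic_impact(all_scores: list[int], topic_scores: list[int]) -> float:
    """How much a topic's respondents drag (or lift) the overall score:
    overall NPS minus NPS with those respondents excluded."""
    remaining = list(all_scores)
    for s in topic_scores:
        remaining.remove(s)
    return nps(all_scores) - nps(remaining)

all_scores = [10] * 60 + [8] * 30 + [2] * 10        # overall NPS = 50
queue      = [2] * 5                                # rare topic, all detractors
gift_shop  = [10] * 18 + [8] * 9 + [2] * 3          # frequent, representative mix

print(round(topic_impact(all_scores, queue), 1))      # -7.9: small loud problem
print(round(topic_impact(all_scores, gift_shop), 1))  # 0.0: big quiet topic
```

Under this definition the queue topic, mentioned by only five guests, outranks a topic mentioned six times as often, which is exactly the "small loud problem above a big quiet one" behavior the surface is built around.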

  • Move 01
    Two columns, two stories. Positive and negative topics are split by default, so operators see both what's working and what's hurting on the same screen.
  • Move 02
    NPS impact, not response count. Topics are sorted by how much they shift the brand's score, so a small loud problem ranks above a big quiet one.
  • Move 03
    Severity badges. Critical / Mild labels surface on each verbatim, so an operator can triage urgency at a glance without reading every comment.
  • Move 04
    Pre / post NPS in the margin. Topic Details shows each guest's expectation versus their actual rating, turning a verbatim into a measurable delta.
Pinpoint AI Topics list · positive and negative topics ranked by NPS impact
— Pinpoint AI · Topics list, ranked by NPS impact
Pinpoint AI Topic Details · verbatim feedback with guest profiles and severity labels
— Pinpoint AI · Topic details with guest profiles & pre/post NPS
Pinpoint AI Experience filter dropdown
— Experience filter
Pinpoint AI raw feedback stream
— Raw feedback stream
Atlas · Surface 04 · Narrate

Insights: an auto-written executive summary that composes from everything else.

Insights AI Analysis · weekly and monthly summary for Budweiser
— Insights · AI-generated weekly & monthly summary
— 04 Insights

From dashboard sprawl to a three-paragraph briefing.

Even with the first three surfaces shipped, operators were still cobbling reports together by hand. They didn't want another dashboard. They wanted the email a smart analyst would write them on a Monday morning: what changed, why, where it shows up, and what's worth attention.

Insights generates exactly that. A weekly and monthly summary, sectioned by Bottom Line, Brand Perception, Sales & Acquisition, Booking Pipeline, Capacity, Guest Insights, and Customer Feedback. The narrative composes from Explorer's segments, Pinpoint's topics, and the underlying booking and revenue data, so a single document tells one story instead of six tabs telling fragments.

  • Move 01
    Bottom Line first. Every summary opens with the four-bullet "what you need to know." The rest is supporting evidence for the operator who has time to read further.
  • Move 02
    Movement chips, not raw numbers. Every metric carries a Week / Year delta chip so direction is readable before magnitude.
  • Move 03
    Experience-level highlights. Each section names the specific tours or events driving the change, turning aggregate metrics into actionable callouts.
  • Move 04
    Customer Feedback inline. The narrative quotes verbatims surfaced by Pinpoint AI. The synthesis layer literally cites the synthesis layer below it.
  • Move 05
    Export PDF. Built for forwarding. Operators ship the document up to leadership instead of standing up another meeting.
Atlas · Surface 05 · Compare

Industry Benchmarks: how does my brand stack up?

The four surfaces above let a brand understand itself. Industry Benchmarks lets it understand itself relative to peers. NPS, brand conversion, opt-in rate, revenue per visitor: every operator's first question is whether their number is good or just a number. Benchmarks answers it.

The surface overlays anonymized peer data on top of a brand's own metrics, sliced by industry, experience type, demographic, and season. A whisky distillery sees how their post-tour NPS compares to the alcohol category average. A CPG brand sees how their festival opt-in rate compares to the industry. The number stops being abstract and starts being directional.

  • Move 01
    Peer overlay. Every brand metric carries an industry benchmark by default, so the operator immediately sees over- and under-performance.
  • Move 02
    Slice by what matters. Compare against the right peer group: industry, experience type, season, demographic. Different cuts surface different gaps.
  • Move 03
    Privacy-preserving by construction. Benchmarks are aggregate and anonymized. Brands contribute to a shared baseline without exposing individual data.
— Spotlight asset · Coming soon
Industry Benchmarks · Peer overlay
Cross-surface system thinking

What made five surfaces feel like one product.

The compositional bet.

The platform-design decision underneath Atlas was that the surfaces had to compose, not just coexist. A segment built in Explorer should slice Pinpoint. A topic surfaced in Pinpoint should appear inside Insights. A question authored in Question Builder should pre-populate the analysis surface the moment a response lands. Industry Benchmarks should overlay every metric without requiring a separate workflow.

That meant the surfaces couldn't be designed as separate features. They had to be designed against a shared mental model: the same customer, the same segment, the same experience taxonomy. And against a shared interaction grammar so an operator who learned one surface could read all of them.

  • — Principle 01 / Shared customer object
    A guest profile in Explorer, a verbatim card in Pinpoint, and a Customer Feedback callout in Insights all reference the same underlying customer. One model, surfaced five ways.
  • — Principle 02 / Segment as a primitive
    Segments authored in Explorer aren't filter combinations. They're named entities every other surface can reference. Compose once, slice everywhere.
  • — Principle 03 / Consistent NPS visual language
    Promoter / passive / detractor coloring and pre/post-visit deltas look the same on every surface. The eye learns the system once.
  • — Principle 04 / Synthesis cites synthesis
    Insights doesn't reinvent topic detection. It cites Pinpoint. Pinpoint doesn't reinvent the response. It cites the survey configured in Question Builder. The chain is legible.
  • — Principle 05 / Benchmarks as overlay, not silo
    Industry Benchmarks isn't a separate surface operators visit. It overlays peer baselines onto numbers the operator is already reading. Context where the data lives.
Question Builder
— 01 / Collect
Explorer
— 02 / Explore
Pinpoint AI
— 03 / Synthesize
Insights
— 04 / Narrate
— Shared spine
Customer · Segment · Experience taxonomy
Outcomes

What changed for the operators who use it.

Marquee NPS lift
+16
Diageo's Johnnie Walker Princes Street post-visit NPS lift, validated through Atlas surfaces.
Conversion lift
+40%
Guests in Diageo's under-targeted demographic were 40% more likely to drink whisky after the experience.
Surfaces used by Diageo
3
Explorer, Feedback Analysis, and Industry Benchmarks, all cited by name in their public case study.
Time to insight
Days → Mon AM
From multi-day reporting cycles to a Monday-morning Insights briefing operators read in 5 minutes.

— Atlas outcomes via AnyRoad's public Diageo case study · Platform-wide numbers on the parent case

Reflection

What this taught me about leading a multi-surface data product.

The hardest design problem was the seams, not the surfaces.

Each of the five Atlas surfaces was a meaningful product on its own. But the value the operator actually felt was in the seams: the moment a Pinpoint topic appeared inside an Insights paragraph, or a segment built in Explorer pre-filtered Pinpoint, or a peer benchmark surfaced exactly where a number was being read. The work that mattered most rarely showed up in a single screen.

A data product earns its keep by answering the operator's job, not the analyst's curiosity.

The temptation in enterprise analytics is to ship more dimensions, more filters, more configurability. Atlas went the other direction: each surface was sized to a specific question an operator already had on Monday morning. The value of constraint was that operators trusted the answers they got, because the answers were calibrated to their actual decisions.

The right unit of design leadership is the operator's day, not the feature.

Across five surfaces and several years of build, the question I kept returning to was the same: what does a brand-marketing director actually do on a Monday morning? Designing against that question pulled the team out of feature-by-feature thinking and into a workflow we could measure end to end.