Metrics Hub: executive metric reporting platform

The systems-at-scale story

2023 · Amazon, Seattle, WA

What: Every month, a deck lands on the desks of the C-suite and the Board with the story of every key customer metric. What that meeting hides is a coordination nightmare: dozens of contributors, cascading handoffs, metric definitions that exist only in one analyst's head, and a review process held together by email.

Why: When someone left, their metrics vanished. Every new metric or report was a custom project. The process was distributed across enough people that any single departure created real gaps.

How: I built a platform that replaced the entire fragmented process. A three-layer architecture: formalize the metric definition, define how data is captured against those definitions, then present it through slides, workflows, and decks that inherit from the layers below.

Overview

Every month, Amazon leadership receives a deck where each slide tells the story of a customer metric. Business analysts define what gets measured, data scientists supply the numbers, program managers write the narratives, team leads unite the pieces, and leadership reviews everything. What that meeting hides is a coordination problem at scale: the process was distributed across enough people that any one departure created real gaps. The design challenge was decomposition: break the workflow into clean, modular layers with clear ownership and clear interfaces. Get the architecture right and the system becomes extensible. Get it wrong and every new metric is a custom project.

Pain Points

Tacit knowledge dependency

When an analyst left, the knowledge of how their metrics were defined, where data came from, and what caveats applied left with them. Onboarding meant weeks of reverse-engineering spreadsheets.

Manual coordination at scale

Producing a single deck required synchronizing dozens of contributors. Status tracking happened in spreadsheets, review cycles ran through email, and nobody had a single view of what was on track.

No single source of truth

Metric definitions, data, and narratives were scattered across spreadsheets, documents, and slide decks. The same metric could have different definitions in different reports.

Brittle review process

Review happened through email threads and ad-hoc meetings. No structured workflow tracked which slides had been reviewed or what feedback was outstanding. Missed reviews surfaced only at deck assembly.

The Entities

Four entities, because the reporting pipeline has four distinct lifecycle shapes.

Metric

The foundational definition: name, customer question, formula, directionality, segmentation, and data contract. Defined once, inherited everywhere.

Slide

A metric rendered for a specific reporting period. Inherits structure from the metric definition, adds period-specific data and narrative. Owned by a contributor, reviewed through a workflow.

Deck

An assembly of slides into a reviewable package. Manages ordering, theming, and the review workflow that gates publication. A deck orchestrates slides; it doesn't own content.

Report

The recurring cadence (monthly, quarterly) that drives slide creation, data deadlines, and review cycles. Defines which metrics participate, who is responsible, and when each phase is due.

  Report (cadence + scheduling)
    → triggers Slide instances per Metric (data + narrative)
    → Slides assemble into Deck (review + publication)
    → Metric definitions flow down: formula, segmentation, chart type
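
As a sketch, the four entities reduce to shapes like the following. The field names are illustrative, not the production schema:

// Illustrative shapes only, not the production schema.
const metric = {
  id: "delivery-speed",
  name: "Delivery Speed",
  customerQuestion: "Are we delivering when we promised?",
  formula: "onTimeDeliveries / totalDeliveries",
  directionality: "higher-is-better",
  segmentation: ["geo", "speedTier"],
  dataContract: { columns: ["geo", "segment", "actual"], interval: "monthly" }
};

const slide = {
  metricId: metric.id,        // structure inherited from the metric
  period: "2024-12",          // period-specific data and narrative
  owner: "pm@example.com",
  narrative: null             // authored fresh each cycle
};

const deck = {
  period: "2024-12",
  slides: [slide]             // ordered references; a deck orchestrates, never owns
};

const report = {
  cadence: "monthly",         // drives slide creation, deadlines, review cycles
  metrics: [metric.id],
  phases: { dataDue: 7, reviewDue: 18, publishDue: 27 }   // day of month
};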

The End-to-End Journey

How a metric moves from definition to the leadership deck, and who is involved at each stage.

Legend: POC = Point of Contact · Mgr = Manager · DA = Data Analyst · PM = Program Manager · Dir = Director · LD = Leadership

  Onboarding (non-workflow) · M1
    Metric definition created
    Data contract & segmentation agreed
    Data pipeline set up
    Permissions configured
    POCs identified for data & narratives

  Data Update (workflow) · M2
    Slide instance created
    Monthly data uploaded
    Data sanity checked

  Collaboration · M3
    Inline comments on slide sections
    Discussion threads resolved

  Narratives & Review · M4
    Narrative drafted on slide
    Host review completed
    All slides finalized
    Director review
    Leadership review

  Publication · M5
    Deck assembled & published
    Deck reviewed

Milestone 1: The Metric Library

A metric in the library is a defined object, not a row in a spreadsheet. It carries a name, taxonomy, Customer Question, measurement formula, and a data contract. Every field is load-bearing downstream: the Customer Question becomes slide copy, directionality determines how trend lines render, and segmentation determines ownership and reporting. Beyond these structural fields, each metric captures measurement details (statistic type, units, sampling method, and directionality) that drive data collection, trend rendering, and goal evaluation.

The data model

Measurement & reporting intervals

Two intervals govern every metric: measurement (how often data is captured) and reporting (how often it surfaces in review). A metric measured daily might report monthly.

Segmentation and partitions

Partitions define how data is reported: by geography, speed tier, or business attribute. Each partition carries its own goal, justification, and operational context.
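
A sketch of how intervals and partitions might be represented; the records are illustrative and mirror the example library entry that follows:

const intervals = {
  measurement: "monthly",   // how often data is captured
  reporting: "monthly"      // how often it surfaces in review
};

const partitions = [
  // Each partition carries its own goal, justification, and status.
  { geo: "US", segment: "2-Day", status: "active",  goal: 96.0,
    justification: "Core delivery promise; highest volume tier" },
  { geo: "EU", segment: "2-Day", status: "active",  goal: 94.0,
    justification: "Cross-border complexity lowers baseline" },
  { geo: "IN", segment: "1-Day", status: "pending", goal: 95.0,
    justification: "Limited to top 6 metro areas" }
];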

[Metric Library › Browse Metrics › Delivery Speed; last modified Nov 28, 2024]

Measurement details: statistic = percentage (0–100%), units = on-time rate per segment, sampling = full population, directionality = higher is better. Reporting contract: monthly measurement interval, monthly reporting interval, monthly data as snapshot, year-to-date as cumulative.

Segments and goals:

  GEO  Segment      Status   Goal     Justification
  US   2-Day        Active   ≥ 96%    Core delivery promise; highest volume tier across US fulfillment
  US   1-Day        Active   ≥ 98%    Premium tier with higher SLA; tied to membership benefits
  US   Same-Day     Active   ≥ 99%    High-value promise; requires near-perfect execution
  US   4-Hour Rush  Active   ≥ 99.5%  Newest tier; limited metro coverage, zero tolerance for misses
  EU   2-Day        Active   ≥ 94%    Cross-border complexity lowers baseline; customs variability
  EU   1-Day        Active   ≥ 97%    Available in DE, FR, IT metro areas; expanding to ES, NL
  EU   Same-Day     Pending  ≥ 98%    Pilot phase; limited to Berlin, Paris, Milan metro areas
  IN   2-Day        Active   ≥ 92%    Infrastructure variability across Tier 1–3 cities
  IN   1-Day        Pending  ≥ 95%    Limited to top 6 metro areas; warehouse proximity required
  BR   3-Day        Active   ≥ 90%    Longer baseline window due to regional logistics constraints
  BR   1-Day        Pending  ≥ 94%    São Paulo and Rio metro only; carrier partnership phase

Each row also carries a goal statement, e.g. "On-time delivery rate for standard 2-day shipping tier" for US 2-Day.

Per-segment data points, US › 2-Day (Active):

               Jan   Feb   Mar   Apr   May   Jun   Jul   Aug   Sep   Oct   Nov   Dec
  2024 Goal    96.0  96.0  96.0  96.0  96.0  96.0  96.0  96.0  96.0  96.0  96.0  96.0
  2024 Actual  94.2  94.5  94.8  95.1  95.3  95.0  95.4  95.7  95.9  96.1  96.3  96.5
  vs Goal      -1.8  -1.5  -1.2  -0.9  -0.7  -1.0  -0.6  -0.3  -0.1  +0.1  +0.3  +0.5
  MOE          ±0.3  ±0.3  ±0.2  ±0.2  ±0.2  ±0.3  ±0.2  ±0.2  ±0.2  ±0.2  ±0.2  ±0.2
  Sample size  42K   41K   44K   45K   46K   43K   47K   48K   46K   49K   51K   53K
  2023 Actual  92.1  92.4  92.7  93.0  93.2  92.8  93.5  93.8  94.0  94.2  94.1  94.3

(Collapsed panels: US › 1-Day, US › Same-Day, EU › 2-Day.)

The permission model

The system enforces a clear boundary between who maintains the platform and who uses it to produce reports, and every action in the system maps to one side of this line.

Host (Platform team)
├─ Define taxonomy & data contracts
├─ Configure intervals & segmentation
├─ Design slide templates
└─ Set workflow stages & gates

Client (Business teams)
├─ Upload data for segments
├─ Write narratives & context
├─ Review & approve slides
└─ View & present decks

Milestone 2: Metric Presentation

The slide is the presentation unit of a metric. The slide designer lets teams define the anatomy of their slide: what sections exist, what elements each contains, and which are system-controlled versus team-authored. In practice, every slide is divided into sections, each containing elements such as headings, text, data tables, and charts, so two teams can have slides that look different while following the same structural contract.

Slide structure

  Slide
    ├─ Header
    │    ├─ Heading       ← metric name            synced to slide settings
    │    ├─ Text          ← description            synced to slide settings
    │    └─ Text          ← why it matters         synced to metric metadata
    ├─ Data
    │    ├─ Table         ← values by segment      synced to metric data
    │    └─ Chart         ← trend line             synced to metric data
    ├─ Narrative
    │    ├─ Text          ← performance summary    authored by user
    │    └─ Text          ← planned actions        authored by user
    ├─ Supplemental
    │    ├─ Text          ← follow-up question     authored by user
    │    └─ Text          ← answer                 authored by user
    └─ Footer
         └─ Details       ← measurement details    synced to metric metadata
Synced elements

Fields marked [synced] pull directly from the metric library and data pipeline. When the upstream definition changes, the slide reflects it automatically.

Authored elements

Fields marked [authored] are written by the team each cycle: the narrative, planned actions, and supplemental context. The designer defines where these appear; content is produced fresh each period.

Slide canvas

A key requirement was WYSIWYG fidelity: the slide as seen in the web editor must match the slide when printed. Every slide is defined by a single structured specification (the DIP). Two parallel rendering paths consume it: the interactive path renders for the web canvas with editing controls and commenting anchors; the document path generates print-ready output through a docx pipeline. Both read the same DIP; neither modifies it.

[Slide canvas. Left: the structural palette listing each section and its elements (Header: metric name, description, "Why important?" Q&A; Data: metrics table, line chart; Narrative: performance Q&A, planned-actions Q&A; Supplemental: Q&A). Right: the rendered Delivery Speed slide, with the synced description and per-segment data table, and placeholder prompts for the authored fields ("Describe the narrative performance for this month", "What are the planned actions based on this performance?", "Add follow-up questions…").]

Milestone 3: Metric Collaboration

Slides move through multiple reviewers, each leaving feedback on specific content sections. The collaboration layer provides inline commenting and threaded discussions anchored directly to slide content.

Inline commenting

Comments anchored to specific content sections within a slide. Reviewers mark exactly which part they're responding to. Comments persist across sessions and survive content changes.

Discussion threads

Threaded conversations with nested replies, quote references, and status tracking (resolved, unresolved, pinned). Any comment can escalate into a formal task with assignment and due dates.
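
One way to make comments survive content changes (a sketch; the production anchoring model isn't shown here) is to bind them to stable section and element identifiers rather than character offsets:

const comment = {
  id: "c-412",
  slideId: "delivery-speed-2024-12",
  // Anchored to stable structural IDs, not text offsets, so the
  // thread survives edits to the underlying content.
  anchor: { section: "narrative", element: "performance-summary" },
  status: "unresolved",          // resolved | unresolved | pinned
  thread: [
    { author: "S. Rao",  body: "Is the 0.6pp gain sustainable?" },
    { author: "J. Chen", body: "Yes, it is a permanent infrastructure change." }
  ],
  task: null                     // any comment can escalate into a formal task
};
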
[Slide reporting view: Delivery Speed, December 2024. The finalized slide shows the synced description, data table and trend chart, the authored narrative and planned actions, and a supplemental Q&A on the Q4 improvement (route optimization plus two new sort centers adding ~0.6pp). The discussion panel sits alongside:]

Discussion
  S. Rao (2d ago): The Q4 jump in US 2-Day is notable — is the 0.6pp from route optimization sustainable or a one-time gain?
  J. Chen (2d ago): Sustainable — the route optimization is a permanent infrastructure change, not a seasonal adjustment. Updated supplemental with the detail.
  M. Kim (1d ago): Should we highlight the predictive routing pilot for EU more prominently? Leadership will want to know the timeline.
  J. Chen (1d ago): Good call — added highlight on the EU predictive routing action. Pilot launches Q1 with results expected by end of Q2.

Milestone 4: Metric Reporting

The reporting workflow moves slides through production each month. Every slide follows a defined cycle (Kick Off, Content, Review, Finalization, Closeout), each with its own participants and sign-off requirements.

Kicking off the workflow

Each period begins with the host team opening all metric slides, setting deadlines, and notifying responsible parties. From that point, the activity tracker makes the pipeline visible, surfacing every metric, every stage, and every outstanding action. When fifty slides need to reach leadership in four weeks, the central question becomes "where is everything right now?"

[Workflow tracker, December 2024: each slide tracked across Kickoff, Pre-checks, Initial Writing, Bar-raise Checks, Manager/Director/VP review, Publish, Sr. Lead, and CEO stages, with a date stamped per completed stage.]

  Slide                  Stage dates
  Delivery Speed         Nov-04 → Nov-06 → Nov-11 → Nov-18 → Nov-22 → Nov-27
  Customer Satisfaction  Nov-04 → Nov-08 → Nov-14 → Nov-19 (turn 2) → Nov-25 → Dec-02
  Revenue Growth         Nov-04 → Nov-07 → Nov-12 → Nov-18 → Nov-22 → Nov-28
  Order Defect Rate      Nov-04 → Nov-10 → Nov-15 (turn 2) → Nov-20
  Seller Experience      Nov-04 → Nov-06 → Nov-11 → Nov-17 → Nov-22
  Inventory Health       Nov-04 → Nov-07 → Nov-13 → Nov-19 → Nov-25

Data upload flow

Data owners upload metric values against the data contract defined in the library, and the system validates format, completeness, and alignment before committing anything to the reporting period.

  Data owner
    → Select metric and reporting period
    → Upload data file
    → Validation against data contract
    → Metric data committed for that period
      → Slide data table and chart updated
        → Stakeholders notified
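
A minimal sketch of that validation step, assuming a contract that carries the column list and partition roster from the metric library (names illustrative):

function validateUpload(contract, rows) {
  const errors = [];

  // Format: every row must carry the contracted columns.
  rows.forEach((row, i) => {
    for (const col of contract.columns) {
      if (!(col in row)) errors.push(`row ${i}: missing "${col}"`);
    }
  });

  // Completeness: every active partition needs a value this period.
  const seen = new Set(rows.map((r) => `${r.geo}/${r.segment}`));
  for (const p of contract.partitions) {
    if (p.status === "active" && !seen.has(`${p.geo}/${p.segment}`)) {
      errors.push(`no data for active segment ${p.geo}/${p.segment}`);
    }
  }

  // Only a fully valid upload is committed to the reporting period.
  return { ok: errors.length === 0, errors };
}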

Narrative authoring flow

Once data is committed, program managers write the narrative: performance summary, planned actions, and supplemental context.

  Program manager
    → Open slide for the reporting period
    → Write performance summary
    → Write planned actions
    → Author supplemental Q&A
      → Mark narrative as complete
        → Slide advances to review

Review flow

Review proceeds along two parallel tracks: the business team lead collects stakeholder sign-offs on narrative accuracy while the host team validates numbers and leaves comments. Both tracks must fully resolve before a slide can advance to finalization.

  Business team lead
    → Collect approvals from business stakeholders
    → Verify narrative accuracy
    → Respond to host team comments
      → All approvals received
        → Slide advances to finalization
          → Host team validates final numbers
            → Slide ready for deck
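
The gate itself reduces to a conjunction over the two tracks. A sketch, assuming each slide tracks its approvals, host validation, and comment statuses (fields illustrative):

function canFinalize(slide) {
  // Track 1: business stakeholders have all signed off.
  const businessDone =
    slide.approvals.length > 0 &&
    slide.approvals.every((a) => a.status === "approved");

  // Track 2: host team validated the numbers and no comment is open.
  const hostDone =
    slide.hostValidation === "passed" &&
    slide.comments.every((c) => c.status === "resolved");

  // Both parallel tracks must fully resolve before finalization.
  return businessDone && hostDone;
}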

Milestone 5: The Deck

A deck is a collection of slides for a specific reporting month, curated for a particular leadership audience. By the time a deck exists, every layer below it has done its work: metrics are defined, data is uploaded, narratives are written, and slides have passed through the review workflow. Because the deck is curated for reading rather than editing, the slide surface strips editing controls and presents each slide in its final, print-ready form, rendering as a WYSIWYG document that can be exported to PDF for leadership distribution.

[Deck review: CEO Business Review, December 2024. Ordering modes: Default, Performance, Custom. Slide order: 1 Delivery Speed, 2 Customer Satisfaction, 3 Revenue Growth, 4 Order Defect Rate, 5 Seller Experience, 6 Inventory Health. Each slide renders in its final print-ready form: synced description, per-segment data tables and trend charts spanning US, EU, IN, and BR, and finalized narratives and planned actions (e.g., "Intl 3P Marketplace led growth at +22.5% in Q4, accelerating each quarter"). Deck-level comments sit alongside:]

  V. Patel (2h ago): Can we add a Q3 vs Q4 delta column to the delivery speed table?
  M. Kim (1h ago): BR narrative approved. Numbers look solid.

Slide ordering

Before a deck is published, the curator sets the slide ordering, choosing from three available modes. Ordering is the last gate in the publication pipeline:

  Data uploaded
    → Narratives finalized
      → Reviews complete
        → Ordering configured
          → Deck published

  Default ordering
    Slides appear in the order they were added

  Performance ordering
    Sorts by delta from goal using each metric's
    high-is-better / low-is-better property
    Worst-performing metrics surface first

  Custom ordering
    User-defined drag-and-drop sequence
    Lets curators build a specific narrative arc

Ordering is computed once, not live: by the time a deck reaches the ordering stage, the reporting month's data is locked and narratives are finalized. Since the underlying data does not change after publication, the computed order is stable. This separates deck ordering from dashboard ordering, where live data drives dynamic sorting.
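
Performance ordering falls directly out of the metric definition: the signed delta from goal, flipped by the metric's directionality, yields a single sort key. A sketch with illustrative field names:

function performanceOrder(slides) {
  // Positive miss = behind goal, regardless of direction.
  const miss = (s) =>
    s.directionality === "higher-is-better"
      ? s.goal - s.actual
      : s.actual - s.goal;

  // Worst-performing metrics surface first. Computed once at
  // publication time, since the period's data is already locked.
  return [...slides].sort((a, b) => miss(b) - miss(a));
}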

Reflections

Push harder for the declarative path

The slide designer shipped with a visual configurator. An early proposal for a JSON-based declarative config never gained enough traction. With the rise of AI tooling, a declarative config would have made slide creation scriptable and composable.
[Slide configurator, visual mode: pick a section (Header, Data, Narrative), an element (e.g., Text Box), a source (Synced or Authored), and a label, with a live preview alongside. The same structure expressed as the proposed JSON config:]
{
  "sections": [
    {
      "type": "header",
      "elements": [
        { "kind": "text",
          "source": "synced",
          "label": "Metric name" },
        { "kind": "text",
          "source": "synced",
          "label": "Description" }
      ]
    },
    {
      "type": "data",
      "elements": [
        { "kind": "table",
          "source": "synced" },
        { "kind": "chart",
          "source": "synced" }
      ]
    },
    {
      "type": "narrative",
      "elements": [
        { "kind": "text",
          "source": "authored",
          "label": "Summary" },
        { "kind": "text",
          "source": "authored",
          "label": "Actions" }
      ]
    }
  ]
}
Design for people who wear multiple hats

The system modeled eight to nine distinct roles. In practice, people wore multiple hats: a program manager might also be the narrative author and reviewer. Role models should reflect how people actually work, not how org charts describe them.

Adoption

Osprey rolled out at the end of 2024 across Seller, Consumer, and AWS.

Headline results:
  Reduction in non-value-add effort
  Metrics teams onboarded
  Metrics managed on platform
  Reduction in operational headcount

Non-value-add effort includes standardized communication, data validation, formatting, minor content edits, and slide management such as copy-pasting and uploading content. Over the course of one year, the pilot team, a 12-person Product and Customer Insights Management (PCIM) group, transitioned from fully manual operations to a workflow that two program managers can run end-to-end.

Appendix

What are the detailed host vs. client permissions across each milestone?

Every milestone in the system has a distinct set of actions split between Host (platform team) and Client (business teams). The Host side governs configuration, governance, and infrastructure, while the Client side governs content authoring, review participation, and consumption.

Host (Platform team)

├─ Metric Onboarding
│    Define taxonomy
│    Create metric definitions
│    Set data contracts
├─ Updating Data
│    Configure intervals
│    Manage segmentation
│    Validate uploads
├─ Metric Presentation
│    Design slide templates
│    Manage synced fields
├─ Metric Reporting
│    Set workflow stages & gates
│    Assign reviewers
└─ The Deck
     Publish deck configurations
     Set audience access

Client (Business teams)

├─ Metric Onboarding
│    Browse metric library
│    Request new metrics
├─ Updating Data
│    Upload data for segments
├─ Metric Presentation
│    Write narratives
│    Author supplemental context
├─ Metric Reporting
│    Advance stages
│    Review & approve slides
│    Add discussion comments
└─ The Deck
     View & present decks

How does the WYSIWYG split rendering work?

Two parallel rendering paths consume the same DIP: the interactive path renders for the web canvas with editing controls and commenting anchors; the document path generates print-ready output through a docx pipeline. Both read the same specification; neither modifies it.

WYSIWYG Render Pipeline
Slide Definition (DIP)
{
  "metric": "Delivery Speed",
  "period": "Dec 2024",
  "sections": [
    {
      "type": "header",
      "title": "Delivery Speed",
      "desc": "On-time delivery %",
      "why": "Customer promise"
    },
    {
      "type": "data",
      "cols": ["Segment","Actual","Goal"],
      "rows": [
        ["US 2-Day", 96.5, 95.0],
        ["US 1-Day", 98.4, 97.0],
        ["EU 2-Day", 94.2, 95.0]
      ]
    },
    {
      "type": "narrative",
      "body": "US tiers improved.
        EU missed by 0.8pp."
    }
  ]
}
Document Generation
function renderDocx(dip) {
  const doc = new Document();

  for (const sec of dip.sections) {
    if (sec.type === "header") {
      doc.addHeading(sec.title);
      doc.addSubtitle(sec.desc);   // DIP carries desc + why, not subtitle
      doc.addParagraph(sec.why);
    }

    if (sec.type === "data") {
      // One header row plus one row per data entry,
      // sized to the contracted column list.
      const tbl = doc.addTable(
        sec.rows.length + 1, sec.cols.length
      );
      sec.cols.forEach((c, j) => tbl.cell(0, j).text(c));
      sec.rows.forEach((r, i) => {
        r.forEach((v, j) => tbl.cell(i + 1, j).text(String(v)));
      });
    }

    if (sec.type === "narrative") {
      doc.addParagraph(sec.body);
    }
  }

  return doc.toBuffer();
}
[Web view and PDF view render identically from the same DIP; the web view additionally carries commenting anchors:]

Delivery Speed
Measures on-time delivery percentage across speed tiers and geographies
Why it matters: directly tied to customer promise and repeat purchase rate

  Segment    Actual   Goal
  US 2-Day   96.5%    95.0%
  US 1-Day   98.4%    97.0%
  EU 2-Day   94.2%    95.0%

On-time delivery improved across US tiers. EU 2-Day missed goal by 0.8pp due to carrier capacity constraints during peak.

How did we emulate role-based access?

A global role switcher reconfigured the entire application, including permissions, visible metrics, available actions, and review workflows, across eight to nine distinct roles.
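
One plausible shape for such a switcher, sketched with illustrative role records rather than the production model: derive the entire view configuration from a single role entry, so changing roles reconfigures everything in one pass.

const roleConfigs = {
  "cxm-lead":         { edit: true,  approve: false, metrics: "owned" },
  "business-analyst": { edit: true,  approve: false, metrics: "owned" },
  "host-reviewer":    { edit: false, approve: true,  metrics: "all" },
  "vp-sponsor":       { edit: false, approve: true,  metrics: "all" }
};

function viewFor(roleId) {
  const cfg = roleConfigs[roleId];
  // Permissions, visible metrics, and available actions all derive
  // from one record; switching roles swaps the record and re-renders.
  return {
    ...cfg,
    actions: cfg.approve ? ["review", "approve"] : ["edit", "submit"]
  };
}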

[Role switcher: viewing as CXM Lead, with CXM Lead, Business Analyst, Host Reviewer, and VP Sponsor among the available roles; the task list (Q3 Revenue Slide: drafting; Q3 Retention Slide: not started; Q4 Planning Deck: not started) reflows to the selected role.]

How did we emulate different canvas layout variants?

The application coexisted with a host app that had its own navbar and sidebar, so we tested three integration strategies: inserting within the host, taking over parts of the host shell, and completely replacing it. In editing modes, the host navigation was hidden entirely to provide a focused workspace.

[Canvas hosting variants: v1 inset within the host navigation, v2 with the host navigation replaced, and the final standalone layout.]

How did we integrate Cloudscape theming?

The host app used Cloudscape design tokens, so we built a test bench that loaded default token configurations, allowed live editing, and persisted changes via Lambdas, S3, and DynamoDB.

[Theme bench: loads native Cloudscape tokens, fetches user-defined tokens from DynamoDB, merges the configurations, and refreshes a live preview of a sample Metric Detail screen.]

How do the AI integrations serve different workflow moments?

Two AI surfaces target distinct moments in the reporting workflow, and in both cases AI-generated content is always visually distinguished so that no AI output reaches other users without explicit human review.

AI Writer

Integrates directly into the TipTap editor for narrative authoring. AI-generated text inherits all editor capabilities. The same inline AI pattern is explored in Writing Canvas: Inline AI Nodes.

AI Copilot

A multi-turn conversational AI in the split panel alongside discussions. Normalized as a workflow participant, not a separate tool. The pattern is covered in Writing Canvas: Companion AI Panel.

[AI Writer + Copilot: the Performance Summary section carries an inline AI Writer, while the Discussions split panel hosts the AI Copilot in multi-turn conversation, e.g., tracing an EU-West latency spike to a 14:32 UTC deploy and drafting a root-cause summary for the narrative section.]

How do task types map to the reporting lifecycle?

The same three action types used in Inquiry Hub apply here, mapped to the reporting lifecycle:

  Action type     Reporting tasks                                    Trigger
  Implementation  Data update, slide data refresh, narrative update  Manual or automated
  Approval        Slide review, slide approval                       Manual or automated
  Change request  Slide audit                                        Manual

Any discussion comment can generate a formal task that inherits the comment's context (which slide, which section, what the issue is). The task enters the management system with a pending status and appears in the assignee's hub dashboard.
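
A sketch of that escalation, reusing the illustrative comment shape from Milestone 3:

function escalateToTask(comment, assignee, dueDate) {
  return {
    // One of the three action types; review comments typically
    // become approval or change-request tasks.
    type: "change-request",
    status: "pending",           // enters the management system pending
    assignee,
    dueDate,
    // The task inherits the comment's full context.
    context: {
      slideId: comment.slideId,
      section: comment.anchor.section,
      issue: comment.thread[0].body
    }
  };
}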