Inquiry Hub
The strategic prototyping story
**What:** Some customer issues bypass normal support and become prioritized inquiries. These cases carry strict SLAs, require cross-team investigations, and involve multiple review stages with senior leadership. Nobody could see where a case stood in its lifecycle.

**Why:** The domain was too complex to spec on paper. A strategic prototype could validate the interaction model, per-team workflow configurability, and the attention system before a single engineering resource was committed.

**How:** I built a working platform that converges three core components: a collaboration space for investigation documents, a configurable workflow engine that adapts to each case type, and an action item system that ties findings to accountability.
Overview
Some customer issues bypass normal support channels. These elevated cases carry strict SLAs and require dedicated investigation, multi-level review, and structured accountability across teams. The tooling was fragmented: cases in one system, action items in spreadsheets, documents in word processors. Inquiry Hub is a fully interactive prototype that reimagines this process as a unified platform with flexible document authoring built into the case context and per-case workflow configurability through workspace configuration rather than engineering changes.
Pain Points
Cases and action items lived in separate systems despite being a single logical process. Updating one required manually synchronizing the other. Spreadsheets, documents, and status trackers were all maintained independently.
The existing tool showed cases in a tabular format with metadata columns (status, owner, dates). Users had to infer urgency from raw metadata rather than having the system surface what actually needed their attention. Triage meant scanning rows and mentally computing priority.
There was no single tool that visualized where a case stood in its lifecycle. Milestones, review stages, and handoffs were tracked across disconnected systems. Nobody had an at-a-glance view of the full process or could tell which phase a case was in without assembling the picture manually.
Different teams managed different case types with overlapping but distinct workflows. Routing cases between teams meant going to Outlook, Slack, and other messaging systems to manually coordinate handoffs. Every new team required custom tooling rather than configuration of a shared platform.
The End-to-End Journey
This is the end-to-end mental model of how an inquiry flows through the system, and who is involved at each stage.
**CM** Case Manager · **CS** Customer Support · **IN** Investigator · **PT** Partner Teams · **RV** Reviewer · **LD** Leadership
| Phase | Activity |
|---|---|
| Intake | Email arrives with customer issue |
| | Case created in the system |
| Investigation | Shallow dive conducted |
| | Customer rescue completed |
| | Deep dive conducted |
| | Customer journey reconstructed |
| | Root causes identified |
| | Action items created & assigned |
| Writing | Investigation document drafted |
| | Document reviewed with partner teams |
| Review | Review with bar raiser |
| | Review with partner teams |
| | Review with leadership |
| Closing | Share-out completed |
| | Action items published |
The Entities
Two entities, because the domain has two fundamentally different workflow shapes. Collapsing them into a single model would make cases too rigid or action items too manual.
| | Case | Action Item |
|---|---|---|
| Definition | An investigation record with milestones, documents, and review cycles | A corrective action spawned from case findings, with its own owner and deadline |
| Relationship | Spawns action items from investigation findings | Completion feeds back into the parent case's closure eligibility |
| Workflow | Manual: every gate is raised by a human | Automated: the system monitors health and deadlines |
| Lifecycle | Intake → Investigation → Writing → Review → Closing | Publishing → Implementation → Closure |
| Milestones | All milestones configurable | Implementation milestones configurable |
| Review stages | Review stages available | Fixed review stage |
| Metadata fields | Configurable | Configurable |
| Document templates | Configurable | Partially configurable |
| Priority routing | No priority thresholds | Automated priority routing |
| DFD milestones | Not available | Available |

Shared by both entities: workspace configuration, covering templates, fields, permissions, and role-based access.
The Core Components
The pain points pointed to three gaps: no shared space for investigation work, no adaptable workflow structure, and no connected lifecycle across cases and actions. Each gap needed its own solution, but the solutions had to converge into a single experience, not three tools stitched together.
- **The workflow component:** Replace one-size-fits-all process assumptions with a platform where each team can define its own milestones, review stages, field requirements, and SLA rules, turning "build features for each team" into "configure a shared platform."
- **The editor component:** Give investigators, reviewers, and authors a shared environment where documents, discussions, and AI suggestions live alongside case context, eliminating the word-processor-to-tracker context switch that fragmented the investigation process.
- **The collab space:** Bring the editor and workflow components together into a single screen where investigation documents, milestone status, threaded discussions, and AI-assisted suggestions coexist. No more switching between a word processor, a case tracker, and an email thread to understand where a case stands.
- **The hub:** Give every user a single entry point that answers "what needs my attention right now?" by surfacing urgent items, grouping them by attention type, and sorting by priority so nothing falls through the cracks.
Component 1: The Workflow
Every case type follows a different process. A VP inquiry follows a different review chain than a customer communication case, different teams use different terminology for the same concepts, and some case types require fields that others don't. The original ask was "support different case types." The prototype provoked a bigger vision: a configurable platform where each team tailors the workflow to its needs without engineering changes.
User flow
An inquiry arrives and enters the system as a case. The case manager updates the due date and intake date, confirms estimated timelines for each activity, and publishes the case. The investigation lifecycle begins.
Inquiry intake received
→ Case manager updates due date and intake date
→ Case manager confirms ETA for each activity
→ Case manager publishes case
→ Investigation lifecycle begins

Milestone tracker presentation
Both cases and action items visualize their progress through milestone trackers. The tracker groups activities into semantic phases and shows each item's position across the full lifecycle at a glance.
(Illustration: a milestone tracker plotting Cases 1–3 against Activities 1–8, grouped into Phases 1–4.)
Component 2: The Editor
Each case document is divided into independently editable sections: Executive Summary, Customer Journey, Root Causes & Action Items, and Appendix. Different team members can edit different sections simultaneously based on their role. A section-level lock prevents conflicting edits: when one author is working on Root Cause Analysis, others see the lock and work on other sections. Version controls allow comparing drafts across editing sessions.
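A minimal sketch of how such a section-level lock might behave; the class and method names here are illustrative, not the prototype's actual API:

```typescript
// Illustrative section-lock model: at most one holder per section at a time.
type SectionId = "executiveSummary" | "customerJourney" | "rootCauses" | "appendix";

class SectionLocks {
  private holders = new Map<SectionId, string>(); // section -> userId

  // Returns true if the user acquired (or already holds) the lock.
  acquire(section: SectionId, userId: string): boolean {
    const holder = this.holders.get(section);
    if (holder !== undefined && holder !== userId) return false; // someone else is editing
    this.holders.set(section, userId);
    return true;
  }

  // Only the current holder can release; anyone else is a no-op.
  release(section: SectionId, userId: string): void {
    if (this.holders.get(section) === userId) this.holders.delete(section);
  }

  holderOf(section: SectionId): string | undefined {
    return this.holders.get(section);
  }
}

const locks = new SectionLocks();
locks.acquire("rootCauses", "alice");                  // alice starts editing Root Causes
const bobBlocked = locks.acquire("rootCauses", "bob"); // → false: bob sees the lock
const bobOk = locks.acquire("appendix", "bob");        // → true: other sections stay open
```

The key property is the one described above: a lock is scoped to a section, not the document, so concurrent authors route around each other instead of queueing.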
Sample section content from the prototype:

> Customer reported intermittent service degradation affecting order processing in the EU-West region. Impact assessment indicates approximately 340 orders experienced delays exceeding the published SLA during a 72-hour window.
>
> The customer was contacted within 4 hours of the initial report. An incident bridge was established and root cause investigation initiated immediately.
>
> March 2, 09:14 UTC — First customer report via support channel.
> March 2, 13:40 — Case escalated to dedicated team.
> March 3, 08:00 — Customer outreach call completed.
> March 5 — Interim remediation deployed.
>
> 3 attachments · Load balancer config diff, traffic analysis, customer timeline
This section model also drives per-section permissions, so an investigator might have edit access to Root Causes but read-only access to Executive Summary. Testing revealed that teams needed exactly this level of granularity to match how they actually collaborate on investigation documents, a requirement that only surfaced once the prototype made the editing experience tangible.
The editing box
Each document section is a self-contained editing box. The anatomy of a single box shows how workspace configuration, user permissions, and AI assistance converge on one unit:
AI suggestions by Hub Assistant
The design philosophy here is contextual suggestions over conversational chatbots: AI comes to the user as they write, rather than requiring them to leave their document and open a separate chat. The Hub Assistant analyzes investigation content and produces root cause and action item suggestions in a side panel, and no AI-generated content reaches other users without explicit human review.
Component 3: The Collab Space
Investigation teams needed a place to write. The existing workflow had them authoring findings, root cause analysis, and remediation plans in word processors, then manually linking those documents back to case trackers. Context was scattered across tools: the investigation narrative in one application, the case metadata in another, and reviewer feedback buried in email threads, so every context-switch became a chance for something to fall through.
The prototype addressed this fragmentation by embedding rich text editing directly into the case context: workflow status at the top, section-based documents in the center, and threaded discussions on the right. Inline commenting anchored reviewer feedback to specific content rather than letting it float in separate email threads.
Discussion UX
The original threading model used full-panel navigation where clicking a thread replaced the entire view, making it impossible to reference the document while composing a reply. The redesign uses inline expand-in-place threads that keep the document visible, which was the single most requested capability during user testing. I cover the editing and commenting system in depth in my Writing Canvas project.
Component 4: The Hub
The hub page is the single entry point every user sees after login. Rather than dumping a flat list of cases, it answers "what needs my attention right now?" through a table that groups items by attention type, shows status context, and surfaces the most urgent work first.
Arriving at this model took three iterations. A simple badge count ("7 items") failed because it did not distinguish urgent from routine. A flat categorized list improved on that but did not scale past 20 items. The final version groups by attention type with priority ordering and due date proximity, and a role toggler switches between Primary POC and Reviewer views so each persona sees only what is relevant to them.
How the hub determines what is urgent
The hub aggregates attention items from cases and action items, grouped by type and sorted by urgency. The computation is role-aware: a case owner sees SLA alerts, an investigator sees review requests and milestone deadlines.
Priority 0 Drafts // unpublished, action needed to start
Priority 1 SLA breached // past deadline, highest urgency
Priority 2 At risk // approaching deadline
Priority 3 On track // open, within SLA
Priority 4 Closed, pending // closed but action items still open
Priority 5 Closed // fully resolved
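This two-level ordering, tier first and then due-date proximity within a tier, can be sketched as a single comparator; the `AttentionItem` shape is assumed for illustration:

```typescript
// Sort attention items by urgency tier, then by nearest due date within a tier.
interface AttentionItem {
  title: string;
  tier: number;    // 0 = Drafts … 5 = Closed, per the tiers above
  dueDate: string; // ISO date
}

const byUrgency = (a: AttentionItem, b: AttentionItem): number =>
  a.tier - b.tier || Date.parse(a.dueDate) - Date.parse(b.dueDate);

const items: AttentionItem[] = [
  { title: "On-track case",  tier: 3, dueDate: "2025-03-20" },
  { title: "Breached SLA",   tier: 1, dueDate: "2025-03-01" },
  { title: "At risk, soon",  tier: 2, dueDate: "2025-03-04" },
  { title: "At risk, later", tier: 2, dueDate: "2025-03-10" },
];

const ordered = [...items].sort(byUrgency).map((i) => i.title);
// → ["Breached SLA", "At risk, soon", "At risk, later", "On-track case"]
```

The comparator makes the role-aware part cheap: each persona's aggregation feeds a different item list, but the same ordering applies to all of them.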
Within each tier → sort by due date proximity

Why Prototyping
The domain was too complex to build on faith. Stakeholders understood the pain, but abstract discussions generated agreement without commitment, so the prototype itself became the deliverable: a tangible artifact that could provoke concrete feedback in ways that slide decks and specification documents never did.
Questions about document editing within case workflows or role-based permissions at scale were answered with working interactions, not slide decks.
The attention system, activity bucketing, and workspace configuration all emerged from building, not from requirements documents.
Workspace configuration shifted the conversation from "build features for each team" to "build a platform teams can configure." The AI maturity slider gave leadership a framework for AI investment. Neither was requested.
Realistic data and role-based behavior meant participants engaged with the prototype as a real tool, revealing interaction problems that static mockups could not surface.
Reflections
- **Fidelity earns trust:** Realistic data and role-based behavior meant participants engaged with the prototype as a real tool, not a wireframe exercise. The prototype's purpose is persuasion, and persuasion requires that stakeholders forget they're using a prototype.
- **Some complexity is inherent:** The case detail controller orchestrates milestones, documents, action items, reviews, permissions, and state transitions. That complexity exists in the domain, not the design. The mitigation was rigorous typing and clear naming, not forced decomposition.
Adoption
Since launching in mid-2025, Inquiry Hub has processed 600+ inquiries across 12 teams with no formal training program. Teams onboarded through word of mouth and peer referral, reaching 1,150+ monthly active users within eight months.
Appendix
What flexibility is available per workspace?

Every workspace can tailor the platform across three dimensions without code changes:

- **Fields:** metadata fields are configurable per workspace.
- **Document templates:** each type and subtype defines its own document skeleton.
- **Workflow milestones:** fixed milestones plus custom milestones per type.
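These three dimensions could be captured in a single configuration object per workspace. The shape below is an illustration only, not the prototype's actual schema:

```typescript
// Hypothetical per-workspace configuration covering the three dimensions.
interface WorkspaceConfig {
  fields: { key: string; label: string; required: boolean }[];
  documentTemplates: Record<string, string[]>; // case type/subtype -> section skeleton
  milestones: { fixed: string[]; custom: string[] };
}

// Example workspace for an invented "vp-inquiry" case type:
const vpInquiryWorkspace: WorkspaceConfig = {
  fields: [
    { key: "severity", label: "Severity", required: true },
    { key: "region",   label: "Region",   required: false },
  ],
  documentTemplates: {
    "vp-inquiry": ["Executive Summary", "Customer Journey", "Root Causes & Action Items", "Appendix"],
  },
  milestones: {
    fixed:  ["Received", "Published"],
    custom: ["Bar raiser review", "Leadership review"],
  },
};
```

The point of a shape like this is that "support a new team" becomes authoring one such object rather than an engineering change.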
How did we emulate role-based access?
The permission system handles a combinatorial challenge:

- 8 roles × 50+ actions for cases
- 7 roles × 40+ actions for action items

Rather than maintaining hundreds of individual boolean mappings, composable permission presets organized in three tiers keep the matrix manageable. A preset factory creates named permission groups. Each preset is a dictionary of action keys mapped to booleans:
```typescript
// createPermissionPreset({ ACTION: true, ... }) → PermissionsByRole
const READ_ACCESS = createPermissionPreset({
  VIEW_NOTES: true, VIEW_REVIEW_LOG: true, VIEW_ACCESS_LOG: true,
  VIEW_CHANGE_LOG: true, VIEW_DISCUSSIONS: true, ADD_COMMENT: true,
  SHARE: true, CLONE: true,
});

const EDIT_ACCESS = createPermissionPreset({
  ADD_NOTE: true, NOTIFY: true, SWITCH_TO_EDIT_MODE: true,
  LINK_ACTION_ITEM: true, CREATE_ACTION_ITEM: true,
});

const MANAGER_ACCESS = createPermissionPreset({
  EDIT_METADATA: true, EDIT_ACCESS: true, EDIT_ACTIVITY: true,
  CHANGE_ITEM_STATUS: true, START_TRANSFER: true,
});
```

Roles compose these presets via object spread:
```typescript
export const ROLE_PERMISSIONS: Record<Role, PermissionsByRole> = {
  [Role.PrimaryPOC]: { ...READ_ACCESS, ...EDIT_ACCESS, ...MANAGER_ACCESS },
  [Role.Reviewer]: { ...READ_ACCESS, ...EDIT_ACCESS },
  [Role.Viewer]: { ...READ_ACCESS },
};
```

A helper function provides the runtime check:
```typescript
export const hasPermission = (role: Role, action: Action): boolean =>
  ROLE_PERMISSIONS[role]?.[action] ?? false;

// Examples:
hasPermission(Role.PrimaryPOC, Action.EDIT_METADATA); // → true  (MANAGER_ACCESS)
hasPermission(Role.Reviewer, Action.EDIT_METADATA);   // → false (EDIT_ACCESS only)
hasPermission(Role.Viewer, Action.ADD_COMMENT);       // → true  (READ_ACCESS)
```

Adding a new permission requires updating at most three presets. Adding a new role requires one line of preset composition. The entire permission matrix is readable on a single screen, which proved essential during stakeholder reviews where product managers needed to verify that role definitions matched their expectations.
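The `createPermissionPreset` factory referenced above is not itself shown. A minimal sketch consistent with that usage is a factory that freezes the grant map, so presets cannot be mutated after composition; the `DEMO_*` preset names below are invented for this sketch:

```typescript
// Hypothetical implementation of the preset factory: freeze the grant map
// so a preset is immutable once created.
type Action = string;
type PermissionsByRole = Readonly<Record<Action, boolean>>;

const createPermissionPreset = (grants: Record<Action, boolean>): PermissionsByRole =>
  Object.freeze({ ...grants });

// Illustrative presets (names chosen for this sketch only):
const DEMO_READ = createPermissionPreset({ VIEW_NOTES: true, ADD_COMMENT: true });
const DEMO_EDIT = createPermissionPreset({ ADD_NOTE: true });

// Presets compose via spread, exactly as the role table does; the spread
// produces a fresh object, so composition never mutates the presets.
const demoReviewer = { ...DEMO_READ, ...DEMO_EDIT };
```

Freezing is what makes spread-composition safe: a role built from presets can be extended per role without any risk of one role's overrides leaking into another's.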
What range of feature flags did we introduce for testing?
Feature flags are stored in local storage and control which capabilities are available in the current session, enabling progressive disclosure during demos, A/B testing during research sessions, and simulated phased rollout. We tested workspace configurations with:

- Workspaces
- Action Item
- Action Lite
- Inline Writing Assistant
- Theme Switcher
- Base font size 14
- Base font size 16
- Osprey theme
- Custom themes (created by users)

What types of TASKS are supported for each milestone ACTIVITY?
Each milestone in the workflow has a type that controls its behavior and UI presentation. Administrators select from predefined types when configuring their workflow, and the system generates the appropriate interface automatically.
| Action type | Status | Decision | Trigger |
|---|---|---|---|
| Implementation | Not started / In progress / Completed | Done / Not done | Manual or automated |
| Approval | Not started / In progress / Completed | Approved / Approved with comments / Send back | Manual or automated |
| Change request | — | Approved / Denied / Send back | Manual or automated |
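The table above maps naturally onto a discriminated union, since change requests carry no status track while the other two types do. Type and field names here are assumptions for illustration, not the prototype's actual schema:

```typescript
// Illustrative model of the three milestone activity types from the table above.
type Status = "Not started" | "In progress" | "Completed";
type Trigger = "Manual" | "Automated";

type MilestoneActivity =
  | { kind: "implementation"; status: Status; decision?: "Done" | "Not done"; trigger: Trigger }
  | { kind: "approval"; status: Status; decision?: "Approved" | "Approved with comments" | "Send back"; trigger: Trigger }
  | { kind: "change-request"; decision?: "Approved" | "Denied" | "Send back"; trigger: Trigger }; // no status track

// An approval milestone mid-flight, before a decision is recorded:
const barRaiserReview: MilestoneActivity = {
  kind: "approval",
  status: "In progress",
  trigger: "Manual",
};

// A change request resolved automatically:
const autoChange: MilestoneActivity = {
  kind: "change-request",
  decision: "Approved",
  trigger: "Automated",
};
```

Making `decision` optional captures the "in flight" state; the `kind` discriminant lets the system generate the appropriate interface per type, as the paragraph above describes.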
How are root cause suggestions presented by the AI?
The Hub Assistant reads the Customer Journey section as the investigator writes and produces candidate root causes. Suggestions appear in a slide-in panel alongside the document section being analyzed. Each suggestion includes a confidence indicator and supporting evidence extracted from the narrative. Proposed root causes enter a draft state visible only to the creator, who reviews and edits before publishing to the team.
Sample Customer Journey content analyzed by the assistant:

> March 2, 09:14 UTC -- First customer report via support channel indicating intermittent order processing failures in the EU-West region.
> March 2, 13:40 -- Case assigned to dedicated investigation team. Incident bridge established.
> March 3, 08:00 -- Customer outreach call completed. Shared preliminary findings and timeline for remediation.
> March 5 -- Interim fix deployed. Load balancer configuration change had introduced asymmetric routing for EU-West availability zones.
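The suggestion lifecycle described above (confidence, supporting evidence, creator-only drafts, explicit publish) might be modeled as follows; every name in this sketch is illustrative:

```typescript
// Illustrative shape for an AI root-cause suggestion and its human-review gate.
interface RootCauseSuggestion {
  text: string;
  confidence: "low" | "medium" | "high";
  evidence: string[];           // excerpts pulled from the Customer Journey narrative
  state: "draft" | "published"; // drafts are visible only to the creator
  createdBy: string;
}

// Publishing requires an explicit action by the creator; nothing auto-applies.
const publish = (s: RootCauseSuggestion, userId: string): RootCauseSuggestion =>
  s.createdBy === userId ? { ...s, state: "published" } : s;

const draft: RootCauseSuggestion = {
  text: "Asymmetric routing introduced by a load balancer config change",
  confidence: "high",
  evidence: ["Interim fix deployed. Load balancer configuration change had introduced asymmetric routing"],
  state: "draft",
  createdBy: "alice",
};

const attempted = publish(draft, "bob");   // not the creator: stays a draft
const published = publish(draft, "alice"); // creator publishes explicitly
```

The guard encodes the design rule stated above: no AI-generated content reaches other users without explicit human review by its creator.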
How are action suggestions presented by the AI?
From Findings & Root Causes, the Hub Assistant generates suggested action items with owner assignments. Each action includes a suggested priority, due date, and assignee based on the team structure. AI-generated content is always visually distinguished, and every suggestion includes feedback mechanisms. No auto-apply: all AI output requires explicit user action.
Sample findings the assistant draws on:

> Cross-AZ routing validation was absent from the deploy pipeline. Canary tests only covered single-region traffic. 340 orders were impacted during the 72-hour window before remediation.
How is the case milestone tracker presented?
Cases progress through 10+ milestones grouped into semantic buckets: Intake, Investigation, Writing, Review, and Closing. The tracker visualizes each case's position across the full lifecycle, with checkmarks for completed steps, active rings for in-progress work, and skip arcs where a step was bypassed.
| Case | Received | Acknowledged | Shallow dive | Rescue | Deep dive | Root cause | Writing actions | Drafting document | Bar raiser | Partner | Leadership | Share-out | Published |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| My delivery did not arrive | Feb-28 | Mar-01 | Mar-02 | Mar-03 | Mar-05 | Mar-07 | Mar-10 | Mar-12 | Mar-14 | Mar-17 | | | |
| Received a brick instead of a phone | Mar-04 | Mar-05 | Mar-06 | Mar-08 | Mar-11 | | | | | | | | |
| Support did not respond in time | Feb-20 | Feb-21 | Feb-22 | Feb-24 | Feb-26 | Feb-28 | Mar-03 | Mar-06 | Mar-10 | | | | |
| Terrible customer experience | Feb-10 | Feb-11 | Feb-12 | Feb-14 | Feb-17 | Feb-19 | Feb-21 | Feb-24 | Feb-26 | Feb-28 | Mar-03 | Mar-05 | |
| Order was charged twice | Mar-08 | Mar-09 | Mar-10 | Mar-11 | | | | | | | | | |
How is the action item milestone tracker presented?
Action items follow a simpler but parallel lifecycle. Three phases cover the full arc from creation to impact measurement: Publishing (creation, drafting, publishing), Implementation (scoping, implementation), and Closure (sign-off, closure, impact measurement).
| Action Item | Creation | Drafting | Publishing | Scoping | Implementation | Sign-off | Closure | Impact measurement |
|---|---|---|---|---|---|---|---|---|
| Add cross-AZ routing validation to deploy pipeline | Mar-05 | Mar-06 | Mar-07 | Mar-08 | Mar-14 | Mar-16 | Mar-17 | Mar-20 |
| Extend canary coverage to multi-region patterns | Mar-07 | Mar-08 | Mar-10 | Mar-11 | | | | |
| Customer communication with updated SLA commitment | Mar-10 | | | | | | | |
| Update monitoring alert thresholds for regional metrics | Mar-05 | Mar-06 | Mar-07 | Mar-09 | Mar-12 | | | |
How do section-level permissions work in the editor?
The original specification assumed document-level permissions, but testing revealed that teams needed section-level control to match how they actually collaborate on investigation documents. Each content block can be locked to specific roles: the executive summary is editable only by the Primary POC and Doc Owner, while the technical analysis section is open to Contributors, and a reviewer can comment on any section but only edit their review notes. This granularity emerged entirely from the prototype rather than from the original requirements, and because the section model drives both the editing experience and the permission boundary, the two concerns stay aligned in a single abstraction.
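A minimal sketch of section-scoped edit permissions as described above; the section names, role names, and `canEditSection` helper are illustrative, not the prototype's actual API:

```typescript
// Illustrative per-section edit policy: each section lists the roles allowed to edit it.
type EditorRole = "PrimaryPOC" | "DocOwner" | "Contributor" | "Reviewer";

const SECTION_EDITORS: Record<string, EditorRole[]> = {
  "Executive Summary":  ["PrimaryPOC", "DocOwner"],
  "Technical Analysis": ["PrimaryPOC", "DocOwner", "Contributor"],
  "Review Notes":       ["Reviewer"],
};

// Commenting stays document-wide; editing is section-scoped.
const canEditSection = (role: EditorRole, section: string): boolean =>
  (SECTION_EDITORS[section] ?? []).includes(role);

canEditSection("Contributor", "Executive Summary");  // → false
canEditSection("Contributor", "Technical Analysis"); // → true
```

Because the same section model drives both the editing boxes and this policy map, the permission boundary and the editing boundary cannot drift apart, which is the alignment the paragraph above describes.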
How did the discussion UX change from original to redesign?
The original discussion threading model used full-panel navigation: clicking a thread replaced the entire view. For cases with 10-15 active threads spanning investigation, review, and action item coordination, the constant context-switching made it difficult to track the overall state of a conversation, so the redesign replaced that pattern with inline expand-in-place threads and a compose-in-place editor.
- Thread count and unresolved state visible at a glance, making triage across 10+ threads practical without clicking into each one.
- Reviewers can reference the document while composing replies, which was the most requested capability during user testing.
What is the full attention taxonomy?
The attention taxonomy defines 8 types for cases and 12 for action items, each with a sort order, human-readable label, and description of what the user should do:
Case attention types (by priority)
1. Pending transfer // case reassignment waiting
2. Pending actions logging // findings need action items
3. Pending case opening // intake not yet published
4. Pending status change // milestone action overdue
5. Review overdue // reviewer hasn't responded
6. Review pending // review requested, not started
7. Feedback received // new reviewer feedback
8. Feedback needs attention // feedback requires response
Action item attention types (by priority)
1. In priority routing // prioritized, immediate action
2. Implementation overdue // past implementation deadline
3. Will be prioritized soon // approaching priority threshold
4. Implementation due soon // deadline approaching
5. Sign-off overdue // approver hasn't responded
6. Sign-off pending // awaiting sign-off
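A taxonomy like this can live as ordered data, with the array position doubling as the sort order; the identifiers below are invented for this sketch, and only the first three case types are spelled out:

```typescript
// Illustrative encoding of the case attention taxonomy: array order is sort order.
interface AttentionType {
  id: string;
  label: string;
  hint: string; // what the user should do
}

const CASE_ATTENTION_TYPES: AttentionType[] = [
  { id: "pending-transfer",     label: "Pending transfer",        hint: "case reassignment waiting" },
  { id: "pending-actions-log",  label: "Pending actions logging", hint: "findings need action items" },
  { id: "pending-case-opening", label: "Pending case opening",    hint: "intake not yet published" },
  // ...the remaining five case types follow the same shape
];

const sortOrder = (id: string): number =>
  CASE_ATTENTION_TYPES.findIndex((t) => t.id === id);
```

Keeping the taxonomy as data means the hub's grouping, labels, and ordering all read from one place, so adding an attention type is one array entry rather than scattered UI changes.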