Edit Mode

Designed a specialized collaboration environment that streamlined data scientist and clinical researcher workflows within an AI/ML cell analysis visualization dashboard.

Timeline

Aug - Dec 2024

My Role

Senior Product Designer

What I did
  • Workflow Mapping
  • UX Design
  • User Interviews
  • Rapid Prototyping
  • Product Planning
  • User Testing
The Team
  • 1 x Engineer
  • 2 x Project Manager
  • 3 x Designer
  • 1 x Physician
  • 1 x Medical Student
  • 2 x Business Strategist

THE PROBLEM

An unscalable dashboard

Ozette’s AI-driven platform started as a closed system, granting exclusive data-modification rights to internal data scientists. Because Ozette’s analysis algorithms are proprietary, partners had limited visibility to preserve underlying IP.

Hence, external partners could only view snapshots of their clinical data during scheduled meetings. As partnerships multiplied, one-sided collaboration and inflexible workflows led to major scalability issues.

THE SOLUTION

Our hypothesis

We believed that giving partners a timely, guided way to inspect and comment on their data would alleviate resource strain and bolster trust. By carefully controlling what partners could see and how they could provide input, we aimed to improve collaboration while safeguarding Ozette’s proprietary methods.

DISCOVERY

Leaning into the insights and workflows of technical users

To shape our approach, I conducted deep-dive sessions with four data scientists managing separate clinical partner projects.

The objective was to dig deep into their day-to-day frustrations, explore inefficiencies, and identify the key obstacles preventing smoother collaboration.

DISCOVERY

1. USER RESEARCH
Insights from the deep-dive interviews directly shaped the structure and content of our journey map. By capturing specific pain points from each data scientist—such as delays in feedback loops, unclear approval ownership, and redundant communication—we were able to pinpoint critical friction zones across the data lifecycle.

These included:

1. The handoff from initial data cleaning to gating.

2. Unclear triggers for when partner input was needed.

3. Final approval loops that lacked visibility or traceability.

2. JOURNEY MAPPING
Using these findings, I developed a journey map covering the entire data lifecycle—from sample acquisition through final gating decisions.

This laid bare the moments when partner input was both most critical and most challenging to facilitate under the existing system.

EARLY EXPLORATIONS

How do we satisfy all of our users' needs?

User research made one thing clear: partners wanted more involvement in the data review process, but full autonomy wasn’t an option.

The challenge wasn’t just about faster feedback loops—it was about designing a system where partners could collaborate with data scientists without introducing inefficiencies or compromising the integrity of the analysis. Initially, we framed our ideation around a structured feedback model: how might we create a way for partners to indicate what needed to change without disrupting existing workflows?

This led to our first proposed solution: Flagging.

DETAILS

The Flagging Concept

Flagging was designed as a low-friction way for partners to request changes without needing full data access. Instead of waiting for scheduled calls to provide feedback, partners could highlight specific data points, add context, and provide revision requests directly within the platform.

This introduced asynchronous collaboration, allowing scientists to address flagged issues on their own timeline.

FEATURES

Asynchronous Input & Contextual Accuracy
Partners could review a multi-parameter dot plot or a hierarchical node tree at any time, but could not edit them (anchor points on plots were disabled).

They would then be able to “flag” data points (e.g., gating boundaries, sample anomalies) to request changes or clarifications.

Each flag included data specifics such as cell population IDs or gating thresholds, and partners could add comments detailing why they believed an adjustment was needed (e.g., “Exclude dead cells from Gate B”).
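
As a rough sketch of how such a flag could be modeled (field names here are hypothetical, not Ozette's actual schema):

  // Hypothetical shape of a partner-submitted flag; field names are
  // illustrative, not Ozette's production schema.
  interface Flag {
    id: string;
    plotId: string;                // the dot plot or node tree under review
    populationId: string;          // e.g., a cell population ID
    gateThreshold?: number;        // the gating threshold in question, if any
    comment: string;               // e.g., "Exclude dead cells from Gate B"
    status: "open" | "resolved";   // only data scientists can resolve
    createdBy: string;             // partner user ID
    createdAt: Date;
  }

  // Partners create flags; resolution stays on the data science side.
  function createFlag(plotId: string, populationId: string, comment: string, userId: string): Flag {
    return {
      id: crypto.randomUUID(),
      plotId,
      populationId,
      comment,
      status: "open",
      createdBy: userId,
      createdAt: new Date(),
    };
  }
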
IP Safeguards
Partners could not access the underlying algorithms and saw only the final rendered plots. On the data scientists' dashboard, however, anchor points on plots remained enabled, giving them exclusive access to modify the data.

In addition, data scientists could resolve flags using internal tools viewable only to them, ensuring proprietary methods stayed confidential.

ITERATION

What we learned

After presenting the first iteration to data scientists and partners, these were our key findings:

FINDINGS
01

Partners still lacked autonomy

They could request changes, but they still relied on data scientists to execute them. This created a new bottleneck rather than removing one.

02

Review cycles were still slow

Even though Flagging introduced asynchronous feedback, partners still had to wait for scientists to review and apply their requests.

03

Workload shifted, not reduced

Instead of improving efficiency, Flagging created long task queues for data scientists, meaning feedback piled up rather than being addressed in real time.

FINAL DESIGNS

Edit Mode: The Shift Towards Partner Empowerment

Recognizing that Flagging wasn’t enough, we reframed our strategy around active collaboration instead of structured feedback. Instead of relying on data scientists to interpret and execute changes, we asked:

"How might we give partners more direct control while maintaining trust, security, and data integrity?"


This shift led to Edit Mode, a more robust approach that enabled partners to make real-time modifications within guardrails, reducing delays and ensuring smoother collaboration.


Over the next few weeks, I explored and tested a wide range of flows with the data scientists on the team. I developed user-flow diagrams for both scientists and partners, detailing how they would navigate from data inspection to flag resolution.

This ensured the interface naturally guided them through the review, flag, and revise cycle without requiring heavy training.

FEATURES

Design Questions and Features

#1

How can we give partners a simple way to switch between “view only” and “edit” states for their data plots—without overloading them with unnecessary functionality or revealing Ozette’s proprietary algorithms?

Edit Mode Toggle
Initially, I considered keeping the editing tools visible at all times.

However, users quickly became overwhelmed by extra buttons and panels when they only wanted to review results.

Hence, I consolidated the viewing and editing experiences into a single dashboard with an “Enter Edit Mode” toggle.

When they click “Enter Edit Mode,” relevant tools appear contextually (e.g., gate adjustment sliders, annotation fields), while core proprietary menus remain hidden from external users.
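
A minimal sketch of this role- and mode-gated tool visibility (types and names are hypothetical, for illustration only):

  // Which tools render depends on both the user's role and the current mode.
  type Role = "partner" | "dataScientist";
  type Mode = "view" | "edit";

  interface Tool {
    id: string;
    proprietary: boolean;  // internal-only tooling, hidden from partners
    editOnly: boolean;     // e.g., gate adjustment sliders, annotation fields
  }

  function visibleTools(tools: Tool[], role: Role, mode: Mode): Tool[] {
    return tools.filter((tool) => {
      if (tool.proprietary && role !== "dataScientist") return false; // IP safeguard
      if (tool.editOnly && mode !== "edit") return false;             // contextual editing tools
      return true;
    });
  }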

#2

How do we handle simultaneous editing on complex immunological data plots so that two different users can make real-time changes without overwriting each other’s work—or requiring endless version management?

Collaborative Editing Controls
I explored Google Docs–style concurrency where all edits appear live.

However, real-time concurrency on flow cytometry plots posed a significant data integrity challenge—gates might shift in ways that break subsequent gating logic.

Hence, I adopted a time-based, single-user lock.

With Edit Mode, only one user at a time can modify gating or thresholds. System banners notify collaborators of who is editing and how long until the session times out.
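
A simplified sketch of the lock logic (the 15-minute timeout and all names are assumptions for illustration):

  type LockState =
    | { locked: false }
    | { locked: true; userId: string; expiresAt: number };

  const SESSION_MS = 15 * 60 * 1000; // hypothetical session timeout

  function tryAcquire(state: LockState, userId: string, now: number): LockState {
    // An expired lock is treated as released; otherwise only the holder may re-enter.
    if (state.locked && state.expiresAt > now && state.userId !== userId) {
      return state; // someone else is editing; the UI shows a banner instead
    }
    return { locked: true, userId, expiresAt: now + SESSION_MS };
  }

  function bannerText(state: LockState, now: number): string | null {
    if (!state.locked || state.expiresAt <= now) return null;
    const minutesLeft = Math.ceil((state.expiresAt - now) / 60000);
    return `${state.userId} is editing; session times out in ${minutesLeft} min`;
  }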


#3

How do we empower partners to finalize and download results after making edits—without always needing a data scientist to confirm every step?

Quick Results Generation
I first considered an “auto-compile” approach, but automatically re-running immunological calculations after every edit risked overwhelming compute resources and made gating changes harder to track.

Instead, when partners finish making gating edits, they can approve "all gates" and download their results.

This triggers a streamlined, behind-the-scenes process to re-run the relevant calculations and generate updated data plots. Partners can then instantly download the revised results, freeing data scientists from repetitive final checks.
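
A rough sketch of that batched flow (function names are hypothetical, not Ozette's API):

  interface Gate { id: string; approved: boolean; }

  // Approving all gates triggers a single recompute rather than an
  // auto-compile after every individual edit, keeping compute costs
  // predictable and gating changes easy to track.
  async function approveAllAndDownload(
    gates: Gate[],
    recompute: (gateIds: string[]) => Promise<Blob>, // backend re-runs calculations once
  ): Promise<Blob> {
    const approved = gates.map((gate) => ({ ...gate, approved: true }));
    return recompute(approved.map((gate) => gate.id)); // resulting file is offered for download
  }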

IMPACT

Impact

We launched the "Edit Mode" feature under controlled feature flags and rolled it out gradually to gather customer feedback and refine the user experience.
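
As an illustration, a percentage-based rollout check of the kind commonly used behind feature flags (the mechanics here are an assumption, not Ozette's actual infrastructure):

  // A stable hash keeps each user in the same rollout cohort across sessions.
  function editModeEnabled(userId: string, rolloutPercent: number): boolean {
    let hash = 0;
    for (const ch of userId) {
      hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
    }
    return hash % 100 < rolloutPercent;
  }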

Workflow integration

Enabling partners to edit final gates gave the data science team clearer insight into approved gate states. These approved states serve as valuable training data for algorithm improvements and deepen our understanding of customer needs.

Reduced communication overhead

By empowering partners with direct editing capabilities, we observed a reduction in communication between Ozette's data science leads and customers regarding gate approvals.

Rahmat Raji