Edit Mode

Designed a specialized collaboration environment that streamlined data scientist and clinical researcher workflows within an AI/ML cell analysis visualization dashboard.

Timeline

June 2024 – December 2024

My Role

Product Designer

What I did
  • Workflow Mapping
  • UX Design
  • User Interviews
  • Rapid Prototyping
  • Product Planning
  • User Testing
The Team
  • 4x Data Scientists
  • 1x Research Scientist
  • 2x BE Engineers
  • 4x FE Engineers
  • 1x Product Manager
  • 1x Engineering Manager
  • 1x Designer

THE PROBLEM

An unscalable dashboard

Ozette’s AI-driven platform started as a closed system, granting data-modification rights exclusively to internal data scientists. Because Ozette’s analysis algorithms are proprietary, partners were given limited visibility to protect the underlying IP: external partners could only view snapshots of their clinical data during scheduled meetings.

As partnerships multiplied, one-sided collaboration and inflexible workflows led to major scalability issues:

Business Impact
High Churn: Partners felt “locked out” of their own data, causing dissatisfaction and contract non-renewals.

Slowed Project Timelines: Reliance on data scientists for every data revision limited capacity to take on new partnerships.

User Impact
Reduced Trust (partners): Limited control eroded confidence in the data and analysis.

Workflow Bottlenecks (data scientists): Constant data scientist intervention caused delays in processing and feedback loops.

OUR HYPOTHESIS

Early Access Could Reduce Bottlenecks

We believed that giving partners a timely, guided way to inspect and comment on their data would alleviate resource strain and bolster trust. By carefully controlling what partners could see and how they could provide input, we aimed to improve collaboration while safeguarding Ozette’s proprietary methods.

DISCOVERY

Leaning into the insights and workflows of technical users.

To shape our approach, I conducted deep-dive sessions with four data scientists, each managing a separate clinical partner project. The objective was to dig into their day-to-day frustrations, surface inefficiencies, and identify the key obstacles preventing smoother collaboration.

Using these findings, I developed a journey map covering the entire data lifecycle—from sample acquisition through final gating decisions. This laid bare the moments when partner input was both most critical and most challenging to facilitate under the existing system.

EARLY EXPLORATIONS

Satisfying all user needs

User research made one thing clear: partners wanted more involvement in the data review process, but full autonomy wasn’t an option.

The challenge wasn’t just about faster feedback loops—it was about designing a system where partners could collaborate with data scientists without introducing inefficiencies or compromising the integrity of the analysis. Initially, we framed our ideation around a structured feedback model: how might we create a way for partners to indicate what needed to change without disrupting existing workflows?

This led to our first proposed solution: Flagging.

Flagging let partners mark data issues and request changes asynchronously, creating a structured and efficient feedback loop.

Flagging was designed as a low-friction way for partners to request changes without needing full data access. Instead of waiting for scheduled calls to provide feedback, partners could highlight specific data points, add context, and provide revision requests directly within the platform.

This introduced asynchronous collaboration, allowing scientists to address flagged issues on their own timeline.

FEATURES

Asynchronous Input & Contextual Accuracy

Partners could review a multi-parameter dot plot or a hierarchical node tree at any time; however, they could not edit them (anchor points on plots were disabled). They could then “flag” data points (e.g., gating boundaries, sample anomalies) to request changes or clarifications.

Each flag included data specifics such as cell population IDs or gating thresholds, and partners could add comments detailing why they believed an adjustment was needed (e.g., “Exclude dead cells from Gate B”).
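To make the flag concept concrete, here is a minimal sketch of what a flag payload might look like. All field and function names are illustrative assumptions, not Ozette’s actual schema:

```typescript
// Illustrative sketch of a flag payload; field names are hypothetical,
// not Ozette's actual data model.
interface Flag {
  id: string;
  populationId: string; // cell population the flag refers to
  gateId?: string;      // optional gating boundary in question
  comment: string;      // partner's rationale, e.g. "Exclude dead cells from Gate B"
  status: "open" | "resolved";
  createdBy: string;
  createdAt: Date;
}

// Partners create flags; data scientists resolve them asynchronously.
function createFlag(
  populationId: string,
  comment: string,
  author: string,
  gateId?: string
): Flag {
  return {
    id: `flag-${Date.now()}`,
    populationId,
    gateId,
    comment,
    status: "open",
    createdBy: author,
    createdAt: new Date(),
  };
}
```

Keeping the comment and the data specifics in one structured record is what let scientists act on a flag later without a synchronous call for context.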

IP Safeguards

Partners never accessed the underlying algorithms; they only saw final rendered plots. On the data scientists’ dashboard, however, anchor points on plots were enabled, giving them exclusive access to modify the data. In addition, data scientists could resolve flags using internal tools viewable only to them, ensuring proprietary methods stayed confidential.

WHAT WE LEARNED

Partners still lacked autonomy

They could request changes, but they still relied on data scientists to execute them. This created a bottleneck rather than solving it.

Review cycles were still slow

Even though Flagging introduced asynchronous feedback, partners still had to wait for scientists to review and apply their requests.

Workload shifted, rather than reduced

Instead of improving efficiency, Flagging created long task queues for data scientists, meaning feedback piled up rather than being addressed in real time.

FINAL DESIGN

Edit Mode: The Shift Towards Partner Empowerment

Recognizing that Flagging wasn’t enough, we reframed our strategy around active collaboration instead of structured feedback. Instead of relying on data scientists to interpret and execute changes, we asked:
"How might we give partners more direct control while maintaining trust, security, and data integrity?"

This shift led to Edit Mode, a more robust approach that enabled partners to make real-time modifications within guardrails, reducing delays and ensuring smoother collaboration.
Over the next few weeks, I explored and tested a wide range of flows with the data scientists on the team. I developed user-flow diagrams for both scientists and partners, detailing how they would navigate from data inspection to flag resolution. This ensured the interface naturally guided them through the review, flag, revise cycle without requiring heavy training.

FEATURES & DESIGN QUESTIONS

Edit Mode Toggle

Design Question: How can we give partners a simple way to switch between “view only” and “edit” states for their data plots—without overloading them with unnecessary functionality or revealing Ozette’s proprietary algorithms?
Initially, I considered keeping the editing tools visible at all times. However, users quickly became overwhelmed by extra buttons and panels when they only wanted to review results. This clutter also risked accidental edits that could corrupt gating thresholds.

Hence, I consolidated viewing and editing into a single dashboard with an “Enter Edit Mode” toggle. By default, partners see a clean, read-only layout for data review. When they click “Enter Edit Mode,” relevant tools appear contextually (e.g., gate adjustment sliders, annotation fields), while core proprietary menus remain hidden from external users.
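The toggle logic can be sketched as a pure function mapping mode and role to the tools that render. Tool and role names here are illustrative assumptions, not the product’s actual identifiers:

```typescript
// Sketch of the view/edit toggle logic; tool and role names are hypothetical.
type Mode = "view" | "edit";
type Role = "partner" | "dataScientist";

// In view mode, no editing tools render: a clean, read-only layout.
// In edit mode, contextual tools appear; proprietary menus never
// render for external partners.
function visibleTools(mode: Mode, role: Role): string[] {
  if (mode === "view") return [];
  const tools = ["gateAdjustmentSliders", "annotationFields"];
  if (role === "dataScientist") tools.push("proprietaryAlgorithmMenu");
  return tools;
}
```

Gating visibility by role at this layer, rather than merely hiding buttons with CSS, is what kept proprietary tooling out of the partner-facing bundle entirely.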

Shared Access & Collaborative Editing Controls

Design Question: How do we handle simultaneous editing on complex immunological data plots so that two different users can make real-time changes without overwriting each other’s work—or requiring endless version management?
I explored Google Docs–style concurrency where all edits appear live. However, real-time concurrency on flow cytometry plots posed a significant data integrity challenge—gates might shift in ways that break subsequent gating logic.
Hence, I adopted a single-user lock: only one user at a time can enter “Edit Mode” to modify gating or thresholds. A banner clearly indicates who is editing; if another user attempts to edit, they see a message prompting them to wait or request access.

I also implemented time-based locks that automatically released if a user was inactive for a set period. System banners notified collaborators of who was editing and how long until the session timed out.
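The single-user lock with an inactivity timeout can be sketched roughly as follows. This is a simplified client-side illustration with hypothetical names; the real mechanism lived behind the platform’s backend:

```typescript
// Sketch of a single-user edit lock with inactivity timeout.
// Names are illustrative; `now` is injectable so the timeout is testable.
class EditLock {
  private holder: string | null = null;
  private lastActivity = 0;

  constructor(private timeoutMs: number, private now: () => number = Date.now) {}

  // Returns true if `user` now holds the lock.
  acquire(user: string): boolean {
    const t = this.now();
    // Release a stale lock whose holder has been inactive too long.
    if (this.holder && t - this.lastActivity > this.timeoutMs) this.holder = null;
    if (this.holder && this.holder !== user) return false;
    this.holder = user;
    this.lastActivity = t;
    return true;
  }

  // Called on every edit to keep the lock alive.
  touch(user: string): void {
    if (this.holder === user) this.lastActivity = this.now();
  }

  // Drives the "who is editing" banner.
  currentEditor(): string | null {
    return this.holder;
  }

  release(user: string): void {
    if (this.holder === user) this.holder = null;
  }
}
```

The injectable clock makes the stale-lock behavior easy to verify: a second user is rejected while the holder is active, and succeeds once the inactivity window elapses.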

Quick Results Generation

Design Question: How do we empower partners to finalize and download results after making edits—without always needing a data scientist to confirm every step?
I first considered an “auto-compile” approach, but automatically re-running immunological calculations after every edit risked overwhelming compute resources and made gating changes harder to track.
Instead, when partners finish making gating edits in “Edit Mode,” they can click “Save All Edits” and then “Approve All Gates.” This triggers a streamlined, behind-the-scenes process to re-run the relevant calculations and generate updated data plots. Partners can then instantly download the revised results, freeing data scientists from repetitive final checks.
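The approve step can be sketched as recomputing only the populations affected by the edited gates, rather than auto-recompiling after every change. Function and type names below are hypothetical, not Ozette’s API:

```typescript
// Illustrative sketch of the "Approve All Gates" flow; names are hypothetical.
interface GateEdit {
  gateId: string;
  newThreshold: number;
}

interface Results {
  approvedGates: string[];
  downloadReady: boolean;
}

// Recompute runs once, over only the edited gates, when the user approves —
// avoiding the compute cost of re-running calculations after every edit.
function approveAllGates(
  edits: GateEdit[],
  recompute: (gateIds: string[]) => void
): Results {
  const gateIds = edits.map((e) => e.gateId);
  recompute(gateIds); // behind-the-scenes recalculation
  return { approvedGates: gateIds, downloadReady: true };
}
```

Batching the recalculation behind an explicit approval is the design choice that kept gating changes trackable while avoiding the compute spikes of the auto-compile approach.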

FEEDBACK AND RESULTS

Internal Users

We launched the "Edit Mode" feature under controlled feature flags and rolled it out gradually to gather customer feedback and refine the user experience.

Workflow Integration

Enabling customers to edit final gates gave the data science team clearer insight into approved gate states. These approved states serve as valuable training data for algorithm improvements and deepen our understanding of customer needs.

Reduced Communication Overhead

By empowering customers with direct editing capabilities, we observed a reduction in communication between Ozette's data science leads and customers regarding gate approvals.





Rahmat Raji