Parseable

Overview

Parseable is a high-compression log storage and observability platform built as a cost-efficient alternative to tools like Datadog. I joined as the sole product designer when the product was early and rough: a handful of base log, metric, and trace pages, a design system with few established patterns worth keeping (which I largely rebuilt), and significant UX debt.

I rebuilt the foundation while expanding the product across 18+ features, working directly with the founders and engineering team on everything from core querying workflows to AI-assisted investigation — all while minimising disruption to the frontend team already building against existing components.

See product →

It's not a features problem

Developer observability tools have a usability problem that isn't about features. Datadog has every feature. The problem is that debugging under pressure is non-linear — engineers hold multiple hypotheses simultaneously, pivot between them, and need to move fast across related data. Most tools are built around predefined dashboards and sequential query flows that break this pattern. You have to know what you're looking for before you can look.

The design challenge wasn't "make this cheaper than Datadog." It was: design a system where engineers can investigate quickly, without friction, without being forced into workflows that don't match how they actually think.

Every round trip has a cost

The filtering system runs across every surface in the platform — logs, traces, metrics, dashboards. On paper that sounds like a technical decision. In practice, it's the thing that determines whether debugging feels fast or exhausting.

Most observability tools make filtering a deliberate act. You go somewhere, type something, come back. Every round trip adds cognitive cost when you're already context-switching under pressure.

Filtering always where the user is

I designed the system so the filter is always at hand. The top five field values are surfaced on click — no typing required for the common case. Any field in a log, trace, or metric table can be filtered directly from the table row, with include or exclude as a one-click option. Filters persist across related views and can be saved as named views for repeated workflows. A sidebar with field categories keeps the full filter space discoverable without overwhelming the interface.

None of these are individually remarkable. Together they compound into a noticeably different pace of investigation — one where you're never hunting for the filter UI, never retyping something you already specified, never losing your state because you navigated away.

The cold-start problem

Observability tools have a cold-start problem. The product only shows its value when your data is in it, but connecting your data requires setup, which requires confidence the tool is worth setting up. Most tools resolve this badly — either they skip straight to an empty state, or they front-load so much configuration that users abandon before seeing anything.

Two paths, one screen

I reduced the entry to two paths from a single screen: connect your own data, or explore with demo data immediately. No multi-step wizard, no required configuration before the platform opens. Users who weren't ready to commit to setup could see the full product working within seconds.

The result: 50% of users who signed up went on to connect their own data and begin real exploration. Subsequent onboarding improvements were on track toward an 80% target before the engagement ended. For a developer tool where meaningful activation requires actual setup effort, 50% is a strong baseline — and the direction of improvement validated the approach.

A direction, not a design

For Keystone, the AI-assisted investigation feature, the founder's instinct from the start was that chat wouldn't work — debugging isn't linear, and a canvas that supports branching exploration would match how engineers actually investigate. That framing was right. My job was to figure out what it actually meant as a designed product.

Debugging an incident means running parallel threads simultaneously: why did latency spike, which service was affected, what changed in that window, is this correlated with a deployment. Chat forces you to serialise that process. Earlier threads disappear as the conversation moves forward, and you lose the spatial relationship between related findings. The canvas had to solve for that.

What I designed

Each question produces an answer node. From any answer, the user can branch into a new question — following a different hypothesis without abandoning the thread they were on. Multiple branches can exist simultaneously, spatially arranged as the investigation develops. Visualisations attach directly to the nodes that generated them, so a chart stays connected to the question that produced it rather than floating in a scroll.

The specific interaction decisions I owned: how dataset and time range selection is validated before queries run, so the system confirms its interpretation before acting; how the SQL agent's output is presented without exposing raw query complexity to users who don't need it; how suggested follow-up questions are surfaced to guide less experienced users without constraining advanced ones; and how findings can be pinned to dashboards directly from the canvas without breaking the investigation flow.

The design argument

The interface should match the mental model, not force the mental model to match the interface. Engineers investigating incidents think spatially and in parallel. The canvas lets them.

Activation at launch

50%

of signups connected real data

After improvement

80%

target trajectory when the engagement ended

Sole designer

18+

features shipped, from rough MVP to paying customers

The design system was rebuilt without breaking the frontend team's existing work. The foundational patterns — filtering, navigation, data inspection — held up across every feature that followed. Keystone went from a vague founding idea to a shippable interaction model with defined canvas logic, validation flows, and a path from investigation to dashboard.

If I were starting over

I would formalise the design system before feature ten, not after. And I would put the Keystone canvas in front of real users earlier — it was well-reasoned, but a pattern this novel earns its confidence through testing, not just logic.

The constraint I'd keep: designing within the data architecture rather than against it. Isolated datasets felt like a limitation until they clarified the right mental model — you investigate within a context. That shaped everything.

See product →
