How I think about developer experience
Recently someone asked me to describe my lens on developer experience, and I realized I’ve never really articulated it. So here’s my attempt!
I’d say I’m mostly on the tactical side of DX. If somebody’s providing the overall strategy—like “let’s target junior developers for this on-boarding experience”—I’m the person implementing that vision. And since I think in terms of information architecture and detailed systems of information, I’ve come to believe that developer experience lives in the details.
I’m not the person who’s going to completely overhaul a domain model for how we think about an SDK. But I am the person who’s going to think through the implications of a new domain model, and make sure it’s self-consistent. I’m where the rubber hits the road—like, is this actually working or not working?
Domain overhauls don’t come around every day, though, so I think my biggest day-to-day tactic for DX is the “docs-first” approach. Is that a techcomm industry term? I feel like it must be. Anyway, when a new feature is under discussion, I mock up documentation to see how easy it is to describe. If it’s awkward (say, the naming or interface requires a lot of explanation), I mock up docs for an alternative interface, naming structure, or parameterization of the feature until I hit on something elegant. Then I propose it.
The other thing about docs-first is that it gives me lead time to stop product sprawl, or to catch “inside baseball” features before they’re exposed to the public in production. Getting rid of a public feature is a much harder ask than not exposing it in the first place.
I’ll walk through some examples from my work with Sensible (a document AI platform) to show what this looks like in practice.
Example 1
When my client Sensible was in the early days of LLM features, they used an embeddings approach for scoring relevant document chunks to find prompt context. They soon evolved to finding prompt context through per-page document summarization. Then they leveled up again to offer summarization based on logical splits in the document (like section headings) rather than on page boundaries.
The research engineer who was implementing this outline-based summarization created a document_outline preprocessor that would chunk a document semantically into logical segments. This outline would then influence how a downstream LLM method (query_group) searched the document.
I helped the engineer name the preprocessor, but as soon as I mocked up docs showing the interaction between this preprocessor and a downstream query_group LLM method, I realized there was a domain model inconsistency. The query_group method already had a search_by_summarization boolean parameter that controlled how the LLM searched the document. This new preprocessor would create a weird side effect—it would change how query_group searched, but that configuration lived in a totally different object.
We had a meeting to discuss my findings, in which I proposed making the search_by_summarization configuration explicit and backward compatible instead of a boolean with hidden interactions. With other engineers’ input, we arrived at a new solution:
// OLD: Implicit side effect
{
  "preprocessors": [
    { "type": "document_outline" } // Magically changes search behavior
  ],
  "fields": [
    {
      "method": {
        "id": "query_group",
        // Boolean. Side effect: this boolean is implicitly ignored if the
        // document_outline preprocessor is configured!
        "search_by_summarization": true
      }
    }
  ]
}

// My proposal: Explicit configuration
{
  "fields": [
    {
      "method": {
        "id": "query_group",
        // "outline" or "page". For backward compatibility, true defaults to "page".
        "search_by_summarization": "outline"
      }
    }
  ]
}
Thankfully, we avoided shipping a preprocessor with invisible side effects on an extraction method, when the right place for that configuration was the extraction method itself.
Example 2
Sensible automates document data processing and offers dozens of preprocessors for cleaning up messy documents prior to data extraction, such as for OCR, ligatures, or line splitting and merging. A software engineer wanted to create a new remove_pages preprocessor so users could ignore irrelevant pages. But I quickly pointed out that we already had two preprocessors for manipulating page ranges, as well as existing match parameters on several preprocessors for selecting pages. So I proposed parameterizing the existing filter_pages preprocessor with a match or match_all parameter for removing pages. That way we’d avoid multiplying similar preprocessors (and all the docs explaining their subtle differences in usage) and stay consistent with existing patterns. The product manager accepted my proposal. By catching this during the documentation phase, we prevented a redundant feature that would have needed ongoing maintenance and created user confusion about which preprocessor to use.
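To make the contrast concrete, here’s roughly what the two shapes looked like side by side. This is an illustrative sketch rather than Sensible’s actual config syntax; the match structure and the "startsWith" matcher are hypothetical stand-ins.

// Rejected: a brand-new preprocessor that overlaps with existing ones
{
  "preprocessors": [
    {
      "type": "remove_pages",
      "match": { "type": "startsWith", "text": "This page intentionally left blank" } // hypothetical matcher
    }
  ]
}

// Proposed: reuse the existing filter_pages preprocessor, parameterized to remove matching pages
{
  "preprocessors": [
    {
      "type": "filter_pages",
      "match": { "type": "startsWith", "text": "This page intentionally left blank" } // same hypothetical matcher
    }
  ]
}

Functionally the two are nearly identical, which was exactly the argument: the capability belonged as a parameter on an existing preprocessor, not as a new concept for users to tell apart.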
Example 3
When I took on my client Optimizely in March 2020, they were pivoting to a developer audience and planning a major SDK overhaul. They had introduced the notion of feature flags on top of their existing A/B testing features, but the relationship between them was unclear. Were they separate things? Was one a subset of the other?
I’d say my primary contribution to the domain overhaul was getting involved in design conversations early on. I noticed that the API team and UI team weren’t quite aligned on how they were presenting the relationship between the two concepts. Both teams were working toward making flags the root concept, with experiments as a type of flag rule, but their terminology was inconsistent. I could raise points like “we need more correspondence between the sendEvent parameter and the decisionLogged field” or “why are we prominently exposing experimentKey in the API when the new UI will be focused almost exclusively on the flagKey?” By mocking up documentation for this new model and writing actual code examples, I could spot where the API and UI were diverging in ways that would confuse developers.
The effort was worth it, because when I went on to conduct an information architecture overhaul of their existing feature flag doc set, I could keep the docs structure clean and consistent, and all the little inconsistencies I would otherwise have had to document and explain were already polished away. For an example of a technical guide I created (since edited by other authors) explaining feature flags, see Create feature flags. I like to think the DX that went into those docs is invisible; you’d only notice it if you could see the hypothetical messier experience!