David Dittman
Product Operations Engineering

Building Internal Tools That People Actually Use

January 21, 2024

I have built a lot of internal tools over the years. Dashboards, workflow automation systems, reporting platforms, data entry interfaces — the full spectrum. And I will be honest: the first several were failures. Not technical failures. They worked exactly as designed. They were failures because nobody used them. The operations team kept their spreadsheets. The marketing team kept emailing reports around. The sales team kept doing manual lookups in three different systems. My beautifully engineered tools sat there, gathering digital dust, while the problems they were supposed to solve persisted.

It took me longer than I would like to admit to understand why. The answer was not better engineering. It was better empathy.

Why Most Internal Tools Fail

The fundamental mistake I see teams make — the mistake I made myself many times — is treating internal tools as engineering projects rather than product projects. When you are building a customer-facing product, you would never dream of skipping user research. You would talk to customers, observe their workflows, understand their pain points, prototype solutions, and iterate based on feedback. But when it comes to internal tools, engineering teams routinely skip all of that. Someone in leadership says “we need a dashboard for X” and the team builds a dashboard for X without ever sitting down with the people who will actually use it.

The result is tools that solve the problem as engineers imagine it, not as users experience it. I once built an elaborate campaign performance dashboard with real-time data, drill-down capabilities, and every metric you could imagine. The marketing team used it exactly twice. When I finally asked them why, the answer was humbling: they needed three specific numbers every Monday morning, and my dashboard required six clicks and a mental model of our data architecture to get to those numbers. They could get the same information from a spreadsheet a junior analyst updated weekly. My tool was technically superior in every way and practically inferior in the one way that mattered — it was harder to get the answer they needed.

Dogfooding and Iteration Cycles

After enough of these humbling experiences, I adopted a rule that I now consider non-negotiable: before writing a single line of code for an internal tool, spend at least one week doing the job it is supposed to support. Sit with the operations team. Do their data entry. Run their reports. Feel the friction firsthand.

This does two things. First, it gives you genuine empathy for the workflow, not the sanitized version you get from a requirements document, but the real thing with all its messy workarounds and unstated assumptions. Second, it builds trust. When you go back to that team and say “I think I can save you three hours a day on this process,” they believe you because they watched you do the work.

Once you start building, the iteration cycles need to be measured in days, not weeks. I aim for a working prototype in front of real users within five business days of starting development. It does not need to be complete. It does not need to be polished. It needs to be real enough that someone can try to use it for their actual job and tell you where it falls apart. That feedback in the first week is worth more than six months of roadmap planning.

The Dashboard Graveyard

Every organization I have ever worked in has what I call a dashboard graveyard — a collection of reporting dashboards that someone built, that were briefly exciting, and that now sit unused. If you are honest with yourself, you probably have one too.

Dashboards fail for a specific and predictable reason: they present data without context and without a clear decision they are meant to support. A dashboard that shows you seventeen metrics about your ad spend is not useful. A dashboard that answers the question “should I increase budget on this campaign today, yes or no, and by how much” is useful. The difference is not in the data — it is in the framing.

When I build reporting tools now, I start with the decisions, not the data. I sit with the person who will use the tool and ask: “What decisions do you make on a daily or weekly basis? What information do you need to make those decisions? Where do you currently get that information, and what is frustrating about the process?” The tool gets built backward from the decisions it needs to support, and every element on the screen has to earn its place by contributing to one of those decisions.

This approach has an added benefit: it naturally limits scope. Instead of building a “marketing analytics platform” (which is really just a synonym for “project that never finishes”), you build a tool that answers four specific questions. You can ship that in two weeks. And if it turns out those were the wrong four questions, you have lost two weeks instead of six months.

Integrating With Existing Workflows

Here is a lesson I wish someone had taught me earlier: the best internal tool is one that meets people where they already are. If your operations team lives in spreadsheets, the most impactful thing you can build might be something that automatically populates a spreadsheet rather than replacing it. If your team communicates in a group chat, build a bot that surfaces information there rather than making people switch to a separate application.

Every time you ask someone to change their workflow, you are spending adoption currency. You only have so much of it. Spend it on the changes that genuinely matter and find ways to deliver value within existing habits for everything else.

I once worked on a project where the goal was to replace a complex, error-prone manual process that involved copying data between three spreadsheets and an email template. The engineering team’s instinct was to build a web application that handled the entire workflow. What we actually built was a script that watched a shared folder, processed the source spreadsheet automatically, and dropped the output into a second spreadsheet and a draft email. The operations team’s workflow barely changed — they still worked in spreadsheets, they still sent emails — but the error-prone middle steps disappeared. Adoption was instant because there was almost nothing to adopt.

Measuring Adoption and Gathering Feedback

You cannot improve what you do not measure, and this applies to internal tools just as much as customer-facing products. I track three metrics for every internal tool: daily active users as a percentage of the intended audience, time-to-task-completion compared to the old workflow, and what I call the “escape rate” — how often people start using the tool and then abandon it to complete the task another way.

That last metric is the most revealing. A high escape rate tells you that people are trying to use your tool but hitting a wall somewhere. That is actionable. You can observe where they bail out, fix the friction, and watch the escape rate drop. It is a much more useful signal than overall adoption numbers, which can be influenced by management mandates that force usage without actually solving problems.
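The three metrics fall out naturally if the tool logs one record per session. Here is a minimal sketch under that assumption; the `ToolSession` shape and field names are illustrative, and real event logs will look different.

```python
import statistics
from dataclasses import dataclass

@dataclass
class ToolSession:
    user: str
    seconds: float            # time from opening the tool to finishing the task
    completed_in_tool: bool   # False = the user bailed out and finished elsewhere

def adoption_metrics(sessions: list[ToolSession], intended_audience: int) -> dict:
    """Active users as a % of the intended audience, escape rate,
    and median time-to-completion (to compare against the old workflow)."""
    users = {s.user for s in sessions}
    escapes = [s for s in sessions if not s.completed_in_tool]
    completed = [s.seconds for s in sessions if s.completed_in_tool]
    return {
        "active_pct": 100.0 * len(users) / intended_audience,
        "escape_rate": 100.0 * len(escapes) / len(sessions) if sessions else 0.0,
        "median_seconds": statistics.median(completed) if completed else None,
    }
```

The escape rate only works if abandonment is actually observable — which usually means logging when a session starts, not just when it succeeds.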

For feedback, I have found that formal channels — feedback forms, scheduled review meetings — produce almost nothing useful. What works is embedding yourself. Spend an afternoon a month sitting near the people who use your tools. Watch them work. The things they complain about to their neighbor are more honest than anything they will write in a survey. The workarounds they have built tell you more than any feature request.

When Spreadsheets Are Actually the Right Answer

This might be the most counterintuitive thing I have learned as a technology leader: sometimes the right answer is a spreadsheet. Not every problem needs a custom application. Not every workflow needs to be automated.

Spreadsheets are incredible tools. They are flexible, they are familiar, they require zero training, and they can be modified by the people who use them without filing a ticket with engineering. When a process is still evolving, when the requirements are unclear, or when the volume is low enough that manual effort is not a bottleneck, a well-structured spreadsheet with some validation rules is often the best solution available.

I apply a simple test before greenlighting an internal tool project: Is the manual process costing more than twenty hours per week across the team? Has the workflow been stable for at least three months? Can we clearly articulate the decisions the tool needs to support? If the answer to any of those questions is no, we are probably building too early. Let the process mature in spreadsheets first, and build the tool once you actually understand the problem.
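That checklist is almost small enough to write down as code. A trivial sketch, using the thresholds from the text (the function name and inputs are my own framing):

```python
def ready_to_build(hours_per_week: float,
                   months_stable: int,
                   decisions_defined: bool) -> bool:
    """Greenlight test: build the tool only when every check passes.
    A single 'no' means the process should mature in spreadsheets first."""
    return (hours_per_week > 20
            and months_stable >= 3
            and decisions_defined)
```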

The best internal tools I have built were not the most technically impressive. They were the ones that disappeared into the background of someone’s workday, quietly making their job a little easier without asking them to change how they think. That is the standard we should be aiming for — not tools that impress engineers, but tools that serve the people who use them.