This is the first of a series of articles. If you’re curious about the next one, or just want to have a chat, follow me on Twitter. 🙂
Starting a new job can be challenging and, at times, overwhelming. For product managers, it oftentimes means jumping head-first into leading multiple ongoing projects while still learning the ropes at the new place. Between two workshop sessions at last year’s Productized conference, I had a great conversation with a fellow attendee about the new job she had just started.
Well, to be fair, it was less of a conversation and more of a monologue on her end, but it was enjoyable because of how happy she was to have started this new job. She loved it, and that was contagious! But there was one thing that made her a little nervous:
Her new company wasn’t data-driven at all, which is why they had hired her, a very data-driven product manager. But she was used to working at a much bigger organization, with people dedicated to getting her the data she needed. “I honestly don’t know where to start,” she explained. “I’m starting from scratch, but they expect me to pass on my approach to the rest of the team, and I’ve never had to do that.”
I wish I’d had better advice and concrete ideas to offer in that moment. That part of our conversation stuck with me for weeks afterwards, because I kept wondering: What would my approach be if I were in her situation? Would I be able to set up a process for feature definition and tracking from scratch?
The following is a general feature impact tracking approach, from setting the goal before development to evaluation after release. I believe it can be used by most product teams, no matter what their resource situation looks like.
Set a User-Centered Goal
First, we need to understand what our feature should achieve. We hopefully have a concrete goal in mind, but we also need to make sure we’re taking on the right perspective. To highlight this, let’s take a look at a negative example I’ve seen more often than I’d like to admit:
“We want to increase revenue from product segment X by Y% compared to the previous quarter.”
There’s a huge difference between starting with “What do I want?” and starting with “Who is this for?”. The example above focuses on the outcome for the business but doesn’t tell us anything about the feature. Even worse, it leaves out the most important detail: the user.
Good Stories Require Research
There’s a reason writing user stories has become a widely adopted practice. By filling in the Connextra template, we answer the most important questions about our feature:
- Who will use it?
- What do they want?
- Why do they want it?
Answering these questions may at times require a lot of research, but not being able to answer them sets us up for failure: no user wants to interact with something that’s irrelevant or annoying to them.
Let’s assume our product is a car rental website targeting the German market. We’ve noticed that the bounce rate of pages in our vans category is very high compared to those of other categories. Thanks to a survey targeted at bouncing users in this category, we’ve gained a valuable insight: the vast majority of bouncing users leave because they struggle to find a van they can rent with their regular driver’s license, despite most of our listings being suitable for the average license holder.
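None of this requires a dedicated analytics team, by the way. Here’s a minimal sketch of how we could spot such an outlier ourselves, assuming a hypothetical CSV export of sessions with a landing-page category and a bounce flag (the file and column names are made up for illustration):

```python
import pandas as pd

# Hypothetical export: one row per session, with the landing page's
# category and whether the visitor left without further interaction.
sessions = pd.read_csv("sessions.csv")  # columns: category, bounced (0/1)

# Bounce rate per category, highest first.
bounce_rates = (
    sessions.groupby("category")["bounced"]
    .mean()
    .sort_values(ascending=False)
)
print(bounce_rates)  # a vans outlier at the top would confirm our hunch
```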
After some user interviews, we’ve determined that the best course of action would be to clearly display the required license for each vehicle in the vans category. Our Connextra-style user story (sans acceptance criteria) could read:
AS A customer looking to rent a van
I WANT TO immediately see what license each van requires
SO THAT I can easily tell which ones I’m allowed to drive.
That’s a great description of our feature goal! We can clearly see who it’s for, what it needs to achieve, and how it affects the user.
Make it SMART, but KISS
Since its introduction in the ’80s, the SMART acronym has become a management staple when it comes to goal definition, and it provides a great guideline to check your goals against. There are a few common versions of what SMART is supposed to stand for; I personally prefer this one for feature goals:
Specific
When aiming for specificity, it’s important to remember that we’re trying to be specific about the goal of our feature, not the feature itself! I’ve seen teams get lost in tiny details (who doesn’t love a two-hour meeting about fonts?) without ever discussing what they are trying to achieve and for whom.
If we’ve done our research and follow the aforementioned user story format, we should have a clear and specific idea of who the user is and how our feature affects them.
Measurable
We want to somehow determine whether we’ve reached our goal, so we need to think about how we can measure the impact our feature has. Apart from helping us track our progress, defining measurable goals does two important things:
- It makes us critically question whether our feature can have any impact on our product: If we can’t measure its impact, does it have any impact at all?
- It ensures we have a clear vision of what success looks like and how it affects the user and our product.
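For our van example, “measurable” starts with instrumentation. Here’s a minimal sketch of what that could look like, with a hypothetical track helper standing in for whatever analytics client your team uses (the event and field names are made up):

```python
import json

def track(event: str, properties: dict) -> None:
    """Hypothetical analytics wrapper; swap in your real client here."""
    print(json.dumps({"event": event, **properties}))

# Fired whenever a license badge is rendered on a van listing, so we can
# later relate badge exposure to bounces and completed rentals.
track("license_badge_viewed", {"vehicle_id": "van-42", "license_class": "B"})
```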
I’ll cover picking the right metrics and estimating changes in the next post, since it’s too extensive a topic to cover in a few sentences. If you’re curious, please follow me on Twitter, where I’ll be sharing future posts in this series.
Achievable
When we estimate the desired changes in our metrics, we should stay realistic. The younger your product, the more difficult a detailed estimate will be. If we don’t have a lot of data to go on, it’s best to be transparent about it instead of faking certainty. “We have measurably improved our signup rate” is a much better target metric change than “We have improved our signup rate by [random guess]”.
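If you do want to put “we have measurably improved our signup rate” to the test, one simple option is a two-proportion z-test on the rate before and after release. A minimal sketch using statsmodels, with made-up numbers:

```python
from statsmodels.stats.proportion import proportions_ztest

# Made-up counts for two comparable four-week windows.
signups = [540, 480]        # after release, before release
visitors = [12000, 11800]

# One-sided test: is the signup rate measurably higher after release?
z_stat, p_value = proportions_ztest(signups, visitors, alternative="larger")
print(f"after: {signups[0] / visitors[0]:.2%}, before: {signups[1] / visitors[1]:.2%}")
print(f"p-value: {p_value:.3f}")  # a small p-value suggests a real improvement
```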
Relevant
It’s also important to check whether our feature goal is actually in line with our overall product strategy. Is what we are doing contributing to our strategic goals? Did we prioritize it over something else that could have a greater impact on our goal?
Time-bound
The time-bound criterion is often used to check if our goal has a deadline, so that we avoid postponing results. During the feature goal definition process, I prefer to think of it as: “How long should it take to see a change in metrics once we’ve implemented the feature?” I find this approach more helpful because:
- It sets a deadline (e.g. “4 weeks after release”) for us to follow up on our feature’s performance (see the sketch after this list).
- It doesn’t set a deadline for when to start and finish feature development, which would strongly clash with Agile roadmapping.
- It promotes thinking of short- and long-term effects on the metrics we’ve selected.
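For our van example, that follow-up could be as simple as comparing the bounce rate in the four weeks before and after release. A minimal sketch, again assuming a hypothetical session export (the file name, columns, and release date are made up):

```python
import pandas as pd

RELEASE = pd.Timestamp("2021-03-01")  # hypothetical release date
WINDOW = pd.Timedelta(weeks=4)        # the follow-up deadline from our goal

# Hypothetical export: one row per session in the vans category.
sessions = pd.read_csv("vans_sessions.csv", parse_dates=["started_at"])

before = sessions[(sessions.started_at >= RELEASE - WINDOW)
                  & (sessions.started_at < RELEASE)]
after = sessions[(sessions.started_at >= RELEASE)
                 & (sessions.started_at < RELEASE + WINDOW)]

print(f"Bounce rate before: {before.bounced.mean():.1%}")
print(f"Bounce rate after:  {after.bounced.mean():.1%}")
```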
While we make sure our goal is SMART, it’s wise to stick to an old design principle: Keep it simple, stupid! If we over-engineer our goals, we kill what makes good goals so powerful. Instead of facilitating focus and setting an easily understandable target, complicated goals take attention away from the actual work to be done and overwhelm the team.
We’ve successfully defined a user-centered goal and made sure it’s SMART without over-engineering it. Next, we’ll dive a bit deeper into the big M by taking a look at how to select the best metrics for our goal, as well as estimating the changes we’re expecting.
How do you approach setting feature goals? Reach out for a chat over a (virtual) cup of coffee. 🙂