Effective product execution using a product operating model
Great product execution follows from having an effective product operating model: the way that you consistently ship features that create value for your users and business. Once you have decided what to work on with your product strategy, your product operating model ensures that you build the right things in the right way: to a high quality, at speed, and reducing risks up front.
This is critical because building products is difficult, and if you’re not careful it’s easy to achieve the opposite of effective product execution: wasting time building things that users don’t want, or can’t use.
In practice, a product operating model is a rhythm of meetings, documents and other processes that help teams reduce risk early. As with all processes, it should be an accelerator for teams, and kept as lightweight as possible.
General product execution
Let’s start with some general principles for effective product execution, and then see how they translate to digital product development. To execute effectively on products, four ingredients are required:
- Focus – doing a few, high impact things.
- Measures – measuring pacing and progress towards success.
- Ownership – having a single person accountable.
- Check-ins – reinforcing the focus, and auditing work in progress.
To be effective, you need to concentrate your effort on the most important initiatives. Time and effort are finite resources, and spending them on low value activities means you’ll have less to spend on high value activities. Ideally, each team should have a single focus – this maximises the progress in one high value area, eliminates context switching and makes it really easy for people to understand what they should be working on.
When put this way, having focus seems obvious, but in practice, it’s often not clear what the highest value activities are, and stakeholders tend to be happy to stop everything except the projects that directly affect them, which undermines the execution capability of product teams. Here are some tips for helping to maintain focus:
- When you’re given too many priorities, ask where to start. This gets stakeholders to prioritise without implying other things are less important.
- Work out how much time in a cycle will be devoted to each priority, and compare this to how much has been done in the past. For example, if you have 6 priorities for a quarter and are getting push back on cutting any, present this back as 2 weeks per priority, and give a couple of examples of what the team can deliver in two weeks. Most stakeholders will realise that more time spent on fewer things would be better.
- If stakeholders try to shoehorn another priority into a cycle, ask them what should come out to make room. They need to be clear that there is a fixed amount of capacity in the team, and that their demands can impact product execution. It’s healthy to discuss how to use that capacity, but it’s unhealthy to pretend it can be expanded through wishful thinking.
- Remember to include technical debt, maintenance and dependencies on other teams in your planning – perhaps as an estimated percentage of time. Many teams forget this, which means they automatically fail to hit their goals, as they’ve overcommitted the cycle.
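The arithmetic behind the tips above can be made concrete. Here is a minimal sketch – the 12-week quarter and 20% maintenance reserve are illustrative assumptions, not prescriptions:

```python
# Hypothetical capacity check for a planning cycle. All numbers are
# illustrative assumptions to make the trade-off visible to stakeholders.

def weeks_per_priority(cycle_weeks: float, num_priorities: int,
                       maintenance_pct: float = 0.2) -> float:
    """Split the weeks left after reserving time for tech debt,
    maintenance and dependencies evenly across priorities."""
    available = cycle_weeks * (1 - maintenance_pct)
    return available / num_priorities

# A 12-week quarter with 6 priorities and 20% reserved for maintenance
# leaves only 1.6 weeks of focused time per priority.
print(weeks_per_priority(12, 6))
```

Presenting the result this way (“each priority gets under two weeks”) tends to prompt the prioritisation conversation faster than asking stakeholders to cut items in the abstract.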
The second ingredient to effective product execution is measures. Measures quantify and make objective two things for us:
- Our goal: have we achieved what we set out to do?
- Our pacing: are we on track to reach our goal in the time we set ourselves?
In product execution and delivery, these are not necessarily – and indeed are rarely – the same thing.
A goal is always the measurement of an outcome, and comes in the format: “Move [metric] from [baseline] to [target] by [date]”. This makes it unambiguous when we have reached our goal, and whether we have been successful. After all, that’s what product teams are there to deliver. We care about creating user and business value, not what or how much we need to ship to create that outcome.
Pacing is achieved by having leading metrics that indicate whether we are on track to reach our goal by the target date. If we are running behind, we can then take action to speed up, or change course whilst we still have time for this to have an effect. If we only realise we are going to miss our goal when we actually miss it, there’s no opportunity to adjust and improve en route.
For example, a goal might be: “Increase the number of active mobile users from 0 to 1000” – in this case the company doesn’t currently have an app. The pacing would be a plan showing which features need to be built to release the app. We don’t care how many story points this is, but without shipping something, there is no app, and no active users.
In this example, it’s clear that number of active users would be a pretty poor measure of pacing, and it’s therefore totally fine to use an output measure instead. The test is whether this helps identify when the team needs support, and so acts as an accelerator to the team. It helps us reach the goal faster = good; it acts as a distraction from the goal = bad.
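For goals that do have a live metric, a simple way to check pacing is to compare actual progress against a straight-line path from baseline to target. A minimal sketch – the metric, dates and numbers are illustrative assumptions:

```python
# A hypothetical pacing check: is the metric at or above a linear
# glide path from baseline to target? Numbers are illustrative.
from datetime import date

def on_track(baseline: float, target: float,
             start: date, deadline: date,
             today: date, actual: float) -> bool:
    """True if the metric is at or above the linear glide path."""
    elapsed = (today - start).days / (deadline - start).days
    expected = baseline + (target - baseline) * elapsed
    return actual >= expected

# Goal: move active mobile users from 0 to 1000 over a quarter.
start, deadline = date(2024, 1, 1), date(2024, 3, 31)
# Halfway through the quarter with 300 users, we'd expect ~500:
print(on_track(0, 1000, start, deadline, date(2024, 2, 15), 300))  # False
```

Real adoption curves are rarely linear, so treat a check like this as a conversation starter rather than a verdict; the point is to notice drift while there is still time to act.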
The third ingredient of execution is clear ownership of objectives and tasks. Each deliverable should have a single owner accountable for it. This doesn’t mean that they are doing all the work, but it means that they are the person on the hook for ensuring it gets done, and have the authority to chase down dependencies.
It’s a common trap for teams to think that ownership can be shared. The intention is to make sure that owners aren’t seen as more important, and to encourage collaborative thinking. These are both noble reasons, but come at a great cost. Without a clear owner, people spend a huge amount of mental effort unconsciously trying to determine their zones of influence and informal status. They often default to a “wait-and-see” approach when seeing problems, rather than diving in to solve them, for fear of offending someone else. The result is much slower, inferior decision making.
Teams need to be mature enough to recognise that owners exist not to elevate some people over others, but to eliminate ambiguity. Their role is to stretch those around them to do their best work, not to hand out tasks and deadlines. Guidelines for creating ownership:
- Have a single person accountable for each task.
- Communicate clearly they are not the only “doer”, but they are the point of contact for updates, decisions and coordination.
- Recognise that most projects will remain collaborative, with multiple people’s work and input required for the best outcomes.
The fourth and final ingredient for executing effectively is regular check-ins. These can be in person or remote, synchronous or asynchronous, but cannot be automated. The cadence should balance keeping time spent in check-ins to a minimum, whilst ensuring that when progress slows, it is rapidly spotted and dealt with. Check-ins serve two purposes:
- Signalling – maintain a team’s focus and motivation.
- Auditing – ensure that the work in progress is being done to a high quality.
People pay attention to what is being talked about. If you constantly talk about customer needs, everyone will recognise this as important and the team will become customer centric. Conversely if you always talk about the bottom line, they will become more commercial. So, if you need to execute in one particular area, it’s important to keep this front of mind for everyone, and communicate about it regularly. Without this, people will assume that other priorities have come up, and slow down on the original task.
Whilst it’s nice for everyone to have their time to shine, you need to beware of giving too much air time to projects that are not your core focus. Keep people’s attention where you want it to be: on the most important work.
A core responsibility of managers is to review the work that their teams are doing, and ensure that it is being done to the quality they expect. It naturally follows that one of the most important traits a manager can have is the willingness to enforce high standards. Regular check-ins provide a forum for managers to do this auditing, and can trigger additional support if necessary (e.g. additional training, working sessions, etc.). This does not need to be done publicly (though this can save time and help set standards if done constructively and with psychological safety), but cannot be ducked.
How to make check-ins effective:
- Keep the cadence high enough to be responsive, but without overburdening people’s time.
- Make sure the most important work is the most talked about work.
- Find time for work to be reviewed by managers, so they can ensure quality and lean in to support where necessary.
- Foster a culture of radical candour in check-ins, so that honest conversations can happen in an atmosphere of psychological safety.
Ok, so how does this all relate to execution in a product organisation? Just as teams will tend to have discovery and delivery going on in parallel, teams will need to think about execution of both these streams of work. Dual track delivery requires dual track execution.
On the discovery side, good execution at most companies involves a mid-term planning cycle (often quarterly), and then Product Reviews of different flavours that map to the lifecycle of development. The Product Reviews can be held ad hoc, or batched into a regular weekly / fortnightly meeting. It often works well to have a standing time that everyone can make, and then let the agenda be set by the features that need discussion.
The purpose of these check-ins is to ensure that the biggest risks have been addressed up front, quality is maintained, and stakeholders are brought along the journey from problem definition to solution definition and launch. These check-ins should mirror what the team is documenting in its Team Overview, and PRDs, and should not require additional materials to be produced.
Quarterly planning – or whatever cadence you do mid-term planning on – is a chance to make sure each team is set up for success. This is essentially a review of the Team Overview, to see if anything has changed:
- Objective – what we are trying to achieve with this team.
- Goal – how we measure our progress towards that objective.
- Strategy – what the key areas of focus are to achieve our objective.
This is a good opportunity to meet with stakeholders and make sure that you are aware of how they are thinking about things, and that you are aligned on the amount of support they will get for things that are not core to your focus. This can be held as a stand alone meeting, or can be worked into the regular Product Review slots.
Product Review: Kick Off
A Kick Off meeting is a checkpoint to confirm that the problem has been defined tightly enough for the team to enter design. It maps to the first part of a PRD, and should cover:
- Objective & success metrics – how we are defining success.
- Stakeholders – who needs to be kept in the loop.
- Context – why is this an important problem to work on.
- Scope & constraints – what are the limits we have placed on the solution.
- Key risks – where do we need to be particularly careful.
It’s a good time for the team to discuss how much time they anticipate investing in this problem. At this point, a team won’t know how long a feature will take to build, given they haven’t even begun exploring solutions, but they probably have a sense of whether it’s something they want to spend a week, a month or a quarter on. The outcome of this meeting should be that the team has a tight mandate to go and explore solutions in discovery.
Product Review: Solution Review
This check-in is a chance to review the solution the team has come up with before the bulk of engineering is started (though this is often a fuzzy boundary). The team should have user flows and designs or a prototype, and may have conducted technical spikes. Major risks should have been reduced as far as possible before delivery starts, as this is the most expensive phase of development.
The team should discuss:
- User research – insights from user testing and user interviews.
- Quantitative analysis – insights from analytics on user behaviour.
- External references – how other products are solving similar problems.
- User flows – the states and screens the user can move between.
- Designs and prototypes – what the user interface will look like.
Coming out of this meeting, everyone should be clear on the solution the team is planning to build, and the timeline required to build it. With the designs clear and technical spikes completed, the team should also be able to give a reasonably accurate estimate of when this feature will launch, giving other teams (sales, marketing, customer service) enough notice to prepare and support it.
Product Review: Launch Readiness
This check-in comes just before the launch of a feature, and ensures that it will get the support it needs to be successful. Product launches often rely on the support of multiple other teams to make them a success – sales teams need to brief partners, marketing teams need to inform customers, customer success teams need to know how the feature works and answer user questions. This check-in makes sure everyone is aligned on the launch plans, and is ready to make the new release shine. It should cover:
- Testing plan – how the feature will be tested (e.g. staged roll out, AB testing).
- Tracking & analytics – how we will track engagement of the feature.
- Marketing plan – how we will inform users about it.
- Customer service plan – how we will brief user facing teams.
- Legal checks – ensuring we have minimised legal risk at launch.
- FAQs – both internal and external to clarify any common questions.
- Timeline – when release and roll out will happen.
Product Review: Impact Review
Product teams should care a lot more about outcomes than outputs, so it’s important that the final check-in for a feature isn’t the release, but a review of the impact it’s generated. This reinforces that the feature is only as important and successful as the outcome it generates. A good impact review will feed insights back to teams for them to make better decisions going forward. It should cover:
- Metrics – how the feature contributed to the team’s objective.
- Quantitative analysis – what else did we see in terms of engagement and user behaviour.
- Qualitative feedback – what was the response from users (via customer service or elsewhere).
- Next steps – any follow up actions or planned improvements.
Whilst Product Reviews cover the discovery side of development, it’s worth having a separate set of check-ins for delivery. Engineering velocity can vary, and as there are no outcomes without outputs, you want to make sure that dips in velocity are addressed rapidly. There are lots of problems that can slow down development:
- Tech lead strength – More experienced tech leads are better at designing solutions, structuring work, and amplifying the productivity of their teams.
- Clarity and stability of the goal – Product teams take time to get up to speed as they get to know the code base and problem space better. Changing or ambiguous priorities therefore have large switching costs.
- Understanding of the problem – Unless effort is applied to the right problems, teams ship “stuff” instead of “stuff users want”. You need to do more research on the customer journey if features don’t deliver the expected impact.
- Specialist skills – Some problems have well-trodden solutions, but these aren’t immediately obvious to non-experts – for example SEO and search. Ask peer companies and investors if you need specialist skills.
- Dependencies – Teams should be built as autonomous units of delivery. If this isn’t possible, then make sure that teams all have the same priorities so stuff doesn’t get blocked.
- Engineering capacity – Velocity doesn’t scale linearly with more engineers, but small teams (2–3 engineers) will not go as fast as larger teams (4–6 engineers).
- Engineering seniority – Not all engineers are equal. Experienced engineers will work considerably faster than juniors. Very junior devs slow down delivery, so make sure it’s the right time to invest in them.
- Code complexity – If the code base is complex then “simple” changes take a lot of time because it’s difficult to work out what you need to change, and because this leads to unintended consequences.
A Delivery Review is a forum where PMs and Tech Leads can discuss any problems they are facing, and get support from their managers. It should be characterised by true collaboration and psychological safety. Simply having a regular time for PMs and Tech Leads to discuss their team’s performance with leadership can help troubleshoot issues faster, and deliver a lot of value by accelerating teams.
A basic delivery report can make these conversations even more objective and productive, and covers three areas:
- Team health – is the team setup for success?
- Delivery – is the team shipping effectively?
- Impact – is the team creating value for users and the business?
The team health section of the Delivery Report has some quick indicators for whether the team is set up for success. It helps you spot root causes of problems with delivery. It might include:
- Engineering days – how much capacity the team has had in the past week.
- Allocation of time – which themes or projects they have been working on.
- Blockers & risks – a qualitative description of issues facing the team.
The delivery section of the Delivery Report shows the amount of work that has been completed on each feature or epic. You should be able to estimate when features will reach milestones such as their Kick Off check-in, Solution Review or release. Progress can be measured with something as simple as the number of tickets or story points completed. Of course this doesn’t allow for the inaccuracies of estimation, but it provides a directional signal and sparks the right conversations:
- Is delivery slower than expected? Is there a persistent trend?
- Are there obvious reasons for this?
- What action can we take to speed the team up?
Having some measure of how delivery is proceeding allows you to problem solve issues where velocity is part of the problem. It won’t help you identify if you are working on the wrong things, but your Product Reviews should address this risk separately.
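The milestone estimates above can be rough. A minimal sketch of one common approach – projecting from average recent velocity, with illustrative numbers:

```python
# A rough milestone projection from recent velocity. Ticket or story
# point counts are an imperfect proxy, but give a directional signal
# that sparks the right conversations.
def weeks_to_milestone(remaining_points: float,
                       recent_weekly_velocity: list[float]) -> float:
    """Project weeks remaining using the average velocity
    over the last few weeks."""
    avg = sum(recent_weekly_velocity) / len(recent_weekly_velocity)
    return remaining_points / avg

# 30 points left; velocity has dipped from 12 to 8 points per week.
# The average says ~3 weeks, but the downward trend is the real
# conversation starter.
print(weeks_to_milestone(30, [12, 10, 8]))
```

The persistent-trend question from the list above matters more than the point estimate: if the weekly numbers are falling, the projection is optimistic, and that is exactly the kind of issue a Delivery Review exists to surface.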
Finally, it’s important to include impact metrics – both your goal and any leading indicators – on the delivery review. More than anything, this is a blunt reminder that you care about outcomes, not outputs. It can also help you understand (and communicate to others) the lag between releasing a feature and seeing impact, if you need to wait for a staged roll out or user adoption.
As with all our guides, we will outline a solid format you can use off-the-shelf here, but it will be most effective if you use it as a starting point and co-create it with your team to suit your own specific needs. The below example is appropriate for a mid-sized tech company, but a start-up with a single team would obviously want something much lighter weight.
You can get a Google Sheet template for a Delivery Report here.
A combination of Product and Delivery Reviews allows you to really understand and challenge the teams’ thinking on what they are building, as well as understand how development is going. They are effective ways to execute across multiple product teams, highlighting teams that are struggling, and allowing you to focus your efforts on unblocking them. In combination, they allow you to reliably create value for your users and the business. And, as always, process should be an accelerator to teams, not an obstacle. If this process is working well, the teams will not only move faster, but be happy to participate in it.
What is a product operating model
A product operating model is the process you put in place to accelerate teams, and ensure they are consistently generating value for users and the business. In practice, it is the application of focus, measures, ownership and check-ins: a rhythm of meetings, documents and other processes that help teams reduce risk early. As with all processes, it should be an accelerator for teams, and kept as lightweight as possible.
What is the right amount of process to have?
You should aim to have as little process as possible, and as much as necessary. Process is only as good as how much it speeds teams up, and every company will be different. If you think you can go faster with fewer meetings and documents (or more!), then give it a go and see what happens. And as always, learn and iterate as you go.
What’s the point of tracking delivery?
Tracking delivery is an important leading indicator of whether a product team is on track to deliver the expected impact – i.e. its pacing. Whilst shipping features is not a measure of success for a product team, it’s a necessary prerequisite for delivering outcomes – there can be no outcomes without outputs.
How much time do all these meetings take?
A typical team might spend 30-60 mins in Product Review each week, and 15 mins in Delivery Review. This is usually a highly effective use of people’s time, as long as the attendance for the meetings is modest (perhaps 10 for Product Review and 4 for Delivery Review).
Won’t I spend all my time prepping for product reviews?
Teams should use the materials they are already using internally at Product Review: the Team Overview for quarterly planning, or a PRD for other sessions. A Delivery Review shouldn’t take more than 5-10 mins to prepare. Something has gone wrong if PMs and Tech Leads are complaining that prep for these meetings is taking too much time.