Product Led Growth is all the rage in 2020. But growth doesn’t come just from having good products; it comes from a relentless focus on building the right thing at the right time for the right audience.
I recently spoke at the Product Led Summit 2020, presented by the Product Led Institute, on experimentation and product roadmaps: how high-growth startups leverage experimentation to drive their product roadmap.
Topics in this talk include:
- A review of why startups fail
- How to set up a culture of experimentation
- How to get started running experiments
- How to use the step functions of growth to ensure you’re building and launching the right feature at the right time for the right audience
- How to set up and organize your experiment backlog
- Overview of where and how experiments fit into the overall product roadmap
This talk is great for growth marketers and product managers alike who are looking to step up their game.
Craig Zingerline (00:03):
Hey everyone, this is Craig Zingerline. I’m going to talk to you today about how high growth startups leverage experimentation to prioritize their product roadmap. We’re going to just get right into it. So here we go.
Craig Zingerline (00:15):
All right. The way products and features are typically built works something like this. You’ve got an amazing idea, or somebody in your company does, and you think this feature is really going to move the needle, so you put it in the product backlog and you get to work building. You think it’s only going to take three sprints. We build, build, build, let’s go. You figure, hey, we’re going to launch this thing, we’re going to rake it in, and you go live. One month later, though, you realize, okay, well, maybe that didn’t work so well. I think we need to build one more thing.
Craig Zingerline (00:47):
So you’ve got this amazing idea. You say, this update’s really going to move the needle now. So you put it back in the backlog. This time it’s only going to take two more sprints. You get to work, you build, build, build, and you think, “Hey, I’m going to launch and we’re going to rake it in this time.” What ends up happening is usually disappointment or failure. So, during this talk, I’m going to walk you through some of the ways we can get better at building features, and we’re going to talk about how to build experimentation right into your product roadmap, right alongside your core product development work.
Craig Zingerline (01:22):
All right, so this is a screenshot of your typical product development life cycle. You see it starts with the product backlog and your sprint backlog, and then you kind of move into your sprint. Sprints are generally one or two weeks, maybe up to 30 days. Most scrum teams work with a two-week sprint cadence, and then you have a release. The challenge here is that you’re often working with unvetted features and unvetted ideas. Why does this actually matter? Well, because most startups fail, and usually not because of bad products. Most startups fail because of unvetted features, poor marketing, and other market-related issues. The numbers are pretty sobering, so we’ll take a look.
Craig Zingerline (02:10):
In 2018, CB Insights, which is a publisher of data, did a post-mortem analysis. They looked at 101 different startups and why they failed. These are the reasons. There’s not a lot about product here. So, product without a business model, but that’s not purely product. Product mistimed. There’s really not a lot about bad features or “we built the wrong thing.” Paul Graham, back in 2006, wrote a really interesting essay on 18 mistakes that kill startups. I’ll pause on this for a second. Again, there’s a whole bunch of issues here, but not a lot around just the product being bad, right? This is important because 74% of startup failures have something to do with growth. Or, put differently, it’s really not about the product. It’s more about your features and timing. We’ll dig into that in this lesson.
Craig Zingerline (03:15):
How do we go about actually changing this? Well, doing the right thing means ensuring that the right thing is built at the right time. You want to make sure that the metrics and measurement for whatever you’re building are tied right into that feature and product development, and that you really have ownership over those metrics as you go. You want to ensure that both sets of stakeholders, your customers and the business you’re working with, gain value with each release. That’s one of the core tenets of growth: it’s not just about the customer, and it’s not just about growing your business. It’s actually a healthy balance between building a strong business and keeping your customers very happy.
Craig Zingerline (03:57):
You have to do both at the same time. You’ll also want to focus on making sure that experimentation and rapid iterations start to get sprinkled into your core product development. What is your team supposed to do here? I would urge you to think about the process and the approach you’re taking. Come up with an experiment-based feature roadmap methodology like the one I’ll share with you here. Number one is that you validate the need for the feature first with real users, ideally with something that’s out in the wild that you’re actually getting real feedback from. But you can start really, really simple. Get something basic out early and then iterate really rapidly.
Craig Zingerline (04:41):
Leverage feedback from the start. It doesn’t really matter what you’re building. If you’re getting customer feedback and buyer feedback from real users, you’re going to start to learn much more rapidly, and you’re not just going to trust your gut or your executive team who says, “Hey, we need to build these five things.” This really starts by using a hypothesis-driven approach. I’ll show you what that means in a second. When you’re working with more of an experimentation methodology in your product roadmap, you want to build your hypothesis. For each and every experiment that you’re going to run, or each and every feature that you’re building or plan to build, you really should build this hypothesis.
Craig Zingerline (05:20):
It goes something like this: by doing or not doing something, we believe that another thing will be impacted by something. Basically, you can use this framework to fill in the blanks. Another thing you’re going to want to do is align your product managers really closely with marketing. At the product level, there’s a bunch of questions you’re going to want to ask yourself. Why do we need this feature that we’re building? Who benefits from it? What might it actually detract from? It’s not uncommon for features to be continuously deployed and rolled out, and then utilization across those features, or other features, starts to tank. This leaves a lot of product managers and marketing teams scratching their heads: we thought we were just doing the right thing by continuously building.
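One way to make that fill-in-the-blanks template concrete is a small structure like this sketch. The field names are illustrative, not something from the talk:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """Fill-in-the-blanks experiment hypothesis:
    'By <change>, we believe <metric> will be impacted by <expected_impact>.'"""
    change: str           # what we will do (or stop doing)
    metric: str           # the thing we believe will be affected
    expected_impact: str  # how we expect it to move

    def statement(self) -> str:
        return (f"By {self.change}, we believe {self.metric} "
                f"will be impacted by {self.expected_impact}.")

# Example, loosely based on the discount test described later in the talk:
h = Hypothesis(
    change="offering a discount to a small cohort of existing customers",
    metric="revenue per user in the test cohort",
    expected_impact="an increase relative to the control group",
)
print(h.statement())
```

Writing every experiment down in this one-sentence form makes it much harder to ship a feature without a testable claim attached.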
Craig Zingerline (06:09):
But the unintended consequence of rapid feature development, if it’s not been vetted, is that there might be a detraction on some other part of your business model or within your product funnel. You’ll also want to ask, what’s the goal with this release? What, tangibly, do we actually want to get done by putting this out there? Just building stuff to stay busy is a really poor excuse for how to run a startup. You really want to have a goal behind each and every thing that you’re doing, and ideally back that up with a little bit of data. How are you going to measure the success or failure of what you’re building? That gets back to the previous point. If you can’t measure what you’re putting out there, then it’s really, really hard to objectively tell whether what you’re building is actually the right thing.
Craig Zingerline (06:56):
What’s the simplest way we can validate this idea? I’ll walk you through the ways you can actually go about launching features in the wild, but really you need to get into this really lean mindset of starting small for most things. Maybe at an enterprise SaaS platform that’s five years into the market, you have to go much broader and much deeper with your feature set. But for pre-product-market-fit or growth-stage startups, a lot of the time, the more simply you can deliver something, the faster your feedback cycles are going to be. This is really important. So be mindful of the other work that the feature development you’re working on is going to generate. Outside of just the product view, here are some questions. What and who do I need to support this feature?
Craig Zingerline (07:46):
Again, it’s not going to be just the product team, or maybe not the marketing team. What other teams does this thing that you’re putting out there impact, and what detractors are you going to run into because of this release? Do I need to write content, emails, drip campaigns? How are you going to effectively support this from a channel perspective? What other parts of the marketing funnel does this feature change? Does this change how you go to market? Does it change how you do activation? You really have to think through this holistically instead of just putting things out there without understanding how they’re going to impact the work of other people.
Craig Zingerline (08:27):
How are users going to learn about this feature? And then, what happens if this rollout fails? Do you have the discipline to actually roll back? Do you have the measurement and tooling in place so that when you put a feature out there and it totally tanks, or it kills your revenue streams, or it tanks your conversion rates, you have the discipline to roll it back? When we look at this from a growth-led product development standpoint, with experimentation at the core, there are a few ways we can move forward. I talk a lot about what I call the step functions of growth. If you think about growth and feature and product development in terms of step functions, really, at the start you’ve got nothing, and then you’re moving to something.
Craig Zingerline (09:09):
So you don’t have a feature, and then you have a feature; you don’t have an idea, then you have an idea. The way you want to think about this process when looking at experimentation within your startup is that you go from zero, then zero to one, then one to 10, then 10 to 100, and then you move into optimization land. The zero-to-one step function is really when you’re at the starting point of an experiment: you’re starting with something small, and you’re probably doing a bunch of manual work to validate that feature, right?
Craig Zingerline (09:39):
Instead of spending those two weeks building something out, spend a couple of hours planning how you’re going to get feedback on that future iteration of whatever you plan to build, and put it out there in the wild. The next iteration gets a little bit more scientific, gets a little bit bigger, and maybe you do a little bit of work there. The next iteration, 10 to 100, is when you start to scale based on this feature. Here you may get team buy-in, the idea that you have turns into a real feature that you build, and then you move into optimization.
Craig Zingerline (10:07):
I’m going to walk you through each level of this with a real example in a second. This is an example from my current company, Sandbox. Sandbox is a B2C platform that lets supporters of military members who are about to go through boot camp write letters via our platform, a lot more cost-effectively than physically writing a letter and overnighting it to the recruit. So we solve a huge pain point at that intersection of civilian and military life, and we monetize via this digital-physical product. We had a hypothesis that said, by leveraging discounts for some of our existing customers, we thought we could increase revenue for a test cohort compared to our control. I’ll walk you through all these numbers in a second.
Craig Zingerline (11:00):
The feature development planning we thought through was, okay, most of the time the way you’d instantiate testing a hypothesis like this is by building out a promo code system, but that was going to take multiple sprints. So we thought, okay, what can we do to actually speed up the delivery cycle so that we can get something into the wild and get feedback before we go spend a whole bunch of time building something? We thought promo codes could work, or discounts could work, but we didn’t know for sure. Again, this was our hypothesis. So we said, okay, let’s do as little work as possible to validate this, and then we can iterate as we go, only building features when needed.
Craig Zingerline (11:42):
For this set of experiments, we really held ourselves accountable for not just getting into the build mindset, but getting into the test-and-experiment mindset. When we moved into execution, what we realized is that we actually had to build something small: basically a piece of code to enable some of the users within our cohort to see a discounted price. So when users come in, we have a control group that sees our standard pricing, and we were able to put a certain cohort of users into a test group that saw just one discount on one of our products. That was our starting point. We didn’t build this huge promo code system. We literally built as minimal a feature as possible and got it out there.
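To make that “as little code as possible” idea concrete, here’s a minimal sketch of that kind of pricing gate. The product names, prices, and user IDs are all hypothetical; the talk doesn’t show Sandbox’s actual implementation:

```python
# Hypothetical product names and prices, not Sandbox's real numbers.
STANDARD_PRICES = {"single_letter": 5.00, "three_letter_bundle": 12.00}
DISCOUNTED_PRICES = {**STANDARD_PRICES, "three_letter_bundle": 9.00}

# The test cohort was hand-picked, per the talk; placeholder IDs here.
TEST_COHORT = {"user_123", "user_456"}

def prices_for(user_id: str) -> dict:
    """Control users see standard pricing; the test cohort sees one discount."""
    return DISCOUNTED_PRICES if user_id in TEST_COHORT else STANDARD_PRICES

print(prices_for("user_123"))  # discounted three-letter bundle
print(prices_for("user_999"))  # standard pricing (control)
```

A lookup against a hand-picked cohort like this can replace a whole promo code system for the purpose of a first directional read.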
Craig Zingerline (12:28):
We developed that quickly, we deployed it, we launched it, and then we ran the step-function zero-to-one test. Again, we’d never done anything with discounts. Now we move into execution mode. We said, “Okay, once we get this out there, we’ll measure the results and we’ll look at next steps.” Here’s what it looked like. We manually picked a small cohort of people and put them into this test group. This group saw a discount in our product, and we also ran messaging campaigns to the cohort that was going to see the discount, trying to drive more awareness of the campaign. We already had marketing capabilities that allowed us to reach different cohorts, or different sets of users, in our funnel, and in this particular case we said, “Okay, we’re going to do a little bit of messaging to tell them about this discount. We’re going to show them that, and we’re also going to show it in the product.” Everybody else, the 95% of people in the control, didn’t have any idea this was happening.
Craig Zingerline (13:25):
At this point, statistical significance really was not what we were looking for. We wanted some directional signal that said, hey, this discount either works or doesn’t work, or maybe we’ll learn something from it. This is where we were in the step functions of growth: at zero to one. When we established the test, we had to look at it from a data standpoint. So we established what we call our control, which is basically the baseline we started this experiment at. Historically, when we looked at the prior 30 days of numbers for letters purchased, which is what we sell in our product, our control group had 70% of users buying a single letter, 18% buying a three-letter bundle, and 11% buying another bundle that we had.
Craig Zingerline (14:09):
That was our starting point. For our first test, when we put this out manually, we decided to discount our three-letter bundle. So you can see here that we were able to increase the percentage of people who purchased that three-letter bundle, and we decreased the percentage of people who purchased a single letter, but because of the purchase mix, for whatever reason, our revenue was actually lower than the control. So our first experiment was a failure. We went back and changed our messaging a little bit. We changed a little bit of how we thought about the test, and mostly that came in the form of very simple message changes.
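To see how a bigger bundle share can still mean lower revenue, here’s the purchase-mix arithmetic with hypothetical prices and a made-up test mix. The talk only gives the control mix (70% single letter, 18% three-letter bundle, 11% other bundle), not the prices or the test-group numbers:

```python
# Hypothetical prices; the control mix percentages are from the talk.
prices = {"single": 5.00, "three_bundle": 12.00, "other_bundle": 20.00}
test_prices = {**prices, "three_bundle": 9.00}  # three-letter bundle discounted

control_mix = {"single": 0.70, "three_bundle": 0.18, "other_bundle": 0.11}
test_mix = {"single": 0.60, "three_bundle": 0.29, "other_bundle": 0.11}  # made up

def revenue_per_buyer(mix: dict, price_table: dict) -> float:
    """Expected revenue per purchasing user, given a purchase mix."""
    return sum(share * price_table[sku] for sku, share in mix.items())

print(f"control: ${revenue_per_buyer(control_mix, prices):.2f}")    # $7.86
print(f"test:    ${revenue_per_buyer(test_mix, test_prices):.2f}")  # $7.81
# The bundle's share went up, but revenue per buyer still went down,
# because the discount more than offsets the mix shift.
```

This is the exact failure mode the experiment surfaced: behavior moved in the intended direction while the revenue metric moved the wrong way.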
Craig Zingerline (14:48):
We didn’t touch the code base at all. When we sent marketing messages out to the users, we basically highlighted different value points. We failed again, though. We actually did a better job of getting people to spend further up the funnel with us, but the revenue compared to the control was down, even worse than the first test. So we took a pause and said, “Okay, what did we learn?” We ran that over two different weeks. We learned that, okay, we can change user behavior, we can actually control for that, which is kind of cool, but the changes resulted in lower overall revenue. That’s not good. We’re a very revenue-focused business.
Craig Zingerline (15:30):
Tests one and two were failures. But we thought, well, there’s something there. Because we could change that user behavior, and you could see it in the numbers, we thought maybe we’d make some new plans and revisit and rebuild our hypothesis a little bit. We made no change to the product. Now we’re heading into our next iteration of experiments, and it’s messaging only. This is still owned by marketing. There’s no product change at this point at all. This is really critical, because I think when people start to see failure with their experiments, a lot of them either give up or decide to go back and build more stuff. In our case, we were really true to our methodology, which was: we’re not going to change anything in the product until we’ve really exhausted all other methods, and we’ll let the engineering team focus on higher-value things than tests that might fail. So we didn’t make any changes.
Craig Zingerline (16:26):
What we ended up doing is we thought about that zero to one and decided to test discounts on a different type of purchase. So we looked at our single-letter purchase, which is our highest seller. At this point we said, “Okay, let’s maybe pull in some more folks to get their thoughts on how we should actually run and execute these experiments.” Here in this one-to-10 step function, if you’ve got a data analyst or a data science team, this is a great time to pull them in. You’re getting a little bit more formal, a little bit more scientific. If you’ve got a design team, maybe add some design polish. You’re probably going to open this up to a bigger cohort of users, so the stakes go up a little bit.
Craig Zingerline (17:03):
Whereas the zero-to-one game is really supposed to be low-risk, kind of lo-fi, just pure experimentation, in the one-to-10 game you’re starting to make your idea a little bit more sophisticated, and so you’ve got to put a little bit more thought into it. Here’s what we looked at in terms of the step functions of growth: we’d moved up to that next iteration with this next batch of tests. When we started discounting that single letter, again with no product change here, for the first time we saw our revenue increase. The revenue split between our products changed in a positive way. Again, we had our hypothesis, and for the first time now, in experiment three, we’re seeing, okay, this is starting to work.
Craig Zingerline (17:49):
Then we went back to the drawing board and said, “Okay, how do we start to scale this effort?” For most organizations, when you get a positive read on your experiment and you can basically show a win versus your control, it’s probably time to move it up in terms of visibility. This 10-to-100 step function is often where you’ll get this in front of 50% or 80% of your audience. Maybe not 100% yet, but you’re going to be thinking of new ways to validate whatever hypothesis you might have, and that may include just getting in front of more people or tweaking your experiments further. If you fail here, and if you repeatedly fail here, it means you didn’t get enough traction from the experiment.
Craig Zingerline (18:34):
You probably got statistical significance saying that you’re making some difference, but it’s really not big enough to warrant going all-in on multiple sprints’ worth of work on this big feature. So maybe you didn’t have a great idea after all. It’s way better to kill an idea at this point, when it’s mostly just been you, or maybe a small team, working on it, instead of going and spending a month on feature development, launching it, and failing at that point. This is a much better place to fail. If you succeed, though, like we did, then you may have a feature item or a backlog item. For us, this meant looking at how we take the data that we leveraged and learned in our experiments and scale it up to everybody.
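As a rough illustration of that “significant but not big enough” situation, here’s a sketch of a two-proportion z-test on made-up counts. None of these numbers come from the talk:

```python
import math

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return p_b - p_a, p_value

# Made-up counts: on a big enough sample, a half-point lift comes back
# "significant" (p ~ 0.009) yet may be too small to justify sprints of work.
lift, p = two_proportion_ztest(conv_a=5000, n_a=50000, conv_b=5250, n_b=50000)
print(f"lift = {lift:.1%}, p = {p:.3f}")
```

The point is to weigh the effect size against the build cost, not just the p-value: a real but tiny lift can still argue for killing the feature.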
Craig Zingerline (19:19):
So remember, we had this original vision of building this promo code system. At this point, we decided, okay, we’ve got enough data now that we think we can build a new feature, which will actually expand our ability to get this in front of more people and give us more capabilities within our product’s admin tooling to run more iterations of these tests. Since we had enough data coming in showing us that we should be successful, we decided, okay, now it’s time to do a little bit of work. Here we’re in that 10-to-100 area. With growth-led development, we’re back to the amazing idea. We think, hey, this feature’s really going to move the needle, we build, build, build, it’s going to take three sprints. And we launch, we rake it in, and this time we have a winner.
Craig Zingerline (20:06):
Basically, that’s the process we went through. If you look at some of the later iterations, once we put a little bit of engineering work into the product with our promo system, by the end we were able to lift our revenue by over 24% compared to the control. That’s two or three weeks’ worth of work, and a whole bunch of testing before that work, but we were able to fine-tune our feature development to a point where we could predict how these experiments were going to run. We now have tons of data on the cohorts, and we were able to impact our revenue in a very, very positive way. In a lot of ways, that’s been a game changer for us.
Craig Zingerline (20:48):
From there, it becomes more of an optimization game. So it’s less build, build, build; it’s more, okay, let’s tweak the messaging, let’s test with different cohorts, let’s change colors and button sizes and display tactics and things like that, once we get this thing out into the wild at scale. That’s what we call the optimization, or always-on, mode. Once you walk through the step functions, it’s likely that you’ll build your learnings into your product. Just like we did with our promo code system: we took those learnings, we iterated, and those became the updates that we put into our software. In fact, we’re still working on new iterations of that promo code module that we started so simply. Then, from there, you start the process over with a new set of hypotheses, and you just kind of repeat. It’s really fun.
Craig Zingerline (21:40):
The takeaway, though, is to hold yourself and your product team accountable so you don’t build too many unvetted features, because that really is what will kill a startup. Switching gears a little bit: when you look at roadmapping growth and how to get started with this stuff, what you should really be considering are two different streams of features that feed your product backlog. You saw this image before, down in the lower right, where you’ve got your product backlog moving to your sprint backlog, moving to your sprint, and then release. The way I look at this as a product manager is you’ve really got two streams. You’ve got your experiment stream, which is: you’ve got this idea, you test, test, test, and then you build. Those ideas become part of your backlog, but they’ve been vetted.
Craig Zingerline (22:28):
Then there are going to be things that you know, or “know,” you need to build, and that becomes its own stream. It’s not uncommon for that second batch to be where most startups spend most of their time, but the high-growth startups really look at that experimentation stream and leverage those learnings and lessons as the primary feeder of the core backlog. I think there are always things that you know you need to build. If you’re processing payments, and that’s core to what you’re doing as a company, and you can’t process payments until you build a payment processor or do the integration, then you have to do that work.
Craig Zingerline (23:12):
Those are things that you know you have to do. There are always going to be some of those, or at least it’s likely there are. But for the rest of it, you have to work from an experimentation standpoint if you really want to move fast and not waste time. From there, just generate ideas and go. You don’t need full-time resources to run experiments. You don’t need a full-time growth team. Everybody’s got ideas. You can workshop this with your company and leverage the crowd to get ideas. We do a lot of this at Sandbox, where ideas come from everywhere. At pretty much every company I’ve been at, and every gig I’m on, it’s been really interesting to see who comes up with ideas, and it really is cross-functional. So you have to embrace that. Come up with a methodology and a process so that anybody in your organization can participate in the experimentation process.
Craig Zingerline (24:04):
You also probably should have somebody functionally act as a growth lead. What that really means is that somebody is designated as the person who’s shepherding these ideas through the backlog process. It could be your product manager; it might be a growth lead. If you’ve got a growth product manager, that will be this person. But it could be somebody in marketing, it could be somebody in operations. It doesn’t really matter, but you want somebody to be, not the gatekeeper exactly, but the organizer of these ideas and the person who socializes them.
Craig Zingerline (24:43):
Then, obviously, pull in cross-functional team members, so engineering, product, etc., to support your efforts as you go. Most importantly, and I think this is critical, at all the high-growth companies I’ve worked with, the executive team has really bought in to letting the team be fairly autonomous in terms of what they’re allowed to do, and has enabled them to take on some risk. When you free up the team, maybe you put some boundaries around what they can and can’t do, or what they can and can’t spend, maybe some messaging constraints and things like that. But when you open that up, you may find that you’re able to move a lot faster, because people feel invigorated that they can go from their idea through execution, through this process, really quickly. It’s super engaging.
Craig Zingerline (25:29):
There are a couple of different frameworks you can use. There’s the RICE framework, which will help you get organized. These frameworks are really meant to help with experiment organization more than anything else, and to put a scope to the ideas. RICE was developed by Intercom. There’s one called ICE. I’ve got my own version of that, which I call ICEEE, with three Es, or you can build your own and get cross-team buy-in. This is the framework we use, and I’ve used it a bunch in the past. Really, what it is: you’ve got a test name, you’ve got your hypothesis, and then you basically come up with a score.
Craig Zingerline (26:06):
So the higher the score, the better in terms of the growth test. You’ll likely want to take the items that have a higher score and build them first, or get them into your growth testing framework first. If you look at this, you’ve got impact plus confidence, and there’s a multiplier there. Then you subtract out all the efforts: you’ve got engineering effort, marketing effort, and then this other effort, which is whoever else in the organization is going to need to support the feature you’re putting out there, or the work that’s going to go into it. Then you come up with your score. I’ll share this and some resources afterward, but it’s helped keep us organized.
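As a sketch of that arithmetic, here’s one way the ICEEE-style score might be computed: impact plus confidence, times a multiplier, minus the three effort terms. The 1–10 scales and the multiplier value are assumptions; the talk doesn’t spell them out:

```python
def iceee_score(impact, confidence, engineering_effort, marketing_effort,
                other_effort, multiplier=2):
    """ICEEE-style score: higher is better. All inputs on a 1-10 scale
    (the scale and multiplier are assumptions, not from the talk)."""
    return (impact + confidence) * multiplier - (
        engineering_effort + marketing_effort + other_effort)

experiments = {
    "discount three-letter bundle": iceee_score(8, 6, 1, 2, 1),   # 24
    "build full promo code system": iceee_score(9, 4, 8, 3, 4),   # 11
}
# Build the highest-scoring items first.
for name, score in sorted(experiments.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:>3}  {name}")
```

With made-up inputs like these, the cheap discount test outranks the full promo code build, which mirrors the sequencing in the Sandbox story.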
Craig Zingerline (26:46):
This is a very primitive version of a growth dashboard. I put this up here because you really need to be tracking your efforts on a weekly basis. This is modified, but it’s essentially a real screenshot of some stuff I’ve done. It doesn’t have to be this perfect Mixpanel or Amplitude or Google Data Studio solution. You can hack together a report that looks pretty bad, like this one, but actually gives you the metrics you need on a week-to-week basis to keep yourself honest about what’s working and what’s not. If you’re running experiments, then you have to track the results somewhere. I would look at it on a daily basis, and I would track the metrics manually to start, so you really own your numbers.
Craig Zingerline (27:31):
Just a few other tips to stay organized. Experiments really should be driving the key portions of your product backlog. Define your process, and stay consistent with your cadence and your measurement. I’ve experimented with this model at my own companies, and I’ve found that weekly experiment sprints, with a weekly reporting cadence where you’re looking at Monday-through-Sunday data, enable you, every Monday, to look back at last week, get organized for the coming week, make whatever changes, go through the review process, and then decide what you need to do that next week for the next iteration of experiments and the roadmap.
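As a sketch of that Monday-through-Sunday cadence, here’s one way to roll dated experiment results up into weekly reporting windows. The data and experiment names are made up:

```python
from datetime import date, timedelta
from collections import defaultdict

def week_start(d: date) -> date:
    """Monday of the week containing d, so windows run Monday through Sunday."""
    return d - timedelta(days=d.weekday())

# Hypothetical daily results: (date, experiment name, revenue).
results = [
    (date(2020, 3, 2), "bundle discount", 120.0),
    (date(2020, 3, 4), "bundle discount", 95.0),
    (date(2020, 3, 9), "single-letter discount", 140.0),
]

weekly = defaultdict(float)
for day, name, revenue in results:
    weekly[(week_start(day), name)] += revenue

# Every Monday, review the completed week per experiment.
for (monday, name), total in sorted(weekly.items()):
    print(f"week of {monday}: {name}: ${total:.2f}")
```

Even a manual spreadsheet version of this grouping gives you the same Monday review rhythm; the code just shows the windowing.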
Craig Zingerline (28:18):
Then work really, really closely, if you are not the PM, with your PM to ensure that you’ve got alignment on all feature development. What you don’t want to do is create these two different work streams with conflict between them. In the best organizations, maybe there’s a tiny bit of conflict there, because you’ve got the stuff you know you need to build bumping up against the stuff that has been vetted, but that’s healthy. I think, though, you really need to be aligned as a product organization to pull this off. Get in there early with your product managers or your product team, and make sure you’ve got good communication across the board so that you can support this methodology.
Craig Zingerline (29:00):
That’s it. I’ve got a bunch of resources up on my personal website, so feel free to check that out. I also have a whole bunch of stuff that I’ve put into a free growth curriculum online, so if you hit the Bitly URL there, you can check that out. Feel free to email me; I’d love to share more examples with you. Hopefully this was helpful, and thanks so much for spending some time with me today. Bye-bye.