The Startup Marketer's Experimentation Process

Over the past month, I’ve taken a hard look at how I personally do marketing online. Specifically, I’ve been trying to build processes I can use at any company as I progress through my marketing career. I wrote about how I onboard myself at a new company, but my goal with this post is to dive a bit deeper.

Before I jump into specifics: I believe that establishing a process around marketing experiments is one of the biggest competitive advantages you can have. So many marketers try tactics and never document the results (I’m guilty of this.) Even worse, there’s no “formula” to how you learn. I’ve been trying to change that, so I figured I’d share my process around experimentation. It’s still a work in progress, so any feedback is appreciated.


Rule #1: You don’t have all the answers

One of the biggest problems with running marketing experiments is how the ideas are generated. Ideas are a dime a dozen, and asking current employees to dream up a backlog of tests from intuition alone is a recipe for disaster.

I’d like to believe that my intuition for how these marketing tests should be run is good, but I’ve most likely spent dozens of hours using the product, I know every flow inside and out, and it’s tough for me to relate to the average user. This is especially true of onboarding flows.

Rule #2: Your customers hold the keys to growth

If you plan to grow a sustainable business, you must charge money.

Who pays you money? Your customers.

Put simply, your customers are the keys to your success. It’s your job to find out what they love/hate about your offering, and improve based on this feedback. This is absurdly simple, yet extremely important.

One of my favorite things to ask customers (or potential customers) is “Where do you go online to stay up-to-date with your industry?” From a marketing perspective, this tells me exactly which channels I can reach them through. If it’s a popular blog, it’s probably a good place to guest post; if they spend their time on Twitter, perhaps Twitter ads could work.

This isn’t rocket science.

Rule #3: Qualitative feedback is the foundation

I like qualitative feedback. It’s rich with insight, typically comes straight from customers, and gives you a much better starting point for running experiments. This is especially true at a young company.

Here’s an example. I’m looking to run an experiment, and I have two teams tasked with coming up with experiments to run (A/B tests, copy changes, etc.):

  • Team #1 decides to call customers, set up SnapEngage or Qualaroo on the site, and go straight to the customer to find out what’s blocking them.
  • Team #2 decides to dig around in Google Analytics, Mixpanel, or KISSmetrics. They take the quantitative approach and create a backlog of experiments.

My point here is that the experiments you run should have qualitative feedback as their foundation. You absolutely should measure results with analytics tools, but I strongly advise against making those tools the source of your experiment ideas.

I’d rather hear a customer say that the onboarding process is confusing and ask them why than dig around in analytics and be left to draw my own conclusions.

Rule #4: Experiment cycle time is your competitive advantage

I’ve had a bit of free time on my hands recently, and decided that it was time to start playing Age of Empires (I know, it’s 2014, not 2007.)

Business is a bit like war without killing (hopefully), but I still want to crush my competition.

What crushes your competition? Learning faster than they do.

Again, this is why process matters. Let’s say Uber runs 10 marketing experiments a week, and they consistently find a winner 10% of the time (these are just made-up numbers.)

On the other hand, Lyft runs 5 marketing experiments a week and finds a winner at the same 10% rate. Who wins?

Uber.
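
Run the (made-up) numbers and the gap is bigger than it looks, because wins compound. Here’s a quick back-of-the-envelope sketch in Python; the 2% lift per win is just as invented as the rates above:

    # Made-up rates from the Uber/Lyft example above
    uber_wins_per_year = 10 * 0.10 * 52   # 10 experiments/week, 10% win rate
    lyft_wins_per_year = 5 * 0.10 * 52    # 5 experiments/week, 10% win rate

    # Purely illustrative: if each win lifts conversion by 2%, wins compound
    print(1.02 ** uber_wins_per_year)     # ~2.8x baseline after a year
    print(1.02 ** lyft_wins_per_year)     # ~1.7x baseline after a year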

This is why proper organization of your experiments is important. Your experiment backlog needs a few critical pieces (see the sketch after this list):

  • A conflict check (oftentimes you can run multiple tests at once; just make sure they don’t conflict)
  • A definition of success (if it’s an A/B test, what confidence level and sample size do you need?)
  • A queue (which experiment runs next?)
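
Here’s a minimal sketch in Python of what one backlog entry might look like. The field names are my own invention, not a standard, and the sample-size helper is the usual normal-approximation formula for comparing two conversion rates:

    from statistics import NormalDist

    def sample_size_per_variant(baseline, lift, alpha=0.05, power=0.80):
        """Rough per-variant sample size for an A/B test on a conversion rate."""
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 at 95% confidence
        z_power = NormalDist().inv_cdf(power)           # 0.84 at 80% power
        p1, p2 = baseline, baseline + lift
        p_bar = (p1 + p2) / 2
        n = ((z_alpha + z_power) ** 2 * 2 * p_bar * (1 - p_bar)) / (p2 - p1) ** 2
        return int(n) + 1

    experiment = {
        "name": "Shorter onboarding copy",
        "hypothesis": "Trimming step 2 cuts drop-off",        # from customer calls
        "metric": "signup completion rate",
        "baseline": 0.20,
        "min_lift": 0.02,
        "sample_size": sample_size_per_variant(0.20, 0.02),   # ~6,511 per variant
        "conflicts_with": ["New signup form"],  # don't run these at the same time
        "status": "queued",                     # queued -> running -> won/lost/flat
    }

With entries like this in one place, “what’s next?” is simply whatever sits at the top of the queue.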

A business that’s always testing is a business that’s always learning.

Rule #5: Document your old experiments properly

Make no mistake, documenting past experiments is boring. Marketers often like to keep results in their heads (once again, pointing at myself), but proper documentation is critical, especially once you hire additional marketers. Document the key metrics, the dates the experiment ran, and whether the test won, lost, or had no effect.

A spreadsheet is the best solution I’ve found so far, though even then it’s easy to forget about past experiments. Marketing teams that don’t document results typically end up running the same test many times.
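
If the spreadsheet lives as a CSV, even a tiny script can guard against repeats. Here’s a sketch in Python; the file name and columns are made up to match the fields above:

    import csv
    import os

    LOG = "experiment_log.csv"
    FIELDS = ["name", "start_date", "end_date", "metric", "result"]

    def already_ran(name):
        """Check the log before queueing a test so the team doesn't repeat it."""
        if not os.path.exists(LOG):
            return False
        with open(LOG, newline="") as f:
            return any(row["name"] == name for row in csv.DictReader(f))

    def log_result(**row):
        """Append one finished experiment; result is 'won', 'lost', or 'flat'."""
        new_file = not os.path.exists(LOG)
        with open(LOG, "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            if new_file:
                writer.writeheader()
            writer.writerow(row)

    log_result(name="Shorter onboarding copy", start_date="2014-06-01",
               end_date="2014-06-21", metric="signup completion rate", result="won")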

I’ve been thinking about building software to help with this. If you’re having trouble too, just email me or tell me on Twitter.