Four ways to test your marketing ideas instead of trusting blogs

Don’t let someone else’s blog tell you what will work for your marketing. Test it yourself.

30-second summary:

  • Blogs are a valuable and insightful arm of any brand’s marketing strategy.
  • The drawback, of course, is that industry blogs are filled with untested theories and can start to resemble echo chambers when relied on exclusively.
  • Marketing innovation comes not from reading the work of others, but from continually testing and trying new approaches to age-old problems.
  • Sarah Fruy outlines testing strategies marketers can adopt to maximize the value of their funnels.

If more than a decade of work in the marketing trenches has taught me anything, it’s that no silver bullet will make all your problems go away.

Unfortunately, a misconception exists that a one-size-fits-all answer is available through a simple Google search. One marketer finds success with a particular tactic, writes a blog post about it, and tells everyone to use the same strategy. Before you know it, a listicle features this tactic and more people start viewing it as a best practice.

Rinse and repeat.

While this tactic might very well be the right strategy for your business, chances are good that it’s not. I absolutely encourage scouring the internet for helpful advice, but you must test that theory with your own audience on your own platforms. Consumer behavior is constantly evolving. As effective marketers, we need to test our theories as much as possible to avoid costly mistakes.

Building a culture of testing

Marketers who neglect testing are probably working from a waterfall approach rather than using an agile method. They believe success stems from big-bang campaign launches, with a long planning period leading up to a major release. They value their instincts over data-driven decisions.

This might result from a lack of knowledge around agile marketing practices or an organization that requires many layers of approval before a launch. Others believe that success lies in following in the footsteps of “greater” marketers and implementing their playbooks the best they can. They think, “If it worked for so-and-so, it will work for me.”

Others might cite budget as a barrier. But even a bootstrapped startup with no budget can find ways to test and validate ideas before going all-in. A large budget doesn’t prevent failure, as even larger corporations suffer from premature releases of products and ideas. For example, Microsoft developed a reputation in recent years for rolling out clunky products and campaigns — from Vista to corrupted chatbots — that suffered from hurried rollouts.

Who wants to risk a failure with so much time and money at stake? A few factors will help you better analyze your campaigns and institute a more successful testing program. Use these tips to build the testing culture you need to thrive:

1. Work with a cross-functional group

A recent Deloitte report found that 89% of executives listed organizational design through teams as their top priority for handling challenges in their businesses.

Building cross-functional teams with employees from different departments and skill sets allows for faster communication and decision-making. It also brings more diverse perspectives and experiences into the conversation, allowing you to interpret data points from a variety of angles and fueling more creative testing ideas.

If you aren’t ready to completely restructure your operations just yet, start by establishing an outside-in approach to idea generation: pull in members of other departments for input and new concepts. Even one outside perspective strengthens the ideas you’re building for new campaigns and helps you spot potential issues before implementation.

2. Don’t run one-off tests

An agile marketing team’s initial goal is typically to release a minimum viable product, test the waters, and see how a select segment of the market responds.

If you get a signal that the campaign is working, develop and amplify that success. If the campaign doesn’t perform to your expectations, iterate on your approach or move on to a new program. Using the data as your guide, you will build an entire culture of testing and can be confident in every initiative you deploy.

Earlier this year, we decided to test exit modals on our website. The initial results were positive, so we scaled the number of pages and continued to see success. Our next step included personalizing the experience and testing different types of creative messages.

As you can see, one idea — “Should we implement exit modals on our site?” — spawned an ever-expanding list of testing ideas for our team. A well-structured program will yield many tests to prove a hypothesis before it’s ready for full-blown exposure, so stopping at a one-off test leaves plenty of valuable data undiscovered.
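To make “using the data as your guide” concrete, here is a minimal Python sketch of how a team might check whether an early exit-modal result is a real signal before scaling it to more pages. It uses a standard two-proportion z-test; the visitor and conversion counts, the 0.05 threshold, and the function name are illustrative assumptions, not figures from our actual test.

```python
# Minimal sketch: is the exit-modal lift a real signal or noise?
# All counts below are hypothetical; substitute your own analytics data.
from math import erf, sqrt

def two_proportion_ztest(conv_a, visitors_a, conv_b, visitors_b):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a = conv_a / visitors_a
    p_b = conv_b / visitors_b
    p_pool = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical first round: pages showing the exit modal vs. control pages.
z, p = two_proportion_ztest(conv_a=180, visitors_a=4000,   # modal shown
                            conv_b=130, visitors_b=4000)   # control
if p < 0.05:
    print(f"Signal (z={z:.2f}, p={p:.3f}): scale to more pages and keep iterating.")
else:
    print(f"No clear signal (p={p:.3f}): iterate on the modal or move on.")
```

The point is not the statistics themselves but the habit: every scale-up or iteration decision gets tied to a threshold the team agreed on before the test ran.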

3. Don’t optimize for a single metric

It’s easy to set up a test and optimize for clicks or form fills, but you might also want to consider the long-term impact of short-term gains.

Maybe more people sign up for your free trial, but they also churn at a higher rate. If you optimize solely around getting them to sign up, you miss the bigger insight down the road: this new audience is actually decreasing your overall revenue. To illustrate why you can’t focus on one area, look no further than Homejoy, a home-cleaning startup.

The company invested a ton of resources into a single metric: customer acquisition. A $19 first-time customer promotional price fueled its growth, but the deal mostly attracted customers interested in the discount. With no focus on retention, Homejoy saw only 25% of those homeowners return; growth stagnated, hastening the company’s failure.
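To see how a single acquisition metric can hide a revenue problem, here is a rough back-of-the-envelope sketch in Python. Apart from the $19 promotion and the roughly 25% return rate cited above, every number (regular price, cost per cleaning, repeat bookings) is a hypothetical stand-in, not a Homejoy figure.

```python
# Back-of-the-envelope sketch: value per acquired customer once retention
# is factored in. Only the $19 promo and ~25% return rate come from the
# story above; the other numbers are hypothetical stand-ins.
PROMO_PRICE = 19          # discounted first cleaning
FULL_PRICE = 85           # assumed regular price per cleaning
COST_PER_CLEANING = 60    # assumed cost to deliver one cleaning
REPEAT_CLEANINGS = 6      # assumed bookings from a retained customer

def value_per_acquired_customer(retention_rate):
    first_visit = PROMO_PRICE - COST_PER_CLEANING  # promo visit runs at a loss
    repeat_value = retention_rate * REPEAT_CLEANINGS * (FULL_PRICE - COST_PER_CLEANING)
    return first_visit + repeat_value

for retention in (0.25, 0.60):
    print(f"retention {retention:.0%}: "
          f"${value_per_acquired_customer(retention):.2f} per acquired customer")
# At 25% retention each "successful" acquisition loses money; a dashboard
# tracking only sign-ups would never surface that.
```

Optimizing acquisition volume alone would keep a promotion like this running even though each new customer is worth little or nothing until retention improves.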

4. Include both quantitative and qualitative research

A few years back, I was under pressure to have my startup business sponsor an event as a way to market our new product. Because we were still in the early stages of product development, our customer persona wasn’t fully formed yet, but the opportunity guaranteed press coverage and a large volume of foot traffic in the market we were beta testing. So I took a risk and got us a booth.

Unfortunately, we discovered that the event’s audience wasn’t a strong demographic fit for us — something we would have learned if we had attended a year earlier and interacted with the audience firsthand.

Running tests can be very exciting when you hit statistical significance, but don’t let that evidence shield you from actually talking to your audience. Make sure to include open-ended questionnaires or user groups in your research program. Try to balance your research so you understand not only how a user responds, but also why the user responds that way.

No single idea or marketing initiative will work for everyone, no matter what the blogs say. Don’t rely on untested insights to drive your campaigns. Instead, do your research, narrow the scope to fit your needs, and test each new plan to ensure you’ve got a real winner on your hands.

Sarah Fruy, Director of Brand and Digital Experience, leads the strategy and goals for Pantheon’s website and branded content. You can find Sarah on LinkedIn and Twitter @sarahfruy.
