
The 7 Deadly Sins of PPC Ad Optimization



Have you stopped doing consistent ad testing in PPC because the results of your tests couldn’t be replicated over the long term? It could be that you’re committing one or more major errors when setting up your tests, thus invalidating the results.

Each of these common errors, a.k.a. the 7 Deadly Sins of Ad Optimization, can confound your test results.

1. A Missing or Poorly Developed Hypothesis

Every test must begin with a clear, specific, and informed hypothesis. A scientific hypothesis should lay out the test ads, a control ad, and the metric by which you will evaluate the differences in their performance.

Often there is no hypothesis at all: it is easy enough to simply write an ad, set it live, and see how it does, so a forward-thinking hypothesis gets overlooked. Other times a hypothesis is skipped because formulating and proving one seems difficult or pointless.

Without a clear hypothesis (for example, “adding the price to the headline will lift click-through rate versus the control”), there is no frame of reference to help you decide whether a test is complete and whether you’ve learned anything.

2. Speaking to Multiple Intents in Test Groups

Many tests are conducted on ad groups that represent multiple user intents, making it incredibly difficult to interpret the results in a way that could drive long-term value. There are two ways this happens.

  • An advertiser will include keywords like “red shoes” and “red sneakers” in the same group, and as such, the winning ad tends to be the least common denominator, not the best ad for each of those intents.
  • An advertiser will run a test on a broad match term with no match modifications or negative keywords.

The winning ad in such a test is typically non-specific, and the results are largely unrepeatable.

3. Overlooking Variations in Traffic & Devices

When it comes to traffic considerations, many advertisers are far more concerned with having enough traffic than with the fluctuations and variations within that traffic.

Factors such as seasonality, which leads to traffic dips and spikes, can significantly affect the results of a test and the insights drawn from it.

Device considerations are also crucial in ad testing. Because searchers respond differently to ads on desktop than on mobile, neither ad copy nor test results can be indiscriminately duplicated and applied across devices.
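To make this concrete, here is a minimal sketch in Python (using pandas) of segmenting test results by device before comparing ads. The column names and figures are illustrative assumptions, not data from any real test:

    import pandas as pd

    # Hypothetical per-ad, per-device test log (all names and numbers
    # below are illustrative assumptions).
    df = pd.DataFrame({
        "ad":          ["control", "test", "control", "test"],
        "device":      ["desktop", "desktop", "mobile", "mobile"],
        "impressions": [12000, 11800, 9500, 9700],
        "clicks":      [480, 510, 285, 240],
    })

    # Aggregate and compute CTR per device/ad pair before declaring a winner.
    by_device = df.groupby(["device", "ad"])[["impressions", "clicks"]].sum()
    by_device["ctr"] = by_device["clicks"] / by_device["impressions"]
    print(by_device["ctr"].unstack("ad"))

In this made-up data, the test ad wins on desktop (4.3% vs. 4.0% CTR) but loses on mobile (2.5% vs. 3.0%) — exactly the kind of split an aggregate comparison would hide.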

4. Excessive Variation Between Test and Control Ads

The caveat of creating a test ad that differs excessively from the control is that when analyzing the results, whether the test ad won or lost, it is nearly impossible to determine which factor was responsible for the swing in performance.

It’s crucial to be as specific as possible when creating a hypothesis and structuring a test around it.

Experimental ads should be built so that only the variables you intend to test differ from the control; the results can then be attributed to those predefined changes.

5. Declaring Tests Over Too Soon

Testing can be either exciting or terrifying, depending on what the initial results look like. As a test begins to accumulate data, many practitioners make premature assumptions about which ad is the winner and which is the loser.

A common example is declaring a test over before a full sales cycle has passed, which doesn’t allow enough time for an accurate conversion rate to emerge. Declaring a test over too soon can easily drive suboptimal decision-making when it comes time to leverage the results.
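One way to guard against calling a test too early is to check statistical significance before acting. The sketch below uses a standard two-proportion z-test in plain Python; the conversion counts are hypothetical, and a real decision should also wait out at least one full sales cycle:

    import math

    def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
        """Test whether the conversion-rate difference between a
        control ad (a) and a test ad (b) is statistically significant."""
        p_pool = (conv_a + conv_b) / (n_a + n_b)      # pooled rate under H0
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (conv_b / n_b - conv_a / n_a) / se
        # Two-sided p-value from the standard normal CDF
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return z, p_value

    # Hypothetical counts: control 40/2000 conversions vs. test 55/2000.
    z, p = two_proportion_z_test(40, 2000, 55, 2000)
    print(f"z = {z:.2f}, p = {p:.3f}")  # p is about 0.12 here: not significant yet

With these made-up numbers, the test ad’s conversion rate looks roughly 37% better, yet the difference is not significant at the usual 0.05 level, so declaring it the winner would be premature.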

6. Applying Results Blindly Across Campaigns

The success of a specific element in one ad group, a “free download” call to action, for instance, is by no means an indication that the same element would yield the same results if applied across all other ad groups and campaigns.

This is particularly true of ad groups representing different intents. The only way to determine if leveraging results across other ad groups and campaigns would yield the same success is by testing methodically before applying.

7. Too Much Focus on Brand Terms

Every marketer must own their brand in search, but it is a problem if ad testing is only done on brand terms and then the results are assumed to be applicable to all keywords in the account.

A consumer who seeks out your brand specifically should receive very different messaging than someone who is less familiar with your brand. Trying to combine test results from brand and non-brand searches sets you up to miss the mark with a large portion of your ad creative.

Summary

Avoiding these common mistakes when you set up your ad tests will help ensure that your results hold up, and remain replicable, over the long term. Happy testing!

Image Credit: JanetR3/Flickr
