Every second, roughly 15.5 new users join social media, bringing the worldwide total to 3.96 billion. That makes paid media campaigns on these platforms a great opportunity to get in front of your target audience, no matter who that audience is.
The average business spends 2.5% – 3.25% of its annual revenue on social media ads. For a company with $10 million in revenue, that’s a whopping $250,000 to $325,000 a year. That’s why you need a strategy that maximizes the value of every penny invested.
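To put that range in concrete terms, here is a quick back-of-the-envelope calculation. The 2.5% – 3.25% figures are the averages cited above; the `ad_budget_range` helper is purely illustrative, and your own budget may differ.

```python
def ad_budget_range(annual_revenue, low_pct=0.025, high_pct=0.0325):
    """Estimate an annual social ad budget from the 2.5%-3.25% averages."""
    return annual_revenue * low_pct, annual_revenue * high_pct

# A company with $10 million in annual revenue
low, high = ad_budget_range(10_000_000)
print(f"${low:,.0f} to ${high:,.0f}")  # $250,000 to $325,000
```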
And that’s where testing comes in.
You may have guesses about which ads will perform best, but they’re only guesses. Testing pits one ad against another to either prove or disprove your hypothesis so that you can make decisions based on quantifiable data instead of gut feelings.
With so many different ways to test (which we’ll get into in a moment), you need to consider 4 things before you begin:
Have a specific goal
What are you trying to accomplish with your paid social campaigns? Generate leads? Add to your list of followers? Get your brand in front of as many eyeballs as possible? Determine which metric will define success before you do anything else.
Know the benchmarks
Even a winning ad may not be performing as well as it could. Use your own data to benchmark performance, but if you’re new to paid advertising, you can start with these basics:
- On LinkedIn, we see the average click-through rate is between 0.27% and 0.35%, and the average cost per click is between $4 and $10.
- On Facebook, the average click-through rate is between 0.62% and 0.78%, and the average cost per click is between $2.52 and $3.77.
- On Twitter, the average click-through rate is 0.86% and the average cost per click is $0.38.
These benchmarks vary significantly by region, target audience, and ad type. Testing is the only way to establish what your own benchmarks should look like and to improve your forecasting.
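If you want a starting point for tracking your own numbers, a simple sketch like this can flag when a campaign drifts outside the published CTR ranges. The `BENCHMARKS` table and `assess_ctr` helper are illustrative stand-ins for your own tracking, not any platform's official API, and the values should be replaced with your own historical data as it accumulates.

```python
# Starting CTR benchmarks from the list above, expressed as fractions.
# Twitter is published as a single average, so both ends are equal.
BENCHMARKS = {
    "linkedin": (0.0027, 0.0035),
    "facebook": (0.0062, 0.0078),
    "twitter":  (0.0086, 0.0086),
}

def assess_ctr(platform, clicks, impressions):
    """Compare a campaign's click-through rate against the benchmark range."""
    ctr = clicks / impressions
    low, high = BENCHMARKS[platform]
    if ctr < low:
        return "below benchmark"
    if ctr > high:
        return "above benchmark"
    return "within benchmark"

# A Facebook campaign with 70 clicks on 10,000 impressions (0.7% CTR)
print(assess_ctr("facebook", 70, 10_000))  # within benchmark
```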
Think about your audience
Take what you already know about your audience and build on it. For example, if your target audience is busy C-suite executives, you may want to test using an image that has words on it (that will get your message across faster) vs. using a stock image.
If you have multiple key audiences, consider segmenting your audience for testing purposes to better understand the results and to establish more thorough benchmarks.
Consider your business goals
In addition to having a specific measurable goal, think about some of the wider business goals your organization has before you start running social media ads.
For example, if your organization wants to prioritize video production, testing short video ads can help determine which types of videos your audience responds to before you invest in longer, more expensive projects. Or, test a static image ad vs. a video ad to see if your audience even prefers videos in the first place!
Now that you’ve gotten off to the right start, let’s talk about the testing itself. In order to generate legitimate results, there are 3 rules to remember:
Run your test long enough to get a large enough sample size
On LinkedIn, this can take at least two weeks, while some other platforms only need a week or so. Using a standard 10-14 day testing window makes it more likely that the data you collect is statistically significant, and sets your future campaigns up for success.
Are you thinking “but 10-14 days is a long time to wait”? It can feel that way as you watch results roll in, but on platforms like LinkedIn, editing a campaign mid-test restarts the testing window and delays valuable learnings even further. It can be tempting to tweak campaigns in the first days after launch, but patience is key: it gives the platforms’ algorithms time to serve your ads to the audience most likely to take your desired action.
Only test one variable at a time
With so many testing options, you may be tempted to test copy, images, and audiences all at the same time. Unfortunately, if engagement improves, you won’t know which change was actually responsible. Was it your captivating copy? Your irresistible image? Your perfectly targeted audience? Stick to one variable at a time so that you always have an apples-to-apples comparison.
Statistical significance is important
When one ad pulls ahead, you need to confirm the results aren’t due to chance, which means checking that you’ve reached a statistical significance of at least 95%. (There are free calculators that measure this.) For example, if Ad A got 5,000 impressions and generated 50 new leads, you may assume it’s less effective than Ad B, which got 5,000 impressions and 60 new leads. In fact, that difference falls well short of the 95% threshold. Ad B might be better, but you need to keep testing to find out.
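If you’d rather check significance yourself than rely on an online calculator, a standard two-proportion z-test is a minimal sketch of what those calculators do. It assumes impressions are the trials and leads are the conversions; the `significance` function name is our own, not tied to any particular tool.

```python
import math

def significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: returns the confidence (0-1) that the
    difference between two ads is real rather than random chance."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se
    # Two-tailed confidence level from the normal CDF
    return math.erf(z / math.sqrt(2))

# Ad A: 50 leads from 5,000 impressions; Ad B: 60 leads from 5,000
conf = significance(50, 5000, 60, 5000)
print(f"{conf:.0%}")  # well below the 95% threshold -- keep testing
```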
Now let’s brainstorm all kinds of tests you can run, by variable:
If you want to test images…
- Do images with people perform better?
- Does the dominant color of your image (ex: dark background vs. light background) make a difference?
- Does having an image with words on it perform better?
- Do specific words on your image perform better than others?
If you want to test copy…
- Does a question perform better than a statement?
- Does shorter copy perform better than longer copy?
- Does having a statistic in your copy lead to more engagement?
- Does using an emoji lead to more engagement?
- Does using a hashtag generate more engagement?
- Do certain words in your CTA (ex: “Download Now” vs. “Start Reading”) perform better?
If you want to test your audience…
- Do certain job roles perform better (ex: people with technical knowledge, like CTOs and CIOs, vs. people with less technical knowledge, like CEOs and COOs)?
- Does limiting your audience to a specific industry or region make a difference?
- Do people at a certain level of your funnel (ex: people you’re trying to drive awareness with vs. customers you’re trying to drive loyalty with) engage better?
If you want to test video…
- Does a video lead to more engagement than a static image?
- Does video length matter (ex: less than 15 seconds vs. more than 15 seconds)?
- Does a specific thumbnail from your video generate more engagement than another thumbnail from the same video?
Finally, maybe the most important aspect of selecting what to test is prioritization. Since you can only test one variable at a time, and each test needs time to collect data, start with the variables that will be most impactful to your overall marketing strategy. Considering investing in marketing to a new audience? Prioritize audience testing over image testing. Considering a new tagline that you’ll incorporate across other pieces of content? Tagline testing should come before CTA testing. Ask yourself these questions before deciding what to test:
- What will I learn from this?
- Will the results of this test impact other marketing efforts?
- What other efforts are dependent on the learnings from this test?
As you can see, the sky’s the limit when it comes to testing your social media ads. As long as you have the right strategy—and stay true to your brand and your audience’s needs—you can generate data-driven insights that can help you make smarter, more cost-effective advertising decisions.