Responsive search ads (RSAs) are an ad format that relies heavily on automation to deliver the most relevant ads. The format is effective, and Google strongly encourages advertisers to use it. However, when tested with the traditional approach, RSAs can look ineffective compared to other ad formats.
Read on to learn how A/B ad testing works and how you should A/B test your RSAs.
What is A/B ad testing?
In simple terms, A/B ad testing means testing different ad copies against each other and finding the best ones. Today, most PPC marketers add multiple ads to an ad group and set the ad rotation to “Optimize: Prefer best performing ads.” This way, Google itself surfaces the best-performing ads.
Most of the time, people focus on CTR or conversion rate when testing their advertising campaigns. However, a higher CTR does not necessarily mean more conversions, and the ad with the most conversions may receive fewer clicks. It is therefore recommended to use conversions per impression (CPI), which captures both at once: CPI equals CTR multiplied by conversion rate, so it rewards ads that both earn clicks and convert them.
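To make this concrete, here is a minimal Python sketch, using entirely hypothetical ad names and numbers, showing how the ad with the higher CTR can still lose on CPI:

```python
# A minimal sketch comparing two ads by conversions per impression (CPI).
# All ad names and figures below are hypothetical, for illustration only.

def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate: clicks / impressions."""
    return clicks / impressions

def cvr(conversions: int, clicks: int) -> float:
    """Conversion rate: conversions / clicks."""
    return conversions / clicks

def cpi(conversions: int, impressions: int) -> float:
    """Conversions per impression: equivalent to CTR * CVR."""
    return conversions / impressions

# Hypothetical data: Ad A wins on CTR, but Ad B wins on CPI.
ads = {
    "Ad A": {"impressions": 10_000, "clicks": 600, "conversions": 12},
    "Ad B": {"impressions": 10_000, "clicks": 400, "conversions": 20},
}

for name, d in ads.items():
    print(
        f"{name}: CTR={ctr(d['clicks'], d['impressions']):.2%}, "
        f"CVR={cvr(d['conversions'], d['clicks']):.2%}, "
        f"CPI={cpi(d['conversions'], d['impressions']):.2%}"
    )
# Ad A: CTR=6.00%, CVR=2.00%, CPI=0.12%
# Ad B: CTR=4.00%, CVR=5.00%, CPI=0.20%
```

Judged by CTR alone, Ad A looks like the winner; judged by the goal that actually matters, conversions per impression, Ad B is the more efficient ad.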
However, when the traditional testing approach is used to compare RSAs with other ad formats such as ETAs (Expanded Text Ads), RSA performance looks poor. In most cases, RSAs have higher CTRs but lower conversion rates, making them seem inefficient.
As more information came in on this topic, it became clear that the traditional testing approach was hiding the true power of responsive search ads.
The ineffectiveness of traditional A/B testing for RSAs
RSAs are better suited to entering search ad auctions that were previously out of reach for ETAs, thanks to the nature of RSA automation and the larger character count. Comparing RSAs to ETAs is therefore unfair; in some cases, RSAs appear for queries that ETAs would never match at all. Instead of comparing RSA performance to ETAs, start by evaluating incremental lift at the ad group level to fully understand the impact of RSAs on performance.
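As a rough illustration, here is a minimal Python sketch of an ad-group-level incremental lift check. It assumes hypothetical before/after impression volumes for a test ad group (ETAs plus an RSA) and a control ad group (ETAs only); none of these figures come from the case study below:

```python
# A minimal sketch of an ad-group-level incremental lift check.
# All numbers are hypothetical: "before" is the period before the RSA
# was added to the test ad group, "after" is the period with it live.

def pct_change(before: float, after: float) -> float:
    """Relative change from the before period to the after period."""
    return (after - before) / before

# Hypothetical weekly impression volume per ad group.
test_group = {"before": 8_000, "after": 10_400}    # ETAs + RSA
control_group = {"before": 8_100, "after": 8_150}  # ETAs only

test_lift = pct_change(test_group["before"], test_group["after"])
control_lift = pct_change(control_group["before"], control_group["after"])

# Incremental lift: how much more the test group grew than the control,
# which stayed essentially flat over the same period.
incremental_lift = test_lift - control_lift
print(f"Test change: {test_lift:.1%}")              # Test change: 30.0%
print(f"Control change: {control_lift:.1%}")        # Control change: 0.6%
print(f"Incremental lift: {incremental_lift:.1%}")  # Incremental lift: 29.4%
```

The control group exists precisely to rule out seasonality or account-wide shifts: only the growth beyond the control's change is credited to the RSA.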
How to A/B test your responsive search ads
Here is a case study from Metric Theory:
They split an ad group into two segments with an even distribution of impression and click volume. They then ran three ETAs in each ad group of the control group, while the test ad groups also received an RSA. Checking the analytics over time, they saw larger increases in click and impression volume for the ad groups with the RSAs.

Impression and click volume for the control ad groups remained essentially stable during the reporting period, supporting the conclusion that the RSAs had a positive impact on the overall impression and click volume of the test ad groups.
In this scenario, RSAs outperformed ETAs on the key ad copy evaluation metrics.
Anyone evaluating these ad groups based only on their ETAs would have missed the crucial insight that the RSAs drove an incremental increase in impressions.