A Split Test allows you to test two different versions of your ads to determine which one performs best and to improve future campaigns. You can pause an ad group within your Split Test manually, or let the system pause it once a winning ad group has been identified.
Determine a hypothesis to test. Before setting up a Split Test, determine what you are trying to test and how the answer can inform your ad strategy. This will help you focus on which variables to test and design and run an effective A/B test.
Set up test groups with large differences in variables. When choosing your testing variable, we recommend that the settings for the two test groups differ significantly. Clear differences make it less likely that the two ad groups produce similar results, so the system can determine a winning ad group.
Set an appropriate test budget. The budget for your Split Test should produce enough results to give you confidence in identifying the best-performing strategy. When you create a Split Test, the system provides an Estimated Testing Power based on the budget you set.
Allocate an adequate testing duration. We recommend testing for a minimum of 7 days to obtain the most reliable results; Split Tests can run for a maximum of 30 days. Testing for less than 7 days may not be long enough to determine a winner, while testing for too long may lead to a budget shortfall, as the rough sketch below illustrates.
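As an illustration of this trade-off, the sketch below converts an assumed daily budget and an assumed cost per result into the number of results each ad group would collect over different test durations. All figures here are assumptions chosen for illustration, not platform values.

```python
# Rough illustration only; every number here is an assumption, not a platform value.
daily_budget = 100.0      # assumed total daily Split Test budget
cost_per_result = 2.50    # assumed average cost per conversion
split = 0.5               # traffic is divided evenly between the two ad groups

for test_days in (5, 7, 30):
    results_per_group = daily_budget * test_days / cost_per_result * split
    print(f"{test_days:>2} days -> ~{results_per_group:.0f} results per ad group")
```

A very short test leaves each ad group with few results to compare, while a very long test needs 30 days' worth of budget to keep delivering at the same daily rate.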
Set a large audience. We recommend expanding your audience to avoid an insufficient sample size when running a Split Test.
Ensure a high estimated power value. Power value is the likelihood of detecting a real difference between your ad groups, which determines your chances of finding a winning result. We recommend a budget that gives you a power value of at least 80%.
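To make the 80% guideline concrete, here is a minimal sketch of a standard two-proportion power calculation using statsmodels. This is not the platform's internal Estimated Testing Power formula; the conversion rates and per-group sample size are assumptions chosen for illustration.

```python
# Standard two-proportion power calculation (illustrative only; not the
# platform's internal Estimated Testing Power computation).
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.020   # assumed conversion rate for ad group A
variant_rate = 0.025    # assumed conversion rate for ad group B
effect_size = abs(proportion_effectsize(baseline_rate, variant_rate))

analysis = NormalIndPower()

# Power achieved with an assumed 20,000 users per ad group.
power = analysis.solve_power(effect_size=effect_size, nobs1=20_000,
                             alpha=0.05, ratio=1.0)
print(f"Power with 20,000 users per group: {power:.0%}")

# Sample size per group needed to reach the recommended 80% power.
needed = analysis.solve_power(effect_size=effect_size, power=0.80,
                              alpha=0.05, ratio=1.0)
print(f"Users per group needed for 80% power: {needed:.0f}")
```

If the computed power falls below 80%, increasing the budget, the audience size, or the test duration raises the sample size and, with it, the power.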
Avoid making changes. Changing an ad group after your Split Test has started running may affect the results or send the ad group back into review. It is recommended that you don't change your Split Test settings once the test is running.