This information is partly outdated: mobile app promotion is now available via VK Ads.

We have moved mobile app promotion to VK Ads, a platform with efficiency proven in real cases, where you can launch campaigns in five simple steps.

Learn more about how to register and set up your VK Ads account by contacting your manager or our support team at caresupport@vk.company.
A/B test

An A/B test is a method for running experiments and comparing the effectiveness of your advertising campaigns.

Reasons to use A/B testing

A/B testing helps you find out:
  • which of your ads (images, videos, etc.) works best when the target audience and the bid are the same;
  • which payment model is more efficient for your business: CPC, oCPM, or CPM;
  • whether conversions will increase if you raise the bid above the current one;
  • what happens if you switch budget distribution from fast to even, or how a change in auction strategy affects delivery.

Previously, advertisers launched several campaigns with almost identical settings to test which ad works better. However, in myTarget such campaigns affect each other, so the test results are not objective.

With an A/B test, the audiences do not overlap and the campaigns do not affect each other, so you get objective results.

How A/B testing works

The "A/B test" block in the campaign settings allows you to conditionally divide the target audience of the campaign into 10 parts: each part is 10% of the audience. Select multiple parts to show ads only to users who are in those parts.
Split the audience to test two campaign options
By default, a campaign is shown to all 10 parts, that is, to 100% of the target audience.
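myTarget does not disclose the exact split mechanism, but conceptually such a division can be implemented as a deterministic hash of the user ID, which guarantees that the parts are stable and non-overlapping. A minimal Python sketch, where the function name and hashing scheme are illustrative rather than the platform's actual implementation:

```python
import hashlib

def audience_part(user_id: str) -> int:
    """Map a user to one of 10 stable, non-overlapping parts (1-10)."""
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % 10 + 1

# A user always lands in the same part, so selecting parts 1-5 for one
# campaign and 6-10 for another means the audiences can never overlap.
print(audience_part("user-42"))  # always the same value for this user
```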
1. For campaigns that take part in the same A/B test, add the same words to their names: this makes it easier to filter them on the dashboard when analyzing the results.

2. If you run several A/B tests at the same time (for example, two campaigns in A/B test_1 and two campaigns in A/B test_2), remember that the A/B test_1 campaigns can affect the A/B test_2 campaigns if they target the same audience.
For example

We want to compare two ads, one on a red background and one on a green background; the content is otherwise identical. To see which ad works best, we will compare two metrics: app-install conversion rate and cost per install.
Let's create two campaigns with the same settings and target audience:

  • Geo: Moscow
  • Gender: male and female
  • Age: 18–60
  • Bid: 100 rubles
  • Format: Multiformat

In one campaign's ad we will use an image with a red background, and in the other a green one; the title, text, and app link are the same.

To run the test, open the "A/B test" block in the settings of each campaign: in the campaign with the red-background ad, select the first five parts of the audience (1, 2, 3, 4, 5), and in the campaign with the green-background ad, select the other five (6, 7, 8, 9, 10).

You can also use the "Select 50% of the audience" button, which selects one of the two halves for you.
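For clarity, the resulting setup can be written down as data. This is only an illustration; the field names below are hypothetical, not the real myTarget API:

```python
# Hypothetical representation of the two campaigns' "A/B test" settings;
# the field names are illustrative, not the myTarget API.
campaigns = {
    "red_background":   {"bid_rub": 100, "audience_parts": [1, 2, 3, 4, 5]},
    "green_background": {"bid_rub": 100, "audience_parts": [6, 7, 8, 9, 10]},
}

# The selected parts must be disjoint, otherwise the campaigns would
# compete for the same users and distort each other's results.
red = set(campaigns["red_background"]["audience_parts"])
green = set(campaigns["green_background"]["audience_parts"])
assert not red & green, "audience parts overlap"
```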

Start the campaigns.

How to evaluate results

To draw conclusions from statistics, you need a sufficient amount of data. To check whether the difference in results is significant, rely on a p-value calculation.
For example

Suppose each campaign was shown to 2,000 users and the campaigns received 90 and 110 conversions, respectively. Can we say that the second campaign is better than the first? Let's check at a significance level of 0.05 (calculator example).

CR1 = 90/2000 = 4.5%, 95% confidence interval: 3.7%–5.5%
CR2 = 110/2000 = 5.5%, 95% confidence interval: 4.6%–6.6%

The intervals overlap, so the difference lies within the margin of error and no conclusion can be drawn from these data.
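The intervals above can be reproduced with a Wilson score interval, which is what the quoted figures match; a minimal Python sketch:

```python
from math import sqrt

def wilson_ci(conversions: int, impressions: int, z: float = 1.96):
    """95% Wilson score confidence interval for a conversion rate."""
    p = conversions / impressions
    denom = 1 + z ** 2 / impressions
    center = (p + z ** 2 / (2 * impressions)) / denom
    margin = (z / denom) * sqrt(p * (1 - p) / impressions
                                + z ** 2 / (4 * impressions ** 2))
    return center - margin, center + margin

for name, conv in [("Campaign 1", 90), ("Campaign 2", 110)]:
    lo, hi = wilson_ci(conv, 2000)
    print(f"{name}: CR = {conv / 2000:.1%}, 95% CI = {lo:.1%} – {hi:.1%}")

# Campaign 1: CR = 4.5%, 95% CI = 3.7% – 5.5%
# Campaign 2: CR = 5.5%, 95% CI = 4.6% – 6.6%
```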

The algorithm for an A/B test

  1. Choose the metric you want to measure: for example, install conversion rate, cost per install, conversion rate for an in-app event, conversion rate for a target action on the site, or the cost of these events.
  2. Estimate the volume of the target audience. In the campaign estimate, see how much coverage your campaign can get with the specified bid and budget. If the coverage is under 100,000, consider whether you will collect enough target events to compare the results (see the sample-size sketch after this list).
  3. Determine the testing period. It depends on the reach of the target audience, but tests shorter than three days may not reveal significant effects because of ad novelty, weekly fluctuations, and so on.
  4. Prepare the new campaigns (two are enough for an A/B test). Each campaign should test exactly one change relative to the base campaign.
  5. In each campaign, select non-overlapping audience parts in the "A/B test" block.
  6. Run the campaigns.
  7. Evaluate the results on the dashboard, taking statistical significance into account.
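For step 2, a standard two-proportion sample-size estimate shows roughly how much traffic each campaign needs; a sketch, using the 4.5% vs. 5.5% conversion rates from the example above:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(p1: float, p2: float,
                          alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate users needed per campaign to detect p1 vs. p2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# To reliably tell a 4.5% CR from a 5.5% CR, each campaign needs roughly:
print(sample_size_per_group(0.045, 0.055))  # ~7,453 users
```

With only 2,000 users per campaign, as in the example above, the test is underpowered, which is exactly why the confidence intervals overlapped.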