Analyze A/B Test Results

Introduction

A/B tests are very commonly performed by data analysts and data scientists.

For this project, you will be working to understand the results of an A/B test run by an e-commerce website. Your goal is to work through this notebook to help the company understand if they should implement the new page, keep the old page, or perhaps run the experiment longer to make their decision.

Part I - Probability

To get started, let's import our libraries.

In [1]:
import pandas as pd
import numpy as np
import random
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import scipy.stats as stats
# set the seed to help reproduce the same answers on quizzes
# (numpy's generator is seeded as well, since pandas .sample and np.random draw from it, not from random)
random.seed(42)
np.random.seed(42)

1. Now, read in the ab_data.csv data. Store it in df. Use your dataframe to answer the questions in Quiz 1 of the classroom.

a. Read in the dataset and take a look at the top few rows here:

In [2]:
df = pd.read_csv('ab_data.csv')
df.head()
Out[2]:
user_id timestamp group landing_page converted
0 851104 2017-01-21 22:11:48.556739 control old_page 0
1 804228 2017-01-12 08:01:45.159739 control old_page 0
2 661590 2017-01-11 16:55:06.154213 treatment new_page 0
3 853541 2017-01-08 18:28:03.143765 treatment new_page 0
4 864975 2017-01-21 01:52:26.210827 control old_page 1

b. Use the below cell to find the number of rows in the dataset.

In [3]:
# 294478 rows
df.shape
Out[3]:
(294478, 5)

c. The number of unique users in the dataset.

In [4]:
# 290584 unique users
df.user_id.nunique()
Out[4]:
290584

d. The proportion of users converted.

In [5]:
# 12%
df['converted'].sum() / len(df)
Out[5]:
0.11965919355605512

e. The number of times the new_page and treatment don't line up.

In [6]:
# 1928 + 1965 = 3893
df.groupby(['group', 'landing_page'])['landing_page'].count()
Out[6]:
group      landing_page
control    new_page          1928
           old_page        145274
treatment  new_page        145311
           old_page          1965
Name: landing_page, dtype: int64
In [7]:
# 3893
len(df.query("(group == 'control') and (landing_page == 'new_page')")) + \
    len(df.query("(group == 'treatment') and (landing_page == 'old_page')"))
Out[7]:
3893
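
A more compact equivalent (a sketch, not run in this notebook) is to count the rows where group and landing_page disagree:

# rows where the group/landing_page pairing does not line up
((df['group'] == 'treatment') != (df['landing_page'] == 'new_page')).sum()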

f. Do any of the rows have missing values?

In [8]:
# no
df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 294478 entries, 0 to 294477
Data columns (total 5 columns):
 #   Column        Non-Null Count   Dtype 
---  ------        --------------   ----- 
 0   user_id       294478 non-null  int64 
 1   timestamp     294478 non-null  object
 2   group         294478 non-null  object
 3   landing_page  294478 non-null  object
 4   converted     294478 non-null  int64 
dtypes: int64(2), object(3)
memory usage: 11.2+ MB
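
An equivalent check (a sketch, not executed here) is to count missing values per column directly:

# count of missing values in each column; all zeros means no missing data
df.isnull().sum()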

2. For the rows where treatment is not aligned with new_page or control is not aligned with old_page, we cannot be sure if this row truly received the new or old page. Use Quiz 2 in the classroom to provide how we should handle these rows.

a. Now use the answer to the quiz to create a new dataset that meets the specifications from the quiz. Store your new dataframe in df2.

In [9]:
# remove rows
df2 = df.drop(df[((df.group == 'control') & (df.landing_page == 'new_page')) | \
                 ((df.group == 'treatment') & (df.landing_page == 'old_page'))].index)
In [10]:
df2.shape
Out[10]:
(290585, 5)
In [11]:
# Double Check all of the correct rows were removed - this should be 0
df2[((df2['group'] == 'treatment') == (df2['landing_page'] == 'new_page')) == False].shape[0]
Out[11]:
0

3. Use df2 and the cells below to answer questions for Quiz3 in the classroom.

a. How many unique user_ids are in df2?

In [12]:
# 290584 unique users
df2.user_id.nunique()
Out[12]:
290584

b. There is one user_id repeated in df2. What is it?

In [13]:
# 773192
duplicate_user = df2[df2['user_id'].duplicated()].user_id
duplicate_user
Out[13]:
2893    773192
Name: user_id, dtype: int64

c. What is the row information for the repeat user_id?

In [14]:
df2[df2['user_id'] == duplicate_user.iloc[0]]
Out[14]:
user_id timestamp group landing_page converted
1899 773192 2017-01-09 05:37:58.781806 treatment new_page 0
2893 773192 2017-01-14 02:55:59.590927 treatment new_page 0

d. Remove one of the rows with a duplicate user_id, but keep your dataframe as df2.

In [15]:
df2.drop_duplicates(['user_id'], inplace=True)
In [16]:
df2.shape
Out[16]:
(290584, 5)

4. Use df2 in the below cells to answer the quiz questions related to Quiz 4 in the classroom.

a. What is the probability of an individual converting regardless of the page they receive?

In [17]:
# 0.1196
df2['converted'].sum() / len(df2)
Out[17]:
0.11959708724499628

b. Given that an individual was in the control group, what is the probability they converted?

In [18]:
# 0.1204
control_conversion = df2[df2['group'] == 'control']['converted'].sum() / len(df2[df2['group'] == 'control'])
control_conversion
Out[18]:
0.1203863045004612

c. Given that an individual was in the treatment group, what is the probability they converted?

In [19]:
# 0.1188
treatment_conversion = df2[df2['group'] == 'treatment']['converted'].sum() / len(df2[df2['group'] == 'treatment'])
treatment_conversion
Out[19]:
0.11880806551510564

d. What is the probability that an individual received the new page?

In [20]:
# 0.5001
df2[df2['landing_page'] == 'new_page']['group'].count() / len(df2)
Out[20]:
0.5000619442226688

e. Consider your results from a. through d. above, and explain below whether you think there is sufficient evidence to say that the new treatment page leads to more conversions.

In [21]:
obs_diff = treatment_conversion - control_conversion
obs_diff
Out[21]:
-0.0015782389853555567

No, there is not sufficient evidence to say that the new treatment page leads to more conversions.
In our data, the probability of conversion is even slightly lower in the treatment group than in the control group.

Part II - A/B Test

Notice that because of the time stamp associated with each event, you could technically run a hypothesis test continuously as each observation was observed.

However, then the hard question is do you stop as soon as one page is considered significantly better than another or does it need to happen consistently for a certain amount of time? How long do you run to render a decision that neither page is better than another?

These questions are the difficult parts associated with A/B tests in general.

1. For now, consider you need to make the decision just based on all the data provided. If you want to assume that the old page is better unless the new page proves to be definitely better at a Type I error rate of 5%, what should your null and alternative hypotheses be? You can state your hypothesis in terms of words or in terms of $p_{old}$ and $p_{new}$, which are the converted rates for the old and new pages.

$$H_0 : p_{new} - p_{old} \leq 0$$

$$H_1 : p_{new} - p_{old} > 0$$

2. Assume under the null hypothesis, $p_{new}$ and $p_{old}$ both have "true" success rates equal to the converted success rate regardless of page - that is $p_{new}$ and $p_{old}$ are equal. Furthermore, assume they are equal to the converted rate in ab_data.csv regardless of the page.

Use a sample size for each page equal to the ones in ab_data.csv.

Simulate the sampling distribution for the difference in converted between the two pages over 10,000 iterations of calculating an estimate from the null.

Use the cells below to provide the necessary parts of this simulation. If this doesn't make complete sense right now, don't worry - you are going to work through the problems below to complete this problem. You can use Quiz 5 in the classroom to make sure you are on the right track.

a. What is the convert rate for $p_{new}$ under the null?

In [22]:
# 0.1196
p_new = df2['converted'].sum() / len(df2)
p_new
Out[22]:
0.11959708724499628

b. What is the convert rate for $p_{old}$ under the null?

In [23]:
# 0.1196
p_old = df2['converted'].sum() / len(df2)
p_old
Out[23]:
0.11959708724499628

c. What is $n_{new}$?

In [24]:
df2.groupby(['group', 'landing_page'])['landing_page'].count()
Out[24]:
group      landing_page
control    old_page        145274
treatment  new_page        145310
Name: landing_page, dtype: int64
In [25]:
# 145310
n_new = df2[df2['landing_page'] == 'new_page']['landing_page'].count()
n_new
Out[25]:
145310

d. What is $n_{old}$?

In [26]:
# 145274
n_old = df2[df2['landing_page'] == 'old_page']['landing_page'].count()
n_old
Out[26]:
145274

e. Simulate $n_{new}$ transactions with a convert rate of $p_{new}$ under the null. Store these $n_{new}$ 1's and 0's in new_page_converted.

In [27]:
treatment_df = df2.query('group == "treatment"')
sample_new = treatment_df.sample(n_new, replace=True)
new_page_converted = sample_new['converted']
new_page_converted.mean()
Out[27]:
0.11829192760305554
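
Note that the cell above resamples the observed treatment rows, so its mean reflects the observed treatment rate. A minimal sketch of simulating directly from the null rate $p_{new}$ instead (not run here) would be:

# hypothetical direct simulation under the null: n_new Bernoulli draws with success probability p_new
new_page_converted_null = np.random.binomial(1, p_new, n_new)
new_page_converted_null.mean()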

f. Simulate $n_{old}$ transactions with a convert rate of $p_{old}$ under the null. Store these $n_{old}$ 1's and 0's in old_page_converted.

In [28]:
control_df = df2.query('group == "control"')
sample_old = control_df.sample(n_old, replace=True)
old_page_converted = sample_old['converted']
old_page_converted.mean()
Out[28]:
0.1204620234866528

g. Find $p_{new}$ - $p_{old}$ for your simulated values from part (e) and (f).

In [29]:
# this is a result of one simulation
# the simulation needs to be repeated many times to enable hypothesis testing
p_diff_simulate = new_page_converted.mean() - old_page_converted.mean()
p_diff_simulate
Out[29]:
-0.0021700958835972617

h. Simulate 10,000 $p_{new}$ - $p_{old}$ values using this same process similarly to the one you calculated in parts a. through g. above. Store all 10,000 values in a numpy array called p_diffs.

In [30]:
# bootstrapping to get the sampling distribution of the conversion differences
control_conv_prob = []
treatment_conv_prob = []
p_diffs = []

# for loops are much slower than numpy functions
for _ in range(10000):
    sample_old2 = control_df.sample(n_old, replace=True)
    sample_new2 = treatment_df.sample(n_new, replace=True)

    control_conversion = sample_old2['converted'].sum() / n_old
    treatment_conversion = sample_new2['converted'].sum() / n_new

# numpy binomial function would generate the distribution given that the null is true
#control_conversion = np.random.binomial(n_old, p_old, 10000) / n_old
#treatment_conversion = np.random.binomial(n_new, p_new, 10000) / n_new
    
    control_conv_prob.append(control_conversion)
    treatment_conv_prob.append(treatment_conversion)
    p_diffs.append(treatment_conversion - control_conversion)
    
p_diffs = np.array(p_diffs)
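
As an aside, the commented-out binomial lines above can replace the whole loop. A vectorized sketch of simulating the differences directly under the null (assuming p_new, p_old, n_new and n_old from parts a. through d.) would be:

# draw 10,000 simulated conversion counts per page under the null and turn them into rates
new_converted_simulation = np.random.binomial(n_new, p_new, 10000) / n_new
old_converted_simulation = np.random.binomial(n_old, p_old, 10000) / n_old
p_diffs_null = new_converted_simulation - old_converted_simulation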

i. Plot a histogram of the p_diffs. Does this plot look like what you expected?

In [31]:
plt.hist(control_conv_prob, alpha=0.5, color='blue')
plt.hist(treatment_conv_prob, alpha=0.5, color='green');
In [32]:
# we simulated a sampling distribution for the conversion difference by bootstrapping
plt.hist(p_diffs);
plt.axvline(x=0, color='black');
In [33]:
# the null hypothesis says the difference is less than or equal to 0
# about 90% of the bootstrapped differences are below 0, which is consistent with H0
stats.percentileofscore(p_diffs, 0)
Out[33]:
90.72
In [34]:
(p_diffs < 0).mean()
Out[34]:
0.9072
In [35]:
# alternatively, we can simulate the differences under the null, i.e. when the mean difference is 0
# now we can look at how likely it would be to observe our difference, or a more extreme value in favour of H1,
# given that the H0 is true, which in our case means difference values higher than the obs_diff
null_vals = np.random.normal(0, p_diffs.std(), p_diffs.size)
plt.hist(null_vals)
plt.axvline(x=obs_diff, color='black');
plt.axvline(x=np.percentile(null_vals, 95), color='red');

j. What proportion of the p_diffs are greater than the actual difference observed in ab_data.csv?

In [36]:
# the proportion of p_diffs greater than the actual observed difference is about 50%
# however, if the binomial had been used to simulate under the null, we would have the distribution under H0
# and this proportion would be about 90%, i.e. our p-value
(p_diffs > obs_diff).mean()
Out[36]:
0.5027
In [37]:
p_value = (null_vals > obs_diff).mean()
p_value
Out[37]:
0.9039
In [38]:
p_value = 1 - stats.percentileofscore(null_vals, obs_diff) / 100
p_value
Out[38]:
0.9039
In [39]:
# we would only be able to reject the null if the observed difference was higher than 0.002
np.percentile(null_vals, 95)
Out[39]:
0.002007593928530091

k. In words, explain what you just computed in part j. What is this value called in scientific studies? What does this value mean in terms of whether or not there is a difference between the new and old pages?

We calculated the p-value. The p-value of 0.9 says that given that the null hypothesis is true, there is 90% probability of observing our conversion difference (or one more extreme in favour of the alternative).
The null therefore cannot be rejected (with a type I error rate of 5% or any other reasonable type I error rate) and we should keep the old page.

l. We could also use a built-in to achieve similar results. Though using the built-in might be easier to code, the above portions are a walkthrough of the ideas that are critical to correctly thinking about statistical significance. Fill in the below to calculate the number of conversions for each page, as well as the number of individuals who received each page. Let n_old and n_new refer to the number of rows associated with the old page and new pages, respectively.

In [40]:
import statsmodels.api as sm

convert_old = df2.query('(converted == 1) and (group == "control")').shape[0]
convert_new = df2.query('(converted == 1) and (group == "treatment")').shape[0]
n_old = df2.query('group == "control"').shape[0]
n_new = df2.query('group == "treatment"').shape[0]

m. Now use stats.proportions_ztest to compute your test statistic and p-value.

In [41]:
# the order is important and has to follow our hypotheses
counts = [convert_new, convert_old]
nobs = [n_new, n_old]
In [42]:
# we select alternative='larger' because that is our H1 (p_new > p_old)
z_score, p_value = sm.stats.proportions_ztest(counts, nobs, alternative='larger')
p_value
Out[42]:
0.9050583127590245
In [43]:
# the z-score tells us the observed statistic lies about 1.3 standard deviations below the mean of the standard normal N(0,1)
z_score
Out[43]:
-1.3109241984234394
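
For intuition, the same z statistic can be reproduced by hand from the pooled conversion rate under the null (a sketch using the counts computed in part l.):

# pooled conversion rate and the standard error of the difference in proportions under the null
p_new_hat = convert_new / n_new
p_old_hat = convert_old / n_old
p_pooled = (convert_new + convert_old) / (n_new + n_old)
se = np.sqrt(p_pooled * (1 - p_pooled) * (1 / n_new + 1 / n_old))
(p_new_hat - p_old_hat) / se  # ≈ -1.31, matching the z-score above
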
In [44]:
from scipy.stats import norm
# critical value for 5% type I error level
# we cannot reject the null because the z-score is lower than the critical value
critical_value = norm.ppf(1 - (0.05))
critical_value
Out[44]:
1.6448536269514722
In [45]:
# density plot
g = sns.distplot(np.random.normal(0, 1, 10000))
g.axvline(x=z_score, color='black')
g.axvline(x=critical_value, color='red');
In [46]:
# cdf plot
plt.hist(np.random.normal(0, 1, 10000), density=True, cumulative=True, alpha=0.5)
plt.axvline(x=z_score, color='black');
In [47]:
# the z-score sits at roughly the 10th percentile (about 9.5%) of the standard normal distribution
percentile = norm.cdf(z_score)
percentile
Out[47]:
0.09494168724097551
In [48]:
# p-value can be calculated as follows from the z-score
p_value = 1 - norm.cdf(z_score)
p_value
Out[48]:
0.9050583127590245

n. What do the z-score and p-value you computed in the previous question mean for the conversion rates of the old and new pages? Do they agree with the findings in parts j. and k.?

The findings of both parts agree.

The z-score measures how many standard deviations the observed statistic lies from the mean under the null; it can be converted into a p-value and used to draw the hypothesis-testing conclusions, as shown above.

The p-value means that, given that the null is true, there is about a 90% probability of observing a difference as extreme as ours (or more extreme in favour of the alternative). It is safe to say that we do not have evidence that the new page leads to more conversions and we should stick to the old page.

Part III - A regression approach

1. In this final part, you will see that the result you achieved in the previous A/B test can also be achieved by performing regression.

a. Since each row is either a conversion or no conversion, what type of regression should you be performing in this case?

Logistic regression should be used for this case.
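
Concretely, the logistic regression fit below models the log-odds of conversion as a linear function of the page indicator:

$$\log\left(\frac{p}{1-p}\right) = \beta_0 + \beta_1 \cdot ab\_page$$

where $p$ is the probability of conversion and $\beta_1$ captures the effect of receiving the new page.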

b. The goal is to use statsmodels to fit the regression model you specified in part a. to see if there is a significant difference in conversion based on which page a customer receives. However, you first need to create a column for the intercept, and create a dummy variable column for which page each user received. Add an intercept column, as well as an ab_page column, which is 1 when an individual receives the treatment and 0 if control.

In [49]:
df2['intercept'] = 1
df2['ab_page'] = pd.get_dummies(df2['group'])['treatment']
df2.head()
Out[49]:
user_id timestamp group landing_page converted intercept ab_page
0 851104 2017-01-21 22:11:48.556739 control old_page 0 1 0
1 804228 2017-01-12 08:01:45.159739 control old_page 0 1 0
2 661590 2017-01-11 16:55:06.154213 treatment new_page 0 1 1
3 853541 2017-01-08 18:28:03.143765 treatment new_page 0 1 1
4 864975 2017-01-21 01:52:26.210827 control old_page 1 1 0

c. Use statsmodels to import your regression model. Instantiate the model, and fit the model using the two columns you created in part b. to predict whether or not an individual converts.

In [50]:
log_mod = sm.Logit(df2['converted'], df2[['intercept', 'ab_page']])
results = log_mod.fit()
Optimization terminated successfully.
         Current function value: 0.366118
         Iterations 6

d. Provide the summary of your model below, and use it as necessary to answer the following questions.

In [51]:
results.summary()
Out[51]:
Logit Regression Results
Dep. Variable: converted No. Observations: 290584
Model: Logit Df Residuals: 290582
Method: MLE Df Model: 1
Date: Fri, 05 Jun 2020 Pseudo R-squ.: 8.077e-06
Time: 14:51:34 Log-Likelihood: -1.0639e+05
converged: True LL-Null: -1.0639e+05
Covariance Type: nonrobust LLR p-value: 0.1899
coef std err z P>|z| [0.025 0.975]
intercept -1.9888 0.008 -246.669 0.000 -2.005 -1.973
ab_page -0.0150 0.011 -1.311 0.190 -0.037 0.007
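
For interpretation, exponentiating the ab_page coefficient gives an odds ratio; a quick sketch using the fitted results object from above:

# odds ratio for the new page relative to the old page
np.exp(results.params['ab_page'])  # exp(-0.0150) ≈ 0.985, i.e. slightly lower odds of conversion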

e. What is the p-value associated with ab_page? Why does it differ from the value you found in Part II?

Hint: What are the null and alternative hypotheses associated with your regression model, and how do they compare to the null and alternative hypotheses in the Part II?

The p-value associated with ab_page is 0.19. The null cannot be rejected because 0.19 is above our Type I error threshold of 0.05.
The coefficient of ab_page is therefore not statistically significant, so we cannot say that the new page has any effect on the conversion rate.
The old page should therefore be kept, because the new page did not prove to have higher conversions.
This conclusion is the same as in Part II; the p-value differs from the value found there because the two parts test different hypotheses.
The null of the logistic regression is that the page has no impact on conversions, i.e. that the probability of conversion is the same with the old page and the new page, and the alternative is that the probabilities differ. The regression therefore corresponds to a two-tailed test: a new page that performs worse than the old page also counts as evidence for the alternative, which is why the p-value decreased here (our observed difference is negative).
In Part II, on the other hand, we did a one-tailed test in which the alternative was that the new page has higher conversions.
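
As a rough sanity check, doubling the lower tail area of the z-score from Part II approximately reproduces the regression p-value for ab_page (a sketch, reusing norm imported above):

# two-tailed p-value from the (negative) z-score computed in Part II
2 * norm.cdf(z_score)  # ≈ 0.19, matching the p-value of ab_page in the summary above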

In [52]:
# to make a comparison to the previous part, having a two-tailed test there would mean that we would reject the null 
# if the observed conversion difference is either lower than -0.0023 or higher than 0.0023
# we see that we are definitely somewhat closer to the rejection region (i.e. there is also a lower p-value) in this case 
# than we were in the one-tailed case
plt.hist(null_vals)
plt.axvline(x=obs_diff, color='black')
plt.axvline(x=np.percentile(null_vals, 2.5), color='red')
plt.axvline(x=np.percentile(null_vals, 97.5), color='red');
In [53]:
print('2.5th percentile:', np.percentile(null_vals, 2.5))
print('97.5th percentile:', np.percentile(null_vals, 97.5))
2.5th percentile: -0.0023478950158733933
97.5th percentile: 0.0023667384803735337

f. Now, you are considering other things that might influence whether or not an individual converts. Discuss why it is a good idea to consider other factors to add into your regression model. Are there any disadvantages to adding additional terms into your regression model?

Right now we only look at the effect of the new page on the conversion rate. In reality, however, many other factors probably also influence whether or not a user converts: existing customers might suffer from change aversion, users might convert because of other changes happening on the site, or conversion might be driven by their specific customer characteristics rather than by the page they were shown.

One disadvantage of adding more terms to the regression is the multiple comparisons problem: the more metrics we evaluate, the more likely we are to observe significant differences purely by chance, so the more inferences we make, the more likely erroneous inferences become.
Adding more terms also always improves the in-sample fit, regardless of whether the added term carries any real explanatory value, and adding many independent variables can lead to overfitting, where the training data is modeled very closely but the estimates generalize poorly to new, unseen data. The estimation can also suffer from multicollinearity, which occurs when predictors are highly correlated (see the sketch below).
Other potential issues to consider are Simpson's paradox and confounding variables.
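
One common way to quantify multicollinearity is the variance inflation factor; a minimal sketch (not run here), using the intercept and ab_page columns created above:

from statsmodels.stats.outliers_influence import variance_inflation_factor

# VIF per column of the design matrix; values well above ~5-10 would suggest problematic collinearity
X_vif = df2[['intercept', 'ab_page']]
pd.Series([variance_inflation_factor(X_vif.values, i) for i in range(X_vif.shape[1])], index=X_vif.columns)

With only one non-constant predictor this is trivial, but the same pattern applies once more predictors are added.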

g. Now along with testing if the conversion rate changes for different pages, also add an effect based on which country a user lives in. You will need to read in the countries.csv dataset and merge together your datasets on the appropriate rows. Here are the docs for joining tables.

Does it appear that country had an impact on conversion? Don't forget to create dummy variables for these country columns - Hint: You will need two columns for the three dummy variables. Provide the statistical output as well as a written response to answer this question.

In [54]:
countries_df = pd.read_csv('./countries.csv')
df_new = countries_df.set_index('user_id').join(df2.set_index('user_id'), how='inner')
df_new.head()
Out[54]:
country timestamp group landing_page converted intercept ab_page
user_id
834778 UK 2017-01-14 23:08:43.304998 control old_page 0 1 0
928468 US 2017-01-23 14:44:16.387854 treatment new_page 0 1 1
822059 UK 2017-01-16 14:04:14.719771 treatment new_page 1 1 1
711597 UK 2017-01-22 03:14:24.763511 control old_page 0 1 0
710616 UK 2017-01-16 13:14:44.000513 treatment new_page 0 1 1
In [55]:
# majority of customers comes from US, let's use it as the reference
df_new['country'].value_counts()
Out[55]:
US    203619
UK     72466
CA     14499
Name: country, dtype: int64
In [56]:
### Create the necessary dummy variables
dum_countries = pd.get_dummies(df_new['country'])
df4 = dum_countries.join(df_new, how='inner')
df4.head()
Out[56]:
CA UK US country timestamp group landing_page converted intercept ab_page
user_id
834778 0 1 0 UK 2017-01-14 23:08:43.304998 control old_page 0 1 0
928468 0 0 1 US 2017-01-23 14:44:16.387854 treatment new_page 0 1 1
822059 0 1 0 UK 2017-01-16 14:04:14.719771 treatment new_page 1 1 1
711597 0 1 0 UK 2017-01-22 03:14:24.763511 control old_page 0 1 0
710616 0 1 0 UK 2017-01-16 13:14:44.000513 treatment new_page 0 1 1
In [57]:
log_mod1 = sm.Logit(df4['converted'], df4[['intercept', 'UK', 'CA']])
results = log_mod1.fit()
results.summary()
Optimization terminated successfully.
         Current function value: 0.366116
         Iterations 6
Out[57]:
Logit Regression Results
Dep. Variable: converted No. Observations: 290584
Model: Logit Df Residuals: 290581
Method: MLE Df Model: 2
Date: Fri, 05 Jun 2020 Pseudo R-squ.: 1.521e-05
Time: 14:51:39 Log-Likelihood: -1.0639e+05
converged: True LL-Null: -1.0639e+05
Covariance Type: nonrobust LLR p-value: 0.1984
coef std err z P>|z| [0.025 0.975]
intercept -1.9967 0.007 -292.314 0.000 -2.010 -1.983
UK 0.0099 0.013 0.746 0.456 -0.016 0.036
CA -0.0408 0.027 -1.518 0.129 -0.093 0.012

The model above includes only the country of customers and no other explanatory variables. We see that these predictors are insignificant (their p-values are high), i.e. we cannot say that solely being from either UK or CA (as opposed to US) has a significant effect on the conversion rate.

In [58]:
log_mod2 = sm.Logit(df4['converted'], df4[['intercept', 'ab_page', 'UK', 'CA']])
results = log_mod2.fit()
results.summary()
Optimization terminated successfully.
         Current function value: 0.366113
         Iterations 6
Out[58]:
Logit Regression Results
Dep. Variable: converted No. Observations: 290584
Model: Logit Df Residuals: 290580
Method: MLE Df Model: 3
Date: Fri, 05 Jun 2020 Pseudo R-squ.: 2.323e-05
Time: 14:51:43 Log-Likelihood: -1.0639e+05
converged: True LL-Null: -1.0639e+05
Covariance Type: nonrobust LLR p-value: 0.1760
coef std err z P>|z| [0.025 0.975]
intercept -1.9893 0.009 -223.763 0.000 -2.007 -1.972
ab_page -0.0149 0.011 -1.307 0.191 -0.037 0.007
UK 0.0099 0.013 0.743 0.457 -0.016 0.036
CA -0.0408 0.027 -1.516 0.130 -0.093 0.012

Adding ab_page to the model makes no difference: all variables are still statistically insignificant and the null cannot be rejected.

h. Though you have now looked at the individual factors of country and page on conversion, we would now like to look at an interaction between page and country to see if there are significant effects on conversion. Create the necessary additional columns, and fit the new model.

Provide the summary results, and your conclusions based on the results.

In [59]:
UK_newpage = df4['ab_page'] * df4['UK']
df4['UK_newpage'] = UK_newpage
In [60]:
CA_newpage = df4['ab_page'] * df4['CA']
df4['CA_newpage'] = CA_newpage
df4.head()
Out[60]:
CA UK US country timestamp group landing_page converted intercept ab_page UK_newpage CA_newpage
user_id
834778 0 1 0 UK 2017-01-14 23:08:43.304998 control old_page 0 1 0 0 0
928468 0 0 1 US 2017-01-23 14:44:16.387854 treatment new_page 0 1 1 0 0
822059 0 1 0 UK 2017-01-16 14:04:14.719771 treatment new_page 1 1 1 1 0
711597 0 1 0 UK 2017-01-22 03:14:24.763511 control old_page 0 1 0 0 0
710616 0 1 0 UK 2017-01-16 13:14:44.000513 treatment new_page 0 1 1 1 0
In [61]:
### Fit Your Linear Model And Obtain the Results
log_mod3 = sm.Logit(df4['converted'], df4[['intercept', 'UK', 'CA', 'UK_newpage', 'CA_newpage']])
results = log_mod3.fit()
results.summary()
Optimization terminated successfully.
         Current function value: 0.366113
         Iterations 6
Out[61]:
Logit Regression Results
Dep. Variable: converted No. Observations: 290584
Model: Logit Df Residuals: 290579
Method: MLE Df Model: 4
Date: Fri, 05 Jun 2020 Pseudo R-squ.: 2.417e-05
Time: 14:51:47 Log-Likelihood: -1.0639e+05
converged: True LL-Null: -1.0639e+05
Covariance Type: nonrobust LLR p-value: 0.2729
coef std err z P>|z| [0.025 0.975]
intercept -1.9967 0.007 -292.314 0.000 -2.010 -1.983
UK 0.0045 0.018 0.257 0.797 -0.030 0.039
CA -0.0073 0.037 -0.196 0.844 -0.080 0.065
UK_newpage 0.0108 0.023 0.475 0.635 -0.034 0.056
CA_newpage -0.0674 0.052 -1.297 0.195 -0.169 0.034
In [62]:
log_mod4 = sm.Logit(df4['converted'], df4[['intercept', 'ab_page', 'UK', 'CA', 'UK_newpage', 'CA_newpage']])
results = log_mod4.fit()
results.summary()
Optimization terminated successfully.
         Current function value: 0.366109
         Iterations 6
Out[62]:
Logit Regression Results
Dep. Variable: converted No. Observations: 290584
Model: Logit Df Residuals: 290578
Method: MLE Df Model: 5
Date: Fri, 05 Jun 2020 Pseudo R-squ.: 3.482e-05
Time: 14:51:51 Log-Likelihood: -1.0639e+05
converged: True LL-Null: -1.0639e+05
Covariance Type: nonrobust LLR p-value: 0.1920
coef std err z P>|z| [0.025 0.975]
intercept -1.9865 0.010 -206.344 0.000 -2.005 -1.968
ab_page -0.0206 0.014 -1.505 0.132 -0.047 0.006
UK -0.0057 0.019 -0.306 0.760 -0.043 0.031
CA -0.0175 0.038 -0.465 0.642 -0.091 0.056
UK_newpage 0.0314 0.027 1.181 0.238 -0.021 0.084
CA_newpage -0.0469 0.054 -0.872 0.383 -0.152 0.059
In [63]:
# pairwise correlations
# these are normally a good way to check for multicollinearity, however since we only have categorical variables
# (which is also a probable reason why we cannot fit any good model), we do not learn much about their relationship
sns.pairplot(df4[['ab_page', 'UK', 'CA', 'UK_newpage', 'CA_newpage']]);
# with only binary dummies in the plot (and 'converted' not included), the main thing visible is the structure
# of the dummies themselves, e.g. that UK_newpage is 1 only when both UK and ab_page are 1

Adding the interaction terms does not improve the model: all variables are still statistically insignificant.
This confirms the previous conclusion that we do not have evidence to reject the null and that the old page should be kept.

Exploration of timestamp for predicting conversions

Let's further look into whether the time of visits matters for the conversion.

In [64]:
print('There is less than one full month in our data:\nStart:', min(df4['timestamp']), '\nEnd:', max(df4['timestamp']))
There is less than one full month in our data:
Start: 2017-01-02 13:42:05.378582 
End: 2017-01-24 13:41:54.460509
In [65]:
df4['timestamp'] = pd.to_datetime(df4['timestamp'])

plt.figure(figsize=(20,5))
plt.subplot(1,2,1)
# conversion totals show little volatility between days, but no clear trend
# (the first and the last day were not included as whole days)
g1 = df4.groupby(df4['timestamp'].dt.day)['converted'].sum().plot(kind="bar")
g1.set(title='Conversions by days')

plt.subplot(1,2,2)
# while traffic is almost the same every day
g2 = df4.groupby(df4['timestamp'].dt.day)['converted'].count().plot(kind="bar")
g2.set(title='Visits by days')

plt.show()
In [66]:
plt.figure(figsize=(20,5))
plt.subplot(1,2,1)
# conversions are seasonal based on the day of the week, with Mondays and Tuesdays being more successful than other days
g3 = df4.groupby(df4['timestamp'].dt.day_name())['converted'].sum().plot(kind="bar")
g3.set(title='Conversions by day of the week')

plt.subplot(1,2,2)
# this is due to more traffic on the page on Mondays and Tuesdays
# this makes it an uninteresting point of view to add to the model
g4 = df4.groupby(df4['timestamp'].dt.day_name())['converted'].count().plot(kind="bar")
g4.set(title='Visits by day of the week')

plt.show()
In [67]:
plt.figure(figsize=(20,5))

plt.subplot(1,2,1)
# conversions have a few peaks during the day
g5 = df4.groupby(df4['timestamp'].dt.hour)['converted'].sum().plot(kind="bar");
g5.set(title='Conversions by hour')

plt.subplot(1,2,2)
# while traffic is the same during the day
# it might be interesting to add this view to our model
g6 = df4.groupby(df4['timestamp'].dt.hour)['converted'].count().plot(kind="bar")
g6.set(title='Visits by hour')

plt.show()
In [68]:
# hour of the day groups to show that out of all information we have, hour of visiting the page might be the best 
# predictor of conversions
df4['hour'] = df4['timestamp'].dt.hour
# let's create a few buckets so we don't end up with 24 dummies
df4['night'] = df4.apply(lambda row: 1 if (row.hour > 0 and row.hour < 5) else 0, axis=1)
df4['morning'] = df4.apply(lambda row: 1 if (row.hour >= 5 and row.hour < 9) else 0, axis=1)
df4['midday'] = df4.apply(lambda row: 1 if (row.hour >= 9 and row.hour < 13) else 0, axis=1)
df4['afternoon'] = df4.apply(lambda row: 1 if (row.hour >= 13 and row.hour < 17) else 0, axis=1)
df4['evening'] = df4.apply(lambda row: 1 if (row.hour >= 17 and row.hour < 21) else 0, axis=1)
df4['lateevening'] = df4.apply(lambda row: 1 if (row.hour >= 21 or row.hour == 0) else 0, axis=1)

df4.head()
Out[68]:
CA UK US country timestamp group landing_page converted intercept ab_page UK_newpage CA_newpage hour night morning midday afternoon evening lateevening
user_id
834778 0 1 0 UK 2017-01-14 23:08:43.304998 control old_page 0 1 0 0 0 23 0 0 0 0 0 1
928468 0 0 1 US 2017-01-23 14:44:16.387854 treatment new_page 0 1 1 0 0 14 0 0 0 1 0 0
822059 0 1 0 UK 2017-01-16 14:04:14.719771 treatment new_page 1 1 1 1 0 14 0 0 0 1 0 0
711597 0 1 0 UK 2017-01-22 03:14:24.763511 control old_page 0 1 0 0 0 3 1 0 0 0 0 0
710616 0 1 0 UK 2017-01-16 13:14:44.000513 treatment new_page 0 1 1 1 0 13 0 0 0 1 0 0
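
As an aside, the same buckets could be built more concisely (and much faster than a row-wise apply) with pd.cut; a sketch assuming the same hour boundaries:

# shift the hour so that 21-23 and 0 land in the same bucket: 1 -> 0, 2 -> 1, ..., 0 -> 23
shifted_hour = (df4['hour'] - 1) % 24
labels = ['night', 'morning', 'midday', 'afternoon', 'evening', 'lateevening']
time_bucket = pd.cut(shifted_hour, bins=[-1, 3, 7, 11, 15, 19, 23], labels=labels)
time_dummies = pd.get_dummies(time_bucket)
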
In [69]:
# having any or all of 'ab_page', 'UK', 'CA', 'UK_newpage', 'CA_newpage' makes no difference to significance of time buckets
log_mod5 = sm.Logit(df4['converted'], df4[['intercept', 'ab_page', 'UK', 'CA', 'UK_newpage', 'CA_newpage',
                                           'night', 'morning', 'midday', 'afternoon', 'lateevening']])
results = log_mod5.fit()
results.summary()
Optimization terminated successfully.
         Current function value: 0.366074
         Iterations 6
Out[69]:
Logit Regression Results
Dep. Variable: converted No. Observations: 290584
Model: Logit Df Residuals: 290573
Method: MLE Df Model: 10
Date: Fri, 05 Jun 2020 Pseudo R-squ.: 0.0001298
Time: 14:55:43 Log-Likelihood: -1.0638e+05
converged: True LL-Null: -1.0639e+05
Covariance Type: nonrobust LLR p-value: 0.002083
coef std err z P>|z| [0.025 0.975]
intercept -1.9725 0.016 -123.900 0.000 -2.004 -1.941
ab_page -0.0207 0.014 -1.513 0.130 -0.047 0.006
UK -0.0059 0.019 -0.311 0.755 -0.043 0.031
CA -0.0176 0.038 -0.468 0.640 -0.092 0.056
UK_newpage 0.0318 0.027 1.196 0.232 -0.020 0.084
CA_newpage -0.0468 0.054 -0.870 0.384 -0.152 0.059
night -0.0678 0.020 -3.399 0.001 -0.107 -0.029
morning -0.0142 0.020 -0.719 0.472 -0.053 0.024
midday 0.0122 0.020 0.619 0.536 -0.026 0.051
afternoon -0.0169 0.020 -0.857 0.391 -0.056 0.022
lateevening 0.0017 0.020 0.084 0.933 -0.037 0.040
In [70]:
np.exp(results.params)
Out[70]:
intercept      0.139109
ab_page        0.979541
UK             0.994164
CA             0.982506
UK_newpage     1.032310
CA_newpage     0.954266
night          0.934453
morning        0.985908
midday         1.012240
afternoon      0.983210
lateevening    1.001655
dtype: float64
In [71]:
1 / np.exp(results.params)
Out[71]:
intercept      7.188603
ab_page        1.020887
UK             1.005870
CA             1.017805
UK_newpage     0.968701
CA_newpage     1.047926
night          1.070144
morning        1.014293
midday         0.987908
afternoon      1.017076
lateevening    0.998347
dtype: float64

The only statistically significant finding is that the odds of conversion are about 1.07 times higher in the evening (5-8 PM, the reference bucket) than at night (1-4 AM), holding everything else constant, but even this finding is of little practical significance.
The conclusion is that the variables included in our data do not have a strong influence on whether or not a customer converts.

Model diagnostics

Let's look at some model diagnostics to confirm that our model is actually not good for predicting conversions.

In [72]:
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, precision_score, recall_score, accuracy_score
from sklearn.model_selection import train_test_split
In [73]:
# fitting the model using sklearn
# coefficients differ slightly from statsmodels because sklearn's LogisticRegression applies L2 regularization by default
# the intercept is added automatically
log_mod6 = LogisticRegression()
log_mod6.fit(df4[['ab_page', 'UK', 'CA', 'UK_newpage', 'CA_newpage', 'night', 'morning', 'midday', 
                  'afternoon', 'lateevening']], df4['converted'])
print(log_mod6.intercept_)
print(log_mod6.coef_)
[-1.97248247]
[[-0.02071187 -0.00588324 -0.01776707  0.03184734 -0.04670052 -0.0677624
  -0.01422128  0.01211551 -0.01695229  0.00161148]]
In [74]:
# model diagnostics
X = df4[['ab_page', 'UK', 'CA', 'UK_newpage', 'CA_newpage', 'night', 'morning', 'midday', 'afternoon', 'lateevening']]
y = df4['converted']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=0)
In [75]:
# coefficients of the train model
log_mod7 = LogisticRegression()
log_mod7.fit(X_train, y_train)
preds = log_mod7.predict(X_test)
print(log_mod7.intercept_)
print(log_mod7.coef_)
[-1.97629765]
[[-0.01587651  0.00300977 -0.01789624  0.01292014 -0.05377564 -0.06903411
  -0.01264868  0.0035405  -0.00628555 -0.00801783]]
In [76]:
# our model does not predict any conversions at all
cm = confusion_matrix(y_test, preds)
cm
Out[76]:
array([[51094,     0],
       [ 7023,     0]], dtype=int64)
In [77]:
print('TN = 51094, FP = 0\nFN = 7023, TP = 0')
TN = 51094, FP = 0
FN = 7023, TP = 0
In [78]:
# precision = TP / (TP + FP) ... division by 0 in this case
# precision_score(y_test, preds)
print('Precision: -')
# recall = TP / (TP + FN) = 0
print('Recall:', recall_score(y_test, preds))
# accuracy = (TP + TN) / (TP + FP + FN + TN) = 0.88
score = accuracy_score(y_test, preds)
print('Accuracy:', score)
Precision: -
Recall: 0.0
Accuracy: 0.8791575614708261
In [79]:
plt.figure(figsize=(4,4))
sns.heatmap(cm, annot=True, fmt=".0f", linewidths=.5, square = True,
cmap = 'Blues');
plt.ylabel('Actual label');
plt.xlabel('Predicted label');
all_sample_title = 'Accuracy Score: {0}'.format(score)
plt.title(all_sample_title, size = 8);
In [80]:
# our model has no predictive power, which we see from the confusion matrix and also from the ROC / AUC below:
# an AUC of 0.5 is no better than random guessing

from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
logit_roc_auc = roc_auc_score(y_test, log_mod7.predict(X_test))
fpr, tpr, thresholds = roc_curve(y_test, log_mod7.predict_proba(X_test)[:,1])
plt.figure()
plt.plot(fpr, tpr, label='Logistic Regression (area = %0.2f)' % logit_roc_auc)
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
plt.savefig('Log_ROC')
plt.show()
In [81]:
# the issue is largely caused by imbalanced data: the share of conversions in the data is low, so the model
# plays it safe and predicts no conversions at all, which still yields a fairly high accuracy
# The data could be rebalanced, e.g. by oversampling the conversion data points (not shown here; a class-weighted alternative is sketched further below).
# Instead, let's try a different approach: lowering the predicted-probability threshold needed to classify
# a row as a conversion (0.5 by default):
log_mod8 = LogisticRegression()
log_mod8.fit(X_train, y_train)
preds = log_mod8.predict_proba(X_test)

# log_mod8.classes_ ... conversion probabilities are the second item in each sublist
preds
Out[81]:
array([[0.88547507, 0.11452493],
       [0.87959851, 0.12040149],
       [0.87997296, 0.12002704],
       ...,
       [0.87997296, 0.12002704],
       [0.87997296, 0.12002704],
       [0.87758398, 0.12241602]])
In [82]:
# extract probabilities of conversion
# we have 58117 rows in the test data
y_prob = []
for index, item in enumerate(preds):
    y_prob.append(item[1])
y_prob = pd.Series(y_prob)
y_prob
Out[82]:
0        0.114525
1        0.120401
2        0.120027
3        0.121720
4        0.112661
           ...   
58112    0.111862
58113    0.121044
58114    0.120027
58115    0.120027
58116    0.122416
Length: 58117, dtype: float64
In [83]:
# set a new threshold for the prediction to evaluate as conversion
# a threshold of about 0.12 is needed, because anything much higher still predicts no conversions at all
y_preds = (y_prob > 0.12).astype('int')
y_preds.value_counts()
Out[83]:
1    36040
0    22077
dtype: int64
In [84]:
cm = confusion_matrix(y_test, y_preds)
cm
Out[84]:
array([[19504, 31590],
       [ 2573,  4450]], dtype=int64)
In [85]:
# precision = TP / (TP + FP)
print('Precision:', precision_score(y_test, y_preds))
# recall = TP / (TP + FN)
print('Recall:', recall_score(y_test, y_preds))
# accuracy = (TP + TN) / (TP + FP + FN + TN)
score = accuracy_score(y_test, y_preds)
print('Accuracy:', score)
Precision: 0.1234739178690344
Recall: 0.6336323508472163
Accuracy: 0.412168556532512

The model now predicts some conversions, as opposed to none in the previous model, but the main metrics are still not impressive.
This was done more for illustration purposes than out of a belief that it would really yield a good model.
Both precision and accuracy are very low, which means we get many false positives and the overall share of correct predictions is not high.
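
Another way to address the imbalance, instead of tuning the threshold, is to let the classifier reweight the classes; a minimal sketch assuming the same train/test split as above:

# class_weight='balanced' reweights observations inversely to class frequency
log_mod_bal = LogisticRegression(class_weight='balanced')
log_mod_bal.fit(X_train, y_train)
preds_bal = log_mod_bal.predict(X_test)
print('Precision:', precision_score(y_test, preds_bal))
print('Recall:', recall_score(y_test, preds_bal))

Like the lowered threshold, this typically trades accuracy for recall and, given how weak the predictors are, it would be unlikely to change the overall conclusion.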

In [86]:
plt.figure(figsize=(4,4))
sns.heatmap(cm, annot=True, fmt=".0f", linewidths=.5, square = True,
cmap = 'Blues');
plt.ylabel('Actual label');
plt.xlabel('Predicted label');
all_sample_title = 'Accuracy Score: {0}'.format(score)
plt.title(all_sample_title, size = 8);
In [87]:
# the new ROC
logit_roc_auc = roc_auc_score(y_test, y_preds)
fpr, tpr, thresholds = roc_curve(y_test, log_mod8.predict_proba(X_test)[:,1])
plt.figure()
plt.plot(fpr, tpr, label='Logistic Regression (area = %0.2f)' % logit_roc_auc)
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
plt.savefig('Log_ROC')
plt.show()

Conclusions

We used several ways to test whether the introduction of the new page increases conversions.
The conclusion in all of them is that the new page did not prove to be better than the old page and we do not have the evidence to switch to the new page.

We failed to find a model that would be good at predicting conversions based on the data we have available.