Customer Happiness Index

In this analysis we will look at trends in Buffer’s Customer Happiness Index (CHI) and its components. The data contains CHI components since December 10, 2020.

CHI is made up of the following components:

  • Daily active users / monthly active users 7-day average (A1)
  • Percent NPS promoters * NPS response rate (A2)
  • Users experiencing a failed key action (B1)
  • Users raising support tickets unrelated to failed posts (B2)
  • Users impacted by outages and downtime (B3)

Key Findings

  • Failed key actions have an outsized influence on CHI.

  • Most failed posts are caused by authentication issues, but some, like those caused by schedule limits, may be preventable.

  • Chipping away at the causes of failed posts seems like a promising way to directly impact CHI.

Index Over Time

The plot below shows CHI over time. The most prominent features so far are the dip around the holidays and the upward shift after the first week of February.

Now let’s break this down and show each component. The total CHI metric is shown in the top-left graph. We can plainly see that the dip in dau_over_mau coincides with the dip in chi, and that failed_key_actions is strongly correlated with chi.
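As a quick sanity check, those correlations could be computed directly. This is a hypothetical sketch that assumes a wide data frame `chi_daily` with one column per component; the name and schema are assumptions, not the actual data model.

```r
library(dplyr)

# hypothetical: `chi_daily` has one row per day and columns
# chi, dau_over_mau, and failed_key_actions (names are assumptions)
chi_daily %>%
  summarise(cor_dau_mau = cor(chi, dau_over_mau, use = "complete.obs"),
            cor_failed  = cor(chi, failed_key_actions, use = "complete.obs"))
```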

Next we’ll visualize the relative “size” of each component and show the percentage of CHI that each makes up.

Here we can see that failed key actions make up over 50% of CHI. To me, this warrants some further analysis.
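One way to compute those shares, assuming a long-format data frame `chi_components` with `date`, `component`, and `value` columns (the names are assumptions, not the actual schema):

```r
library(dplyr)

# share of total CHI contributed by each component over the full period
chi_components %>%
  group_by(component) %>%
  summarise(total = sum(value, na.rm = TRUE)) %>%
  mutate(share = total / sum(total)) %>%
  arrange(desc(share))
```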

Failed Posts

Let’s gather the 894 thousand posts that have failed since December 1, 2020.

# connect to bigquery
con <- dbConnect(
  bigrquery::bigquery(),
  project = "buffer-data"
)

# avoid scientific notation when large numeric values are sent to BigQuery
options(scipen = 20)

# define sql query
sql <- "
  select distinct
    id
    , timestamp
    , timestamp_trunc(timestamp, week) as week
    , user_id
    , channel
    , channel_type
    , error_abbreviation
    , error_message
    , post_id
  from segment_publish_server.post_failed
  where timestamp >= '2020-12-01'
"
  
# query BQ
failed_posts <- dbGetQuery(con, sql, page_size = 25000)

# save data
saveRDS(failed_posts, "failed_posts.rds")

We can see that the number of users affected by failed posts each week is around 15-25 thousand.
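The weekly counts can be reproduced from the `failed_posts` data we just queried:

```r
library(dplyr)

# distinct users affected by failed posts each week
failed_posts %>%
  group_by(week) %>%
  summarise(users = n_distinct(user_id)) %>%
  arrange(week)
```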

We can break this down by channel.

Instagram Business accounts, Facebook Pages, and LinkedIn Pages appear to be the biggest culprits. Let’s look at the top error messages for each.
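A sketch of that breakdown: count errors per channel and keep the most common abbreviations within each channel (this assumes `channel` holds values corresponding to the networks above).

```r
library(dplyr)

# top three error abbreviations within each channel
failed_posts %>%
  count(channel, error_abbreviation, sort = TRUE) %>%
  group_by(channel) %>%
  slice_max(n, n = 3) %>%
  ungroup()
```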

Authentication issues seem to be the main driver of these errors, but what about some of the others? Let’s see what the ig_business error message is.

# get error message
failed_posts %>% 
  filter(error_abbreviation == "ig_business") %>% 
  select(error_message) %>% 
  head(1)
## # A tibble: 1 x 1
##   error_message                                                                 
##   <chr>                                                                         
## 1 Uh oh! It doesn’t look like this is correctly set up as a business account. P…

This tells us that a large number of customers experience failed posts because they either didn’t properly set up an Instagram Business account or we failed to identify that the Instagram profile was a Business account.

Let’s take a look at the fb_invalid_param error.

# get error message
failed_posts %>% 
  filter(error_abbreviation == "fb_invalid_param") %>% 
  select(error_message) %>% 
  head(1)
## # A tibble: 1 x 1
##   error_message                                                                 
##   <chr>                                                                         
## 1 It looks like the URL in this post is not valid. Up for replacing the link an…

This error message isn’t particularly informative, but we may not have much more detail to share with the user. What exactly went wrong? Which Facebook parameter was invalid?

What about the error_fb_amount_data_reached error?

# get error message
failed_posts %>% 
  filter(error_abbreviation == "error_fb_amount_data_reached") %>% 
  select(error_message) %>% 
  head(1)
## # A tibble: 1 x 1
##   error_message
##   <chr>        
## 1 <NA>

There doesn’t appear to be an error message associated with the error_fb_amount_data_reached abbreviation, even though it causes thousands of users to experience failed posts.

What about something a bit more preventable, like schedule_limit? This error is shown when users hit the daily posting limit for a channel within a 24 hour time period.

# schedule limit errors
failed_posts %>% 
  filter(error_abbreviation == "schedule_limit") %>% 
  summarise(users = n_distinct(user_id),
            posts = n_distinct(id),
            channels = n_distinct(channel))
## # A tibble: 1 x 3
##   users posts channels
##   <int> <int>    <int>
## 1   905 58214        7

This error caused around 58 thousand posts to fail for around 900 users across 7 different channel types. It feels like it could potentially be addressed by preventing users from creating posts that we know will fail due to the schedule limit. Perhaps we already do this?
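A hypothetical sketch of such a check, which blocks a new post when the channel already has the maximum number of posts in the preceding 24 hours. The `scheduled_posts` data frame, the `daily_limit` default, and the column names are all assumptions for illustration, not the actual scheduling logic.

```r
library(dplyr)

# hypothetical pre-scheduling check: returns FALSE when the channel has
# already hit its limit in the 24 hours before `post_time`
can_schedule <- function(scheduled_posts, channel_id, post_time,
                         daily_limit = 10) {
  window_start <- post_time - 60 * 60 * 24  # 24 hours earlier

  n_in_window <- scheduled_posts %>%
    filter(channel == channel_id,
           timestamp >= window_start,
           timestamp < post_time) %>%
    nrow()

  n_in_window < daily_limit
}
```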

Let’s look at media_fail.

# get error message
failed_posts %>% 
  filter(error_abbreviation == "media_fail") %>% 
  select(error_message) %>% 
  unique() %>% 
  head(5)
## # A tibble: 3 x 1
##   error_message                                                                 
##   <chr>                                                                         
## 1 It looks like LinkedIn is having some trouble at the moment. Please try sendi…
## 2 Yikes, it looks like we were having trouble communicating with your Pinterest…
## 3 It looks like we're having trouble uploading this media. Please get in touch …

There are other errors, like instagram_video_duration_long, instagram_caption_length, and instagram_video_format, that could potentially be addressed with changes to the user experience. If we could chip away at these sorts of causes of failed posts, we may be able to make a meaningful dent in failed key actions, and therefore improve CHI.
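To size that opportunity, we could count the users and posts affected by these specific errors:

```r
library(dplyr)

# errors that UX changes could plausibly prevent
preventable <- c("instagram_video_duration_long",
                 "instagram_caption_length",
                 "instagram_video_format")

failed_posts %>%
  filter(error_abbreviation %in% preventable) %>%
  summarise(users = n_distinct(user_id),
            posts = n_distinct(id))
```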