Cluster of Six

Graphic by Carl Goodwin

Before each vote, the Speaker of the House yells “Division! Clear the Lobby”. I’d like to find which cluster of MPs (Members of Parliament) may be exiting the lobby and going their own way.

Hansard reports what’s said in the UK Parliament, sets out details of divisions, and records decisions taken during a sitting. The hansard R package provides access to the data.
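The code throughout assumes a preamble along these lines. The exact set of packages, and the `cols` palette used in the plots, are my inference from the functions called below rather than something shown explicitly:

```r
# Assumed preamble (inferred from the functions used in this post):
library(hansard)      # commons_members(), mp_vote_record()
library(tidyverse)    # dplyr, purrr, stringr, tidyr, forcats, ggplot2
library(factoextra)   # get_clust_tendency(), fviz_dist()
library(dendextend)   # dendlist(), color_branches(), set()
library(corrplot)     # corrplot(), cor.dendlist()
library(broom)        # augment()
library(ggrepel)      # geom_label_repel()
library(lubridate)    # day(), month(), year()
library(wesanderson)  # wes_palette()
library(kableExtra)   # kable()

# Assumed: cols is a four-colour palette indexed as cols[1]..cols[4]
cols <- wes_palette(name = "Moonrise2")
```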

I’ll start by building a list of all Labour Party MPs.

url_prefix <- "http://data.parliament.uk/members/"

mps <- commons_members() %>%
  filter(party_value == "Labour" | about == str_c(url_prefix, "478")) %>%
  mutate(ID = str_replace(about, url_prefix, ""))

Creating a function will enable me to iterate through the MP list to extract their voting records.

pull_votes <- function(x) {
  mp_vote_record(x,
    start_date = "2017-06-08",
    end_date = "2018-01-28",
    verbose = FALSE
  ) %>%
    mutate(mp = x)
}

I’ll use it to extract the “aye” and “no” votes. Use of possibly prevents the code from stopping when it encounters former MPs for whom no data is returned.

votes <-
  map(mps$ID, possibly(pull_votes, NULL)) %>%
  compact() %>%
  map_dfr(simplify, "tibbles") %>%
  rename("lobby" = "vote")

saveRDS(votes, file = "votes.rds")
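For readers unfamiliar with purrr’s possibly, the idea can be sketched in base R with tryCatch. The wrapper and MP IDs below are made up purely for illustration:

```r
# A base-R sketch of what possibly(f, NULL) does: on error, return NULL
# instead of halting the loop. Names here are illustrative only.
with_null_on_error <- function(f) {
  function(...) tryCatch(f(...), error = function(e) NULL)
}

fake_pull <- function(id) {
  if (id == "gone") stop("no data returned for former MP")  # simulated failure
  paste("votes for MP", id)
}

results <- lapply(c("172", "gone", "478"), with_null_on_error(fake_pull))

# Dropping the NULL entries mirrors purrr::compact()
compacted <- Filter(Negate(is.null), results)
```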

Voting the opposite way to the majority of the party, as well as non-votes, will both be of interest when assessing which MPs are “most distant” from the wider party.

votes_df <- votes %>%
  left_join(mps, by = c("mp" = "ID")) %>%
  select(about = about.x, title, date_value, lobby, mp, name = full_name_value) %>%
  mutate(
    vote = if_else(lobby == "aye", 1, -1),
    mp = str_c(name, " (", mp, ")"),
    about = str_replace(about, "http://data.parliament.uk/resources/", ""),
    title = str_c(title, " (", about, ")")
  ) %>%
  select(-lobby, -about, -name, -date_value) %>%
  pivot_wider(names_from = title, values_from = vote, values_fill = 0)

The data are standardised (i.e. scaled) to ensure comparability. This is verified by ensuring the mean and standard deviation are close to zero and one respectively.

scaled_df <- scale(votes_df[, -1]) %>%
  as_tibble() %>%
  mutate(mp = votes_df$mp) %>%
  select(mp, everything())

map(scaled_df[, -1], mean) %>% # should be ~zero
  as_tibble() %>%
  pivot_longer(everything()) %>%
  summarise(min_mean = min(value), max_mean = max(value)) %>%
  kable()
| min_mean| max_mean|
|--------:|--------:|
|        0|        0|

map(scaled_df[, -1], sd) %>% # should be ~one
  as_tibble() %>%
  pivot_longer(everything()) %>%
  summarise(min_sd = min(value), max_sd = max(value)) %>%
  kable()
| min_sd| max_sd|
|------:|------:|
|      1|      1|

I’d like to assess whether the data contain meaningful clusters rather than random noise. This is achieved quantitatively by calculating the Hopkins statistic, and visually by inspection.

If the Hopkins statistic is closer to 1 than 0, then we have data which may be clustered.

get_clust_tendency(scaled_df[, -1], n = nrow(votes_df) - 1) %>%
  pluck("hopkins_stat")
## [1] 0.783357
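To make the statistic less of a black box, here is a minimal base-R sketch of a Hopkins-style calculation. This is my own illustrative implementation, not the one factoextra uses; conventions for the statistic differ, and this sketch follows the clusterable-near-1 form quoted above:

```r
# Illustrative Hopkins-style statistic: compare nearest-neighbour distances
# from m uniform random points to the data (u) against nearest-neighbour
# distances within the data itself (w). Clustered data gives small w
# relative to u, pushing the ratio towards 1.
hopkins_sketch <- function(X, m = nrow(X) %/% 10, seed = 1) {
  set.seed(seed)
  X <- as.matrix(X)
  n <- nrow(X)
  rng <- apply(X, 2, range)
  # m uniform random points within the data's bounding box
  U <- apply(rng, 2, function(r) runif(m, r[1], r[2]))
  # nearest-neighbour distance from point p to the rows of Y
  nn <- function(p, Y) min(sqrt(colSums((t(Y) - p)^2)))
  u <- apply(U, 1, nn, Y = X)
  idx <- sample(n, m)
  w <- sapply(idx, function(i) nn(X[i, ], X[-i, ]))
  sum(u) / (sum(u) + sum(w))
}

# Two well-separated Gaussian clusters should score well above 0.5
set.seed(42)
X <- rbind(matrix(rnorm(100, 0), ncol = 2), matrix(rnorm(100, 10), ncol = 2))
h <- hopkins_sketch(X, m = 10)
```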

A visual assessment of clustering tendency also reveals visible structure in the distance data.

fviz_dist(
  dist(scaled_df[, -1]),
  show_labels = FALSE,
  gradient = list(
    low = cols[1],
    mid = cols[3],
    high = cols[4]
  )
)

There are eight methods I could use for hierarchical clustering, and I’ll need to determine which will yield results that best fit the data.

The correlation plot below shows that the median and ward methods correlate less strongly with the other methods.

orig_dist <- scaled_df[, -1] %>% dist()

dend_meths <-
  c(
    "complete",
    "average",
    "single",
    "ward.D",
    "ward.D2",
    "mcquitty",
    "median",
    "centroid"
  )

dend_list <- map(dend_meths, function(x) {
  orig_dist %>%
    hclust(x) %>%
    as.dendrogram()
}) %>%
  reduce(dendlist) %>%
  set_names(dend_meths)

corrplot(
  cor.dendlist(dend_list),
  "pie",
  "lower",
  col = cols[1],
  mar = c(1, 0.5, 4, 0.5),
  order = "AOE",
  tl.cex = 0.8,
  tl.col = "black",
  cl.cex = 0.7
)

The above plot does not tell us which method is optimal. For that, I’ll take each of the cluster agglomeration methods and calculate their cophenetic distances. I can then correlate these with the original distance to see which offers the best fit.

methods <- list(
  "complete",
  "average",
  "single",
  "ward.D",
  "ward.D2",
  "mcquitty",
  "median",
  "centroid"
)

best_method <- map_dfr(methods, function(x) {
  co_comp <-
    orig_dist %>%
    hclust(x) %>%
    cophenetic()
  tibble(
    correlation = cor(orig_dist, co_comp),
    method = x
  )
})

The plot below confirms that the ward and median methods have a weaker fit. The average method produces the strongest correlation coefficient, at 0.98.

best_method %>%
  ggplot(aes(reorder(method, correlation), correlation)) +
  geom_col(fill = cols[1], width = 0.8) +
  geom_text(aes(label = str_c(method, "  ", round(correlation, 2))),
    hjust = 1.3, colour = "white"
  ) +
  coord_flip() +
  labs(
    x = "Method", y = "Correlation",
    title = "Cluster Method Correlation Coefficients",
    caption = "Source: Hansard"
  )

I can now plot the full Labour Party dendrogram using the average method. This shows a “cluster of six” MPs which is the last to merge with the rest of the party based on their voting pattern.

dend_avg <- orig_dist %>%
  hclust("average") %>%
  as.dendrogram()

labels(dend_avg) <- scaled_df$mp[order.dendrogram(dend_avg)]

dend <- dend_avg %>%
  color_branches(k = 2, col = cols[4]) %>%
  set("labels_cex", 0.4)

ggplot(rev(dend), horiz = TRUE, offset_labels = -0.2) +
  labs(
    y = "\nDistance", title = "Hierarchical Clustering of Labour MPs",
    subtitle = "Based on House of Commons Divisions Since the 2017 Election",
    caption = str_c(
      "\nSource: Hansard (Jun 8, 2017 to ",
      month(Sys.Date(), label = TRUE),
      " ",
      day(Sys.Date()),
      ", ",
      year(Sys.Date()),
      ")"
    )
  ) +
  theme(panel.border = element_blank())

I’ll zoom in on the “cluster of six”.

dend_cuts <- dend %>%
  assign_values_to_leaves_nodePar(19, "pch") %>%
  assign_values_to_leaves_nodePar(5, "cex") %>%
  assign_values_to_leaves_nodePar(cols[1], "col") %>%
  set("labels_cex", 0.4) %>%
  set("branches_lwd", 2.5) %>%
  color_branches(k = 2, col = cols[1]) %>%
  cut(h = 50)

ggplot(rev(dend_cuts$lower[[1]]),
  horiz = TRUE,
  offset_labels = -0.5
) +
  labs(
    title = "Cluster of Six",
    subtitle = "MPs who Branch off First in the Dendrogram"
  ) +
  theme_void() +
  theme(plot.margin = unit(c(1, 1, 1, 1), "cm"))

Summarising and sorting the total votes by MP tells me that the “cluster of six” MPs are among the eight MPs voting the fewest times. And I can, for example, verify the record for Emma Reynolds directly via Hansard.

fewest_votes <- votes %>%
  left_join(mps, by = c("mp" = "ID")) %>%
  group_by(mp = full_name_value, lobby) %>%
  summarise(total = n()) %>%
  ungroup() %>% 
  pivot_wider(names_from = "lobby", values_from = "total") %>%
  mutate(total = aye + no,
         mp = fct_reorder(mp, total)) %>%
  top_n(-10, wt = total) %>%
  pivot_longer(cols = -mp) %>% 
  filter(name != "total")

fewest_votes %>%
  ggplot(aes(mp, value, fill = name)) +
  geom_col() +
  geom_label(aes(label = value), position = position_stack()) +
  scale_fill_manual(values = cols[c(1, 3)]) +
  coord_flip() +
  labs(title = "Labour MPs Voting Fewest Times",
       y = "Votes", x = NULL, fill = NULL)

Non-voting will not be the only influencing factor. The “distant cluster” will be particularly influenced by a small minority of MPs voting in the opposite direction to the overwhelming majority.

Cook’s Distance visualises these influential outliers. This shows the voting of three MPs, all on the European Union Withdrawal Bill readings, to be particular outliers. All three MPs are in the “cluster of six”.

tidy_df <- votes_df %>%
  pivot_longer(cols = -mp, names_to = "title", values_to = "vote")

mod <- lm(vote ~ ., data = tidy_df)

mod_df <- mod %>%
  augment() %>%
  as_tibble()

ggplot(mod_df, aes(title, .cooksd, colour = mp)) +
  geom_jitter() +
  geom_label_repel(aes(label = if_else(.cooksd > 0.002, mp, NA_character_)), size = 4) +
  scale_colour_manual(values = wes_palette(220, name = "Moonrise2", type = "continuous")) +
  labs(title = "Cook's Distance") +
  coord_flip() +
  theme(
    panel.border = element_blank(),
    axis.text = element_text(size = 6),
    legend.position = "none"
  )

mod_df %>%
  filter(str_detect(title, "759161|824379|809989")) %>%
  mutate(title = str_wrap(title, 30)) %>% 
  ggplot(aes(title, .cooksd, colour = mp)) +
  geom_point(size = 4) +
  geom_label_repel(aes(label = if_else(.cooksd > 0.0015, mp, NA_character_)), size = 4) +
  ggtitle("Cook's Distance") +
  theme(
    axis.line.x = element_line(color = "grey60"),
    axis.text = element_text(size = 8),
    legend.position = "none",
    axis.title = element_blank()
  ) +
  scale_colour_manual(values = wes_palette(210, name = "Moonrise2", type = "continuous")) +
  coord_flip()

R Toolbox

The packages and functions used in this post are summarised below. This feeds a separate toolbox visualisation of package and function usage across all posts.
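As a rough illustration of how such a table could be assembled, function calls can be counted with a regular expression. This is a simplification of my own devising; the names here are hypothetical and the real pipeline may parse posts more carefully:

```r
# Count apparent function calls in a code string: match an identifier
# immediately followed by an opening parenthesis (a heuristic, so it will
# miss e.g. functions passed as arguments without parentheses).
count_calls <- function(code) {
  pattern <- "[A-Za-z_.][A-Za-z0-9_.]*(?=\\()"
  calls <- regmatches(code, gregexpr(pattern, code, perl = TRUE))[[1]]
  sort(table(calls), decreasing = TRUE)
}

tally <- count_calls('mutate(select(df, a), b = str_c(a, "x"))')
```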

Package Function
base library[11]; c[5]; function[4]; Sys.Date[3]; list[2]; max[2]; min[2]; rev[2]; conflicts[1]; cumsum[1]; cut[1]; labels[1]; mean[1]; nrow[1]; readRDS[1]; round[1]; saveRDS[1]; scale[1]; search[1]; sum[1]
broom augment[1]
corrplot corrplot[2]
dendextend assign_values_to_leaves_nodePar[3]; set[3]; color_branches[2]; cor.dendlist[1]; dendlist[1]
dplyr mutate[10]; filter[7]; if_else[6]; as_tibble[5]; select[4]; summarise[4]; everything[3]; tibble[3]; group_by[2]; left_join[2]; arrange[1]; desc[1]; n[1]; rename[1]; top_n[1]; ungroup[1]
factoextra fviz_dist[1]; get_clust_tendency[1]
forcats fct_reorder[1]
ggplot2 aes[8]; ggplot[6]; labs[5]; coord_flip[4]; theme[4]; element_blank[3]; element_text[2]; geom_col[2]; scale_colour_manual[2]; element_line[1]; geom_jitter[1]; geom_label[1]; geom_point[1]; geom_text[1]; ggtitle[1]; position_stack[1]; scale_fill_manual[1]; theme_bw[1]; theme_set[1]; theme_void[1]; unit[1]
ggrepel geom_label_repel[2]
hansard commons_members[1]; mp_vote_record[1]
kableExtra kable[3]
lubridate day[1]; month[1]; year[1]
purrr map[5]; map_dfr[2]; possibly[2]; set_names[2]; compact[1]; map2_dfr[1]; pluck[1]; reduce[1]
readr read_lines[1]
rebus literal[4]; lookahead[3]; whole_word[2]; ALPHA[1]; lookbehind[1]; one_or_more[1]; or[1]
stats hclust[3]; as.dendrogram[2]; dist[2]; cophenetic[1]; cor[1]; lm[1]; reorder[1]; sd[1]
stringr str_c[7]; str_detect[4]; str_remove[2]; str_replace[2]; str_count[1]; str_remove_all[1]; str_wrap[1]
tibble enframe[1]
tidyr as_tibble[5]; pivot_longer[4]; everything[3]; tibble[3]; pivot_wider[2]; unnest[1]
wesanderson wes_palette[3]
Carl Goodwin
IBM Data Scientist & Growth Leader