Introduction. The frenzy around fake news seems like a decidedly 21st century phenomenon, yet conspiracy theories have long spread through human societies. Though the existence of contradictory accounts of an event is not new, it’s plausible that the way in which these accounts spread has changed. Has some societal reorganization changed the types of stories that get told or the rates at which such stories get spread?

Misinformation spreads in ways that parallel the diffusion of diseases (Centola & Macy 2007; Jackson & Yariv 2011), with dynamics shaped by the structure of the social network in which the contagion takes root (Jackson & Yariv 2011). How misinformation spreads in small towns, where people have a few long-distance connections and many local connections to other town residents, could differ fundamentally from its spread on Facebook or Twitter, platforms in which a small number of nodes have many connections while the majority of nodes have very few (Jackson 2010). Virtual social networks are further characterized by high levels of homophily (Bakshy et al. 2015; De Choudhury 2011), the tendency of people to be connected to others similar to themselves (McPherson et al. 2002), but the effect of homophily on diffusion remains poorly understood (Jackson & Yariv 2011).

In this study, we compare the diffusion dynamics of misinformation under two network structures. We borrow the concepts of transmission probability and complex contagion from disease models, positing that the likelihood that an opinion spreads depends on the level of skepticism agents hold. We then extend our model to account for homophily by allowing individuals to sever ties with people who hold different beliefs in favor of new ties with similar people.

Methods. To examine dynamics in the flow of accounts about an event, we construct an agent-based model in NetLogo. We allow for two contradictory accounts of an event, A1 and A2. At baseline, most agents have the dominant belief A1. We then select \(i\) agents to be initialized with belief A2. For each of these \(i\) agents, we further select one neighbor to additionally have belief A2 at baseline. The motivation is that people attend events together, so we would expect initial accounts to come from clusters of nodes.
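To make the seeding step concrete, a minimal Python sketch might look as follows (our model itself was implemented in NetLogo). It assumes agents live on a networkx graph; the function and variable names are illustrative rather than our actual code.

```python
import random
import networkx as nx

def seed_beliefs(G, n_seeds, rng=random):
    """Give every agent the dominant belief A1, then flip n_seeds randomly
    chosen agents, plus one neighbor of each, to belief A2."""
    beliefs = {node: "A1" for node in G.nodes}
    for node in rng.sample(list(G.nodes), n_seeds):
        beliefs[node] = "A2"
        neighbors = list(G.neighbors(node))
        if neighbors:                      # guard against isolated agents
            beliefs[rng.choice(neighbors)] = "A2"
    return beliefs

# Example: seed 6 initial A2 agents (plus one neighbor each) on a
# placeholder ring of 100 agents; the networks we actually use are
# described below.
beliefs = seed_beliefs(nx.cycle_graph(100), n_seeds=6)
```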

We embed our agents within either a small-world network, in which each agent is connected to its two neighbors on either side and existing links are then removed and replaced with random connections with rewiring probability \(p_{rw}\) (Watts & Strogatz 1998), or a scale-free network, in which each agent joining the network connects to agents already present, with a higher probability of linking to agents that already have many connections ("hubs") than to those with fewer connections (Albert & Barabasi 2002).
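Both structures have standard generators in networkx; the sketch below shows how they might be constructed. The ring-lattice degree \(k = 4\) (two neighbors on either side) follows the description above, while the \(m = 2\) attachment links for the scale-free network and the specific \(p_{rw}\) value are assumptions for illustration.

```python
import networkx as nx

N = 100        # number of agents (see Methods)
P_RW = 0.15    # rewiring probability p_rw (illustrative value)

# Small-world: ring lattice where each agent links to its two neighbors on
# either side (k = 4), then each link is rewired with probability p_rw.
small_world = nx.watts_strogatz_graph(n=N, k=4, p=P_RW)

# Scale-free: preferential attachment, so each new agent links to m existing
# agents with probability proportional to their current degree ("hubs").
scale_free = nx.barabasi_albert_graph(n=N, m=2)
```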

Fig 1. Visualization of a small-world network (left) and a scale-free network (right).

In each period, agents learn the beliefs of each of their neighbors. Each node \(N_i\) is initialized with a belief property \(m \in \{A1, A2\}\), representing the two accounts. In each time period, we apply two rules:

  1. Switching: If the fraction of an agent’s neighbors holding belief A2 reaches a threshold \(p_{th}\), the agent switches to A2 with probability \(1 - s\). We call \(s\) skepticism because it reflects the likelihood that an agent will reject its neighbors’ suggestions.
  2. Echo Chambers: With probability \(p_{echo}\), a randomly selected node with belief A2 replaces a link to a node holding belief A1 with a link to another node holding belief A2, in a process of homophily (see the sketch following this list).
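A minimal Python sketch of one time period under these two rules is given below (again, the original model is in NetLogo). The synchronous update and the tie-breaking details are assumptions on our part, and the names are illustrative.

```python
import random

def step(G, beliefs, p_th, s, p_echo, rng=random):
    """Apply one period of the switching and echo-chamber rules in place."""
    # Rule 1: switching, evaluated against beliefs at the start of the period.
    snapshot = dict(beliefs)
    for node in G.nodes:
        if snapshot[node] == "A2":
            continue
        nbrs = list(G.neighbors(node))
        if not nbrs:
            continue
        frac_a2 = sum(snapshot[n] == "A2" for n in nbrs) / len(nbrs)
        if frac_a2 >= p_th and rng.random() < (1 - s):
            beliefs[node] = "A2"

    # Rule 2: echo chambers. With probability p_echo, a random A2 node drops
    # a tie to an A1 neighbor and links to another A2 node instead.
    if rng.random() < p_echo:
        a2_nodes = [n for n in G.nodes if beliefs[n] == "A2"]
        if a2_nodes:
            node = rng.choice(a2_nodes)
            a1_nbrs = [n for n in G.neighbors(node) if beliefs[n] == "A1"]
            targets = [n for n in a2_nodes if n != node and not G.has_edge(node, n)]
            if a1_nbrs and targets:
                G.remove_edge(node, rng.choice(a1_nbrs))
                G.add_edge(node, rng.choice(targets))
    return beliefs
```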

We initialized the models with \(N = 100\) agents and ran 20 iterations of 40 time periods each for an array of combinations of the parameters \(p_{th}\), \(s\), and \(p_{echo}\).
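Putting the pieces together, the experiment loop might look like the sketch below, which reuses the seed_beliefs and step functions from the earlier sketches. The summary statistic (final fraction of agents holding A2) and the parameter grid shown are assumptions for illustration.

```python
import networkx as nx

def run_once(G, p_th, s, p_echo, n_seeds=6, n_periods=40):
    """One simulation run; returns the final fraction of agents holding A2."""
    beliefs = seed_beliefs(G, n_seeds)
    for _ in range(n_periods):
        beliefs = step(G, beliefs, p_th, s, p_echo)
    return sum(b == "A2" for b in beliefs.values()) / G.number_of_nodes()

results = []
for s in (0.5, 0.7, 0.9):                  # skepticism levels (illustrative)
    for p_echo in (0.0, 0.5):              # echo chamber probabilities (illustrative)
        for _ in range(20):                # 20 iterations per combination
            G = nx.watts_strogatz_graph(n=100, k=4, p=0.15)   # fresh network each run
            results.append((s, p_echo, run_once(G, p_th=0.5, s=s, p_echo=p_echo)))
```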

Results.

Fig 2. Distribution of the proportion polarized, stratified by skepticism, after 40 time periods with 6 agents initially affected, 0.1 < \(p_{rw}\) < 0.21, \(p_{th} = 0.5\), and \(p_{echo} = 0\).

Fig 3. Fraction polarized over time with 6 agents initially affected, 0.1 < \(p_{rw}\) < 0.211, \(p_{th} = 0.5\), \(p_{echo} = 0\), and \(s = 0.9\). Heavy lines show the average evolution; lighter lines show individual runs.

Discussion. Neither the scale-free model nor the small-world model achieves complete diffusion (Fig 2). Regardless of the level of skepticism, misinformation in the scale-free model tends to stabilize within a small subset of misinformed agents because the hubs in this model serve as bottlenecks for diffusion; skepticism has a larger effect in the small-world model, likely because agents there more often reach the threshold fraction of misinformed neighbors. However, when we examine diffusion dynamics over time in the echo chamber model, we see that diffusion is lower and stabilizes more quickly when homophily is high than when it is low, with larger differences in outcomes for the scale-free model than for the small-world model (Appendix Figs A1 and A2). Notably, these differences emerge only when skepticism is high, because it takes time for agents to sever prior ties.

This model relied on a number of assumptions, some more plausible than others. While our comparison of dynamics on small-world versus scale-free networks allows us to include key features of modern social structures in our models, these networks are in many ways highly unrealistic. Moreover, our networks were undirected, but a unidirectional or asynchronous model might be more appropriate for cases such as messages from celebrities or political leaders. We assumed that each agent affected each of its neighbors in every time period, and we did not allow for recovery or switching back to prior beliefs.

Nevertheless, our approach may have applications beyond the realm of fake news. Extending the model to include more than two contradictory accounts might allow us to model the selection of political candidates or brand preferences; assigning these different accounts differing levels of believability as a function of the types of agents spreading the accounts could offer insight on problems such as racism or political polarization; and allowing the accounts to blend and evolve over time would allow for applications to cultural or linguistic evolution. Finally, though we assigned each agent in the model the same skepticism, a model with skepticism levels that vary within groups or over time could help us target interventions in education or critical thinking skills.


References.


Appendix

Fig A1. Comparison over time of two small-world networks with \(p_{rw} = 0.11\), \(s = 0.9\), and \(p_{th} = 0.5\), varying the echo chamber probability.

Fig A2. Comparison over time of two scale-free networks with \(s = 0.9\) and \(p_{th} = 0.5\), varying the echo chamber probability.