Joke Propagation

Ethan Fast, Elais Jackson, Abbie Jacobs, Eric Scott


Date: 21 June, 2009


Model

We use an artificial neural network model to simulate joke propagation in a heterogeneous social network. Nodes $n_i$ represent persons. Each node is assigned a "personality" value $\mu_i \sim U(-2,2)$, which serves as the mean of a Gaussian distribution; all nodes share the same variance $\sigma^2$. Jokes are represented by values $J_k \sim U(-2,2)$. The probability that a person $n_i$ likes a joke $J_k$ and tells it to his or her friends depends on where the joke falls in the distribution $N(\mu_i,\sigma^2)$. Specifically, we define $P_{like}(J_k, \mu_i) = G(-\vert J_k-\mu_i\vert)$, where $G(x)=\frac{1}{\sqrt{2\pi\sigma^2}}\int_{-\infty}^{x}e^{-\frac{y^2}{2\sigma^2}}\,dy$ is the cumulative distribution of $N(0,\sigma^2)$, so that jokes falling closest to the mean of the distribution have the highest probability of being retold.
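As a concrete illustration, here is a minimal Python sketch of $P_{like}$. The value of $\sigma$ is an assumption on our part, since the text does not fix $\sigma^2$:

    import math

    SIGMA = 1.0  # assumed value; the text leaves sigma^2 unspecified

    def gaussian_cdf(x, sigma=SIGMA):
        """G(x): cumulative distribution of N(0, sigma^2)."""
        return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

    def p_like(joke, personality, sigma=SIGMA):
        """P_like(J_k, mu_i) = G(-|J_k - mu_i|): jokes nearest a node's
        personality mean are the most likely to be retold."""
        return gaussian_cdf(-abs(joke - personality), sigma)

Note that $P_{like}$ peaks at $G(0)=\frac{1}{2}$ when a joke matches a personality exactly, so even a perfectly matched joke is retold at most half the time.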

The network is initialized to be fully connected with random weights.[1] Each node has an edge directed to itself, simulating memory. The random weights are initially adjusted according to the similarity between the two nodes' personalities, i.e.

$\displaystyle w_{ij} = w_{0ij}\left(1-\frac{1}{4}\vert\mu_i - \mu_j\vert\right),$ (1)

where $w_{0ij}$ is drawn from $U(0,1)$, and the division by four prevents negative values (since $\vert\mu_i - \mu_j\vert \le 4$). This adjustment represents a decreased probability of a node telling a joke to other nodes with very different senses of humor.
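A sketch of this initialization, with NumPy and an arbitrary network size (both our choices):

    import numpy as np

    def init_weights(personalities, rng):
        """Fully connected weight matrix, self-edges included, per Eq. (1)."""
        n = len(personalities)
        w0 = rng.uniform(0.0, 1.0, size=(n, n))        # w_0ij ~ U(0,1)
        diff = np.abs(personalities[:, None] - personalities[None, :])
        return w0 * (1.0 - diff / 4.0)                 # |mu_i - mu_j| <= 4

    rng = np.random.default_rng(0)
    personalities = rng.uniform(-2.0, 2.0, size=50)    # mu_i ~ U(-2,2)
    weights = init_weights(personalities, rng)

Since $\vert\mu_i - \mu_i\vert = 0$ on the diagonal, the self-edges that simulate memory keep their full random weight.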

Instances of the jokes are randomly assigned as inputs to nodes ("told" to them) at time $t_0$, and the propagation loop begins. For each joke $J_k$, every node has a probability $P_{activation}(J_k, n_i)$ of sending an activation signal for $J_k$. This probability depends on the incoming signals $S_k(n_{ji})$ for that joke, weighted by the probability $P_{like}(J_k, \mu_i)$ of liking the joke, and divided by a crowding effect over all jokes: taking inspiration from Malthusian population capacity, we model a limit on the number of jokes a person can remember and tell by dividing the probability by the total input signal of all jokes raised to an exponent $\alpha$. Note that we made a mistake in the equation by putting a normalizing factor in the denominator (since $P_{activation}$ must be less than or equal to one), which actually cancels out the dependence on the incoming signals for $J_k$.

$\displaystyle P_{activation}(J_k, n_i) = \frac{\sum_j S_k(n_{ji})\,P_{like}(J_k,\mu_i)}{\sum_j S_k(n_{ji})}\,\frac{1}{\left(\sum_{j,k} S_k(n_{ji})\right)^\alpha}$ (2)

The signal is binary, and is output to child nodes with strength $S(n_{ij}) = w_{ij}$.
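To make the loop concrete, here is a minimal sketch of one tick. The state layout (a binary matrix recording which nodes told which jokes last tick) and the clipping of probabilities at one are our assumptions:

    import numpy as np

    def propagation_step(active, weights, p_like_matrix, alpha, rng):
        """One tick of the propagation loop.

        active[k, j] = 1 if node j told joke k last tick, so the incoming
        signal to node i is S_k(n_ji) = active[k, j] * w[j, i], and
        p_like_matrix[k, i] = P_like(J_k, mu_i).
        """
        incoming = active @ weights               # incoming[k, i] = sum_j S_k(n_ji)
        crowding = incoming.sum(axis=0) ** alpha  # (sum_{j,k} S_k(n_ji))^alpha
        # Eq. (2): the first fraction reduces to P_like, as noted above, so
        # only nodes with nonzero input for joke k can fire.
        p_act = np.where(incoming > 0.0,
                         p_like_matrix / np.maximum(crowding, 1e-12), 0.0)
        p_act = np.clip(p_act, 0.0, 1.0)          # guard against crowding < 1
        return (rng.random(p_act.shape) < p_act).astype(float)  # binary output

Iterating propagation_step from a random initial assignment of jokes at $t_0$ implements the loop described above.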

Results

We developed two visualizations of the simulation. The first displays a circle for each node; the circle shrinks over time but grows when its node tells a joke. When a node tells a joke, it flashes a color for a moment, with each color corresponding to a unique joke. The movement of the circles is meaningless.

The second visualization is a bar graph of the cumulative number of jokes told:

[Figure: bar graph of the cumulative number of times each joke has been told]
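A sketch of how such a bar graph could be produced with matplotlib; the counts below are placeholders, not our actual results:

    import matplotlib.pyplot as plt
    import numpy as np

    # Placeholder totals; in the simulation these would be accumulated by
    # summing the binary activation matrix over all ticks.
    cumulative = np.array([124, 37, 260, 88])
    labels = ["J%d" % k for k in range(len(cumulative))]

    plt.bar(labels, cumulative)
    plt.xlabel("Joke")
    plt.ylabel("Cumulative times told")
    plt.show()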



Footnotes

[1] Preferably this would be replaced with a small-world/scale-free initialization, to more accurately represent real social networks.
Eric "SigmaX" Scott 2010-06-24