Your Turn

by

Scott Christley and Yasmina Khoury



The Problem


Consider the following situation:


A group of, say, ten students each need to give a five-minute talk, one after the other.  The order of presentations is not specified.  What happens?


Approach


The approach we decided to take was to model just the decision process, leaving open the question of how the presentations are actually performed and the timing issues involved.  We considered the more complicated task of modeling the process of deciding how to decide; i.e., a discussion among the agents about a protocol for determining the presentation order.  In the end, we decided to develop two (canonical) models: one with an external entity driving the decision, and another in which the agents communicate amongst themselves.


Model 1 (Matching Algorithm)

Algorithm Analysis

Example of unfairness


Given 5 agents with the following preferences:

    A1: 2,3,4,5,1

    A2: 2,4,5,1,3

    A3: 1,2,3,4,5

    A4: 5,4,3,2,1

    A5: 5,4,2,1,3


Outcome of algorithm for A1 chosen to move first:

    Slot 1: A3

    Slot 2: A1

    Slot 3: A5

    Slot 4: A2

    Slot 5: A4


Fairness metric: 5 points for a first choice, decreasing by one point for each lower choice.


Score: 20 (A1: 5, A2: 4, A3: 5, A4: 5, A5: 1)


This outcome is not globally optimal, because the following assignment produces a score of 23 (A1: 4, A2: 5, A3: 5, A4: 5, A5: 4):

    Slot 1: A3

    Slot 2: A2

    Slot 3: A1

    Slot 4: A5

    Slot 5: A4


The code for this model is available here.
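Below is a minimal sketch of our reading of the matching algorithm, inferred from the example above (an assumption on our part, not the original code): agents move in a fixed order starting with the chosen first mover, and each takes its most preferred slot that is still free.  This reading reproduces the outcome and the score of 20.

# Preference lists from the example above (slots are 1-indexed).
PREFS = {
    "A1": [2, 3, 4, 5, 1],
    "A2": [2, 4, 5, 1, 3],
    "A3": [1, 2, 3, 4, 5],
    "A4": [5, 4, 3, 2, 1],
    "A5": [5, 4, 2, 1, 3],
}

def match(move_order):
    """Each agent, in the given order, takes its best remaining slot."""
    taken = {}  # slot -> agent
    for agent in move_order:
        slot = next(s for s in PREFS[agent] if s not in taken)
        taken[slot] = agent
    return taken

def score(assignment):
    """Fairness metric: 5 points for a first choice, one less per rank below."""
    n = len(PREFS)
    return sum(n - PREFS[agent].index(slot)
               for slot, agent in assignment.items())

# A1 moves first, the rest follow in index order, as in the example.
result = match(["A1", "A2", "A3", "A4", "A5"])
for slot in sorted(result):
    print(f"Slot {slot}: {result[slot]}")
print("Score:", score(result))  # prints 20, matching the text

Picking a different first mover (or move order) changes the outcome, which is the source of the unfairness discussed above.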


Extensions to Model 1


‘Arranged marriage’ Algorithm


This extension looks for a fairer match, reducing the probability that agents end up with their least preferred slots.


Mechanism


Algorithm Analysis


Example:


Round 1: slot 1: A3; slot 2: A1, A2; slot 5: A4, A5.

    Outcome: slot 1: A3.

Round 2: slot 2: A1, A2; slot 3: A1; slot 4: A2, A4, A5; slot 5: A4, A5.

    Outcome: slot 3: A1; slot 2: A2.

Round 3: slot 4: A4, A5; slot 5: A4, A5.

    Outcome: random.

Score = 23.
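The mechanism itself is not spelled out above, so the sketch below is our reconstruction from the three rounds shown, and its rules are assumptions: in round r, each unassigned agent proposes to its r most preferred still-open slots; any slot with exactly one proposer is assigned, which withdraws that agent's other proposals and can make further slots unique; and a round that assigns nothing is settled at random.  With the preference lists above, it reproduces Rounds 1-3 and a score of 23.

import random

PREFS = {
    "A1": [2, 3, 4, 5, 1],
    "A2": [2, 4, 5, 1, 3],
    "A3": [1, 2, 3, 4, 5],
    "A4": [5, 4, 3, 2, 1],
    "A5": [5, 4, 2, 1, 3],
}

def arranged_marriage(prefs):
    n = len(prefs)
    assigned = {}         # slot -> agent
    free = set(prefs)     # agents without a slot
    r = 0
    while free:
        r += 1
        open_slots = {s for s in range(1, n + 1) if s not in assigned}
        # Round r: each free agent proposes to its r most preferred open slots.
        proposals = {a: set([s for s in prefs[a] if s in open_slots][:r])
                     for a in free}
        # Assign every slot with a unique proposer; each assignment withdraws
        # that agent's other proposals, which can make more slots unique.
        made_assignment, progress = False, True
        while progress:
            progress = False
            for slot in sorted(open_slots):
                proposers = [a for a in free if slot in proposals[a]]
                if len(proposers) == 1:
                    assigned[slot] = proposers[0]
                    free.remove(proposers[0])
                    open_slots.remove(slot)
                    made_assignment = progress = True
        # A round that resolves nothing is settled at random (Round 3 above).
        if free and not made_assignment:
            agents = list(free)
            random.shuffle(agents)
            for slot, agent in zip(sorted(open_slots), agents):
                assigned[slot] = agent
            free.clear()
    return assigned

result = arranged_marriage(PREFS)
for slot in sorted(result):
    print(f"Slot {slot}: {result[slot]}")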


Comparing the simultaneous mechanism to the sequential mechanism:

Model 2 (Agent-based Consensus)


With the matching algorithm, there is an implicit notion of an external entity that is driving the algorithm by randomly picking agents, maintaining the presentation order, etc.  We wanted to experiment with a completely decentralized model whereby agents communicate one-on-one and attempt to reach a consensus about the presentation order.


As a starting point, we took Axelrod's Cultural Dissemination model (Axelrod) as a loose basis for how agents' preferences would be disseminated through the population.  In Axelrod's model, each agent maintains a feature vector representing its culture, and culture spreads through agent interaction.  In our model, each agent maintains its own version of the presentation order, which roughly corresponds to the cultural feature vector.  However, our model differs significantly in both the spatial aspect and the interaction rule.  In our model, space is a "soup," meaning that any agent can interact with any other agent.  And in our interaction, one agent must convince the other of its desire for its preferred slot, whereas Axelrod's agents interact with a probability based on their similarity.


Each agent picks one slot as its preferred slot.

Each agent has a probability (weight) for how strongly it "desires" that slot.

Each agent maintains an internal version of the presentation order.

Each agent records its preferred slot in its presentation order.

Loop:

    Randomly pick two agents to interact.

    Agent1 asks Agent2 for its preferred slot.

    If that slot is empty in Agent1's presentation order then

        // no conflict, so Agent2 gets the slot

        Agent1 records Agent2's slot in its presentation order.

    Else // Agent1 already has some agent in that slot

        // Agent2 tries to convince Agent1

        Compare a random number to Agent2's "desire".

        If Agent1 is convinced then

            Agent1 records Agent2's slot in its presentation order.

            If Agent2 displaced Agent1 from its own slot then

                // Agent2 took over our slot!

                Agent1 picks a new random slot and a new random desire.

Repeat until all agents converge to the same presentation order.


The code for this model is available here.
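Here too is a minimal, runnable sketch of the loop above, as our reconstruction rather than the original code.  Two details the pseudocode leaves open are flagged in comments: "convinced" is read as a uniform random draw falling below Agent2's desire weight, and an agent clears a stale entry when it learns another agent's new slot.

import random

N = 10  # number of agents (and slots)

class Agent:
    def __init__(self, aid):
        self.aid = aid
        self.slot = random.randrange(N)   # preferred slot
        self.desire = random.random()     # how strongly it wants that slot
        self.order = {self.slot: aid}     # internal view: slot -> agent id

    def record(self, who, slot):
        # Housekeeping (an assumption, not spelled out in the pseudocode):
        # drop any stale entry for `who` before recording its new slot.
        for s in [s for s, a in self.order.items() if a == who]:
            del self.order[s]
        self.order[slot] = who

    def repick(self):
        # Displaced: pick a new random slot and a new random desire.
        self.slot = random.randrange(N)
        self.desire = random.random()
        self.record(self.aid, self.slot)

def interact(agent1, agent2):
    slot = agent2.slot
    occupant = agent1.order.get(slot)
    if occupant is None or occupant == agent2.aid:
        agent1.record(agent2.aid, slot)        # no conflict
    elif random.random() < agent2.desire:      # Agent2 convinces Agent1
        agent1.record(agent2.aid, slot)
        if occupant == agent1.aid:             # Agent2 took over our slot!
            agent1.repick()

def converged(agents):
    view = agents[0].order
    return len(view) == N and all(a.order == view for a in agents)

def run(max_steps=500_000):
    # The cap is a safety net for the sketch; a run normally converges sooner.
    agents = [Agent(i) for i in range(N)]
    for t in range(max_steps):
        a1, a2 = random.sample(agents, 2)
        interact(a1, a2)
        if converged(agents):
            return agents[0].order, t + 1
    return None, max_steps

order, steps = run()
print(f"after {steps} interactions: {order}")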


Extensions to Model 2


Results


We ran 100 simulations of each model, calculated a metric value for the final presentation order, then averaged the metric to compare the two models.


The metric is a simple scoring mechanism that gives an agent 10 points if it gets its most preferred slot, decreasing by one point for each successively lower preference; the metric value can reach zero but cannot go negative.  With 10 agents, the maximum score is 100, meaning every agent gets its first preference, but the achievable maximum can be lower for a given random set of preference lists, because a conflict (two agents preferring the same slot) forces at least one agent to a lower score.
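A sketch of this metric, together with the 100-run averaging protocol, is below; run_model here is a stand-in that just returns a random order, not either model's actual code.

import random
from statistics import mean, stdev

N = 10

def metric(prefs, assignment):
    """10 points if an agent gets its first preference, one point less per
    rank below that, floored at zero (the metric never goes negative)."""
    return sum(max(0, 10 - prefs[agent].index(slot))
               for slot, agent in assignment.items())

def run_model(prefs):
    # Stand-in for a real model run: a uniformly random presentation order.
    agents = list(prefs)
    random.shuffle(agents)
    return {slot: agent for slot, agent in enumerate(agents)}

# Random preference lists for the demo (slots 0..N-1).
prefs = {a: random.sample(range(N), N) for a in range(N)}
scores = [metric(prefs, run_model(prefs)) for _ in range(100)]
print(f"mean = {mean(scores):.2f}, standard deviation = {stdev(scores):.2f}")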


For Model 1, we got mean = 87.71 and standard deviation = 3.44.


For Model 2, we got mean = 81.23 and standard deviation = 10.03.


Discussion


Why is Model 2's performance worse than Model 1's?


One behavior that may be driving Model 2's metric score down occurs when an agent is forced to pick a new slot because another agent has convinced it to give up its current one: the displaced agent picks a slot at random, and that slot is likely to conflict with yet another agent.  The agent may therefore have to keep picking random slots until it finds one with no conflict, which drives down its score.  A possible change is to have the agent pick only from open slots, reducing the chance of conflict, as sketched below.
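As a sketch of this change, reusing the Agent class and N from the Model 2 sketch above (our names, not the original code's), the repick step could draw only from slots that are still empty in the agent's own view:

import random

def repick_open_only(self):
    # Proposed fix: pick a new slot only among slots still empty in this
    # agent's own presentation order; fall back to any slot if none are open.
    open_slots = [s for s in range(N) if s not in self.order]
    self.slot = random.choice(open_slots) if open_slots else random.randrange(N)
    self.desire = random.random()
    self.record(self.aid, self.slot)

Agent.repick = repick_open_only  # patch the earlier sketch's Agent class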


The difference between the approaches of the two models highlights the contrast between a central-authority model and a full consensus model.  While our results suggest that having a central authority simply pick agents to fill slots provides the better total outcome, it would be interesting to explore whether there are scenarios where the consensus model does better, and under what conditions.