Kristen Hassmiller, Public Health, Michigan (khassmil@umich.edu)
Rodolfo Sousa, Policy, Manchester Metropolitan U. (rodolfo@cfpm.org)
Consider the following situation: a group of, say, 20 bike riders are competing in a long road race. In any pack of riders, the leader provides a big benefit to the other riders in the pack, and also receives a slight advantage over riding alone. What happens?
1. Model, using whatever techniques you wish, the above scenario.
2. Explicitly state your model and key assumptions.
3. Summarize key results.
4. Suggest some potentially interesting future directions and questions for the model.
5. Suggest some standard social science scenarios that could be usefully modelled using such a process.
Each biker competing in a single-line road race follows a simple set of heuristics (tortoise, hare, free-rider). The model was designed to explore the impact of biker strategy and ability on individual and group (the formation of packs) outcomes. The program was coded in NetLogo and may be run and explored with different parametrisations below.
Population – set the sliders below the SETUP button to change the number of bikers assuming each strategy.
Draft effect – the energy credits (equivalent to potential distance travelled) gained by followers (riders immediately behind another agent) and leaders (riders at the front of a pack) are set on the sliders to the right.
Heterogeneity in ability – the incremental energy available to each rider at each time step is a uniformly distributed integer with a lower bound of one and an upper bound set through the "Energy_upper_bound" slider.
Forward vision – each agent's forward vision (vf) is a linear function of its incremental energy (eave), with a multiplier set by the "Forward_Visibility_Multiplier" slider (such that vf = Forward_Visibility_Multiplier * eave).
Tortoise vision – because the tortoise strategy does not encode incentives for the use of accumulated energy stock, we assumed that tortoises switch to hare behaviour upon sight of the finish line. Tortoises may have enhanced vision for this purpose (while still using standard vision vf before "end game" mode), set via the "Multiplier_Tort_Vf_atEndGame" slider, such that end-game vision = Multiplier_Tort_Vf_atEndGame * vf.
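A minimal sketch of these initialization rules in Python (the slider names match the interface described above; the Rider class itself and the default slider values are our own illustration, not part of the NetLogo code):

```python
import random
from dataclasses import dataclass

# Illustrative slider defaults; in the model these are set via the interface
ENERGY_UPPER_BOUND = 5
FORWARD_VISIBILITY_MULTIPLIER = 2
MULTIPLIER_TORT_VF_AT_ENDGAME = 3

@dataclass
class Rider:
    strategy: str       # "tortoise", "hare", or "free-rider"
    eave: int = 0       # incremental energy per time step (ability)
    est: int = 0        # accumulated energy stock, initialized to zero
    vf: int = 0         # forward visibility

    def __post_init__(self):
        # Heterogeneity in ability: uniform integer on [1, Energy_upper_bound]
        self.eave = random.randint(1, ENERGY_UPPER_BOUND)
        # Forward vision is a linear function of incremental energy
        self.vf = FORWARD_VISIBILITY_MULTIPLIER * self.eave

    def end_game_vision(self) -> int:
        # Only tortoises get enhanced vision for spotting the finish line
        if self.strategy == "tortoise":
            return MULTIPLIER_TORT_VF_AT_ENDGAME * self.vf
        return self.vf
```

Under these illustrative values, a tortoise that draws eave = 3 sees vf = 6 spaces normally and 18 spaces in end-game mode.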
Click on the SETUP button to set up the bikers.
Click on GO to start the bike riders moving across the race space towards the red finish line. Note that you may need to scroll to the right to follow the riders as they progress beyond the edge of your screen (i.e., the race course display may be longer than your screen). You may also calibrate the speed at which the model runs using the blue slider on the top left of the screen.
Created with NetLogo.
View/download model file: GoLancefinal9.nlogo
This page was automatically generated by NetLogo 2.0.1. Questions, problems? Contact feedback@ccl.northwestern.edu.
T stands for tortoise – green
H stands for hare – grey
FR stands for free-rider – red
Final results:
1. The strategy of the first agent to cross the finish line (marked in red on the race space) is displayed on the "Winner's Strategy" monitor.
2. To the right of that you will find monitors for the average final rank of each strategy – note that these computations only become complete (and meaningful) once all agents have crossed the finish line.
Observing the race:
1. "Breed Distribution" plots a time series of the number of agents per strategy still running, such that decreases represent bikers crossing the finish line.
2. "Average Rank per Breed" is also a time-series plot, tracking the relative average performance per strategy of the agents still on the racecourse (i.e., as agents start to finish, the plot refers only to the remaining racers).
1. The model is conceptualised such that riding in a pack confers some benefit (accumulation of energy in a stock, est). However, because we assumed that agents cannot share the same spot, this benefit comes at some cost – breaking out of a pack requires enough energy (stored or available through ability) to overtake all adjacent riders. This assumption attempts to capture bikers closing in on competitors that attempt passing at low speeds.
2. As noted, bikers' forward visibility (vf) varies and is fully correlated with their ability (incremental energy). This assumption is justified by the facts that: a) the faster a biker is riding (controlled by energy), the further ahead they should be able to see; and b) riders with more ability are often better riders with higher awareness of their surroundings.
3. Schedule – agent creation and move order follows the sequence T, FR, H, ..., T, FR, H, ... until the whole population goes through the loop (if the number of agents with each strategy differs, the last loops are left to the largest populations).
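This interleaved creation order can be sketched as a round-robin over the three population counters (the helper function is our own; nt, nf, and nh refer to the global counters listed later in this document):

```python
def creation_order(nt: int, nf: int, nh: int) -> list:
    """Interleave agent creation as T, FR, H, T, FR, H, ...
    Once a smaller population runs out, the remaining loop
    iterations are left to the larger populations."""
    order = []
    for i in range(max(nt, nf, nh)):
        if i < nt:
            order.append("T")
        if i < nf:
            order.append("FR")
        if i < nh:
            order.append("H")
    return order
```

For example, with 2 tortoises, 1 free-rider, and 3 hares the order is T, FR, H, T, H, H: the final rounds contain only the larger breeds.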
NOTE: observations are based only on visual inspection of runs with different parameterizations (no rigorous experimentation or sensitivity analysis was conducted).
The most general observation is that there is a lot of variability in outcome by strategy across the tested parameter space. Overall, the strategies seemed to rank in the order: hares, free-riders, tortoises. Hares win often, free-riders sometimes win, and tortoises very rarely do. All strategies tend to ride in packs.
Given their full-exhaustion strategy (and the free-riders' strategy of following), hares often lead packs. Thus, when the leader energy benefit increases (while the follower benefit is fixed), hare performance improves.
To be successful, free-riders need to ride in mixed-strategy packs long enough to accumulate sufficient wind-draft follower energy credits to win. Naturally, free-rider performance varies positively with the ratio of follower to leader energy credit points. Furthermore, we often observe single-strategy free-rider packs stuck in last position, leading to a significant decrease in the overall average ranking for the strategy.
Tortoises seem to do best in intermediate situations where neither free-riders nor hares have a clear advantage.
Increased heterogeneity in ability (a higher upper bound for the incremental energy distribution) seems to result in better free-rider performance. Perhaps increased heterogeneity facilitates the emergence of more mixed-strategy packs, allowing free-riders to cash in on their strategy. Furthermore, because average pack size is smaller, bikers are better able to use their stored energy credits.
Strategies: add more or different strategies, including strategies based on perceptions of others (characteristics and/or strategy) and on cooperation. Consider implementing an El Farol type of approach, i.e., endogenous strategy choice.
Sequentiality: the order of agent moves could be random, ranked by position, or ranked by average or available energy.
Topologies: alternative topologies such as CA, continuous Cartesian space, or realistic space (terrain type, etc.).
Physics: realistic motion, overtaking, etc.
1. Does the outcome vary depending on the number and heterogeneity in ability of bikers, the composition of biker strategies, the length of the race, and the credit given to followers and leaders?
2. What happens if ability is correlated with the strategy chosen?
3. Can strategy choice overcome a deficiency in fitness? Is there an optimal choice for a biker with a specified ability (and does the optimal choice vary by ability level)?
The key questions that can be investigated with this model are:
1. In a game with individual prizes but strong interaction effects, how does an individual find an intertemporal balance of cooperation and competition?
2. How can agents cooperate to exclude competitor group(s) from having a chance to win?
3. When should an agent defect from the cooperative group and compete for individual victory?
Applications:
1. General group formation / cooperation mechanisms (e.g., tags).
2. Perception mechanisms – how to evaluate the perceived threat level of an opponent and select a cooperation group?
3. Applied example – workplace teams with hierarchical remuneration systems.
-----------------------
In this model, the road is broken into unit spaces. Except for the starting space, two bikers can never occupy the same space. The road is one space wide, so bikers can only pass if they have enough ability to overtake all bikers in front of them.
Note on space/coordinates: the race is run on the integer space {-screen-edge-x, ..., 0} from left to right, with 0 as the finish line.
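Because the road is one space wide, passing means clearing every contiguous occupied space directly ahead. A minimal sketch of that check, using the negative-integer coordinates above (the function and argument names are ours, and the rule is simplified from the model's movement code):

```python
def can_pass(position: int, reach: int, occupied: set) -> bool:
    """A rider at `position` (finish line at 0) who can travel up to
    `reach` spaces can pass only if the first free space beyond the
    riders immediately ahead is still within reach."""
    spot = position + 1
    while spot in occupied:       # walk past the contiguous block ahead
        spot += 1
    return spot <= position + reach
```

For example, a rider at -10 facing three riders at -9, -8, and -7 needs a reach of at least 4 to land on the free space at -6.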
------------------------------
color – color based on strategy (or breed)
eave – energy on average (reflects ability): a uniform random integer on [1, Energy_upper_bound]
est – energy stock at time t, initialized to zero
vf – forward visibility: vf = Forward_Visibility_Multiplier * eave
move – where the biker moves in a time step
d_move – where the biker desires to move
prevPt – where the biker was before the step
toFollow – is there a pack to join? (yes = 1)
rank – position at the end of the race
-------------------------------
ticks – time-step counter, initialized to 0
credFollow – energy credit for following (added to est)
lead – energy credit for leading (added to est)
nt – number of bikers with the tortoise strategy
nh – number of bikers with the hare strategy
nf – number of bikers with the free-rider strategy
numFinish – counter for finished bikers; the program stops when numFinish = nt + nh + nf
-------------------------------
Tortoise: maintain average speed – don't push anything! Travel as close to eave as possible, moving back if spaces are full, with no preference for joining packs or riding alone. If the finish line is in sight, use all of est to bolt ahead in the endgame. NOTE: as a "reward" for riding slow and steady throughout the race, the tortoise may be able to see the finish line beyond its standard visibility, allowing it to bolt once the finish line is within end-game vision = Multiplier_Tort_Vf_atEndGame * vf.
Hare: give it all you have got – full exhaustion! Travel as close to eave + est as possible, moving back if spaces are full, with no preference for joining packs or riding alone.
Free-rider: take the easy way (ride at the back of packs and bolt ahead when possible). Join the pack that is as far ahead as possible, but no further than min(biker's visibility, eave + est). If no pack is available, move eave and save stored energy.
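The three heuristics reduce to different desired travel distances at each step. A simplified sketch, under the definitions above (the function is ours; blocking, pack detection, and moving back when spaces are full are abstracted away):

```python
def desired_move(strategy, eave, est, vf, dist_to_finish, pack_dist=None,
                 tort_endgame_vision=None):
    """Desired travel distance for one time step.

    dist_to_finish      - spaces remaining to the finish line
    pack_dist           - distance to the nearest pack ahead (None if none)
    tort_endgame_vision - tortoise end-game vision (defaults to vf)
    """
    if strategy == "hare":
        # Full exhaustion: spend ability plus the entire stock
        return eave + est
    if strategy == "tortoise":
        # Slow and steady, until the finish line enters end-game vision
        if dist_to_finish <= (tort_endgame_vision or vf):
            return eave + est    # switch to hare behaviour and bolt
        return eave
    if strategy == "free-rider":
        # Join the furthest pack reachable within min(vf, eave + est);
        # otherwise cruise at eave and save the stored energy
        if pack_dist is not None and pack_dist <= min(vf, eave + est):
            return pack_dist
        return eave
    raise ValueError(f"unknown strategy: {strategy}")
```

For instance, a rider with eave = 3 and est = 5 desires to move 8 as a hare, 3 as a tortoise in mid-race, and 8 as a tortoise once the finish line is within end-game vision.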
----------------------
Agent creation: agents are created one at a time in a loop by strategy type: tortoise, free-rider, hare. It is possible to change the code to modify the order of breed creation to investigate any bias created.
Schedule:
1. Move with asynchronous updating; move sequence = creation rank.
2. Update energy stock, est.
3. Verify whether the race is completed (if so, assign rank).
4. Calculate the rank of agents on the race course.
5. Stopping criteria: when a biker crosses the finish line, it dies and numFinish is incremented by one. The model stops running when all agents have crossed the finish line.
6. Visualisations.
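Structurally, the schedule amounts to a loop of roughly this shape (a sketch only: movement and draft energy credits are stubbed out, and the dict-based rider representation is our own, not the NetLogo code):

```python
def run_race(riders, finish_line=0):
    """Sketch of the GO loop: asynchronous updating in creation order,
    then finish checks and ranking.  Riders are dicts with keys
    "pos", "eave", and "finished"."""
    num_finish = 0
    ticks = 0
    while num_finish < len(riders):
        for rider in riders:                  # move sequence = creation rank
            if rider["finished"]:
                continue
            rider["pos"] += rider["eave"]     # 1. move (stub: ability only)
            # 2. the energy-stock update would add credFollow / lead here
            if rider["pos"] >= finish_line:   # 3. finish check
                rider["finished"] = True
                num_finish += 1               # 5. stopping-criterion counter
                rider["rank"] = num_finish    # 4. rank = finishing order
        ticks += 1
    return ticks
```

With two riders starting 4 spaces from the line at speeds 2 and 4, the faster rider finishes on the first tick (rank 1) and the slower on the second (rank 2).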
NOTE: creation order equals move sequence. In NetLogo, if you "ask turtles", or "ask" a whole breed, the turtles are scheduled for execution in ascending order by ID number. If you "ask patches", the patches are scheduled for execution by row: left to right within each row, starting with the top row. Once scheduled, an agent's "turn" ends only once it performs an action that affects the state of the world, such as moving, creating a turtle, or changing the value of a global, turtle, or patch variable (setting a local variable doesn't count). The documentation notes that an option for randomized scheduling is planned for future versions of NetLogo.
---------------------
This was our very first time using NetLogo. Although things seem to be working as they should, it is very possible that there are bugs in this program.