Royal Society Summer Science Exhibition II

Anyone who ventures a projection, or imagines how a social dynamic—an epidemic, war, or migration—would unfold, is running some model. The choice, then, is not whether to build models; it is whether to build explicit ones. In explicit models, assumptions are laid out in detail, so we can study exactly what they entail: given these assumptions, this sort of thing happens; alter the assumptions, and that is what happens. By writing explicit models, you let others replicate your results [1].

16 reasons to make computer models:

Many people think of computer models as a kind of calculator: put numbers in and an answer comes out. They also often assume that models exist only to predict the future. Epstein is keen to dispel such over-simplifications by listing a wide range of other reasons why we might create computer models:

  1. Explain (very distinct from predict)
  2. Guide data collection
  3. Illuminate core dynamics
  4. Suggest dynamical analogies
  5. Discover new questions
  6. Promote a scientific habit of mind
  7. Bound (bracket) outcomes to plausible ranges
  8. Illuminate core uncertainties
  9. Offer crisis options in near-real time
  10. Demonstrate tradeoffs / suggest efficiencies
  11. Challenge the robustness of prevailing theory through perturbations
  12. Expose prevailing wisdom as incompatible with available data
  13. Train practitioners
  14. Discipline the policy dialogue
  15. Educate the general public
  16. Reveal the apparently simple (complex) to be complex (simple)

Some people are skeptical about computer modelling, perhaps due to the poor use of models in the banking sector. The Modelling4All software has been designed to help people better understand the process of modelling, so that more people can critically examine models. Just as we learn to read and write to better appreciate the value of books, so we need to learn to experiment with and construct models to see that they cannot be treated as oracles of truth.

What is an ABM?

An agent-based model (ABM) is a class of computational models for simulating autonomous units in a system, e.g. ants, chemicals, humans, flocks of birds, or cities. ABMs are created to study how systems can evolve as a consequence of the actions of a population of agents. Agents may be programmed to learn, adapt and reproduce. It may be appropriate to describe an ABM in both top-down and bottom-up language, e.g. genes code for proteins, which may cause the organism to affect its environment. The environment may then affect the organism by altering patterns of gene expression. Think The Selfish Gene by Richard Dawkins versus The Music of Life by Denis Noble.
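The core idea can be sketched in a few lines of Python. (The Modelling4All guides use NetLogo; this standalone sketch, with made-up energy and reproduction parameters, merely illustrates autonomous agents acting, reproducing, and dying, with population-level behaviour emerging from individual rules.)

```python
import random

class Agent:
    """A minimal autonomous unit: it feeds, metabolises, and reproduces."""
    def __init__(self, energy=10):
        self.energy = energy

    def step(self, food_per_step=3, metabolism=2, birth_energy=20):
        # Gather a random amount of food, pay a fixed metabolic cost.
        self.energy += random.randint(0, food_per_step) - metabolism
        if self.energy >= birth_energy:      # reproduce by splitting energy
            self.energy //= 2
            return Agent(self.energy)
        return None

def run(n_agents=50, steps=100, seed=1):
    """Run the population and return how many agents survive."""
    random.seed(seed)
    agents = [Agent() for _ in range(n_agents)]
    for _ in range(steps):
        offspring = [a.step() for a in agents]
        agents = [a for a in agents if a.energy > 0]        # deaths
        agents += [c for c in offspring if c is not None]   # births
    return len(agents)
```

Even in this toy model, the population trajectory is not programmed in anywhere; it emerges from the interplay of feeding, metabolism, and reproduction thresholds.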

Examples of ABM

ABM is used in research in practically every discipline but particularly in the natural and social sciences. At Oxford the CABDyN (Complex Agent-Based Dynamic Networks) group brings together a wide range of disciplines broadly looking at complex systems and of late, research into networks. Ken Kahn has worked with academics at Oxford to produce guides and libraries of behaviours (NetLogo code fragments) to help students build the following models:

The Sugarscape

The Sugarscape is an application of the ideas from complexity theory and Artificial Life to the Social Sciences. The goal is to see if the kind of behaviours seen in real societies can be modelled using just simple local rules, as suggested by these new disciplines. Unlike classic economic and social science theory, there is no need to assume that agents have perfect knowledge, that the population is homogeneous, or that an equilibrium has been reached. With a 'bottom up' model, the agent population can be highly heterogeneous, and highly dynamic.

Sugarscape is an incredibly simple model of agents living, interacting with their environment by searching for and eating sugar and causing pollution, interacting with each other by sex, trade, war, and disease, reproducing, and dying. Despite the simplicity of the model, the resulting behaviour can be surprising, marvelously complex, and even 'realistic'. As much behaviour as possible is allowed to 'emerge', rather than be programmed in. Crucially, there is no static externally-imposed fitness function: fitness is simply the measure of how well the agents survive in a varying environment.
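A stripped-down version of the Sugarscape movement and harvest rules might look like the following Python sketch. (Agents here see only adjacent cells, and the grid size, metabolism, and growback parameters are illustrative, not those of Epstein and Axtell's model.)

```python
def step(sugar, agents, growback=1, capacity=4):
    """One tick of a stripped-down Sugarscape: move, harvest, metabolise."""
    n = len(sugar)
    taken = {(a["x"], a["y"]) for a in agents}
    for a in agents:
        x, y = a["x"], a["y"]
        # Candidate cells: stay put, or any unoccupied von Neumann neighbour
        # (the grid wraps around, torus-style).
        moves = [(x, y)] + [((x + dx) % n, (y + dy) % n)
                            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                            if ((x + dx) % n, (y + dy) % n) not in taken]
        # Movement rule: go to the visible cell with the most sugar.
        best = max(moves, key=lambda c: sugar[c[0]][c[1]])
        taken.discard((x, y))
        taken.add(best)
        a["x"], a["y"] = best
        # Harvest the cell's sugar and pay the metabolic cost.
        a["wealth"] += sugar[best[0]][best[1]] - a["metabolism"]
        sugar[best[0]][best[1]] = 0
    for row in range(n):                   # sugar grows back each tick
        for col in range(n):
            sugar[row][col] = min(sugar[row][col] + growback, capacity)
    return [a for a in agents if a["wealth"] > 0]   # starvation removes agents
```

Note that nothing in the code describes population-level outcomes such as skewed wealth distributions or migration waves; in the full model, those emerge from repeated application of these local rules.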

The work was published as a book titled Growing Artificial Societies: Social Science from the Bottom Up in 1996.

Opinion dynamics

Abstract from How can extremism prevail? A study based on the relative agreement interaction model, JASSS 2002:

We model opinion dynamics in populations of agents with continuous opinion and uncertainty. The opinions and uncertainties are modified by random pair interactions. We propose a new model of interactions, called relative agreement model, which is a variant of the previously discussed bounded confidence. In this model, uncertainty as well as opinion can be modified by interactions. We introduce extremist agents by attributing a much lower uncertainty (and thus higher persuasion) to a small proportion of agents at the extremes of the opinion distribution. We study the evolution of the opinion distribution submitted to the relative agreement model. Depending upon the choice of parameters, the extremists can have a very local influence or attract the whole population. We propose a qualitative analysis of the convergence process based on a local field notion. The genericity of the observed results is tested on several variants of the bounded confidence model.
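The pairwise update at the heart of the relative agreement model can be sketched as follows. This is a Python rendering of the rule as described in the abstract: agent j influences agent i only when the overlap of their opinion segments exceeds j's uncertainty, so a low-uncertainty extremist pulls others without being pulled back. The parameter mu is the usual convergence-rate constant; in the full model both members of a random pair are updated.

```python
def relative_agreement(xi, ui, xj, uj, mu=0.5):
    """Influence of agent j on agent i under relative agreement.

    xi, xj: opinions; ui, uj: uncertainties; mu: convergence rate.
    Returns agent i's updated (opinion, uncertainty).
    """
    # Overlap of the opinion segments [x - u, x + u] of the two agents.
    overlap = min(xi + ui, xj + uj) - max(xi - ui, xj - uj)
    if overlap <= uj:
        # j only influences i when the overlap exceeds j's uncertainty.
        return xi, ui
    ra = overlap / uj - 1          # relative agreement weight
    return (xi + mu * ra * (xj - xi),
            ui + mu * ra * (uj - ui))
```

Because a confident extremist (small uj) yields a large weight whenever influence does occur, while its own segment rarely overlaps enough to be influenced in return, the asymmetry that drives the paper's results is already visible in this single function.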

Mathematical models of vaccination

The epidemics modelling guide was designed to explore the SVIR model as presented in "Mathematical models of vaccination", Almut Scherer and Angela McLean, British Medical Bulletin 2002;62:187-199. The model explores the spread of a virus through a population of individuals who are connected according to different network topologies, e.g. where the rate at which individuals meet is constant across the population, or varies according to a normal distribution or a power law. Models of infection that take into account variation in the number of connections individuals have are much more realistic and help us understand a wider range of possible outbreak scenarios.
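Under the simplest of these assumptions, a constant meeting rate (homogeneous mixing), the SVIR compartments can be sketched as a set of difference equations. The parameter names and values below are illustrative, not taken from Scherer and McLean, and the network structure the guide explores is deliberately omitted.

```python
def svir_step(s, v, i, r, beta=0.3, sigma=0.1, gamma=0.1, phi=0.05, dt=1.0):
    """One Euler step of a simple SVIR model with homogeneous mixing.

    s, v, i, r: population fractions (Susceptible, Vaccinated,
    Infected, Recovered); beta: transmission rate; sigma: residual
    susceptibility of vaccinees (0 = perfect vaccine); gamma: recovery
    rate; phi: vaccination rate.  Values are illustrative only.
    """
    new_inf_s = beta * s * i           # infections among the susceptible
    new_inf_v = sigma * beta * v * i   # breakthrough infections
    ds = -new_inf_s - phi * s
    dv = phi * s - new_inf_v
    di = new_inf_s + new_inf_v - gamma * i
    dr = gamma * i
    return (s + dt * ds, v + dt * dv, i + dt * di, r + dt * dr)
```

Every term that leaves one compartment enters another, so the four fractions always sum to one; replacing the uniform `beta * i` contact term with per-individual contact rates is exactly the step the guide's network topologies take.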

Economic Cost and Health Care Workforce Effects of School Closures in the U.S.

School closure is an important component of U.S. pandemic flu mitigation strategy. The benefit is a reduction in epidemic severity through reduction in school-age contacts. However, school closure involves two types of cost. First is the direct economic impact of the worker absenteeism generated by school closures. Second, many of the relevant absentees will be health care workers themselves, which will adversely affect the delivery of vaccine and other emergency services. Neither of these costs has been estimated in detail for the United States. We offer detailed estimates, and improve on the methodologies thus far employed in the non-U.S. literature. We give estimates of both the direct economic and health care impacts for school closure durations of 2, 4, 8, and 12 weeks under a range of assumptions. We find that closing all schools in the U.S. for four weeks could cost between $10 and $47 billion dollars (0.1-0.3% of GDP) and lead to a reduction of 6% to 19% in key health care personnel. These should be considered conservative (i.e., low) economic estimates in that earnings rather than total compensation are used to calculate costs. We also provide per student costs, so regionally heterogeneous policies can be evaluated. These estimates permit the epidemiological benefits of school closure to be compared to the costs at multiple scales and over many durations. [2]


Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License