Learn more

About cookies

What is a "cookie"?

A "cookie" is a piece of information, usually small and identified by a name, which may be sent to your browser by a website you are visiting. Your web browser will store it for a period of time, and send it back to the web server each time you log on again.

Different types of cookies are placed on this site:

  • Cookies strictly necessary for the proper functioning of the site
  • Cookies set by third-party sites to improve the interactivity of the site or to collect statistics

Learn more about cookies and how they work

The different types of cookies used on this site

Cookies strictly necessary for the site to function

These cookies allow the main services of the site to function optimally. You can technically block them using your browser settings, but your experience on the site may be degraded.

You can also object to the use of the audience measurement trackers that are strictly necessary for the operation and day-to-day administration of the website, in the cookie management window accessible via the link located in the footer of the site.

Technical cookies

Name of the cookie | Purpose | Retention period
CAS and PHP session cookies | Login credentials, session security | Session
Tarteaucitron | Saving your cookie consent choices | 12 months

Audience measurement cookies (AT Internet)

Name of the cookie | Purpose | Retention period
atid | Traces the visitor's route through the site in order to produce visit statistics | 13 months
atuserid | Stores the anonymous identifier assigned to the visitor on their first visit to the site | 13 months
atidvisitor | Records the site numbers (a site's unique identifiers) seen by the visitor and stores the visitor's identifiers | 13 months

About the AT Internet audience measurement tool:

AT Internet's audience measurement tool, Analytics, is deployed on this site in order to obtain information on visitors' browsing and to improve the use of the site.

The French data protection authority (CNIL) has granted an exemption for AT Internet's Web Analytics cookie, so this tool does not require the user's consent before analytics cookies are deposited. You can nevertheless refuse the deposit of these cookies via the cookie management panel.

Good to know:

  • The data collected are not cross-checked with other processing operations
  • The deposited cookie is only used to produce anonymous statistics
  • The cookie does not allow the user's navigation on other sites to be tracked.

Third party cookies to improve the interactivity of the site

This site relies on certain services provided by third parties, which make it possible to:

  • offer interactive content;
  • improve usability and facilitate the sharing of content on social networks;
  • view videos and animated presentations directly on our website;
  • protect form entries from robots;
  • monitor the performance of the site.

These third parties will collect and use your browsing data for their own purposes.

How to accept or reject cookies

When you start browsing an eZpublish site, a "cookies" banner appears, allowing you to accept or refuse all the cookies we use. This banner remains displayed until you have made a choice, even if you browse to another page of the site.

You can change your choices at any time by clicking on the "Cookie Management" link.

You can manage these cookies in your browser. Here are the procedures to follow: Firefox; Chrome; Explorer; Safari; Opera

For more information about the cookies we use, you can contact INRAE's Data Protection Officer by email at cil-dpo@inrae.fr or by post at:

INRAE

24, chemin de Borde Rouge - Auzeville - CS 52627, 31326 Castanet-Tolosan cedex, France

Last update: May 2021

Epidemiological modelling and its use to manage COVID-19

Insights into mechanistic models, by the DYNAMO team

Over the next few weeks, we will present some key elements of epidemiological modelling through short educational articles. These articles will help you to better understand and decipher the assumptions underlying the epidemiological models that are currently widely used, and how these assumptions can impact predictions regarding the spread of pathogens, particularly SARS-CoV-2. The objective is to discover the advantages and limitations of mechanistic modelling, an approach that is at the core of the DYNAMO team's work. The example models will be inspired by models used during the crisis, but sometimes simplified to make them accessible.

#9 – The modelling process

In the previous articles, we have shown some of the major stages in the life of a model, based on concrete examples. We have seen the role of assumptions, knowledge and data in model development. Beyond the case of a specific disease, modelling is in fact part of a broader scientific approach to knowledge development, which we will explain in more detail here.

Modelling compared to other methods of knowledge acquisition

To better understand a biological system, three complementary approaches are used: observation, experimentation and modelling.

  • Observation studies the real system and provides knowledge and data in a specific context (a period, an area). However, observation biases exist: screening tests are imperfect, statistically reliable sampling is not always feasible, and the conditions of observation (observer, weather, sample storage, data entry) may be variable. In addition, it is impossible to observe the system exhaustively (some processes are said to be unobservable) or to know the impact of the variations inherent in observing any biological system. Extrapolation to another period or area is therefore complex without large-scale, long-term longitudinal monitoring, which is rare and difficult to maintain due to its cost in time and resources, especially in the current research context.
  • Experimentation provides very precise information on a particular process under controlled conditions. It allows the observation of processes for which no routine collection protocol is available (e.g. monitoring the intra-host immune response) at the cost of simplifying the experimental system (model species, living conditions, etc.). In addition, the amount of data collected is generally low due to the very high costs of these experiments.
  • Modelling studies the system as a whole, in an integrative manner. This approach requires less time and resources, while allowing the comparison of a very wide range of situations, all other things being equal. It is even possible to evaluate measures that do not yet exist, without ethical restrictions since the system is virtual. However, it is a simplified representation of reality, based on hypotheses, and fed with knowledge and data (observational or experimental). This approach must be informed by interactions between disciplines and with the end-users of the model outcomes.

Continuous back-and-forth between model and knowledge/observational/experimental data

The modelling process

Developing a model is essentially an iterative process: it must incorporate new knowledge and new observational data, but also take into account the evolution of the target questions that the model is designed to answer. Classically, there are at least three consecutive phases:

  • Step 1 (blue) - A model cannot answer everything: it is necessary to specify its intended use and check that the modelling approach is relevant to the question asked. As in observational studies, it is necessary to delimit the system to be studied: what are the hypotheses regarding the mechanisms involved? What knowledge (often heterogeneous and diffuse) is available? From this, the conceptual scheme of the model can be built, which lists its state variables, the transitions between states (model structure), the functions and parameters required, but also the underlying assumptions and what the model will be able to predict (see article #1). This collective step brings together all the disciplines required by the issue. For COVID-19, for example, bringing together epidemiologists, virologists, infectious disease specialists and immunologists will identify the key processes in the dynamics of human infection, the available data and knowledge, as well as the main gaps in knowledge. Controlling the spread of a pathogen is a more complex issue than just understanding its transmission, with implications beyond biology, hence involving more disciplines. Achieving this is an important collective step that should not be overlooked!
  • Step 2 (turquoise) - This conceptual model is then implemented, either as a system of equations if it is simple enough (mathematical model), or in computer code (simulation model); a minimal sketch of such an implementation is given after this list. The outputs of the model are evaluated by comparing them with expert opinion, or with observations if data are available. The model can be adjusted at this stage, by estimating the most uncertain parameters (see article #8), or by revising some of its assumptions.
  • Step 3 (orange) - The model is then usable. The sensitivity of the model to variation in its parameters is analyzed to identify the most detrimental gaps in knowledge and the potential levers to control the system. If the predictions of the model are robust to uncertainties and the model is considered relevant (following the above analyses and also discussions with its end-users, often the managers of the modelled system), it is used via numerical experiments to compare scenarios, identify optimal situations, understand the impact of a change in practices, etc. The model is then used in different ways according to its predictive capabilities. The new questions raised by this step, or the production of new knowledge or data, may require the modelling process to be resumed from step 1.
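
To make steps 2 and 3 more concrete, here is a minimal, purely illustrative sketch in Python. It is not one of the models discussed in this series: the compartmental structure (susceptible, exposed, infectious, recovered), the parameter values and the 40% transmission reduction are all arbitrary assumptions chosen only to show the workflow of implementing a conceptual scheme as a system of equations and then running a numerical experiment, all other things being equal.

```python
# Minimal SEIR sketch (illustrative only; all parameter values are arbitrary).
from scipy.integrate import solve_ivp

def seir(t, y, beta, sigma, gamma):
    """Right-hand side of the SEIR system of differential equations."""
    S, E, I, R = y
    N = S + E + I + R
    new_exposed = beta * S * I / N   # transmission
    new_infectious = sigma * E       # end of the latency period
    new_recovered = gamma * I        # recovery
    return [-new_exposed,
            new_exposed - new_infectious,
            new_infectious - new_recovered,
            new_recovered]

def peak_infectious(beta, sigma=1/4, gamma=1/7, N=1_000_000, I0=10, days=365):
    """Step 2: simulate the model; here the output is summarised by its epidemic peak."""
    y0 = [N - I0, 0.0, I0, 0.0]
    sol = solve_ivp(seir, (0, days), y0, args=(beta, sigma, gamma), max_step=1.0)
    return sol.y[2].max()

# Step 3: a numerical experiment comparing two scenarios, all other things
# being equal -- a baseline versus a hypothetical 40% reduction in transmission.
baseline = peak_infectious(beta=0.4)
with_control = peak_infectious(beta=0.4 * 0.6)
print(f"Peak infectious individuals, baseline:          {baseline:,.0f}")
print(f"Peak infectious individuals, -40% transmission: {with_control:,.0f}")
```

Real models involve many more compartments, parameters estimated from data (see article #8) and formal sensitivity analyses, but the workflow is the same: implement the conceptual scheme, evaluate its outputs, then use numerical experiments to compare scenarios.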

Building a model: a world of compromise

Under no circumstances can the design of a model be the result of unilateral choices:

  • The model is co-constructed with biologists and end-users: it is therefore necessary to ensure that the model is transparent, with no hidden assumptions in the code, to facilitate interactions between disciplines, and that it is readable by end-users during the co-construction process.
  • The relevance of the predictions must be ensured, as well as their domain of validity and their robustness with respect to uncertainties in the parameters and structure of the model, and this must be assessed from the perspective of the various disciplines involved.
  • The more complicated the model, the harder it is to determine its parameters... but often, the more realistic it is, and therefore the more readable it is for biologists and end-users. Parsimony is required, while maintaining a good level of flexibility and modularity so that the model does not have to be reprogrammed each time its assumptions are revised. This also makes it easier to update the model when new knowledge is produced. On the other hand, it often decreases the performance of the code, although technical solutions exist today to remedy this.

In article #10, we'll see how to analyze a simulation model.