
About cookies

What is a "cookie"?

A "cookie" is a piece of information, usually small and identified by a name, which may be sent to your browser by a website you are visiting. Your web browser will store it for a period of time, and send it back to the web server each time you log on again.

Different types of cookies are placed on this site:

  • Cookies strictly necessary for the proper functioning of the site
  • Cookies set by third-party sites to improve the site's interactivity or to collect statistics

Learn more about cookies and how they work

The different types of cookies used on this site

Cookies strictly necessary for the site to function

These cookies allow the main services of the site to function optimally. You can technically block them using your browser settings, but your experience on the site may be degraded.

You can also object to the use of the audience measurement trackers strictly necessary for the functioning and day-to-day administration of the website, via the cookie management window accessible through the link in the site footer.

Technical cookies

Name of the cookie | Purpose | Lifetime
CAS and PHP session cookies | Login credentials, session security | Session
Tarteaucitron | Saving your cookie consent choices | 12 months

Audience measurement cookies (AT Internet)

Name of the cookie | Purpose | Lifetime
atid | Traces the visitor's route in order to compile visit statistics. | 13 months
atuserid | Stores the anonymous ID created on the visitor's first visit to the site. | 13 months
atidvisitor | Records the site numbers (a site's unique identifiers) viewed by the visitor and stores the visitor's identifiers. | 13 months

About the AT Internet audience measurement tool:

AT Internet's Analytics audience measurement tool is deployed on this site to collect information on visitors' browsing and to improve the site's usability.

The French data protection authority (CNIL) has granted an exemption for AT Internet's Web Analytics cookie: the tool may therefore set analytics cookies without first collecting the user's consent. You can nevertheless refuse these cookies via the cookie management panel.

Good to know:

  • The data collected are not combined with data from other processing operations
  • The cookie is used only to produce anonymous statistics
  • The cookie does not allow the visitor's browsing on other sites to be tracked

Third party cookies to improve the interactivity of the site

This site relies on certain services provided by third parties, which make it possible to:

  • offer interactive content;
  • improve usability and facilitate the sharing of content on social networks;
  • view videos and animated presentations directly on the site;
  • protect form entries from robots;
  • monitor the performance of the site.

These third parties will collect and use your browsing data for their own purposes.

How to accept or reject cookies

When you start browsing an eZpublish site, a "cookies" banner appears, allowing you to accept or refuse all the cookies we use. This banner remains displayed until you have made a choice, even if you browse to another page of the site.

You can change your choices at any time by clicking on the "Cookie Management" link.

You can manage these cookies in your browser. Here are the procedures to follow: Firefox; Chrome; Explorer; Safari; Opera

For more information about the cookies we use, you can contact INRAE's Data Protection Officer by email at cil-dpo@inrae.fr or by post at:

INRAE

24, chemin de Borde Rouge - Auzeville - CS 52627 - 31326 Castanet-Tolosan cedex - France

Last update: May 2021


Epidemiological modelling and its use to manage COVID-19

Insights into mechanistic models, by the DYNAMO team

Over the next few weeks, we will present some key elements of epidemiological modelling through short educational articles. These articles will help you to better understand and decipher the assumptions underlying the epidemiological models that are currently widely used, and how these assumptions can affect predictions regarding the spread of pathogens, particularly SARS-CoV-2. The objective is to discover the advantages and limitations of mechanistic modelling, an approach at the core of the DYNAMO team's work. The example models are inspired by models used during the crisis, but are sometimes simplified to make them accessible.

#8 - The link to data: how are models calibrated?

The relevance of the predictions of epidemiological models is often limited by the uncertainty of their parameter values. But how are these parameters estimated? And how is the onset of the epidemic defined?

Different information sources can be used:

  • the scientific literature,
  • experimental and historical data,
  • monitoring data on the current epidemic dynamics (number of individuals tested positive, number of individuals developing severe clinical signs, number of deaths due to the disease). Since the beginning of the COVID-19 epidemic, all of this information has been collected and centralised.

Parameters for observable processes (such as the duration of the symptomatic state or of hospital stays) are readily available. More uncertain, often unobservable parameters (such as the transmission rate or the latency duration) must be estimated. Various inference methods exist for this purpose, each with its advantages and disadvantages. Methods that maximise the likelihood of the model are often used (for example, see this link). However, the likelihood of a model cannot always be computed, especially when the model is dynamic, with a large number of variables, and stochastic, or when the available data are spatiotemporal, incomplete, censored, or imperfect. Likelihood-free methods have been developed to overcome this methodological problem.
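
To make the likelihood-based approach concrete, here is a minimal sketch of likelihood maximisation for a single parameter. It assumes a simple deterministic SIR model and Poisson-distributed daily case counts; the function names, population size, and parameter values are illustrative, not those of the article's model.

```python
# Minimal sketch (toy model, not the article's): estimating a transmission
# rate by maximising a Poisson likelihood over daily case counts.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import minimize_scalar

N = 10_000  # assumed population size

def sir(y, t, beta, gamma=0.1):
    """Deterministic SIR dynamics; gamma (recovery rate) is assumed known."""
    S, I, R = y
    new_inf = beta * S * I / N
    return [-new_inf, new_inf - gamma * I, gamma * I]

def daily_incidence(beta, days):
    """Expected new infections per day, read off the susceptible curve."""
    y = odeint(sir, [N - 1.0, 1.0, 0.0], np.arange(days + 1), args=(beta,))
    S = y[:, 0]
    return np.maximum(S[:-1] - S[1:], 1e-9)  # floor avoids log(0) below

def neg_log_lik(beta, observed):
    """Poisson negative log-likelihood of observed counts (up to a constant)."""
    mu = daily_incidence(beta, len(observed))
    return -np.sum(observed * np.log(mu) - mu)

# Synthetic "observed" data generated with beta = 0.35, then refitted.
rng = np.random.default_rng(0)
observed = rng.poisson(daily_incidence(0.35, 60))
fit = minimize_scalar(neg_log_lik, bounds=(0.05, 1.0),
                      args=(observed,), method="bounded")
print(f"estimated beta: {fit.x:.3f}")  # close to the true 0.35
```

With a stochastic model involving many variables, this likelihood is no longer available in closed form, which is precisely where the likelihood-free methods discussed below come in.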

Although maximising the likelihood is possible in our example, let us use likelihood-free methods to explain how they work. These methods are preferred in the DYNAMO team because the epidemiological systems we usually study are complex. Here we used the ABC-SMC (Approximate Bayesian Computation - Sequential Monte Carlo) method. ABC methods are quite intuitive: (1) parameter sets are generated by drawing values from prior distributions; (2) the model is simulated for each parameter set and compared with the real data via summary statistics (i.e. a simplified representation of the data); (3) the parameter sets for which the distance between simulated and observed summary statistics falls below a tolerance threshold are retained (the smaller the distance, the more plausible the parameter set); (4) the parameter values are then estimated from the retained sets (posterior distributions). A drawback of these methods is the very large number of simulations required, which entails substantial computational costs.
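
As a rough illustration of steps (1) to (4), here is a sketch of the basic ABC rejection scheme, the simpler ancestor of ABC-SMC. The toy death model, priors, summary statistics, and tolerance threshold are all assumptions made for the illustration, not those of the team's model.

```python
# ABC rejection sketch following steps (1)-(4); every modelling detail
# (death model, priors, summaries, tolerance) is assumed for illustration.
import numpy as np

rng = np.random.default_rng(1)

def simulate_deaths(beta, intro_day, days=80):
    """Toy stand-in for the epidemic model: noisy exponential growth of
    daily deaths after the introduction date."""
    t = np.arange(days)
    mean = np.where(t >= intro_day, np.exp(beta * (t - intro_day) / 10), 0.0)
    return rng.poisson(mean)

def summary(deaths):
    """Summary statistics: total deaths and day of the first death."""
    first = np.argmax(deaths > 0) if deaths.any() else len(deaths)
    return np.array([deaths.sum(), first])

s_obs = summary(simulate_deaths(beta=1.5, intro_day=20))  # pretend "real" data

accepted = []
for _ in range(100_000):
    # (1) draw a parameter set from the prior distributions
    beta = rng.uniform(0.5, 3.0)
    intro = int(rng.integers(5, 50))
    # (2) simulate the model and reduce the output to summary statistics
    s_sim = summary(simulate_deaths(beta, intro))
    # (3) keep the set if the distance to the observed summaries is below
    #     the tolerance threshold
    if np.linalg.norm((s_sim - s_obs) / (s_obs + 1)) < 0.1:
        accepted.append((beta, intro))

# (4) the accepted draws approximate the posterior distributions
post = np.array(accepted)
if len(post):
    print(f"{len(post)} accepted; beta ~ {post[:, 0].mean():.2f}, "
          f"intro day ~ {post[:, 1].mean():.1f}")
```

ABC-SMC improves on this rejection scheme by reusing the accepted parameter sets as the starting population of the next round and progressively tightening the tolerance, which wastes far fewer simulations.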

Let us return to the model of article #6 with lockdown from 16 March, and estimate four of its parameters: β (the transmission rate), σ (the multiplying factor reducing the excretion of Ip, Ia, Ips), 1/ε (the average duration of latency), and the date of introduction (which should be seen here as the date of lasting establishment of the infection in the population). The data used correspond to the hospital data of the COVID-19 epidemic, more specifically the number of deaths over time, represented in the model by the M state. The amount of available data may affect the estimates, especially when data are scarce. For this example, we estimated the parameters using the data available at three dates: one week before lockdown (t = 68 days), the day before lockdown (t = 75 days), and one week after lockdown (t = 82 days); no later data were used. These scenarios lead to parameter estimates that differ enough to predict contrasting epidemic dynamics, clearly illustrating the need to update models very regularly, especially when they are used for health management purposes (a sketch of such a cutoff comparison follows the table below).

Values of estimated parameters (mean and 90% confidence interval)

Scenario | β | σ | 1/ε | date of intro.
t = 68 | 1.89 [1.09 ; 2.76] | 0.48 [0.35 ; 0.79] | 3.3 [1.2 ; 4.7] | 34 [33 ; 35]
t = 75 | 1.48 [0.77 ; 2.33] | 0.43 [0.12 ; 0.84] | 3.3 [1.0 ; 4.9] | 19 [16 ; 21]
t = 82 | 1.48 [0.85 ; 2.40] | 0.47 [0.17 ; 0.85] | 3.3 [1.1 ; 4.8] | 22 [20 ; 24]
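
To see how such a comparison of data cutoffs can be set up, here is a sketch that reruns the toy ABC rejection step from the previous sketch on data truncated at t = 68, 75, and 82 days. The toy model has no lockdown effect, so this only illustrates the mechanics of truncating the data; the model, priors, distance, and tolerance are again illustrative assumptions, not the article's.

```python
# Sketch: effect of the data cutoff on the fit, reusing the toy model from
# the previous sketch (no lockdown effect); not the article's model or data.
import numpy as np

rng = np.random.default_rng(2)

def simulate_deaths(beta, intro_day, days):
    t = np.arange(days)
    mean = np.where(t >= intro_day, np.exp(beta * (t - intro_day) / 10), 0.0)
    return rng.poisson(mean)

observed = simulate_deaths(1.5, 20, days=120)  # pretend "real" epidemic

for cutoff in (68, 75, 82):        # the three dates used in the article
    data = observed[:cutoff]       # only the data available at that date
    accepted = []
    for _ in range(20_000):
        beta = rng.uniform(0.5, 3.0)
        intro = int(rng.integers(5, 50))
        sim = simulate_deaths(beta, intro, days=cutoff)
        # distance between simulated and observed cumulative death curves
        if np.abs(sim.cumsum() - data.cumsum()).mean() < 0.05 * data.sum():
            accepted.append((beta, intro))
    post = np.array(accepted)
    if len(post):
        print(f"t = {cutoff}: beta ~ {post[:, 0].mean():.2f}, "
              f"intro ~ {post[:, 1].mean():.1f} ({len(post)} accepted)")
```

Each cutoff yields its own posterior distributions; in the article, the three posteriors differ enough to produce the contrasting predictions shown in the figures below.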

In addition to the model parameters, the initial conditions of the model can also be estimated; in our example, this highlights an establishment of the infection in the population about one month before the first deaths.

Posterior distributions of the date of establishment of the infection, depending on the amount of usable data.

Using the data available as of 10 March (in blue) gives an estimated introduction date of 5 February (t = 34 days). With more data available (in green, then in yellow), the introduction date is estimated at around 24 January (t = 22 days).


Model predictions of the number of new deaths (left) and the cumulative number of deaths (right), depending on the amount of data available for estimating the model parameters. The other parameters and simulation conditions are the same as in article #6.

Using the data available one week after lockdown (t = 82 days, in yellow) to estimate the model parameters, the simulations give results closer to the observed data than in the other cases (where fewer data are available).

We hope that this article has convinced you that models and observational data must work together! Updating models in real time as new cases occur is a necessary challenge, especially at the beginning of an epidemic, to improve the predictive quality of the models and thus the confidence in their predictions. However, inference methods are not infallible. The results also depend on the assumptions made (the structure of the model) and on the type and quality of the data. Moreover, the more parameters there are to estimate, the more complex the task becomes. Calibrating a model can take a long time, and it must be coupled with analyses verifying that the available data actually make it possible to estimate the desired parameters.

Article #9 will take a step back from the previous articles to discuss why (and how) to mobilize a mechanistic modelling approach in epidemiology.