
FRANCESCO DE PRETIS

Research fellow
Dipartimento di Comunicazione ed Economia
Adjunct professor
Dipartimento di Comunicazione ed Economia




Publications

2024 - Benchmark dose modeling for epidemiological dose-response assessment using prospective cohort studies [Journal article]
De Pretis, Francesco; Zhou, Yun; Xun, Pengcheng; Shao, Kan
Abstract

Benchmark dose (BMD) methodology has been employed as a default dose-response modeling approach to determine the toxicity value of chemicals in support of regulatory chemical risk assessment. In particular, a relatively standardized BMD analysis framework has been established for modeling toxicological data, covering the formats of input data, dose-response models, definitions of the benchmark response, and the treatment of model uncertainty. However, the BMD approach has not been well developed for epidemiological data, mainly because of the diverse designs of epidemiological studies and the various formats in which data are reported in the literature. Although most epidemiological BMD analyses were developed to solve a particular question, the methods proposed in two recent studies are able to handle cohort and case-control studies using summary data, with adjustments for confounders taken into account. Therefore, the purpose of the present study is to investigate and compare the "effective count"-based BMD modeling approach and the adjusted relative risk (RR)-based BMD analysis approach, in order to identify a BMD modeling framework that can be generalized for analyzing published data from prospective cohort studies. The two methods were applied to the same set of studies investigating the association between inorganic arsenic exposure and bladder and lung cancer. The results suggest that the estimated BMDs and BMDLs are relatively consistent; however, considering established common practice in BMD analysis, modeling adjusted RR values as continuous data for BMD estimation is the more generalizable approach and harmonizes with the BMD approach used for toxicological data.
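
To make the RR-based route concrete, here is a minimal sketch, not the authors' implementation: it fits a simple log-linear model to hypothetical adjusted RRs reported at a few exposure levels and reads off the dose where the increase in relative risk over background reaches a chosen benchmark response. The doses, RR values, benchmark response and the crude BMDL calculation are all illustrative assumptions.

# Minimal sketch of an RR-based benchmark dose (BMD) calculation.
# All numbers are hypothetical; this is NOT the model used in the paper.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical summary data from a prospective cohort study:
# exposure level (e.g., ug/L inorganic arsenic) and adjusted relative risk.
dose = np.array([0.0, 10.0, 50.0, 100.0, 150.0])
adj_rr = np.array([1.00, 1.05, 1.20, 1.45, 1.70])

# Simple log-linear dose-response model: RR(d) = exp(beta * d).
def rr_model(d, beta):
    return np.exp(beta * d)

popt, pcov = curve_fit(rr_model, dose, adj_rr, p0=[0.001])
beta_hat = popt[0]

# Benchmark response taken here as a 10% increase in relative risk over
# background (RR = 1.1), one common convention for continuous data.
bmr = 0.10
bmd = np.log(1.0 + bmr) / beta_hat

# Crude lower bound (BMDL) from the standard error of beta; a real analysis
# would use profile likelihood or Bayesian credible intervals instead.
beta_upper = beta_hat + 1.645 * np.sqrt(pcov[0, 0])
bmdl = np.log(1.0 + bmr) / beta_upper

print(f"beta = {beta_hat:.4f}  BMD = {bmd:.1f}  BMDL = {bmdl:.1f}")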


2024 - Individual consent in cluster randomised trials for non-pharmaceutical interventions: going beyond the Ottawa statement [Journal article]
Leblanc, Marissa; Williamson, Jon; De Pretis, Francesco; Landes, Jürgen; Rocca, Elena


2024 - The Ambiguity Dilemma for Imprecise Bayesians [Journal article]
Radzvilas, Mantas; Peden, William; De Pretis, Francesco


2023 - Bayesian benchmark dose modeling methods for epidemiological dose-response assessment using prospective cohort studies [Abstract in conference proceedings]
De Pretis, Francesco; Shao, Kan


2023 - Fast Methods for Drug Approval: Research Perspectives for Pandemic Preparedness [Journal article]
Abdin, Ahmad Yaman; De Pretis, Francesco; Landes, Jürgen
Abstract

Public health emergencies such as the outbreak of novel infectious diseases represent a major challenge for drug regulatory bodies, practitioners, and scientific communities. In such critical situations, drug regulators and public health practitioners base their decisions on evidence generated and synthesised by scientists. The urgency and novelty of the situation create high levels of uncertainty concerning the safety and effectiveness of drugs. One key tool to mitigate such emergencies is pandemic preparedness. There seems to be, however, a lack of scholarly work on methodology for assessments of new or existing drugs during a pandemic. Issues related to risk attitudes, evidence production and evidence synthesis for drug approval require closer attention. This manuscript therefore engages in a conceptual analysis of relevant issues of drug assessment during a pandemic. To this end, our analysis draws on recent discussions in the philosophy of science and the philosophy of medicine. Important unanswered foundational questions are identified and possible ways to answer them are considered. Similar problems often have similar solutions; hence studying similar situations can provide important clues. We consider drug assessments of orphan drugs and drug assessments during endemics as similar to drug assessment during a pandemic. Furthermore, other scientific fields which cannot carry out controlled experiments may guide the methodology for drawing defeasible causal inferences from imperfect data. Future contributions on methodologies for addressing the issues raised here will have great potential to improve pandemic preparedness.


2023 - Incentives for Research Effort: An Evolutionary Model of Publication Markets with Double-Blind and Open Review [Journal article]
Radzvilas, Mantas; De Pretis, Francesco; Peden, William; Tortoli, Daniele; Osimani, Barbara
Abstract

Contemporary debates about scientific institutions and practice feature many proposed reforms. Most of these require increased efforts from scientists. But how do scientists' incentives for effort interact? How can scientific institutions encourage scientists to invest effort in research? We explore these questions using a game-theoretic model of publication markets. We employ a base game between authors and reviewers, before assessing some of its tendencies by means of analysis and simulations. We compare how the effort expenditures of these groups interact in our model under a variety of settings, such as double-blind and open review systems. We make a number of findings, including that open review can increase the effort of authors in a range of circumstances and that these effects can manifest in a policy-relevant period of time. However, we find that open review's impact on authors' efforts is sensitive to the strength of several other influences.
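
As a rough feel for what an evolutionary model of effort incentives can look like, here is a generic replicator-dynamics toy, emphatically not the game analysed in the paper: two author strategies (high vs low effort) evolve under different levels of reviewer scrutiny, with all payoff numbers invented for illustration.

# Generic replicator-dynamics toy, NOT the publication-market game from the
# paper: the share of high-effort authors evolves under a given level of
# reviewer scrutiny. All payoff values below are invented.

def author_payoff(effort_high, scrutiny):
    # Hypothetical payoffs: high effort costs more but pays off when
    # reviewers scrutinise submissions carefully.
    if effort_high:
        return -1.0 + 3.0 * scrutiny
    return 1.5 * (1.0 - scrutiny)  # low effort pays only under lax review

def simulate(scrutiny, steps=200):
    """Discrete-time replicator dynamics for the share of high-effort authors."""
    x = 0.5  # initial share of high-effort authors
    for _ in range(steps):
        f_high = author_payoff(True, scrutiny)
        f_low = author_payoff(False, scrutiny)
        f_bar = x * f_high + (1 - x) * f_low
        x += 0.05 * x * (f_high - f_bar)   # replicator update
        x = min(max(x, 0.0), 1.0)
    return x

# Compare a lax regime with a high-scrutiny regime (only loosely standing in
# for double-blind versus open review in this toy).
print("high-effort share, low scrutiny :", round(simulate(0.3), 3))
print("high-effort share, high scrutiny:", round(simulate(0.8), 3))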


2023 - Making decisions with evidential probability and objective Bayesian calibration inductive logics [Journal article]
Radzvilas, Mantas; Peden, William; De Pretis, Francesco
Abstract

Calibration inductive logics are based on accepting estimates of relative frequencies, which are used to generate imprecise probabilities. In turn, these imprecise probabilities are intended to guide beliefs and decisions — a process called “calibration”. Two prominent examples are Henry E. Kyburg's system of Evidential Probability and Jon Williamson's version of Objective Bayesianism. There are many unexplored questions about these logics. How well do they perform in the short-run? Under what circumstances do they do better or worse? What is their performance relative to traditional Bayesianism? In this article, we develop an agent-based model of a classic binomial decision problem, including players based on variations of Evidential Probability and Objective Bayesianism. We compare the performances of these players, including against a benchmark player who uses standard Bayesian inductive logic. We find that the calibrated players can match the performance of the Bayesian player, but only with particular acceptance thresholds and decision rules. Among other points, our discussion raises some challenges for characterising “cautious” reasoning using imprecise probabilities. Thus, we demonstrate a new way of systematically comparing imprecise probability systems, and we conclude that calibration inductive logics are surprisingly promising for making decisions.
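
The following is a toy sketch of the kind of comparison described above, not the authors' simulation: an imprecise "calibrated" player, who accepts an interval around the observed frequency and bets cautiously, is compared with a flat-prior Bayesian on a binomial betting problem. The bias, interval width, acceptance rule and payoff scheme are all illustrative assumptions.

# Toy comparison of a calibrated (imprecise) player and a flat-prior Bayesian
# on repeated even-odds bets about a coin of unknown bias. All parameters are
# invented; this is not the model from the paper.
import random

random.seed(0)
TRUE_BIAS = 0.6   # hypothetical unknown probability of heads
N_OBS = 50        # tosses observed before each bet

def bayesian_prob(heads, n):
    # Posterior mean of the bias under a uniform Beta(1, 1) prior.
    return (heads + 1) / (n + 2)

def calibrated_interval(heads, n, half_width=0.1):
    # Crude calibration: accept the observed frequency +/- a fixed margin.
    f = heads / n
    return max(0.0, f - half_width), min(1.0, f + half_width)

def payoff(bet_on_heads):
    # Even-odds bet: win 1 if the call is right, lose 1 otherwise.
    came_up_heads = random.random() < TRUE_BIAS
    return 1.0 if bet_on_heads == came_up_heads else -1.0

bayes_total, calib_total = 0.0, 0.0
for _ in range(2000):
    heads = sum(random.random() < TRUE_BIAS for _ in range(N_OBS))
    # Bayesian bets on heads when the posterior probability exceeds 1/2.
    bayes_total += payoff(bayesian_prob(heads, N_OBS) > 0.5)
    # Cautious imprecise rule: bet on heads only if the whole accepted
    # interval lies above 1/2, otherwise bet on tails.
    lower, _upper = calibrated_interval(heads, N_OBS)
    calib_total += payoff(lower > 0.5)

print("Bayesian total payoff  :", bayes_total)
print("Calibrated total payoff:", calib_total)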


2022 - A smart hospital-driven approach to precision pharmacovigilance [Journal article]
De Pretis, Francesco; van Gils, Mark; Forsberg, Markus M
Abstract

Researchers, regulatory agencies, and the pharmaceutical industry are moving towards precision pharmacovigilance as a comprehensive framework for drug safety assessment, at the service of the individual patient, by clustering specific risk groups in different databases. This article explores its implementation by focusing on: (i) designing a new data collection infrastructure, (ii) exploring new computational methods suitable for drug safety data, and (iii) providing a computer-aided framework for distributed clinical decisions with the aim of compiling a personalized information leaflet with specific reference to a drug's risks and adverse drug reactions. These goals can be achieved by using 'smart hospitals' as the principal data sources and by employing methods of precision medicine and medical statistics to supplement current public health decisions.


2022 - E-synthesis for carcinogenicity assessments: A case study of processed meat [Journal article]
De Pretis, Francesco; Jukola, Saana; Landes, Jürgen
Abstract

Rationale, Aims and Objectives: Recent controversies about dietary advice concerning meat demonstrate that aggregating the available evidence to assess a putative causal link between food and cancer is a challenging enterprise.
Methods: We show how a tool developed for assessing putative causal links between drugs and adverse drug reactions, E-Synthesis, can be applied to food carcinogenicity assessments. The application is demonstrated on the putative causal relationship between processed meat consumption and cancer.
Results: The output of the assessment is a Bayesian probability that processed meat consumption causes cancer. This Bayesian probability is calculated from a Bayesian network model, which incorporates a representation of Bradford Hill's Guidelines as probabilistic indicators of causality. We show how to determine probabilities of indicators of causality for food carcinogenicity assessments based on assessments of the International Agency for Research on Cancer.
Conclusions: We find that E-Synthesis is a tool well suited for food carcinogenicity assessments, as it enables a graphical representation of lines and weights of evidence, offers the possibility to make a great number of judgements explicit and transparent, outputs a probability of causality suitable for decision making, and is flexible enough to aggregate different kinds of evidence.
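
For intuition about how probabilistic indicators of causality can raise or lower a probability of causation, here is a deliberately simplified Bayesian update, much simpler than the Bayesian network described above and with all probabilities invented for illustration; it treats the indicators as conditionally independent given the causal hypothesis, an assumption made only to keep the sketch short.

# Highly simplified illustration of updating a probability of causation from
# probabilistic indicators of causality. All numbers are invented and this is
# not the structure of the E-Synthesis network.

# Prior probability that consumption of the food causes cancer.
p_cause = 0.5

# Hypothetical Bradford Hill style indicators: whether each was observed,
# its likelihood given causation, and its likelihood given no causation.
indicators = [
    # (name, observed, P(indicator | cause), P(indicator | no cause))
    ("dose-response gradient",  True,  0.7, 0.2),
    ("strength of association", True,  0.6, 0.3),
    ("experimental evidence",   False, 0.5, 0.1),
]

# Sequential Bayes updates, assuming conditional independence of indicators
# given the hypothesis (a strong simplifying assumption).
for name, observed, p_given_cause, p_given_not in indicators:
    like_cause = p_given_cause if observed else 1 - p_given_cause
    like_not = p_given_not if observed else 1 - p_given_not
    numerator = like_cause * p_cause
    p_cause = numerator / (numerator + like_not * (1 - p_cause))
    print(f"after '{name}': P(causation) = {p_cause:.3f}")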


2021 - A Battle in the Statistics Wars: a simulation-based comparison of Bayesian, Frequentist and Williamsonian methodologies [Journal article]
Radzvilas, Mantas; Peden, William; De Pretis, Francesco
Abstract

The debates between Bayesian, frequentist, and other methodologies of statistics have tended to focus on conceptual justifications, sociological arguments, or mathematical proofs of their long run properties. Both Bayesian statistics and frequentist ("classical") statistics have strong cases on these grounds. In this article, we instead approach the debates in the "Statistics Wars" from a largely unexplored angle: simulations of different methodologies' performance in the short to medium run. We used Big Data methods to conduct a large number of simulations using a straightforward decision problem based around tossing a coin with unknown bias and then placing bets. In this simulation, we programmed four players, inspired by Bayesian statistics, frequentist statistics, Jon Williamson's version of Objective Bayesianism, and a player who simply extrapolates from observed frequencies to general frequencies. The last player served a benchmark function: any worthwhile statistical methodology should at least match the performance of simplistic induction. We focused on the performance of these methodologies in guiding the players towards good decisions. Unlike an earlier simulation study of this type, we found no systematic difference in performance between the Bayesian and frequentist players, provided the Bayesian used a flat prior and the frequentist used a low confidence level. The Williamsonian player was also able to perform well given a low confidence level. However, the frequentist and Williamsonian players performed poorly with high confidence levels, while the Bayesian was surprisingly harmed by biased priors. Our study indicates that all three methodologies should be taken seriously by philosophers and practitioners of statistics.
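
To illustrate one of the moving parts described above, the sensitivity of a confidence-interval-based player to its confidence level, here is a small self-contained sketch, not the authors' code: a frequentist player bets only when a Wald interval for the coin's bias excludes one half, and is compared with the simple extrapolation benchmark. The bias, payoff scheme and confidence levels are illustrative assumptions.

# Toy coin-betting simulation: a Wald-interval frequentist player at several
# confidence levels versus a benchmark player who extrapolates the observed
# frequency. All parameters are invented; this is not the paper's simulation.
import math
import random

random.seed(1)
TRUE_BIAS = 0.55   # hypothetical unknown probability of heads
N_OBS = 40         # tosses observed before each betting round
Z = {0.68: 1.0, 0.95: 1.96, 0.99: 2.58}   # approximate normal quantiles

def wald_interval(heads, n, level):
    f = heads / n
    half = Z[level] * math.sqrt(f * (1 - f) / n)
    return f - half, f + half

def run(level, rounds=5000):
    """Total payoff; level=None gives the simple extrapolation benchmark."""
    total = 0.0
    for _ in range(rounds):
        heads = sum(random.random() < TRUE_BIAS for _ in range(N_OBS))
        if level is None:
            bet_heads = heads / N_OBS > 0.5          # extrapolate frequency
        else:
            lower, upper = wald_interval(heads, N_OBS, level)
            if lower > 0.5:
                bet_heads = True
            elif upper < 0.5:
                bet_heads = False
            else:
                continue                             # abstain when undecided
        came_up_heads = random.random() < TRUE_BIAS
        total += 1.0 if bet_heads == came_up_heads else -1.0
    return total

print("extrapolation benchmark:", run(None))
for lvl in (0.68, 0.95, 0.99):
    print(f"frequentist at {lvl:.2f}   :", run(lvl))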


2014 - A Statistical Mechanics Approach to Immigrant Integration in Emilia Romagna (Italy) [Conference paper]
De Pretis, Francesco; Vernia, Cecilia
Abstract

Integration phenomena are social processes among human beings that take place every day when an autochthonous population experiences the arrival of new immigrants. Although migration is a growing phenomenon (now involving over one billion people according to the United Nations) that challenges societies and policy-makers all over the world, numerical measurements able to give robust insights into how immigrant integration occurs are still far from what is usually considered an acceptable standard in the mathematical and physical sciences. Building on previous seminal works, we follow here a statistical physics approach to the analysis of immigrant integration. Specifically, we consider a large dataset collected by the Emilia Romagna regional office of statistics (Italy), containing information on all marriages that occurred in the regional population over a sixteen-year span, from 1995 to 2010. We take the percentage of marriages with spouses of mixed origin as our quantifier of integration and perform several analyses on the dataset, including binning and data fitting. The final outcome is an emerging pattern: the quantifier's average values align along a square-root fit when plotted against a suitable function of the immigrant density. Our theoretical interpretation is that this result agrees with a suitable version of the Curie-Weiss model used in statistical mechanics to describe ferromagnetism. More explicitly, immigrants living in Emilia Romagna municipalities appear to exhibit mainly imitative behaviour in the social actions they take towards integration. The result that emerges from the Emilia Romagna data is consistent with previous work on similar data from Spain.
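
As a worked illustration of the square-root pattern mentioned above, here is a short sketch with invented data points; only the functional form, the quantifier growing like the square root of a function of immigrant density, follows the abstract.

# Illustrative square-root fit; the per-municipality data below are invented
# and serve only to show the functional form discussed in the abstract.
import numpy as np
from scipy.optimize import curve_fit

# x: a (suitably transformed) immigrant density per municipality,
# y: share of marriages with spouses of mixed origin. Hypothetical values.
x = np.array([0.01, 0.02, 0.04, 0.06, 0.09, 0.12, 0.16])
y = np.array([0.021, 0.030, 0.041, 0.049, 0.061, 0.070, 0.079])

def sqrt_law(x, a):
    # Curie-Weiss-like mean-field behaviour suggests y proportional to sqrt(x).
    return a * np.sqrt(x)

(a_hat,), _ = curve_fit(sqrt_law, x, y)
residuals = y - sqrt_law(x, a_hat)

print(f"fitted coefficient a = {a_hat:.3f}")
print(f"root-mean-square residual = {np.sqrt(np.mean(residuals ** 2)):.4f}")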