Philosophical Perspectives on Evidence and Theory Construction
Wednesday, 17 April, 2019
2910 Edouard-Montpetit,
Room 422, University of Montreal
10:00 – 11:00
“The Uncertain Method of Drops”: How a Non-uniform Fluid Unit Survived the Century of Standardization
Rebecca Jackson, Indiana University
In the early 19th century, amid sweeping metrological reforms, the future of the “drop” as a fluid unit in British science and medicine looked grim indeed. Drops were known to vary widely in size, leading to a level of imprecision particularly unacceptable for medical dosages. In 1809, The Pharmacopoeia of the Royal College of Physicians of London introduced a new unit, the minim, defined as 1/60th of a fluid drachm, to replace the variable and imprecise “drop.” Yet, the minim is now completely unknown to current medical practice, while drops are still used in many medical and pharmaceutical contexts. In this talk, I combine philosophical and historical perspectives to argue that developments in medical and measuring devices, as well as in the practice of medicine, created an epistemic role for the non-uniform drop to fulfill for centuries to come, while these same developments rendered the standardized minim tragically obsolete.
11:00 – 12:00
Individuating Quantities
Eran Tal, McGill University
When discrepancies are discovered between the outcomes of different
measurement procedures, two sorts of explanation are open to scientists. Either (i) some of the outcomes are inaccurate or (ii) the procedures are not measuring the same quantity. I argue that, due to the possibility of systematic error, the choice between (i) and (ii) is underdetermined in principle by any possible evidence. Consequently, foundationalist criteria of quantity individuation are either empty or circular. I propose a coherentist, model-based account of measurement that avoids the underdetermination problem, and use this account to explain how scientists individuate quantities in practice.
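(A schematic illustration of the underdetermination point, added here for orientation and not drawn from the talk itself: model the outcomes of two procedures A and B as

\[ x_A = \theta_A + b_A + \varepsilon_A, \qquad x_B = \theta_B + b_B + \varepsilon_B, \]

where \(\theta_A\) and \(\theta_B\) are the quantities actually measured, \(b_A\) and \(b_B\) are possible systematic errors, and \(\varepsilon_A\) and \(\varepsilon_B\) are random errors. A persistent discrepancy between the mean outcomes is equally compatible with \(\theta_A = \theta_B\) but \(b_A \neq b_B\) (explanation (i)) and with \(b_A = b_B\) but \(\theta_A \neq \theta_B\) (explanation (ii)); no statistic computed from the outcomes alone distinguishes the two cases without further assumptions.)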
12:00 – 1:30
Lunch
1:30 – 2:30
Varieties of Robustness and Multiple Determination
Klodian Coko, Rotman Institute of Philosophy, University of Western Ontario
Multiple determination is the epistemic strategy of using multiple independent empirical procedures to establish “the same” result. A classic example of multiple determination is Jean Perrin’s description of thirteen different procedures to determine Avogadro’s number (the number of molecules in a gram-mole of a substance), at the beginning of the twentieth century (Perrin 1909, 1913). In the contemporary literature in philosophy of science, multiple determination is considered to be a variant of robustness reasoning: experimental robustness, measurement robustness, or simply robustness, are the terms that are commonly used to refer to this strategy (Wimsatt 1981, 2007; Woodward 2006; Calcott 2011; Soler et al., eds. 2012).
I argue that the strategy of using multiple independent procedures to establish “the same” result is not a variant of robustness. There are many variants of robustness strategies, but multiple determination is not one of them. Treating the multiple determination strategy as a robustness variant mischaracterizes its structure and is unhelpful for understanding its epistemic role and import in scientific research. Many features distinguish multiple determination from the various robustness variants. I present these features and argue that they all stem from the same central difference: whereas every robustness variant can be construed as involving some sort of invariance (of the robust result) under different types of perturbation, multiple determination cannot be so construed. The distinguishing feature of the multiple determination strategy is its ability to support a specific type of no-coincidence argument: namely, that it would be an improbable coincidence for multiple determination procedures, independently of one another, to establish “the same” result, and yet for that result to be incorrect or an artefact of the determination procedures. Under specific conditions, the no-coincidence argument from multiple determination can be used to argue not only for the validity of the result but also for the validity of the determination procedures themselves. No such no-coincidence argument can be constructed from simple invariance to perturbations. Robustness is a set of epistemic strategies better suited to discovering causal relations and dependencies.
Finally, I claim that, besides the philosophical reasons, there are also historical reasons to keep multiple determination and robustness distinct. Multiple determination can be considered the historical descendant of William Whewell’s nineteenth-century notion of consilience of inductions (a form of hypothetico-deductive reasoning). Robustness strategies, on the other hand, can be considered the descendants of John Stuart Mill’s nineteenth-century methods of direct induction (a form of inductive reasoning).
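(A rough probabilistic gloss on the no-coincidence argument, offered as a sketch rather than as Coko’s own formulation: if each of n independent determination procedures would, taken alone, hit on the same particular value by error or artefact only with small probability \(p_i\), then the chance that all of them converge on that value despite its being an artefact is roughly

\[ \Pr(\text{agreement} \mid \text{artefact}) \approx \prod_{i=1}^{n} p_i, \]

which shrinks rapidly as n grows, whereas agreement is just what one expects if the result is correct. Perrin’s thirteen determinations of Avogadro’s number fit this schema; mere invariance of a result under perturbations of a single procedure yields no comparable product of independent small chances.)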
2:30 – 3:30
Evidence Amalgamation Without Measurement of Evidential Strength
François Claveau, Université de Sherbrooke
In a recent article, Veronica J. Vieland and Hasok Chang discuss the “evidence amalgamation problem” in the context of meta-analysis. Their article appears in a Synthese special issue devoted to evidence amalgamation, a sign that philosophers of scientific practice have taken note of the topic’s importance. Indeed, practices that take stock of a variety of observations are widespread in science, and the systematic amalgamation procedure known as meta-analysis has grown in popularity in recent decades (e.g., Stanley, 2001; Higgins and Green, 2008). Vieland and Chang’s article is among a small set of philosophical studies on meta-analysis to date. Vieland and Chang (2018, 22) sum up their main claim thus: “our point is that as soon as we talk about evidence measurement, it behooves us to invoke MAP [the Measurement Amalgamation Principle] both as a tool for critiquing existing candidate measures and, ideally, as a guide in developing better ones.” Because Vieland and Chang take amalgamation to be an instance of evidence measurement, we interpret this statement as implying that:
V&C main thesis: Any instance of evidence amalgamation should meet MAP.
If this thesis is correct, the philosophy of meta-analysis would have taken a step forward by acquiring a principle with which to assess scientific practices. In this comment, however, we argue that the V&C thesis provides no secure ground for advancing the philosophy of meta-analysis.
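(For readers unfamiliar with the amalgamation procedures at issue, a standard fixed-effect, inverse-variance meta-analysis combines study-level estimates \(\hat{\theta}_i\) with variances \(\sigma_i^2\) as

\[ \hat{\theta} = \frac{\sum_i w_i \hat{\theta}_i}{\sum_i w_i}, \qquad w_i = \frac{1}{\sigma_i^2}. \]

This is generic meta-analytic practice, not Vieland and Chang’s MAP formalism; whether any such combination rule ought to behave like a measurement of evidential strength, and so fall under MAP, is exactly what the V&C thesis asserts and what this comment questions.)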
3:30 – 4:00
Break
4:00 – 5:00
On the Notion of Integration in Interdisciplinary Studies
Kevin Kaiser, Université de Montréal
The notion of integration is seen as both a necessary and a sufficient condition for characterizing interdisciplinary research, its differentia specifica. Yet the concept is marked by considerable confusion, notably about what is integrated and how to integrate it, with some calling these imprecisions the “Achilles’ heel of interdisciplinarity” (Repko, 2007, p. 7). In this talk, it will be argued that the models of integration developed in the philosophy of biology are able to address these imprecisions.
To this end, the various families of integration models developed in that field (neo-mechanistic, coordinated-unit, and interfield) will first be surveyed. Their capacity to clarify these imprecisions will then be analysed comparatively, so as to isolate the family or families of models best suited for export into the philosophy of interdisciplinarity. Finally, some criticisms of these models will be explored.