Published in Critical Metascience
Author Mark Rubin

Preregistration Distinguishes Between Exploratory and Confirmatory Research?

Previous justifications for preregistration have focused on the distinction between “exploratory” and “confirmatory” research. However, as I discuss in this recent presentation, this distinction faces unresolved questions.


English

Bayes and Beyond

Published in Philosophy of Science
Author Geoffrey Hellman

Several leading topics outstanding after John Earman's Bayes or Bust? are investigated further, with emphasis on the relevance of Bayesian explication in epistemology of science, despite certain limitations. (1) Dutch Book arguments are reformulated so that their independence from utility and preference in epistemic contexts is evident. (2) The Bayesian analysis of the Quine-Duhem problem is pursued; the phenomenon of a “protective belt” of auxiliary statements around reasonably successful theories is explicated. (3) The Bayesian approach to understanding the superiority of variety of evidence is pursued; a recent challenge (by Wayne) is converted into a positive result on behalf of the Bayesian analysis, potentially with far-reaching consequences. (4) The condition for applying the merger-of-opinion results and the thesis of underdetermination of theories are compared, revealing significant limitations in applicability of the former. (5) Implications concerning “diachronic Dutch Book” arguments and “non-Bayesian shifts” are drawn, highlighting the incompleteness, but not incorrectness, of Bayesian analysis.
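
As a gloss on point (2), here is a minimal sketch, in the standard Dorling/Howson-Urbach style rather than Hellman's own formalism, of how a Bayesian apportions blame when a prediction E of theory T together with auxiliary A fails:

P(T \mid \neg E) = \frac{P(\neg E \mid T)\,P(T)}{P(\neg E)}, \qquad P(A \mid \neg E) = \frac{P(\neg E \mid A)\,P(A)}{P(\neg E)}.

If T enjoys a high prior and P(\neg E \mid T) is not far below P(\neg E) while P(\neg E \mid A) is well below it, the posterior of T barely moves while that of A drops sharply; this is one way of explicating the “protective belt” of auxiliaries around a successful theory.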

English

How About Bust? Factoring Explanatory Power Back into Theory Evaluation

Published in Philosophy of Science
Author Larry Laudan

1. Introduction. The papers by Hellman and Mayo offer up a rich menu of problems and proposed solutions, so there is much here for a friendly critic to fasten on. In order to bring a modicum of focus to my commentary, I shall limit my remarks to the Duhem problem and its radiations in epistemology and methodology. Both Mayo and Hellman claim to have solutions to that hoary old problem and they tout these solutions as key indicators of the explanatory power of their respective technical epistemologies, whether Bayesian or Neyman/Pearsonian. Like Mayo, I shall be arguing that the Bayesian treatment of Duhem's problem is no solution at all; that, indeed, it fails to grapple with the core challenges posed by the purported ambiguities of falsification. My response to Mayo's more detailed, and I think more right-headed, treatment of the Duhem problem will be more complex. While I believe that she is moving in the right direction in many respects, I think that she fails to see one key dimension of the Duhemian conundrum. Indeed, she risks solving not Duhem's problem but quite a different one. I shall gently try to encourage her to steer her way back towards the central task.

English

The case for formal methodology in scientific reform

Published in Royal Society Open Science
Authors Berna Devezer, Danielle J. Navarro, Joachim Vandekerckhove, Erkan Ozge Buzbas

Current attempts at methodological reform in sciences come in response to an overall lack of rigor in methodological and scientific practices in experimental sciences. However, most methodological reform attempts suffer from similar mistakes and over-generalizations to the ones they aim to address. We argue that this can be attributed in part to lack of formalism and first principles. Considering the costs of allowing false claims to become canonized, we argue for formal statistical rigor and scientific nuance in methodological reform. To attain this rigor and nuance, we propose a five-step formal approach for solving methodological problems. To illustrate the use and benefits of such formalism, we present a formal statistical analysis of three popular claims in the metascientific literature: (i) that reproducibility is the cornerstone of science; (ii) that data must not be used twice in any analysis; and (iii) that exploratory projects imply poor statistical practice. We show how our formal approach can inform and shape debates about such methodological claims.
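
To make claim (i) concrete, the following Python toy simulation (mine, not the authors' own analysis) shows that a real effect studied with modest statistical power frequently fails to replicate, so a failed replication is by itself weak evidence against the original claim. The effect size, sample size, and test are illustrative assumptions.

import numpy as np
from scipy import stats

# Toy sketch (not from the paper): a true effect of size d studied with
# modest power often fails to "reproduce" in a single replication attempt.
rng = np.random.default_rng(1)
n, d, alpha, reps = 20, 0.5, 0.05, 10_000  # illustrative assumptions

def significant():
    """One one-sample t-test on data with a genuine effect; True if p < alpha."""
    x = rng.normal(d, 1, n)
    return stats.ttest_1samp(x, 0).pvalue < alpha

pairs = [(significant(), significant()) for _ in range(reps)]
power = np.mean([original for original, _ in pairs])
replication_rate = np.mean([replication for original, replication in pairs if original])
print(f"per-study power: about {power:.2f}")
print(f"replication rate given a significant original: about {replication_rate:.2f}")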

English

When and How to Deviate From a Preregistration

Published in Collabra: Psychology
Author Daniël Lakens

As the practice of preregistration becomes more common, researchers need guidance in how to report deviations from their preregistered statistical analysis plan. A principled approach to the use of preregistration should not treat all deviations as problematic. Deviations from a preregistered analysis plan can both reduce and increase the severity of a test, as well as increase the validity of inferences. I provide examples of how researchers can present deviations from preregistrations and evaluate the consequences of the deviation when encountering 1) unforeseen events, 2) errors in the preregistration, 3) missing information, 4) violations of untested assumptions, and 5) falsification of auxiliary hypotheses. The current manuscript aims to provide a principled approach to deciding when to deviate from a preregistration and how to report deviations from an error-statistical philosophy grounded in methodological falsificationism. The goal is to help researchers reflect on the consequence of deviations from preregistrations by evaluating the test’s severity and the validity of the inference.
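
As a toy illustration of how a deviation can reduce a test's severity (this example is mine, not one from the paper), compare the long-run Type I error rate of sticking to a preregistered test with that of an opportunistic deviation that reports whichever of two tests happens to reach significance; the sample sizes and tests below are illustrative assumptions.

import numpy as np
from scipy import stats

# Toy sketch (not from the paper): under a true null hypothesis, reporting
# "whichever test turned out significant" inflates the Type I error rate,
# i.e., this kind of deviation lowers the severity of the test.
rng = np.random.default_rng(7)
alpha, reps, n = 0.05, 5_000, 30  # illustrative assumptions
fp_planned = fp_flexible = 0
for _ in range(reps):
    a, b = rng.normal(0, 1, n), rng.normal(0, 1, n)  # two groups, H0 true
    p_t = stats.ttest_ind(a, b).pvalue
    p_u = stats.mannwhitneyu(a, b).pvalue
    fp_planned += p_t < alpha                      # stick to the preregistered t-test
    fp_flexible += (p_t < alpha) or (p_u < alpha)  # deviate opportunistically
print(f"Type I error, preregistered analysis:  {fp_planned / reps:.3f}")
print(f"Type I error, opportunistic deviation: {fp_flexible / reps:.3f}")

A deviation made for good reason, such as switching to a test whose assumptions actually hold in the data, need not have this effect, which is exactly why the paper argues for evaluating each deviation by its consequences for severity and validity.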

On the Use and Misuses of Preregistration: A Reply to Klonsky (2024)

Published in PsyArXiv

In his commentary, Klonsky (2024) outlines several arguments for why preregistration mandates (PRMs) will have a negative impact on the field. Klonsky’s overarching concern is that when preregistration ceases to be a tool for research and becomes an indicator of quality itself (a primary example being preregistration badges), it loses its intended benefits. Separate from his concerns surrounding policies like preregistration badges, Klonsky also critiques the practice of preregistration itself, arguing that it can impede our use of other valuable research tools (e.g., multiverse analyses, exploratory analyses). We provide a response to Klonsky’s concerns about preregistration and related policies. First, we provide conceptual clarification on the purpose of preregistration, which was missing in Klonsky’s commentary. Second, with a clearer conceptual framework, we highlight where some of Klonsky’s concerns are warranted, but also highlight where Klonsky’s concerns, critiques, and proposed alternatives to the use of preregistration fall short. Third, with this conceptual understanding of preregistration, we briefly outline some challenges related to the effective implementation of preregistration in psychological science.

English

A Bayesian perspective on severity: risky predictions and specific hypotheses

Published in Psychonomic Bulletin & Review
Authors Noah van Dongen, Jan Sprenger, Eric-Jan Wagenmakers

A tradition that goes back to Sir Karl R. Popper assesses the value of a statistical test primarily by its severity: was there an honest and stringent attempt to prove the tested hypothesis wrong? For “error statisticians” such as Mayo (1996, 2018), and frequentists more generally, severity is a key virtue in hypothesis tests. Conversely, failure to incorporate severity into statistical inference, as allegedly happens in Bayesian inference, counts as a major methodological shortcoming. Our paper pursues a double goal: First, we argue that the error-statistical explication of severity has substantive drawbacks; specifically, the neglect of research context and the specificity of the predictions of the hypothesis. Second, we argue that severity matters for Bayesian inference via the value of specific, risky predictions: severity boosts the expected evidential value of a Bayesian hypothesis test. We illustrate severity-based reasoning in Bayesian statistics by means of a practical example and discuss its advantages and potential drawbacks.
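
A minimal numerical sketch of the paper's central point, with priors and data that are purely illustrative rather than taken from the paper: when a specific, risky prediction turns out to be approximately right, it earns a much larger Bayes factor on the same data than a vague, diffuse alternative does.

import numpy as np
from scipy import stats

# Illustrative sketch (assumed numbers, not the paper's example): the same
# observed mean rewards a specific, risky prior far more than a diffuse one.
n, xbar = 50, 0.5
se = 1 / np.sqrt(n)  # standard error of the mean, assuming unit-variance data

def marginal(mu, tau):
    """Marginal density of xbar when theta ~ N(mu, tau^2) and xbar | theta ~ N(theta, se^2)."""
    return stats.norm.pdf(xbar, loc=mu, scale=np.sqrt(tau**2 + se**2))

m0 = stats.norm.pdf(xbar, loc=0, scale=se)  # point null hypothesis: theta = 0
bf_risky = marginal(0.5, 0.1) / m0          # specific, risky prediction near the truth
bf_vague = marginal(0.0, 2.0) / m0          # diffuse, "anything goes" alternative
print(f"Bayes factor (risky prior): about {bf_risky:.0f}")  # roughly 420
print(f"Bayes factor (vague prior): about {bf_vague:.0f}")  # roughly 35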