Stories by Mark Rubin on Medium
Published
Author Mark Rubin

In this new article, I consider questionable research practices in the field of metascience. A questionable metascience practice (QMP) is a research practice, assumption, or perspective that’s been questioned by several commentators as being potentially problematic for metascience and/or the science reform movement. I discuss 10 QMPs that relate to criticism, replication, bias, generalization, and the characterization of science.

Researchers often distinguish between: (1) exploratory hypothesis tests — unplanned tests of post hoc hypotheses that may be based on the current results, and (2) confirmatory hypothesis tests — planned tests of a priori hypotheses that are independent from the current results. This distinction is supposed to be useful because exploratory results are assumed to be more “tentative” and “open to bias” than confirmatory results.

In this paper (Rubin, 2022), I make two related points: (1) researchers should halve two-sided p values if they wish to use them to make directional claims, and (2) researchers should not halve their alpha level if they’re using two one-sided tests to test two directional null hypotheses. Sometimes, two-sided tests are called “two-tailed” tests!
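The arithmetic behind these two points can be sketched in a few lines of Python. This is a minimal illustration I’ve added, not code from the paper; the z statistic and alpha level are hypothetical.

```python
import math

def two_sided_p(z):
    """Two-sided p value for a z statistic under a standard normal null."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

z = 1.96  # hypothetical test statistic
p_two_sided = two_sided_p(z)

# Point (1): halve the two-sided p value to support a directional claim
# (e.g., "the effect is positive"), because only one tail is relevant.
p_directional = p_two_sided / 2

# Point (2): when two one-sided tests address two separate directional
# null hypotheses, each test keeps the full alpha level (no halving).
alpha = 0.05
```

For z = 1.96 the two-sided p value is about .05, so the directional p value is about .025, while alpha for each one-sided test stays at .05.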

In this paper (Rubin, 2021), I consider two types of Type I error probability. The Neyman-Pearson Type I error rate refers to the maximum frequency of incorrectly rejecting a null hypothesis if a test were repeated on a series of different random samples that are all drawn from the exact same null population. Hence, the Neyman-Pearson Type I error rate refers to a long run of exact replications.
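The “long run of exact replications” can be illustrated with a small simulation. This is my own hedged sketch, not code from the paper: it repeatedly draws samples from one fixed null population and records how often a two-sided z test with alpha = .05 rejects the (true) null hypothesis.

```python
import math
import random

def z_test_p(sample, mu0=0.0, sigma=1.0):
    """Two-sided z test p value; population sigma assumed known."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(42)
alpha, reps, rejections = 0.05, 20_000, 0
for _ in range(reps):
    # Exact replication: every sample is drawn from the same null population.
    sample = [random.gauss(0.0, 1.0) for _ in range(30)]
    if z_test_p(sample) <= alpha:
        rejections += 1

type_i_error_rate = rejections / reps  # hovers around alpha in the long run
```

Over many exact replications, the observed rejection frequency converges toward the nominal alpha level, which is what the Neyman-Pearson error rate describes.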

In this paper (Rubin, 2020), I consider Fisher’s criticism that the Neyman-Pearson approach to hypothesis testing relies on the assumption of “repeated sampling from the same population.” This criticism is problematic for the Neyman-Pearson approach because it implies that test users need to know, for sure, what counts as the same or equivalent population as their current population.

In this paper (Rubin, 2017), I consider Gelman and Loken’s (2013, 2014) garden of forking paths problem. Forking paths occur when researchers decide which analyses to perform based on information from their sample. For example, a researcher may decide whether or not to drop an item from a scale based on how it affects the scale’s Cronbach alpha coefficient within the current sample.
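The Cronbach alpha example can be made concrete with a short sketch. The data and the decision rule below are hypothetical illustrations I’ve added, not material from the paper.

```python
def cronbach_alpha(items):
    """Cronbach's alpha; `items` is a list of per-item score lists."""
    k = len(items)

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - sum(var(i) for i in items) / var(totals))

# Hypothetical 3-item scale answered by 4 respondents.
item1 = [1, 2, 3, 4]
item2 = [1, 2, 3, 4]
item3 = [4, 3, 2, 1]  # poorly behaved item

alpha_full = cronbach_alpha([item1, item2, item3])
alpha_dropped = cronbach_alpha([item1, item2])

# The forking path: the decision to drop item3 is made only after seeing
# that alpha improves within the current sample.
if alpha_dropped > alpha_full:
    chosen_items = [item1, item2]
```

Because the analysis path (drop vs. keep) is chosen on the basis of the sample itself, a different sample could have led down a different fork, which is exactly Gelman and Loken’s concern.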

The Costs of HARKing: Does it Matter if Researchers Engage in Undisclosed Hypothesizing After the Results are Known? While no one’s looking, a Texas sharpshooter fires his gun at a barn wall, walks up to his bullet holes, and paints targets around them. When his friends arrive, he points at the targets and claims he’s a good shot. Source: Dirk-Jan Hoek.

Preregistration entails researchers registering their planned research hypotheses, methods, and analyses in a time-stamped document before they undertake their data collection and analyses. This document is then made available with the published research report to allow readers to identify discrepancies between what the researchers originally planned to do and what they actually ended up doing.
