Work
"The Logit Model Measurement Problem" (forthcoming in Philosophy of Science)
Traditional wisdom dictates that statistical model outputs are estimates, not measurements. Despite this, statistical models are employed as measurement instruments in the social sciences. In this article, I scrutinize the use of a specific model—the logit model—for psychological measurement. Adopting a criterion for measurement that I call comparability, I show that the logit model fails to yield measurements because of properties that follow from its fixed residual variance.
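A minimal numerical sketch of the "fixed residual variance" at issue (an illustration, not from the paper): in the latent-variable formulation of the logit model, the error term follows a standard logistic distribution, whose variance is fixed at pi^2/3 rather than estimated from the data. The check below integrates the logistic density numerically.

```python
import math

def logistic_pdf(x):
    # Density of the standard logistic distribution.
    e = math.exp(-abs(x))
    return e / (1.0 + e) ** 2

def logistic_variance(lo=-40.0, hi=40.0, n=400_000):
    # Trapezoidal integration of x^2 * f(x) over a wide interval;
    # the mean is 0, so this approximates the variance.
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * x * x * logistic_pdf(x)
    return total * h

print(round(logistic_variance(), 4))  # ≈ 3.2899
print(round(math.pi ** 2 / 3, 4))     # 3.2899
```

Because this residual variance cannot shrink as predictors are added, logit coefficients are implicitly rescaled across model specifications, which is one source of the comparability problem.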
A paper on machine learning (email for manuscript)
Models in machine learning are opaque; some call them 'black-box models' for this reason. It is often argued that opaque models need special ethical and legal attention not afforded to classical statistical models. There is no consensus in the literature, however, about what opacity is or how it distinguishes models in machine learning from classical statistical models. Words like 'explainability' and 'interpretability' are used interchangeably to characterize model opacity.
In this paper, I argue for three main conclusions: (1) If we recognize a distinction between formal and efficient explanations, then black-box models are explainable, at least formally. (2) Given that black-box models are formally explainable, opacity is best understood as a lack of interpretability rather than a lack of explainability. Interpretability, I argue, can be precisely defined as a mapping between model terms and relations in a real system. (3) Once model opacity is defined in terms of interpretability rather than explainability, some legal and ethical worries about the use of black-box models for public-facing decisions are deflated.
"Carnap and Bar-Hillel's Theory of Semantic Information" (forthcoming)
This paper is included in the forthcoming 'Carnap Handbuch' published by Metzler Verlag and edited by Christian Damböck and Georg Schiemer.
A paper on the language dependence objection to probability (email for manuscript)
In this paper, I offer a new defense of Carnap's inductive logic and the logical interpretation of probability that goes with it. While many consider language relativity to undermine the objectivity of Carnap's system of inductive logic, I show that deductive logic is vulnerable to the same concerns about language relativity. It is better, I propose, to accept that both inductive and deductive logic offer an account of objective relations between propositions, but not of objective initial truth value or probability assignments. I argue that the logical interpretation of probability is the correct interpretation of non-physical probability, where non-physical probability characterizes, for example, the non-deductive support relation between evidence and hypothesis.
A paper criticizing the reliabilist justification of frequentist hypothesis testing (email for manuscript)
Frequentist hypothesis testing faces the probabilist objection: frequentist inferences about hypotheses are not justified, the objection goes, because the probability of the hypothesis itself is never given. Frequentist hypothesis testing draws inferences from the probability of the evidence given the hypothesis under consideration (the p-value): if the evidence is very unlikely given the hypothesis, the hypothesis is rejected. This pattern of inference is invalid, the probabilist objector complains, because an inference about the hypothesis requires the probability of the hypothesis given the evidence. In other words, the frequentist hypothesis tester commits the base-rate fallacy. In this paper, I show that an attempt to avoid the probabilist objection by appeal to reliabilism fails.
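The base-rate fallacy at issue can be sketched with a short calculation (the numbers below are illustrative assumptions, not from the paper): a low probability of the evidence given the hypothesis does not fix the probability of the hypothesis given a rejection unless a base rate is supplied.

```python
# Assumed, hypothetical values for illustration only.
alpha = 0.05         # p(reject | H0 true): the significance threshold
power = 0.80         # p(reject | H0 false)
base_rate_h0 = 0.90  # assumed prior share of tests where H0 is true

# Among all rejections, the fraction where H0 was in fact true.
false_rejections = alpha * base_rate_h0
true_rejections = power * (1 - base_rate_h0)
p_h0_given_reject = false_rejections / (false_rejections + true_rejections)

print(round(p_h0_given_reject, 2))  # 0.36
```

With these assumed numbers, over a third of rejections at the 0.05 level concern true hypotheses, even though every rejected hypothesis made the evidence improbable.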