New arXiv paper argues Bayesian accounts miss key Humean ingredients of causal judgment
What the paper claims
A new preprint on arXiv, "Hume's Representational Conditions for Causal Judgment: What Bayesian Formalization Abstracted Away" (arXiv:2604.03387), reopens a classical epistemological debate for cognitive science: David Hume’s conditions for forming causal ideas are not mere historical curiosities, the author contends, but substantive representational constraints that modern Bayesian models routinely abstract away. The paper identifies three conditions in Hume’s account — experiential grounding (ideas must trace back to sensory impressions), structured retrieval (associative processes operate through organized networks rather than simple pairwise links), and vivacity transfer (inference produces a felt conviction, not merely an updated probability) — and argues that probabilistic formalisms capture only part of this picture.
Why this matters beyond philosophy
Why should machine-learning researchers or experimental psychologists care? Because if causal judgment depends on sensory grounding, memory structure, and a phenomenology of conviction, then purely probabilistic treatments — which treat beliefs as distributions updated by evidence — may miss crucial constraints on how humans actually form and report causal beliefs. The paper sketches implications for cognitive modeling and for AI systems that aim to emulate or interpret human causal reasoning: can a black‑box Bayesian updater reproduce the felt certainty of a scientist’s “this caused that” conclusion? The author suggests not, and calls for hybrid models that reintroduce representational content and mnemonic architecture into formal accounts.
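To make concrete what "purely probabilistic" means here, consider a minimal sketch (not the paper's model; all names are illustrative assumptions): a conjugate Beta-Bernoulli updater for the belief "C causes E". The entire belief state is two numbers, and the output is a posterior probability — nothing in the representation encodes sensory impressions, mnemonic structure, or felt conviction, which is precisely the gap the paper highlights.

```python
# Minimal illustrative sketch, assuming a Beta-Bernoulli model of the
# belief "C causes E". Function names are hypothetical, not from the paper.

def update_belief(alpha: float, beta: float, co_occurred: bool) -> tuple[float, float]:
    """Conjugate update: one observation of C followed (or not) by E."""
    return (alpha + 1, beta) if co_occurred else (alpha, beta + 1)

def belief_mean(alpha: float, beta: float) -> float:
    """Posterior mean -- the model's entire notion of 'causal belief'."""
    return alpha / (alpha + beta)

# Start from a uniform Beta(1, 1) prior; observe 8 co-occurrences, 2 misses.
a, b = 1.0, 1.0
for outcome in [True] * 8 + [False] * 2:
    a, b = update_belief(a, b, outcome)

print(belief_mean(a, b))  # 0.75
```

On this view, the updater's state after any sequence of evidence is exhaustively described by the pair (alpha, beta); two agents with the same counts are representationally identical, however differently they arrived at them — which is the kind of abstraction the paper argues loses Hume's conditions.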
Context and next steps
The work is presented as a theoretical contribution rather than an empirical study: it reviews Humean texts and maps their conditions onto contemporary modeling choices, pointing to experiments and computational architectures that could operationalize the three conditions. For readers unfamiliar with arXiv, it is a publicly available preprint platform where ideas circulate before peer review. Whether the community will take up the challenge — and whether richer models can be built without sacrificing Bayesian clarity and tractability — remains an open question.
