
(2018) Synthese 195 (6).

Content and misrepresentation in hierarchical generative models

Alex Kiefer, Jakob Hohwy

pp. 2387-2415

In this paper, we consider how certain longstanding philosophical questions about mental representation may be answered on the assumption that cognitive and perceptual systems implement hierarchical generative models, such as those discussed within the prediction error minimization (PEM) framework. We build on existing treatments of representation via structural resemblance, such as those in Gładziejewski (Synthese 193(2):559–582, 2016) and Gładziejewski and Miłkowski (Biol Philos, 2017), to argue for a representationalist interpretation of the PEM framework. We further motivate the proposed approach to content by arguing that it is consistent with approaches implicit in theories of unsupervised learning in neural networks. In the course of this discussion, we argue that the structural representation proposal, properly understood, has more in common with functional-role than with causal/informational or teleosemantic theories. In the remainder of the paper, we describe the PEM framework for approximate Bayesian inference in some detail, and discuss how structural representations might arise within the proposed Bayesian hierarchies. After explicating the notion of variational inference, we define a subjectively accessible measure of misrepresentation for hierarchical Bayesian networks by appeal to the Kullback–Leibler divergence between posterior generative and approximate recognition densities, and discuss a related measure of objective misrepresentation in terms of correspondence with the facts.
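As a minimal illustration of the quantity the abstract invokes, the sketch below computes the Kullback–Leibler divergence between two discrete distributions. The distributions `p` and `q` are hypothetical stand-ins for a posterior and an approximate recognition density over a small latent state space; they do not come from the paper itself, which develops the measure for full hierarchical Bayesian networks.

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence D_KL(p || q) for discrete distributions.

    Assumes q > 0 wherever p > 0 (absolute continuity); terms with p = 0
    contribute nothing by the convention 0 * log 0 = 0.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Hypothetical example: a "true" posterior p over three latent states,
# and an approximate recognition density q that slightly misrepresents it.
p = np.array([0.7, 0.2, 0.1])
q = np.array([0.6, 0.3, 0.1])

divergence = kl_divergence(p, q)
```

The divergence is zero exactly when the two densities coincide and grows as the approximation diverges from the posterior, which is what makes it a natural candidate for a graded, subjectively accessible measure of misrepresentation.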

Publication details

DOI: 10.1007/s11229-017-1435-7

Full citation:

Kiefer, A., Hohwy, J. (2018). Content and misrepresentation in hierarchical generative models. Synthese 195 (6), pp. 2387-2415.
