The Mind/Brain Identity Theory

The identity theory of mind holds that states and processes of the mind are identical to states and processes of the brain. Strictly speaking, it need not hold that the mind is identical to the brain. Idiomatically we do use ‘She has a good mind’ and ‘She has a good brain’ interchangeably but we would hardly say ‘Her mind weighs fifty ounces’. Here I take identifying mind and brain as being a matter of identifying processes and perhaps states of the mind and brain. Consider an experience of pain, or of seeing something, or of having a mental image. The identity theory of mind is to the effect that these experiences just are brain processes, not merely correlated with brain processes.

Some philosophers hold that though experiences are brain processes they nevertheless have fundamentally non-physical, psychical, properties, sometimes called ‘qualia’. Here I shall take the identity theory as denying the existence of such irreducible non-physical properties. Some identity theorists give a behaviouristic analysis of mental states, such as beliefs and desires, but others, sometimes called ‘central state materialists’, say that mental states are actual brain states. Identity theorists often describe themselves as ‘materialists’ but ‘physicalists’ may be a better word. That is, one might be a materialist about mind but nevertheless hold that there are entities referred to in physics that are not happily described as ‘material’.

In taking the identity theory (in its various forms) as a species of physicalism, I should say that this is an ontological, not a translational physicalism. It would be absurd to try to translate sentences containing the word ‘brain’ or the word ‘sensation’ into sentences about electrons, protons and so on. Nor can we so translate sentences containing the word ‘tree’. After all ‘tree’ is largely learned ostensively, and is not even part of botanical classification. If we were small enough a dandelion might count as a tree. Nevertheless a physicalist could say that trees are complicated physical mechanisms. The physicalist will deny strong emergence in the sense of some philosophers, such as Samuel Alexander and possibly C.D. Broad. The latter remarked (Broad 1937) that as far as was known at that time the properties of common salt cannot be deduced from the properties of sodium in isolation and of chlorine in isolation. (He put it too epistemologically: chaos theory shows that even in a deterministic theory physical consequences can outrun predictability.) Of course the physicalist will not deny the harmless sense of "emergence" in which an apparatus is not just a jumble of its parts (Smart 1981).

  • 1. Historical Antecedents
  • 2. The Nature of the Identity Theory
  • 3. Phenomenal Properties and Topic-Neutral Analyses
  • 4. Causal Role Theories
  • 5. Functionalism and Identity Theory
  • 6. Type and Token Identity Theories
  • 7. Consciousness
  • 8. Later Objections to the Identity Theory
  • Bibliography
  • Other Internet Resources
  • Related Entries

1. Historical Antecedents

The identity theory as I understand it here goes back to U.T. Place and Herbert Feigl in the 1950s. Historically, materialism had been embraced by philosophers and scientists such as Leucippus, Hobbes, La Mettrie and d'Holbach, as well as by Karl Vogt who, following Pierre-Jean-Georges Cabanis, made the preposterous remark (perhaps not meant to be taken too seriously) that the brain secretes thought as the liver secretes bile. However, here I shall date interest in the identity theory from the pioneering papers ‘Is Consciousness a Brain Process?’ by U.T. Place (Place 1956) and ‘The “Mental” and the “Physical”’ by H. Feigl (Feigl 1958). Nevertheless mention should be made of suggestions by Rudolf Carnap (1932, p. 127), H. Reichenbach (1938) and M. Schlick (1935). Reichenbach said that mental events can be identified by the corresponding stimuli and responses, much as the (possibly unknown) internal state of a photo-electric cell can be identified by the stimulus (light falling on it) and the response (electric current flowing) from it. In both cases the internal states can be physical states. However Carnap did regard the identity as a linguistic recommendation rather than as a question of fact. See his ‘Herbert Feigl on Physicalism’ in Schilpp (1963), especially p. 886. The psychologist E.G. Boring (1933) may well have been the first to use the term ‘identity theory’. See Place (1990).

Place's very original and pioneering paper was written after discussions at the University of Adelaide with J.J.C. Smart and C.B. Martin. For recollections of Martin's contributions to the discussion see Place (1989) ‘Low Claim Assertions’ in Heil (1989). Smart at the time argued for a behaviourist position in which mental events were elucidated purely in terms of hypothetical propositions about behaviour, as well as first person reports of experiences which Gilbert Ryle regarded as ‘avowals’. Avowals were thought of as mere pieces of behaviour, as if saying that one had a pain was just doing a sophisticated sort of wince. Smart saw Ryle's theory as friendly to physicalism though that was not part of Ryle's motivation. Smart hoped that the hypotheticals would ultimately be explained by neuroscience and cybernetics. Being unable to refute Place, and recognizing the unsatisfactoriness of Ryle's treatment of inner experience, to some extent recognized by Ryle himself (Ryle 1949, p. 240), Smart soon became converted to Place's view (Smart 1959). In this he was also encouraged and influenced by Feigl's ‘The “Mental” and the “Physical”’ (Feigl 1958, 1967). Feigl's wide-ranging contribution covered many problems, including those connected with intentionality, and he introduced the useful term ‘nomological danglers’ for the dualists' supposed mental-physical correlations. They would dangle from the nomological net of physical science and should strike one as implausible excrescences on the fair face of science. Feigl (1967) contains a valuable ‘Postscript’.

2. The Nature of the Identity Theory

Place spoke of constitution rather than of identity. One of his examples is ‘This table is an old packing case’. Another is ‘lightning is an electric discharge’. Indeed this latter was foreshadowed by Place in his earlier paper ‘The Concept of Heed’ (Place 1954), in which he took issue with Ryle's behaviourism as it applied to concepts of consciousness, sensation and imagery. Place remarked (p. 255)

The logical objections which might be raised to the statement ‘consciousness is a process in the brain’ are no greater than the logical objections which might be raised to the statement ‘lightning is a motion of electric charges’.

It should be noticed that Place was using the word ‘logical’ in the way that it was used at Oxford at the time, not in the way that it is normally used now. One objection was that ‘sensation’ does not mean the same as ‘brain process’. Place's reply was to point out that ‘this table’ does not mean the same as ‘this old packing case’ and ‘lightning’ does not mean the same as ‘motion of electric charges’. We find out whether this is a table in a different way from the way in which we find out that it is an old packing case. We find out whether a thing is lightning by looking and that it is a motion of electric charges by theory and experiment. This does not prevent the table being identical to the old packing case and the perceived lightning being nothing other than an electric discharge. Feigl and Smart put the matter more in terms of the distinction between meaning and reference. ‘Sensation’ and ‘brain process’ may differ in meaning and yet have the same reference. ‘Very bright planet seen in the morning’ and ‘very bright planet seen in the evening’ both refer to the same entity Venus. (Of course these expressions could be construed as referring to different things, different sequences of temporal stages of Venus, but not necessarily or most naturally so.)

There did seem to be a tendency among philosophers to think that identity statements needed to be necessary and a priori truths. However identity theorists have treated ‘sensations are brain processes’ as contingent. We had to find out that the identity holds. Aristotle, after all, thought that the brain was for cooling the blood. Descartes thought that consciousness is immaterial.

It was sometimes objected that sensation statements are incorrigible whereas statements about brains are corrigible. The inference was made that there must be something different about sensations. Ryle and in effect Wittgenstein toyed with the attractive but quite implausible notion that ostensible reports of immediate experience are not really reports but are ‘avowals’, as if my report that I have toothache is just a sophisticated sort of wince. Place, influenced by Martin, was able to explain the relative incorrigibility of sensation statements by their low claims: ‘I see a bent oar’ makes a bigger claim than ‘It looks to me that there is a bent oar’. Nevertheless my sensation and my putative awareness of the sensation are distinct existences and so, by Hume's principle, it must be possible for one to occur without the other. One should deny anything other than a relative incorrigibility (Place 1989).

As remarked above, Place preferred to express the theory by the notion of constitution, whereas Smart preferred to make prominent the notion of identity as it occurs in the axioms of identity in logic. So Smart had to say that if sensation X is identical to brain process Y then if Y is between my ears and is straight or circular (absurdly to oversimplify) then the sensation X is between my ears and is straight or circular. Of course it is not presented to us as such in experience. Perhaps only the neuroscientist could know that it is straight or circular. The professor of anatomy might be identical with the dean of the medical school. A visitor might know that the professor hiccups in lectures but not know that the dean hiccups in lectures.

3. Phenomenal Properties and Topic-Neutral Analyses

Someone might object that the dean of the medical school does not qua dean hiccup in lectures. Qua dean he goes to meetings with the vice-chancellor. This is not to the point but there is a point behind it. This is that the property of being the professor of anatomy is not identical with the property of being the dean of the medical school. The question might be asked: even if sensations are identical with brain processes, are there not introspected non-physical properties of sensations that are not identical with properties of brain processes? How would a physicalist identity theorist deal with this? The answer (Smart 1959) is that the properties of experiences are ‘topic neutral’. Smart adapted the words ‘topic-neutral’ from Ryle, who used them to characterise words such as ‘if’, ‘or’, ‘and’, ‘not’, ‘because’. If you overheard only these words in a conversation you would not be able to tell whether the conversation was one of mathematics, physics, geology, history, theology, or any other subject. Smart used the words ‘topic neutral’ in the narrower sense of being neutral between physicalism and dualism. For example ‘going on’, ‘occurring’, ‘intermittent’, ‘waxing’, ‘waning’ are topic neutral. So is ‘me’ in so far as it refers to the utterer of the sentence in question. Thus to say that a sensation is caused by lightning or the presence of a cabbage before my eyes leaves it open as to whether the sensation is non-physical as the dualist believes or is physical as the materialist believes. This sentence also is neutral as to whether the properties of the sensation are physical or whether some of them are irreducibly psychical. To see how this idea can be applied to the present purpose let us consider the following example.

Suppose that I have a yellow, green and purple striped mental image. We may also introduce the philosophical term ‘sense datum’ to cover the case of seeing or seeming to see something yellow, green and purple: we say that we have a yellow, green and purple sense datum. That is I would see or seem to see, for example, a flag or an array of lamps which is green, yellow and purple striped. Suppose also, as seems plausible, that there is nothing yellow, green and purple striped in the brain. Thus it is important for identity theorists to say (as indeed they have done) that sense data and images are not part of the furniture of the world. ‘I have a green sense datum’ is really just a way of saying that I see or seem to see something that really is green. This move should not be seen as merely an ad hoc device, since Ryle and J.L. Austin, in effect Wittgenstein, and others had provided arguments, as when Ryle argued that mental images were not a sort of ghostly picture postcard. Place characterised the fallacy of thinking that when we perceive something green we are perceiving something green in the mind as ‘the phenomenological fallacy’. He characterizes this fallacy (Place 1956):

the mistake of supposing that when the subject describes his experience, when he describes how things look, sound, smell, taste, or feel to him, he is describing the literal properties of objects and events on a peculiar sort of internal cinema or television screen, usually referred to in the modern psychological literature as the ‘phenomenal field’.

Of course, as Smart recognised, this leaves the identity theory dependent on a physicalist account of colour. His early account of colour (1961) was too behaviourist, and could not deal, for example, with the reversed spectrum problem, but he later gave a realist and objectivist account (Smart 1975). Armstrong had been realist about colour but Smart worried that if so colour would be a very idiosyncratic and disjunctive concept, of no cosmic importance, of no interest to extraterrestrials (for instance) who had different visual systems. Prompted by Lewis in conversation Smart came to realize that this was no objection to colours being objective properties.

One first gives the notion of a normal human percipient with respect to colour, for which there are objective tests in terms of ability to make discriminations with respect to colour. This can be done without circularity. Thus ‘discriminate with respect to colour’ is a more primitive notion than is that of colour. (Compare the way that in set theory ‘equinumerous’ is antecedent to ‘number’.) Then Smart elucidated the notion of colour in terms of the discriminations with respect to colour of normal human percipients in normal conditions (say cloudy Scottish daylight). This account of colour may be disjunctive and idiosyncratic. (Maxwell's equations might be of interest to Alpha Centaurians but hardly our colour concepts.) Anthropocentric and disjunctive they may be, but objective none the less. David R. Hilbert (1987) identifies colours with reflectances, thus reducing the idiosyncrasy and disjunctiveness. A few epicycles are easily added to deal with radiated light, the colours of rainbows or the sun at sunset and the colours due to diffraction from feathers. John Locke was on the right track in making the secondary qualities objective as powers in the object, but erred in making these powers to be powers to produce ideas in the mind rather than to make behavioural discriminations. (Also Smart would say that if powers are dispositions we should treat the secondary qualities as the categorical bases of these powers, e.g. in the case of colours, properties of the surfaces of objects.) Locke's view suggested that the ideas have mysterious qualia observed on the screen of an internal mental theatre. However to do Locke justice he does not talk in effect of ‘red ideas’ but of ‘ideas of red’. Philosophers who elucidate ‘is red’ in terms of ‘looks red’ have the matter the wrong way round (Smart 1995).
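The set-theoretic comparison can be spelled out with the standard textbook definitions (the formulas are illustrative and are not drawn from Smart's own discussion): sets are first said to be equinumerous when there is a bijection between them, and sameness of number is then explained in terms of equinumerosity, just as discrimination with respect to colour is antecedent to colour.

    \[ A \approx B \iff \text{there is a bijection } f\colon A \to B \]
    \[ |A| = |B| \iff A \approx B \]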

Let us return to the issue of us having a yellow, purple and green striped sense datum or mental image and yet there being no yellow, purple and green striped thing in the brain. The identity theorist (Smart 1959) can say that sense data and images are not real things in the world: they are like the average plumber. Sentences ostensibly about the average plumber can be translated into, or elucidated in terms of, sentences about plumbers. So also there is the having of a green sense datum or image but there are no sense data or images, and the having of a green sense datum or image is not itself green. So it can, so far as this goes, easily be a brain process which is not green either.

Thus Place (1956, p. 49):

When we describe the after-image as green... we are saying that we are having the sort of experience which we normally have when, and which we have learned to describe as, looking at a green patch of light.

and Smart (1959) says:

When a person says ‘I see a yellowish-orange after-image’ he is saying something like this: “There is something going on which is like what is going on when I have my eyes open, am awake, and there is an orange illuminated in good light in front of me”.

Quoting these passages, David Chalmers (1996, p. 360) objects that if ‘something is going on’ is construed broadly enough it is inadequate, and if it is construed narrowly enough to cover only experiential states (or processes) it is not sufficient for the conclusion. Smart would counter this by stressing the word ‘typically’. Of course a lot of things go on in me when I have a yellow after-image (for example my heart is pumping blood through my brain). However they do not typically go on then: they go on at other times too. Against Place, Chalmers says that the word ‘experience’ is unanalysed and so Place's analysis is insufficient to establish an identity between sensations and brain processes. As against Smart he says that leaving the word ‘experience’ out of the analysis renders it inadequate. That is, he does not accept the ‘topic-neutral’ analysis. Smart hopes, and Chalmers denies, that the account in terms of ‘typically’ saves the topic-neutral analysis. In defence of Place one might perhaps say that it is not clear that the word ‘experience’ cannot be given a topic neutral analysis, perhaps building on Farrell (1950). If we do not need the word ‘experience’ neither do we need the word ‘mental’. Rosenthal (1994) complains (against the identity theorist) that experiences have some characteristically mental properties, and that ‘We inevitably lose the distinctively mental if we construe these properties as neither physical nor mental’. Of course to be topic neutral is to be able to be both physical and mental, just as arithmetic is. There is no need for the word ‘mental’ itself to occur in the topic neutral formula. ‘Mental’, as Ryle (1949) suggests, in its ordinary use is a rather grab-bag term (‘mental arithmetic’, ‘mental illness’, etc.) with which an identity theorist finds no trouble.

4. Causal Role Theories

In their accounts of mind, David Lewis and D.M. Armstrong emphasise the notion of causality. Lewis's 1966 paper was a particularly clear-headed presentation of the identity theory, in which he says (I here refer to the reprint in Lewis 1983, p. 100):

My argument is this: The definitive characteristic of any (sort of) experience as such is its causal role, its syndrome of most typical causes and effects. But we materialists believe that these causal roles which belong by analytic necessity to experiences belong in fact to certain physical states. Since these physical states possess the definitive character of experiences, they must be experiences.

Similarly, Robert Kirk (1999) has argued for the impossibility of zombies. If the supposed zombie has all the behavioural and neural properties ascribed to it by those who argue from the possibility of zombies against materialism, then the zombie is conscious and so not a zombie.

Thus there is no need for explicit use of Ockham's Razor, as there is in Smart (1959) though not in Place (1956). (See Place 1960.) Lewis's paper was extremely valuable and already there are hints of a marriage between the identity theory of mind and so-called ‘functionalist’ ideas that are explicit in Lewis 1972 and 1994. In his 1972 (‘Psychophysical and Theoretical Identifications’) he applies ideas in his more formal paper ‘How to Define Theoretical Terms’ (1970). Folk psychology contains words such as ‘sensation’, ‘perceive’, ‘belief’, ‘desire’, ‘emotion’, etc. which we recognise as psychological. Words for colours, smells, sounds, tastes and so on also occur. One can regard common sense platitudes containing both these sorts of words as constituting a theory, and we can take the psychological words as theoretical terms of common sense psychology and thus as denoting whatever entities or sorts of entities uniquely realise the theory. If certain neural states uniquely realise the theory too (as we believe), then the mental states must be these neural states. In his 1994 he allows for tact in extracting a consistent theory from common sense. One cannot uncritically collect platitudes, just as in producing a grammar, implicit in our speech patterns, one must allow for departures from what on our best theory would constitute grammaticality.

A great advantage of this approach over the early identity theory is its holism. Two features of this holism should be noted. One is that the approach is able to allow for the causal interactions between brain states and processes themselves, as well as in the case of external stimuli and responses. Another is the ability to draw on the notion of Ramseyfication of a theory. F.P. Ramsey had shown how to replace the theoretical terms of a theory such as ‘the property of being an electron’ by ‘the property X such that…’, so that when this is done for all the theoretical terms, we are left only with ‘property X such that’, ‘property Y such that’ etc. Take the terms describing behaviour as the observation terms and psychological terms as the theoretical ones of folk psychology. Then Ramseyfication shows that folk psychology is compatible with materialism. This seems right, though perhaps the earlier identity theory deals more directly with reports of immediate experience.
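Schematically, and only as a standard illustration of Ramsey's technique rather than anything asserted above: write the folk theory as T(t1, …, tn; O1, …, Om), where the ti are the psychological (theoretical) terms and the Oj the behavioural (observational) terms. The Ramsey sentence replaces each theoretical term by an existentially bound variable:

    \[ \exists X_1 \ldots \exists X_n \; T(X_1, \ldots, X_n;\, O_1, \ldots, O_m) \]

If, as the identity theorist believes, certain neural states are what uniquely satisfy the open formula, then those neural states are the states the psychological terms denote.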

The causal approach was also characteristic of D.M. Armstrong's careful conceptual analysis of mental states and processes, such as perception and the secondary qualities, sensation, consciousness, belief, desire, emotion, voluntary action, in his A Materialist Theory of the Mind (1968a) with a second edition (1993) containing a valuable new preface. Parts I and II of this book are concerned with conceptual analysis, paving the way for a contingent identification of mental states and processes with material ones. As had Brian Medlin, in an impressive critique of Ryle and defence of materialism (Medlin 1967), Armstrong preferred to describe the identity theory as ‘Central State Materialism’. Independently of Armstrong and Lewis, Medlin's central state materialism depended, as theirs did, on a causal analysis of concepts of mental states and processes. See Medlin 1967, and 1969 (including endnote 1).

Mention should particularly be made here of two of Armstrong's other books, one on perception (1961), and one on bodily sensations (1962). Armstrong thought of perception as coming to believe by means of the senses (compare also Pitcher 1971). This combines the advantages of Direct Realism with hospitality towards the scientific causal story which had been thought to have supported the earlier representative theory of perception. Armstrong regarded bodily sensations as perceptions of states of our body. Of course the latter may be mixed up with emotional states, as an itch may include a propensity to scratch, and contrariwise in exceptional circumstances pain may be felt without distress. However, Armstrong sees the central notion here as that of perception. This suggests a terminological problem. Smart had talked of visual sensations. These were not perceptions but something which occurred in perception. So in this sense of ‘sensation’ there should be bodily sensation sensations. The ambiguity could perhaps be resolved by using the word ‘sensing’ in the context of ‘visual’, ‘auditory’, ‘tactile’ and ‘bodily’, so that bodily sensations would be perceivings which involved introspectible ‘sensings’. These bodily sensations are perceptions and there can be misperceptions, as when a person whose foot has been amputated thinks that he has a pain in the foot. He has a sensing ‘having a pain in the foot’ but the world does not contain a pain in the foot, just as it does not contain sense data or images but does contain havings of sense data and of images.

Armstrong's central state materialism involved identifying beliefs and desires with states of the brain (1968a). Smart came to agree with this. On the other hand Place resisted the proposal to extend the identity theory to dispositional states such as beliefs and desires. He stressed that we do not have privileged access to our beliefs and desires. Like Ryle he thought that beliefs and desires were to be elucidated by means of hypothetical statements about behaviour, and gave the analogy of the horsepower of a car (Place 1967). However he held that the dispute here is not so much about the neural basis of mental states as about the nature of dispositions. His views on dispositions are argued at length in his debate with Armstrong and Martin (Armstrong, Martin and Place, T. Crane (ed.) 1996). Perhaps we can be relaxed about whether mental states such as beliefs and desires are dispositions or are topic-neutrally described neurophysiological states and return to what seems to be the more difficult issue of consciousness. Causal identity theories are closely related to Functionalism, to be discussed in the next section. Smart had been wary of the notion of causality in metaphysics, believing that it had no place in theoretical physics. However even so he should have admitted it in folk psychology and also in scientific psychology and biology generally, in which physics and chemistry are applied to explain generalisations rather than strict laws. If folk psychology uses the notion of causality, it is no matter if it is what Quine has called second-grade discourse, involving the very contextual notions of modality.

5. Functionalism and Identity Theory

It has commonly been thought that the identity theory has been superseded by a theory called ‘functionalism’. It could be argued that functionalists greatly exaggerate their difference from identity theorists. Indeed some philosophers, such as Lewis (1972 and 1994) and Jackson, Pargetter and Prior (1982), have seen functionalism as a route towards an identity theory.

Like Lewis and Armstrong, functionalists define mental states and processes in terms of their causal relations to behaviour but stop short of identifying them with their neural realisations. Of course the term ‘functionalism’ has been used vaguely and in different ways, and it could be argued that even the theories of Place, Smart and Armstrong were at bottom functionalist. The word ‘functionalism’ has affinities with ‘function’ in mathematics and also with ‘function’ in biology. In mathematics a function is a set of ordered n-tuples. Similarly if mental processes are defined directly or indirectly by sets of stimulus-response pairs the definitions could be seen as ‘functional’ in the mathematical sense. However there is probably a closer connection with the term as it is used in biology, as one might define ‘eye’ by its function even though a fly's eye and a dog's eye are anatomically and physiologically very different. Functionalism identifies mental states and processes by means of their causal roles, and as noted above in connection with Lewis, we know that the functional roles are possessed by neural states and processes. (There are teleological and homuncular forms of functionalism, which I do not consider here.) Nevertheless an interactionist dualist such as the eminent neurophysiologist Sir John Eccles would (implausibly for most of us) deny that all functional roles are so possessed. One might think of folk psychology, and indeed much of cognitive science too, as analogous to a ‘block diagram’ in electronics. A box in the diagram might be labelled (say) ‘intermediate frequency amplifier’ while remaining neutral as to the exact circuit and whether the amplification is carried out by a thermionic valve or by a transistor. Using the terminology of F. Jackson and P. Pettit (1988, pp. 381–400), the ‘role state’ would be given by ‘amplifier’ and the ‘realiser state’ by ‘thermionic valve’, say. So we can think of functionalism as a ‘black box’ theory. This line of thought will be pursued in the next section.
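The role state/realiser state distinction can be loosely illustrated by programming to an interface (a toy sketch only; the class and function names below are invented for the example and are not drawn from Jackson and Pettit):

    from abc import ABC, abstractmethod

    class Amplifier(ABC):
        """The 'role state': anything counts that plays the amplifying role in the block diagram."""
        @abstractmethod
        def amplify(self, signal: float) -> float:
            ...

    class ThermionicValve(Amplifier):
        """One realiser of the role."""
        def amplify(self, signal: float) -> float:
            return signal * 20.0  # arbitrary gain, for illustration only

    class Transistor(Amplifier):
        """A physically quite different realiser of the same role."""
        def amplify(self, signal: float) -> float:
            return signal * 20.0

    def intermediate_frequency_stage(amp: Amplifier, signal: float) -> float:
        # The diagram level cares only about the causal role, not about which realiser fills it.
        return amp.amplify(signal)

    print(intermediate_frequency_stage(ThermionicValve(), 0.5))
    print(intermediate_frequency_stage(Transistor(), 0.5))

The same output is produced whichever realiser is plugged in, which is the point of calling functionalism a ‘black box’ theory.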

Thinking very much in causal terms about beliefs and desires fits in very well not only with folk psychology but also with Humean ideas about the motives of action. Though this point of view has been criticised by some philosophers it does seem to be right, as can be seen if we consider a possible robot aeroplane designed to find its way from Melbourne to Sydney. The designer would have to include an electronic version of something like a map of south-eastern Australia. This would provide the ‘belief’ side. One would also have to program in an electronic equivalent of ‘go to Sydney’. This program would provide the ‘desire’ side. If wind and weather pushed the aeroplane off course then negative feedback would push the aeroplane back on to the right course for Sydney. The existence of purposive mechanisms has at last (I hope) shown philosophers that there is nothing mysterious about teleology. Nor are there any great semantic problems over intentionality (with a ‘t’). Consider the sentence ‘Joe desires a unicorn’. This is not like ‘Joe kicks a football’. For Joe to kick a football there must be a football to be kicked, but there are no unicorns. However we can say ‘Joe desires-true of himself “possesses a unicorn”’. Or more generally ‘Joe believes-true S’ or ‘Joe desires-true S’ where S is an appropriate sentence (Quine 1960, pp. 206–16). Of course if one does not want to relativise to a language one needs to insert ‘or some samesayer of S’ or use the word ‘proposition’, and this involves the notion of proposition or intertranslatability. Even if one does not accept Quine's notion of indeterminacy of translation, there is still fuzziness in the notions of ‘belief’ and ‘desire’ arising from the fuzziness of ‘analyticity’ and ‘synonymy’. The identity theorist could say that on any occasion this fuzziness is matched by the fuzziness of the brain state that constitutes the belief or desire. Just how many interconnections are involved in a belief or desire? On a holistic account such as Lewis's one need not suppose that individuation of beliefs and desires is precise, even though it is good enough for folk psychology and Humean metaethics. Thus the way in which the brain represents the world might not be like a language. The representation might be like a map. A map relates every feature on it to every other feature. Nevertheless maps contain a finite amount of information. They do not have infinitely many parts, still less continuum many. We can think of beliefs as expressing the different bits of information that could be extracted from the map. Thinking in this way, beliefs would correspond near enough to the individualist beliefs characteristic of folk and Humean psychology.
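A minimal sketch of the negative-feedback mechanism in the robot aeroplane example above (purely illustrative; the bearing for Sydney, the gain, and the variable names are all invented for the example):

    # Toy negative-feedback navigator: the stored course plays the 'belief' role,
    # the target plays the 'desire' role, and feedback corrects deviations.
    TARGET_BEARING = 60.0   # 'desire': head for Sydney (made-up bearing in degrees)
    GAIN = 0.5              # how strongly a deviation is corrected on each step

    bearing = TARGET_BEARING
    for step in range(8):
        wind_push = 5.0 if step == 3 else 0.0   # weather pushes the aeroplane off course
        bearing += wind_push
        error = TARGET_BEARING - bearing        # compare the current 'belief' with the goal
        bearing += GAIN * error                 # negative feedback steers back towards Sydney
        print(f"step {step}: bearing {bearing:.2f}")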

6. Type and Token Identity Theories

The notions ‘type’ and ‘token’ here come by analogy from ‘type’ and ‘token’ as applied to words. A telegram ‘love and love and love’ contains only two type words but in another sense, as the telegraph clerk would insist, it contains five words (‘token words’). Similarly a particular pain (more exactly a having of a pain) according to the token identity theory is identical to a particular brain process. A functionalist could agree to this. Functionalism came to be seen as an improvement on the identity theory, and as inconsistent with it, because of the correct assertion that a functional state can be realised by quite different brain states: thus a functional state might be realised by a silicon based brain as well as by a carbon based brain, and leaving robotics or science fiction aside, my feeling of toothache could be realised by a different neural process from what realises your toothache.

As far as this goes a functionalist can at any rate accept token identities. Functionalists commonly deny type identities. However Jackson, Pargetter and Prior (1982) and Braddon-Mitchell and Jackson (1996) argue that this is an over-reaction on the part of the functionalist. (Indeed they see functionalism as a route to the identity theory.) The functionalist may define mental states as having some state or other (e.g., carbon based or silicon based) which accounts for the functional properties. The functionalist second order state is a state of having some first order state or other which causes or is caused by the behaviour to which the functionalist alludes. In this way we have a second order type theory. Compare brittleness. The brittleness of glass and the brittleness of biscuits are both the state of having some property which explains their breaking, though the first order physical property may be different in the two cases. This way of looking at the matter is perhaps more plausible in relation to mental states such as beliefs and desires than it is to immediately reported experiences. When I report a toothache I do seem to be concerned with first order properties, even though topic neutral ones.
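The second order construction can be put schematically (a standard rendering for illustration, not a formula from the text): to be in mental state M is to have some first order state or other that occupies M's causal role, just as to be brittle is to have some property or other that explains breaking.

    \[ M(x) \iff \exists P \,\bigl( P(x) \wedge P \text{ occupies the causal role associated with } M \text{ in } x \bigr) \]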

If we continue to concern ourselves with first order properties, we could say that the type-token distinction is not an all or nothing affair. We could say that human experiences are brain processes of one lot of sorts and Alpha Centaurian experiences are brain processes of another lot of sorts. We could indeed propose much finer classifications without going to the limit of mere token identities.

How restricted should a restricted type theory be? How many hairs must a bald man have no more of? An identity theorist would expect his toothache today to be very similar to his toothache yesterday. He would expect his toothache to be quite similar to his wife's toothache. He would expect his toothache to be somewhat similar to his cat's toothache. He would not be confident about similarity to an extra-terrestrial's pain. Even here, however, he might expect some similarities of wave form or the like.

Even in the case of the similarity of my pain now to my pain ten minutes ago, there will be unimportant dissimilarities, and also between my pain and your pain. Compare topiary, making use of an analogy exploited by Quine in a different connection. In English country gardens the tops of box hedges are often cut in various shapes, for example peacock shapes. One might make generalizations about peacock shapes on box hedges, and one might say that all the imitation peacocks on a particular hedge have the same shape. However if we approach the two imitation peacocks and peer into them to note the precise shapes of the twigs that make them up we will find differences. Whether we say that two things are similar or not is a matter of abstractness of description. If we were to go to the limit of concreteness the types would shrink to single membered types, but there would still be no ontological difference between identity theory and functionalism.

An interesting form of token identity theory is the anomalous monism of Davidson 1980. Davidson argues that causal relations occur under the neural descriptions but not under the descriptions of psychological language. The latter descriptions use intentional predicates, but because of indeterminacy of translation and of interpretation, these predicates do not occur in law statements. It follows that mind-brain identities can occur only on the level of individual (token) events. It would be beyond the scope of the present essay to consider Davidson's ingenious approach, since it differs importantly from the more usual forms of identity theory.

7. Consciousness

Place answered the question ‘Is Consciousness a Brain Process?’ in the affirmative. But what sort of brain process? It is natural to feel that there is something ineffable here, something which no mere neurophysiological process (with only physical intrinsic properties) could have. There is a challenge to the identity theorist to dispel this feeling.

Suppose that I am riding my bicycle from my home to the university. Suddenly I realise that I have crossed a bridge over a creek, gone along a twisty path for half a mile, avoided oncoming traffic, and so on, and yet have no memories of all this. In one sense I was conscious: I was perceiving, getting information about my position and speed, the state of the bicycle track and the road, the positions and speeds of approaching cars, the width of the familiar narrow bridge. But in another sense I was not conscious: I was on ‘automatic pilot’. So let me use the word ‘awareness’ for this automatic or subconscious sort of consciousness. Perhaps I am not one hundred percent on automatic pilot. For one thing I might be absent-minded and thinking about philosophy. Still, this would not be relevant to my bicycle riding. One might indeed wonder whether one is ever one hundred percent on automatic pilot, and perhaps one hopes that one isn't, especially in Armstrong's example of the long distance truck driver (Armstrong 1962). Still it probably does happen, and if it does the driver is conscious only in the sense that he or she is alert to the route, to oncoming traffic, etc., i.e. is perceiving in the sense of ‘coming to believe by means of the senses’. The driver gets the beliefs but is not aware of doing so. There is no suggestion of ineffability in this sense of ‘consciousness’, for which I shall reserve the term ‘awareness’.

For the full consciousness, the one that puzzles us and suggests ineffability, we need the sense elucidated by Armstrong in a debate with Norman Malcolm (Armstrong and Malcolm 1984, p. 110). Somewhat similar views have been expressed by other philosophers, such as Savage (1976), Dennett (1991), Lycan (1996), Rosenthal (1996). A recent presentation of it is in Smart (2004). In the debate with Norman Malcolm, Armstrong compared consciousness with proprioception. A case of proprioception occurs when with our eyes shut and without touch we are immediately aware of the angle at which one of our elbows is bent. That is, proprioception is a special sense, different from that of bodily sensation, in which we become aware of parts of our body. Now the brain is part of our body and so perhaps immediate awareness of a process in, or a state of, our brain may here for present purposes be called ‘proprioception’, even though the neuroanatomy is different. Thus the proprioception which constitutes consciousness, as distinguished from mere awareness, is a higher order awareness, a perception of one part of (or configuration in) our brain by the brain itself. Some may sense circularity here. If so let them suppose that the proprioception occurs in an in practice negligible time after the process propriocepted. Then perhaps there can be proprioceptions of proprioceptions, proprioceptions of proprioceptions of proprioceptions, and so on up, though in fact the sequence will probably not go up more than two or three steps. The last proprioception in the sequence will not be propriocepted, and this may help to explain our sense of the ineffability of consciousness. Compare Gilbert Ryle in The Concept of Mind on the systematic elusiveness of ‘I’ (Ryle 1949, pp. 195–8).

Place has argued that the function of the ‘automatic pilot’, to which he refers as ‘the zombie within’, is to alert consciousness to inputs which it identifies as problematic, while it ignores non-problematic inputs or re-routes them to output without the need for conscious awareness. For this view of consciousness see Place (1999).

8. Later Objections to the Identity Theory

Mention should here be made of influential criticisms of the identity theory by Saul Kripke and David Chalmers respectively. It will not be possible to discuss them in great detail, partly because Kripke's remarks rely on views about modality, possible worlds semantics, and essentialism which some philosophers would want to contest, and partly because Chalmers' long and rich book would deserve a lengthy answer. Kripke (1980) calls an expression a rigid designator if it refers to the same object in every possible world. Or in counterpart theory it would have an exactly similar counterpart in every possible world. It seems to me that what we count as counterparts is highly contextual. Take the example ‘water is H2O’. In another world, or in a twin earth in our world as Putnam imagines (1975), the stuff found in rivers, lakes, and the sea would not be H2O but XYZ and so would not be water. This is certainly giving preference to real chemistry over folk chemistry, and so far I applaud this. There are therefore contexts in which we say that on twin earth or the envisaged possible world the stuff found in rivers would not be water. Nevertheless there are contexts in which we could envisage a possible world (write a science fiction novel) in which being found in rivers and lakes and the sea, assuaging thirst and sustaining life was more important than the chemical composition and so XYZ would be the counterpart of H2O.

Kripke considers the identity ‘heat = molecular motion’, and holds that this is true in every possible world and so is a necessary truth. Actually the proposition is not quite true, for what about radiant heat? What about heat as defined in classical thermodynamics, which is ‘topic neutral’ compared with statistical thermodynamics? Still, suppose that heat has an essence and that it is molecular motion, or at least is in the context envisaged. Kripke says (1980, p. 151) that when we think that molecular motion might exist in the absence of heat we are confusing this with thinking that the molecular motion might have existed without being felt as heat. He asks whether it is analogously possible that, if pain is a certain sort of brain process, it might have existed without being felt as pain. He suggests that the answer is ‘No’. An identity theorist who accepted the account of consciousness as a higher order perception could answer ‘Yes’. We might be aware of a damaged tooth and also of being in an agitation condition (to use Ryle's term for emotional states) without being aware of our awareness. An identity theorist such as Smart would prefer talk of ‘having a pain’ rather than of ‘pain’: pain is not part of the furniture of the world any more than a sense datum or the average plumber is. Kripke concludes (p. 152) that the

apparent contingency of the connection between the mental state and the corresponding brain state thus cannot be explained by some sort of qualitative analogue as in the case of heat.

Smart would say that there is a sense in which the connection of sensations (sensings) and brain processes is only half contingent. A complete description of the brain state or process (including causes and effects of it) would imply the report of inner experience, but the latter, being topic neutral and so very abstract, would not imply the neurological description.

Chalmers (1996), in the course of his exhaustive study of consciousness, developed a theory of non-physical qualia which to some extent avoids the worry about nomological danglers. The worry expressed by Smart (1959) is that if there were non-physical qualia there would, most implausibly, have to be laws relating neurophysiological processes to apparently simple properties, and the correlation laws would have to be fundamental, mere danglers from the nomological net (as Feigl called it) of science. Chalmers counters this by supposing that the qualia are not simple but are made up of simple proto-qualia unknown to us, and that the fundamental laws relating these to physical entities relate them to fundamental physical entities. His view comes to a rather interesting panpsychism. On the other hand if the topic neutral account is correct, then qualia are no more than points in a multidimensional similarity space, and the overwhelming plausibility will fall on the side of the identity theorist.
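A deliberately toy picture of what ‘points in a multidimensional similarity space’ might mean (the three dimensions and the coordinates below are invented purely for illustration and carry no empirical weight):

    import math

    # Each colour experience is represented as nothing more than a point;
    # similarity between experiences is just closeness of points.
    points = {
        "red":    (1.0, 0.1, 0.1),
        "orange": (0.9, 0.4, 0.1),
        "green":  (0.1, 0.9, 0.2),
    }

    def dissimilarity(a: str, b: str) -> float:
        """Euclidean distance between two experience-points."""
        return math.dist(points[a], points[b])

    # Red is more similar to orange than to green: a purely relational fact
    # about positions in the space.
    print(dissimilarity("red", "orange") < dissimilarity("red", "green"))  # True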

On Chalmers' view how are we aware of non-physical qualia? It has been suggested above that this inner awareness is proprioception of the brain by the brain. But what sort of story is possible in the case of awareness of a quale? Chalmers could have some sort of answer to this by means of his principle of coherence according to which the causal neurological story parallels the story of succession of qualia. It is not clear however that this would make us aware of the qualia. The qualia do not seem to be needed in the physiological story of how an antelope avoids a tiger.

People often think that even if a robot could scan its own perceptual processes this would not mean that the robot was conscious. This appeals to our intuitions, but perhaps we could reverse the argument and say that because the robot can be aware of its awareness the robot is conscious. I have given reason above to distrust intuitions, but in any case Chalmers comes some of the way in that he toys with the idea that a thermostat has a sort of proto-qualia. The dispute between identity theorists (and physicalists generally) and Chalmers comes down to our attitude to phenomenology. Certainly walking in a forest, seeing the blue of the sky, the green of the trees, the red of the track, one may find it hard to believe that our qualia are merely points in a multidimensional similarity space. But perhaps that is what it is like (to use a phrase that can be distrusted) to be aware of a point in a multidimensional similarity space. One may also, as Place would suggest, be subject to ‘the phenomenological fallacy’. At the end of his book Chalmers makes some speculations about the interpretation of quantum mechanics. If they succeed then perhaps we could envisage Chalmers' theory as integrated into physics and him as a physicalist after all. However it could be doubted whether we need to go down to the quantum level to understand consciousness or whether consciousness is relevant to quantum mechanics.

Bibliography

  • Armstrong, D.M., 1961, Perception and the Physical World, London: Routledge.
  • –––, 1962, Bodily Sensations, London: Routledge.
  • –––, 1968a, A Materialist Theory of the Mind, London: Routledge; second edition, with new preface, 1993.
  • –––, 1968b, ‘The Headless Woman Illusion and the Defence of Materialism’, Analysis, 29: 48–49.
  • –––, 1984, ‘Consciousness and Causality’ and ‘Reply’, in D.M. Armstrong and N. Malcolm, Consciousness and Causality, Oxford: Blackwell.
  • –––, 1999, The Mind-Body Problem: An Opinionated Introduction, Boulder, CO: Westview Press.
  • Armstrong, D.M., Martin, C.B. and Place, U.T., 1996, Dispositions: A Debate, T. Crane (ed.), London: Routledge.
  • Braddon-Mitchell, D. and Jackson, F., 1996, Philosophy of Mind and Cognition, Oxford: Blackwell.
  • Broad, C.D., 1937, The Mind and its Place in Nature, London: Routledge and Kegan Paul.
  • Campbell, K., 1984, Body and Mind, Notre Dame, IN: University of Notre Dame Press.
  • Carnap, R., 1932, ‘Psychologie in Physikalischer Sprache’, Erkenntnis, 3: 107–142; English translation in A.J. Ayer (ed.), Logical Positivism, Glencoe, IL: Free Press, 1959.
  • –––, 1963, ‘Herbert Feigl on Physicalism’, in Schilpp 1963, pp. 882–886.
  • Chalmers, D.M., 1996, The Conscious Mind, New York: Oxford University Press.
  • Clark, A., 1993, Sensory Qualities, Oxford: Oxford University Press.
  • Davidson, D., 1980, ‘Mental Events’, ‘The Material Mind’ and ‘Psychology as Part of Philosophy’, in D. Davidson, Essays on Actions and Events, Oxford: Clarendon Press.
  • Dennett, D.C., 1991, Consciousness Explained, Boston: Little, Brown.
  • Farrell, B.A., 1950, ‘Experience’, Mind, 59: 170–198.
  • Feigl, H., 1958, ‘The “Mental” and the “Physical”’, in H. Feigl, M. Scriven and G. Maxwell (eds.), Concepts, Theories and the Mind-Body Problem (Minnesota Studies in the Philosophy of Science, Volume 2), Minneapolis: University of Minnesota Press; reprinted with a Postscript in Feigl 1967.
  • –––, 1967, The ‘Mental’ and the ‘Physical’, The Essay and a Postscript, Minneapolis: University of Minnesota Press.
  • Heil, J. (ed.), 1989, Cause, Mind and Reality: Essays Honoring C.B. Martin, Dordrecht: Kluwer Academic Publishers.
  • Hilbert, D.R., 1987, Color and Color Perception: A Study in Anthropocentric Realism, Stanford: CSLI Publications.
  • Hill, C.S., 1991, Sensations: A Defense of Type Materialism, Cambridge: Cambridge University Press.
  • Jackson, F., 1998, ‘What Mary didn't know’ and ‘Postscript on qualia’, in F. Jackson, Mind, Method and Conditionals, London: Routledge.
  • Jackson, F. and Pettit, P., 1988, ‘Functionalism and Broad Content’, Mind, 97: 381–400.
  • Jackson, F., Pargetter, R. and Prior, E., 1982, ‘Functionalism and Type-Type Identity Theories’, Philosophical Studies, 42: 209–225.
  • Kirk, R., 1999, ‘Why There Couldn't be Zombies’, Proceedings of the Aristotelian Society (Supplementary Volume), 73: 1–16.
  • Kripke, S., 1980, Naming and Necessity, Cambridge, MA: Harvard University Press.
  • Levin, M.E., 1979, Metaphysics and the Mind-Body Problem, Oxford: Clarendon Press.
  • Lewis, D., 1966, ‘An Argument for the Identity Theory’, Journal of Philosophy, 63: 17–25.
  • –––, 1970, ‘How to Define Theoretical Terms’, Journal of Philosophy, 67: 427–446.
  • –––, 1972, ‘Psychophysical and Theoretical Identifications’, Australasian Journal of Philosophy, 50: 249–258.
  • –––, 1983, ‘Mad Pain and Martian Pain’ and ‘Postscript’, in D. Lewis, Philosophical Papers (Volume 1), Oxford: Oxford University Press.
  • –––, 1989, ‘What Experience Teaches’, in W. Lycan (ed.), Mind and Cognition, Oxford: Blackwell.
  • –––, 1994, ‘Reduction of Mind’, in S. Guttenplan (ed.), A Companion to the Philosophy of Mind, Oxford: Blackwell.
  • Lycan, W.G., 1996, Consciousness and Experience, Cambridge, MA: MIT Press.
  • Medlin, B.H., 1967, ‘Ryle and the Mechanical Hypothesis’, in C.F. Presley (ed.), The Identity Theory of Mind, St. Lucia, Queensland: Queensland University Press.
  • –––, 1969, ‘Materialism and the Argument from Distinct Existences’, in J.J. MacIntosh and S. Coval (eds.), The Business of Reason, London: Routledge and Kegan Paul.
  • Pitcher, G., 1971, A Theory of Perception, Princeton, NJ: Princeton University Press.
  • Place, U.T., 1954, ‘The Concept of Heed’, British Journal of Psychology, 45: 243–255.
  • –––, 1956, ‘Is Consciousness a Brain Process?’, British Journal of Psychology, 47: 44–50.
  • –––, 1960, ‘Materialism as a Scientific Hypothesis’, Philosophical Review, 69: 101–104.
  • –––, 1967, ‘Comments on Putnam's “Psychological Predicates”’, in W.H. Capitan and D.D. Merrill (eds.), Art, Mind and Religion, Pittsburgh: Pittsburgh University Press.
  • –––, 1988, ‘Thirty Years on – Is Consciousness still a Brain Process?’, Australasian Journal of Philosophy, 66: 208–219.
  • –––, 1989, ‘Low Claim Assertions’, in J. Heil (ed.), Cause, Mind and Reality: Essays Honoring C.B. Martin, Dordrecht: Kluwer Academic Publishers.
  • –––, 1990, ‘E.G. Boring and the Mind-Brain Identity Theory’, British Psychological Society, History and Philosophy of Science Newsletter, 11: 20–31.
  • –––, 1999, ‘Connectionism and the Problem of Consciousness’, Acta Analytica, 22: 197–226.
  • –––, 2004, Identifying the Mind, New York: Oxford University Press.
  • Putnam, H., 1960, ‘Minds and Machines’, in S. Hook (ed.), Dimensions of Mind, New York: New York University Press.
  • –––, 1975, ‘The Meaning of “Meaning”’, in H. Putnam, Mind, Language and Reality, Cambridge: Cambridge University Press.
  • Quine, W.V.O., 1960, Word and Object, Cambridge, MA: MIT Press.
  • Reichenbach, H., 1938, Experience and Prediction, Chicago: University of Chicago Press.
  • Rosenthal, D.M., 1994, ‘Identity Theories’, in S. Guttenplan (ed.), A Companion to the Philosophy of Mind, Oxford: Blackwell, pp. 348–355.
  • –––, 1996, ‘A Theory of Consciousness’, in N. Block, O. Flanagan, and G. Güzeldere (eds.), The Nature of Consciousness, Cambridge, MA: MIT Press.
  • Ryle, G., 1949, The Concept of Mind, London: Hutchinson.
  • Savage, C.W., 1976, ‘An Old Ghost in a New Body’, in G.G. Globus, G. Maxwell and I. Savodnik (eds.), Consciousness and the Brain, New York: Plenum Press.
  • Schilpp, P.A. (ed.), 1963, The Philosophy of Rudolf Carnap, La Salle, IL: Open Court.
  • Schlick, M., 1935, ‘De la Relation des Notions Psychologiques et des Notions Physiques’, Revue de Synthese, 10: 5–26; English translation in H. Feigl and W. Sellars (eds.), Readings in Philosophical Analysis, New York: Appleton-Century-Crofts, 1949.
  • Smart, J.J.C., 1959, ‘Sensations and Brain Processes’, Philosophical Review, 68: 141–156.
  • –––, 1961, ‘Colours’, Philosophy, 36: 128–142.
  • –––, 1963, ‘Materialism’, Journal of Philosophy, 60: 651–662.
  • –––, 1975, ‘On Some Criticisms of a Physicalist Theory of Colour’, in Chung-ying Cheng (ed.), Philosophical Aspects of the Mind-Body Problem, Honolulu: University of Hawai‘i Press.
  • –––, 1978, ‘The Content of Physicalism’, Philosophical Quarterly, 28: 339–341.
  • –––, 1981, ‘Physicalism and Emergence’, Neuroscience, 6: 109–113.
  • –––, 1995, ‘“Looks Red” and Dangerous Talk’, Philosophy, 70: 545–554.
  • –––, 2004, ‘Consciousness and Awareness’, Journal of Consciousness Studies, 11: 41–50.

Other Internet Resources

[Please contact the author with suggestions.]

  • Identity Theories, an incomplete paper by U.T. Place, published in the Field Guide to Philosophy of Mind

Related Entries

consciousness | functionalism

Acknowledgments

I would like to express my thanks to David Armstrong, Frank Jackson and Ullin Place for comments on an earlier draft of this article and David Chalmers for careful editorial suggestions.

Copyright © 2007 by J. J. C. Smart

An evidence-based critical review of the mind-brain identity theory

Associated data.

The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.

Abstract

In the philosophy of mind, neuroscience, and psychology, the causal relationship between phenomenal consciousness, mentation, and brain states has always been a matter of debate. On the one hand, material monism posits consciousness and mind as pure brain epiphenomena. One of its most stringent lines of reasoning relies on a ‘loss-of-function lesion premise,’ according to which, since brain lesions and neurochemical modifications lead to cognitive impairment and/or altered states of consciousness, there is no reason to doubt the mind-brain identity. On the other hand, dualism or idealism (in one form or another) regard consciousness and mind as something other than the sole product of cerebral activity, pointing at the ineffable, undefinable, and seemingly unphysical nature of our subjective qualitative experiences and their related mental dimension. Here, several neuroscientific findings are reviewed that question the idea that phenomenal experience is an emergent property of brain activity, and it is argued that the premise of material monism is based on a logical correlation-causation fallacy. While these (mostly ignored) findings, if considered separately from each other, could, in principle, be recast into a physicalist paradigm, once viewed from an integral perspective, they substantiate equally well an ontology that posits mind and consciousness as a primal phenomenon.

1. Introduction

Since the times of René Descartes in the 17th century, the mind–body problem has been one of the central debates in the philosophy of mind, psychology, and neuroscience. The conventional Cartesian dualism is no longer considered tenable but other forms of dualism, or theoretical frameworks of philosophical idealism, or more generally, non-physicalist ontologies, state that mind and consciousness cannot be explained as a mere result of neural processes.

Dualism is opposed by an identity theory which, instead, considers mind processes as identical to brain processes, and consciousness as nothing other than an emergent epiphenomenon arising from the collective activity of neurons. Sentience, with all its subjective dimensions of experiences, feelings, and thoughts, is a physical process determined only by the laws of physics. Qualia–the subjective, phenomenal, and mental experiences we can access only introspectively, such as the perception of color, or that of pain and pleasure–are physical brain states, while any speculation concerning an immaterial mind or consciousness is considered an unnecessary hypothesis.

Dualists and monists have different schools of thought but, despite the variety of opinions, it is fair to say that most scientists and philosophers consider themselves to be material monists. For example, according to a survey ( Bourget and Chalmers, 2020 ) 51.9% of philosophers declare themselves ‘physicalists’ vs. 32.1% as non-physicalists, and 15.9% as ‘other’. On the other hand, exceptional human experiences occur frequently in both the general population and in scientists and engineers ( Wahbeh et al., 2018 ).

However, there is a growing awareness that a mere functional investigation will not answer questions of a more philosophical nature. The belief that the progress of modern neuroscience would soon shed light on David Chalmers's notorious ‘hard problem of consciousness’ ( Chalmers, 1995 ) has turned out to be too optimistic. This is because, unlike other physical processes, in which both causes and effects can be observed from a third-person perspective, in consciousness studies one is confronted with a cause–the brain activity–that can be analyzed from a third-person perspective but that apparently produces an effect we call ‘conscious experience,’ or just ‘sentience,’ which can be apprehended only from a first-person perspective. This ‘perspectival asymmetry’ makes consciousness, in its subjective and experiential dimension, stand out as a phenomenon alien to any attempt at conceptual, causal, and ontological scientific reduction. Inside a naturalistic framework, the origin and ontology of phenomenal subjective conscious experience remain unclear.

While most arguments to date have been based on a physicalist line of reasoning (for a review, see Seth and Bayne, 2022 ), and other post-materialistic models of consciousness that are not exclusively based on brain activity exist (for a review and discussion see Wahbeh et al., 2022 ), here it is shown that there are also strictly neuroscientific facts that have not received sufficient appreciation and that give us good reasons to look upon the physicalist assumptions with a more critical eye. Non-neurocentric paradigms that posit mind and consciousness, rather than matter, as a fundamental primitive remain a viable option. No particular dualistic, panpsychist, Eastern philosophical, or metaphysical scheme is favored here. Rather, a variety of findings, especially when seen jointly and in relation to each other, suggest other possible ways of interpreting the neuroscientific evidence, ways that might even have more explanatory power in terms of an underlying post-material ontology.

A preliminary note of conceptual and terminological clarity is necessary. In psychology, the philosophy of mind, and the neurological sciences, the words ‘consciousness,’ ‘mind’, and ‘self-awareness’ are defined and used with different meanings, sometimes with overlapping or conflated semantics. In fact, for historical reasons, the mind-brain identity theory used the terms ‘mind’ and ‘consciousness’ somewhat interchangeably ( Smart, 2022 ). Here, however, ‘consciousness’ will relate to phenomenal consciousness–that is, Nagel’s famous ‘what-it-is-like’ states ( Nagel, 1974 ) underlying our subjective qualitative experiences, ‘qualia,’ that which makes us sentient of perceptions, feelings, sensations, pleasures or pains, and self-aware as a unified subject. Phenomenal consciousness is not to be confused with ‘mind’ which, at least in the present context, relates to the cognitive functions of thought, memory, intelligence, ideas, concepts, and meanings. The two are to be kept distinct in the sense that the mind’s thoughts come and go, while the conscious experiencing subject is permanent. I deem this distinction necessary because the question concerning the physicality of the spectrum of all our psychological dimensions, as we are going to see later, may not have a unique answer. For example, one can argue for the unphysical nature of phenomenal consciousness but maintain that memory is in the brain, or that low-level cognitive functions (e.g., sensory perception modalities) are neuronal epiphenomena, while other high-level functions (decision-making, agency, reasoning, and planning) are not.

Having made this distinction, in the following, I will first examine more closely the logical framework that sustains a mechanistic conception by pointing out some conventional neurological causation-correlation fallacies.

Let us first question some basic assumptions. Does the physical change of a brain state leading to cognitive impairment or altered states of consciousness provide a necessary and sufficient logical proof that mind and consciousness are an emergent cerebral phenomenon?

After all, it is undeniable that there is a direct relation between the physical state of our brains and our subjective experiences (e.g., Aguinaga et al., 2018 ; Vollenweider and Preller, 2020 ; Davis et al., 2008 ). Dopamine is a neurotransmitter molecule that enables biochemical transmission among neurons, and its signaling mediates the effects of a drug like cocaine. We know that psychedelic drugs can lead to intense subjective effects. It is a well-known fact that brain damage can lead to severe cognitive impairments. If Broca’s area, a left cerebral hemisphere area, is lesioned, one loses the ability to speak (interestingly, though, not the ability to comprehend language). Someone anesthetized with anesthetic drugs (seemingly) ‘loses’ consciousness. And nowadays, we have a number of sophisticated brain scan technologies making it clear, beyond any reasonable doubt, that for every conscious experience there exists a neural correlate in our brains.

Thus, apparently, a neuroscience that is based on brain chemistry and loss-of-function lesion studies leaves no place for any form of non-material monistic approach. Mental states and conscious self-awareness seem to emerge from matter; there is no distinction. Our personalities, identities, moods, and states of consciousness seem to depend on the biophysical state of our brains.

And yet, a few further critical thoughts should make it clear that such a correlation is not a sufficiency criterion. One must secure one’s theoretical framework from the logical fallacy of believing that correlation implies causation. The fact that two events always coincide, or always happen shortly one after the other, does not imply that the first event caused the second event to happen. If event B always follows event A, we are not entitled to conclude that A is the cause of B. These sorts of logical fallacies are known as ‘post-hoc fallacies’.
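To make the point concrete, here is a schematic numerical illustration (with made-up variables and a hypothetical hidden factor, not data from any of the studies cited here): two signals driven by a common cause can be almost perfectly correlated even though neither causes the other.

```python
# Schematic illustration that correlation alone does not establish causation:
# two signals driven by a hidden common factor are strongly correlated even
# though neither causes the other. All variables here are made up.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

hidden_factor = rng.normal(size=n)                       # unobserved common cause
signal_a = hidden_factor + 0.3 * rng.normal(size=n)      # e.g., a measured "brain" variable
signal_b = hidden_factor + 0.3 * rng.normal(size=n)      # e.g., a reported "experience" variable

r = np.corrcoef(signal_a, signal_b)[0, 1]
print(f"correlation between A and B: {r:.2f}")           # close to 1

# Breaking the pairing between A and the common factor destroys the correlation:
# the association was carried entirely by the hidden cause, not by A acting on B.
shuffled_a = rng.permutation(signal_a)
print(f"after breaking the pairing:  {np.corrcoef(shuffled_a, signal_b)[0, 1]:.2f}")
```

The sketch is only illustrative: it shows why perfect co-occurrence of two observables is logically compatible with models in which they co-vary through some common or mediating structure rather than by one producing the other.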

Nevertheless, the belief that the explanation of our qualitative experiential dimension is, necessarily and sufficiently, to be found chiefly in neural circuits remains rarely questioned [with few exceptions, e.g., in the field of behavioral processes ( Gomez-Marin, 2017 )]. There is a general tendency to believe that causal mechanistic explanations based on lower-level neural properties are better than higher-level behavioral accounts. For example, Krakauer et al. pointed out that neuroscientists (and, I would add, too many psychologists and most analytical philosophers of mind) frequently use language to hide more than to reveal, by assuming that neural causal efficacy equals understanding–that is, charging it with an explanatory power it does not have. The result that “neural activity X is necessary and sufficient for behavior Y to occur” licenses a causal claim that is often followed by a further explanatory sentence rearticulating the same causal result with ‘filter verbs’ (such as “produces,” “generates,” “enables,” etc.), a move that masks the faulty logic and lets a metaphysical position pass as empirical data ( Krakauer et al., 2017 ).

But, what are the alternatives to the mind–body identification that could be in line with the above correlation between mental states and physical neural correlates of consciousness?

In fact, the metaphor most idealists prefer is the ‘filter theory of consciousness,’ which dates back to an original idea of William James, who stated: “My thesis is now this: that, when we think of the law that thought is a function of the brain, we are not required to think of productive function only; we are entitled also to consider permissive or transmissive function . And the ordinary psycho-physiologist leaves this out of his account ” (emphasis in the original text) ( James, 1898 ).

James conceived of the relation between the brain and thought in the frame of a ‘bidirectional transducer theory,’ using the analogy of a prism separating white light into its colored beams. If a broken prism fails in its function to ‘reveal’ the colored light beams, this should not lure us into the logical correlation-causation fallacy of concluding that the prism ‘produces’ colored light. The material and structural modification of the optical medium modifies the refractive gradient that ‘transduces’ light with a different chromatic dispersion but does not ‘create’ it. A prism is just an object with a transmissive function; it does not ‘generate’ anything.

Aldous Huxley expressed a similar idea and proposed that the brain is a ‘reducing valve’ of what he called a ‘Mind at large,’ a universal or cosmic Mind comprising all of reality with all ideas and all thoughts. According to Huxley, our mind filters reality under normal conditions because, otherwise, we would be overwhelmed by the knowledge of this universal Mind. Psychedelic drugs can remove the filter and bring us into contact with the Mind at large, leading to the experiences that several mystics describe. In his words: “To make survival possible biologically, Mind at large has to be funneled through the reducing valve of the brain and nervous system” ( Huxley, 1954 ). For Huxley, the brain was a material ‘connecting device,’ an ‘interface’ or ‘relay station.’ In this view, human mind is a localization of a universe-wide Mind projected into our brains. The brain filters and suppresses this universal Mind but does not ‘produce’ it.

An understanding of the mind-brain relationship reminiscent of Eastern philosophies, and that maintains similar views, is neatly summarized by the Indian mystic and poet Sri Aurobindo: “Our physical organism no more causes or explains thought and consciousness than the construction of an engine causes or explains the motive-power of steam or electricity. The force is anterior, not the physical instrument” ( Aurobindo, 1919 ).

From these perspectives, mind uses the brain as an instrument, as an interface of expression. Mind and consciousness are constrained by, and interdependent with, the brain but are not generated by the instrument itself.

Notice that this standpoint is not entirely alien to our ordinary understanding of how a digital computer works. Knowing everything about its hardware, and recreating its exact physical structure in every detail, would not lead us to a machine that does anything meaningful or useful. Software–that is, a running code written by an intelligent external agent–is needed. Here, also, a computer is only an instrument, a means of expression for a cognitive entity, not its origin or source. In fact, studying a microprocessor with the same criteria employed by modern neuroscience–trying to reverse-engineer its functions by analyzing local field potentials, or selectively lesioning its units and correlating this with its behavior–would turn out to be a quite difficult task: we would still have a long way to go to explain how it works and to figure out the whole running code, which is the real ‘agent’ causing the behavior of the machine ( Jonas and Kording, 2017 ).
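To make the lesion analogy concrete, the following toy sketch applies the loss-of-function rationale to a fully known artifact, a one-bit adder built from logic gates (an illustrative example in the spirit of Jonas and Kording (2017), not their actual procedure): knocking out each gate tells us which gate is ‘necessary’ for which outputs, but by itself it reveals little about the addition algorithm being implemented.

```python
# Toy "lesion study" on a known artifact: a 1-bit full adder built from logic
# gates. Each gate is knocked out in turn (forced to output 0) and we record
# how many input patterns now give wrong outputs -- the loss-of-function logic
# discussed above, applied to a system whose "algorithm" we already know.
from itertools import product

def full_adder(a, b, cin, lesioned=None):
    def gate(name, value):
        return 0 if name == lesioned else value
    x1 = gate("xor1", a ^ b)
    s = gate("xor2", x1 ^ cin)       # sum bit
    a1 = gate("and1", a & b)
    a2 = gate("and2", x1 & cin)
    cout = gate("or1", a1 | a2)      # carry bit
    return s, cout

for g in ["xor1", "xor2", "and1", "and2", "or1"]:
    broken = sum(
        full_adder(a, b, c, lesioned=g) != full_adder(a, b, c)
        for a, b, c in product([0, 1], repeat=3)
    )
    print(f"lesion {g}: {broken}/8 input patterns yield wrong outputs")
```

The printout shows that every gate is ‘necessary’ for some behavior, yet none of these necessity claims amounts to an explanation of binary addition; this is the gap between causal manipulation and understanding pointed to above.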

Thus, neural correlates of consciousness, or loss-of-function lesion-based studies, do not constitute a sufficient logical foundation for a mind-brain identity theory. We have the right to maintain the contrary hypothesis: consciousness, mental states, and emotional states are more or less ‘funneled through’ depending on the physical state of a brain. The brain could equally well be seen as a physical substrate through which these conscious states manifest, without leading to any inconsistency with current scientific knowledge. How current neuroscience not only fails to falsify this hypothesis but may even be suggestive of it is the subject of the next section, which reviews old and new neuroscientific findings that call for clarification if one wants to save the mind-consciousness-brain identity theory. Another part will review the evidence for the neural correlates of memory. A brief section will focus on the emerging fields in the study of plant and cellular ‘basal cognition.’ A discussion and concluding remarks will follow.

2. From (lack of) evidence to interpretation

2.1. The search for the ‘seat of consciousness’

Crick and Koch once postulated that the claustrum, a sheet-like neuronal structure hidden beneath the inner surface of the neocortex, might give rise to “integrated conscious percepts”–that is, act like the “seat of consciousness” ( Crick and Koch, 2005 ). Modern neuroscience, however, indicates that the claustrum behaves more like a neuronal information router than an organ responsible for a specific function ( Madden et al., 2022 ). To date, there is no evidence, not even indirect or circumstantial, of a single brain region, area, organ, anatomical feature, or modern Cartesian pineal gland that takes charge of this mysterious job of ‘producing’ or ‘generating’ consciousness. Most of the brain is busy processing sensory inputs, motor tasks, and automatic and sub- or unconscious physiological regulations (such as the heartbeat, breathing, the control of blood pressure and temperature, motor control, etc.) that do not lead to qualitative experiences. Neural activity alone cannot be a sufficient condition to lead to phenomenal consciousness. The vast majority of brain activity is unconscious–that is, non-conscious cognitive processes (e.g., mnemonic, perceptual, mental or linguistic tasks) and physiological processes (e.g., cardiac, hormonal, thermal regulation, etc.) taking place outside of our conscious awareness. This raises the question: What distinguishes a neural process that leads to a conscious experience from that which does not?

For example, the cerebellum is almost exclusively dedicated to motor control functions, and its impairment leads to equilibrium and movement disorders. However, it does not affect one’s state of consciousness. Its role in ‘generating’ experience seems to be marginal, if any. There are also rare cases of people who live without a cerebellum (‘cerebellar agenesis’) and have only mild or moderate motor deficits or other types of disorders ( Feng et al., 2015 ). This is a fact that seemingly confirms the brain’s proverbial neuro-plasticity, which we will see next through other extraordinary examples.

It may be worth recalling that the neuronal architecture in our bodies is not confined to the brain–that is, it goes far beyond our heads, through the brain stem, and down through the spinal cord. The central nervous system is made up of the brain and the spinal cord. The latter is responsible for the transmission of nerve signals from and to the motor cortex; as is well known, injury to it can result in paralysis. But, again, no cognitive deficit or alteration of the state of consciousness results from impairments of the spinal cord. This leaves only one option: if there is a ‘seat of consciousness,’ it must be identified somewhere in the cerebral cortex or subcortical areas of the brain ( Figure 1 ).

Figure 1. Case of cerebellar agenesis: Living (and walking) without the cerebellum. Credit: Feng et al. (2015) . Reproduced with permission of Oxford University Press.

Another example of how the correlation-causation fallacy conditions scientific and popular understanding of the mind–body problem is an experimental finding showing that stimulation of the thalamus arouses macaques from stable anesthesia ( Redinbaugh et al., 2020 ). Sleeping and anesthetized macaques could be aroused to a wake-like state by stimulation of the central lateral thalamus. The straightforward conclusion seemed clear. The ultimate origin and switch ‘modulating’ consciousness was discovered. If your consciousness ‘depends’ on the state of your thalamus, which is ‘switched’ on and off with the touch of a button, then the thalamus must be the ‘seat of consciousness.’ Is this an unavoidable conclusion?

First of all, taking the absence of an external physiological signature, observed from a third-person perspective, as evidence for a lack of internal first-person sentience is yet another correlation-causation fallacy that has too frequently led to unwarranted conclusions. For example, that anesthesia induces an unconscious state in which the patient has no subjective experience is far from obvious. We simply do not know whether it really induces a completely unconscious state or a conscious but non-metacognitive, no-report state that makes one unable to recall past experiences once one is back in the waking state. The former assumption is, unfortunately, taken in most cases as the standard scientific position, whereas there are indications that anesthetic-induced unresponsiveness does not entail complete disconnectedness ( Radek et al., 2018 ; Turku, 2018 ). Also interesting in this regard is so-called twilight anesthesia, an anesthetic technique that sedates patients only mildly and induces amnesia but no loss of consciousness ( Scheinin et al., 2020 ). During this ‘twilight state,’ patients are responsive and can be asked to perform some tasks that they will not be able to recollect after the surgery. This case alone shows that the inability to recall events during sedation is no proof of unconsciousness.

Moreover, there is now a non-negligible body of scientific literature, presenting empirical evidence on parasomnia (sleepwalking), hypnosis, non-REM sleep, and subjects in a vegetative state, indicating that some form of conscious awareness is also present in these non-responsive states (e.g., Owen and Coleman, 2006 ; Oudiette et al., 2009 ; Cruse et al., 2011 ; Siclari et al., 2018 ; Mackenzie, 2019 ). Arguing and extrapolating from the lack of superficial physical cues and mnemonic retention to a verdict that declares someone ‘unconscious’–that is, as having no subjective phenomenal experience–again betrays, at least from the philosophical perspective, a logical correlation-causation fallacy.

But even if we assume that there is no internal experience when we are anesthetized, the relevant question remains: Do these sorts of experimental findings confirm that the thalamus is the ‘seat of consciousness’? Is it a sort of modern replacement for Descartes’ pineal gland in its mechanistic-material monist version?

The thalamus is responsible for sensory information processing. It is known that its main job is to function as a relay and feedback station between sensory brain areas and the cerebral cortex. For example, it functions as a hub between the optic nerves, which transport the visual information coming from our retinas, and the visual cortex. Even if one remained conscious while the functionality of the thalamus was turned down, one would no longer see anything because the neural pathways between the retina and the visual cortex would be interrupted. From that, however, nobody would conclude that the thalamus is the seat of the visual experience for which the visual cortex is responsible, as we know that it is a ‘hub,’ a ‘transducer’ or a ‘filter.’ From this perspective, the thalamus’ function is to ‘integrate’ the information flow of the several brain areas; if this is disrupted, it leads to a ‘loss’ of consciousness.

Thus, these findings do not tell us much about the generation of conscious experience. However, if there is not one single ‘seat of consciousness,’ could it be that the combination and activity of some or all of the different brain areas do ‘produce’ the subjective experience? Considerable attention in this direction has been focused on theories such as the ‘Integrated Information Theory’ (IIT) ( Oizumi et al., 2014 ; Tononi, 2015 ) and the ‘Global Workspace Theory’ (GWT) ( Baars, 1988 ), according to which the amount and integration of information, together with the momentarily active and accessible memory, determine the level of consciousness of a conscious entity. A process of integrating the information and the memory coming from all the brain areas may be the efficient cause of our experiential richness. In fact, we have sufficient evidence that compels us to abandon the simplistic view of a compartmentalized brain, with modern neuroscience thinking more in terms of network science, in which several brain regions are highly interconnected and interdependent. No brain region does only one thing, and no neuron has only one function; most neurons serve several functions, not a single purpose. It turns out that whenever we hear a sound, have a visual experience, have feelings or emotions, or perform a motor task, the whole brain is involved. Even such an apparently highly specialized brain region as the primary visual cortex carries out information processes related to hearing, touch, and movement ( Merabet et al., 2008 ; Liang et al., 2013 ). The reason why we nevertheless tend to associate specific brain regions with specific cognitive, sensory, or motor functions is that brain scans show only a temporal snapshot of the brain’s most intense activity. We are seeing only a few ‘tips of the iceberg’ and missing the overall activity in the noise. When studies are conducted using less noisy but much more expensive and complicated detection methods, most of the brain’s activity becomes visible ( Gonzalez-Castillo et al., 2012 ). Therefore, it would seem plausible that if consciousness arises from the activity of a complex aggregation of neurons, at least some brain areas must work together as a unified whole via thalamic activity.
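As a purely numerical toy illustration of the notion of ‘integration’ these theories appeal to (a simplified illustrative sketch, not the actual IIT Φ measure nor any computation from the cited papers), one can contrast the information carried jointly by two coupled units with that of two independent ones:

```python
# Toy proxy for "integration": mutual information between two binary units.
# A system whose units share a common drive carries information jointly that
# independent units do not. This is only an illustrative stand-in, not IIT's phi.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

shared = rng.integers(0, 2, n)              # common drive for system A
a1 = shared ^ (rng.random(n) < 0.05)        # unit 1 of A, with small noise
a2 = shared ^ (rng.random(n) < 0.05)        # unit 2 of A, with small noise
b1 = rng.integers(0, 2, n)                  # system B: two independent units
b2 = rng.integers(0, 2, n)

def entropy(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def mutual_information(x, y):
    # I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from joint counts
    joint = np.histogram2d(x, y, bins=2)[0]
    return entropy(joint.sum(axis=1)) + entropy(joint.sum(axis=0)) - entropy(joint.ravel())

print("coupled system    :", round(mutual_information(a1, a2), 3), "bits")
print("independent system:", round(mutual_information(b1, b2), 3), "bits")
```

The numbers only illustrate what ‘informational integration’ can mean operationally; they do not, by themselves, settle whether such integration produces experience.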

However, how far these conjectures align with reality is questionable.

A natural question is whether, and how, the subjective feeling of selfhood would change if someone were to split your brain into two parts. Would you feel somewhat less conscious and less ‘yourself’? As is well known, this is a very real surgical procedure performed since the 1940s: the corpus callosotomy (although used only rarely nowadays). It is performed to treat the worst cases of epilepsy (patients having up to 30 seizures a day) that did not respond to medical treatment. In this procedure, the corpus callosum, the nerve tract connecting the left and right brain hemispheres, is severed (in part or, in some cases, entirely), thereby preventing the spread of epileptic activity between the two halves of the brain ( Figure 2 ). Its natural function is to ensure communication between the cerebral cortices of the two hemispheres so as to integrate and coordinate motor, sensory, and cognitive functions, such as moving the left and right limbs or integrating the left and right visual fields. Because most of the brain’s activity is distributed throughout both hemispheres, with no indication of one or the other part being responsible for generating our sense of ‘self,’ one must wonder how the patients who have gone through such an acute surgical intervention feel. Do their split brains ‘generate’ a dual consciousness and split personality?

Figure 2. Does brain-splitting cause ‘self-splitting’?

Disagreement exists about whether in these patients a unity of the subject is preserved or whether they display any signs of multiple first-person perspectives ( De Haan et al., 2020 ). They deny being a different person from what they were before surgery, and close relatives who knew the split-brain patients before and after surgery do not notice any personality change ( Bogen et al., 1965 ; Sperry, 1968 , 1984 ; Pinto et al., 2017 ).

Of course, there can be more or less severe drawbacks. In some cases, the so-called ‘alien-hand syndrome’ can take over, in which one hand appears to have a mind of its own. This occasionally happens when the two hemispheres’ representations of reality come into conflict and one wants to override the other. In these instances, decision-making and volition between the two hemispheres clash. An example is the patient’s struggle to overcome an antagonistic behavior, such as knowing which garment they want to wear while one of their hands takes control and reaches out for another garment they do not want at all. However, this should not be confused with two personalities competing against each other (as in the case of dissociative identity disorders), as split-brain patients identify with only one body and perceive their disobedient limb as being subject to an annoying motor misbehavior; they do not report any sensation of some other internal personality taking control. The brain–or, more precisely, our two brains–tells us two different ‘stories.’ Split-brain patients seem to identify with one of the stories–that is, consciously access one of its interpretations–and keep the other in a subconscious or subliminal awareness, an instance of what the American cognitive neuroscientist Michael Gazzaniga used to call the ‘left-brain interpreter.’

Recent investigations also question the canonical textbook findings ( Pinto et al., 2017 , 2020 ). While it is confirmed that a corpus callosotomy splits the visual perception of the environment in two, several patients can nevertheless see both halves and report them to the outside world–that is, they can access their language centers. Moreover, there is no evidence for memory loss ( Forsdyke, 2015 ).

In my view, confusion surrounding split-brain psychology arises only if we conflate the ‘unity of mind’ with a ‘unity of consciousness’ and sense of selfhood. If we do not take mental states to be the origin or efficient cause of consciousness, then any apparent paradox dissipates. Split-brain patients may have two (possibly even conflicting) hemispheric and motor-sensory mental states (something not entirely unusual in healthy subjects), but even if one argues and provides evidence for a ‘two-minds’ model, that would not imply a split sense of identity or self-awareness. One can consciously and subliminally be aware of a plurality of experiences, yet retain the experience of singularity. There can be several experiences and representations generated in a brain, with or without a representational unity, which, nevertheless, belong to and are experienced by one subject [for a more detailed analysis of this point see ( De Haan et al., 2021 )]. What is not observed is a ‘split subjective identity’ resulting from the split brain, in the sense of a symptomatology similar to what we know from dissociative identity disorder: a disruption of identity into two distinct personalities, differing not just in sensory-motor functioning or depersonalization, but each with its own psychological behavior, character, affects, and social preferences, experienced as alternating ‘possessions’ with cognitive discontinuities and different memories of autobiographical information, as observed by others and reported by the (alternating) subjects themselves.

So, if our subjective and conscious experience is generated by the integrated activity of the whole brain, why does such a radical bisection not lead to any modification of our state of awareness? Given the severing of the corpus callosum of a brain, one would expect a loss, or at least a diminishing, of conscious awareness because there would be a loss of working memory and information integration. However, nothing like this happens. The ‘unity of consciousness’ remains unaffected and, thereby, unexplained.

To save the paradigm, those who endorse the view that in such a brain condition consciousness can no longer be ‘integrated’ point out that not in all documented cases was a complete transection of the corpus callosum performed. The truth, however, is that in several cases the complete sectioning was performed and even confirmed by MRI imaging or radiological means ( Gazzaniga, 1985 ).

Yet, one may still point out that a complete transection still leaves some residual subcortical structures intact, which allows for some communication between the two hemispheres, potentially maintaining the ‘self’ of the patients.

To further substantiate the contrary hypothesis, one could mention cases in which there is no second hemisphere to communicate with in the first place. The most extreme surgical intervention to treat epilepsy is to remove an entire brain hemisphere, a procedure known as hemispherectomy. Usually, this is done only in childhood because, supposedly, young brains can rewire themselves much more efficiently than older ones. Figure 3 shows the fMRI in a sample of six rare high-functioning patients after partial or complete surgical removal of one cerebral hemisphere.

Figure 3. Hemispherectomy brain anatomy: six adults with left (HS2 and HS3) or right (HS1, HS4, HS5, and HS6) hemispherectomy. Credit: Kliemann et al. (2019) . Reproduced under the terms of CC BY-NC-ND.

Interestingly, Nature seems not to take the left/right distinction and the early-plasticity hypothesis so seriously. That the left–right distribution of brain tasks is not an inescapable neurological dogma is testified to by people born with only one hemisphere. For example, while in healthy subjects the left visual field is represented in the right hemisphere and vice versa, someone born with only one hemisphere can develop maps of both visual fields in it ( Muckli et al., 2009 ). Hemispherectomy in adults older than 18 years turns out to be just as safe and effective as in early childhood ( McGovern et al., 2019 ). Even in the case of a left hemispherectomy, Broca’s language area–which in normal conditions is in the left hemisphere–can be recovered in the right part of the brain ( Vargha-Khadem et al., 1997 ). There are further reports of subjects in whom a brain lobe was missing from childhood without any measurable linguistic impairment, as shown by the case of a woman who grew up without her left temporal lobe but speaks English and Russian ( Tuckute et al., 2022 ; Figure 4 ). This does not mean that persons missing a hemisphere do not suffer consequences–there is suboptimal word and face recognition ( Granovetter et al., 2022 )–but whether this plays a role in the unity of consciousness remains to be seen.

Figure 4. Speaking without the brain’s language area. Credit: Tuckute et al. (2022) . Copyright 2022, reproduced with permission from Elsevier.

A possible explanation is that, because these patients already had severe seizures originating in one of the hemispheres, the functional rewiring onto the other hemisphere began before the surgery. The findings tend to disconfirm this easy way out. Though interconnectivity inside the brain networks increases, interconnectivity between brain regions with the same function after hemispherectomy does not differ from that of two-hemisphere control subjects ( Kliemann et al., 2019 ). That plasticity alone can explain this state of affairs is far from proven (more on this later).

However that may be, most patients become seizure-free, and their cognition is relatively unchanged after surgery (some motor and cognitive functions decrease but others improve). Overall, these patients appear to be ‘normal.’ Cognitive measures typically changed little between surgery and follow-up ( Pulsifer et al., 2004 ), and in everyday life one could not tell the difference between humans having a whole brain or only half of one. And, most notably, the subjects report no ‘half-self,’ ‘half-awareness,’ or ‘half-consciousness.’

If the mind-brain identity theory is correct, and consciousness emerges as an integration of functional centers, with no particular ‘seat of consciousness,’ then only one brain hemisphere must be sufficient to accomplish the task.

But instances are found in which both hemispheres are severely damaged and there is not much left to integrate. It is worth recalling how, in 1980, the British pediatrician John Lorber reported that some adults cured of childhood hydrocephaly retained no more than 5% of the normal volume of brain tissue, with a cerebral cortex as thin as 1 mm ( Lewin, 1980 ). While some had cognitive and perceptual disorders and several developed epilepsy, others were surprisingly asymptomatic and even of above-average intelligence.

Then, in 2007, in Marseille, France, a 44-year-old man complaining of weakness in his left leg underwent an MRI brain scan ( Feuillet et al., 2007 ). As Figure 5 shows, the skull was abnormally filled with cerebrospinal fluid, leaving only a thin sheet of actual brain tissue. As an infant, he had had a shunt inserted into his head to drain the fluid, but it was removed when he was 14. Evidently, the cerebrospinal fluid build-up did not stop and ended up reducing the brain to 50–75% of its normal volume. Though he had a below-average IQ of 75, this man had a job, a family, and a normal life.

Figure 5. MRI image of a hydrocephalic brain. Credit: Feuillet et al. (2007) . Copyright 2022, reproduced with permission from Elsevier.

Other examples that should raise doubts are the cases of children in a developmental vegetative state–that is, what the American Academy of Neurology (as declared in its guideline report in 1995 and confirmed in 2018) officially considers a neurovegetative state in which there is “no evidence of purposeful behavior suggesting awareness of self or environment” ( Giacino et al., 2018 ). In other words, a universal rule reduces them to unconscious children who cannot suffer, because suffering supposedly requires a functioning cerebral cortex.

Nevertheless, only one case showing the contrary should be sufficient to disprove a universal rule. Four such cases were brought to light in 1999 by a group led by Shewmon et al. (1999) . They studied the states of awareness in congenitally decorticate children–that is, the cases of four children who were almost completely lacking cortical tissue and were neurologically certified as being in a vegetative state. Yet, the loving care of their mothers (or of someone who adopted them and bonded with them via dedicated full-time caring) could gradually ‘awaken’ in them a conscious awareness. From an initially unresponsive state, they showed clear signs of having developed auditory perception and visual awareness (despite the total absence of the occipital lobe that, in normal conditions, hosts the visual areas). For example, they tracked faces and toys, looked at persons they recognized, could distinguish between their mothers or caretakers, listened to music for which they manifested preferences with their facial expressions, including smiling and crying, and, at least in one case, gave clear indications of self-recognition in a mirror. Shewmon notes: “Were they [the decorticate children] not humans studied by clinicians but rather animals studied by ethologists, no one would object to attributing to them ‘consciousness’ (or ability to ‘experience’ pain or suffering) based on their evident adaptive interaction with the environment.”

These cases seem to contradict the prevailing theory, according to which the cerebral cortex generates consciousness.

One can still point out that the children were not completely decorticated, as some cortical tissue was still left. Figure 6 shows that a remnant of the frontal lobe is still present, possibly producing the conscious awareness. But the view that the neural mechanisms of conscious function cannot be confined to the cerebral cortex alone is becoming much more plausible ( Merker, 2007 ).

Figure 6. Congenitally decorticate children: MRI brain scan (midline sagittal and posterior coronal planes). Credit: Shewmon et al. (1999) . Reproduced with permission from Wiley.

In fact, other speculations now retreat to the last cerebral bastion for the seat of consciousness: the brainstem ( Solms and Panksepp, 2012 ). Indeed, its stimulation can trigger intense emotions and feelings. But the question is: What property of a neural circuitry dedicated to the most physical and basal control of cardiac, respiratory, and homeostatic functions, containing mainly neurons for motor and sensory tasks, can also give rise to such an apparently immaterial and completely different and unrelated ‘function’ or ‘property’ as conscious experience? We do not know. However, this is yet another fact telling us that we have the right, at least hypothetically, to assume that these circuits do not give rise to it, and that we are equally allowed to study these facts in the light of a different paradigm than that of mind-brain identity.

Overall, the cases mentioned above (except for those of the congenitally decorticate children) of people who have undergone corpus callosotomy or hemispherectomy, or people suffering from hydrocephalus, cerebellar agenesis, or several other types of brain damage, show how surprisingly intact their higher cognitive functions remain. One would expect that the first victims of such invasive neurological changes or surgical interventions would be the complex and high-demanding cognitive functions so characteristic of the mind, such as intellectual skills, abstract thinking, decision-making, reason, logically and willfully planning actions, and so on. Instead, it turns out that even if large brain masses are injured or absent, the cognitive skills of the subject remain substantially unaltered. Further empirical inquiry is needed to show if the same holds for the integrity of subjective experience and no altered states of consciousness or qualitative changes of sensory perception arise.

2.2. Further questions on the mind-brain relationship

These remarkable cases also confirm that brain size and the number of neurons in a brain do not (or, at least, do not necessarily) indicate one’s intelligence. Size matters for manipulative complexity, such as the more complex hand movements in primates, which humans can develop superbly (think of the hands of an expert musician playing the piano; Heldstab et al., 2020 ). However, a direct correlation between brain size and mental skills is not that straightforward. We like to believe that our brain size makes us human but rarely do we question what one means by ‘size.’ The number of neurons? The weight of the brain? Its brain-to-body ratio? Or its volume? Humans do not have the largest brain in any of the aforementioned senses. The human brain has about 90 billion neurons, weighs ca. 1.1 to 1.4 kg, and has a volume of about 1,300 cm³. However, the brain of an elephant has three times the number of neurons we have, and the weight and volume of the brain of a sperm whale are six times as much. Meanwhile, ants have a six times larger brain-to-body-mass ratio. A rather extreme example showing how cognitive skills and brain size are decoupled is the case of mouse lemurs, whose brains are 1/200th the size of monkeys’ but which perform equally well on a primate intelligence test ( Fichtel et al., 2020 ). Therefore, brain size alone does not make for a more developed mind, just as brain size does not scale with memory information content ( Forsdyke, 2014 , 2015 ). Then what does?

It is plausible to assume that a certain degree of complexity is a mandatory factor for a brain, or whatever material structure, to display a form of intelligence and cognitive skills. One could think of a measure of ‘brain connectivity’–that is, the number of wirings between neurons (through their axons, dendrites, and synapses) and the speed at which they transmit and receive signals–as an indicator of its complexity and see if it somehow scales with cognitive functionality. However, MRI studies reveal that all mammals, including humans, share about the same overall brain connectivity ( Assaf et al., 2020 ). The efficiency of information transfer through the neural network in a human is comparable to that of a mouse. It is independent of the structure or size of the brain and does not vary from species to species. So, things cannot be as easy as that.

However, what the above-mentioned clinical cases have in common is the presence of the cerebral cortex. In fact, some neurologists or cognitive scientists conjecture that phenomenal consciousness resides in the cerebral cortex. This belief is not unproblematic either.

First of all, because the neocortex exists only in humans and other mammals, one must conclude that birds, fish, octopuses, amphibians, and reptiles are, by definition, all ‘unconscious’ and incapable of having some more or less elementary form of conscious subjective experience. There is no sentience; they do not feel pain, fear, or pleasure, or have any feeling whatsoever. They are considered Cartesian automatons or philosophical zombies.

But evidence is beginning to emerge that, for example, the neural correlate patterns of sensory perception in a corvid bird are not substantially different from the neural correlate patterns in humans having a similar conscious sensory experience ( Nieder et al., 2020 ). Moreover, one wonders how some birds can perform amazing cognitive feats despite their forebrains consisting of lumps of gray cells. It turns out that cortex-like circuits exist in birds that are reminiscent of mammalian forebrains, and the idea that advanced cognitive skills are possible only because of the evolution of the highly complex cerebral cortex in mammals is becoming less plausible ( Stacho et al., 2020 ). There is sufficiently strong evidence to conclude that both cephalopods and crustaceans are sentient ( Cox et al., 2021 ). This is unsurprising: common sense does not really need any scientific proof to accept that ravens, crows, octopuses, or lobsters are sentient beings.

All these findings require an explanation from the physicalist viewpoint, which identifies the mind and consciousness with the brain.

Of course, one could resort to the usual conjecture that neural plasticity explains all things. Neural plasticity certainly plays a role and undoubtedly has its explanatory power. However, in most cases it remains conjectural and is invoked to fill in the gaps and save the paradigm. Some caution would be appropriate. For example, a recent study challenges the idea of adaptive circuit plasticity, according to which the brain recruits existing neurons to take over for those that are lost to stroke. Definitive evidence for functional remapping after stroke remains lacking. Undamaged neurons do not change their function after a stroke to compensate for damaged ones, as the conventional remapping hypothesis assumed ( Zeiger et al., 2021 ).

Moreover, it is observed that when a brain injury occurs, causing some form of amnesia, what was thought to be lost forever may reemerge into awareness, sometimes after years. Those whose loved ones suffered from dementia may have noted how memory and clarity of thought suddenly and quite surprisingly reappeared in a brief moment of lucidity, called ‘paradoxical lucidity,’ or even ‘terminal lucidity.’ Sometimes, bursts of mental clarity occur shortly before people die. Credible reports document cases in which people with dementia, advanced Alzheimer’s, schizophrenia, or even severe brain damage suddenly return briefly to a normal cognitive state [for a review, see ( Nahm et al., 2012 ); for some more recent findings, see ( Batthyány and Greyson, 2021 )]. It is hard to recast these brief episodes of lucidity, which last less than 1 h or even a few minutes, by resorting to brain plasticity.

One might also question whether, besides the spatial distribution or localization of the neural correlates of consciousness, the intensity of the brain’s metabolic activity plays a role in generating a conscious experience. For example, it is well known how the practice of meditation or psychedelic drugs can change our brain chemistry and give rise to the dissolution of the sense of boundaries and to intense subjective experiences, respectively. From the perspective of the material monist, who equates mind and brain as being one and the same thing, one assumes that the intensity of the effects of ‘mind-expanding’ psychedelics must be directly proportional to an increase in neural activity and connectivity. A dead brain means the cessation of any cerebral activity, in which case we assume there is no consciousness left, while an intensely subjective experience presumably involves high neural activity. One would, therefore, expect to find that the subjectively felt intensity of a hallucinogen proportionally correlates with neuronal activity.

However, the contrary turned out to be the case. A BOLD-fMRI study reported a significant decrease in brain activity–that is, decreased blood flow and venous oxygenation–with brain activity being inversely related to the intensity of the subjective experience reported by the test subjects ( Carhart-Harris et al., 2012 ). The authors of this research remark that this is reminiscent of Aldous Huxley’s metaphor of the brain as a ‘reducing valve’ that acts to limit our perceptions in the ordinary state of consciousness [see also Koch’s take on this ( Koch, 2012 )]. These findings were later confirmed by further studies with other hallucinogenic drugs such as LSD and ayahuasca ( Palhano-Fontes et al., 2015 ; Carhart-Harris et al., 2016 ; Lewis, 2017 ). For a more detailed analysis of this rationale see ( Kastrup, 2016 ). Kastrup also notes how several brain function impairments are accompanied by richer and more intense subjective experiences of self-transcendence (e.g., near-death experiences associated with dramatically reduced brain function; Kastrup, 2017 ).

Williams and Woollacott point out how the idea of brain processes attenuating or filtering out mental acuity and broader perceptual awareness is consistent with the literature on meditation studies and Indian non-dual philosophy derived from spiritual practices: Reduced brain activity induced by reduced conceptual activity results in increased cognitive clarity, perceptual sensitivity and awareness expansion ( Williams and Woollacott, 2021 ), suggesting that domains of awareness exist that do not depend upon brain functions.

Furthermore, a neurophenomenological study of the meditating brain showed that a reduction of beta band activity is related to a decreased ‘sense of boundaries’–that is, to self-dissolution states giving rise to non-dual awareness ( Dor-Ziderman et al., 2016 ). Similarly, Katyal and Goldin found that deeper meditation experiences are accompanied by increased alpha oscillations (closely linked to inhibitory processing and often related to the suppression of distractors during attentional cognitive processing) and suppressed theta oscillations (potentially indicating reduced self-monitoring) ( Katyal and Goldin, 2021 ).
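For readers unfamiliar with how such band-specific claims are quantified, the following is a minimal sketch (with a synthetic signal and illustrative band edges and sampling rate, not data or code from the cited studies) of the standard procedure: estimate the power spectral density of an EEG channel and integrate it over the canonical theta, alpha, and beta bands.

```python
# Minimal sketch of band-power estimation from a single EEG channel.
# The signal, sampling rate, and band edges below are illustrative assumptions.
import numpy as np
from scipy.signal import welch

fs = 250.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)                 # 60 s of synthetic data
# Synthetic EEG-like trace: a dominant 10 Hz (alpha) oscillation plus noise.
eeg = 1.5 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))   # power spectral density

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
for name, (low, high) in bands.items():
    mask = (freqs >= low) & (freqs < high)
    band_power = np.trapz(psd[mask], freqs[mask])     # integrate PSD over band
    print(f"{name:5s} power: {band_power:.3f} (arbitrary units)")
```

Statements such as ‘increased alpha’ or ‘suppressed theta’ in the studies above refer to changes in quantities of this kind compared across conditions or groups; the sketch itself says nothing, of course, about what such changes mean for experience.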

Long-time meditators report a state of ‘minimal phenomenal content,’ described as a ‘non-dual awareness’ or ‘pure consciousness,’ which could be posited as ‘consciousness as such.’ Investigations of Buddhist meditation suggest that distinct correlates of nondual states exist but describe such awareness as ‘non-representational’ ( Josipovic, 2014 ; Josipovic and Miskovic, 2020 ). Metzinger, instead, conjectures that it could be related to some neurological representational model realized in some brain region with specific physical properties or neural signatures and correlates that have yet to be discovered ( Metzinger, 2020 ). Katyal, by contrast, argues that the phenomenology of nondual meditative states suggests that a purely non-representational conscious state–that is, a ‘transcendental’ state beyond conscious experience–may transcend any such neural signatures altogether ( Katyal, 2022 ).

2.3. The search for the neural correlates of memory

Other aspects remain to be explained that escape a materialistic paradigm, with a pattern strikingly similar to that of consciousness and mentation: the neural correlates of memory. In this case, too, one thing is certain: memory is not stored in a specific brain area the way it is on a digital computer. More than a century of research into the biological foundation of memory has not led to tangible results providing convincing evidence that such a substratum exists. This is not a new issue. It dates back to Henri Bergson’s opposition to a reductionist understanding of memory ( Bergson, 1896/1912 ). Bergson considered memory to be of an immaterial and spiritual nature rather than being stored in the brain.

One might assume that information content should somehow scale with brain size. This is not observed, however ( Forsdyke, 2014 , 2016 ). For example, hemispherectomy in children does not lead to memory impairment ( Tavares et al., 2020 ). How can it be that someone without half of the brain has no measurable memory impairment? We could explain this by resorting to the plasticity of the brain or the functions of residual brain tissues. Or, we could conjecture that memory is stored in both hemispheres; therefore, if one hemisphere is lost, the other remains unimpaired (a hypothesis that could also fit well with supposed evolutionary advantages). Or, because it is the diseased hemisphere that is removed in all these cases, Nature might have provided a mechanism that transfers the memories to the healthy hemisphere before surgery. However, we should be aware that these are conjectures, hypotheses, and speculations, not scientifically established truths. Memory storage and retrieval in biological brains remains a largely unexplained mechanism, and no conclusive evidence exists that proves it to be of a physical nature.

Other research that might suggest how and where memories are stored in brains comes from experiments performed on freshwater flatworms called planaria. These creatures can be trained to associate an electric shock with a flash of light. Therefore, one might expect that they must have encoded the experience in their brains.

Flatworm planarians have an incredible self-regeneration ability ( Ivankovic et al., 2019 ). If this worm is cut in half, each amputated body part regenerates as two new fully formed flatworms. Not only does the part with the head form a new tail but the remaining tail also forms a new head with a brain and eyes. In 1959, James V. McConnell showed that the newly-formed planaria with a new brain also maintained its conditioned behavior ( McConnell, 1959 ). The newly-formed living being never received the electric shock and light flash of the training phase and yet it reacted as if it had a memory of the training it had never received.

Memories, if physical, may be stored not only in the brain but also throughout the body, in non-neuronal tissue.

McConnell’s idea was that RNA molecules could transfer memory from one planarian to another as a “memory molecule.” Motivated by this idea, he injected worms with RNA taken from trained ones and reported that the training had been transferred. However, further research could not convincingly reproduce McConnell’s experiments.

In 2013, Shomrat and Levin vindicated McConnell’s first experiments by using computerized training of planarians, replacing the manual procedures that had caused previous test attempts to fail ( Shomrat and Levin, 2013 ). Then, in 2018, Bédécarrats and colleagues showed how RNA extracted from a long-term-trained sea slug, the aplysia, can induce sensitization in an untrained aplysia ( Bédécarrats et al., 2018 ). This is taken as evidence for the molecular basis of memory and for the hypothesis that RNA-induced epigenetic changes lead to the protein synthesis required to consolidate or inhibit memory. Such local translation into synaptic proteins, determining the neural structure of memory, is in fact the mainstream engram model.

However, the problem with this hypothesis is that even the fastest protein synthesis causes cellular changes on timescales of minutes. How could it possibly be responsible for our ability to store and recall memories almost instantaneously?

Moreover, the still common idea that long-term memory is mapped as synaptic connectivity is challenged by the fact that it is possible to erase synaptic connections while maintaining the same conditioned behavior in the aplysia. Long-term memory and synaptic changes can, at least in some cases, be dissociated ( Chen et al., 2014 ). It has also been shown that brain tissue turns over at a rate of 3–4% per day, which implies a complete renewal of the brain tissue proteins within 4–5 weeks ( Smeets et al., 2018 ). If the synaptic trace theory is correct, and since synapses are made of proteins, how can long-term memory consolidation be achieved in synaptic strengths and neural connection patterns in the presence of this turnover? Notice how the fact that proteins have short lifetimes is in line with the volatility of synaptic connections. How can such volatile changes in synaptic connections underlie the storage of information for long periods (even in the absence of learning; Trettenbrein, 2016 ; Mongillo, 2017 )? If memory is physical, other physical repositories must be viable (DNA, cellular organelles, etc.), or a paradigm shift is necessary.
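A back-of-envelope calculation (an illustrative order-of-magnitude check, not taken from Smeets et al., 2018) shows how the quoted daily turnover rate translates into the weeks-long renewal timescale mentioned above:

```latex
% Fraction of the original protein pool remaining after t days at a constant
% daily turnover rate r (illustrative values r = 0.03--0.04):
f(t) \approx (1 - r)^{t} \approx e^{-r t},
\qquad
f(30\,\mathrm{d}) \approx e^{-0.035 \times 30} \approx 0.35,
\qquad
\tau = \tfrac{1}{r} \approx 25\text{--}33\ \mathrm{days}.
```

That is, roughly two-thirds of the protein pool is replaced within a month, and the characteristic replacement time of about four to five weeks matches the figure cited in the text.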

The search for engrams–that is, for the group of neurons supposedly responsible for the physical representation of memory–relies mostly on correlating memory, as evaluated with fear-conditioning behavioral tasks in rodents, with its presumed associated neural changes. For example, in a series of articles the group of Tonegawa claims to have discovered engram cells ( Liu et al., 2012 ; Redondo et al., 2014 ; Ryan et al., 2015 ; Roy et al., 2016 ). They show how light-induced optogenetic reactivation of mouse hippocampal neurons that were previously tagged during fear conditioning induces a freezing behavior characteristic of fear-memory recall, whereas the same activation of cells in non-fear-conditioned mice, or in fear-conditioned mice in another context, did not elicit the same freezing behavior. Therefore, the activation of these context-specific neurons seems to suggest that they act like memory engrams of the specific fearful experience.

However, what really motivates the freezing behavior is unclear. The question is whether the cells’ activation leads to retrieval of the memory of the fearful experience, which then causes the freezing behavior; whether it activates a fear-like emotional state first, before any memory retrieval; or whether the mice simply stop because they perceive an unexpected stimulus that need not be related to any fear or remembrance. Only the first case could potentially support the engram hypothesis but, lacking a first-person account, we will never know. The second case, on the contrary, would show only that the activation of those cells triggers an emotional state that precedes memory retrieval, and thus the activated cells would not represent memory engrams (after all, we know that in humans, too, stimulation of specific brain centers can lead to panic attacks associated with traumatic events, but these centers are not necessarily considered the physical repository of the trauma memory). The third case questions whether mouse freezing behavior correlates with fear perception in the first place. A lack of motion could be due to many things, not just fear. Moreover, besides the hippocampus, it is possible to induce freezing by activating a variety of brain areas and projections, such as the lateral, basal, and central amygdala, the periaqueductal gray, motor and primary sensory cortices, prefrontal projections, and the retrosplenial cortex ( Denny et al., 2017 ). It is not clear what the freezing behavior is really about.

This, again, shows why causal inferences resting on a loss-of-function lesion rationale risk the correlation-causation fallacy and should be viewed with a more critical eye.

Meanwhile, we may also speculate about a third, complementary alternative. Memories associated with physical cues and with lower cognitive processes–the computational tasks of deductive, inferential, syntactic, and predictive optimization problem-solving–may be material, that is, implemented in a synaptic and molecular substrate for the consolidation of learned behavior, fact learning, pattern recognition, and the recording and retrieval of representational content, external sensory cues, and other physical information [see, e.g., Gershman (2023), which is also an interesting account of the puzzle of the biological basis of memory]. Other memories, by contrast, may be associated with higher cognitive functions involving inductive, non-algorithmic tasks and conceptualizations–that is, the consolidation and recall of abstract thoughts, semantic categories, and non-representational forms of introspective, intuitive cognition and creative expression that may go beyond Turing-machine-like information processing [see, e.g., Marshall (2021), or, for alternatives such as ‘extracorporeal information storage’, Forsdyke (2015)].

2.4. Cognition without a brain

As a concluding note, it is worth mentioning that a growing body of evidence shows that an at least elementary form of cognition is already present and at work in multicellular and single-celled lifeforms without any neural substrate. Research in plant biology shows that vegetal and cellular life displays elements of cognitive behavior that were not suspected, or were simply considered impossible, without a brain. An extensive literature, especially from the last decade, has consistently shown that plants change behavior and adapt, respond predictively, possess some form of memory, use aerial and underground communication systems based on chemical, visual, and acoustic signals, have learning abilities, and can evaluate their surroundings, make decisions, and behave cooperatively. It is not inappropriate to speak openly of a ‘minimal’ or ‘proto-cognition’ of cells, what is now called ‘basal cognition’. For reviews see Trewavas (2017), Gershman et al. (2021), and Lyon (2015).

Some climbing plants exhibit an anticipatory prehensile mechanism and are able to plan their movements purposefully through an ‘approach-to-grasp’ behavior before making any physical contact with a support (Guerra et al., 2019). Other aspects could be mentioned, such as plants’ adaptive changes that reflect developmental decisions based on ‘root perception’. Having no central nervous system or information-processing centers, roots are nonetheless “able to integrate complex cues and signals over time and space that allow plants to perform elaborate behaviors analogous, some claim even homologous, to those of intelligent animals,” as Novoplansky describes it (Novoplansky, 2019).

Several experiments with unicellular creatures have made it clear that conditioned behavior in single cells exists as well and is comparable in its complexity to that of plants.

One example is the evidence of conditioned behavior in amoebae: the motility pattern of Amoeba proteus under the simultaneous influence of two stimuli was shown to be consistent with associative conditioned behavior (De la Fuente et al., 2019).

A quite surprising ‘brain-less problem-solving’ was (re-)discovered in another protozoan. In 1906, the American zoologist Herbert Spencer Jennings noted how Stentor roeselii could escalate its actions to avoid an irritant stimulus through a complex hierarchy of avoidance behaviors: the protozoan first enacts a strategy, sees whether it works, and, if not, resorts to another strategy in a series of attempts to solve the problem. One hundred and thirteen years later, in 2019, Jennings’ observations were confirmed (Dexter et al., 2019).

Another notable example of non-brain-centered cellular cognition is Physarum polycephalum, a large amoeba-like slime mold plasmodium that exhibits several skills and behavioral patterns that could be labeled ‘proto-intelligent’. For example, it can find the minimum-length path between two points in a labyrinth and minimize the path length and complexity of the network connecting multiple food sources (Nakagaki, 2004). Learning processes of habituation with anticipatory conditioned behavior have been shown as well (Saigusa et al., 2008). For an in-depth review of slime molds see also Reid (2023).
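
In computational terms, the maze experiment amounts to a shortest-path problem. The sketch below is purely illustrative and not a model of the organism: the small maze graph, its edge lengths, and the food-source labels are hypothetical, and Dijkstra’s algorithm is used only to name the abstract problem whose solution the plasmodium’s network ends up approximating.

import heapq

def shortest_path_length(graph, start, goal):
    # Dijkstra's algorithm: length of the shortest path from start to goal.
    # `graph` maps each node to a dict of {neighbour: corridor_length}.
    dist = {start: 0.0}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, w in graph[node].items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(queue, (nd, nbr))
    return float("inf")

# Hypothetical maze with food sources at 'A' and 'D': several corridors connect
# them, and the plasmodium ends up occupying only the shortest route.
maze = {
    "A": {"B": 2.0, "C": 5.0},
    "B": {"A": 2.0, "D": 2.0},
    "C": {"A": 5.0, "D": 1.0},
    "D": {"B": 2.0, "C": 1.0},
}
print(shortest_path_length(maze, "A", "D"))  # 4.0, via the A-B-D corridor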

Finally, the behavior of the simplest life forms, bacteria, deserves mention. These too can sense the environment, actively move within it, target food, avoid toxic substances, and meaningfully change their swimming direction. This behavior is most evident when they come together in a bacterial community, which shows surprising problem-solving abilities. Bacteria communicate with one another and coordinate gene expression, which determines the collective behavior of the entire community, so as to achieve a common goal through collaborative problem-solving [for a review of bacterial behavior see Lyon (2015)].

If and how this basal cognition may also imply instances of phenomenal consciousness–that is, some form of more or less ‘basal sentience’–is debatable, but it can be supported by arguments that are not exclusively philosophical (Segundo-Ortin and Calvo, 2021). More recently, Parise et al. reviewed the ecological literature and suggested the existence of an “extended cognition”–a paradigm in which the brain is no longer considered the exclusive seat of cognition, which is instead generalized to environmentally extended cognitive processes (Parise et al., 2023).

3. Discussion

This paper presented a series of neurological and biological observations whose implications remain controversial. The overview began by questioning the lesion-based sufficiency criterion, whereby the causal relationship between impairment of a specific cerebral area and the thereby assumed suppression of phenomenal consciousness and/or cognitive processes is taken as proof of a materially monistic mind-brain identity. Motivated by this assumption, we asked whether the idea of a specific brain area or structure, or its related activity, being responsible for qualitative and subjective experience is consistent with the evidence, and we pointed out the lack of conclusive evidence that the phenomenal dimension and the singularity of the sense of selfhood, together with its higher cognitive functions, are disrupted even by large impairments. This suggests that the hypothesis of a (local or global) brain-based ‘seat of consciousness’, if not inconsistent, must be too simplistic.

Other neurological aspects of the mind/consciousness-brain relationship were also investigated, such as the non-trivial scaling of intelligence with cerebral size and neural complexity, and the hypothesis of the cerebral cortex as the center of subjective experience, compared across humans and non-mammals; we also examined whether, and how far, neural plasticity alone can be invoked to explain the recovery of cognitive functions. Of particular interest is the fact that, contrary to expectations, an inverse relationship between brain activity and conscious experience exists: reduced brain activity can be accompanied by increased cognitive clarity and an expansion of awareness, seemingly suggesting that at least some aspects of our conscious experience do not depend on the intensity of brain activity.

The more than century-long search for the physical basis of memory and for memory engram cells was examined. While the predominant paradigm favors the engram hypothesis, we highlighted several findings that challenge the conventional materialistic view. Observations such as memory retention in hemispherectomy cases and planaria’s regenerative memory, together with the limitations of protein synthesis as an explanation and the volatility of synaptic connections, raise doubts about the synaptic trace theory.

Finally, emerging evidence in plant and cellular biology challenges the assumption that all cognition requires a neural substrate. Plant and cellular lifeforms exhibit forms of basal cognition, with abilities including adaptation, memory, communication, learning, decision-making, and problem-solving. Notable instances include the slime mold’s intelligent behaviors (Reid, 2023) and bacterial communities’ coordinated problem-solving (Dinet et al., 2021), demonstrating that cognition is not exclusive to organisms with brains.

Overall, these findings do not support a mind-brain identity ontology as straightforwardly as is commonly believed. The too often unquestioned assumption that the nervous system is a sine qua non condition for conscious experience and cognitive behavior is challenged, and we are equally entitled to consider cognition and sentience not as emergent epiphenomena but as inherent, ‘pre-neuronal’ aspects of life.

Of course, ‘pre-neuronal’ does not necessarily mean ‘pre-physical’, and these findings do not refute physicalism in and of themselves. Each of the cited neurobiological facts, taken separately, may still be rescued by various speculations within the limits dictated by material monism. The left column of the following table summarizes the findings discussed; the right column lists the possible interpretations that could, in principle, save a materially monistic paradigm.

However, if we view the lack of these correlations jointly, in a wider context–without selectively limiting our attention to each single phenomenon in isolation, and taking an integral view in which every phenomenon is seen as an expression of a deeper causal principle underlying the whole pattern–then another ontology becomes possible, one that does not need such a plurality of physical interpretations. This is a non-physicalist standpoint that sees mind and consciousness not as epiphenomena of matter but as fundamental primitives that manifest through the material substrate (e.g., by what James called a ‘transmissive’ rather than ‘generating’ function), in line with dualistic, idealistic, or other post-material worldviews. A viewpoint that does not assume mind-brain identity as a given apriorism, but instead treats consciousness and mind as fundamental, with the brain a ‘physical mind’ that mediates information to and from a non-physical mind, could accommodate the above-listed lack of correlation between neurological and experiential/cognitive phenomenality within a paradigm that does not need all these mechanistic conjectures.

In any case, a future direction of systematic research that does not always assume mind-brain identity as a given fact, and that leaves the door open to other perspectives, could lead to powerful new insights that have previously been overlooked. A possible generalist approach, which does not impose one or another metaphysical worldview but starts from the assumption of a ‘post-material psychology’, could be one such line of research (Beauregard et al., 2018). The mind-body problem and the hard problem of consciousness remain as controversial as ever, but non-physical ontologies of mind and consciousness are far from having been expunged by science. We have the right to explore them as a viable option not despite, but rather because of, neuroscientific evidence that has been selectively dismissed for too long and cannot be ignored forever–if we can connect the dots.

Data availability statement

Author contributions

The author confirms being the sole contributor of this work and has approved it for publication.

Conflict of interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

  • Aguinaga D., Medrano M., Vega-Quiroga I., Gysling K., Canela E. I., Navarro G., et al.. (2018). Cocaine effects on dopaminergic transmission depend on a balance between Sigma-1 and Sigma-2 receptor expression . Front. Mol. Neurosci. 11 :11. doi: 10.3389/fnmol.2018.00017, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Assaf Y., Bouznach A., Zomet O., Marom A., Yovel Y. (2020). Conservation of brain connectivity and wiring across the mammalian class . Nat. Neurosci. 23 , 805–808. doi: 10.1038/s41593-020-0641-7, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Aurobindo S. (1919). The life divine . Sri Aurobindo Ashram Trust.
  • Baars B. J. (1988). A cognitive theory of consciousness . New York: Cambridge University Press. [ Google Scholar ]
  • Batthyány A., Greyson B. (2021). Spontaneous remission of dementia before death: results from a study on paradoxical lucidity . Psychol. Conscious. 8 , 1–8. doi: 10.1037/cns0000259 [ CrossRef ] [ Google Scholar ]
  • Beauregard M., Trent N. L., Schwartz G. E. (2018). Toward a postmaterial psychology: Theory, research, and applications . New Ideas Psychol. 50 , 21–33. doi: 10.1016/j.newideapsych.2018.02.004 [ CrossRef ] [ Google Scholar ]
  • Bédécarrats A., Chen S., Pearce K., Cai D., Glanzman D. L. (2018). RNA from trained Aplysia can induce an epigenetic engram for long-term sensitization in untrained Aplysia . eNeuro 5 :38. doi: 10.1523/ENEURO.0038-18.2018, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Bergson H. (1896/1912). Matter and memory . New York: McMillan. [ Google Scholar ]
  • Bogen J. E., Fisher E., Vogel P. (1965). Cerebral commissurotomy: a second report . JAMA 194 , 1328–1329. doi: 10.1001/jama.1965.03090250062026, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Bourget D., Chalmers D. (2020). Philosophers on philosophy: The 2020 PhilPapers survey - (forthcoming) . Philosophers' Imprint.
  • Carhart-Harris R. L., Erritzoe D., Williams T., Stone J. M., Reed L. J., Colasanti A., et al.. (2012). Neural correlates of the psychedelic state as determined by fMRI studies with psilocybin . Proc. Natl. Acad. Sci. 109 , 2138–2143. doi: 10.1073/pnas.1119598109, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Carhart-Harris R. L., Muthukumaraswamy S., Roseman L., Kaelen M., Droog W., Murphy K., et al.. (2016). Neural correlates of the LSD experience revealed by multimodal neuroimaging . PNAS 113 , 4853–4858. doi: 10.1073/pnas.1518377113, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Chalmers D. (1995). Facing up the problem of consciousness . J. Conscious. Stud. 2 , 200–219. [ Google Scholar ]
  • Chen S., Cai D., Pearce K., Sun P. Y. W., Roberts A. C., Glanzman D. L. (2014). Reinstatement of long-term memory following erasure of its behavioral and synaptic expression in Aplysia . elife 3 :e03896. doi: 10.7554/eLife.03896, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Cox V., Zacarias S., Waldhorn D.R. (2021). Crustacean and cephalopod sentience briefing . London: Conservative Animal Welfare Foundation. Available at: https://www.conservativeanimalwelfarefoundation.org/wp-content/uploads/2021/06/CAWF-Crustacean-and-Cephalopod-Sentience-Report.pdf [ Google Scholar ]
  • Crick F. C., Koch C. (2005). What is the function of the claustrum? Philos. Trans. R. Soc. Lond. 360 , 1271–1279. doi: 10.1098/rstb.2005.1661 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Cruse D., Chennu S., Chatelle C., Bekinschtein T. A., Fernández-Espejo D., Pickard J. D., et al.. (2011). Bedside detection of awareness in the vegetative state: a cohort study . Lancet 378 , 2088–2094. doi: 10.1016/S0140-6736(11)61224-5, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Davis C., Kleinman J. T., Newhart M., Gingis L., Pawlak M., Hillis A. E. (2008). Speech and language functions that require a functioning Broca’s area . Brain Lang. 105 , 50–58. doi: 10.1016/j.bandl.2008.01.012, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • De Haan E. H. F., Corballis P. M., Hillyard S. A., Marzi C. A., Seth A., Lamme V. A. F., et al.. (2020). Split-brain: what we know now and why this is important for understanding consciousness . Neuropsychol. Rev. 30 , 224–233. doi: 10.1007/s11065-020-09439-3, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • De Haan E. H. F., Scholte H. S., Pinto Y., Foschi N., Polonara G., Fabri M. (2021). Singularity and consciousness: a neuropsychological contribution . J. Neuropsychol. 15 , 1–19. doi: 10.1111/jnp.12234, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • De la Fuente I. M., Bringas C., Malaina I., Fedetz M., Carrasco-Pujante J., Morales M., et al.. (2019). Evidence of conditioned behavior in amoebae . Nat. Commun. 10 :11677. doi: 10.1038/s41467-019-11677-w [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Denny C. A., Lebois E., Ramirez S. (2017). From engrams to pathologies of the brain . Front. Neural Circuits 11 :23. doi: 10.3389/fncir.2017.00023, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Dexter J. P., Prabakaran S., Gunawardena J. (2019). A complex hierarchy of avoidance behaviors in a single-cell eukaryote . Curr. Biol. 29 , 4323–4329. doi: 10.1016/j.cub.2019.10.059, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Dinet C., Michelot A., Herrou J., Mignot T. (2021). Basal cognition: conceptual tools and the view from the single cell . Philos. Trans. R. Soc. B 376 :20190755. doi: 10.1098/rstb.2019.0755, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Dor-Ziderman Y., Ataria Y., Fulder S., Goldstein A., Berkovich-Ohana A. (2016). Self-specific processing in the meditating brain: a MEG neurophenomenology study . Neurosci. Conscious. 2016 :niw019. doi: 10.1093/nc/niw019, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Feng Y., Jiang Q. J., Sun X. Y., Zhang R. W. (2015). A new case of complete primary cerebellar agenesis: clinical and imaging findings in a living patient . Brain 138 :e353. doi: 10.1093/brain/awu239, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Feuillet L., Dufour H., Pelletier J. (2007). Brain of a white-collar worker . Lancet 370 :262. doi: 10.1016/S0140-6736(07)61127-1, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Fichtel C., Dinter K., Kappeler P. M. (2020). The lemur baseline: how lemurs compare to monkeys and apes in the primate cognition test battery . Zool. Sci. 8 :e10025. doi: 10.7717/peerj.10025, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Forsdyke D. R. (2014). Long-term memory: scaling of information to brain size . Front. Hum. Neurosci. 8 :397. doi: 10.3389/fnhum.2014.00397 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Forsdyke D. R. (2015). Wittgenstein’s certainty is uncertain: brain scans of cured Hydrocephalics challenge cherished assumptions . Biol. Theory 10 , 336–342. doi: 10.1007/s13752-015-0219-x [ CrossRef ] [ Google Scholar ]
  • Forsdyke D. R. (2016). “ Memory: what is arranged and where? ” in Evolutionary bioinformatics . ed. Forsdyke D. R. (Cham: Springer; ), 367–380. [ Google Scholar ]
  • Gazzaniga M. (1985). MRI assessment of human callosal surgery with neuropsychological correlates . Neurology 35 , 1763–1766. doi: 10.1212/WNL.35.12.1763, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Gershman S. J. (2023). The molecular memory code and synaptic plasticity: a synthesis . Biosystems 224 :104825. doi: 10.1016/j.biosystems.2022.104825, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Gershman S. J., Balbi P., Gallistel C. R., Gunawardena J. (2021). Reconsidering the evidence for learning in single cells . elife 10 :e61907. doi: 10.7554/eLife.61907, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Giacino J. T., Katz D. I., Schiff N. D., Whyte J., Ashman E. J., Ashwal S., et al.. (2018). Practice guideline update recommendations summary: Disorders of consciousness: Report of the Guideline Development, Dissemination, and Implementation Subcommittee of the American Academy of Neurology; the American Congress of Rehabilitation Medicine; and the National Institute on Disability, Independent Living, and Rehabilitation Research . Am. Acad. Neurol. 91 , 450–460. doi: 10.1212/WNL.0000000000005926, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Gomez-Marin A. (2017). “ Causal circuit explanations of behavior: are necessity and sufficiency necessary and sufficient? ” in Decoding neural circuit structure and function . eds. Çelik A., Wernet M. (Berlin: Springer; ) [ Google Scholar ]
  • Gonzalez-Castillo J., Saad Z. S., Handwerker D. A., Inati S. J., Brenowitz N., Bandettini P. A. (2012). Whole-brain, time-locked activation with simple tasks revealed using massive averaging and model-free analysis . PNAS 109 , 5487–5492. doi: 10.1073/pnas.1121049109, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Granovetter M. C., Robert S., Ettensohn L. B. M., Behrmann M. (2022). With childhood hemispherectomy, one hemisphere can support–but is suboptimal for–word and face recognition . PNAS Nexus 119 :6119. doi: 10.1073/pnas.2212936119 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Guerra S., Peressotti A., Peressotti F., Bulgheroni M., Baccinelli W., D’Amico E., et al.. (2019). Flexible control of movement in plants . Nature Sci. Rep. 9 :53118. doi: 10.1038/s41598-019-53118-0 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Heldstab S. A., Isler K., Schuppli C., van Schaik C. P. (2020). When ontogeny recapitulates phylogeny: fixed neurodevelopmental sequence of manipulative skills among primates . Sci. Adv. 6 :eabb4685. doi: 10.1126/sciadv.abb4685, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Huxley A. (1954). The doors of perception . London, England: Chatto and Windus. [ Google Scholar ]
  • Ivankovic M., Haneckova R., Thommen A., Grohme M. A., Vila-Farré M., Werner S., et al.. (2019). Model systems for regeneration: planarians . Development 146 :684. doi: 10.1242/dev.167684, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • James W. (1898). Human immortality . Cambridge: Houghton, Mifflin and Company, The Riverside Press. [ Google Scholar ]
  • Jonas E., Kording K. P. (2017). Could a neuroscientist understand a microprocessor . PLoS Comput. Biol. 13 :e1005268. doi: 10.1371/journal.pcbi.1005268, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Josipovic Z. (2014). Neural correlates of nondual awareness in meditation . Ann. N. Y. Acad. Sci. 1307 , 9–18. doi: 10.1111/nyas.12261, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Josipovic Z., Miskovic V. (2020). Nondual awareness and minimal phenomenal experience . Front. Psychol. 11 :11. doi: 10.3389/fpsyg.2020.02087 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Kastrup B. (2016). What neuroimaging of the psychedelic state . J. Cogn. Neuroethics 4 , 1–9. [ Google Scholar ]
  • Kastrup B. (2017). Self-transcendence correlates with brain function impairment . J. Cogn. Neuroethics 4 , 33–42. [ Google Scholar ]
  • Katyal S. (2022). Reducing and deducing the structures of consciousness through meditation . Front. Psychol. 13 :13. doi: 10.3389/fpsyg.2022.884512, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Katyal S., Goldin P. (2021). Alpha and theta oscillations are inversely related to progressive levels of meditation depth . Neurosci. Conscious. 2021 :42. doi: 10.1093/nc/niab042, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Kliemann D., Adolphs R., Tyszka J. M., Fischl B., Yeo B. T. T., Nair R., et al.. (2019). Intrinsic functional connectivity of the brain in adults with a single cerebral hemisphere . Cell Rep. 29 , 2398–2407.e4. doi: 10.1016/j.celrep.2019.10.067, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Koch K. (2012). This is your brain on drugs. [online] Available at: https://www.scientificamerican.com/article/this-is-your-brain-on-drugs/ .
  • Krakauer J. W., Ghazanfar A. A., Gomez-Marin A., MacIver M. A., Poeppel D. (2017). Neuroscience needs behavior: correcting a reductionist Bias . Neuron 93 , 480–490. doi: 10.1016/j.neuron.2016.12.041, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Lewin R. (1980). Is your brain really necessary? Science 210 , 1232–1234. doi: 10.1126/science.7434023, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Lewis C. R. (2017). Two dose investigation of the 5-HT-agonist psilocybin on relative and global cerebral blood flow . NeuroImage 159 , 70–78. doi: 10.1016/j.neuroimage.2017.07.020, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Liang M., Mouraux A., Hu L., Iannetti G. D. (2013). Primary sensory cortices contain distinguishable spatial patterns of activity for each sense . Nat. Commun. 4 :1979: 1979. doi: 10.1038/ncomms2979, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Liu X., Ramirez S., Pang P. T., Puryear C. B., Govindarajan A., Deisseroth K., et al.. (2012). Optogenetic stimulation of a hippocampal engram activates fear memory recall . Nature 484 , 381–385. doi: 10.1038/nature11028, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Lyon P. (2015). The cognitive cell: bacterial behavior reconsidered . Front. Microbiol. 6 :264. doi: 10.3389/fmicb.2015.00264, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Mackenzie G. (2019). Can they feel? The capacity for pain and pleasure in patients with cognitive motor dissociation . Neuroethics 12 , 153–169. doi: 10.1007/s12152-018-9361-z, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Madden M. B., Stewart B. W., White M. G., Krimmel S. R., Qadir H., Barrett F. S., et al.. (2022). A role for the claustrum in cognitive control . Trends Cogn. Sci. 26 , 1133–1152. doi: 10.1016/j.tics.2022.09.006 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Marshall P. (2021). Biology transcends the limits of computation . Prog. Biophys. Mol. Biol. 165 , 88–101. doi: 10.1016/j.pbiomolbio.2021.04.006, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • McConnell J. V. (1959). Worms and things . The Worm Runner’s Digest.
  • McGovern R. A., Moosa N. V., Jehi L., Busch R., Ferguson L., Gupta A., et al.. (2019). Hemispherectomy in adults and adolescents: seizure and functional outcomes in 47 patients . Epilepsia 60 , 2416–2427. doi: 10.1111/epi.16378, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Merabet L. B., Hamilton R., Schlaug G., Swisher J. D., Kiriakopoulos E. T., Pitskel N. B., et al.. (2008). Rapid and reversible recruitment of early visual cortex for touch . PLoS One 3 :e3046. doi: 10.1371/journal.pone.0003046, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Merker B. (2007). Consciousness without a cerebral cortex: a challenge for neuroscience and medicine . Behav. Brain Sci. 30 , 63–81. doi: 10.1017/S0140525X07000891, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Metzinger T. (2020). Minimal phenomenal experience: meditation, tonic alertness, and the phenomenology of “pure” consciousness . Philos. Mind Sci. 1 , 1–44. doi: 10.33735/phimisci.2020.I.46 [ CrossRef ] [ Google Scholar ]
  • Mongillo G., Rumpel S., Loewenstein Y. (2017). Intrinsic volatility of synaptic connections–a challenge to the synaptic trace theory of memory . Curr. Opin. Neurobiol. 46 , 7–13. doi: 10.1016/j.conb.2017.06.006, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Muckli L., Naumer M. J., Singer W. (2009). Bilateral visual field maps in a patient with only one hemisphere . PNAS 106 , 13034–13039. doi: 10.1073/pnas.0809688106, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Nagel T. (1974). What is it like to be a bat? Philos. Rev. 83 , 435–456. doi: 10.2307/2183914, PMID: [ CrossRef ] [ Google Scholar ]
  • Nahm M., Greyson B., Kelly E. W., Haraldsson E. (2012). Terminal lucidity: a review and a case collection . Arch. Gerontol. Geriatr. 55 , 138–142. doi: 10.1016/j.archger.2011.06.031, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Nakagaki T. (2004). Obtaining multiple separate food sources: behavioural intelligence in the Physarum plasmodium . Proc. R. Soc. Lond. B 271 , 2305–2310. doi: 10.1098/rspb.2004.2856, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Nieder A., Wagener L., Rinnert P. (2020). A neural correlate of sensory consciousness in a corvid bird . Science 369 , 1626–1629. doi: 10.1126/science.abb1447, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Novoplansky A. (2019). What plant roots know? Semin. Cell Dev. Biol. 92 , 126–133. doi: 10.1016/j.semcdb.2019.03.009, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Oizumi M., Albantakis L., Tononi G. (2014). From the phenomenology to the mechanisms of consciousness: integrated information theory 3.0 . Public Libr. Sci. Comput. Biol. 10 :3588. doi: 10.1371/journal.pcbi.1003588, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Oudiette D., Leu S., Pottier M., Buzare M. A., Brion A., Arnulf I. (2009). Dreamlike mentations during sleepwalking and sleep terrors in adults . Sleep 32 , 1621–1627. doi: 10.1093/sleep/32.12.1621, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Owen A. M., Coleman M. R. (2006). Detecting awareness in the vegetative state . Ann. N. Y. Acad. Sci. 1129 , 130–138. doi: 10.1196/annals.1417.018, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Palhano-Fontes F., Andrade K. C., Tofoli L. F., Santos A. C., Crippa J. A. S., Hallak J. E. C., et al.. (2015). The psychedelic state induced by Ayahuasca modulates the activity and connectivity of the default mode network . PLoS One 10 :e0118143. doi: 10.1371/journal.pone.0118143, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Parise A. G., Gubert G. F., Whalan S., Gagliano M. (2023). Ariadne’s thread and the extension of cognition: a common but overlooked phenomenon in nature . Front. Ecol. Evol. 10 :10. doi: 10.3389/fevo.2022.1069349 [ CrossRef ] [ Google Scholar ]
  • Pinto Y., de Haan E. H. F., Villa M. C., Siliquini S., Polonara G., Passamonti C., et al.. (2020). Unified visual working memory without the anterior Corpus callosum . Symmetry 12 :2106. doi: 10.3390/sym12122106 [ CrossRef ] [ Google Scholar ]
  • Pinto Y., Neville D. A., Otten M., Corballis P. M., Lamme V. A. F., de Haan E. H. F., et al.. (2017). Split brain: divided perception but undivided consciousness . Brain 140 , aww358–aww337. doi: 10.1093/brain/aww358, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Pulsifer M. B., Brandt J., Salorio C. F., Vining E. P. G., Carson B. S., Freeman J. M. (2004). The cognitive outcome of Hemispherectomy in 71 children . Epilepsia 45 , 243–254. doi: 10.1111/j.0013-9580.2004.15303.x, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Radek L., Kallionpää R. E., Karvonen M., Scheinin A., Maksimow A., Långsjö J., et al.. (2018). Dreaming and awareness during dexmedetomidine- and propofol-induced unresponsiveness . BJA Br. J. Anaesth. 121 , 260–269. doi: 10.1016/j.bja.2018.03.014, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Redinbaugh M., Phillips J. M., Kambi N. A., Mohanta S., Andryk S., Dooley G. L., et al.. (2020). Thalamus modulates consciousness via layer-specific control of cortex . Neuron 106 , 66–75.e12. doi: 10.1016/j.neuron.2020.01.005, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Redondo R., Kim J., Arons A., Ramirez S., Liu X., Tonegawa S. (2014). Bidirectional switch of the valence associated with a hippocampal contextual memory engram . Nature 513 , 426–430. doi: 10.1038/nature13725, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Reid C. R. (2023). Thoughts from the forest floor: a review of cognition in the slime mould Physarum polycephalum . Anim. Cogn. 2023 :23. doi: 10.1007/s10071-023-01782-1, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Roy D., Arons A., Mitchell T., Pignatelli M., Ryan T. J., Tonegawa S. (2016). Memory retrieval by activating engram cells in mouse models of early Alzheimer’s disease . Nature 531 , 508–512. doi: 10.1038/nature17172, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ryan T. J., Roy D. S., Pignatelli M., Arons A., Tonegawa S. (2015). Engram cells retain memory under retrograde amnesia . Science 348 , 1007–1013. doi: 10.1126/science.aaa5542, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Saigusa T., Tero A., Nakagaki T., Kuramoto Y. (2008). Amoebae anticipate periodic events . Phys. Rev. Lett. 100 :018101. doi: 10.1103/PhysRevLett.100.018101, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Scheinin A., Kantonen O., Alkire M., Långsjö J., Kallionpää R. E., Kaisti K., et al.. (2020). Foundations of human consciousness: imaging the twilight zone . J. Neurosci. 41 , 1769–1778. doi: 10.1523/JNEUROSCI.0775-20.2020 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Segundo-Ortin M., Calvo P. (2021). Consciousness and cognition in plants . Wiley Interdiscip. Rev. Cogn. Sci. 13 :e1578. doi: 10.1002/wcs.1578 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Seth A. K., Bayne T. (2022). Theories of consciousness . Nat. Rev. Neurosci. 23 , 439–452. doi: 10.1038/s41583-022-00587-4, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Shewmon D., Holmes G., Byrne P. (1999). Consciousness in congenitally decorticate children: developmental vegetative state as self-fulfilling prophecy . Dev. Med. Child Neurol. 41 , 364–374. doi: 10.1017/S0012162299000821, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Shomrat T., Levin M. (2013). An automated training paradigm reveals long-term memory in planarians and its persistence through head regeneration . J. Exp. Biol. 216 , 3799–3810. doi: 10.1242/jeb.087809, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Siclari F., Bernardi G., Cataldi J., Tononi G. (2018). Dreaming in NREM sleep: a high-density EEG study of slow waves and spindles . J. Neurosci. 38 , 9175–9185. doi: 10.1523/JNEUROSCI.0855-18.2018, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Smart J. J. C. (2022). The mind-brain identity theory - the Stanford encyclopedia of philosophy . Available at: https://plato.stanford.edu/archives/win2022/entries/mind-identity/ .
  • Smeets J., Horstman A. M. H., Schijns O. E. M. G., Dings J. T. A., Hoogland G., Gijsen A. P., et al.. (2018). Brain tissue plasticity: protein synthesis rates of the human brain . Brain 141 , 1122–1129. doi: 10.1093/brain/awy015, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Solms M., Panksepp J. (2012). The “id” knows more than the “Ego” admits: Neuropsychoanalytic and primal consciousness perspectives on the Interface between affective and cognitive neuroscience . Brain Sci. 2 , 147–175. doi: 10.3390/brainsci2020147, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Sperry R. W. (1968). Hemisphere deconnection and unity in conscious awareness . Am. Psychol. 23 :723. doi: 10.1037/h0026839, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Sperry R. W. (1984). Consciousness, personal identity and the divided brain . Neuropsychologia 22 , 661–673. doi: 10.1016/0028-3932(84)90093-9, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Stacho M., Herold C., Rook N., Wagner H., Axer M., Amunts K., et al.. (2020). A cortex-like canonical circuit in the avian forebrain . Science 369 :5534. doi: 10.1126/science.abc5534, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Tavares T. P., Kerr E. N., Smith M. L. (2020). Memory outcomes following hemispherectomy in children . Epilepsy Behav. 112 :107360. doi: 10.1016/j.yebeh.2020.107360, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Tononi G. (2015). Integrated information theory . Available at: http://www.scholarpedia.org/article/Integrated_information_theory .
  • Trettenbrein P. C. (2016). The demise of the synapse as the locus of memory: a looming paradigm shift . Front. Syst. Neurosci. 10 :88. doi: 10.3389/fnsys.2016.00088, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Trewavas A. (2017). The foundations of plant intelligence . Interface Focus 7 :20160098. doi: 10.1098/rsfs.2016.0098, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Tuckute G., Paunov A., Kean H., Small H., Mineroff Z., Blank I., et al.. (2022). Frontal language areas do not emerge in the absence of temporal language areas: a case study of an individual born without a left temporal lobe . Neuropsychologia 169 :108184. doi: 10.1016/j.neuropsychologia.2022.108184, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Turku U. O. (2018). Consciousness is partly preserved during general anesthesia . Available at: https://www.sciencedaily.com/releases/2018/07/180703105631.htm .
  • Vargha-Khadem F., Carr L. J., Isaacs E., Brett E., Adams C., Mishkin M., et al.. (1997). Onset of speech after left hemispherectomy in a nine-year-old boy . Brain 120 , 159–182. doi: 10.1093/brain/120.1.159, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Vollenweider F. X., Preller K. H. (2020). Psychedelic drugs: neurobiology and potential for treatment of psychiatric disorders . Nat. Rev. Neurosci. 21 , 611–624. doi: 10.1038/s41583-020-0367-2, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Wahbeh H., Radin D., Cannard C., Delorme A. (2022). What if consciousness is not an emergent property of the brain? Observational and empirical challenges to materialistic models . Front. Psychol. 13 :955594. doi: 10.3389/fpsyg.2022.955594, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Wahbeh H., Radin D., Mossbridge J., Vieten C., Delorme A. (2018). Exceptional experiences reported by scientists and engineers . Explore (N.Y.) 14 , 329–341. doi: 10.1016/j.explore.2018.05.002, PMID: [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Williams B., Woollacott M. H. (2021). Conceptual cognition and awakening: insights from non-dual Saivism and neuroscience . J. Transpers. Psychol. 53 , 119–139. [ Google Scholar ]
  • Zeiger W. A., Marosi M., Saggi S., Noble N., Samad I., Portera-Cailliau C. (2021). Barrel cortex plasticity after photothrombotic stroke involves potentiating responses of pre-existing circuits but not functional remapping to new circuits . Nat. Commun. 12 :3972. doi: 10.1038/s41467-021-24211-8, PMID: [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Marc Novicoff

Writing Portfolio

Philosophy of the Mind: Dualism vs Identity Theory

In philosophy, there exists a view known as dualism, according to which consciousness and the mind are fundamentally different from the physical. There is substance dualism, argued by Descartes, which declares that minds and bodies are two entirely different substances. There is also property dualism, argued by Brie Gertler, which declares that consciousness, and therefore experience, does not consist of merely physical properties. Then there are views that are not dualist. Of these, I will be discussing D.M. Armstrong’s functionalism, which describes a mental state as “characteristically, the cause of certain effects and the effect of certain causes” (82). In this way, a mental state like hunger is characteristically an effect of lack of food, and characteristically the cause of getting up to find something to eat. I see functionalism as the most compelling view because it handles issues such as solipsism, with which Cartesian dualism struggles, and mental causation, which property dualism can accommodate only by accepting philosophically undesirable conclusions (epiphenomenalism, overdetermination, or denial of causal closure). Moreover, I see experience as not beyond the physical: if one were able to grasp all the physical, one would grasp the experience. A completely physically knowledgeable person may gain a new skill by doing something new, but they already know all there is to know about the experience.

First, we will begin with Cartesian dualism. Descartes comes to a dualist conclusion by setting off on an epistemological journey to lay a firm foundation for all knowledge. The key to laying this foundation is to distill the things we think we know down to the things we are absolutely certain we know. Descartes does this through a series of doubts, going through each of his foundational beliefs; if a belief can be doubted at all, it is set aside. After all the doubtable beliefs have been set aside, only the certain will remain, and that is what we can build knowledge on.

Descartes’ first type of foundational belief is that his senses give him accurate information about the world and what it is like. Doubts about this belief are not difficult to find: sometimes one mishears things; sometimes one sees refracted light. In this way, one’s senses routinely deceive them. Sense-based perception is therefore not certain and can be put aside. The first belief is then altered to remove doubt, yielding Descartes’ second type of foundational belief: that at least in good conditions (nothing loud preventing you from hearing correctly, no water refracting light), the senses working together provide accurate information. This too is easily doubtable, Descartes notes, given that dreams, hallucinations, and mental illnesses can all cause the senses to work together, in good conditions, to deceive their users. The senses are put aside. Descartes continues the method with his third type of foundational belief: that at least his intellect gives him knowledge of general types of things, including conceptual truths such as mathematical facts. Even this, Descartes argues, is doubtable. He conceives of an evil genius that could be deceiving him and leading him to believe inaccurate conceptual truths. Thus, the intellect’s ability to discern conceptual truths is put aside along with the senses’ ability to discern environmental truths.

What is left of Descartes’ knowledge after the senses and the ability to see conceptual truths are gone? Only his mind. Descartes is only certain that he—a thing that thinks—exists. This thing that thinks that Descartes is sure exists is therefore a distinct type of thing, one that is knowable with certainty through introspection. It is also unextended, meaning it doesn’t take up measurable space. This kind of unextended, thinking thing, knowable with certainty via introspection shall be called a mind. Bodies, on the other hand, are things that are extended, unthinking, and knowable via perception (and therefore never knowable with certainty). Descartes’ view is aptly called substance dualism, as it recognizes two distinct substances: minds and bodies.

With substance dualism come some philosophical problems. I will focus on the two most glaring to me, the first being the problem of other minds. If minds are only knowable through introspection, what possible reason does one have to believe that anybody else has a mind? A situation where the existence of other minds is not certain is troubling not only because it challenges our existing views, but because it might also create moral dilemmas. If nobody else has a mind, shouldn’t the world bend to the satisfaction of my mind? I should steal what I want and kill whom I want; nothing that is definitely thinking will suffer. The second most glaring issue is that of mental causation. If minds and bodies are so fundamentally different, how do they seem to interact constantly? When I’m hungry (an issue of the mind), I put my philosophy reading down and go get the Snickers sitting on my desk (undeniably an issue of the body). Cartesian dualism struggles with these two issues, solipsism and mental causation. Functionalism does not.

Armstrong presents his functionalism by first presenting identity theory. This is the theory that the mind is identical to the brain. Every mental state is just a brain state. Let’s use an example: there is a well-documented neurological connection between dopamine and joy. An identity theorist would therefore say joy is dopamine. [*] Armstrong sees this theory as a “perfectly intelligible one…once we achieve a correct view of the analysis of mental concepts” (82). Armstrong’s mission then is not to firmly rule on the correctness of identity theory, but rather to “make the way smooth” for it and make it seem plausible (82).

Armstrong accomplishes this by reconceiving mental states as things that follow from certain inputs and lead to certain outputs. He compares this to the meaning of the word poison. A poison is defined as any substance that causes certain effects, like harming the body in a certain way, enough to potentially kill a person. Similarly, hunger can be defined as “a state of a person or animal that characteristically brings about food-seeking and food-consuming behavior by that person or animal” (82). Any mental state can, in principle, be defined in this way, as flowing from inputs to outputs. Take my anger at a friend of mine: it flows from the input of my perceiving him do something wrong, and it characteristically leads to the output of my confronting him. Inputs can also be brain states themselves. The utter despair a clinically depressed person feels is an effect of abnormal brain chemistry (an input), and that despair is also a cause of their picking up the phone and texting a friend to ask if he knows any good clinical psychologists (an output). Next, we will discuss a new kind of dualist, who has found an issue (or so she thinks) with functionalism: aren’t mental states more than just the effects of causes and the causes of effects? What of the experience of a mental state?
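
Before turning to that objection, the causal-role idea itself can be pictured with a small, purely illustrative sketch of my own (an analogy, not Armstrong’s formalism): a state whose identity is exhausted by its typical causes and typical effects, with the question of what physically realizes it left open.

# Illustrative analogy only (not Armstrong's own formalism): a "mental state"
# individuated purely by its typical causes (inputs) and typical effects (outputs),
# leaving entirely open what realizes it physically.

class CausalRoleState:
    def __init__(self, name, typical_causes, typical_effects):
        self.name = name
        self.typical_causes = typical_causes    # conditions that characteristically produce the state
        self.typical_effects = typical_effects  # behavior the state characteristically produces

    def behavior_given(self, condition):
        # If a characteristic cause obtains, the state issues its characteristic effects.
        return self.typical_effects if condition in self.typical_causes else []

hunger = CausalRoleState(
    name="hunger",
    typical_causes=["lack of food"],
    typical_effects=["food-seeking behavior", "food-consuming behavior"],
)

print(hunger.behavior_given("lack of food"))
# ['food-seeking behavior', 'food-consuming behavior'] -- the state is exhausted by
# this input-output profile; whether a brain state or something else plays the role
# is left open, which is what makes the analysis hospitable to identity theory.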

I will be using Gertler to stand for such dualists, known as property dualists. These dualists believe that the mental has different properties from the physical. In Gertler’s words, “conscious experience is neither constituted nor necessitated by structural-dynamic phenomena” (6). Structural-dynamic phenomena are enough to exhaustively constitute physical states, but “conscious experience” has fundamentally different properties. Gertler is arguing for the qualitative nature of the mental. Since the mental is qualitative, she argues, it cannot be understood exhaustively through the physical sciences. Having set up her thesis, Gertler defends it with a couple of thought experiments. For Gertler, both thought experiments reach the conclusion she is looking for: there is something about experience, even if we can’t put our finger on it, that isn’t structural-dynamic (or physical, as most people would say).

The first of these experiments is known as the knowledge argument. The thought experiment centers on Mary, a genius neuroscientist living in the future. Being so talented and futuristic, Mary has learned everything about color vision, but she has never seen any colors. She knows the wavelength of red, and she knows the neural effects of seeing red. Yet, as the example goes, she will learn something new when she sees red. She will likely even admit this, exclaiming “Oh, this is what it’s like to see red” (9). The second thought experiment asks us to imagine a zombie that has exactly the same physical characteristics as us, down to every single atom, but isn’t conscious. The implication of the exercise is that we can easily imagine this, and that we therefore conceive of consciousness as not merely physical.

I don’t like either thought experiment, [†] but the gist is easy to get. Each thought experiment is trying to get us to agree that consciousness is much more than the physical by taking the physical to the extreme. Just because all the physical is there, that doesn’t necessarily mean consciousness is, the zombie theorist tells us. Just because you know all the physical doesn’t mean you know the conscious experience, the Mary theorist tells us. Unlike Cartesian substance dualism, this property dualism seems able to handle the concept of other minds. However, it still struggles with mental causation, the issue of how the mental can cause the physical.

To understand the issue of mental causation, let’s look at the example we’ve already mentioned a couple of times: hunger. A hungry person has a physical state of lacking, and a mental state of hunger. That person will then contract his muscles in the right way to go get food, and as he eats, he will feel satiated. Mental state #1 is hunger; physical state #1 is energy-deficient cells (and other physical features of hunger, maybe cortisol or adrenaline); mental state #2 is satisfaction; physical state #2 is cells beginning to produce energy from the food being processed (and other physical features, maybe dopamine releases). What caused physical state #2? According to the principle of causal closure, each physical event must be fully explicable by a physical cause, so physical state #1 must be to blame. What then of mental state #1? The identity theorist says that mental state #1 must then have been included in physical state #1. Armstrong’s functionalism is in line with this (or at least proves its intelligibility). What does the dualist say of mental state #1? The dualist has three options that maintain mental state #1’s distinctness from physical state #1.

  • Mental state #1 had no physical effects, and mental states never have any physical effects (this is called epiphenomenalism).
  • Mental state #1 caused physical state #2, but so did physical state #1. This is overdetermination.
  • Causal closure is not a true idea.

Gertler essentially arrives at the third. She argues that causal closure relies on an “exhaustive” nature of physical causality. This is a much higher bar than just what scientists have arrived at in the study of structure and dynamics. Therefore, the physicalists require the same amount of armchair reasoning to defend their theses as dualists do theirs. The debate really comes down to what a person finds conceivable, she says. Dualists require conceptions of subjective consciousness and (seemingly) zombies. Physicalists require conceptions of the structural-dynamic as being able to exhaustively explain all physical phenomena, which has not yet been proven.

I stand with the functionalist, and I also happen to stand with the identity theory it supposedly enables, though I won’t get into the latter. Functionalism doesn’t struggle with the solipsism Cartesian dualism enables. Functionalism also handles mental causation nicely, as it literally defines mental states causally (as the effects of certain causes and the causes of certain effects). By contrast, dualism’s handling of mental causation is not philosophically desirable, in my opinion. Let’s go through the three options available to a property dualist for understanding mental causation while maintaining the distinctness of the mental from the physical:

  • Epiphenomenalism goes against much of what we consider to be true, like that our hunger motivates us to eat or that our sadness causes tears. Epiphenomenalists may have good solutions to this, but at best it is an idea that runs contrary to everything most people believe about the mind and its ability to influence our actions.
  • Overdetermination implies that only one of either the physical or the mental is really necessary. Without violating causal closure (meaning the physical is necessary), this renders the mental causally unnecessary. It may still have causality, but it doesn’t have importance. Again, this is unsettling. Shouldn’t hunger be the motivating force of us getting food? Not just an entirely needless feeling?
  • Denying causal closure: if causal closure is denied, then the physical and the mental must both be needed for physical effects. This seems incorrect as well. Take hunger again: if food-seeking requires both physical lacking and hunger, then why do things that don’t have a mental life (like bacteria) show so many food-seeking behaviors? Surely food-seeking doesn’t require the mental.

What about the thought experiments that prove dualism to be true, though? What about Mary and the zombie? The zombie experiment dies out if the assumption of conceivability doesn’t hold (and it doesn’t for me), but Mary is slightly more interesting, after we tweak the example. Let’s say there is a brilliant neuroscientist named Ann, living in a future where nearly everything is known about vision science, but cone transplants do not yet exist. Ann grows up in the normal world with normal colors, but she is just incapable of seeing red due to her not possessing the right cones. This hasn’t stopped her from being educated, and she has become a brilliant vision scientist who now knows everything physical there is to know about color vision. During Ann’s quest for tenure at the university she works at, she figures out how a cone transplant could be conducted. She finds an eye surgeon who is capable and has the transplant performed on her. Walking outside the clinic, she sees a stop sign. She is seeing red for the first time. However, I don’t believe she is learning anything new. She has gotten a new skill, the ability to see red, but I do not see what she learns. She experiences all the physical effects of seeing red for the first time, but she has expected all of them. All of the workings of her brain are just as her scientific models have predicted because she knew everything. The only thing that has changed is that she has gained the skill of seeing red and has now herself exhibited all the same physical effects of color vision that she had studied.

Cartesian dualism puts us in a place where we doubt the minds of others. Property dualism doesn’t struggle with this issue, but it does struggle with mental causation, forcing us to admit the ineffectualness of the mental, the needlessness of the mental, or the insufficiency of the physical without the mental. All three of these are undesirable. Functionalism defines its mental states as existing in this causal chain, making mental causation easy while still avoiding the potentially solipsistic conclusions of Descartes. The main worry about functionalism and identity theory is that they leave out the supposedly key nature of the mental as qualitative and beyond the reach of science. The thought experiments one could propose to prove the qualitative nature of experience involve either assuming existing agreement on dualism (the zombie, which identity theorists would deny anyway) or mistaking the acquisition of a skill for the learning of a fact (Mary, or even Ann). Thus, the functionalist view is the one I side with.

[*] They would probably be more sophisticated than this. A true type-type identity theorist might say something more like “joy is just the brain activity scientifically associated with joy and dopamine is clearly a part of that associated brain activity.” Upon the discovery of each new physical characteristic of the brain associated with feelings of joy, a type-type identity theorist would add that physical characteristic to her conception of joy.

[†] The Mary example makes no real sense to me. She is an incredible neuroscientist living in the distant future, in a time where literally everything physical about color vision is known. In fact, she literally knows the wavelength of red. How is it possible she hasn’t built a machine (or come across one thousands of years ago) that can produce colors based on wavelengths? We can produce many wavelengths today, and it’s quite easy to figure out which we can see by looking at them and seeing if they look like a color to our eyes. I struggle to understand this thought experiment, but I think her being unable to see red makes the whole experiment much more comprehensible. I will return to this example.

The Zombie example is even worse in my opinion. It relies entirely on the conceivability of this zombie. Conceivability is very subjective and I would doubt there is an identity theorist who would admit the conceivability of the zombie.

Philosophy of Mind: An Overview
Laura Weed takes us on a tour of the mind/brain controversy.

In the twentieth century philosophy of mind became one of the central areas of philosophy in the English-speaking world, and so it remains. Questions such as the relationship between mind and brain, the nature of consciousness, and how we perceive the world, have come to be seen as crucial in understanding the world. These days, the predominant position in philosophy of mind aims at equating mental phenomena with operations of the brain, and explaining them all in scientific terms. Sometimes this project is called ‘cognitive science’, and it carries the implicit assumption that cognition occurs in computers as well as in human and animal brains, and can be studied equally well in each of these three forms.

Before the mid-twentieth century, for a long time the dominant philosophical view of the mind was that put forward by René Descartes (1596-1650). According to Descartes, each of us consists of a material body subject to the normal laws of physics, and an immaterial mind, which is not. This dual nature gives Descartes' theory its name: Cartesian Dualism. Although immaterial, the mind causes actions of the body, through the brain, and perceptions are fed to the mind from the body. Descartes thought this interaction between mind and body takes place in the part of the brain we call the pineal gland. However, he didn't clarify how a completely non-physical mind could have a causal effect on the physical brain, or vice versa, and this was one of the problems that eventually led to dissatisfaction with his theory.

In the early twentieth century three strands of thought arose out of developments in psychology and philosophy which would come together to lead to Cartesian Dualism being challenged, then abandoned. These were Behaviorism, Scientific Reductionism and Vienna Circle Verificationism. I will begin with a very brief summary of each of those positions before I describe various contemporary views that have evolved from them:

Behaviorism : Behaviorists accept psychologist B.F. Skinner’s claims that mental events can be reduced to stimulus-response pairs, and that descriptions of observable behavior are the only adequate, scientific way to describe mental behavior. So, for behaviorists, all talk about mental events – images, feelings, dreams, desires, and so on – is really either a reference to a behavioral disposition or it is meaningless. Behaviorists claim that only descriptions of objectively observable behavior can be scientific. Introspection is a meaningless process that cannot yield anything, much less a ‘mind’ as a product, and all human ‘mental’ life that is worth counting as real occurs as an objectively observable form of behavior. Head-scratching is objectively observable. Incestuous desire is not; nor is universal doubt, apprehension of infinity, or Cartesian introspection. Philosophers like Carl Hempel and Gilbert Ryle shared the view that all genuine problems are scientific problems.

Verificationism was a criterion of meaning for language formulated by the Logical Positivists of the Vienna Circle, who argued that any proposition that was not a logical truth or which could not be tested was literally meaningless. For example, a mother's claim that the cat will bite Jimmy if he doesn't stop teasing her is testable, but a theologian's claim that the Infinite Absolute is invisibly bestowing grace in the world is not.

Scientific Reductionism is the claim that explanations in terms of ordinary language, or sciences such as psychology, physiology, biology, or chemistry, are reducible to explanations at a simpler level – ultimately to explanations at the level of physics. Some (but not all) mental terms can be ‘operationalized’, or reduced to testable and measurable descriptions. Only these ones will rate as real mental events to the scientific reductionist. There will be no Cartesian or Platonic ‘mind’ left over to be something different from a body.

Mental Events are Physical

The Oxford philosopher Gilbert Ryle (1900-1976) had another way to explain away the mind that Plato and Descartes believed exists independently of a body. Ryle characterized Cartesian Dualism as a ‘category mistake’. Category mistakes, as the name suggests, involve putting something into the wrong logical category. In Ryle’s example, a visitor to Oxford wanders around the various colleges, libraries, laboratories and faculty offices, and then asks: “Can I see the University?” She has missed the fact that in seeing the buildings she was already seeing the university.

Ryle claimed that Descartes’ ‘ghost in the machine’ – his immaterial mind in a material body – is a similar mistake. Descartes thinks he must have a ‘soul’ in his body that possesses his talents, memories and character. Ryle says that like the university, the mind is just the organization of Descartes’ body’s propensities. Bodies don’t need a ghost to run them. According to Ryle, the properties of a person are better understood as adjectives modifying a body, than as a noun (an object) parallel to it. Intelligence, for example, is not a thing that exists apart from and parallel to a body, but rather is a collection of properties a body has. Intelligence includes properties such as social skill, quick wit, organizational ability, math ability, a sense of humor, musical talent, articulateness, critical thinking skill, and artistic sensitivity. Someone who never exhibited any of these skills or abilities would not be called intelligent; and anyone who is considered intelligent exhibits some of these talents.

Ludwig Wittgenstein (1889-1951) contributed an argument against private language. He claimed that for a symbol or a word to have a meaning there must be agreement among people about what the symbol is to mean. Plato’s idea of what ‘triangle’ means was that an ‘inner’ mental image occurs ‘in your mind’. By contrast, according to Wittgenstein, ‘triangle’ is a public word, used to communicate in a social group. Children learn its correct and incorrect applications by being corrected by elders in their use of the word. According to Wittgenstein, apart from the social use, ‘triangle’ has no meaning. Similarly on this account, ‘mind’ has no meaning apart from its effects.

J.J.C. Smart added materialism to scientific reductionism in this developing point of view by claiming that mental states could literally be particular states of the brain – so that for example some C-fibres firing in one's brain would be identical with a specific feeling of pain. This became known as the Mind-Brain Identity Theory, and for a while it dominated philosophical discussions about mental events. Since then, however, Identity Theory discussions have been superseded by discussions driven by computer metaphors, such as Functionalism, Neurological Reductivist Materialism, Supervenience Theories, and Naturalistic Dualism. So let's look at those newer theories.

Functionalism

Functionalism is the theory that the important thing about mental states is not where they are located or what they are made of, but what function they perform.

Alan Turing is generally regarded as one of the fathers of computer science: among other achievements, he produced the first ever design for a stored-program computer. He also argued that artificial intelligence is intelligence in every sense of the word. In a 1950 paper he described a scenario which has since become known as the Turing Test. Suppose you are communicating with two people on the other side of a wall. You pass notes through a slot and figure out which of the people is responding to your notes. Now, suppose that one of the people is replaced by a computer, and you can't tell that this has happened. Do you have any reason to say that the person you were communicating with before is intelligent but the computer is not? Turing says, no, you don't. If intelligence consists of your ability to solve math problems, keep track of lots of information, organize data, recognize recurring patterns, and play chess, and the computer can do all of these things better and faster than you can, then you have no right to claim that you are intelligent and it is not. Now that Deep Blue has beaten Kasparov at chess, and the best Jeopardy players have been beaten by IBM's Watson, Turing's claim seems even more convincing.

Turing is identifying mental properties with mental functions – not with observable behavior, as Ryle did; nor with brain states, as Smart did. Turing assumes mental functions can cause behavior and brain states, but not that they’re identical with either behavior or brain states.

Hilary Putnam, writing in the 1970s, argued that a feeling of pain could be a function that is in principle realizable in a collection of silicon chips or some other physical apparatus as well as in a brain. Putnam called the idea that humans can think but computers can’t, ‘hydrocarbon chauvinism’. He further claimed that any organism can be described as a probabilistic automaton – i.e., as a system that undergoes transitions from initial states, through processing functions, to output states which can be predicted with varying degrees of accuracy. All organisms are systems that causally interact with the environment, have processing procedures, and output effects, claimed Putnam. (He has since changed his mind about functionalism and become a pragmatist.)
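To make the idea of a probabilistic automaton concrete, here is a minimal sketch in Python. The state names, transition probabilities, and output behaviors are invented for illustration and are not Putnam's; the point is only the general shape of a system that moves between internal states with fixed probabilities and produces outputs along the way.

```python
import random

# A toy probabilistic automaton: internal states, probabilistic
# transitions between them, and an output associated with each state.
# All names and numbers below are illustrative placeholders.

TRANSITIONS = {
    "resting": [("alert", 0.3), ("resting", 0.7)],
    "alert":   [("fleeing", 0.5), ("resting", 0.5)],
    "fleeing": [("resting", 1.0)],
}

OUTPUTS = {
    "resting": "stay put",
    "alert":   "orient toward stimulus",
    "fleeing": "move away",
}

def step(state):
    """Pick the next state according to the transition probabilities."""
    next_states, weights = zip(*TRANSITIONS[state])
    return random.choices(next_states, weights=weights, k=1)[0]

def run(state="resting", steps=5):
    """Run the automaton, printing the behavior produced in each state."""
    for _ in range(steps):
        print(state, "->", OUTPUTS[state])
        state = step(state)

run()
```

On this picture, nothing in the description fixes what the states are made of, which is exactly the point functionalists press.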

Jerry Fodor added to the functionalist program the proviso that any function capable of working as brain states do must be computational. Neurons, structures and patterns in the brain can be described in terms of mathematical models. Therefore if mental events are to be functionally connected to brains in a one-to-one correspondence, then they too must be realizable through a language of thought in a digitizable format.

Neurological Reductionism

Paul and Patricia Churchland espouse a position they call ‘eliminative materialism’, which argues that the project of neuroscience will actually prove to be even more radical than identity theorists like Smart realized. The Churchlands claim that talk of mental states will eventually be abandoned altogether, in favor of a radically different, neuroscientific account of how the brain works, in which mental states are not identified with brain states but simply dropped.

According to the Churchlands, folk psychology is the way most people think about how thinking works. So for example, most people now think that we have a stream of consciousness that contains images and conceptions of a wide variety of types about which we have beliefs and attitudes. Our beliefs and attitudes are colored by our feelings, which include mental states like joy, sorrow, resentment, anxiety and relief. We also think that the way we sense the world and ourselves is largely a direct representation of the way the world is; so the world contains cold and hot, colored, shaped, hard and soft, threatening and soothing things, and our bodies sometimes are those ways as well. All of this is false, according to the Churchlands. It is not just a bit misleading, the way a fuzzy map might misrepresent some areas of terrain. It is downright false across the board, in the way that the notion that demonic possession explains mental illness is false.

Paul Churchland points out how radically scientific revolutions alter the way people think about things. When Aristotle’s theories in physics were replaced by Newtonian physics, his ideas like ‘natural motion is circular’ just ceased to exist. Likewise, science now has no place for phlogiston, choleric personalities, and demonic possession. Churchland predicts that in the same way, at some point in the near future, people will no longer even try to introspect to see how they are doing. Just as a psychologist might now tell a depressed patient to stop worrying about why he is depressed and take some Prozac, so in the future, people might figure out how they are doing mentally by giving themselves a home fMRI or CAT scan and having their computer analyze the data. The resulting analysis will have nothing in common with “I’m sad because my cat died,” or “I’m elated over the beautiful sunset.”

Churchland has three arguments in favor of eliminative materialism. The first is that folk psychology fails to explain such common activities as sleep, learning, intelligence and mental illness. Since folk psychology has been around for thousands of years, it isn't a lack of time to work out the details that folk psychology suffers from; it is explanatory poverty. Secondly, the history of ideas supports elimination of old conceptual frameworks. Folk notions of motion were completely replaced by Newtonian physics, leaving not a trace. Folk ideas of cosmology, fire and life were equally cockeyed. The phenomena of conscious intelligence are more complex and harder to understand than any of the above, so there is little likelihood that our folk ideas about consciousness could be right. Thirdly, it is highly improbable that folk psychology will be reduced to neurobiology. Reductions require that the specific principles and kinds of things in the theory being reduced map closely onto those of the reducing theory, and folk psychology is highly unlikely to map onto neurobiology in this way.

Daniel Dennett adds to the Churchlands' project a claim that interpreting a system as an intentional and rational system is simply a matter of taking a particular type of stance with respect to the system. To see Deep Blue the chess computer as rational and interpret its moves as planning to attack a queen is simply an admission that we don't know what design or physical features of Deep Blue produced the behavior we observe, and so the behavior appears rational. Complex systems, says Dennett, appear intentional when viewed ‘from the top down’, and mechanical when viewed ‘from the bottom up’. To Dennett, agents, intentionality, meanings in language, phenomenal qualities, intelligence in the abstract, and mental entities in general, can play no engineering role in explaining the workings of any system, human or otherwise. So, in all cases of apparent rationality, apparent agents can be decomposed into mechanical parts.

Supervenience

Donald Davidson and Jaegwon Kim agree with the reductionists that only physical and mechanical principles explain anything. But they insist that phenomenal experience, such as the experience of seeing a sunset, adds something to a human life that a computer might lack. Kim and Davidson both said that phenomenal qualities are supervenient properties of brains: properties arising simply because the physical processes in the brains were working. The supervenience of mental phenomena on brain activity like this is understood as paralleling the supervenience of smoke on fires: the smoke does not causally affect the fire, but will be there, as a by-product, whenever a fire is occurring. These philosophers thus avoided denying the reality of mental experience, but the supervenient phenomenal properties are here viewed as playing no causal role in thinking or action. This supervenient view, of mental phenomena being causally-ineffective emergent properties of the brain, is similar to the position in philosophy of mind called epiphenomenalism.

Naturalistic Dualism and the Hard Problem

David Chalmers, however, argues that materialist reductionism of the Churchlands' type throws out too much, and cannot deal with the fact that humans enjoy sunsets. Chalmers agrees with Thomas Nagel that there is something that it is like to be a bat, or a human, but there may be nothing that it is like to be a TV set. (Computers are left as an open question.)

Chalmers argues that functionalists and reductionists are only dealing with the ‘easy problems’ of consciousness. Problems such as how an organism learns, how the sense mechanisms work, how the brain processes sensory input and the like, are all mechanical questions about organic functions, so as one would expect, mechanical explanations are adequate to explain them. The hard problem, according to Chalmers, is why any of these events should be accompanied by phenomenal experience: what it’s like to see red, for example. He argues that there are no physical facts about brains from which it follows that phenomenal experience should occur for those and only those physical events for which it does occur. In other words, there’s nothing physically special about the brain which explains experiences. Further, rejecting behaviorism, Chalmers points out that a first-person perspective is required to even know that phenomenal properties accompany the physical events.

Chalmers argues for a form of dualism that he calls ‘naturalistic dualism’. To explain consciousness in full, he argues, requires taking phenomenal experience seriously. But, unlike Plato and Descartes, Chalmers believes that the conscious phenomena are dependent on the existence of brain states. This implies that the relationship between the mental states and their biochemical base is scientifically discoverable. Also, the conscious states must mirror the functions performed by the biochemical states in some important ways. Chalmers also calls his position ‘non-reductive functionalism’.

Objections to the Cognitive Science Program

While John Searle agrees with the materialist leanings of the cognitive scientists, he has been arguing that functionalists and eliminativists take the computer model too seriously, as actually descriptive of the functioning of a mind (Strong Artificial Intelligence) rather than as a helpful metaphor (Weak AI). Searle’s two main objections to Strong AI concern the distinction between syntax and semantics in language, and the distinction between causation and logical inference in reality.

The syntax of a sentence is the grammar or logical structure of the sentence. It can be captured through a formulation of this structure in symbolic logic. The semantics of a sentence is its meaning or reference. Searle says that philosophers like Turing, Fodor, the early Putnam and other advocates of Strong AI collapse semantics into syntax. There are some reasons for doing this. For instance, Turing could decrypt the codes the Germans were using in World War II using only his syntactic engine, without reference to the meaning of what he was decoding. However Searle argues that in the way they operate, languages do not collapse semantics into syntax. He makes this point most clearly through his Chinese Room example. A person who speaks no Chinese, sitting in a room, has cards with Chinese characters on them slipped under the door to him. He has a rule-book for processing these characters, and passes further character cards out of the room according to those rules. A person outside the room interprets the output as someone answering questions in Chinese. Searle says that the ability to string Chinese symbols together according to grammatical or logical rules does not however constitute speaking Chinese, because the person in the room does not understand the reference or meanings of the symbols that a speaker of Chinese would give them. To understand the meanings, one would have to understand not only what the cards refer to, but a lot about Chinese culture, nuances of tone and context, social structure, mannerisms, etc. None of this data is contained in or reducible to the syntactical rules of Chinese.
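The point can be made vivid with a toy program: a pure lookup table that maps incoming symbol strings to outgoing ones by rule, with no access to what any symbol means. The sketch below is only an illustration, with a two-entry rule book chosen arbitrarily; nothing about it is drawn from Searle beyond the general idea of rule-following without understanding.

```python
# A toy "room": purely syntactic rules mapping input strings to output
# strings. The program consults only the shape of the input, never its
# meaning; the tiny rule book is an arbitrary example.

RULE_BOOK = {
    "你好吗": "我很好",
    "今天天气如何": "天气很好",
}

def room(incoming: str) -> str:
    """Mechanically apply the rule book; fall back to a fixed reply."""
    return RULE_BOOK.get(incoming, "不知道")

print(room("你好吗"))  # looks like conversation from outside the room
```

From the outside the replies may pass for competent Chinese; inside, there is nothing but string matching.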

Searle’s second point concerns the distinction between causation and logical inference. Since the AI revolution began in the late twentieth century, a good deal of philosophical effort has gone into trying to show that a specific logical formula ‘p implies q’ is equivalent to or somehow reducible to the scientific claim ‘p causes q’. Searle says there are several serious problems with this project. The main one is that logical relations are time-insensitive, and, for the most part, symmetrical: since ‘p implies q’ is equivalent to ‘not-q implies not-p’, I can derive either from the other in either order. Yet causation is neither time-insensitive nor symmetrical in this way.
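The logical half of Searle's contrast can be checked mechanically: ‘p implies q’ and ‘not-q implies not-p’ agree on every truth-value assignment, which is why either can be derived from the other in either order; no comparable symmetry holds for ‘p causes q’. A brute-force check, as a small Python sketch:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    # Material implication: false only when a is true and b is false.
    return (not a) or b

# Contraposition: (p -> q) and (not q -> not p) match on all assignments.
for p, q in product([False, True], repeat=2):
    assert implies(p, q) == implies(not q, not p)

print("p -> q is equivalent to not-q -> not-p on every assignment")
```

Nothing in this equivalence tracks time or direction of influence, which is Searle's point about why it is a poor stand-in for causation.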

To Searle, the reason computational logic patterns can’t be causal explanations of mind/brain behavior is that they are simulations. He points out that simulating a hurricane on a computer may tell you some things about the hurricane, but it doesn’t constitute causing a hurricane. And the simulation has no causal power to make the hurricane do anything, such as change course or grow less powerful. Likewise, simulated fires don’t burn anything, and simulated car crashes don’t bend any metal. Simulated logical patterns don’t cause mental states or influence brain states. Searle accuses the Strong AI people of confusing their virtual reality with the real thing.

Further Objections to Reductionism

Emotions: Recent discoveries by Antonio Damasio and Jaak Panksepp about the role of emotions in decision-making and social reasoning have raised further doubts about the strongly cognitive model of mind inherited from Descartes and perpetuated by the Strong AI/Turing machine model. Far from being the distractions to mental operations that Plato and Descartes represented them as being, emotions have turned out to be essential elements in mental functioning. Patients with pre-frontal-cortex brain injuries, like the railroad worker Phineas Gage, or other brain injuries that impair emotional functioning, become incapable of even simple planning. Without emotional drive, cognition appears to become dysfunctional, at least in humans.

The Extended Mind: Other critics of the reductionistic agenda in the philosophy of mind have pointed out that many aspects of our mental functioning are not brain-bound in the way identity theorists supposed. The psychologist J.J. Gibson articulated the idea of human thinking as ecologically embedded in a body and an environment. Following this, Andy Clark argues that one’s body, ability to move, and system of environmental affordances, are as much a part of one’s mental functioning as are brain functions. Clark shifts the philosophical emphasis from analysis of the brain to analysis of a human’s kinesthetic interaction with an ecological and social space. He points out that large-scale social projects, such as a building project or a disaster relief effort, occur across a considerably extended space and through the intersection of many people’s minds, and are not limited to neuronal firings in any individual brain. Clark, in a joint paper with David Chalmers, discusses the fictional example of Otto, a man with memory problems who remembers the location of a library (and other useful pieces of information) by writing it down in a notebook. They argue that Otto’s memory is literally in the notebook, not in his brain. Similarly, much of the memory of all of us arguably now resides in a variety of electronic devices.

Panpsychism: A more robust form of criticism of the reductionist program comes from a revival of panpsychism by philosophers such as Galen Strawson and Gregg Rosenberg, and physicists such as Henry Stapp. They concur with Alfred North Whitehead’s view that for consciousness to be anywhere in nature it must be everywhere in nature, and with William James’ view that our stream of consciousness is open to intrusions from an environmentally-pervasive conscious ‘more’. In other words, everything has an element of consciousness. For most of the materialists, consciousness exists only as a rare occurrence in the brains of a single or a few species (if at all). The panpsychists charge that on this account, consciousness is a complete ‘ontological dangler’: a few anomalous islands of consciousness surface, for little apparent reason, in a vast sea of insentient and unconscious dead matter. Strawson, Stapp and Rosenberg object that the materialist picture arises from a Newtonian misunderstanding of matter. However, in quantum physics, matter may not be insentient, unconscious and dead, but have an element of consciousness too.

© Prof. Laura Weed 2011

Laura Weed is Professor of Philosophy at the College of Saint Rose in Albany, New York.

Here are some handy hints for further reading:
• José Luis Bermudez, Anthony Marcel & Naomi Eilan, eds., The Body and the Self, 1998
• David Chalmers, The Conscious Mind, 1996
• Patricia Churchland, Neurophilosophy, 1986
• Andy Clark, Supersizing the Mind, 2008
• Antonio Damasio, Looking for Spinoza: Joy, Sorrow, and the Feeling Brain, 2003
• Donald Davidson, Essays on Actions & Events, 1980
• Daniel C. Dennett, Consciousness Explained, 1991
• Jerry Fodor, The Language of Thought, 1975
• Jay Garfield, Foundations of Cognitive Science: The Essential Readings, 1990
• Thomas Nagel, The View From Nowhere, 1986
• Jaak Panksepp, The Archaeology of Mind: Neural Origins of Human Emotion, 2010
• Hilary Putnam, Mind, Language & Reality, 1975
• Gregg Rosenberg, A Place for Consciousness, 2004
• Gilbert Ryle, The Concept of Mind, 1949
• John Searle, Intentionality, 1983
• J.J.C. Smart, ‘Sensations and Brain Processes’, Philosophical Review, vol. LXVII, 1959
• Henry P. Stapp, Mind, Matter and Quantum Mechanics, 2nd edition, 2004
• Galen Strawson, ‘Realistic Monism: Why physicalism entails panpsychism’, Journal of Consciousness Studies, 13, 2006

Reincarnating the Identity Theory

Erik Myin

Department of Philosophy, Centre for Philosophical Psychology, University of Antwerp, Antwerp, Belgium

The mind/brain identity theory is often thought to be of historical interest only, as it has allegedly been swept away by functionalism. After clarifying why and how the notion of identity implies that there is no genuine problem of explaining how the mental derives from something else, we point out that the identity theory is not necessarily a mind/brain identity theory. In fact, we propose an updated form of identity theory, or embodied identity theory, in which the identities concern not experiences and brain phenomena, but experiences and organism-environment interactions. Such an embodied identity theory retains the main ontological insight of its parent theory, and by invoking organism-environment interactions, it has powerful resources to motivate why the relevant identities hold, without posing further unsolvable problems. We argue that the classical multiple realization argument against identity theory is built on not recognizing that the main claim of the identity theory concerns the relation between experience and descriptions of experience, instead of being about relations between different descriptions of experience, and we show how an embodied identity theory provides an appropriate platform for making this argument. We emphasize that the embodied identity theory we propose is not ontologically reductive, and does not disregard experience.

The Official Story

“The Identity Theory” forms, after treatments of dualism and behaviorism, the typical chapter 3 in an Introduction to Philosophy of Mind handbook. There, it will be narrated how Smart and Place, seeking to do justice to “inner” aspects of mind allegedly ignored by behaviorism, identified mental processes and states with brain processes and states, creating the mind/brain identity theory. Inevitably, the narrative will lead to the identity theory's difficulties in dealing with the phenomenal, its refutation by Putnam's multiple realization arguments, and the subsequent replacement of the identity theory by functionalism.

Though this has, by all standards, become the official story, we think it should not be taken for granted. Quite the opposite. Contrary to this official story, some form of the identity theory – so we will contend – still offers the best available means to deal with the question of how minds and bodies relate. Moreover, contrary to reigning consensus, the identity theory has not been refuted by multiple realization arguments. However, an identity theory need not necessarily be a mind/brain identity theory. In particular, we think that it is possible to combine the idea of identity with an embodied or enactive view of the mind. Moreover, the result holds considerable explanatory potential. Or so we will argue in this paper.

Identity and Explanation

The identity theory proposes to identify the mental with something else: somehow, what we call mental is not different from what we call physical or material. As we'll see, it's possible to develop different forms of identity theory from this root idea – theories, furthermore, which are standardly piled together and presented as the classic identity theory. Yet, so we think, it is the very idea of identity from which the main merits of these theories flow. The idea of strict identity, which lies at the heart of identity theories, is that something that we call by different names, or encounter in different ways, is, despite initial appearances, actually not different, but identical, in the sense of being one and the same thing (see Smart, 1959). Identity theories, of whatever stripe, hold that this notion of strict identity forms the basis for an adequate response to the question of how the mental relates to the physical.

Consider one standard example used in discussions of the identity theory: the identification of the Evening Star with the Morning Star. The relevant identification consists in denying that, despite our initial impressions, there are two different entities at play here. If we thought that there were, we were wrong. Importantly, once the identification is made, the main explanatory task that we are left with is to understand why we previously missed the identity, or how it was possible for us to be misled by different appearances of the same object – different ways in which we encountered the single planet Venus. Of course, in coming to make the identification, we need to have reasons for making it. In the example, this will be that, on reflection, the different appearances show particular patterns in time and space. For example, when taking into account time differences due to location, we notice we can explain the timing of the appearances of the Evening Star and of the appearances of the Morning Star. Moreover, taking into account the different perspectives due to location, the appearances are of an object with the same shape and size. Crucially, while such facts motivate why it is reasonable to believe that the identity holds (see Hutto and Myin, 2013 : 176), they don’t explain why Venus, under whatever description, has the properties that it has. For example, such facts can be referred to for justifying why someone holds that the Evening Star is the Morning Star, yet they don’t thereby explain why the Morning Star is identical to the Evening Star, or why the planet called Venus, Morning Star or Evening Star is as it is. In fact, identities such as the one that holds between the Evening Star and the Morning Star cannot be further explained 1 . They just hold, and one can either fail, or manage to be aware of them.

The reason why a strict identity cannot be explained lies in the identity itself. For if there are explanations, for some X and Y (where these are real entities or events, not encounters with entities or events), of why X relates to Y, these typically are explanations in terms of how X causes, brings forth, or generates Y. But that is to say that X and Y are not identical to start with, for something can cause, bring forth, or generate only something which is different from itself. For an X and a Y which are identical, the idea of the one causing, bringing forth, or generating the other is nonsensical. To use another classic example, Clark Kent doesn't cause Superman, he is Superman. It is possible to ask questions such as “Why is Clark Kent always where and when Superman shows up?” and the answer to such questions lies in pointing at the identity. Also, it is possible to ask “Why is Superman Clark Kent?” But the answer that one can provide to that question is one of explaining why we should believe in that identity, not one which offers some elucidating explanation of the identity itself, of why it holds, as distinct from why we should think that it holds.

While the usual example of the Morning Star and the Evening Star allows us to illustrate the aspects of what could be called the logic of identity that are relevant for our purposes, there's something potentially misleading about it too. For there is nothing experiential or subjective (even in the minimal sense of essentially being tied to a person or subject) about that star (or rather: planet). The different ways of encountering it, which should not be mistaken for the encountering of different things, are of a different nature, being just different objective perspectives on an object. But with experience, this changes, because experience can be ‘encountered’ in different ways: it can be enacted, or embodied, by the subject of the experience, but it can also be encountered objectively, for example when it is observed by another subject.

Pursuing this difference requires that we first introduce a new species of identity theory.

Embodied Identity Theory: Going Wide

The identity theory as proposed by Place, Feigl, and Smart did more than identify the mental with the physical: it identified mental states and processes with brain states and brain processes.

Indeed, their identity theory was a mind/brain identity theory, and often these phrases are taken to be synonymous. Tellingly, the most outspoken recent defender of the identity theory, Thomas Polger, explicitly commits to a mind/brain identity theory ( Polger, 2004 ).

But nothing in the idea of identity demands that the terms of identity be mind and brain, instead of mind and something else. As a consequence, it is possible to develop an identity theory in line with an embodied or enactive view of the mind (such E-views have been proposed by many; see Thompson, 2007; Barrett, 2011; Hutto and Myin, 2013, 2017). According to such views, experience and cognition are to be (re-)conceived in terms of organism-environment interactions. Sensation, perception, experience and cognition are “things organisms do,” and should be understood in terms of past and current interactions with the environment (Hutto and Myin, 2013). Explanations of experience, mind and cognition are subject to an “equal partner principle” (Hutto and Myin, 2013, p. 137) according to which environmental and intra-organismic factors can have equal weight in explanations of mental phenomena. The brain is seen as one of the players in the game, not as the locus of mindedness – that status is conferred on the spatially and temporally situated organism.

We are not here going to defend the legitimacy, or superiority, of such an embodied/enactive view of the mind as such (though many, including the current authors, have done so elsewhere; see Hutto and Myin, 2013, 2017; Zahnoun, 2018). Rather, our current goal is to show that one can combine an identity theory and the embodied/enactive view of the mind, and to argue that such a combination of an embodied/enactive view of experience and an identity theory is not only possible, but eminently viable.

So what can be said in favor of an identity theory that “goes wide,” or holds that experience and other phenomena referred to as mental are identical to situated activities of organisms in environments? To start answering this, consider an identification in a murder case. What a murder is, constrains what, or rather who, the murderer can be identified with, namely, a human being. Given that a murder is a premeditated act in which one human being is killed by another human being, a murder case can only be laid to rest by identifying a previously unknown murderer with a human being. It would be out of the question to take an object or an animal as a possibility for identification. The specifics of a particular case further narrow down the possibilities for identification. Whoever is singled out as the murderer must have been at the right place at the right time, must have left traces, must have some plausible motivation or psychological history, and so on. The more it is shown that such constraints apply to an actual identification, the more belief in the identity is motivated (a phrase from Hutto and Myin, 2013 , p. 175).

Now return to experience and the question of what it is to be identified with. The fact that a particular experience has the general characteristics that it has, such as being perspectival, subjective and affect-laden, exerts overall constraints on what it can be identified with. Activities of organisms fit the bill nicely, for they always have the required perspectivalness. They have a “value” uniquely related to a particular organism’s needs.

In addition, specific aspects of particular organism-environment interactions fit the bill when it comes to the particular phenomenal aspects of specific experiences. Particular phenomenal experiences occur in particular circumstances: we experience a sponge's softness in the activity of squeezing it (Myin, 2003), “the stinging sharpness of a pin prick, the bitter-sweet taste of dark chocolate” (Schier, 2009) when we prick our fingers with a pin, or when we eat chocolate. The features of the interaction match the features of the experience. When we stop squeezing the sponge, or squeeze it too hard, the feeling of softness fades quickly, or gets replaced by feeling the hardness of one's opposing hand. Pushing the pin in intensifies the pain, and brings it deeper into the body; the ways we handle the chocolate in our mouths, how we move it around, whether we bite or chew it, or let it slowly melt, affect, in predictable and controllable ways, the fine details of our gustatory experiences – just like putting glasses on drastically changes the visual experience of a myopic person in predictable and controllable ways (for many more examples of how interactive situation and, in particular, olfactory feel are related, see Cooke and Myin, 2011). In short, because person/environment interactions can meet both the constraints dictated by general features of experience and those dictated by the details of particular experiences, they seem to be appropriate terms to identify experiences with.

Of course, philosophers have argued, if not for the existence, then for the possibility of experiences divorced in some way from organisms, or from systems relevantly like organisms. Looked upon from the naturalistic perspective in which scientific explainability – broadly understood – stands central, arguments which only establish the possibility of disembodied experience, without any concern for explanation, are not acceptable. That is, “dangling” possibilities, or possibilities which can be conceived, but which, if they existed, would lack any explanation for why they occur, are not acceptable for a naturalism which takes explainability seriously. To come to this conclusion is to reiterate the prioritizing of explanatory concerns – avoiding “nomological danglers” (Feigl, 1958) – which has always been a prime motive for identity theorists. To illustrate: the idea of a momentary experience that comes into and instantaneously goes out of existence because of a “quantum accident” (Clark, 2009) might in some modal sense be possible. Yet the fact that such an occurrence would, by its very nature, be utterly unexplainable in even the most broadly construed naturalistic terms – it would be a miracle of sorts – renders it irrelevant as a consideration to draw conclusions from regarding the relation between minds, brains and bodies (elaborating on a point made in Myin, 2016: 100; see also Myin and Loughlin, 2018).

In contrast, the proposed identification of experiences and organismic environment-involving activities offers what seem exactly the right ingredients to explain conspicuous aspects of conscious experience such as perspectivalness or affect-ladenness. That is, these aspects of experience become more understandable after such an identification, because the life of an organism provides an evaluative perspective from which organism-relative interests can flow forth, and from which subjective phenomena like desire, fear, pleasure and disgust can begin to be made sense of (Thompson, 2007; also Dennett, 1991, chapter 7). Moreover, biology gives us a platform to understand how such aspects gradually emerged, and evolved from “humble beginnings” into complex forms. In other words, a biological, evolutionary framework provides us with the means to explain the coming into being of beings for which things, or matters, matter. Once there are organisms, with their (inter-)activity-dependent ways of continued existence, the idea of a unique position from which to interact with the world, and the gradual development of forms of organization which benefit from that position, find a foothold (Thompson, 2007; Kirchhoff and Froese, 2017).

The crucial move made by an embodied identity theory, so we propose, lies in the idea that telling the story of this gradual emergence of an organismic perspective is the telling of the story of how experience, subjectivity and phenomenality emerged, and doing so in a gapless way. That is, during this evolution, it became something for the organism having such a perspective to have that perspective. But that “becoming” should not be cashed out in terms other than identity. What happened was not that some special ingredient was added to the mix, but rather that specific forms of organization came into existence. Being an organism having that form of organization, that is, actually occupying a particular perspective, living or enacting a life, is, for such an organism, to have experience.

Following the logic of identity set out above, there is no further explanation to be given of why actually occupying such a perspective, being the organism that it is, coincides with being an experiencer.

Of course, there’s the recurring temptation to raise exactly this question for a further explanation. After all, being an organism and occupying a certain organismic perspective amount to objective facts and if they also hold the key to experience, an explanation is due of how objective facts can turn into subjective facts – or of how facts which can be adequately characterized by objective descriptions, can be identical to facts which fail to be fully captured in such objective descriptions.

This paradigmatic reasoning doesn’t respect the logic of identity, however. For if experience is actually identical to occupying an organism-bound perspective, it is not the case that objective facts turn into subjective facts. Rather, some objective events are identical to some subjective events.

Yet, crucially, one can relate in different ways to the facts of experience. One can enact, or live experiences or one can relate to them “from the outside”: by observing them or reflecting upon them. Simply said, and borrowing some vocabulary from Merleau-Ponty (1945) , there’s “lived” experience and there’s “reflected upon” experience. For example, there’s actually engaging in certain perceptual interactions and there is reflecting on how one engages in certain perceptual interactions. The same events will be lived perceptual experiences in the one case, and related to as objective descriptions of a perceptual interaction in the other case.

The fact that taking an objective/descriptive stance toward something is itself an enacted experience, does not make it any more possible to occupy both the lived and the observational, reflective, or descriptive perspective on the same experience simultaneously. For it remains the case that a stance-taking experience is a different experience than the perceptual experience the stance is taken toward.

This impossibility is crucial, and provides a possible explanation for the tenacity of hard problems about phenomenal consciousness – a reason why it seems to us that objective accounts of experience always contain gaps. Such seeming omissions might be explicated, not as the leaving out of something that more work from the objective perspective could provide, but rather as the very impossibility of taking up both the subjective, experiential and the objective, reflective, and descriptive perspective at once. Attempting to take up these two perspectives simultaneously is like attempting to see a Necker cube in both its spatial orientations at the very same moment. It simply cannot be done. However, our identity approach allows us to recognize this impossibility problem for what it is. It is not the kind of problem on which we can expect a future science to deliver what is currently missing. We have not identified a gap in current scientific theorizing that, given enough patience, we can expect to be filled at some future point in time. Rather, the identity approach outlined allows us to recognize the problem as an impossible one to solve (Hutto and Myin, 2013, chapter 8; see also Zahnoun, 2018, chapter 5).

Again there might be a temptation to delve deeper. For identities other than those between the mental and the physical are different. For example, the perspective a Roman soldier had on water is different from the perspective a physical chemist has on H2O. This case is analogous to the Morning Star/Evening Star situation. Yet despite the differences, both perspectives are objective, reflective or descriptive. But in the case of the mind-body problem, one perspective is objective, and the other one not. Why this disanalogy? Our answer is that the root of the difference lies not in the existence of a queer new kind of objective fact, i.e., subjective facts, but in the fact that, in the case of organisms, but not in the case of water or planets, the subjective or experiential perspective exists – and that fact can be explained by invoking the biological history.

At this point, one could concur with our construal of the relation between the experiential and subjective on the one side and the objective, reflective and descriptive on the other and with the implication that the expectation of a scientific solution to the ‘Hard Problem of Consciousness’ is based on a mistaken view of the problem. Yet, so one could argue, this only raises a still deeper question, or a still harder problem, namely, the question why reality is such that experience arises at all – even if we have a satisfying diagnosis and therapy for our concerns. That is, perhaps one can accept that no further explanation is required for why some complex forms of organism-environment organization and interaction are identical to experience; and perhaps one can become convinced that the quest for a straightforward solution to hard problems of consciousness is not so much difficult as impossible. But, so one can insist, this still leaves the question of why reality is such that these forms of organism-environment interaction exist, and why reality is such that certain forms of organism-environment interaction are identical to experience with experiential qualities.

This is by all measures a reasonable concern. But notice that by understanding the mind/body problem in these terms, one is rephrasing it as an existential, rather than a scientific question. The questions asked now are questions about our place in the universe, as we find it, and as it remains even after everything is scientifically explained. Even more conspicuously than is the case for the original hard problem of phenomenality, this kind of question is one that should not, and cannot, be solved in the way standard scientific questions are solved – irrespective of whether or not it can be solved in some other than scientific sense.

Multiple Realization and Identity: What’s the Worry?

Our attempt to reinvigorate, or rather, reincarnate an identity proposal regarding the mind-body relation might be considered to be a pointless exercise. Weren’t identity proposals shipwrecked, once and for all, as soon as it was pointed out that, because specific types of mental states or processes can be realized by different types of physical structures, they can therefore not be identical with these physical structures? That is, doesn’t the fact that mental types (such as pain) can be multiply realized as tokens of different physical types (in a mammal’s brain, an octopus’s brain or an alien’s silicon brain), show that mental types are not identical to physical types? To this day, the argument from multiple realization (henceforth: MR) remains widely accepted, both as an argument against psychophysical identity, and against reductionist approaches to the psychological (see, for instance, Fodor, 1974 , and for a critique, Bickle, 2003 ). However, the idea that the alleged multiple realizability of the mental can rule out possible psychophysical identities is not as solid as it prima facie might seem to be.

The thesis of the multiple realizability of the mental is defined as the claim that the same type of mental entity can be realized by different types of physical entities. We’re deliberately using the wide notion of ‘entity’ here so as to comprise the different kinds of things of which multiple realizability is being predicated in the literature. Multiple realizability is said to apply to states, processes, events or properties, depending on whose account one is considering. Moreover, accepting this thesis is supposed to entail a rejection of a possible mind-body identity theory.
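The functionalist picture that the MR argument trades on is often glossed with a software analogy: one functional type, many physical realizers. Here is a hedged Python sketch of that analogy only; the class names are invented, and nothing in it is meant as a model of experience or as the view defended in this paper.

```python
from abc import ABC, abstractmethod

# The functionalist gloss on multiple realization: one abstract "type"
# (a causal role), many concrete "realizers". All names are illustrative.

class PainRole(ABC):
    """The functional role: triggered by damage, produces avoidance."""
    @abstractmethod
    def respond_to_damage(self) -> str: ...

class MammalBrainState(PainRole):
    def respond_to_damage(self) -> str:
        return "withdraw limb"      # one physical realizer

class OctopusBrainState(PainRole):
    def respond_to_damage(self) -> str:
        return "jet away"           # a physically very different realizer

for realizer in (MammalBrainState(), OctopusBrainState()):
    print(type(realizer).__name__, "->", realizer.respond_to_damage())
```

Whether this picture of shared types and realizers carries the metaphysical weight the MR argument needs is exactly what the following sections dispute.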

First we should get clear on what the thesis of MR is supposed to be exactly. However, as Lawrence Shapiro points out, “despite philosophers' ready acceptance of MRT, it is not a precise thesis” (Shapiro, 2000: 636). We think the imprecise nature of the MRT is the result of the imprecise nature of the elements that make up the thesis. On the one hand, multiple realizability is predicated of one and the same type. But what is a type, exactly? And what does it mean to attribute sameness to types? On the other hand, it is also unclear what the realization relation is supposed to be, exactly. Apparently, the relation has to be such that it rules out identity. But how, exactly, does it manage to do this? Also with regard to this question, we find no answer. Kim tells us in a footnote:

The term ‘realize’ used in [the multiple realizability principle] has not been explained. As we make progress... its meaning should become clearer; in the meantime, you will not go far astray if you read ‘P realizes M’ as ‘P is a neural substrate, or correlate, of M’ (Kim, 1996: 102, fn. 4).

But how can neural entities be said to be a substrate of, or correlate with, apparently abstract entities like types? In addition, it is also unclear when we are allowed to speak of a type being multiply realizable. Lemons, oranges and grapefruits are all realizations of the type ‘citrus fruit.’ Does this mean that the type ‘citrus fruit’ is multiply realizable? But then, aren't all types multiply realized in their token instantiations? But this would reduce the MRT to the utterly trivial ‘thesis’ that different things can belong in the same category. Considering these questions, it becomes clear that much of the MRT's obscurity is the result of the fact that it is unclear to what extent the MRT requires a metaphysical commitment to an ontology of types, tokens, and a relation of realization between both. According to Thomas Polger, the MRT should be given an empirical reading which avoids the nagging metaphysical issues above. Polger claims that the MRT “is most plausibly thought of as the claim that psychological state kinds are shared in common across at least some physical creature kinds, for example, across species” (Polger, 2013: 870). Furthermore, this is said to come closest to what Hilary Putnam had in mind when he first presented MR as an argument against mind-brain identity. Recall this oft-quoted passage:

Consider what the brain-state theorist has to do to make good his claims. He has to specify a physical-chemical state such that any organism (not just a mammal) is in pain if and only if (a) it possesses a brain of a suitable physical-chemical structure; and (b) its brain is in that physical-chemical state. This means that the physical-chemical state in question must be a possible state of a mammalian brain, a reptilian brain, a mollusc’s brain (octopuses are mollusca, and certainly feel pain), etc ( Putnam, 1975 : 436).

Putnam proposes it is extremely unlikely that the identity theorist will be able to “make good his claims” because of the fact that creatures with different brains can nevertheless be in the same mental state (pain). His claim is clearly supposed to be empirical in nature:

I shall not apologize for advancing an empirical hypothesis. Indeed, my strategy will be to argue that pain is not a brain state, not on a priori grounds, but on the grounds that another hypothesis is more plausible. ( Putnam, 1975 : 433).

With these maneuvers, Putnam has reframed the identity theory as hinging on a claim about the relation between tokens of mental phenomena and types. Apart from the already indicated unclarities about core concepts crucially involved in the steps by which this reframing takes place, the whole enterprise rests, so we will now argue, on a deeper confusion about what the identity theory is about. In a nutshell, as we construe it, the core claim of an identity theory is one about relating, on the one side, the experiential, as encountered from an experiential perspective, and, on the other side, what's encountered from an onlooking, objective, reflective or descriptive perspective. The crucial claim of an identity theory consists in spelling out that, despite initial appearances, the experience – the experience itself, not its description – is not different from what can also be referred to in an objective way, via observations, and in particular via descriptions. In other words, the identity theory wants to make a connection between, on the one hand, the realm of experience, characterized by subjectivity, and, on the other hand, the realm of the objective, characterized by observation, reflection and description. Putnam's reframing of the identity theory, in contrast, construes the theory as aiming to make connections within the realm of the objective, reflective and descriptive, in particular between descriptions of physical events in particular or general terms and descriptions of mental events in particular or general terms. But this is a completely different project. The difference between objective and subjective ways or modes of encountering is a different difference than the difference between descriptions of the physical and descriptions of the mental, irrespective of whether these descriptions are particular or general (token or type). The thesis that two pains can be characterized with the same mental description, yet not with the same physical description, is a thesis about how different descriptions of the experiential relate, not about how the experiential relates to the descriptive. As a consequence, Putnam's considerations don't even touch on what the identity theory, as we have construed it, is about.

In other words, the MR challenge is simply misplaced for the identity theory (as we take it) as a philosophical view on experience. It is, literally, beside the question.

Our rejection of the relevance of the argument from MR for the identity theory pivots on the difference between experience and its description. Embodied views of experience as enacted organism-environment interaction allow us to bring the point home forcefully. For such embodied views allow us to see experience purely as organism-environment interaction (Hutto and Myin, 2013, 2017; Raleigh, 2015). In particular, such enacted experience needn't have any descriptive representational content. It is specific to certain circumstances, and to a certain organism, and it can perhaps be more or less appropriate to certain circumstances. Yet despite such specificity, the experience does not specify, or carry, a semantically evaluable content which is about those circumstances to which it is a specific reaction – or with which it is an embedded interaction. Not carrying descriptive content, it doesn't form either a particular or a general description: the experience of pain doesn't self-describe as a token brain state, a token pain, a type pain, or a type brain state. On such an understanding of experience, the tension between the experiential and the descriptive is a natural fact.

Of course, one can make explicit in one’s description of an experience that one is focusing on a particular experience, or that one is talking about the similarities between experiences. Here, at the level of descriptions, the difference between tokens and types makes sense. And one can then ask questions about the relations between descriptions, for example whether a second, more general (type) description applies, or does not apply, to two tokens to which a first, more general description applies. But these are questions about the relations between descriptions, not about relations between a phenomenon and descriptions. The core identity claim is about the latter: about the phenomenon of experience and the realm of objective descriptions. What is asserted is that, whenever we identify something subjectively as a mental entity (a toothache, say), we can in principle also descriptively identify it as some physical entity, because it is just one thing, related to in different ways (subjectively experienced vs. intersubjectively described).

Of course, it matters which descriptions are used on the objective/descriptive side when making the connection between the experiential and the objective. What is on the objective side should be characterized in terms of organismic doings, objectively described. That is, if our embodied identity theory is right, it should be a description which picks out the naturalistically intelligible conditions in which consciousness actually occurs. Naturalistically intelligible here means: explainable, as we have indicated. That disqualifies any form of functionalism that holds that there is something like a functional(ist) level on which the functional kinds reside. What we object to is any functionalism that makes the a priori claim that the mental should be characterized in terms of multiply realizable functional types.

The question is: how can one know, a priori, that such a functional level of multiply realizable types exists and is optimal for the description of mentality? Recall Polger, who rightly states that, on an empirical, naturalistically acceptable understanding of functional types, “psychological state kinds are shared in common by at least some physical creature kinds, for example, across species” (Polger, 2013: 870). This formulation doesn’t speak of an unexplicated realization relation, yet it does contain the idea that one and the same mental type (or kind) can be shared by creatures of a different physical kind. Leaving aside the important questions of what is to count as sufficiently similar or different and what not, note that Polger’s reformulation still speaks of types or kinds, which are now said to be shared. Strictly speaking, however, the idea that one and the same type can actually be shared by different creatures is, when taken at face value, again the expression of a specific metaphysical assumption, namely the assumption that the occurrence of a certain mental event needs to be ontologically understood in relation to types. On this account, saying that two different creatures can both have pain needs to be understood in terms of these two creatures sharing a type. But again, from an empirical, naturalistic point of view, how can an abstract entity like a type literally be “shared” or “distributed” amongst different creatures? To reformulate the idea that the mental should be described in terms of multiply realizable functional types as a fully fledged empirical hypothesis, we also need to cash out the notion of ‘type’ in empirically respectable terms. It would be a reification of ‘types’ to think of them as individual things with their own, perhaps non-spatiotemporal, existence, things that can moreover be distributed amongst other things, or that can manifest themselves in physical incarnations. Rather, from an empirical, naturalistic point of view, types are best understood as the names of the accepted categories in accordance with which we, in a certain community, structure the world, in the light of this or that purpose. In other words, saying that the mental type ‘pain’ is shared by a human being, a cat and an octopus is simply a way of saying that these animals can sometimes have a sensation which we assume to be relevantly similar (i.e., according to an accepted classificatory criterion), so as to allow the identification of these sensations as pains. Simply put, on the empirical reading, claims cast in terms of shared functional types state that creatures which we classify as relevantly different in physical respects (according to some accepted criterion) can be in a mental state, or have a kind of experience, which we classify as relevantly similar (again, according to some accepted criterion). To reformulate Putnam’s example: he apparently held an octopus brain to be relevantly different from human brains, yet he claimed that octopuses and humans can nevertheless have experiences which are relevantly similar enough to warrant the label ‘pain.’

The crucial point to be made here is that whether the relevant similarities obtain is not something which can be decided in any a priori, decontextualized way. Importantly, which similarities are relevant depends on the context. In some contexts, differences which might matter in other contexts may be disregarded. For example, one might talk about a general class of analgesic substances, yet in some contexts (such as avoiding allergic reactions) very specific chemical details might matter. These are well-known problems having to do with the grain of functional analysis: there is no single level of properly ‘functional’ causality below which everything else is ‘implementation detail,’ nor any dichotomous division of natural properties into the ‘functional’ versus the ‘structural.’ Most importantly, whether or not functional descriptions, at some level, are multiply realizable, in some context, seems to be something that can only be empirically established, by investigating the cases and finding out how similarly or dissimilarly these different ‘realizations’ behave. Of course, all of this already presupposes that we agree on criteria by which to judge what is, and what isn’t, relevantly similar or dissimilar.

We are not merely reiterating the points made by Putnam regarding functionalism as “advancing an empirical thesis.” Rather, although we agree that specific functionalist claims to the effect that this or that functional kind is multiply realizable are empirical, we are emphasizing that assessing the plausibility of such hypotheses should proceed in an empirical way: by considering actual cases, instead of by a priori argument irrespective of such cases [7]. This casts doubt on one immensely influential line of argument in favor of functionalism, and against the identity theory. We have in mind arguments that flow from the fusion of functionalism with widely accepted ideas about information processing. In a nutshell, it is widely accepted that information, and by implication information processing, is independent of the material medium in which the informational processes take place. This seems obviously so when the information at issue is semantic. The same semantics can be carried by something printed on paper, carved in stone or transferred by acoustic waves. But exactly the same seems to hold for other kinds of information, such as information based on covariation, or information understood as processes in or by computing machines. Irrespective of its physical makeup, anything that has the right causal connections with something else can be said to carry information about that second thing in a covariational sense. And it is a well-known engineering fact that computers can be made, and be made to carry out exactly the same computations, by a wide variety of material means. Now, if one also holds that cognition is some form of information processing, and if it hardly requires argument that information processing is independent of physical substrates, then cognition itself becomes, evidently and without requiring further investigation or consideration of cases, matter- or medium-independent, or multiply realizable (for such reasoning, see for example Piccinini, 2015).
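
To make the medium-independence intuition concrete, here is a minimal sketch in Python (our illustration, not an example drawn from Piccinini, 2015): one and the same computation realized by two structurally different mechanisms. The function names are invented purely for the purpose of the example.

```python
# Minimal sketch (illustrative only): one computation -- the parity of a bit
# string -- realized by two structurally different mechanisms. On the
# information-processing picture, what matters is the shared input-output
# mapping, not the mechanism (or medium) that realizes it.

def parity_by_arithmetic(bits: str) -> int:
    """Compute parity by summing the bits and reducing modulo 2."""
    return sum(int(b) for b in bits) % 2

def parity_by_state_machine(bits: str) -> int:
    """Compute parity by walking a two-state automaton that flips on every '1'."""
    state = 0
    for b in bits:
        if b == "1":
            state = 1 - state
    return state

if __name__ == "__main__":
    for word in ["", "0", "1", "1011", "111000111"]:
        assert parity_by_arithmetic(word) == parity_by_state_machine(word)
    print("Two different mechanisms, one and the same input-output mapping.")
```

Both functions, of course, run on the same physical machine; the sketch only illustrates the formal point the argument trades on, namely that the identity of a computation is fixed by its input-output mapping and is indifferent to the mechanism that realizes it.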

However, as many have argued, there are good reasons to take cognition to be embodied interaction rather than information processing (see, for instance, Hutto and Myin, 2013, 2017). If cognition is embodied interaction rather than information processing, then, though it remains a prima facie possibility that cognition is multiply realizable in some non-trivial and scientifically interesting sense, this doesn’t follow with the apparent immediacy it has within an information-processing framework.

Suppose, for example, that non-representational, non-information-processing dynamical systems accounts of cognition are correct, according to which cognition is organism-environment adaptation or coordination at multiple temporal and spatial scales at once. During such multi-scale interactions, structures dynamically emerge and stabilize, but on such an account these are physical changes within the coupled organism-environment system rather than the acquisition of representations or the processing of information. Due to the occurrence of the multi-scale changes, organisms become able to deal with the current environment in a way which is sensitive, adapted or attuned to what is currently, strictly speaking, absent or abstract. If embodied and embedded cognitive systems are dynamical systems in the way sketched here, this has important implications for multiple realizability. Cognitive phenomena are then phenomena which occur under specific conditions of massive complexity. As they might require multiple levels of interrelated and mutually sustaining structure and coordination, they might only be possible in certain kinds of systems. Some philosophers sympathetic to a dynamical/ecological perspective (Di Paolo and Thompson, 2014) have argued that the kind of structural dynamics required is the self-sustaining, self-creating dynamics of living systems. Further, the interdependence characterizing such systems makes it possible to recognize the role of bodily processes and structures as fundamental to cognition. If the cognitive activities of an organism as a whole depend on the ways in which its parts are coordinated on multiple scales, what one does with one’s hands while speaking becomes an integral part of the process of speaking, a thesis that is congruent with recent theorizing on gesture (see Goldin-Meadow and Alibali, 2013). And what goes for the body goes for the environment in a broad sense, one that includes not only one’s physical but also one’s sociocultural context (see Spivey and Spevack, 2017, for a maximally inclusive account of cognition).
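
As a deliberately minimal illustration of what talk of coupled organism-environment dynamics can amount to, the following toy sketch (ours; single-scale, with arbitrary parameter values, and not a model taken from the enactivist literature cited above) numerically integrates an “organism” variable and an “environment” variable that continuously drive one another.

```python
# Toy sketch (illustrative only, single-scale): an "organism" variable and an
# "environment" variable that continuously modulate one another. The stable
# pattern that emerges belongs to the coupled system as a whole, not to a
# representation stored inside the "organism".

import math

def simulate(steps: int = 20000, dt: float = 0.001,
             k_org: float = 2.0, k_env: float = 1.5) -> tuple[float, float]:
    org, env = 0.9, -0.4                      # arbitrary initial conditions
    for t in range(steps):
        drive = 0.3 * math.sin(0.2 * t * dt)  # slow external perturbation of the environment
        d_org = -k_org * (org - env)          # the organism relaxes toward the environment
        d_env = -k_env * (env - org) + drive  # the environment relaxes toward the organism, plus the perturbation
        org += dt * d_org
        env += dt * d_env
    return org, env

if __name__ == "__main__":
    org, env = simulate()
    print(f"organism: {org:.3f}  environment: {env:.3f}  mismatch: {abs(org - env):.3f}")
```

The point of the sketch is only that the organism variable nowhere contains a description of the external perturbation; its attunement to that perturbation exists only in and through the ongoing coupling with the environment variable.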

We think that the prospects for the multiple realizability of cognition would change vastly if such a dynamical model of cognition, rather than a computational or information-processing model, held true. For while it would remain possible to model such a system mathematically [8], this would not mean that what is modeled would actually be multiply realizable – as opposed to merely ‘simulatable’ – in materially different substrates. The possibility that only one kind of thing can actually have the dynamical structure belonging to the whole complex system cannot easily be dismissed. Of course, one could still simulate that structure, but that would be a simulation, not an actual “instantiation” – not something that itself had that structure. We leave it to further work to pursue the issue of the relation between embodied cognition and the multiple realizability of such structure. But what has been said already sufficiently underscores the general point we want to make here: multiple realizability should not only be contextualized and taken as an empirical thesis; one should also not too rapidly conclude, on the basis of very general considerations, that it actually applies. Attention to the actual specifics of cognition, highlighted by embodied approaches to cognition, can help to counteract that tendency. We take this to be a welcome side-benefit of (re-)incarnating the identity theory.

Identity and Reduction

If our embodied identity theory is not defeated by multiple realization arguments, it should be clear that it is also not vulnerable to another famous line of objection against it, due to Searle, namely that it disregards the mind. Such a complaint might apply to versions of the identity theory in which identity is lumped together with reductionism, or at least ontological reductionism. These reductionist interpretations are typically expressed through what Donald Davidson calls the “nothing-but” reflex (Davidson, 1980: 214). The claim that the mental is identical with the physical is almost always understood as synonymous with the ontologically reductive materialist claim that the mental is ‘nothing but’ the physical, or, more specifically, that humans are ‘nothing but’ physico-chemical mechanisms. It follows, then, that if one takes the identity theory to require a commitment to ontologically reductionist materialism, any argument against this kind of reductionism is by default an argument against the identity theory. Indeed, we do find reductionist tendencies within the classic identity proposals. Consider, for instance, these lines from Smart.

It seems to me that science is increasingly giving us a viewpoint whereby organisms are able to be seen as physicochemical mechanisms… That everything should be explicable in terms of physics (together of course with descriptions of the ways in which the parts are put together-roughly, as biology is to physics as radio-engineering is to electromagnetism) except the occurrence of sensations seems to me to be frankly unbelievable ( Smart, 1959 : 142).

And in later work, we read:

I shall be concerned to put man in his place by defending the view that he is nothing more than a complicated physical mechanism.… I wish to argue for the view that conscious experiences are simply brain processes ( Smart, 1963 : 15 & 88, m.e.).

The reductionist nature of these assertions is undeniable. And also in Place’s classic paper we find support for a reductionist interpretation of identity, when he writes that to all identity statements (statements using the ‘is’ of composition) we can add the qualification ‘and nothing else’, so that we could say that consciousness is a brain process, and nothing else (see Place, 1956: 45). To the extent that these accounts should be read as ontologically reductionist, they are eliminativist with regard to the phenomenal. Indeed, some have identified this alleged eliminativism as the identity theory’s greatest weakness (next to its inability to deal with the multiple realizability of the mental). In his 1992 The Rediscovery of the Mind, Searle puts the objection as succinctly as possible when he claims that the identity theory “leaves out the mind” (Searle, 1992: 53). But if an identity theory can really be said to be, at bottom, an eliminativist materialism, we should start to wonder whether this theory can still properly be labeled an identity theory. The problem, after all, is that a relation of strict identity can never be a relation of ontological reducibility, for the simple reason that identity is symmetrical, whereas the ontological reductive relation is not. If A is strictly identical with B, then, of course, B is also strictly identical with A. But saying that B reduces to A obviously does not entail that A reduces to B. So if conscious experiences are really strictly identical with brain processes, as the classic identity theorist claims, we might just as well hold that these brain processes are ‘nothing but’ conscious experiences, or that certain physico-chemical mechanisms are simply humans, and nothing else. For as Davidson aptly points out: “[I]f some mental events are physical events, this makes them no more physical than mental. Identity is a symmetrical relation” (Davidson, 1987: 453) [9].
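
Put schematically (our notation, not Davidson’s), writing ‘Red(x, y)’ for ‘x ontologically reduces to y’:

```latex
% Identity licenses reversal; an ontological reduction relation does not.
a = b \;\Longrightarrow\; b = a,
\qquad \text{whereas} \qquad
\mathrm{Red}(b, a) \;\not\Longrightarrow\; \mathrm{Red}(a, b).
```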

In any case, whether or not the eliminativist reading of some of the classic versions of the identity theories of Smart, Place and Armstrong is apt, Searle’s complaint that the theory leaves out the mind certainly does not apply to the version of the identity theory we have defended here. For our version, rather than denying the specificity of the enacted perspective of experience, recognizes and affirms it. Yet it warns not to take this perspective for something it is not. That is, the perspective should be taken as a distinctive way of encountering experience, and not as a way of encountering something distinct from the rest of nature.

Conclusion: The King is Dead, Long Live the King

The identity theory, despite having been officially declared dead, still has a future. Its vital core, the logic of strict identity, makes it possible to deal with vexed questions about the relation between experience and objective facts. Yet while the identity theory must retain this logic at its heart, it needn’t necessarily remain a mind/brain identity theory. Allying with embodied ways of thinking about the mind, it can become the embodied identity theory. Such an embodied turn allows for a naturalistic, evolutionary account of perspectivalness, and it makes it possible to forcefully motivate the rejection of the idea that multiple realizability undermines identity approaches. Instead of disregarding experience, the embodied identity theory gives it a central place. The handbooks’ chapter 3 is up for considerable revision.

Author Contributions

Both authors were involved in the conception, elaboration and refinement of the ideas and arguments in this paper. The idea of an embodied identity theory, and of its being concerned with the relation between experience and descriptions of experience, is mainly due to EM. FZ helped to refine and streamline these ideas, and added material in particular on multiple realization, as well as on the historical and dialectical context of the identity theory. Further, he devised the section on reductionism.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The research was supported by the Research Foundation Flanders [FWO, projects Offline Cognition (G048714N) and Getting Real about Words and Numbers (GOC7315N)], awarded to EM. In addition, thanks are due to the BOF Research Fund of the University of Antwerp [project Perceiving affordances in natural, social and moral environments (DOCPRO3)]. We are grateful to the editors and reviewers of our paper.

Footnotes

1. The point has also been made on several occasions by Papineau (1998, 2002). See, for instance, Papineau (1998: 373, 2002: 12).
2. Note that this is a misnomer: the intricacies of quantum mechanics are not nomological danglers in this sense.
3. This leaves open what exactly an organism is, and which organisms have experiences. We think it is not the task of an identity theory to decide these issues. It is in biology, in a broad sense which includes the philosophy of biology, that it is discussed what counts as an organism. Similarly, the specification of what “experience” is, is an interesting and crucial issue, but there is no reason to expect that an identity theory should offer the means to decide where exactly experience begins – if, in the light of the gradualness of evolution, such boundaries exist at all. There is a division of labor concerning these matters. A court case offers an analogy. Whether some act qualifies as a crime, and what the facts are in a certain case, are determined by lawmaking and by police or detective investigation. The task of a lawyer in court begins when these tasks have already been finished. The lawyer’s task will be to motivate, against the established legal and factual background, why a certain identification should, or should not, be made. Note that Polger (2004, chapter 2) defends an identity theory while holding that we currently have no criteria for counting something as an experience.
4. Multiple Realizability Thesis.
5. Shapiro seems to be mainly concerned with the question of what is, and what isn’t, supposed to count as an instance of multiple realization. We, however, shall be focusing more on the other issues, as these are directly relevant for our discussion of MRT’s relation to the identity theory.
6. See Polger (2013).
7. This point is also repeatedly emphasized in the work of Shapiro, of Polger, and of Polger and Shapiro. See, for instance, Shapiro (2000), Polger (2004, 2009), and Polger and Shapiro (2016).
8. All mathematical models are abstract, and thus in principle multiply realizable. But this doesn’t imply that they are actually multiply realized.
9. In this regard, it should be stressed that Davidson’s anomalous monism is not to be understood as a form of physicalism or materialism, be it reductive or non-reductive. Davidson is very explicit about this: “Anomalous Monism is not a form of physicalism or materialism” (Davidson, 1995: 75).

References

Barrett, L. (2011). Beyond the Brain. Princeton, NJ: Princeton University Press.

Bickle, J. (2003). Philosophy and Neuroscience: A Ruthlessly Reductive Account. Dordrecht: Kluwer. doi: 10.1007/978-94-010-0237-0


Clark, A. (2009). Spreading the joy? Why the machinery of consciousness is (probably) still in the head. Mind 118, 964–993. doi: 10.1093/mind/fzp110

Cooke, E., and Myin, E. (2011). Is trilled smell possible? How the structure of olfaction determines the phenomenology of smell. J. Conscious. Stud. 18, 59–95.

Davidson, D. (1980). Essays on Actions and Events. Oxford: Clarendon Press.

Davidson, D. (1987). Knowing one’s own mind. Proc. Address. Am. Philos. Assoc. 60, 441–458. doi: 10.2307/3131782

Davidson, D. (1995). Relations and transitions. An interview with Donald Davidson. Dialectica 49, 75–86. doi: 10.1111/j.1746-8361.1995.tb00115.x

Dennett, D. C. (1991). Consciousness Explained. New York, NY: Little, Brown and Company.

Di Paolo, E. A., and Thompson, E. (2014). “The enactive approach,” in The Routledge Handbook of Embodied Cognition , ed. L. Shapiro (London: Routledge), 68–78.

Feigl, H. (1958). “The ‘mental’ and the ‘physical’,” in Concepts, Theories and the Mind-Body Problem, Vol. 2, ed. G. Maxwell (Minneapolis, MN: University of Minnesota Press).

Fodor, J. (1974). Special sciences: or the disunity of science as a working hypothesis. Synthese 28, 97–115. doi: 10.1007/BF00485230

Goldin-Meadow, S., and Alibali, M. (2013). Gesture’s role in speaking, learning, and creating language. Annu. Rev. Psychol. 64, 257–283. doi: 10.1146/annurev-psych-113011-143802


Hutto, D. D., and Myin, E. (2013). Radicalizing Enactivism: Basic Minds without Content. Cambridge, MA: MIT Press.

Hutto, D. D., and Myin, E. (2017). Evolving Enactivism: Basic Minds Meet Content. Cambridge, MA: MIT Press.

Kim, J. (1996). Philosophy of Mind. Boulder, CO: Westview.

Kirchhoff, M. D., and Froese, T. (2017). Where there is life there is mind: in support of a strong life-mind continuity thesis. Entropy 19:169. doi: 10.3390/e19040169

Merleau-Ponty, M. (1945). Phénoménologie de la perception. Paris: Gallimard.

Myin, E. (2003). An account of colour without a subject. Behav. Brain Sci. 26, 42–43. doi: 10.1017/S0140525X03440016

Myin, E. (2016). Perception as something we do. J. Conscious. Stud. 23, 80–104.

Myin, E., and Loughlin, V. (2018). “Sensorimotor and enactive approaches to consciousness,” in The Routledge Handbook of Consciousness , ed. R. Gennaro (London: Routledge), 202–2014.

Papineau, D. (1998). Mind the gap. Philos. Perspect. 12, 373–389.

Papineau, D. (2002). Thinking About Consciousness. Oxford: Oxford University Press. doi: 10.1093/0199243824.001.0001

Piccinini, G. (2015). Physical Computation. Oxford: Oxford University Press. doi: 10.1093/acprof:oso/9780199658855.001.0001

Place, U. T. (1956). Is consciousness a brain process? Br. J. Psychol. 47, 44–50. doi: 10.1111/j.2044-8295.1956.tb00560.x

Polger, T. (2004). Natural Minds. Cambridge, MA: MIT Press.

Polger, T. (2009). Evaluating the evidence for multiple realization. Synthese 167, 457–472. doi: 10.1007/s11229-008-9386-7

Polger, T. (2013). Realization and multiple realization, chicken and egg. Eur. J. Philos. 23, 862–877. doi: 10.1111/ejop.12017

Polger, T., and Shapiro, L. (2016). The Multiple Realization Book. Oxford: Oxford University Press. doi: 10.1093/acprof:oso/9780198732891.001.0001

Putnam, H. (1975). Mind, Language, and Reality: Philosophical Papers. Cambridge: Cambridge University Press. doi: 10.1017/CBO9780511625251

Raleigh, T. (2015). Phenomenology without representation. Eur. J. Philos. 23, 1209–1237. doi: 10.1111/ejop.12047

Schier, E. (2009). Identifying phenomenal consciousness. Conscious. Cogn. 18, 216–222. doi: 10.1016/j.concog.2008.04.001

Searle, J. R. (1992). The Rediscovery of the Mind. Cambridge: MIT Press.

Shapiro, L. (2000). Multiple realizations. J. Philos. 97, 635–654. doi: 10.2307/2678460

Smart, J. J. C. (1959). Sensations and brain processes. Philos. Rev. 68, 141–156. doi: 10.2307/2182164

Smart, J. J. C. (1963). Philosophy and Scientific Realism. London: Routledge & Kegan Paul Ltd.

Spivey, M. J., and Spevack, S. C. (2017). An inclusive account of mind across spatiotemporal scales of cognition. J. Cult. Cogn. Sci. 1, 25–38. doi: 10.1007/s41809-017-0002-6

Thompson, E. (2007). Mind in Life: Biology, Phenomenology, and the Sciences of Mind. Cambridge, MA: Harvard University Press.

Zahnoun, F. (2018). Mind, Mechanism and Meaning: Reclaiming Social Normativity within Cognitive Science and Philosophy of Mind. Ph.D. dissertation. Antwerp: University of Antwerp.

Keywords : mind/body, identity theory, embodied cognition, multiple realization, experience

Citation: Myin E and Zahnoun F (2018) Reincarnating the Identity Theory. Front. Psychol. 9:2044. doi: 10.3389/fpsyg.2018.02044

Received: 15 June 2018; Accepted: 04 October 2018; Published: 24 October 2018.


Copyright © 2018 Myin and Zahnoun. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Erik Myin, [email protected]


Social Identity Theory

Guan Jian (Zhou Enlai School of Government, Nankai University, Tianjin, China)

First Online: 17 April 2024

Social Identity Theory posits that individuals identify with the group they belong to and develop ingroup favoritism and outgroup discrimination through social categorization, and that individuals’ identification with the group constitutes the basis of group behaviors.

Brief History

Social Identity Theory was originally proposed by the British social psychologist Henri Tajfel in the 1970s. Later, Tajfel’s student John Turner further developed the theory and put forward self-categorization theory in 1985. In 1970, Tajfel employed the minimal group paradigm to observe how groups function. In his experiment, the subjects were randomly divided into two groups and asked to perform resource allocation tasks. The results showed that subjects allocated more resources to members of their own group than to members of the other group, and gave those members more positive reviews, even though they did not know each other. In other words, there are ingroup favoritism and outgroup...


Citation: Jian, G. (2024). Social Identity Theory. In: The ECPH Encyclopedia of Psychology. Springer, Singapore. https://doi.org/10.1007/978-981-99-6000-2_832-1
