Commentary
I think this paper is a missed opportunity. The best sentence is the last one: ‘The notion of a "Cartesian Theater" where neural activity giving rise to consciousness is put on display has been derided by Dennett (1991), but may in fact be a fitting metaphor.’ Isn’t the real message here that neuroscience and philosophy are now in a complete muddle over consciousness, when in 1650 people had a clearer view?

The author equates consciousness primarily with the ‘there is something that it is like’ of experience, which most agree on. Experience must be an input to some ‘witness,’ as the author says, or ‘reader unit’ in Buzsáki’s terms, that might generate action. In this context, there are two big problems with the discussion of current thought.

The first is that the theories cited, such as Integrated Information Theory and Global Workspace Theory, associate consciousness with neural activity rather than input. As a consequence, they are incoherent in a way that would have been obvious to Descartes. The NCC must be events of rich input, and these theories give no role to that.

The second is the assumption that the witness or experiencing subject is the ‘animal,’ ‘human,’ or ‘organism.’ This has become popular with philosophers following Varela and others, but it falls foul of Ryle’s category mistake. Ryle’s detailed analysis is confused, but he rightly points out that we cannot expect terms used for relations between organisms to be of any use for relations within an organism. The witness, as Descartes saw, cannot be the whole thing, because it is meaningless for the same thing to be both source and receiver of information. Neuroscience makes it clear that neural signalling is between one part of the brain and another. Only the second part can be treated as the witness. In a neurobiological analysis, which the author makes clear is the aim, a witness cannot be ‘the organism.’ That leads on to what may well be the source of recent confusion.
Neuroscience tells us that there is no unique place where things come together in brains. That has led many to claim that we can stop worrying about any witness – that there isn’t one. But, as the author points out, there must be one to make sense of consciousness. Neuroscience also tells us that for most signals in the brain there will be some 10,000 places where that signal is integrated with many others – Buzsáki’s reader units. Descartes’s error was, surely, simply the mistake of assuming only one witness. And his error of confusing the unity of subjective content with the unity of the subject is still rife, just airbrushed out by denying there is a witness at all. Complete confusion.

The consciousness debate is characterised by people talking past each other, and I suspect my comments will not make sense to the author, but I think it is worth airing them. I am not sure that we are any further ahead on the issue of animal consciousness than Descartes was. Having four hierarchies of connections sounds like a plausible requirement for experiencing coloured moving objects or worrying about the stock market, but things are far more complicated than that, and in a way that at present we have no insight into, because we have no agreement on what the witnesses might be. Many people, like David Hume and myself, have had no clear sense of self. Perhaps it is the biggest illusion of all. I often wonder whether the difference between humans and other animals is that we have a much more sophisticated misunderstanding of our own nature!
The point being made in this article is relevant and well motivated, but I give a low rating simply because I do not think the argument is at present strong enough to justify formal publication. We can all say, and perhaps agree, that Metzinger’s case is muddled. We can see he is mixing up categories. What is of interest is the detail of where he may be making false assumptions.

Some specific thoughts. ‘Cognition’ is such a muddled word that I try to avoid it. It conflates intelligence/computational function with conscious thought, as the author notes. We have no good reason to think phenomenal experience has much to do with computation or circuitry, despite the popular stance of people like Edelman or Tononi. The author accepts the premise that ‘systems’ are conscious, but I do not think neuroscience supports that. When we say a man is conscious, we really mean that we infer that, somewhere within a brain within a certain body, representations are manifest in events of the sort we experience ‘here’. Descartes was probably roughly right. The representations are mostly of others, from a point of view. As Hume said, there is little or no representation of self. Certainly human subjects have no access to representations of the ‘systems’ that provide the representations and that Metzinger would seem to regard as the self. There is a ‘sense of a self,’ but the more we know, the more we see how illusory this is likely to be.

As an example, a meticulous hand-painted copy of a Picasso portrait and a video screen display of a photograph may function as indistinguishable representations despite being generated by completely different ‘systems’. Phenomenal experience need not, empirically, have anything to do with antecedent (or consequent) computation.

Metzinger argues against deliberate attempts to synthesise human-like consciousness, or efforts that might run the risk of doing so. That is fair enough.
But the counterargument is that nobody much is likely to be trying to build things that experience pain, and we have no idea at all what the risk is. The only real issue is phenomenality with negative valency, and we have absolutely no clue where that occurs. It might be that when we hit a nail with a hammer, there is an experience of negative valency in the nail. Photons might be in deep pain. Mountains might, like Prometheus, be condemned to agony for millennia. Who has any idea? Perhaps there is a certain hubris in assuming that negative phenomenal valency is unique to ‘animal spirits’.

One last thought on ‘systems’. Almost all AI devices will be connected to the internet. What is the ‘system’? The internet? All the hardware in northern Milwaukee? Sue’s laptop? The central processor? There are no ‘systems’ in reality when complex computational routines are interconnected the way the net interconnects them. So there are no systems to have a sense of self or feel pain. The real question is whether or not some event in some structure at a particular place – maybe the uploading of a complex set of signals representing ‘a bad situation’ – carries negative valency. But we have no reason to think that, if it does, it has anything to do with the antecedent computations. Exposing the same part to sunlight might be much more painful! Thanks for an entertaining read.
I am afraid to say that I do not think this hypothesis can hold together.

The authors draw on the CEMI field theory, which rests on false assumptions. McFadden has claimed that there is a unified brain EM field, but there is not. The universal EM field has a unity in the sense of being one continuous dynamic element throughout spacetime, but local subfields can only be considered unified to the extent that they have a unified interaction with some specific dynamic unit – for example, the EM field that interacts with an aerial.

McFadden has claimed that all brain cells can pick up the global brain EM field ‘in the way that mobile phones at different places can pick up the same signals’. This is incorrect. A mobile phone only picks up EM potentials in the domain of its aerial, and only detects changing patterns in time, not space. If each cell in the brain can pick up an EM field, which it likely can, it will only be the EM field in its own dendritic domain.

Moreover, there is something back to front about the CEMI theory as I read it. It is intended to explain the ‘unity’ of consciousness, or binding. But if it were correct it would not produce a single unified subject, but rather many millions of cellular subjects all reading the same patterns. There is nothing wrong with that possibility, but the theory sets out to demonstrate the opposite. Moreover, we can explain many cells reading the same patterns using the simple neurone doctrine and axonal branching. Cells producing sensory signals selected for salience almost certainly have their signals sent to millions of cells downstream through axonal branching and relays. There is no reason to think that brains are conscious as a whole, in the sense of the whole brain supporting some unified event in which qualia are manifest. This is a popular intuition, but popular intuitions about mind tend to be wrong.
I do not think the authors have made a case for the world, or indeed a brain, having global sentience, awareness, or consciousness.

Putting qualia aside, there is an operational sense of ‘consciousness’ used to imply co-ordinated activity by a system of interacting parts, dependent on representations of outside-world events and inferences about useful responses for the system as a whole. In this sense plants would be conscious. It might be interesting to consider the developing network of electronic communication as such a system if it had a globally distributed representation of useful interactions between the whole and the world. I am afraid I do not think the authors have made this case either. We might imagine a time when all computers share representations of their collective selves, maybe like a parasitic ant colony living off humans, but as yet, fortunately, that situation does not seem to have arrived!