Could a completely mechanical supercomputer achieve self-awareness? If so, where exactly would the consciousness be?

Fewer man-hours available, and more work than expected. Isn't this why public works projects take so long? Well, I finally put together something that I believe is coherent and ready for critique.

Before even defining consciousness, I must discuss the approach to reasoning about it. Introspection may be necessary to identify some of the properties of consciousness (it certainly is necessary with today's science, which doesn't provide access to its contents). However, introspection cannot strictly tell us which properties of the mind are necessary to enable consciousness, which may be sufficient but not necessary, and which may co-occur with our own consciousness by accident of history. This extension problem applies to the analysis of the commonalities of human consciousness almost as much as it does to solipsistic thinking. Some questions become easier to answer, but we're still subject to potential criticisms of speciesism or vitalism. So the project of examining consciousness should be considered an inference to the best explanation, using the methods available to us: introspection, logic, and science. Because of the extension problem, we will gain the most information about consciousness by considering areas of failure. Hopefully, with measured analysis, alternative theories (dualism, idealism, etc.) acquire the onus of justification, since they posit new fundamental properties without explanation.

The first problem of consciousness is how to define it. Defining it as experience doesn't quite work. We experience our dreams, but only a select few (lucid dreamers) are conscious of them, and can exert some manner of control over these experiences, which are less constrained by the outside world. Sufferers of blindsight still experience vision, as borne out by their relatively accurate determinations of visual content, or even by their drawings of a scene before them. They call these decisions guesses. Both of these edge cases can be encapsulated by a definition of consciousness as “awareness of awareness/experience.” Taking this as a provisional definition, what properties of consciousness can be identified?

(1) Consciousness is integrated. It is a phenomenon of unity. Directed attention allows us to focus on aspects of this unified experience, to identify a color, orientation, tone, etc., but attention doesn't actually isolate them from the integrated experience. This is not to say that the processing upstream of consciousness must itself be integrated. Processing of sensory input, or even of “higher” cognition, does occur outside of this integrated experience, and subsequently influences conscious experience or thought. We gain non-willful awareness of any such processing that is sufficiently powerful or explanatory. Hearing one's own name may transform what had previously been a prosodic buzz without explicit meaning into a consciously perceived conversation. Understanding of even complex ideas, such as mathematics, can occur via intuition, being more what we would call automatic than contemplative. Understanding can even occur during sleep, though this is weaker evidence, given that such understanding could occur with consciousness but without memory. That possibility can largely be rejected for recent, waking intuition.

Some may object to this idea of integration because the distinction between experience of the inner and outer world seems so easy to draw. But this is better understood as an illusion of the manifest image, similar to the distinction that can be drawn between vision and hearing. None of these occurs in isolation. Combined processing of conflicting domains (e.g. the McGurk effect) can lead to an experience resembling the raw input of neither. Our perceptual experiences can likewise influence our thoughts (e.g. priming), and our thoughts can influence our perceptions (e.g. body dysmorphia). Every conscious sensation or thought is interrelated.

(2) Consciousness is exclusionary. Each sensation or thought is defined not only by its raw content, but also by all the perceptions and thoughts that it is not. Other sensations and other ideas determine its boundaries. It's in this way that sensations and thoughts acquire significance or meaning, and lead to our understanding. For concepts, this can perhaps be exemplified by the process of learning. A child might be told that a dog has four legs and a tail, and subsequently identify cats as dogs. This is a failure of specificity of meaning that can be corrected by further education, but it nevertheless shows that the concept is defined as much by what it is as by what it is not.

The idea applies to more than just learned concepts, and includes some aspects of experience that may be templates derived from evolution, such as our color space, pitch, time, etc. Red is red because it is not blue, green, middle C, a car, love, etc. Red possesses some inherent properties, but the mind's “choosing” to represent a percept as red is contingent. Sometimes it's contingent on the machinery of perception (color blindness), but it could also be contingent upon non-willful decisions of percept categorization, which in some people may be capable of change. The recent blue-and-black or white-and-gold dress is a good example of this. It's also a good example of how both perceptions cannot be had at once. The stimulus is ambiguous, and the experience can flip back and forth between them, but both cannot obtain together. The rabbit/duck illusion is a more commonly used demonstration of the same thing. Via context switching, both experiences can obtain, even alternate very quickly, but never simultaneously.

(3) Consciousness is a pattern of physical activity. In saying a pattern of activity, I don't intend to say that consciousness exists apart from the activity; rather, it is the activity. Some religious individuals contend that consciousness leaves the body at death rather than ending, but this presupposes different kinds of things without evidentiary support. Most people who require resuscitation have no in-between awareness of experience, and those who do report experiences subject to cultural influence. Maybe it could be said that out-of-body experiences (OOBEs) are not completely culturally determined, but in the few studies of OOBEs, such as they are, subjects are unable to report information to which they should have access in the altered state. The best explanation, then, appears to be a neurobiological one. Too long without oxygen, and the network that integrates experience into consciousness, or maintains it, starts to hallucinate and then die.

How do binary computers connect with such concepts, even in their most limited versions? I'm going to take the concepts in reverse order. Taken broadly, binary computers might fulfill (3). I haven't specified the kind of pattern, because I neither know it nor have a good guess about it, although I do have some inchoate ideas related to the default mode network or the functioning of the claustrum.

How about (2)? I vacillate on this one. Bits, by definition, are exclusionary. They are better defined in the processor than in memory, but maybe improvements in the latter will make such a distinction pointless. Regardless, even now each bit is almost always read only one way, and to hold non-biological computers to a standard that we biological computers don't meet (conditions at the synapse determine how the axon cascade is actually interpreted downstream) would be odd at best. Unfortunately for computer consciousness, a processor performs calculations on bits without any intrinsic representational quality. A pattern of bits may represent a memory address, an integer, or the components of a float. Combinations of these, or even the bits themselves depending on the algorithm, can at the level of software be interpreted as colors or positions, etc., excluding other possibilities. However, this doesn't fully capture the point about exclusion, because exclusion may depend upon (1). Each interpretation has no relationship with the interpretation of anything else. Interpretations do not define each other because they are not integrated. I'm not trying to say that no algorithm different from our own could elicit consciousness, but each function of the binary computer algorithm is fully isolated, never unified.
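To make that representational emptiness concrete, here's a minimal Python sketch (my own illustration, with an arbitrary bit pattern) that reads one and the same 32 bits as an integer, a float, and an address. Nothing in the bits privileges any reading; the interpretation lives entirely in whatever algorithm consumes them:

    import struct

    bits = 0x41200000  # one arbitrary 32-bit pattern

    # Read as an unsigned integer: 1092616192
    as_int = bits

    # Read the same bits as an IEEE 754 single-precision float: 10.0
    as_float = struct.unpack('<f', struct.pack('<I', bits))[0]

    # Read the same bits as a (hypothetical) memory address
    as_address = hex(bits)

    print(as_int, as_float, as_address)

Each reading excludes the others, but the readings never meet: no part of the machine relates the float to the address the way red is related to not-blue.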

Could stacked algorithms do any better? What I mean is one algorithm watching another, in a likeness of metacognition. In some sense this is what we have already, and unfortunately trying to find awareness of awareness in such an idea leads one into an infinite regress: one algorithm watching another, up a never-ending cascade of metacognitive algorithms. A never-ending recursive algorithm doesn't quite get what we want either; that would just be a crashed computer. Only integration, the very thing that defines conscious experience, would provide an end to such a chain.
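As a toy illustration (mine, and not a model of any real cognitive architecture), the regress can be written directly. Each "watcher" only spawns another watcher; with no integration to terminate the chain, the program ends in a crash rather than in awareness:

    import sys

    def watch(level=0):
        # Each metacognitive layer merely observes the layer below it;
        # nothing ever integrates, so the chain never bottoms out.
        return watch(level + 1)

    try:
        watch()
    except RecursionError:
        # The regress ends only because the interpreter gives up:
        # a crashed computer, not consciousness.
        print("recursion limit reached:", sys.getrecursionlimit())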

I could alternatively posit that each algorithm of my own brain has its own consciousness, and then extend such an idea of more limited consciousness to algorithms of any sort, but this is a premise with no evidence at all, rather than merely singular evidence. Maybe I'm privileging the relations of humans capable of reporting experience to one another, but the notion that an algorithm which passes the Turing test exhibits consciousness should at least be controversial. I argue that it fails some of the prime qualities of the only sort of consciousness we know.
