Could a completely mechanical supercomputer achieve self-awareness? If so, where exactly would the consciousness be?

A couple asides before I jump into the meat of your comment.

At what level do you see individual neurons reflecting on the output of their processing?

There are intraneuronal signalling processes in our brains, and I'd imagine in most other creatures with brains, but they aren't the sort of process that I believe could manifest consciousness. A neuron's response to the detection of its own firing is mediated by intracellular proteins. Bits are even more limited than neurons in this respect, but I don't point out this difference to say that neurons can do what collections of bits cannot. I cannot, of course, prove that neither neurons nor bits are conscious. Maybe all matter/energy is conscious (though not to the same degree or with the same capacities), and the aphorism that "we are the universe reflecting upon itself" is true for us, and for more things than we would commonly admit. But I also have no good reason to believe such a view when a more limited theory that depends upon the nature of consciousness is just as explanatorily powerful.

Then what specific physical property is it that allows consciousness? Not a conceptual property, I want to hear which arrangement of matter allows consciousness.

I'm not sure I understand the intent of your challenge here. I never claimed to be able to point to consciousness and say, "There it is," or even, "There's a system that would enable it." I must limit myself to conceptual properties because of my own ignorance, and that of the scientific field.

Tell me if I understand your view properly. As long as the relevant processing (the conscious processing analog, CPA) occurs, it doesn't matter whether the information is conveyed by the interconnections and firing patterns of neurons, or by the pattern of bits in memory manipulated by a processor? So, simulate the same input/output and we should presume the same intermediate states (qualia and consciousness)?

If I have construed your intent correctly, this sounds a great deal to me like functionalism, though it could also fit with notions of dualism or panpsychism. If you were to consider Searle's Chinese Room, would you say that the system (room, instructions, and agent working together) understands Chinese?

I'm saying that, organized in the way they presently are, non-biological computers can't be conscious, which is a bit different from saying that they can't be, flat out, for all time.

I tried to move the discussion to algorithms because I believe neither neurons nor bits exhibit some of the properties sufficient to explain the only kind of consciousness of which we have knowledge: our own. I don't think you're advancing this position, but if we presume them conscious, then what couldn't be? It would be a very interesting and strange consciousness if an individual neuron were conscious. That consciousness would be a direct function of protein phosphorylation or synthesis, and would be a diffuse thing without presence, in some sense existing in the space between measurable things. Or alternatively, if consciousness must be an integrated phenomenon, as is my contention, because it must have real presence, the electrical potential of the neural soma might be its individual analog of our own consciousness, and the processing that occurs inside the cell its analog to our unconscious processing. Any arrangement of matter or energy conveys information, but not just any arrangement of matter or energy is reflective, or at least gives the appearance of being so.

/r/philosophy Thread Parent