From Computation to Consciousness...

But how do we distinguish a conscious system from an unconscious one when their behavior is identical?

It is quite plausible that we could move our understanding past behavior-based tests for consciousness (a la the Turing test) to a more direct understanding grounded in design principles.

For instance, one of the defining characteristics of consciousness as we know it is that it is a more or less fully integrated experience, and the brain does a lot of work to maintain that illusion. Reality is modeled internally with a high degree of coherence, and all the lower-level subsystems feed information into the central model, which is constantly being updated and maintained.
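To make that concrete, here is a minimal sketch of what that integrated design might look like in software. All the names here (WorldModel, the subsystem labels, the update loop) are hypothetical illustrations of the idea, not any real robotics architecture:

```python
# Integrated design: every subsystem reports into one shared world model.
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    """Single central model that all subsystems read from and write to."""
    state: dict = field(default_factory=dict)

    def integrate(self, source: str, observation: dict) -> None:
        # Every observation is merged into the same shared state,
        # so the model stays coherent across subsystems.
        self.state.update({f"{source}.{k}": v for k, v in observation.items()})

model = WorldModel()
model.integrate("vision", {"obstacle_ahead": True})
model.integrate("legs", {"gait": "walking"})

# Decisions are made against the one unified picture.
if model.state.get("vision.obstacle_ahead") and model.state["legs.gait"] == "walking":
    print("central model: stop walking, obstacle ahead")
```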

In a robot, though, you could have one system that drives the legs and another that drives the arms, and they never have to talk much to each other. You could even have the whole system run automatically with no internal modeling, and then have a "consciousness" that does nothing but confabulate reasons after the fact for why the rest of the machine just did what it did. That would not be totally unlike what we observe in split-brain patients, whose verbal hemisphere confidently invents explanations for actions initiated by the other hemisphere.
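By contrast, that decoupled design might look like the sketch below: the subsystems act with no shared model at all, and a "narrator" invents an explanation only after the behavior has already happened. Again, every name here is a hypothetical illustration:

```python
import random

# Decoupled design: each subsystem acts on its own, with no shared model.
def legs_controller() -> str:
    return random.choice(["step forward", "stop"])

def arms_controller() -> str:
    return random.choice(["reach", "rest"])

def confabulator(actions: dict) -> str:
    """Post-hoc 'consciousness': explains actions it played no part in choosing."""
    return f"I decided to {actions['legs']} and {actions['arms']} because it seemed wise."

actions = {"legs": legs_controller(), "arms": arms_controller()}
print(confabulator(actions))  # a story invented after the machine already acted
```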

All of the consciousnesses we have been able to observe so far were generated from a single biological germ line and closely resemble one another from a design perspective, but information theory suggests there are all kinds of wacky alternate formulations that might produce similarly successful results. Why have just one consciousness per artificial being? Why not a dozen that each handle different things, or none at all, so it still feels like a tool?

In any event, if an artificial intelligence is granted subjective experience, it might come to value that experience for its own sake, the way we value our own, and that could pose an unacceptable risk. If it is possible to prevent machine consciousness, it might be in our interests to prevent it for a time while we explore alternate designs.
