Can someone please explain why the notion of Artificial Consciousness is not absurd?

Computational systems map inputs to outputs, and the computations themselves are mathematical in nature. Computers deal with information in the form of numbers: ultimately everything a computer does breaks down to 1s and 0s flowing through complex arrangements of logic gates. This is true of every computational system I'm aware of, including quantum computing, which still barely exists in any practical form (and even there, measuring a qubit ultimately hands you back classical bits).
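
To make the "logic gates all the way down" point concrete, here's a minimal Python sketch (my own illustration, nothing from the thread) of a half adder: even something as basic as adding two bits is just an XOR and an AND wired together.

```python
# Half adder: adds two single bits using only logic gates.
# sum bit = a XOR b, carry bit = a AND b.

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Return (sum, carry) for two input bits."""
    s = a ^ b      # XOR gate
    carry = a & b  # AND gate
    return s, carry

# Full adder: chains half adders to handle a carry-in bit.
def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, cin)
    return s2, c1 | c2  # OR gate combines the carries

# 1 + 1 = 10 in binary: sum bit 0, carry bit 1.
print(half_adder(1, 1))  # (0, 1)
```

Everything a CPU does, multiplication, branching, memory addressing, is built up from nothing fancier than this.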

I say brains and computers are "incompatible" because they excel at completely different things and have completely different weaknesses.

Brains (even the brains of very smart people) are not capable of the kind of rapid computation that computers are. To put it mildly, I'm quite talented at mathematics, but my ability to do raw computation is put to shame by fairly simple machines, like a TI calculator. Some autistic savants can perform certain kinds of calculation extremely quickly (such as the man who "feels" whether a number is prime), but these savants are almost universally incapable of the kind of abstract pattern recognition that characterizes great mathematicians. In other words, their seeming superpowers are rooted in a profound disability and aren't particularly indicative of how "normal" brains function.
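
For contrast, here's a short Python sketch (my own, purely illustrative) of how a machine "knows" a number is prime: not by feel, but by mechanically grinding through trial divisions, which it can do millions of times per second.

```python
import math

def is_prime(n: int) -> bool:
    """Deterministic trial division: check divisors up to sqrt(n)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    # Only odd divisors need checking once 2 is ruled out.
    for d in range(3, math.isqrt(n) + 1, 2):
        if n % d == 0:
            return False
    return True

# A calculator-class machine answers this instantly; the savant
# "feels" it but can't articulate any such steps.
print(is_prime(2_147_483_647))  # True: 2**31 - 1, a Mersenne prime
```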

Brains can easily do many things computers can't even hope to do, because these functions fall outside the bounds of computation. Abstract pattern recognition is one of them. Almost everyone reading this can intuitively grasp the pattern 1 1 2 3 5 8... without having it explained, but a computational system can make nothing of that pattern unless it has been programmed to recognize it (see the sketch below). In other words, the only reason machines can deal with such patterns in the first place is that people find the relationships so easily, and getting a machine to do it is far more complicated than having a person do it.
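
To illustrate that last point, here's a minimal Python sketch (my own illustration) of a machine "recognizing" 1 1 2 3 5 8. The crucial part is that the rule it tests for, each term being the sum of the two before it, was chosen and hard-coded by a human in advance.

```python
def follows_fibonacci_rule(seq: list[int]) -> bool:
    """Check whether each term equals the sum of the previous two.

    The rule itself is supplied by the programmer; the machine
    only verifies it, term by term.
    """
    return all(seq[i] == seq[i - 1] + seq[i - 2]
               for i in range(2, len(seq)))

def extend(seq: list[int], n: int) -> list[int]:
    """Continue the sequence n more terms, using the same hard-coded rule."""
    out = list(seq)
    for _ in range(n):
        out.append(out[-1] + out[-2])
    return out

print(follows_fibonacci_rule([1, 1, 2, 3, 5, 8]))  # True
print(extend([1, 1, 2, 3, 5, 8], 3))               # [..., 13, 21, 34]
```

Feed it a sequence governed by any rule the programmer didn't anticipate and the checker is blind to it, which is exactly the asymmetry I'm describing.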

Brains can also learn synergistically without being trained to learn. I know how to write screenplays, but no one taught me to do that; I received no formal instruction. I analyzed a large body of material, decided for myself what was important, broke it down through analysis, synthesized that information into new conclusions, applied them, and then repeated the process. Computational systems are incapable of this sort of learning outside the bounds of their programming. A smart robot designed to analyze the stock market cannot write screenplays, and that's the whole point. While we can design "artificial intelligence," these machines only move within the very specific bounds of their programming (see the sketch below), while the brain is "unbound," free to learn as it sees fit, and the nature of learning in brains and in machine learning systems seems to be quite different.
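
As a concrete (and, again, purely illustrative) example of "bounded by its programming": the least-squares predictor below can learn a trend from any price series you feed it, but its hypothesis space is fixed in the code. It can only ever express straight lines, no matter what data it sees, and it has no way to even represent a screenplay.

```python
def fit_line(prices: list[float]) -> tuple[float, float]:
    """Ordinary least squares for y = a*x + b over x = 0, 1, 2, ...

    The "learning" here is just solving for a and b; the model class
    (straight lines) is baked into the program and never changes.
    """
    n = len(prices)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(prices) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, prices))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

prices = [101.0, 102.5, 101.8, 103.9, 105.2]
a, b = fit_line(prices)
print(f"predicted next price: {a * len(prices) + b:.2f}")
```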

I know of no evidence supporting the notion that brains work on a computational input => output basis, and I see considerable (although anecdotal, because again, brains are fucking complicated) evidence to suggest that they don't.
