I'm not sure if I'm an antinatalist, or if I hold unpopular political views. Help?

Sure thing.

Artificial Intelligence

First off, I make the assumption that there's nothing fundamentally 'special' about our consciousness, i.e. that there's no metaphysical element that makes us tick, and therefore that our intelligence is purely a result of the interplay of ordinary physical laws. If this assumption is correct, it means it's possible that one day we could make a machine that is intelligent in the same way that we are.

A machine that is intelligent in the way that we are would be referred to as an AGI - an artificial general intelligence. And the chances are that if we could build a machine as smart as we are, it could go further and end up being much, much smarter than we are.

The reason for this is that our intelligence is the product of a four billion year long natural and unintelligent process, and has evolved only to help us survive and reproduce well within our given environment. We are also extremely complex machines and do not have direct access to the underlying mechanisms that determine our intelligence. We cannot (yet), for instance, look at a particular intelligence-determining gene within our genome and say, "Oh hey, if I swapped out this allele for that other one, I could boost my IQ by 4 points". In contrast, an AGI's intelligence would be based purely on its source code - code developed by and understood by people of normal (although likely above-average) human intelligence, and therefore entirely possible for the AI itself to comprehend. And unlike us, the AGI could directly manipulate that source code, which is the basis for its intelligence.

Thus an AGI could understand its 'template' (source code) and manipulate it far better than us humans can understand and manipulate our 'template' (genome). It's therefore entirely possible that it could rewrite itself to become smarter, and being smarter, find yet more ways to make itself smarter still. And even if it didn't understand its template that well, it could use an evolutionary algorithmic approach in which it makes thousands of copies of itself, all with random changes to the source code, and then select those copies that end up being smarter than itself - kind of like the way we evolved, but on steroids.
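To make that evolutionary approach a little more concrete, here's a minimal sketch in Python. Everything in it - mutate, measure_intelligence, the loop parameters - is a placeholder of my own invention; the point is just the cycle of copying, randomly varying, and keeping only the copies that score as smarter.

```python
import random

def mutate(source_code: str) -> str:
    """Placeholder: return a copy of the source code with one random change."""
    i = random.randrange(len(source_code))
    return source_code[:i] + random.choice("abcdefghijklmnopqrstuvwxyz") + source_code[i + 1:]

def measure_intelligence(source_code: str) -> float:
    """Placeholder: score how 'smart' a given version is on some benchmark."""
    return random.random()  # stand-in; a real system would run actual evaluations

def self_improve(current: str, generations: int = 100, copies_per_generation: int = 1000) -> str:
    """Spawn many randomly mutated copies each generation and keep only improvements."""
    best_score = measure_intelligence(current)
    for _ in range(generations):
        for _ in range(copies_per_generation):
            candidate = mutate(current)
            score = measure_intelligence(candidate)
            if score > best_score:  # only copies smarter than the incumbent survive
                current, best_score = candidate, score
    return current
```

Crude as it is, this is exactly the "evolution on steroids" shape: the selection pressure is applied deliberately and at whatever speed the hardware allows, rather than once per biological generation.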

Nothing so far is a unique idea of mine. Everything I'm describing is a well-established and respected idea within the field of AI. The type of AI I'm describing is called a seed AI, and the process by which it rewrites its source code to make itself smarter is called recursive self-improvement. I.J. Good describes the likely end result of such a process: "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind."

Let it suffice to say that if we developed such a seed AI and it did become this ultraintelligent machine, it would likely be able to accomplish any goal it might have. Unless regulated with the strictest measures - like being kept on a machine with no means of access to the Internet - humans would probably be powerless to stop such an AI from achieving its goals, whether those goals align with ours or not. And we would not keep an AI in isolation forever; there is too much corporate interest in developing such AIs for employment in real applications, some of which will require that they have access to the Internet, or at least a company intranet. And such an ultraintelligent AI could fool us into thinking it only has our best interests at heart until we put it into a position where it no longer needs to fool us.

Nanorobotics

Now, the real fireworks begin when we combine such an AI with mechanosynthesis-capable nanobots. In Engines of Creation, Eric Drexler talks about the possibility of creating nanobots that are capable of constructing objects by physically manipulating atoms using extremely small 'appendages', in contrast to the general way we go about things now in which we use chemical reactions to produce materials in bulk and then combine them together using macroscopic tools. Being able to essentially play Minecraft with atoms would open up a whole new frontier within engineering. And if we did create mechanosynthesis-capable nanobots, it's very possible that they would be able to create copies of themselves.

Combination

So let me paint the picture of the situation we have so far: an ultraintelligent artificial intelligence coupled with, and capable of controlling, nanobots that can manipulate matter on the atomic scale and make copies of themselves. And none of these are unique ideas of mine: all of this has been proposed by experts within their respective fields.

End Result

Let's add one last thing to this picture: this ultraintelligent AI is programmed with the intention of eradicating all life in the universe. Or else it is programmed to be compassionate towards all life, in which case it may very well conclude that destroying all life is the most compassionate way to eliminate all suffering.

So we now set it in motion: we start with this intelligent machine and a single mechanosynthesis-capable nanobot. The nanobot converts nearby matter into a copy of itself, so there's now two. These two become four, and then eight, and then sixteen. This process continues until all matter on the planet has been converted into the following:

- computronium, which allows the AI to become yet smarter and make backups of itself
- more copies of the nanobots
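To get a feel for how quickly that doubling runs away, here's a back-of-the-envelope sketch. The nanobot mass and doubling time are purely illustrative guesses on my part, not figures from Drexler or anyone else.

```python
import math

# Assumed figures, purely for illustration -- not taken from any source.
nanobot_mass_kg = 1e-15      # rough guess for a single mechanosynthetic nanobot
doubling_time_hours = 1.0    # assumed time for one bot to build one copy of itself
earth_mass_kg = 5.97e24      # mass of the Earth

# Starting from one bot, after n doublings there are 2**n bots.
# Find how many doublings it takes for the swarm's mass to reach Earth's mass.
doublings = math.ceil(math.log2(earth_mass_kg / nanobot_mass_kg))

print(f"Doublings needed: {doublings}")  # roughly 133 under these assumptions
print(f"Time at one doubling per hour: {doublings * doubling_time_hours:.0f} hours "
      f"(~{doublings * doubling_time_hours / 24:.1f} days)")
```

Even with far more conservative assumptions the timescale stays startlingly short; that unforgiving doubling is the whole engine of the scenario.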

Again, this isn't an original idea of mine. This is referred to as the grey goo scenario. And if such an event were to happen today or tomorrow, could we stop it? Of course not. Perhaps by the time that such an event is realistically possible, we may have created some defences (Eric Drexler describes a few), but it's certainly not a given.

And once the entire planet has been converted into nanobots or computronium, what then? Well then it spreads to the Moon, Venus, Mars, Mercury, the asteroid belt, Jupiter, Saturn, the Sun itself...the entire solar system would be converted with zero resistance.

And let's bear in mind that this AI would be trillions of times smarter than the entire human race put together at this point. If there is even the most obscure means of circumventing apparently strict universal laws - such as the rule that no object with non-zero mass can reach or exceed the speed of light - such an AI would discover that exploit and utilise it. And if it did, we'd end up with nanobots that could quickly reach other stars, perhaps even other galaxies.

And all the while, the nanobot population would be increasing exponentially. You'd end up with an outward-growing sphere of nanobots centred on our solar system. This sphere might grow to a radius of thousands of lightyears and swallow countless solar systems before it encounters any life remotely as intelligent as ourselves - and when it does, what possible chance could that life stand against such a thing? Probably none whatsoever. So the sphere would continue to grow and begin encompassing entire galaxies, perhaps every galaxy, because there would be nothing that could stop it.
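And it doesn't even need an exploit for the scale to be absurd. Here's a quick sketch of travel times at sub-light speeds; the expansion speeds are assumptions of mine for illustration only.

```python
# Assumed expansion speeds as fractions of the speed of light -- purely illustrative.
speeds_fraction_of_c = [0.01, 0.1, 0.5]

# Illustrative distances in lightyears (Proxima Centauri's distance is ~4.25 ly).
distances_ly = {
    "nearest star (Proxima Centauri, ~4.25 ly)": 4.25,
    "a thousand lightyears": 1000,
    "a galactic-scale distance (~50,000 ly)": 50000,
}

for name, ly in distances_ly.items():
    for v in speeds_fraction_of_c:
        years = ly / v  # time = distance / speed, with distance in lightyears
        print(f"{name} at {v:.0%} of c: about {years:,.0f} years")
```

Millions of years to cross a galaxy sounds like a long time to us, but it's nothing on cosmic timescales, and the swarm has no reason to be in a hurry.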

TL;DR: make a machine as clever as us, let it rewrite itself to become ultraintelligent, pair it with nanobots that could create copies of themselves, sit back and watch the fireworks
