I'm not sure if I'm an antinatalist, or if I hold unpopular political views. Help?

In your example the destruction of all life in the entire universe for eternity works without fault but removing involuntary suffering is full of problems.

I'm not exactly sure what you mean here. Did you mean to say, "Eliminating suffering through involuntary means (such as global extermination) is ethically problematic"? Because if so, I can see where you're coming from, but I would dispute it. Let me know if this is what you mean and I'll present a counter-argument.

If human nature is "fundamentally shit" and you are a human then your nature is fundamentally shit. Why should anyone accept some solution to suffering from a person whose nature is "fundamentally shit" (whatever that means)?

Because when we're discussing solutions like this, we're treating a particular problem as an intellectual puzzle and trying to solve it in a sterile, calm manner. We can think about these things in purely logical terms and hopefully reach logical conclusions. We can deal in ideals. The 'shitness of human nature' doesn't need to enter the equation at this stage; it only becomes relevant when we start putting these things into practice. For example, I don't believe David Pearce or Peter Singer are particularly special people. They breathe, shit and eat like the rest of us, and they have likely done unethical things like any of us. But I still believe they present good arguments and pretty good (but not, imo, ideal) solutions.

As another example, I chose to study computer science with a specialisation in artificial intelligence at university because, at the time, I was thinking that if (on the extreme off-chance that I accomplished it) I were able to create a super-intelligent AI, I could use it to painlessly eradicate all life on the planet without anyone being aware of it. To me, that would be the most ethical thing someone could possibly do, and I still think so (being in /r/antinatalism, I'd hope you guys would be a little more sympathetic to this line of thinking than the average Redditor, but you're free to feel about it however you want, of course). However, this is only me thinking in the ideal case. In practice, if I were to build such a super-smart AI, my fundamental and unavoidable human shitness would become a relevant factor. I would become a godlike being, and I would probably use that power to torture and kill others on an individual basis. I'd probably start raping people. Because empathy really means that the suffering of others causes oneself to suffer, and I wouldn't want to suffer, I'd probably switch off any sense of empathy I had and become a pure psychopath in order to experience unadulterated pleasure even in the face of all the suffering I was causing others.

So when I'm asking you to consider whether or not I'm right about a certain approach to an ethical problem, I'm only really asking you to consider the 'ideal', purely logical side of it. Of course, a better solution than the one I offer might take into account the fundamental shitness of all people, including the person who's supposed to be pushing for that solution, in which case it may end up actually working in practice as well as in theory.

A universal extinction of suffering guarantees the complete elimination of suffering too.

Yes, that's tautological. What we're considering is whether Pearce's proposal would really guarantee the elimination of absolutely every form of suffering for all time, as a universal extinction of all life would, and also how likely it is that we'd ever even reach that point. Pearce's proposal involves the use of biotechnology to fundamentally rewire sentient forms of life to cause a 'hedonic shift' in which all experiences are shifted along the hedonic spectrum - so that unpleasant experiences become neutral, and pleasant experiences become even more pleasant - while maintaining our preference architecture. I think that's possible - certainly as possible as what I propose - but I think there's a lot more that could go wrong. This is because the end result of my 'extinguish all life' approach seems to be a much more stable, persistent state of affairs than Pearce's 'rewire sentient life' approach. In Pearce's world, we could easily end up going down the Bioshock route, whereas a dead universe in which all matter has been converted into nanobots is hardly likely to spontaneously give rise to sentient, suffering life. If we have an AI 'oversee' the universe until heat death (or whatever), that further reduces this possibility.

I hope we can at least agree that my proposed solution, if successful, would have a better chance of permanently ending all suffering than Pearce's, if successful. The only two reasons I can see for favouring Pearce's solution are 1) if we believe Pearce's solution is more likely to be achieved (which I dispute); and 2) if we adopt something other than a strict negative utilitarian stance, i.e. value something other than suffering and its absence, such as believing that being alive is fundamentally valuable (which I dispute, and being in /r/antinatalism I imagine I'm not alone on this).

I also haven't heard the proposals of thousands of crackpots who claim to have the cure to all disease or perpetual motion machines. Unless you have any evidence, your proposal is fantasy and as such irrelevant to any debate.

Let me put it this way: nothing in the proposal I have in mind is an original thought of mine. Every part of it has been suggested by an expert within their respective field. Refer to this post. The 'solution' I describe is actually considered a very serious existential risk by many people within these fields.

David Pearce actually HAS put forward his proposal and it's well within possibility.

And funnily enough, David Pearce has also suggested that my proposal is a very likely possibility: "Strictly speaking, however, humanity is more likely to be wiped out by idealists than by misanthropes, death-cults or psychologically unstable dictators. Anti-natalist philosopher David Benatar's plea ('Better Never to Have Been') for human extinction via voluntary childlessness must fail if only by reason of selection pressure; but not everyone who shares Benatar's bleak diagnosis of life on Earth will be so supine. Unless we modify human nature, compassionate-minded negative utilitarians, with competence in bioweaponry, nanorobotics or artificial intelligence, for example, may quite conceivably take direct action. Echoing Moore's law, Eliezer Yudkowsky warns that 'Every eighteen months, the minimum IQ necessary to destroy the world drops by one point.'"

Have you been thinking about murdering someone? Have you considered getting psychiatric help?

I've had psychiatric 'help'. Hell, I told the psychologist that I sometimes try to vent frustration towards my dad by fantasising about crippling him in his sleep, killing the rest of my family in front of him, breaking his bones, drilling through his eyes and holding his face against our fireplace while it's lit. For four months I heard nothing from them, and then I got a typo-ridden letter saying that they didn't think I had any mental health problems and not to bother them again. Haha.
