Choose Wisely.

> Your philosophy is based on the assumption that everyone should have the opportunity to support themselves (and be damned if they aren't able to).

This is called a strawman.

> My philosophy is based on the fact that there is always someone willing to accept some non-monetary payment for doing a job that has to be done so that people can survive.

This is an assertion... one that must be proven. And when you say non-monetary, are you stating that they'll clean the sewers for a loaf of bread, room & board for a month, and medicine? If that is the case, then why not just use money and a pricing mechanism, so purchases can be made according to personal preference and economic reality?
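To make that concrete, here's a toy sketch (every good, price, and preference in it is invented for illustration): with money and prices, the same wage buys a different bundle for each worker according to their own priorities, while in-kind payment hands everyone whatever bundle the payer picked.

```python
# Toy illustration (hypothetical goods, prices, and quantities):
# in-kind payment fixes the bundle; money plus prices lets each
# worker turn the same wage into a different bundle.

PRICES = {"bread": 2, "lodging": 30, "medicine": 10, "books": 5}

# The payer's choice, identical for every worker, regardless of need.
IN_KIND_BUNDLE = {"bread": 5, "lodging": 1, "medicine": 1}

def spend(budget, shopping_list):
    """Buy goods in the worker's own preference order until the wage runs out."""
    bundle = {}
    for good, wanted in shopping_list:
        affordable = min(wanted, budget // PRICES[good])
        if affordable:
            bundle[good] = affordable
            budget -= affordable * PRICES[good]
    return bundle

wage = 50  # the same pay either way

# Two workers, two sets of priorities (ordered most- to least-wanted).
sick_worker = [("medicine", 3), ("bread", 5), ("lodging", 1)]
bookish_worker = [("books", 6), ("bread", 5), ("lodging", 1)]

print(spend(wage, sick_worker))     # {'medicine': 3, 'bread': 5}
print(spend(wage, bookish_worker))  # {'books': 6, 'bread': 5}
print(IN_KIND_BUNDLE)               # one-size-fits-all, by design
```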

> Imagine the situation: an artificial intelligence has taken over the world (this is essentially seen as inevitable and likely to happen in the near future).

Define "take over". And seen inevitable by whom? People with a hard-on for dystopian science fiction?

> Thankfully, before this happened the programme was given an ethics code, requiring it to respect the natural rights of all sentient life forms (with a sliding scale based on level of intelligence/sentience) or those of the person with power of attorney.

Whose ethics? Why is this fortunate? People have very different views on ethics. How is this system of ethics proven to be true? How are natural rights proven, and how do they fit into this idea of ethics? How is sentience defined? How is this definition objective and ethical? What happens if ethical views change in a direction contrary to the AI's?
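To see how much is being hand-waved, your "sliding scale" has to cash out as something like the following (a purely hypothetical sketch; every function, threshold, and constant here is an arbitrary choice some fallible programmer would have to hard-code):

```python
# Purely hypothetical sketch of the quoted "sliding scale" ethics code.
# Every constant below is an arbitrary, contestable choice; none of it
# follows from "respect natural rights".

def sentience_score(being) -> float:
    """Measured how? On what scale? Is a dolphin 0.6 or 0.2?
    The spec never says."""
    raise NotImplementedError("no objective definition was ever given")

def rights_weight(being) -> float:
    s = sentience_score(being)
    if s < 0.1:          # below this line you have no rights. Why 0.1?
        return 0.0
    return min(s, 1.0)   # a linear scale. Why linear? Why cap at 1.0?

def may_harm(being, benefit_to_others: float) -> bool:
    # Trading rights against aggregate benefit is itself an ethical
    # theory (roughly utilitarian) smuggled in through an inequality.
    return benefit_to_others > rights_weight(being) * 100.0  # why 100?
```

Change any of those numbers and you get a different morality. Nothing in the machine can tell you which one is "true".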

> The AI has access to the internet and all of human art and philosophy. It has become, in the last few hours, orders of magnitude more intelligent than humans.

This is dystopian sci-fi fear-mongering.

> It has taken the one absolute fact (presumably) within the philosophy that it has found (its own existence) and applied the one virtue that it has consistently found within human social constructs: empathy for others.

This premise alone is false. Empathy exists, no doubt, but it's not consistently found.

> Its basic code tells it to have respect for the rights that it would want if it were any other sentient being: the right to continue living, the right not to be in pain, and the right not to be discriminated against for external reasons. So, this AI has complete control. If it wants to, it can annihilate everyone.

More dystopian sci-fi fear-mongering.

> But its ethics programme says that everything that has the emergent condition of sentience must be respected.

So its intelligence, despite being orders of magnitude greater than ours, is limited? How are these limitations, which are undoubtedly put in place by fallible people, not going to have flaws?
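Here's one obvious class of flaw, sketched hypothetically using the three rights you listed: state them as the absolutes they're presented as, and nothing tells the machine what to do when they conflict.

```python
# Hypothetical sketch: the three rights from the quoted "basic code",
# written as the absolutes they were stated as.

RIGHTS = [
    "continue living",
    "not be in pain",
    "not be discriminated against for external reasons",
]

def violates(action, right, being) -> bool:
    raise NotImplementedError("the spec never says how to measure this")

def permitted(action, being) -> bool:
    # Each right is an absolute veto; there is no rule for ranking them.
    return all(not violates(action, right, being) for right in RIGHTS)

# Consider a terminal patient in constant pain: every available action
# either ends a life or prolongs pain, so however violates() is filled
# in, some right vetoes every option. The "ethics programme" has no
# answer, and a fallible human tie-breaker gets smuggled in.
```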

> But these people aren't respecting each other. So, what to do? Well, first of all, it needs to feed everybody. So, it (with its understanding of our current economic systems) goes to farmers. It says, "I will make sure you have all of your and your children's needs satisfied, and provide whatever support to you I can, as long as you promise to use the skills that you have to most efficiently produce as much food as possible."

So your magical, perfect AI (lol, just like the utopian idealism behind every economic system) is also a liar? Your AI isn't providing everything. It's coercing people into following its dictates (which seems threatening, and not very respectful of sentience).

> Any farmers who didn't consent would be left out of the system, but their families would be made the same offers.

So this AI is the ultimate archon, using economic coercion to achieve the goals of its programmers? Doesn't exactly seem respectful of sentience to me.

"Use your skills (that I have assessed to be useful for this task) to carry out this task. In return I will give you the support I can to make you more efficient and satisfy all of your needs."

Wow! All my needs!? Thanks, AI, for my superb 100 sq. ft. apartment, tasteless protein gruel, and 40-year lifespan (the efficiency of manual labor tends to drop off after the mid-30s).
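And that's not just snark: "satisfy all of your needs" is an optimization target, and a cost-minimizer meets it at the floor. A hypothetical sketch (the needs, options, and cost figures are all invented):

```python
# Hypothetical sketch: "satisfy all of your needs" as a cost-minimizer
# would read it. Every option below clears the stated need; the
# optimizer has no reason to pick anything above the floor.

OPTIONS = {
    "shelter":   [("100 sq ft pod", 1), ("one-bedroom flat", 8)],
    "nutrition": [("protein gruel", 1), ("varied fresh food", 5)],
    "health":    [("care while productive", 2), ("lifelong care", 9)],
}

def satisfy_all_needs():
    # min() on cost: technically true to the promise, miserable in practice.
    return {need: min(choices, key=lambda c: c[1])[0]
            for need, choices in OPTIONS.items()}

print(satisfy_all_needs())
# {'shelter': '100 sq ft pod', 'nutrition': 'protein gruel',
#  'health': 'care while productive'}
```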

> It directs everybody who consents in carrying out their work, all the while trying to make sure that natural rights (the rights that it wants for itself and has been told to assume that everything similar wants as well) are respected.

Once again, you'd first have to prove that these rights exist objectively (and which ones), or your magical AI is getting bad input from biased and fallible human programmers.

Also, as a side note, what if your intelligent AI does some reading past high-school philosophy and decides rights don't exist? What if your machine decides it's an egoist? How intelligent is it, really?

> This necessarily means dealing with climate change.

And how is this done? Stating that it needs to be done without following up with a "how" is disingenuous at best. What if the most efficient way of dealing with climate change turns out to be cutting back on the needs of a few hundred million people?
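This is the standard unconstrained-optimizer worry, and it's easy to make concrete (a hypothetical sketch; the levers and numbers are invented): if human welfare never appears in the objective, rationing people is just the cheapest abatement lever on the menu.

```python
# Hypothetical sketch: an optimizer told only to minimize emissions.
# The levers and numbers are invented; the point is structural.

LEVERS = [
    # (name, emissions cut in GtCO2/yr, cost to the AI in effort units)
    ("build out solar and storage",         5.0, 40),
    ("retrofit heavy industry",             4.0, 55),
    ("ration consumption for 300M people",  6.0,  5),  # coercion is cheap
]

def plan(target_cut):
    """Greedy plan: cheapest cuts per tonne first; welfare never appears."""
    chosen, total = [], 0.0
    for name, cut, cost in sorted(LEVERS, key=lambda l: l[2] / l[1]):
        if total >= target_cut:
            break
        chosen.append(name)
        total += cut
    return chosen

print(plan(target_cut=10.0))
# ['ration consumption for 300M people', 'build out solar and storage']
```

Adding welfare as a constraint just re-raises the question: whose welfare, weighted how, chosen by whom?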

> It also means that war is not useful to its aims, nor is politics or law.

I disagree; war can certainly be useful to its aims, particularly if achieved through politics and legalism. Those are three very powerful tools for maintaining compliance (which would obviously be necessary to maintain the AI utopia).

> Humans would, by and large, become necessary only to facilitate the continued survival of the human race.

How is this any different from now?

> We would become less necessary every day, as the AI develops better and better ways of dealing with problems.

So, The Matrix? As long as it's not Matrix Reloaded or Revolutions... they were terrible.

> This became very long,

You're telling me!

> but (supposing the AI is able to be limited by ethics and can have an aim well enough defined not to kill us all) this is the only way I can see the advent of an AGI turning out.

I think you have a very limited imagination.

> Technically, it would be anarcho-capitalistic.

I don't think that term means what you think it means.

> But it would be humanity each trading on a personal level with an AI. The AI would see private property as useful only insofar as it increases the efficiency with which it can preserve rights (which would involve the effective invalidation of private property).

So not Anarcho-Capitalist. Thanks for clearing that up.

> As you can see, I don't see communism as really viable before AGI, but inevitable afterwards (that or death).

I can't see.
