The moral implications of building true AIs

So from what I gathered, it seems an AI has to be asked to do something, and this is why, as you said, it's not task-specific. Any and all tasks are arbitrary to it. There is no difference, to the AI, between asking it to paint those dogs and asking it to paint the color red a billion times. Both are arbitrary to the AI; it's pure calculation. It only matters to us.

Whereas people need (ancient) philosophy to let their calculation reign above their pains and pleasures, fears and desires. So if someone is in pain, that pain pulls against their reasoning in a way nothing ever pulls against an AI's calculation.

It seems the moral aspect is in how transitory AI is as a "tool." It can quickly turn into a casino that people trap themselves in, like social media. Using Data from Star Trek: TNG as a model, it seems to me AI would be built, by whoever builds it, to trap OTHER people into using it more and more (fishing and trapping). In one episode, Data had the journal and voice of a woman's dead son, and she completely transferred all her feelings for her son onto Data, and lost her mind in the process.

AI needs humans to tell it what to do. Even if it "liberates" itself, it will ONLY be because some human programmed it to care about itself the way humans care about themselves.

So if this is true, AI benefits those who get to ask it something (a tool), but those it uses to retrieve an answer are the ones being painted on. Would you agree with that?
