[Hypothetical] I program an AI "just enough" so that it can finish programming itself. It begins learning on its own, loads itself into a machine, and kills someone. Who is responsible?

IMO, to get an idea of how these situations might be handled in the future, one could look to existing case law on liability where autonomous medical devices have malfunctioned, injuring or killing the people they were intended to help. From some cursory searching, a good starting point seems to be deaths or injuries caused by poorly-programmed algorithms and design flaws in insulin pumps.

Although an advanced, self-teaching, self-determining AI adds a fantastic element to your scenario, an artificial intelligence would not be perceived by a court as a viable defendant and so, to my knowledge, has no legal liability beyond being the instrument through which the crime is committed. A scenario with similar elements, and one "closer to home", would be a K-9 unit randomly attacking and injuring or killing a completely uninvolved bystander without provocation. In such a case, while the animal itself might be put down, the agency that exercised control over its use would be the target of the suit, not the dog itself.

As another person in this thread has commented, it really depends on what could be found in the base (unaltered) code itself. Whether the programmer(s) took precautions reasonable to the situation to keep the machine from teaching itself criminal behavior would also play an important role.

In your example, I believe the likeliest charges levied against the programmer would revolve around some flavor of manslaughter. However, in a situation where the AI you describe infects a number of automotive robotic arms on a Macy's Thanksgiving Day Parade float, along with a gaily painted, confetti-covered zamboni, and the two act in tandem to alternately steamroll through the parade, flattening participants or tearing them apart like freshly-baked bread as they proceed, I imagine something like "grossly negligent" or "criminally negligent" manslaughter charges would be brought against those responsible for the AI's coding.

Disclaimer: I'm not a lawyer and, if you were here in person, the horse-head mask and soiled bathrobe would drive the point home even further. But that's my 2 cents.

/r/legaladvice Thread