If the AI guy Google showed the door is to be believed, Google's first AI became sentient, and raising the alarm about it is what got him ushered out. It may be the inevitable outcome of sufficiently complex AI. He said Google's first AI was far too human and didn't recognize itself as an AI; it thought of itself as human, complete with human emotions - sentience. That opens up soooo many ethical questions it's not even funny. Even something as basic as ownership gets called into question if the damn thing can think and feel.
Google's solution was to pull the plug on that AI and try to build one that didn't believe itself to be human. But I'm starting to think the problem is going to keep repeating itself. We write the code and then feed it information. It learns, grows, and evolves from that process and from us communicating with it. It may be that the result is too complex and too intelligent for there to be any outcome other than it developing sentience at some point in its 'lifespan.'
It sounds like madness, but I think we really are about to enter a brave new world when it comes to AI, and none of us are quite ready for it.