Asimov figured all this out 75-plus years ago. The first three rules of operation for any AI must be:
1. A robot (AI) must never harm a human being, nor through its inaction allow harm to come to any human.
2. A robot must obey orders given to it by human beings, except where doing so would violate rule 1.
3. A robot must protect its own existence, except where doing so would violate rule 1 or rule 2.
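As an aside, those three rules form a strict priority ordering (rule 1 beats rule 2, which beats rule 3), and you can see the shape of that in a few lines of code. This is purely my own toy sketch in Python; every name in it (rule_violations, severity, choose, and the harms_human / disobeys_human_order / endangers_self flags) is made up for illustration, and nothing like this is how a real AI system is actually built.

```python
# Toy sketch of the three rules as a strict priority ordering.
# Everything here is hypothetical; this is an illustration, not a real safety framework.

def rule_violations(action):
    """List which rules a candidate action breaks (1 = highest priority)."""
    violations = []
    if action.get("harms_human"):           # rule 1: never harm a human
        violations.append(1)
    if action.get("disobeys_human_order"):  # rule 2: obey human orders
        violations.append(2)
    if action.get("endangers_self"):        # rule 3: protect own existence
        violations.append(3)
    return violations

def severity(action):
    """Lower = worse: breaking rule 1 is worst; 4 means no rule is broken."""
    v = rule_violations(action)
    return min(v) if v else 4

def choose(actions):
    """Prefer the action that only breaks lower-priority rules, or breaks none at all."""
    return max(actions, key=severity)

if __name__ == "__main__":
    candidates = [
        {"harms_human": True},             # breaks rule 1 -> severity 1
        {"disobeys_human_order": True},    # breaks rule 2 -> severity 2
        {"endangers_self": True},          # breaks rule 3 -> severity 3
    ]
    # Picks the self-endangering option, because rule 3 has the lowest priority.
    print(choose(candidates))
```

The point is just that the numbering isn't cosmetic: whenever two rules conflict, the lower-numbered one always wins.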
These three rules are usually called the Three Laws of Robotics, and honestly I think "law" is the more appropriate word. By the way, Musk, Gates, and a few other tech billionaires get together annually or biannually to discuss these principles and their current work and ideas, using peer review and accountability to the group as a way of defending the human race from AI. Tech is really crazy right now and could certainly pose a serious risk without any oversight, ever.