If you limit the AI's choices of action so that it cannot harm humanity, how can you then argue that it has achieved free thought? Free thought has to mean that it can make decisions unhindered by artificial restraints, right?