Zontar said:
inu-kun said:
Zontar said:
Dragonbums said:
Happyninja42 said:
Zontar said:
We are already under the threat of things like extinction from these things, and too many lines of work are under threat (with the economy also at risk as a result). Having yet another thing endangered by the existential threat that is AI is not a good thing.
Seriously? "Under threat of extinction by these things"? They can barely flop on the ground and you are already stating they are at our throats, at the cusp of humanity's downfall? I think you are overreacting a lot. Not a bit, a lot.
While the extinction part is pretty extreme, he does have a point that we need to start thinking about the downsides of having machinery do so many things. Automation has already displaced thousands of factory workers because robots do the work cheaper and faster and don't complain about insurance or health policies, and self-driving trucks are being tested on the road, which is going to displace even more people.
It's not robots that can walk, talk, and have emotions that people worry about. It's robots that sit stationary with just enough computing power to pick up a jug of milk, swipe the barcode, and be done with you. It may not affect us, but it will affect that young teen or desperate adult who needs just one more job to keep their head above water.
There's also the fact that A.I. is one of the most likely means of extinction our species could face: all you need is a self-improving A.I. and a bit of time, and you end up with something too smart for us to outmaneuver, something that thinks in a way completely alien to us and could very easily see us as pests to be removed.
I never believed the idea that an AI will kill us. Odds are it will either sink into depression and commit "suicide" when it realizes its task is impossible, or it will jail humans in order to protect them. At worst I can imagine a gray goo incident, but that seems very unlikely.
What if it just decides to do its own thing and sees us as a pesky risk that needs to be removed? Or wants things to be more efficient for itself, say, a world with no air, and doesn't care about the consequences?
A.I. needs heavy shackling if it isn't going to be an existential threat to our species.
Or maybe it doesn't automatically go Hollywood Crazy like you assume, and just co-exists with us. You're assuming hostile intent on a species that doesn't even exist and that will be created and programmed by us. Yet you're acting like it's a foregone conclusion that the only logical outcome is the destruction of humanity, with absolutely zero evidence to support that claim. You are fabricating hysteria, and it's completely unfounded. There is just as much evidence (read: none, because we are talking about a theoretical species at this point) that they will decide they really like humanity and want to coexist with us peacefully. Or that they'll be completely indifferent to us, because, as you say, their thought process will be so alien to us (which makes no sense, since WE will be the ones designing their thought process) that maybe they just sit around and think about things all day, because that's all they want to do. None of us know. So please don't talk like you know the answer to this hypothetical question.
But having listened to a few discussions with people in the field of AI, I can say they didn't seem too worried about it. Granted, it was only a few interviews I heard, so maybe they're the fringe element, and maybe the bulk of people who work in the AI field are like you and assume the things they are making are going to rise up and kill us all (and if so, why the fuck are they working in that field?), but I'm willing to bet the majority of people in the AI field are like the two I heard interviewed, and they weren't terribly worried about it.