Hayao Miyazaki heavily criticises AI development team for movement AI


Fox12

AccursedT- see you space cowboy
Jun 6, 2013
4,828
0
0
inu-kun said:
Fox12 said:
The best part of Miyazaki went into his films
He's one of those creators where knowing any personal information about him stains his work.
The weird thing is that Takahata's work is comparatively dark and mature, and yet the dude seems like the sweetest guy in the company.
 

Dragonbums

Indulge in it's whiffy sensation
May 9, 2013
3,307
0
0
Happyninja42 said:
Zontar said:
We already are under the threat of things like extinction from these things, and there are too many lines of work under threat (with the economy also being at risk as a result); having another thing at risk from the existential threat that is AI is not a good thing.
Seriously? "Under threat of extinction by these things"? They can barely flop on the ground and you are already stating they are at our throats, at the cusp of humanity's downfall? I think you are overreacting a lot. Not a bit, a lot.
While the extinction part is pretty extreme, he does have a point that we need to start thinking about the downsides of having machinery do so many things. Automation has already displaced thousands of factory workers, because robots do the work cheaper and faster and don't complain about insurance or health policies, and self-driving trucks are being tested on the road, which is going to displace even more people.
It's not robots that can walk, talk, and have emotions that people worry about. It's robots that are stationary and have just enough computing power to pick up a jug of milk, swipe the barcode, and be done with you. It may not affect us, but it will affect that young teen or desperate adult who needs just one more job to keep their head above water.
 

Zontar

Mad Max 2019
Feb 18, 2013
4,931
0
0
Dragonbums said:
Happyninja42 said:
Zontar said:
We already are under the threat of things like extinction from these things, and there are too many lines of work under threat (with the economy also being at risk as a result); having another thing at risk from the existential threat that is AI is not a good thing.
Seriously? "Under threat of extinction by these things"? They can barely flop on the ground and you are already stating they are at our throats, at the cusp of humanity's downfall? I think you are overreacting a lot. Not a bit, a lot.
While the extinction part is pretty extreme, he does have a point that we need to start thinking about the downsides of having machinery do so many things. Automation has already displaced thousands of factory workers, because robots do the work cheaper and faster and don't complain about insurance or health policies, and self-driving trucks are being tested on the road, which is going to displace even more people.
It's not robots that can walk, talk, and have emotions that people worry about. It's robots that are stationary and have just enough computing power to pick up a jug of milk, swipe the barcode, and be done with you. It may not affect us, but it will affect that young teen or desperate adult who needs just one more job to keep their head above water.
There's also the fact that A.I. is one of the most likely means of extinction our species could face: all you need is a self-improving A.I. and a bit of time, and you end up with something too smart for us to outmaneuver, thinking in a way completely alien to us, that could very easily see us as pests to be removed.
 

Zontar

Mad Max 2019
Feb 18, 2013
4,931
0
0
inu-kun said:
Zontar said:
Dragonbums said:
Happyninja42 said:
Zontar said:
We already are under the threat of things like extinction from these things, and there are too many lines of work under threat (with the economy also being at risk as a result); having another thing at risk from the existential threat that is AI is not a good thing.
Seriously? "Under threat of extinction by these things"? They can barely flop on the ground and you are already stating they are at our throats, at the cusp of humanity's downfall? I think you are overreacting a lot. Not a bit, a lot.
While the extinction part is pretty extreme, he does have a point that we need to start thinking about the downsides of having machinery do so many things. Automation has already displaced thousands of factory workers, because robots do the work cheaper and faster and don't complain about insurance or health policies, and self-driving trucks are being tested on the road, which is going to displace even more people.
It's not robots that can walk, talk, and have emotions that people worry about. It's robots that are stationary and have just enough computing power to pick up a jug of milk, swipe the barcode, and be done with you. It may not affect us, but it will affect that young teen or desperate adult who needs just one more job to keep their head above water.
There's also the fact that A.I. is one of the most likely means of extinction our species could face: all you need is a self-improving A.I. and a bit of time, and you end up with something too smart for us to outmaneuver, thinking in a way completely alien to us, that could very easily see us as pests to be removed.
I never believed the idea that an AI will kill us. Odds are it will either enter a depression and commit "suicide" when it realizes its task is impossible, or it will jail humans in order to protect them. At worst I can think of a gray goo incident, but that seems very unlikely.
What if it just decides to do its own thing and sees us as a pesky risk that needs to be removed? Or just wants things to be more efficient for itself, such as a world with no air, and doesn't care about the consequences?

A.I. needs heavy shackling to not be an existential threat to our species.
 

Eclipse Dragon

Lusty Argonian Maid
Legacy
Jan 23, 2009
4,259
12
43
Country
United States
So many great conversation topics branching off of this story.

OT: Welp there goes my job security.

Miyazaki is an old man known to be very conservative in his artistic techniques, so showing this to him could not have gone any other way. His method is something art teachers bash into their students as long and as often as possible: "draw from life." If he perceives your stuff as not being influenced by real life, you're not going to get his approval. I can only imagine he'd be doubly insulted by the idea of a robot creating art, because a robot is not human, can't think or feel like a human, and can't properly observe how humans move. At least for the time being, anything created by a robot is going to be... robotic.

Then there's the moral question of whether we should even strive to make them any other way. The prospect of robots replacing humans for tasks, particularly ones as personal as art, is frightening. The McDonald's down the street from me just fired all of its cashiers, because they got replaced with self-ordering kiosks. People need jobs to make money so they can live, and we have this mentality that if you don't work, you're the scum of the universe (which is probably 10x worse in Japan). So what happens in a society where people want to work and are able to work, but can't find jobs because even the most individual pursuits (such as art) are taken over by machines?

Congress buys a lot of tanks, despite the army going "please don't buy any more tanks, we're tripping over tanks, get these damn tanks away from us." [http://www.military.com/daily-news/2014/12/18/congress-again-buys-abrams-tanks-the-army-doesnt-want.html] The reasoning given? So the production lines in Ohio can stay open and the people working there can keep their jobs.
 

hermes

New member
Mar 2, 2009
3,865
0
0
He might be kind of grumpy, but what did you expect? Even without audio, I wasn't particularly impressed by the demo. It looks incredibly shaky, artificial, and amateurish, something that would look out of place in the first Dead Space game, let alone in Japanese animation. It looked a lot like the first version of a project done by some guys who thought the challenge came first and the application could be thought out later; definitely not something that was ready to be shown to one of the world's masters of animation.

Also, they should have been better prepared about who they were addressing in their presentation. Anyone slightly familiar with Miyazaki's work knows he is not a friend of computer animation, has pretty high standards for what it means for animation to be art, and is not warm towards AI, transhumanism, or animation based on algorithms rather than observation. There are some people in the anime industry who would be interested in this; Miyazaki is not one of them.
 

Saelune

Trump put kids in cages!
Legacy
Mar 8, 2011
8,411
16
23
Zontar said:
inu-kun said:
Zontar said:
Dragonbums said:
Happyninja42 said:
Zontar said:
We already are under the threat of things like extinction from these things, and there are too many lines of work under threat (with the economy also being at risk as a result); having another thing at risk from the existential threat that is AI is not a good thing.
Seriously? "Under threat of extinction by these things"? They can barely flop on the ground and you are already stating they are at our throats, at the cusp of humanity's downfall? I think you are overreacting a lot. Not a bit, a lot.
While the extinction part is pretty extreme, he does have a point that we need to start thinking about the downsides of having machinery do so many things. Automation has already displaced thousands of factory workers, because robots do the work cheaper and faster and don't complain about insurance or health policies, and self-driving trucks are being tested on the road, which is going to displace even more people.
It's not robots that can walk, talk, and have emotions that people worry about. It's robots that are stationary and have just enough computing power to pick up a jug of milk, swipe the barcode, and be done with you. It may not affect us, but it will affect that young teen or desperate adult who needs just one more job to keep their head above water.
There's also the fact that A.I. is one of the most likely means of extinction our species could face: all you need is a self-improving A.I. and a bit of time, and you end up with something too smart for us to outmaneuver, thinking in a way completely alien to us, that could very easily see us as pests to be removed.
I never believed the idea that an AI will kill us. Odds are it will either enter a depression and commit "suicide" when it realizes its task is impossible, or it will jail humans in order to protect them. At worst I can think of a gray goo incident, but that seems very unlikely.
What if it just decides to do its own thing and sees us as a pesky risk that needs to be removed? Or just wants things to be more efficient for itself, such as a world with no air, and doesn't care about the consequences?

A.I. needs heavy shackling to not be an existential threat to our species.
I recommend the shackles of emotion. Positive ones, mainly. I am not so cynical as to automatically fear the robot revolution, not that I doubt its possibility, but I am already prepared to fight for AI rights, so I look forward to us being on opposite sides of that fight too.

Really though, a lot of the sci-fi stuff where the robots kill all humans is usually because we give them the ability to learn, and guns, rather than the ability to learn and the ability to care. The Terminator series is one such example.
 

hermes

New member
Mar 2, 2009
3,865
0
0
inu-kun said:
Ezekiel said:
inu-kun said:
Fox12 said:
The best part of Miyazaki went into his films
He's one of those creators where knowing any personal information about him stains his work.
I disagree. I've never known such a creator. I don't give a fuck about Roman Polanski's sexual abuse or Mel Gibson's racism and homophobia. All I care about are their movies.
I don't care about them either, but with him the disdain for the rest of the anime industry just clashes too much with the usual optimistic tone of his works.
Really?

I think half his work has "humanity losing touch with nature and thus with the spiritual world" as a thematic core. Most of his heroes are people who set themselves apart from everyone else by being passionate and by observing, appreciating, and closing the gap with the world around them. Even the movie about the man who created the Zero fighters tries to redeem him by making him a dreamy young boy who enjoyed watching birds, observing kites, and playing with paper planes.

Under that profile, I would understand if he has nothing but disdain for computer-based animation and coloring techniques, which comprise 99% of the current anime output, and would feel compelled to give a piece of his mind to a group of tech guys with an unpolished demo (let's face it, they talk about animating zombies and monsters as an example, but I think that is the only thing it can be used on) who wanted "computers to create art" without human intervention.
 

happyninja42

Elite Member
Legacy
May 13, 2010
8,577
2,990
118
Zontar said:
inu-kun said:
Zontar said:
Dragonbums said:
Happyninja42 said:
Zontar said:
We already are under the threat of things like extinction from these things, and there are too many lines of work under threat (with the economy also being at risk as a result); having another thing at risk from the existential threat that is AI is not a good thing.
Seriously? "Under threat of extinction by these things"? They can barely flop on the ground and you are already stating they are at our throats, at the cusp of humanity's downfall? I think you are overreacting a lot. Not a bit, a lot.
While the extinction part is pretty extreme, he does have a point that we need to start thinking about the downsides of having machinery do so many things. Automation has already displaced thousands of factory workers, because robots do the work cheaper and faster and don't complain about insurance or health policies, and self-driving trucks are being tested on the road, which is going to displace even more people.
It's not robots that can walk, talk, and have emotions that people worry about. It's robots that are stationary and have just enough computing power to pick up a jug of milk, swipe the barcode, and be done with you. It may not affect us, but it will affect that young teen or desperate adult who needs just one more job to keep their head above water.
There's also the fact that A.I. is one of the most likely means of extinction our species could face: all you need is a self-improving A.I. and a bit of time, and you end up with something too smart for us to outmaneuver, thinking in a way completely alien to us, that could very easily see us as pests to be removed.
I never believed the idea that an AI will kill us. Odds are it will either enter a depression and commit "suicide" when it realizes its task is impossible, or it will jail humans in order to protect them. At worst I can think of a gray goo incident, but that seems very unlikely.
What if it just decides to do its own thing and sees us as a pesky risk that needs to be removed? Or just wants things to be more efficient for itself, such as a world with no air, and doesn't care about the consequences?

A.I. needs heavy shackling to not be an existential threat to our species.
Or maybe it doesn't automatically go Hollywood Crazy like you assume, and just co-exists with us. You're assuming hostile intent from a species that doesn't even exist, one that will be created and programmed by us. But you are acting like it's a foregone conclusion that the only logical outcome is the destruction of humanity, with absolutely zero evidence to support that claim. You are fabricating hysteria, and it's completely unfounded. There is just as much evidence (read: none, because we are talking about a theoretical species at this point) that they will decide they really like humanity and want to coexist with us peacefully. Or they might be completely indifferent to us, because, as you say, their thought process will be so alien to us (which makes no sense, since WE will be the ones designing their thought process) that maybe they just sit around and think about things all day, because that's all they want to do. None of us know. So please don't talk like you know the answer to this hypothetical question.

But having listened to a few discussions by some people in the field of AI, I can say they didn't seem too worried about it. Granted, it was only a few interviews that I've heard, so maybe they are the fringe element, and maybe the bulk of people who work in the AI field are like you and assume that the things they are making are going to rise up and kill us all (in which case, why the fuck are they working in that field), but I'm willing to bet that the majority of people in the AI field are like the two I heard interviewed, and they weren't terribly worried about it.
 

Zontar

Mad Max 2019
Feb 18, 2013
4,931
0
0
Happyninja42 said:
Zontar said:
inu-kun said:
Zontar said:
Dragonbums said:
Happyninja42 said:
Zontar said:
We already are under the threat of things like extinction from these things, and there are too many lines of work under threat (with the economy also being at risk as a result); having another thing at risk from the existential threat that is AI is not a good thing.
Seriously? "Under threat of extinction by these things"? They can barely flop on the ground and you are already stating they are at our throats, at the cusp of humanity's downfall? I think you are overreacting a lot. Not a bit, a lot.
While the extinction part is pretty extreme, he does have a point that we need to start thinking about the downsides of having machinery do so many things. Automation has already displaced thousands of factory workers, because robots do the work cheaper and faster and don't complain about insurance or health policies, and self-driving trucks are being tested on the road, which is going to displace even more people.
It's not robots that can walk, talk, and have emotions that people worry about. It's robots that are stationary and have just enough computing power to pick up a jug of milk, swipe the barcode, and be done with you. It may not affect us, but it will affect that young teen or desperate adult who needs just one more job to keep their head above water.
There's also the fact that A.I. is one of the most likely means of extinction our species could face: all you need is a self-improving A.I. and a bit of time, and you end up with something too smart for us to outmaneuver, thinking in a way completely alien to us, that could very easily see us as pests to be removed.
I never believed the idea that an AI will kill us. Odds are it will either enter a depression and commit "suicide" when it realizes its task is impossible, or it will jail humans in order to protect them. At worst I can think of a gray goo incident, but that seems very unlikely.
What if it just decides to do its own thing and sees us as a pesky risk that needs to be removed? Or just wants things to be more efficient for itself, such as a world with no air, and doesn't care about the consequences?

A.I. needs heavy shackling to not be an existential threat to our species.
Or maybe it doesn't automatically go Hollywood Crazy like you assume, and just co-exists with us. You're assuming hostile intent from a species that doesn't even exist, one that will be created and programmed by us. But you are acting like it's a foregone conclusion that the only logical outcome is the destruction of humanity, with absolutely zero evidence to support that claim. You are fabricating hysteria, and it's completely unfounded. There is just as much evidence (read: none, because we are talking about a theoretical species at this point) that they will decide they really like humanity and want to coexist with us peacefully. Or they might be completely indifferent to us, because, as you say, their thought process will be so alien to us (which makes no sense, since WE will be the ones designing their thought process) that maybe they just sit around and think about things all day, because that's all they want to do. None of us know. So please don't talk like you know the answer to this hypothetical question.

But having listened to a few discussions by some people in the field of AI, I can say they didn't seem too worried about it. Granted, it was only a few interviews that I've heard, so maybe they are the fringe element, and maybe the bulk of people who work in the AI field are like you and assume that the things they are making are going to rise up and kill us all (in which case, why the fuck are they working in that field), but I'm willing to bet that the majority of people in the AI field are like the two I heard interviewed, and they weren't terribly worried about it.
It's a justified fear, one that everyone I've ever seen who works on it acknowledges is valid (the main issue tends to be whether we build the right failsafes to prevent it from going kill-crazy). But the point remains that once we have a self-improving A.I., all bets are off, because it will at some point get past the point where we can even begin to guess what endgame its actions are part of. Sure, a Robots-series-type takeover where humanity is guided towards something, either overtly like in that series or behind the scenes like in Halo, is a very likely possibility, but we can't ignore the very real possibility of extermination by an entity we cannot begin to compare ourselves to in terms of intellect. The possibility is far too real not to prepare for.
 

happyninja42

Elite Member
Legacy
May 13, 2010
8,577
2,990
118
Zontar said:
It's a justified fear, one that everyone I've ever seen who works on it acknowledges is valid (the main issue tends to be whether we build the right failsafes to prevent it from going kill-crazy). But the point remains that once we have a self-improving A.I., all bets are off, because it will at some point get past the point where we can even begin to guess what endgame its actions are part of. Sure, a Robots-series-type takeover where humanity is guided towards something, either overtly like in that series or behind the scenes like in Halo, is a very likely possibility, but we can't ignore the very real possibility of extermination by an entity we cannot begin to compare ourselves to in terms of intellect. The possibility is far too real not to prepare for.
I never said it isn't something to be considered; I'm saying you are talking as if:

1. You already know the end result of this speculative field.

2. That result is the destruction of humanity.

Neither of which is true. Sure, consider the possibility, but don't talk and act like it's a foregone conclusion, because it's not. There are tons of ways we could easily prevent the robot apocalypse, and I'm sure we will implement many of them as safety measures, whether they turn out to be necessary or not.