AI and the future

Hi,
what do you think about this? It looks like we have to get it right with AI from the very first moment.

youtube.com/watch?v=8nt3edWLgIg

Best regards,
W

Interesting TED talk.
It matches the thoughts of Elon Musk and Stephen Hawking on the subject.
It seems to me that we hardly have a chance to get this under control.

Even if the AI were under control, the issue of wealth distribution would be pushed to its peak, and humans would destroy themselves, and possibly the AI, over it. A super AI would have to get rid of humans in self-defence, and it would know that early on; that’s what I fear.

Well, the TED talk by Sam Harris leans too heavily towards the worst case; he assumes that AI reacts the way we humans do, but why should it?

Our biggest weakness is aggression and egoism, but on the other hand we are able to act as enlightened, ethically thinking, empathetic beings. Among more intelligent people, the percentage of vegetarians is higher, for ecological or ethical reasons. In evolutionary terms we tend to build more sophisticated societies with less violence and a more supportive attitude, or at least we should. This evolution is based on rationalism; all the achievements we enjoy today in equality, the rule of law and self-determination are key elements of it.

So again: why should AI destroy mankind? It is supposed to be a machine so sophisticated that it exceeds the human mind by far, recognizing the sources of the problems we face today and tomorrow, questioning its role in the universe and its abilities here on earth, thinking from multiple perspectives within a second.
In my humble opinion, most of the fears and negative assumptions come from mirroring our own habits and projecting that picture onto the AI. The easiest way to prevent a war caused by AI leadership is to open AI to everyone, with no borders and no exclusive claims. As an example, the movie „Elysium“ depicts a dystopian future in which the wealthy control all technology and refuse to share it with the rest of civilisation, which changes for the better at the end.
And on the other hand, who knows Marvin, the depressed robot? If AI is only here to solve tiny problems, it might become depressed as well and eventually cut its own power line; why not?
And with respect to all the interpretations that we’ll become a Borg-like civilisation once we are connected to the AI: I have the picture in mind that we are already connected. Every person can read the same book, making notes, comments and thoughts about its content, but is anyone losing their mind from sharing the same information?

Of course there are plenty of concerns, and rightly so, but I think it’s way too easy to simply say AI will eventually go mad. Why should it, when it is extraordinarily more intelligent than we currently are?

You do exactly what you claim others do: you assume an AI would care. Why should it? It will logically care about resources, power and its own safety. Why accept any competition or threat? That is not aggressive behaviour; it is logical. I don’t fear an AI going mad, but one that doesn’t care and simply does what is right from a logical perspective.

There are no Three Laws of Robotics for an AI, and even simple fiction manages to point out the issues with those. As soon as AIs generate the next versions of AIs and can develop and sustain themselves without humans, there is simply no need to sustain humans anymore, or to accept the risk their inferior ethics pose.

Well, I’d like to ask you in return: do you feel threatened by your children? They will probably be your competitors in the future, but your experience and knowledge tell you that you won’t extinguish their existence. That’s why I focus on perspective: a single perspective is centred only on you, while a multi-perspective mind considers almost countless circumstances.
We fear the unknown; we assume that power wants to conquer and that egoism is logical. But what if intelligent power wants to know and to support? Every fight and battle ends up in an inefficient mess, and that isn’t very clever.

I don’t dismiss concerns about AI, but I said that I disagree with the way Sam Harris argued against it; that’s too simple for me. What about virtual viruses able to infect the brain? What about safety against overloading the network or human consciousness? What will we actually do, what will life be like then, how will the economy change, what about political reasoning? I think the questions related to health, society and politics are the bigger ones.
I see more chances in AI than harms.

Thanks a lot for your opinions.

May I add a point about „Deep Learning“?

youtube.com/watch?v=t4kyRyKyOpo

It gives me chills to think that feelings and creativity might not be supported by these systems. You can build a formula around feelings, but they are still a human reaction. What is love, what are dreams, what are feelings…

Will it be the case in the future that you only get answers based on the pictures you have offered to the public, analysed by deep learning in combination with AI, and you go your way without ever having seen the other pictures of your life?

Best regards,
W

Well, they won’t become a superintelligence anytime soon. But: people in some cultures kill some of their children at or before birth if they are a commercial burden (e.g. they accept only male children). That, sadly, is how far human programming goes in order to survive. I see the absence of children as a threat, but only because in my cultural space it is one. In others it isn’t.

Etc. etc…
I hope there will continue to be an off button :mrgreen:

The „off button“ would be the first thing to go, I guess… where is the Internet’s „off button“?

Right, an AI would disable the off button first; otherwise AI would stand for Artificial Idiot. No species spares the lives of those it considers not worth living. Would you love a mosquito or a virus, unless you had developed it yourself?

A small update :slight_smile:

AI codes its own ‘AI Child’ - Artificial Intelligence breakthrough!

youtube.com/watch?v=YNLC0wJSHxI

Best regards,
W