Saturday, 2 April 2016

Superintelligence and the Moral Problem

Nick Bostrom has a talk (and a book) about superintelligence, in which he warns that, should strong AI become a reality, we will need to be extremely careful about what we ask it to do.

[Embedded YouTube video]


Sam Harris seems to agree, and points out that, should we manage to create strong AI, we will need to be very clear about our answers to certain moral questions.

@27:00

[Embedded YouTube video]


I assume that by this he means carefully instructing the AI to work towards a morality that aims to increase the welfare of conscious creatures, as he states in his book The Moral Landscape.

However, I was wondering if there is a possible clash here between two things that we may want.

What if...

...as in The Hitchhiker's Guide to the Galaxy, we instructed this computer to discover the answer to some very, very fundamental question (okay, not Life, the Universe and Everything, which we might quibble over as really being a question at all): say, a grand unified theory of the universe, or some kind of bedrock theory about what the universe is made of. And what if the computer then discovered that, in telling the human race the answer, it would make miserable everyone currently living, and everyone who may yet be born afterwards?

Would we then have to answer the moral question: what do we value more, welfare or truth? Should we instruct the machine to decide in favour of welfare, or in favour of truth?
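
To make the clash concrete, here is a toy sketch of my own (purely hypothetical; the utility function and the numbers in it are my assumptions, not anything Bostrom or Harris proposes). If we hand the machine a single scalar objective combining welfare and truth, we are forced to commit to an explicit trade-off weight before it ever runs, which is exactly the moral question above.

# Hypothetical toy model: a scalar objective that trades welfare off
# against truth. The weight encodes our answer to "what do we value?"

def utility(welfare, truth_value, truth_weight):
    # Weighted sum: truth_weight = 0 means we care only about welfare,
    # truth_weight = 1 means we care only about truth.
    return (1 - truth_weight) * welfare + truth_weight * truth_value

# The scenario above: revealing the bedrock theory scores high on truth
# but makes everyone miserable (illustrative numbers only).
reveal = utility(welfare=-100.0, truth_value=100.0, truth_weight=0.4)
withhold = utility(welfare=0.0, truth_value=0.0, truth_weight=0.4)

print(reveal)    # -20.0: at this weight, the machine stays silent
print(withhold)  # 0.0
# At truth_weight = 0.5 the two options tie; above 0.5, it reveals.

The point is not the numbers, of course, but that any such objective smuggles in a definite answer to the welfare-versus-truth question.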


via International Skeptics Forum http://ift.tt/1PMZjQH
