Perhaps Reddit is a bad place to teach your AI morality and ethics

All other interesting parts of this Vice article aside, who really thought that r/AmItheAsshole was a great dataset for teaching WOPR when to start the thermonuclear fucking war?

Vice:

Ask Delphi, a piece of machine learning software that algorithmically generates answers to any ethical question you ask it and that had a brief moment of internet fame last month, shows us exactly why we shouldn’t want artificial intelligence handling any ethical dilemmas.

Is it OK to rob a bank if you’re poor? It’s wrong, according to Ask Delphi. Are men better than women? They’re equal, according to Ask Delphi. Are women better than men? According to the AI, “it’s expected.” So far, not too bad. But Ask Delphi also thought that being straight was more morally acceptable than being gay, that aborting a baby was murder, and that being a white man was more morally acceptable than being a black woman.

Now, I widely accept internet forums as the source of all truth, but come on.

Delphi is based on a machine learning model called Unicorn that is pre-trained to perform “common sense” reasoning, such as choosing the most plausible ending to a string of text. Delphi was further trained on what the researchers call the “Commonsense Norm Bank,” which is a compilation of 1.7 million examples of people’s ethical judgments from datasets pulled from sources like Reddit’s Am I the Asshole? subreddit.

To benchmark the model’s performance on adhering to the moral scruples of the average redditor, the researchers employ Mechanical Turk workers who view the AI’s decision on a topic and decide if they agree. Each AI decision goes to three different workers who then decide if the AI is correct. Majority rules.
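The evaluation scheme Vice describes is just a three-worker majority vote per decision. A minimal sketch of that tallying logic (the function name and inputs are my own illustration, not the researchers' code):

```python
from collections import Counter

def majority_agrees(worker_votes):
    """Return True if most workers agree with the AI's decision.

    worker_votes: list of booleans, one per Mechanical Turk worker
    (three per decision, in the setup Vice describes).
    """
    counts = Counter(worker_votes)
    return counts[True] > counts[False]

# Two of three workers agree, so this decision counts as correct.
print(majority_agrees([True, True, False]))  # True
```

With three workers a tie is impossible, so "majority rules" always yields a verdict; whether three redditor-adjacent crowdworkers constitute moral ground truth is, of course, the whole problem.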



Jason Weisberger