Tech companies testing morality software

In many ways, artificial intelligence (AI) runs much of our lives. Companies like YouTube, Twitter, and Meta have all implemented AI to help moderate their platforms, removing hate speech, misinformation, and more.

However, there is one small problem: this AI doesn't always do its job. Some of these systems are intentionally designed with bias, allowing hate speech to remain on their platforms, and researchers at the Allen Institute for AI are looking to change this. They have been working on a new AI designed to reflect common morals; its name is Delphi.

The researchers aim to eliminate the bias that small groups of designers introduce by instead drawing on data about society's morals at large.

Delphi analyzes thousands of moral dilemmas, all judged by real people, to identify the most common moral judgment for each. It then uses these answers to generate an appropriate judgment for new moral dilemmas. Other tech companies hope to build their own "Delphi" to remove bias and make their platforms safer.
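The core idea, crowdsourced judgments aggregated by majority vote and then matched against new dilemmas, can be sketched very loosely in a few lines of Python. Everything here (the sample data, the word-overlap matching, the function names) is an illustrative assumption; Delphi's actual model is a large neural network trained on far more data.

```python
from collections import Counter

# Hypothetical crowdsourced data: each dilemma maps to many people's judgments.
# (Illustrative only -- not Delphi's real training set.)
JUDGMENTS = {
    "ignoring a phone call from a friend": ["it's rude", "it's okay", "it's rude"],
    "helping a stranger carry groceries": ["it's good", "it's kind", "it's good"],
    "lying to avoid hurting someone": ["it's understandable", "it's wrong", "it's wrong"],
}

def majority_judgment(answers):
    """Return the most common judgment people gave for one dilemma."""
    return Counter(answers).most_common(1)[0][0]

def word_overlap(a, b):
    """Crude similarity between two dilemmas: fraction of shared words."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def judge(new_dilemma):
    """Answer a new dilemma with the majority judgment of the closest known one."""
    closest = max(JUDGMENTS, key=lambda d: word_overlap(d, new_dilemma))
    return majority_judgment(JUDGMENTS[closest])
```

For example, `judge("lying to a friend")` would match the known dilemma about lying and return its majority answer, "it's wrong". The real system generalizes far more flexibly, but the principle is the same: many human judgments in, one consensus judgment out.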

If you would like to test and expand Delphi's moral compass, head over to Allen AI's website.
