Phantom Threats Depend on Human Directionality

Nature published a pressing and prescient warning about the dangers of a supposedly neutral tool: artificial intelligence. What is the threat of a neutral tool?

Of course, the threat comes in the form of the uses or utility functions provided to the AI by human beings, either as individuals or collectives.

Nonetheless, Benkler reported on the ways in which private industry continues to shape the ethics and, thus, the utility functions of a powerful and sophisticated hammer: artificial intelligence.

May 10, 2019, is the due date for letters of intent to the United States National Science Foundation for a new funding program entitled Fairness in Artificial Intelligence.

This follows the European Commission’s “Ethics Guidelines for Trustworthy AI,” which an academic member of the expert group was willing to describe as “ethics washing,” given how thoroughly industry dominated the content.

Google formed an AI ethics board in March, which dissolved within a week amid controversy. Even earlier, in January, Facebook invested 7.5 million USD in an ethics and AI centre at the Technical University of Munich, Germany.

What does this mean for the future direction of AI and its ethical frameworks? It means the blueprints are being laid down by the very industry they are meant to govern.

The input from industry, according to Benkler, remains crucial for the development of the future of AI. However, industry should not monopolize the power to set the ethics.

Both governments and industry should be transparent and publicly accountable in developing the ethical frameworks for AI.

Benkler stated, “Algorithmic-decision systems touch every corner of our lives: medical treatments and insurance; mortgages and transportation; policing, bail and parole; newsfeeds and political and commercial advertising. Because algorithms are trained on existing data that reflect social inequalities, they risk perpetuating systemic injustice unless people consciously design countervailing measures.”

He provided an example of artificially intelligent systems used to predict recidivism, which differentially affect black and white communities, or those of European and African heritage.

Similarly, this could affect policing and the evaluation of job applicants. When such algorithms and systems operate inside the black box of an artificial intelligence, they may simply reproduce societal biases in ways that are “invisible and unaccountable.”
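To make that mechanism concrete, here is a minimal, purely illustrative sketch in Python (not from Benkler’s article) of one way such a disparity can be surfaced: comparing a risk model’s false positive rate across two groups. The function name and all data are invented for illustration.

```python
# Illustrative sketch only (hypothetical data, not from Benkler's article):
# measuring how often a risk model wrongly flags low-risk people in each group.

def false_positive_rate(predictions, labels):
    """Share of truly low-risk cases (label 0) flagged as high risk (prediction 1)."""
    flags_on_negatives = [p for p, y in zip(predictions, labels) if y == 0]
    return sum(flags_on_negatives) / len(flags_on_negatives) if flags_on_negatives else 0.0

# Hypothetical model outputs and outcomes for two groups (1 = high risk / reoffended).
group_a_pred, group_a_true = [1, 1, 0, 1, 0, 0], [0, 1, 0, 0, 0, 1]
group_b_pred, group_b_true = [0, 1, 0, 0, 0, 1], [0, 1, 0, 0, 0, 1]

fpr_a = false_positive_rate(group_a_pred, group_a_true)
fpr_b = false_positive_rate(group_b_pred, group_b_true)

# A large gap means one group is wrongly labelled "high risk" far more often,
# even when overall accuracy looks similar for both groups.
print(f"False positive rate, group A: {fpr_a:.2f}")
print(f"False positive rate, group B: {fpr_b:.2f}")
print(f"Disparity: {abs(fpr_a - fpr_b):.2f}")
```

The point of the toy comparison is that such disparities only become visible when someone deliberately checks for them; inside an unaudited black box, they remain exactly what Benkler warns about.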

“When designed for profit-making alone, algorithms necessarily diverge from the public interest — information asymmetries, bargaining power and externalities pervade these markets,” Benkler stated. “For example, Facebook and YouTube profit from people staying on their sites and by offering advertisers technology to deliver precisely targeted messages. That could turn out to be illegal or dangerous.”

More in the reference…

References

Benkler, Y. (2019, May 1). Don’t let industry write the rules for AI. Nature. Retrieved from https://www.nature.com/articles/d41586-019-01413-1?utm_source=twt_nnc&utm_medium=social&utm_campaign=naturenews&sf211946232=1

Photo by Gregorius Maximillian on Unsplash
