The UN is calling for a moratorium on artificial intelligence systems “that pose a serious risk to human rights” until adequate research and regulation have been done. It published a report today amid concerns that countries and businesses are adopting AI without due diligence. High commissioner for human rights Michelle Bachelet said that AI can be a “force for good” but stressed that it can still have a profoundly negative, “even catastrophic” effect if used without consideration.
The report analyses the ways AI can affect human rights, including privacy, health, and education as well as freedom of movement, expression, and assembly.
“Artificial intelligence now reaches into almost every corner of our physical and mental lives and even emotional states,” Bachelet writes. “AI systems are used to determine who gets public services, decide who has a chance to be recruited for a job, and of course they affect what information people see and can share online.”
Bachelet’s report says that because of its rapid growth, finding out how AI collects, stores, and uses data is “one of the most urgent human rights questions we face.”
“The risk of discrimination linked to AI-driven decisions—decisions that can change, define, or damage human lives—is all too real,” the report continues. “This is why there needs to be systematic assessment and monitoring of the effects of AI systems to identify and mitigate human rights risks.”
The UN also calls for significantly more transparency from companies and countries that develop and use AI systems. It’s important to note that the UN is not calling for an outright ban—no one got spooked by their latest viewing of The Terminator—just regulation and greater transparency.
Bachelet says, “We cannot afford to continue playing catch-up regarding AI—allowing its use with limited or no boundaries or oversight, and dealing with the almost inevitable human rights consequences after the fact. The power of AI to serve people is undeniable, but so is AI’s ability to feed human rights violations at an enormous scale with virtually no visibility. Action is needed now to put human rights guardrails on the use of AI, for the good of all of us.”
You can read the press release and full report on the UN’s website.
What effect this could have on videogames and similar technology is unclear, though the UN is obviously not talking about machine learning tech like Nvidia’s DLSS or other AI upscaling. One possible way this could affect videogames is if a company develops an AI system that learns how specific people play games and then uses that data to present targeted microtransactions, ads, or other prompts to spend money—similar to the method Activision patented in 2017. Whether the UN would consider that a violation of your rights is not something I can answer, but in the end, the report’s call for regulation probably doesn’t impact videogames in a meaningful way for players—even if you wish it called for improving the AI in games like Cyberpunk 2077.