Today, AI technologies already exacerbate existing structural inequality, including along lines of sex and race.
Systems that control critical decisions in our lives — who lives and dies, who is granted economic inclusion — behave capriciously.
Ethnic cleansings have been exacerbated by algorithmic optimization.
We may see another jobless recovery in 2024, as jobs automated away in 2022–2023 under market pressure fail to return.
There’s an oft-hidden, massive ecological impact from the raw materials and energy that go into building and training AI systems.
Critical infrastructure is increasingly unstable, unusable, and vulnerable to disruption and attack.
We’re losing fundamental rights to privacy, the sanctity of our personal data, and autonomy itself.
All of these harms are real, today.
They hurt the companies building the AI systems that create these outcomes.
They hurt all of us.
In Path #2, we align the incentives, ownership, and returns of AI towards dignified and sustainable global development.
AI technologies are designed, tested, and deployed with fairness as an accountable success criterion, not an afterthought.
Critical systems can provide clear, acceptable explanations for their decisions and predictions.
Our algorithms, small and large, are beneficent and well-aligned with our notions of human wants and human rights.
The benefits of labor displacement are weighed critically against its tangible harms, and care is taken to protect the importance and dignity of good work.
AI development is harmonious with our efforts to reverse the climate catastrophe.
The very infrastructure of our digital world is secure, stable, predictable, and robust to failure or attack.
Our fundamental rights to privacy, the sanctity of our personal data, and human autonomy itself are not only protected, but enhanced by the presence of AI in our lives.