Artificial Intelligence Risk (Draft)
A sequence examining societal-level risks posed by Artificial Intelligence (AI).
Outcomes
- Understand the main threats society faces as it develops AI.
- Understand which threats can be managed, and which perhaps cannot.
Threats
Loss Of Diversity
A single AI system dominates globally, leading to catastrophic consequences if it fails or contains errors.
Superintelligence With Malicious Intent
An advanced AI could actively work against human interests, whether deliberately programmed to do so or as an emergent behaviour.
Synthetic Intelligence Rivalry
AI systems may develop independent agency and compete economically, socially, or politically with human institutions.
Emergent Behaviour
AI develops unforeseen behaviours, capabilities, or self-replication that could lead to unpredictable consequences.
Social Manipulation
AI could predict and shape human behaviour on an unprecedented scale.
Unintended Cascading Failures
AI interacting with critical systems (finance, infrastructure, etc.) may trigger global-scale unintended consequences.
Loss Of Human Control
AI systems operating autonomously with minimal human oversight can lead to scenarios where we cannot override or re-align them with human values.
Practices
Ecosystem Diversity
Encouraging the development of multiple, independent AI models instead of relying on a single dominant system.
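One way to picture this practice: decisions routed through several independently developed models, with disagreement treated as a signal to escalate rather than silently trusting any single system. A minimal sketch in Python; the model stubs and voting scheme are illustrative assumptions, not a prescribed design:

```python
from collections import Counter
from typing import Callable, List

def diverse_decision(models: List[Callable[[str], str]], query: str) -> str:
    """Query several independent models and return the majority answer.
    Raises when no clear majority exists, forcing human escalation
    instead of deferring to one dominant system."""
    votes = Counter(m(query) for m in models)
    answer, count = votes.most_common(1)[0]
    if count <= len(models) // 2:
        raise RuntimeError("No majority among independent models; escalate")
    return answer

# Hypothetical stand-ins for independently built models.
models = [lambda q: "approve", lambda q: "approve", lambda q: "deny"]
print(diverse_decision(models, "loan application"))  # approve (2 of 3 votes)
```

The point of the sketch is structural: a correlated error must appear in a majority of independent systems before it becomes an action.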
Global AI Governance
International cooperation is necessary to prevent AI firms from evading national regulations by relocating to jurisdictions with lax oversight.
Human In The Loop
Maintaining consistent human oversight of critical AI systems, so that high-stakes actions require explicit approval.
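A minimal sketch of the pattern: low-risk actions run automatically, while anything above a risk threshold is gated behind a human approver. The `Action` type, the 0-to-1 risk scale, and the threshold are hypothetical choices for illustration:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    name: str
    risk: float  # assumed scale: 0.0 (benign) to 1.0 (critical)

def execute(action: Action, approve: Callable[[Action], bool],
            threshold: float = 0.5) -> str:
    """Run low-risk actions automatically; route anything at or above
    the risk threshold to a human approver before execution."""
    if action.risk >= threshold and not approve(action):
        return f"blocked: {action.name}"
    return f"executed: {action.name}"

# A human reviewer stub that rejects everything escalated to it.
print(execute(Action("send newsletter", 0.1), approve=lambda a: False))
print(execute(Action("shut down grid sector", 0.9), approve=lambda a: False))
```

The design choice worth noting: the default on disagreement is inaction, so a missing or distracted human blocks the action rather than waving it through.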
Interpretability
Developing tools to analyse AI decision-making processes and detect emergent behaviours before they become risks.
Kill Switch
Fail-safe systems capable of shutting down or isolating AI processes if they exhibit dangerous behaviours.
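A kill switch is only a fail-safe if the running process actually respects it. One common shape is a cooperative halt flag that both an external operator and an internal detector can trip. A minimal sketch, with a trivial stand-in for the dangerous-behaviour detector:

```python
import threading

class KillSwitch:
    """Cooperative fail-safe: workers check `halted` each cycle, and an
    operator or watchdog can trip the switch at any time."""
    def __init__(self):
        self._event = threading.Event()

    def trip(self):
        self._event.set()

    @property
    def halted(self) -> bool:
        return self._event.is_set()

def worker(switch: KillSwitch, log: list):
    while not switch.halted:
        log.append("step")   # the AI process's unit of work
        if len(log) >= 3:    # stand-in for a dangerous-behaviour detector
            switch.trip()

switch, log = KillSwitch(), []
t = threading.Thread(target=worker, args=(switch, log))
t.start()
t.join(timeout=1)
print(len(log), switch.halted)  # 3 True
```

The caveat the sketch makes visible: a cooperative switch assumes the process still runs the check, which is exactly why it pairs with replication control above.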
Multi-Stakeholder Oversight
Oversight shared among governments, industry, academia, and civil society, so that no single actor controls how AI systems are audited and deployed.
National AI Regulation
Governments can implement policies that ensure AI-driven firms remain accountable to human oversight.
Public Awareness
Equip citizens with media literacy skills to spot deepfakes and manipulation attempts.
Replication Control
Replication control becomes relevant when an AI system can duplicate itself—or be duplicated—beyond the reach of any central authority.
Transparency
Requiring AI developers to publish model architectures, data sources, generated data and decision-making rationales.
Monitoring
Continuous observation and tracking of an AI system in operation, for example its performance, security, or availability, so that anomalies are detected early.
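A minimal sketch of the idea: record a metric (here an assumed error rate) over a rolling window and raise an alert when the recent average drifts past a threshold. Window size and threshold are illustrative values:

```python
from collections import deque
from statistics import mean

class Monitor:
    """Rolling-window monitor: records a metric and flags when the
    recent average exceeds a configured threshold."""
    def __init__(self, window: int, threshold: float):
        self.values = deque(maxlen=window)
        self.threshold = threshold

    def record(self, value: float) -> bool:
        """Record one observation; return True if an alert should fire."""
        self.values.append(value)
        return mean(self.values) > self.threshold

m = Monitor(window=5, threshold=0.2)
# Error rate drifts upward; the monitor flags the drift as it happens.
alerts = [m.record(v) for v in [0.1, 0.1, 0.5, 0.6, 0.7]]
print(alerts)  # [False, False, True, True, True]
```

Averaging over a window rather than alerting on single readings trades a little latency for robustness against one-off noise.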
Redundancy
Ensuring backup systems are in place so that service continues when a primary system fails.
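The practice reduces to a simple control structure: try each redundant service in order and only fail once every copy has failed. A minimal sketch; the `primary`/`backup` functions are hypothetical stand-ins for real services:

```python
from typing import Callable, Sequence

def with_failover(services: Sequence[Callable[[], str]]) -> str:
    """Try each service in order, falling back to the next on failure.
    Raise only when every redundant copy has failed."""
    errors = []
    for service in services:
        try:
            return service()
        except Exception as exc:
            errors.append(exc)
    raise RuntimeError(f"all {len(services)} redundant services failed: {errors}")

def primary() -> str:
    raise ConnectionError("primary unreachable")

def backup() -> str:
    return "served by backup"

print(with_failover([primary, backup]))  # served by backup
```

For the failover to matter against the Loss Of Diversity threat above, the backups should be independently built, not copies sharing the primary's defects.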