Social Manipulation

AI could predict and shape human behaviour on an unprecedented scale.

Reduced By Practices

  • Global AI Governance: Encourages best practices and self-regulation, but relies on voluntary compliance without legal backing.
  • Public Awareness: Empowered, media-savvy populations are significantly harder to manipulate. However, scaling efforts to entire populations is a substantial challenge given diverse educational, cultural, and socioeconomic barriers.
  • Transparency: Requires accountability and auditing mechanisms for social media platforms.

Risk Score: High

AI systems designed to influence behaviour at scale could (and do) undermine democracy, free will, and individual autonomy.

Sources

  • The spread of true and false news online (Vosoughi, Roy, & Aral, 2018): Demonstrates that false news travels faster and reaches more people than true news on social platforms, indicating how AI-driven disinformation campaigns could exploit these vulnerabilities.

  • The science of fake news (Lazer et al., 2018): Examines the ecosystem of fake news creation and dissemination, emphasising the critical need for policy, research, and technological measures to counter AI-enabled misinformation. It also recognises how any such control measures sit at odds with the purest notions of "free speech".

  • Deepfakes: A Grounded Threat Assessment (Chesney & Citron, 2019): Explores the implications of deepfake technology, highlighting the urgent necessity for innovative detection tools and policy interventions (see below).

  • Nazi Propaganda (United States Holocaust Memorial Museum): Examines how the Nazi regime harnessed mass media—including radio broadcasts, film, and print—to shape public opinion, consolidate power, and foment anti-Semitic attitudes during World War II. (Fake content isn't a new problem.)

How This Is Already Happening

AI-Powered Targeted Advertising & Manipulation

  • Algorithms mine user data to deliver highly customised ads.
  • Personalised messaging exploits individual biases, vulnerabilities, or preferences.
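The bullets above can be sketched as a toy persona-matching loop. Everything here is hypothetical—the trait names, the user profiles, and the ad variants are invented for illustration. Real systems infer thousands of features from behavioural data and use learned models rather than hand-set weights, but the matching logic is the same in spirit: score each message against a user's inferred traits and show the one predicted to land hardest.

```python
# Hypothetical sketch: matching ad variants to inferred personality traits.
# All profiles, traits, and weights below are invented for illustration.

USERS = {
    "alice": {"anxiety": 0.9, "novelty": 0.2, "status": 0.4},
    "bob":   {"anxiety": 0.1, "novelty": 0.8, "status": 0.7},
}

# Each ad variant is written to appeal to a particular trait.
AD_VARIANTS = {
    "fear_framing":    {"anxiety": 1.0, "novelty": 0.0, "status": 0.0},
    "early_adopter":   {"anxiety": 0.0, "novelty": 1.0, "status": 0.3},
    "prestige_appeal": {"anxiety": 0.0, "novelty": 0.2, "status": 1.0},
}

def pick_variant(profile: dict) -> str:
    """Return the variant whose trait weights best match the user's profile."""
    def score(weights: dict) -> float:
        # Dot product of inferred traits and the variant's appeal weights.
        return sum(profile.get(trait, 0.0) * w for trait, w in weights.items())
    return max(AD_VARIANTS, key=lambda name: score(AD_VARIANTS[name]))

for user, profile in USERS.items():
    print(user, "->", pick_variant(profile))
```

The manipulative potential is in the asymmetry: the platform knows which framing each individual is most susceptible to, while the individual never sees the variants shown to others.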

Real-Life Examples:

  • The Cambridge Analytica scandal (revealed in 2018): Data harvested from millions of Facebook users was used to create highly customised political ads during the 2016 U.S. election cycle and beyond, influencing voter perceptions.
  • Political microtargeting in multiple elections: Campaigns worldwide have leveraged platforms such as Meta (Facebook) and Google to deliver personalised messages designed to sway opinion on sensitive issues.

AI-Generated Disinformation & Deepfakes

  • Sophisticated tools create realistic but false content that distorts public perception.
  • Deepfake videos or audio can undermine trust in legitimate information sources.

Real-Life Examples:

  • Fake Zelensky video (2022): A deepfake video urged Ukrainian forces to surrender, illustrating how synthetic media can be weaponised during international conflicts.
  • Fake celebrity endorsements: AI-generated videos and images circulate online, falsely attributing product or political endorsements to well-known public figures. In one 2022 example, a deepfake video spread on social media featuring a convincing impersonation of Elon Musk endorsing a fraudulent cryptocurrency platform.
  • Deepfake CEO phone scam (2019): In a widely reported case, criminals used an AI-generated voice to impersonate a chief executive, convincing a subordinate to transfer $243,000 to a fraudulent account.

Predictive AI Systems Controlling Social Behavior

  • Platforms use behavioral models to shape user experiences, potentially pushing them to adopt certain viewpoints or behaviors.
  • Data-driven predictions about habits and preferences can be used to modify or influence choices.
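The feedback loop those bullets describe can be illustrated with a toy engagement-ranked feed. The items, topics, and ranking rule below are invented for illustration; production recommenders use learned models over many signals rather than raw click counts, but the reinforcement dynamic is the same: each interaction makes similar content more likely to surface, which invites more of the same interactions.

```python
# Hypothetical sketch: an engagement-driven feed as a feedback loop.
# Items, topics, and the ranking rule are invented for illustration.
from collections import Counter

ITEMS = [
    ("v1", "fitness"), ("v2", "politics"), ("v3", "fitness"),
    ("v4", "cooking"), ("v5", "politics"), ("v6", "politics"),
]

def rank_feed(items, topic_clicks: Counter):
    """Rank items by how often the user engaged with each item's topic."""
    # Counter returns 0 for unseen topics, so fresh topics sink to the bottom.
    return sorted(items, key=lambda item: topic_clicks[item[1]], reverse=True)

clicks = Counter()
clicks["politics"] += 1          # a single click on one politics video...
feed = rank_feed(ITEMS, clicks)  # ...and politics now tops the whole feed
print([topic for _, topic in feed[:3]])
```

Even this trivial rule shows the narrowing effect: one engagement signal reorders the entire feed toward that topic, and every further click deepens the skew.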

Real-Life Examples:

  • TikTok’s recommendation algorithm: Known for its powerful engagement-driven feed, it can rapidly shape users’ content consumption, potentially reinforcing certain narratives or trends. See The Verge’s article on TikTok’s algorithm and eating-disorder content.

  • China’s social credit initiatives: Although not purely about content manipulation, these systems use data and behavioral metrics to encourage or discourage particular actions, effectively guiding social behavior.