With OpenAI considering launching its own social media platform and Perplexity AI recently exploring a TikTok acquisition, it's clear that AI companies are increasingly interested in harnessing the power of social media.

It makes perfect sense—social platforms provide an endless stream of real human interactions and data, exactly what AI models need to improve.

However, I have concerns.

Social media isn't just a hub for genuine conversations and meaningful interactions; it's also where misinformation, conspiracy theories, and toxic behaviors thrive. 

These darker voices are often amplified because controversy drives engagement, so the loudest content is rarely the best or most accurate.

The question we need to consider seriously is this:

Do we want our future AI systems shaped by this environment?

AI learns from the content it encounters. If that content is dominated by the loudest, and often most harmful, voices, we risk training models that reinforce negative behaviors rather than productive, trustworthy interactions.

AI companies clearly see value in social media's vast data, but it's worth pausing to ask whether the content their models learn from reflects the best of human communication or simply the loudest.

After all, the quality of our AI is directly tied to the quality of the data it consumes.
