With OpenAI considering launching its own social media platform and Perplexity AI recently exploring a TikTok acquisition, it's clear that AI companies are increasingly interested in harnessing the power of social media.
It makes sense: social platforms provide an endless stream of real human interaction data, exactly the raw material AI models need to improve.
However, I have concerns.
Social media isn't just a hub for genuine conversations and meaningful interactions; it's also where misinformation, conspiracy theories, and toxic behaviors thrive.
These darker voices are often amplified because controversy drives engagement, so the loudest content is rarely the most accurate or the most valuable.
The question we need to consider seriously is this:
Do we want our future AI systems shaped by this environment?
AI learns from the content it is trained on, and if that content is dominated by the loudest, often most harmful, voices, we risk building models that reinforce negative behaviors rather than productive, trustworthy interactions.
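To make that risk concrete, here is a toy sketch (all data, names, and scores are hypothetical, not any company's actual pipeline): a training corpus assembled by engagement alone looks very different from one passed through even a crude quality filter.

```python
# Toy illustration: what ends up in a training corpus depends entirely
# on how we select from the raw feed. All posts and scores are made up.
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    engagement: int   # likes, shares, replies
    toxicity: float   # 0.0 (benign) to 1.0 (toxic), e.g. from a classifier


feed = [
    Post("Thoughtful explainer on a new research result", engagement=120, toxicity=0.02),
    Post("Outrage bait aimed at a rival community", engagement=9500, toxicity=0.81),
    Post("Conspiracy thread with confident, false claims", engagement=7200, toxicity=0.65),
    Post("Helpful answer to a beginner's question", engagement=340, toxicity=0.01),
]

# Selecting by engagement alone rewards the loudest voices...
by_engagement = sorted(feed, key=lambda p: p.engagement, reverse=True)[:2]

# ...while even a simple quality gate (here, a toxicity threshold) changes the mix.
curated = [p for p in feed if p.toxicity < 0.3]

print("Engagement-ranked corpus:", [p.text for p in by_engagement])
print("Curated corpus:          ", [p.text for p in curated])
```

The point isn't the threshold; it's that what a model learns from is a curation decision, not an inevitability.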
AI companies clearly see value in social media's vast data, but it's worth pausing to ask whether the content their models would learn from reflects the best of human communication or simply the loudest.
After all, the quality of our AI is directly tied to the quality of the data it consumes.