Ethan Mollick, a professor of management at the Wharton School, has a simple benchmark for tracking the progress of AI’s image generation capabilities: “Otter on a plane using wifi.”
Mollick uses that prompt to create images of … an otter using Wi-Fi on an airplane. Here are his results from a generative AI image tool around November 2022.
And here is his result in August 2024.
AI image and video creation have come a long way in a short time. With access to the right tools and resources, you can manufacture a video in hours (or even minutes) that would’ve otherwise taken days with a creative team. AI can help almost anybody create polished visual content that feels real — even if it isn’t.
Of course, AI is only a tool. And like any tool, it reflects the intent of the person wielding it.
For every aerial otter enthusiast, there’s someone else creating deepfakes of presidential candidates. And it’s not only visuals: Models can generate persuasive articles in bulk, clone human voices, and create entire fake social media accounts. Misinformation at scale used to require serious operations, time, and money. Now, anyone with a decent internet connection can manufacture the truth.
In a world where AI can quickly generate polished content at scale, social media becomes the perfect delivery system. And AI’s impact on social media can’t be ignored.
Misinformation is no longer just low-effort memes lost in the dark corners of the web. Slick, personalized, emotionally charged AI content is misinformation’s future. To understand the implications, let’s dive deeper into social media misinformation and AI’s role on both sides of the misinformation fence.
Before I begin, I should note how I’ll use the term “misinformation.” Technically speaking, this issue has a few different flavors: