In the age of deepfake videos, hyper-partisan news, and Photoshopped images, it’s difficult to know what’s real and what’s not. Artificial Intelligence (AI) makes separating real from fake nearly impossible.
It’s not just tricksters and ne’er-do-wells using technology to dupe people online. Companies are using AI and creative storytellers to develop completely made-up, digitally generated “people” designed to become widely followed social media influencers.
These digital beings have detailed backstories and meticulously created personas with one purpose: to attract and connect with users to spread viral content through follows, likes and shares.
Why use these digital beings? According to The New York Times, these “people” are less regulated than actual human beings, and the Federal Trade Commission has taken no specific action to address their use. Additionally, they don’t need time off, don’t need to do multiple takes, and aren’t in danger of unwanted physical changes like weight gain or graying hair.
Concerns arise over the stereotypes these beings may reinforce, the information they convey, and the emotional relationships unknowing consumers seem to form with them.
Imagine following a fashion influencer on social media, liking her photos, feeling invested in her story, and listening to her music, only to discover she isn’t real.
Companies, whether they use these virtual beings or not, should be concerned about how much trust consumers will extend to any brand after discovering they were duped by a few.