To hear it from Apple or Elon Musk, AI is our inevitable future, one that will fundamentally reshape life as we know it whether we like it or not. In the Silicon Valley calculus, what matters is getting there first and carving out the territory so that everyone relies on your tools for years to come. Speaking to Congress, someone like OpenAI CEO Sam Altman will at least mention the risks of artificial intelligence and the need for strong regulatory oversight, but in the meantime, the race is in full swing.
A host of corporate and individual actors are opting for hype, often with disastrous results. Multiple media outlets have been caught publishing AI-generated garbage under fictitious bylines; Google has cluttered its search results with faulty "AI Overview" content. Earlier this year, parents were outraged to learn that a pop-up Willy Wonka-themed family event in Scotland had been advertised to them with AI images that bore no resemblance to the gloomy warehouse setting they actually entered. Amid all this discontent, there seems to be a new marketing opportunity to be seized: positioning yourself as part of an anti-AI, pro-human backlash.
Beauty brand Dove, owned by multinational conglomerate Unilever, made headlines in April by pledging to "never use AI-generated content to represent real women in its advertising," per a company statement. Dove explained the choice as one that aligned with its successful and ongoing "Real Beauty" campaign, first launched in 2004, which saw professional models replaced by "ordinary" women in advertisements that focused more on consumers than on products. "The commitment to never use artificial intelligence in our communications is just one step," Dove chief marketing officer Alessandro Manfredi said in the press release. "We will not stop until beauty is a source of happiness, not stress, for every woman and girl."
But while Dove has taken a hard line against AI in order to protect a particular brand value around body image, other brands and advertising agencies are concerned about the broader reputational risk of relying on automated, generated content that bypasses human oversight. As Ad Age and other industry publications have reported, contracts between companies and marketing firms are now more likely to include strict restrictions on how AI is used and who must sign off on it. These provisions don't just help prevent low-quality AI-generated images or copy from embarrassing clients in the public arena; they can also limit reliance on AI in internal operations.
Meanwhile, creative social media platforms are betting on spaces meant to remain AI-free, and getting positive feedback from users as a result. Cara, a new artist portfolio site, is still in beta testing but has generated significant buzz among visual artists because of its proudly anti-AI ethos. "With the widespread use of generative AI, we decided to create a place that filters out AI-generated images so that those who want to find authentic creatives and artworks can easily do so," states the app's website. Cara also aims to protect its users from having their data scraped to train AI models, a condition automatically imposed on anyone who uploads their work to Meta's Instagram and Facebook platforms.
"Cara's mission began as a protest against unethical practices by AI companies that scrape the internet to train their generative AI models without consent or respect for people's rights or privacy," a company spokesperson tells Rolling Stone. "This core principle of being against such unethical practices, and the lack of legislation protecting artists and individuals, is what fueled our decision to refuse to host AI-generated imagery." They add that because AI tools are likely to become more common in creative industries, they "want to act and see legislation passed to protect artists and our intellectual property from current practices."
Older sites in this space are adding similar safeguards. PosterSpy, another portfolio site that helps poster artists network and secure paid commissions, has been a vibrant community since 2013, and founder Jack Woodhams wants to keep it a haven for human talent. "I have a pretty strict no-AI policy," he tells Rolling Stone. "The site exists to champion artists, and even though generative AI users think of themselves as artists, that couldn't be further from the truth. I've worked with real artists all over the world, from up-and-coming talent to household names, and to compare the blood, sweat, and tears these artists put into their work with a prompt typed into an AI generator is insulting," he says, to the real artists out there who have trained for years to become as skilled as they are today.
Part of the pressure to set these standards comes from customers themselves. Game publisher Wizards of the Coast, for example, has repeatedly faced fan outrage over its use of artificial intelligence in products for its Dungeons & Dragons and Magic: The Gathering franchises, despite the company's various promises to keep AI-generated images and writing out of those franchises and to commit to the "innovation, ingenuity, and hard work of talented people." When the company recently posted a job listing for a Principal AI Engineer, consumers again sounded the alarm, forcing Wizards of the Coast to clarify that it is experimenting with artificial intelligence in video game development, not its tabletop games. The back-and-forth demonstrates the dangers for brands trying to sidestep conversations about this technology.
It is also a measure of the vigilance required to avoid a complete AI takeover. On Reddit, which does not have a blanket policy against generative AI, it is up to community moderators to ban or remove such material as they see fit. The company has so far only said that anyone who wants to train AI models on its public data must agree to a formal business deal, with CEO Steve Huffman warning that it can report those who don't to the Federal Trade Commission. The publishing platform Medium has been somewhat more aggressive. "We're blocking OpenAI because they've given us a protocol to block them, and we'd basically block everyone if we had a way to do that," CEO Tony Stubblebine tells Rolling Stone.
At the same time, Stubblebine says, Medium relies on curators to stem the tide of AI "garbage" he sees washing across the internet in the age of generative AI, preventing any of it from being recommended to users. "There's no good tool for spotting AI-generated content right now," he says, "but people spot it right away." At this point, not even automated content filtering can be fully automated. "We used to delete a million spam posts a month," Stubblebine notes. "Now we're deleting 10 million." For him, it's a way to ensure that real writers get fair exposure and that subscribers can discover writing that speaks to them. "There's this huge gap between what someone will click on and what someone will be happy to have paid to read," says Stubblebine, and those who provide the latter may reap the benefits as the web fills up with the former. Even Google's YouTube has promised to add warning labels to videos that have been "altered" or "synthetically generated" with AI tools.
It's hard to guess whether institutional resistance to AI will continue to gather momentum. Amid high-profile AI failures and growing distrust of the technology, companies that effectively oppose it in one form or another seem well positioned to weather the hype cycle (not to mention the fallout of the bubble burst some observers predict). Then again, if AI continues to dominate the culture, they could be left catering to a shrinking demographic that insists on non-AI products and experiences. As with all strategic business decisions, it's too bad there isn't a bot that can predict the future.
From our partners at Rolling Stone: https://www.rollingstone.com/culture/culture-features/ai-image-brand-backlash-1235040371/