Mark Zuckerberg bet tens of billions of dollars on the “metaverse,” only to see the idea of an immersive virtual reality social network scoffed at at every turn. When all is said and done, its promise to add legs to users' digital avatars (previously rendered as floating torsos) may be what most people remember about the ill-conceived project, if they think about it at all.
But while the metaverse failed to take off, the AI frenzy grew: 2023 was full of speculation about the promise of tools like OpenAI's text-based ChatGPT and image-generation models like Midjourney and Stable Diffusion, not to mention the people abusing the same technology to spread misinformation. Meta itself moved on from Zuckerberg's awkward VR demos (tourist selfies in front of a low-res Eiffel Tower) to chilling partnership announcements licensing the voices of Kendall Jenner, MrBeast, Snoop Dogg, and Paris Hilton for the company's new set of AI “assistants.”
On Thursday, Zuckerberg took the hype for Meta's AI game even higher with a video update shared on both Instagram and Threads. Looking sleep-deprived, the CEO announced a plan that “brings Meta's two AI research efforts closer together to support our long-term goals of building general intelligence, open sourcing it responsibly, and making it available and useful to everyone in all of our daily lives.” The reorganization includes combining the company's Fundamental AI Research (FAIR) division with the GenAI product group in order to speed up user access to AI features, which, Zuckerberg pointed out, also requires a huge investment in graphics processing units (GPUs), the chips that provide the computing power for complex artificial intelligence models. He also said that Meta is currently training Llama 3, the latest version of its generative large language model. (And in an interview with The Verge, he acknowledged that he is aggressively courting researchers and engineers to work on all of this.)
But what does this latest push in Meta's AI-updated mission really mean? Experts are wary of Zuckerberg's utopian idea of contributing to the greater good by open-sourcing the promised “artificial general intelligence” (i.e., making the model's code publicly available for modification and redistribution), and indeed question whether Meta can accomplish such a breakthrough at all. For now, an AGI remains a purely theoretical autonomous system capable of teaching itself and surpassing human intelligence.
“Honestly, the 'general intelligence' part is as nebulous as the 'metaverse,'” David Thiel, a big data architect and chief technologist at the Stanford Internet Observatory, tells Rolling Stone. He finds the open source commitment somewhat disingenuous as well, as it “gives them an argument that they're being as transparent as possible about the technology.” But, Thiel notes, “any models released publicly will be a small subset of what they actually use internally.”
Sarah Myers West, managing director of the AI Now Institute, a research nonprofit, says Zuckerberg's announcement “clearly reads like a PR tactic aimed at garnering goodwill while obfuscating the potential privacy-infringing sprint to stay competitive in the AI game.” She, too, finds the framing of Meta's goals as ethical less than convincing. “Their game here is not benevolence, it's profit,” she says. “Meta has really pushed the boundaries of what 'open source' means in the context of artificial intelligence, beyond the point where those words have any meaning (you could argue that the same goes for the AGI debate). So far, the AI models Meta has released provide little insight or transparency into key aspects of how its systems are built, despite this major marketing and lobbying effort.”
“I think a lot of this hinges on what Meta, or Mark, means by 'responsible' in 'responsible open source,'” says Nate Sharadin, a professor at the University of Hong Kong and a fellow at the Center for AI Safety. A language model such as Llama (which has been touted as an open source model, but criticized by some researchers as quite restrictive) can be used in harmful ways, Sharadin says, but its dangers are mitigated because the model itself lacks “reasoning, planning, memory” and related cognitive properties. However, these are the abilities considered necessary for the next generation of AI models, “and are certainly what you would expect in 'fully general' intelligence,” he says. “I'm not sure why Meta thinks a fully general intelligent model can be responsibly open-sourced.”
As for what this hypothetical AGI would look like, Vincent Conitzer, director of the Foundations of Cooperative AI Lab at Carnegie Mellon University and head of technical AI engagement at Oxford University's Institute for Ethics in AI, speculates that Meta could start with something like Llama and expand from there. “I imagine they'll focus on large language models and probably go more in the multimodal direction, meaning they'll make these systems capable of handling images, audio, and video,” he says, much like Google's Gemini, which launched in December. (Competitor ChatGPT can now also “see, hear, and speak,” as OpenAI puts it.) Conitzer adds that while there are risks to the open release of this technology, the alternative of developing these models solely “behind the closed doors of profit-driven companies” raises issues of its own.
“As a society, we really don't have a good handle on what exactly we should be most concerned about (although it seems like there are a lot of things we should be concerned about) or where we want these developments to go, never mind having the regulatory and other tools we need to steer them in that direction,” he says. “We really need action on this, because in the meantime, the technology is racing ahead, and so is its deployment in the world.”
The other issue, of course, is privacy, an area where Meta has a checkered history. “They have access to huge amounts of highly sensitive information about us, but we just don't know if or how they're using it as they invest in building models like Llama 2 and 3,” says West. “Meta has proven time and time again that it cannot be trusted with user data, even before you get to the endemic problems of data leakage in LLMs. I don't know why we would look the other way when they throw 'open source' and 'AGI' into the mix.” Sharadin says the company's privacy policy, including its AI development terms, “allows them to collect a vast array of user data for the purposes of 'Providing and improving our Meta Products.'” Even if you opt out of allowing Meta to use your Facebook information in this way (by submitting a little-known and rarely used form), “there's no way to verify the removal of the data from the training corpus,” he says.
Conitzer observes that we face a future where AI systems like Meta's will have “increasingly detailed models of people,” and says it may require us to rethink our approach to online privacy. “Maybe in the past I shared some things publicly, and I thought each of those things individually wasn't harmful to share,” he says. “But I didn't realize that AI could make connections between the various things I posted, and the things other people posted, and that it would learn something about me that I really didn't want out there.”
In short, then, Zuckerberg's enthusiasm for Meta's latest strategy in the increasingly savage AI wars, which has completely replaced his rhapsodizing about the glories of the metaverse, seems to portend even more invasive surveillance. And it's not at all clear what kind of AGI product Meta could end up with, if it succeeds in creating this mythical “general intelligence” at all. As the metaverse saga has proven, big pivots by tech giants don't always amount to real or meaningful innovation.
And even if the AI bubble bursts too, Zuckerberg is sure to chase whatever hot trend comes next.
from our partners at https://www.rollingstone.com/culture/culture-features/mark-zuckerberg-meta-ai-metaverse-1234950139/