Musk's Grok Adds AI Image Generation With Few Safeguards, Powered by Black Forest Labs
AI | Zaker Adham | 15 August 2024

Summary
On Tuesday night, Elon Musk’s Grok unveiled a new AI image-generation feature that, much like its chatbot counterpart, operates with minimal safeguards. This allows users to create and upload fabricated images, such as Donald Trump smoking marijuana on the Joe Rogan show, directly to the X platform. However, the real power behind this feature comes from a new startup, Black Forest Labs.
The partnership came to light when xAI announced it was working with Black Forest Labs to power Grok's image generator with the FLUX.1 model. Black Forest Labs, an AI image and video startup launched on August 1, fits Musk's vision for Grok as an "anti-woke chatbot," eschewing the strict guardrails of OpenAI's DALL-E or Google's Imagen. X is already inundated with outrageous images from the new feature.
Black Forest Labs, based in Germany, recently emerged from stealth mode with $31 million in seed funding led by Andreessen Horowitz. Other prominent investors include Y Combinator CEO Garry Tan and former Oculus CEO Brendan Iribe. The startup’s co-founders, Robin Rombach, Patrick Esser, and Andreas Blattmann, previously contributed to the development of Stability AI’s Stable Diffusion models.
According to Artificial Analysis, Black Forest Labs' FLUX.1 models outperform Midjourney's and OpenAI's image generators in quality, as rated by users in the firm's image arena, where outputs are compared head to head.
The startup is committed to making its models widely accessible, offering open-source AI image-generation models on Hugging Face and GitHub. It also plans to develop a text-to-video model soon.
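For readers who want to try the open-weight release themselves, the following is a minimal sketch, assuming the Hugging Face diffusers library's FluxPipeline and the publicly listed black-forest-labs/FLUX.1-schnell checkpoint; exact parameter names and defaults may vary between library versions.

```python
import torch
from diffusers import FluxPipeline

# Load the open-weight FLUX.1 "schnell" checkpoint from Hugging Face.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
    torch_dtype=torch.bfloat16,
)
# Offload model components to CPU when idle to reduce GPU memory use.
pipe.enable_model_cpu_offload()

prompt = "a watercolor painting of a black forest at dawn"
image = pipe(
    prompt,
    guidance_scale=0.0,        # schnell variant is distilled; no guidance needed
    num_inference_steps=4,     # few-step generation is the point of this variant
    generator=torch.Generator("cpu").manual_seed(0),  # reproducible output
).images[0]
image.save("flux_sample.png")
```

The prompt, seed, and output filename above are purely illustrative; the checkpoint's model card is the authoritative reference for recommended settings.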
In its launch announcement, Black Forest Labs stated its goal to "enhance trust in the safety of these models." However, the flood of AI-generated images on X on Wednesday suggests otherwise. Users have created images with Grok and Black Forest Labs' tool that cannot be replicated with Google's or OpenAI's generators, such as Pikachu holding an assault rifle, raising concerns about the use of copyrighted imagery in training these models.
This lack of safeguards is likely a key reason Musk chose this collaborator. Musk has previously stated that he believes safeguards make AI models less safe. “The danger of training AI to be woke — in other words, lie — is deadly,” Musk tweeted in 2022.
Black Forest Labs board director Anjney Midha shared a series of comparisons on X between images generated by Google Gemini and by Grok's FLUX collaboration. The thread highlighted Google Gemini's trouble producing historically accurate images of people, such as inserting anachronistic racial diversity into depictions of historical scenes.
Due to these issues, Google apologized and disabled Gemini’s ability to generate images of people in February. As of now, Gemini still cannot generate images of people.
This general lack of safeguards could pose problems for Musk. The X platform faced criticism when AI-generated explicit deepfake images of Taylor Swift went viral. Additionally, Grok frequently generates misleading headlines that appear on X.
Recently, five secretaries of state urged X to stop spreading misinformation about Kamala Harris. Earlier this month, Musk reshared a video using AI to clone Harris’ voice, falsely suggesting she admitted to being a “diversity hire.”
By allowing users to post Grok’s AI images, which lack watermarks, directly on the platform, Musk has essentially opened a firehose of misinformation aimed at everyone’s X newsfeed.