
Meet Black Forest Labs, the Startup Powering Elon Musk’s No-Frills AI Image Generator

Elon Musk’s Grok rolled out a new AI image-generation feature Tuesday night that, like the chatbot itself, comes with very few safeguards. That means you can generate fake images of Donald Trump smoking marijuana on the Joe Rogan show, for example, and post them directly to X. But it isn’t Elon Musk’s AI company that is powering the craze; rather, a new startup, Black Forest Labs, is the organization behind the controversial feature.

The partnership between the two was revealed on Tuesday, when xAI announced it is working with Black Forest Labs to power Grok’s image generator using its FLUX.1 model. An AI image and video startup that launched on August 1, Black Forest Labs appears sympathetic to Musk’s vision for Grok as an “anti-woke chatbot,” without the strict guardrails found in OpenAI’s DALL-E or Google’s Imagen. The social media site is already flooded with outrageous images from the new feature.

Black Forest Labs is based in Germany and recently announced $31 million in seed funding led by Andreessen Horowitz, according to a press release. Other notable investors include Y Combinator CEO Garry Tan and former Oculus CEO Brendan Iribe. The startup’s co-founders, Robin Rombach, Patrick Esser, and Andreas Blattmann, are former Stability AI researchers who helped create the Stable Diffusion models.

According to Artificial Analysis, Black Forest Labs’ FLUX.1 models outperform AI image generators from Midjourney and OpenAI in terms of quality, at least as ranked by users in its image arena.

The startup says it is “making our models available to a broad audience,” with open-source AI image generation models on Hugging Face and GitHub. The company says it plans to create a text-to-video model soon, too.

Black Forest Labs did not immediately respond to TechCrunch’s request for comment.

In its launch announcement, the company says it wants to “increase confidence in the safety of these models,” but some might say the flood of AI-generated images on X on Wednesday did the opposite. Many images that users were able to create with Grok and Black Forest Labs’ tool, such as Pikachu holding an assault rifle, could not be recreated with Google’s or OpenAI’s image generators. There’s little doubt that copyrighted images were used to train the model.

That’s the point

This lack of safeguards is likely a major reason Musk chose this collaborator. Musk has made it clear that he believes safeguards actually make AI models less safe. “The danger of training AI to be woke, in other words to lie, is deadly,” Musk said in a 2022 tweet.

Black Forest Labs board director Anjney Midha posted a series of side-by-side comparisons on X between images generated on day one by Google Gemini and by Grok’s FLUX.1 collaboration. The thread highlights Google Gemini’s well-documented problems with creating historically accurate images of people, particularly its tendency to inject racial diversity into images where it is inappropriate.

“I’m glad @ibab and his team took this seriously and made the right choice,” Midha said in a tweet, referring to FLUX.1’s apparent avoidance of this problem (and tagging the account of xAI lead researcher Igor Babuschkin).

Google apologized for the error and paused Gemini’s ability to generate images of people in February. To this day, the company still does not allow Gemini to generate images of people.

A blast of misinformation

This general lack of safeguards could cause problems for Musk. X drew criticism when explicit AI-generated deepfake images of Taylor Swift went viral on the platform. Beyond that incident, Grok generates hallucinated headlines that appear to users on X on an almost weekly basis.

Just last week, five secretaries of state urged X to stop spreading misinformation about Kamala Harris. Earlier this month, Musk reshared a video that used AI to clone Harris’ voice, making it appear as if the vice president had admitted to being a “diversity hire.”

Musk seems intent on letting misinformation like this permeate the platform. By allowing users to post Grok’s AI images, which appear to carry no watermarks, directly to the platform, he has essentially opened a fire hose of misinformation pointed at everyone’s X news feed.

Written by Anika Begay
