July 27, 2024

Google announces Gemma 2, a 27B-parameter addition to its open model family, launching in June

On Tuesday, at its annual Google I/O 2024 developer conference, Google announced a number of new additions to Gemma, its family of open (but not open source) models comparable to Meta’s Llama and Mistral’s open models.

The launch making headlines here is Gemma 2, the next generation of Google’s open-weight Gemma models, which will arrive in June with a 27-billion-parameter model.

Already available is PaliGemma, a pre-trained Gemma variant that Google describes as “the first vision language model in the Gemma family,” aimed at image captioning, image labeling, and visual Q&A use cases.

Until now, the standard Gemma models, which launched earlier this year, have been available only in 2-billion-parameter and 7-billion-parameter versions, making this new 27-billion-parameter model a substantial step up.

In a briefing ahead of Tuesday’s announcement, Google Labs vice president Josh Woodward said the Gemma models have been downloaded “millions of times” across the various services where they are available. He emphasized that Google has optimized the 27-billion-parameter model to run on Nvidia’s next-generation GPUs, a single Google Cloud TPU host, and the managed Vertex AI service.

However, size doesn’t matter if the model isn’t good. Google hasn’t shared much data about Gemma 2 yet, so we’ll have to see how it performs once it gets into the hands of developers. “We’re already seeing some great quality. It’s already outperforming models twice its size,” Woodward said.

Read more about Google I/O 2024 on TechCrunch
