Google Open-Sources Gemma, Two Small Models Built on Gemini Research

IBL News | San Diego

Google yesterday released Gemma 2B and 7B, two lightweight, pre-trained open-source AI models suited to smaller tasks such as simple chatbots and summarization.

The release also gives developers access to the research and technology used to create Google's closed Gemini models.

The models are available via Kaggle, Hugging Face, Nvidia’s NeMo, and Google’s Vertex AI, and were designed with Google’s AI Principles at the forefront.

Gemma offers multi-framework support through Keras 3.0, native PyTorch, JAX, and Hugging Face Transformers.

Developers and researchers can work with Gemma through free access on Kaggle, a free tier for Colab notebooks, and $300 in credits for first-time Google Cloud users. Researchers can also apply for Google Cloud credits of up to $500,000 to accelerate their projects.

Each size of Gemma is available at ai.google.dev/gemma.

Google is also providing toolchains for inference and supervised fine-tuning (SFT) across all major frameworks: JAX, PyTorch, and TensorFlow through native Keras 3.0.

Google’s Gemini comes in several sizes: Gemini Nano, Gemini Pro, and Gemini Ultra.

Last week, Google announced Gemini 1.5, a faster model intended for business users and developers.