Elon Musk’s xAI Released an Early Version of Its LLM

IBL News | New York

Elon Musk’s xAI start-up announced this weekend its AI model, named Grok, which is built with real-time knowledge of the world through its access to the X (formerly Twitter) platform.

“Grok is intended to answer almost anything and, far harder, even suggest what questions to ask,” according to the company.

Other characteristics of this tool are:

• “Designed to answer questions with a bit of wit and has a rebellious streak, so please don’t use it if you hate humor!”

• “It will also answer spicy questions that are rejected by most other AI systems.”

• “Research and innovation-focused: We want Grok to serve as a powerful research assistant for anyone, helping them to quickly access relevant information, process data, and come up with new ideas.”

The LLM engine powering Grok is Grok-1, developed over the last four months. It’s built on a custom training and inference stack based on Kubernetes, Rust, and JAX.
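xAI has not published Grok-1’s training code; as a purely illustrative sketch of the kind of training step JAX (one of the technologies the company names) makes possible, here is a toy example with a hypothetical single-layer model and made-up shapes, not xAI’s actual stack:

```python
import jax
import jax.numpy as jnp

def loss_fn(params, x, y):
    # A single linear layer stands in for a real transformer block (illustrative only).
    logits = x @ params["w"] + params["b"]
    # Softmax cross-entropy against integer class labels.
    logp = jax.nn.log_softmax(logits)
    return -jnp.mean(logp[jnp.arange(y.shape[0]), y])

@jax.jit  # JIT-compile the whole update for speed, as JAX encourages
def train_step(params, x, y, lr=0.1):
    grads = jax.grad(loss_fn)(params, x, y)
    # One step of plain gradient descent over the parameter tree.
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

key = jax.random.PRNGKey(0)
params = {"w": jax.random.normal(key, (8, 4)) * 0.1, "b": jnp.zeros(4)}
x = jax.random.normal(key, (16, 8))
y = jax.random.randint(key, (16,), 0, 4)

before = loss_fn(params, x, y)
params = train_step(params, x, y)
after = loss_fn(params, x, y)
```

A production stack like the one xAI describes would distribute steps like this across many accelerators, with Kubernetes handling orchestration; the example above only shows the core compute-gradient-and-update pattern.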

“This early model approaches LLaMA 2 (70B) capabilities on standard LM benchmarks but uses only half of its training resources,” said the xAI development team.

In terms of capabilities, Grok-1 surpassed GPT-3.5 and Inflection-1. “It is only surpassed by models that were trained with a significantly larger amount of training data and compute resources like GPT-4. This showcases the rapid progress we are making at xAI in training LLMs with exceptional efficiency.”

xAI opened a waitlist to let a limited number of users in the United States try out its Grok prototype and provide feedback.