Google has announced its next-generation AI model, Gemini 1.5.
According to Google, Gemini 1.5 delivers a significant boost in performance and marks a notable shift in development strategy. Thanks to its new Mixture-of-Experts (MoE) architecture, Gemini 1.5 is more efficient to both train and serve.
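To give a sense of the general idea behind a Mixture-of-Experts layer, here is a minimal, illustrative sketch in Python. This is not Google's Gemini implementation; it simply shows the core mechanism: a learned router activates only a few "expert" sub-networks per token, so most of the model's parameters stay idle for any given input, which is what makes training and serving cheaper.

```python
# Toy Mixture-of-Experts layer (illustrative only, NOT Gemini's actual code).
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class ToyMoELayer:
    def __init__(self, d_model=16, d_hidden=32, num_experts=8, top_k=2):
        self.top_k = top_k
        # Router: produces one score per expert for each token.
        self.router_w = rng.normal(size=(d_model, num_experts))
        # Each expert is a tiny two-layer feed-forward network.
        self.experts = [
            (rng.normal(size=(d_model, d_hidden)), rng.normal(size=(d_hidden, d_model)))
            for _ in range(num_experts)
        ]

    def __call__(self, tokens):  # tokens: (n_tokens, d_model)
        scores = softmax(tokens @ self.router_w)  # (n_tokens, num_experts)
        out = np.zeros_like(tokens)
        for i, (tok, s) in enumerate(zip(tokens, scores)):
            top = np.argsort(s)[-self.top_k:]  # keep only the k best experts
            for e in top:
                w1, w2 = self.experts[e]
                # Weight each active expert's output by its renormalized router score.
                out[i] += (s[e] / s[top].sum()) * (np.maximum(tok @ w1, 0) @ w2)
        return out

layer = ToyMoELayer()
print(layer(rng.normal(size=(4, 16))).shape)  # (4, 16)
```

Only 2 of the 8 toy experts run per token here; a production MoE model applies the same principle at a much larger scale.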
The initial release, Gemini 1.5 Pro, is available for early testing. Positioned as a mid-size multimodal model, it is optimized to scale across a broad range of tasks and introduces an experimental breakthrough capability in long-context understanding.
Gemini 1.5 Pro ships with a standard 128,000-token context window. A select group of developers and enterprise customers can try an extended context window of up to 1 million tokens through AI Studio and Vertex AI in private preview. This means Gemini 1.5 can process vast amounts of information in one go, such as 1 hour of video, 11 hours of audio, codebases with over 30,000 lines of code, or over 700,000 words. The model outperforms its predecessor, Gemini 1.0 Pro, on 87% of benchmarks and performs at a broadly similar level to 1.0 Ultra.
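For developers who get preview access, a typical way to experiment is through the Google AI Python SDK. The sketch below is an assumption-laden example, not official documentation: it assumes the `google-generativeai` package and a model name like `"gemini-1.5-pro-latest"`, both of which may differ depending on your preview access; the input file is purely hypothetical.

```python
# Hedged sketch: calling Gemini 1.5 Pro via the Google AI Python SDK.
# Assumes the `google-generativeai` package and preview access; the exact
# model name used here ("gemini-1.5-pro-latest") may vary.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

model = genai.GenerativeModel("gemini-1.5-pro-latest")

# With a long context window, an entire codebase dump or a long transcript
# could be passed in a single prompt.
with open("large_codebase_dump.txt") as f:  # hypothetical input file
    big_context = f.read()

response = model.generate_content(
    [big_context, "Summarize the architecture of this codebase."]
)
print(response.text)
```

The same request pattern applies whether the prompt is a few hundred tokens or, with the extended window, hundreds of thousands.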
This is another groundbreaking capability, and we look forward to exploring it further once it becomes available to the public.