Google has introduced two new stable versions of its Gemini 1.5 API models, Gemini 1.5 Pro and Gemini 1.5 Flash, offering developers enhanced performance at a significantly reduced cost. Released on September 24, these production-ready models feature major improvements in various areas, including code generation, math, reasoning, and video analysis.
The Gemini 1.5 Pro model, in particular, has seen a price reduction of over 50%, while also offering three times higher rate limits and lower latency compared to earlier versions. These updates aim to make advanced AI technology more accessible to developers by lowering financial barriers.
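For developers who want to try the updated releases, the sketch below shows one way to target them through Google's google-generativeai Python SDK. The "-002" model identifier and the placeholder API key are assumptions for illustration; confirm the exact stable model names available to your project before pinning one.

```python
# Minimal sketch: calling an updated Gemini 1.5 stable model via the
# google-generativeai Python SDK. The "-002" identifier is an assumption
# based on Google's versioned naming; verify it against the models
# available in your own project.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder; use your real key

# Pinning a versioned release keeps behavior stable even after Google
# promotes newer models behind the floating "gemini-1.5-pro" alias.
model = genai.GenerativeModel("gemini-1.5-pro-002")

response = model.generate_content(
    "Explain in two sentences why long-context models help with video analysis."
)
print(response.text)
```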
Both Gemini 1.5 models bring substantial advancements in accuracy, reducing model hallucinations and improving factuality, instruction following, and multilingual understanding across 102 languages. They also gain stronger capabilities for SQL generation, audio understanding, and document comprehension. At the same time, Google has shortened the models' default summarization output lengths, while developers of chat-based products retain options to elicit longer, more conversational responses.
Starting October 1, Google will also lower API prices for prompts containing fewer than 128,000 tokens, with reductions of 64% for input tokens, 52% for output tokens, and 64% for cached tokens. In addition, rate limits for Gemini 1.5 Flash and Gemini 1.5 Pro have been raised to 2,000 and 1,000 requests per minute (RPM) respectively, giving developers building with these models more headroom to scale.
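Because the reduced prices apply only below the 128,000-token threshold, it can be useful to check a prompt's size before sending it. The following sketch uses the SDK's count_tokens call for that check; the threshold constant simply mirrors the figure quoted above, and the model name is again an assumed stable identifier.

```python
# Minimal sketch: checking whether a prompt stays under the 128,000-token
# threshold that the reduced input/output/cached-token prices apply to.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder; use your real key
model = genai.GenerativeModel("gemini-1.5-flash-002")  # assumed stable name

PRICE_BREAK_TOKENS = 128_000  # threshold quoted in the pricing update

prompt = "..."  # your (potentially long) prompt text
token_count = model.count_tokens(prompt).total_tokens

if token_count < PRICE_BREAK_TOKENS:
    print(f"{token_count} tokens: eligible for the reduced sub-128K rates")
else:
    print(f"{token_count} tokens: billed at long-context rates")
```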