Last week, Google launched its new AI, or rather its new large language model, called Gemini. The Gemini 1.0 model comes in three versions: Gemini Nano is designed for on-device tasks, Gemini Pro covers a wider range of tasks, and Gemini Ultra is Google’s largest language model, built for highly complex tasks.
Google highlighted that Gemini Ultra outperformed OpenAI’s GPT-4 on 30 of 32 widely used benchmark tests. The benchmarks cover areas such as reading comprehension, math, Python coding, and image analysis. The gap between the two models ranges from a few tenths of a percentage point to around ten percentage points.
Gemini Ultra is also the first language model to beat human experts on the Massive Multitask Language Understanding (MMLU) benchmark, scoring 90.0 percent against the human experts’ 89.8 percent.
The launch of Gemini will be gradual. Gemini Pro is now available to the public, and Gemini Nano is available in Google’s Pixel 8 Pro. Gemini Ultra is still undergoing security testing and is being shared with developers and partners. However, Google plans to make Gemini Ultra available to the public next year.
Microsoft countered Google’s claims by rerunning the same tests with GPT-4 using slightly modified prompts based on its Medprompt technique. With Medprompt, GPT-4 managed to outperform Gemini Ultra on several of the tests. The battle for the AI throne is far from over.
This article was translated from Swedish to English and originally appeared on pcforalla.se.