Google Unveils Gemini AI to Compete with OpenAI and Microsoft’s GPT-4

In a recent development, Google introduced its latest artificial intelligence model, Gemini. The new language model, Gemini 1.0, comes in three variants: Gemini Nano, designed for on-device tasks; Gemini Pro, which covers a broad range of applications; and Gemini Ultra, Google’s most capable model, built for highly complex tasks.

At the unveiling, Google emphasized Gemini Ultra’s superior performance over OpenAI’s current flagship, GPT-4. On 30 of 32 widely used benchmarks for language model capabilities, spanning reading comprehension, mathematical problem-solving, Python coding, and image analysis, Gemini Ultra outperformed GPT-4. The margins varied: some benchmarks showed only fractional differences, while others gave Gemini Ultra a lead of up to ten percentage points.

One of Gemini Ultra’s standout results came on the Massive Multitask Language Understanding (MMLU) benchmark, which poses multiple-choice problems across 57 subjects, including math, physics, medicine, law, and ethics. Gemini Ultra scored 90.0 percent, remarkably edging out human experts, who score 89.8 percent on the same test.
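
For context, MMLU is scored as plain accuracy over multiple-choice questions. The sketch below shows how such a score can be computed per subject and overall; it is an illustration under assumptions, not Google’s evaluation harness, and `ask_model` is a hypothetical stand-in for any language model call.

```python
# Illustrative sketch (not any vendor's actual harness): scoring a
# multiple-choice benchmark like MMLU as simple accuracy.
# `ask_model` is a hypothetical callable that takes a prompt string
# and returns the model's answer as text.
from collections import Counter

def score_mmlu(questions, ask_model):
    """Return (overall_accuracy, per_subject_accuracy).

    Each item in `questions` is a dict with keys 'subject', 'question',
    'choices' (four answer strings), and 'answer' (one of 'A'-'D').
    """
    seen = Counter()
    correct = Counter()
    for q in questions:
        # Render the question in the usual lettered multiple-choice format.
        options = "\n".join(
            f"{letter}. {text}" for letter, text in zip("ABCD", q["choices"])
        )
        prompt = f"{q['question']}\n{options}\nAnswer:"
        prediction = ask_model(prompt).strip()[:1]  # expect a single letter
        seen[q["subject"]] += 1
        if prediction == q["answer"]:
            correct[q["subject"]] += 1
    overall = sum(correct.values()) / sum(seen.values())
    per_subject = {s: correct[s] / n for s, n in seen.items()}
    return overall, per_subject
```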

The rollout of Gemini is happening incrementally. Gemini Pro became publicly available last week, with Google’s chatbot Bard now running a fine-tuned version of the model. Gemini Nano powers several features on Google’s Pixel 8 Pro smartphone. Gemini Ultra, meanwhile, is still undergoing safety testing and is currently accessible only to a select group of developers, partners, and safety and responsibility experts; Google plans to make it publicly available through Bard Advanced early next year.

In response to Google’s claims, Microsoft re-ran the same tests with GPT-4 using a prompting technique called Medprompt, which its researchers introduced in November. Medprompt combines several strategies for prompting language models to improve their results. With it, GPT-4 beat Gemini Ultra’s scores on several benchmarks, including MMLU, where GPT-4 with Medprompt reached 90.10 percent.
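
Medprompt’s published ingredients include dynamic few-shot example selection, self-generated chain-of-thought, and choice-shuffling ensembling. The sketch below illustrates only the last of these, under assumptions: the options are reshuffled for each of several model calls and the majority answer wins. The function and `ask_model` are hypothetical names for illustration, not Microsoft’s code.

```python
# Minimal sketch of one Medprompt ingredient, choice-shuffle ensembling:
# present the answer options in a fresh random order on each of several
# model calls, map each picked letter back to the original option, and
# return the majority vote. `ask_model` is a hypothetical model call;
# Medprompt as published also uses dynamic few-shot selection and
# self-generated chain-of-thought, both omitted here.
import random
from collections import Counter

def choice_shuffle_ensemble(question, choices, ask_model, n_votes=5, seed=0):
    rng = random.Random(seed)
    letters = "ABCD"[: len(choices)]
    votes = Counter()  # votes per original choice index
    for _ in range(n_votes):
        order = list(range(len(choices)))
        rng.shuffle(order)  # shuffle which choice appears under which letter
        options = "\n".join(
            f"{letter}. {choices[i]}" for letter, i in zip(letters, order)
        )
        pick = ask_model(f"{question}\n{options}\nAnswer:").strip()[:1]
        if pick and pick in letters:
            # Map the picked letter back to the original choice index.
            votes[order[letters.index(pick)]] += 1
    if not votes:
        return None
    return choices[votes.most_common(1)[0][0]]
```

Shuffling the options across calls counteracts a known position bias in language models, so the ensemble’s majority vote is more robust than any single ordering.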

Which model ultimately comes out ahead remains to be seen. With Gemini and GPT-4 trading benchmark wins, the race for the top spot in AI is far from decided.