OpenAI's GPT‑4o model breaks new ground in real-time AI interaction
OpenAI has unveiled its latest achievement: the GPT-4o model, capable of processing audio, images, and text in real time. Remarkably, the model reacts to audio input with near-human speed.
8:34 AM EDT, May 14, 2024
Artificial intelligence enthusiasts eagerly anticipated the OpenAI Spring Update, a presentation by the creators of ChatGPT. The mood leading up to the event was heightened by industry buzz about a potential AI-powered internet search engine. This time, however, the spotlight fell on the latest model.
GPT-4o operates in real-time
OpenAI introduced the GPT-4o model, enabling more natural interactions. According to the company, GPT-4o responds to audio input in as little as 232 milliseconds, with an average of 320 milliseconds, which is comparable to human response time in conversation. The model matches GPT-4 Turbo's performance on English text and performs even better in other languages.
OpenAI claims that the new GPT-4o model is also significantly better at interpreting images and audio than its previously available models. So, what can this new tool do? One of the most impressive demonstrations was a recording in which GPT-4o was instructed to count from one to ten.
GPT-4o's reaction to commands, such as changing the pace, was instantaneous and occurred in real-time. Another fascinating demonstration involved GPT-4o acting as a Spanish language teacher and analyzing objects viewed through a camera.
When can we expect access to GPT-4o? OpenAI has announced that the model's text and image capabilities are already available in ChatGPT. The new model is accessible to free users, while subscribers benefit from message limits up to five times higher. OpenAI also plans to roll out an alpha version of GPT-4o's new voice mode for ChatGPT Plus users in the coming weeks.
It's worth remembering that OpenAI isn't just about ChatGPT. The forthcoming Sora model will enable users to generate videos, a feature eagerly awaited by many creators.