After the rollout of its newest model, GPT-4o mini, OpenAI CEO Sam Altman said that ChatGPT needs a "naming scheme revamp." Since its launch, OpenAI has used the same naming scheme for ChatGPT and its successive versions.
On July 18, the company announced a new model, which it described as "our most cost-efficient small model." The CEO announced it in a post on his X account, writing: "15 cents per million input tokens, 60 cents per million output tokens, 82% MMLU, and fast. Most importantly, we think people will really, really like using the new model."
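The quoted pricing translates directly into per-request costs. A minimal sketch, using only the rates from Altman's post (the function name and example token counts are illustrative, not from OpenAI):

```python
def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated GPT-4o mini API cost in US dollars, per the quoted rates."""
    INPUT_RATE = 0.15 / 1_000_000   # 15 cents per million input tokens
    OUTPUT_RATE = 0.60 / 1_000_000  # 60 cents per million output tokens
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A request with 10,000 input tokens and 2,000 output tokens
# costs roughly a quarter of a cent:
print(f"${estimate_cost(10_000, 2_000):.4f}")
```

At these rates, even a million-token workload stays under a dollar, which is the point of a "most cost-efficient small model."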
While several users liked the new product, one user said that the names of the ChatGPT models, which have grown longer as the company's lineup has expanded, needed a makeover.
GPT-4o Mini Is Among OpenAI’s Most Efficient Models
GPT-4o mini is one of the most efficient AI models the company has produced. It is small and offers minimal latency, the time it takes to display a response. OpenAI says it will eventually support text, image, audio, and video inputs and outputs, in addition to the text and vision capabilities the API supports today.
"The model has a context window of 128K tokens, supports up to 16K output tokens per request, and has knowledge up to October 2023. Thanks to the improved tokenizer shared with GPT-4o, handling non-English text is now even more cost-effective," it added in the blog post.
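The two quoted limits interact: output tokens count against the same 128K context window as the prompt, so reserving room for a longer reply shrinks the space available for input. A small sketch of that budgeting, using only the figures from the blog post (the function name is illustrative):

```python
CONTEXT_WINDOW = 128_000  # total tokens per request (quoted figure)
MAX_OUTPUT = 16_000       # maximum output tokens per request (quoted figure)

def max_input_tokens(reserved_output: int = MAX_OUTPUT) -> int:
    """Tokens left for the prompt after reserving room for the reply."""
    if not 0 <= reserved_output <= MAX_OUTPUT:
        raise ValueError("reserved_output must be within the model's output limit")
    return CONTEXT_WINDOW - reserved_output

print(max_input_tokens())       # reserving the full 16K reply: 112000
print(max_input_tokens(4_000))  # reserving a short reply: 124000
```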
OpenAI said that, in line with its Preparedness Framework, it used both automated and human evaluations for safety. To identify potential risks, OpenAI also had the model assessed by 70 outside experts from various disciplines.
Who Can Use the GPT-4o Mini Model?
ChatGPT users on the Free, Plus, and Team plans will get GPT-4o Mini in place of GPT-3.5 Turbo, with Enterprise users gaining access next week. That means GPT-3.5 will no longer be an option within ChatGPT, though it will still be available to developers through the API if they prefer not to switch to GPT-4o Mini. Godement said GPT-3.5 will eventually be retired from the API.
The new model also brings support for text and vision in the API, and the company says it will soon handle multimodal inputs and outputs such as audio and video. In practice, these capabilities could enable more capable virtual assistants, for instance ones that can understand a user's travel itinerary and make suggestions.
The new model managed to achieve an 82 percent score on Measuring Massive Multitask Language Understanding (MMLU), a benchmark exam of roughly 16,000 multiple-choice questions across 57 academic subjects. When MMLU was first launched in 2020, most models scored poorly on it, which was by design: models had become too good at earlier benchmark exams.