OpenAI has pushed back the launch of its Voice Mode feature for ChatGPT, citing the need to address technical challenges and ensure a high-quality user experience. The feature was originally slated for release in June but has now been shifted to July. The company points to its commitment to safety and user experience as the main reasons for the delay.
The Voice Mode feature, aimed at offering a lifelike voice interaction experience, was originally planned for an initial release to a limited number of ChatGPT Plus users. However, OpenAI has opted to delay the launch to further refine the feature and ensure it meets the company's standards before making it available to a wider audience.
OpenAI provided an update on X, outlining the specific areas that need refinement before the feature is released.
How Will ChatGPT’s Voice Mode Help Users
"We had intended to begin the alpha rollout to a limited group of ChatGPT Plus users in late June, but we need an extra month to achieve our launch standards. For instance, we’re enhancing the model’s capability to detect and reject certain content. Additionally, we’re focusing on improving the user experience and preparing our infrastructure to scale effectively to millions of users while maintaining real-time response capabilities," OpenAI explained.
The company also said that, in line with its deployment strategy, it will start the alpha phase with a select group of users to collect feedback and will expand the rollout based on what it learns. “We intend for all Plus users to access it by the fall, although the exact schedule will depend on achieving our strict safety and reliability standards. Additionally, we are developing the new video and screen-sharing features we demonstrated separately and will keep you informed on the timeline," the company added.
"ChatGPT’s advanced Voice Mode can comprehend and respond with emotions and non-verbal cues, bringing us closer to real-time, natural conversations with AI. Our mission is to introduce these new experiences thoughtfully."
The upcoming audio features will enable users to have more dynamic and natural conversations with ChatGPT. Users will be able to speak to ChatGPT and receive responses in real time, and they will also be able to interrupt the AI mid-conversation.
OpenAI Ropes In Reddit To Train Its AI On Posts
OpenAI has signed a deal with Reddit for access to its data API, giving it real-time access to the platform's content. This means discussions from the site can be surfaced within ChatGPT and other new products. The agreement is similar to the one Reddit signed with Google earlier in 2024, which was reportedly worth $60 million.
The deal will also “enable Reddit to bring new AI-powered features to Redditors and mods” and let Reddit leverage OpenAI’s large language models to build applications. In addition, OpenAI has signed on as an advertising partner on Reddit.