On the ninth day of the ‘12 Days of OpenAI,’ the company announced the launch of the o1 model in its API. The new model builds on the foundation of o1-preview, offering improved reasoning, reduced latency, and better overall performance. It also adds function calling, structured outputs, reasoning effort controls, developer messages, and vision inputs, giving developers finer control over how the model behaves and letting it analyze and understand images directly through the API.
One of the most notable additions is the reasoning effort parameter, which lets developers trade response quality against speed depending on the task at hand. Dialing the effort down saves time and cost on simpler requests, while dialing it up allocates more compute to harder problems.
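As a rough sketch of how this could look from the Python SDK (the effort value, developer message, and prompt below are illustrative, not taken from the announcement):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask o1 for a quick answer with minimal reasoning effort; "high" would
# instead allocate more thinking time to harder problems.
response = client.chat.completions.create(
    model="o1",
    reasoning_effort="low",  # accepted values: "low", "medium", "high"
    messages=[
        # o-series models take "developer" messages in place of system messages.
        {"role": "developer", "content": "You are a concise assistant."},
        {"role": "user", "content": "What is 17% of 230?"},
    ],
)
print(response.choices[0].message.content)
```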
In a live demo, OpenAI showcased the o1 model’s capabilities by identifying errors in a scanned form, calculating corrections, and returning the results as structured JSON. According to Brian Zhang, an engineer at OpenAI, this combination is particularly useful for tasks where the model must automatically extract data and adhere to a specific output format.
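A hedged sketch of that kind of workflow, again via the Python SDK, might combine a vision input with a JSON schema response format; the schema fields and image URL here are hypothetical stand-ins for whatever the demo form actually contained:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical schema describing a form-checking result.
schema = {
    "name": "form_check",
    "strict": True,
    "schema": {
        "type": "object",
        "properties": {
            "errors": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "field": {"type": "string"},
                        "reported_value": {"type": "string"},
                        "corrected_value": {"type": "string"},
                    },
                    "required": ["field", "reported_value", "corrected_value"],
                    "additionalProperties": False,
                },
            }
        },
        "required": ["errors"],
        "additionalProperties": False,
    },
}

response = client.chat.completions.create(
    model="o1",
    response_format={"type": "json_schema", "json_schema": schema},
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Check this scanned form for errors and propose corrections."},
                {"type": "image_url", "image_url": {"url": "https://example.com/scanned-form.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)  # JSON string matching the schema
```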
While o1 Pro is not yet available in the API, OpenAI has assured developers that it is actively under development. Alongside o1, OpenAI has also revamped its Realtime API to make AI-powered voice interactions faster and more affordable: WebRTC support brings low-latency voice integration across platforms, GPT-4o audio usage gets a 60% price cut, and new controls cover concurrent responses, input context management, and longer session times.
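For a sense of what a Realtime session looks like in code, here is a minimal sketch over the existing WebSocket transport (the new WebRTC path is aimed at browser and mobile clients, where a peer connection carries the audio instead of a raw socket); the event names follow OpenAI’s published Realtime documentation, but the API is in beta and details may shift:

```python
import asyncio
import json
import os

import websockets  # pip install websockets

async def main():
    url = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"
    headers = {
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "OpenAI-Beta": "realtime=v1",
    }
    # On websockets < 14 the keyword argument is `extra_headers` instead.
    async with websockets.connect(url, additional_headers=headers) as ws:
        # Configure the session: instructions, output modalities, etc.
        await ws.send(json.dumps({
            "type": "session.update",
            "session": {
                "instructions": "You are a friendly voice assistant.",
                "modalities": ["text"],
            },
        }))
        # Ask the model to generate a response for the current conversation.
        await ws.send(json.dumps({"type": "response.create"}))
        async for message in ws:
            event = json.loads(message)
            print(event.get("type"))
            if event.get("type") == "response.done":
                break

asyncio.run(main())
```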
OpenAI has also introduced a new fine-tuning method called preference fine-tuning, which lets developers tailor models using pairs of preferred and non-preferred responses and improve performance on specific use cases. The feature is currently available for GPT-4o and is priced in line with OpenAI’s supervised fine-tuning.
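A minimal sketch of launching such a job with the Python SDK might look like the following; the dataset path, the example record shown in the comment, and the `beta` hyperparameter are assumptions for illustration and should be checked against OpenAI’s fine-tuning reference:

```python
from openai import OpenAI

client = OpenAI()

# Each JSONL line pairs a preferred and a non-preferred completion for the
# same input, roughly like:
# {"input": {"messages": [{"role": "user", "content": "Summarize this ticket..."}]},
#  "preferred_output": [{"role": "assistant", "content": "Short, accurate summary."}],
#  "non_preferred_output": [{"role": "assistant", "content": "Rambling summary."}]}

# Upload the prepared preference dataset.
training_file = client.files.create(
    file=open("preferences.jsonl", "rb"),
    purpose="fine-tune",
)

# Launch a preference fine-tuning (DPO-style) job on GPT-4o.
job = client.fine_tuning.jobs.create(
    model="gpt-4o-2024-08-06",
    training_file=training_file.id,
    method={"type": "dpo", "dpo": {"hyperparameters": {"beta": 0.1}}},
)
print(job.id, job.status)
```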
To expand its developer ecosystem, OpenAI has released beta versions of official SDKs for Go and Java, making integration into backend systems more straightforward. And for those who missed OpenAI’s recent Dev Day conferences, the company has made the content available on YouTube.
Overall, the o1 model and the updates to the Realtime API and fine-tuning methods demonstrate OpenAI’s commitment to continuously improving and expanding its offerings for developers. With these new features and tools, developers can expect even more efficient and customizable AI-powered solutions.