Exciting News from OpenAI DevDay

In case you missed it, yesterday was OpenAI's DevDay, and Sam Altman did not disappoint in his keynote address. From the industry leader in AI, we're looking forward to: a new version of the flagship model, new API capabilities, an app-store-like experience, lower pricing, and more. If you're as excited about AI as we are, the keynote is worth a quick watch. That said, we've made it easy for you by pulling out some of our favorite highlights here.

GPT-4 Turbo

When GPT-4 launched, it was hailed as a huge improvement over the already powerful GPT-3.5. GPT-4 brought us better performance, multi-modality, a much larger parameter count, and more.

GPT-4 Turbo continues this rapid improvement in several ways.

  • 128K Context Window - This massive context window amounts to roughly 300 pages of text in a single prompt. A larger prompt still means a higher price, but the fact that this capability now exists dramatically expands the range of problems that can be solved with the AI model.
  • JSON Mode - It is very common to prompt the model and expect JSON as a return value, which is then passed off to some function or API expecting that specific format. This has previously been possible by instructing the AI to respond in JSON, but every so often the AI would make a mistake. It is now possible to force the AI to respond in valid JSON using the response_format parameter in the chat/completions API.
  • Assistants API
    • Threads - One of the biggest challenges in building an assistant with OpenAI has been having to re-send the entire conversation history every time you want to append a message to it. This is a thing of the past with the new threads API.
    • Automatic Tools / Functions Usage - When allowed, the AI will now make automatic use of tools (such as retrievals and code interpreter) and other functions in order to solve prompts.
    • Retrieval - The new retrieval APIs make it easier to make the AI model aware of contextual information by allowing it to retrieve relevant files to solve problems.
  • Parallel Function Calling - The AI will now return multiple function calls in a single response if appropriate. Paired with the automatic functions usage, this is very powerful.
  • Knowledge Cutoff - 2021 was a long time ago by technology standards. Recognizing this, OpenAI has moved the knowledge cutoff up to April 2023.
  • Pricing - Astonishingly, with all of these improvements, GPT-4 Turbo actually costs less than GPT-4: prompt (input) tokens are cheaper by a factor of 3 and completion (output) tokens by a factor of 2. GPT-3.5 Turbo 16K is also having its prices reduced by the same ratio.
  • Etc.
    • Fine tuning is now available for GPT-4 Turbo
    • ChatGPT now has GPT-4 Turbo available
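
To make JSON mode concrete, here is a minimal sketch in Python. The invoice schema and helper names are our own invented example; only the response_format field and the gpt-4-1106-preview model name come from the announcement. With the official openai SDK, you would pass the kwargs below to client.chat.completions.create(...).

```python
import json


def invoice_request_kwargs(text):
    """Build chat/completions kwargs that force a JSON reply (JSON mode)."""
    return {
        "model": "gpt-4-1106-preview",
        # This is the new parameter: the model is now guaranteed to emit valid JSON.
        "response_format": {"type": "json_object"},
        "messages": [
            {"role": "system", "content": "Reply with JSON keys 'vendor' and 'total'."},
            {"role": "user", "content": text},
        ],
    }


def parse_invoice(raw_reply):
    """Parse the model's reply; json.loads no longer needs a retry loop."""
    data = json.loads(raw_reply)
    return data["vendor"], float(data["total"])
```

Because the output is guaranteed to be parseable JSON, the defensive "re-prompt on malformed output" logic many of us wrote for GPT-4 can go away.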
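
Parallel function calling means a single model response can carry several tool calls at once, each with a name and JSON-encoded arguments. The sketch below shows one way to dispatch such a batch; the tool names and the exact dict shape are our own illustrative assumptions modeled on the announced API, not code from OpenAI.

```python
import json

# Hypothetical local tools the model is allowed to call.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
    "get_time": lambda city: f"12:00 in {city}",
}


def dispatch(tool_calls):
    """Run every function call returned in one model response.

    Each entry mirrors the API shape: {"id": ..., "function": {"name": ..., "arguments": "<json>"}}.
    Returns one output per call, keyed by the tool_call id so the results
    can be sent back to the model.
    """
    results = []
    for call in tool_calls:
        fn = TOOLS[call["function"]["name"]]
        args = json.loads(call["function"]["arguments"])
        results.append({"tool_call_id": call["id"], "output": fn(**args)})
    return results
```

Previously each round trip could trigger at most one function call, so multi-step tasks needed multiple model invocations; batching them like this cuts both latency and token cost.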

New Non-GPT APIs

While GPT-4 Turbo may have been the star of the show from an API perspective, Sam also announced some other exciting new APIs.

  • DALL-E 3 is now available for image generation via API
  • Whisper V3 (speech recognition) has launched, with APIs coming soon
  • Text To Speech (TTS) has launched with six preset voices for generating speech
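
As a taste of the new TTS endpoint, here is a small Python sketch that builds a request payload. The voice names ("alloy", etc.) reflect our understanding of the six presets, and the helper is our own; with the official openai SDK you would pass these kwargs to client.audio.speech.create(...).

```python
# The six preset voices as we understand them from the launch.
PRESET_VOICES = {"alloy", "echo", "fable", "onyx", "nova", "shimmer"}


def speech_request(text, voice="alloy"):
    """Build kwargs for the new text-to-speech endpoint."""
    if voice not in PRESET_VOICES:
        raise ValueError(f"unknown voice: {voice!r}")
    return {"model": "tts-1", "voice": voice, "input": text}
```

The call returns audio rather than text, so in practice you would stream the response body to a file such as speech.mp3.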

GPTs

GPTs are tailored ChatGPT model instances that exist for a specific purpose. Users can now build their own GPTs with expanded knowledge and custom actions, including API calls out to third-party systems. These GPTs can be for private use, organizational use, or shared with anyone who has the link. Coming soon: a GPT marketplace / app-store-type experience with revenue sharing.

At roughly 22:00 in the keynote, Sam demonstrates two GPTs: a lesson planner from Code.org and an interactive designer from Canva.

Other Notes

Not to be lost in all of the excitement, there were a few other noteworthy items that we jotted down while watching the keynote.

  • Copyright Shield - putting their money where their mouth is, OpenAI has stepped up and promised to defend business customers against copyright infringement claims
  • Custom Models - while resources are limited and the price will be steep, enterprise customers can now work with OpenAI engineers to train custom, special-purpose models
  • Rate Limit Expansion Requests - it is now possible to request expansions on rate limits via OpenAI's portal

Overall, very exciting stuff from the OpenAI team. The sentiment among many in tech in response to these announcements has been to drop everything and start building. The Essembi team shares that sentiment: we started playing with the gpt-4-1106-preview model for our natural language querying almost immediately after the keynote ended.

What were your favorite parts of the keynote? What are you planning to build with this exciting new technology? Let us know via @essembi on X and Essembi on LinkedIn.
