OpenAI Announces that GPT-4 Turbo with Vision Is Now Available In The API

IBL News | New York

OpenAI announced on its X account that its GPT-4 Turbo with Vision model is now “generally available” through its API. Experts describe it as a significant upgrade to the API for the powerful GPT-4 Turbo LLM.

GPT-4 Turbo with Vision requests can now also use JSON mode and function calling. These features let the model return a structured JSON snippet that developers can use to automate actions in connected apps, such as making a purchase or sending an email.

“Previously, developers had to use separate models for text and images, but now, with just one API call, the model can analyze images and apply reasoning,” said OpenAI.
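A single request combining an image and a text prompt might look like the sketch below. This is an illustrative example only: the image URL and prompt are placeholders, and the actual API call (shown in comments) assumes the `openai` Python package and a valid API key.

```python
# Build one chat request that mixes text and an image, with JSON mode enabled.
# The model name follows OpenAI's naming; the image URL is a placeholder.

image_url = "https://example.com/receipt.png"  # illustrative image

request = {
    "model": "gpt-4-turbo",                       # GPT-4 Turbo with Vision
    "response_format": {"type": "json_object"},   # JSON mode
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Extract the merchant and total from this receipt as JSON."},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }
    ],
}

# With the openai package installed and OPENAI_API_KEY set, the payload
# could be sent in one call:
#   from openai import OpenAI
#   response = OpenAI().chat.completions.create(**request)
#   print(response.choices[0].message.content)

print(request["model"])
```

Note that the text and the image travel in the same `content` list of a single message, which is what removes the need for separate text and image models.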

By combining text and images in a single model, GPT-4 Turbo with Vision can take AI applications to new heights.

OpenAI has highlighted several examples of GPT-4 Turbo with Vision in use:

• The hit startup Cognition’s autonomous AI coding agent, Devin.

• The health and fitness app Healthify provides nutritional analysis and recommendations based on photos of users’ meals.

• The UK-based startup TLDraw powers its virtual whiteboard and converts users’ drawings into functional websites.