IBL News | New York
OpenAI has made fine-tuning for GPT-3.5 Turbo available to users.
According to OpenAI, fine-tuned versions of GPT-3.5 can match, or even outperform, the base capabilities of GPT-4, its flagship model, on “certain narrow tasks.”
Data sent in and out of the fine-tuning API, as with all of OpenAI’s APIs, is owned by the customer and is not used by OpenAI to train its models, the company said.
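In practice, launching a fine-tuning job is a short API workflow: upload a chat-formatted JSONL training file, then create a job that references it. The minimal sketch below assumes the OpenAI Python SDK (v1.x interface) with an API key in the environment; the training file name is a placeholder.

```python
# A minimal sketch, assuming the OpenAI Python SDK (v1.x) and an
# OPENAI_API_KEY set in the environment; the file name is a placeholder.
from openai import OpenAI

client = OpenAI()

# Upload a JSONL file of chat-formatted training examples.
training_file = client.files.create(
    file=open("training_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job against GPT-3.5 Turbo.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```

When the job completes, the API reports a fine-tuned model name that can then be used in chat completion requests in place of the base gpt-3.5-turbo.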
In addition to improved results, fine-tuning enables businesses to shorten their prompts while maintaining comparable performance.
Fine-tuning with GPT-3.5 Turbo can also handle 4,000 tokens, double the capacity of OpenAI’s previous fine-tuned models.
Early testers have reduced prompt size by up to 90% by fine-tuning instructions into the model itself, speeding up each API call and cutting costs, according to OpenAI.
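The prompt-size reduction comes from moving the instructions that would otherwise be repeated in every request into the training examples themselves. The sketch below shows the chat-style JSONL training format; the support-agent task and the example texts are illustrative placeholders, not from OpenAI’s announcement.

```python
# A hedged sketch of the chat-format JSONL training data; the support-agent
# task and example texts are illustrative placeholders.
import json

examples = [
    {
        "messages": [
            # The long instructions live in the training examples, so they
            # no longer have to be repeated in every production prompt.
            {"role": "system", "content": (
                "You are a support agent for AcmeCo. Answer in two sentences, "
                "cite the relevant manual page, and never promise refunds."
            )},
            {"role": "user", "content": "My Acme toaster won't turn on."},
            {"role": "assistant", "content": (
                "Make sure the safety latch on the base is fully closed "
                "(manual, p. 4). If it still won't start, contact support."
            )},
        ]
    },
    # ... more examples of the same shape ...
]

with open("training_examples.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```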
Fine-tuning costs are as follows:
- Training: $0.008 / 1K tokens
- Usage input: $0.012 / 1K tokens
- Usage output: $0.016 / 1K tokens
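As a rough, back-of-the-envelope illustration of these rates, assuming the training charge scales with the number of tokens seen (file tokens times epochs) and using hypothetical token counts and traffic volumes:

```python
# Back-of-the-envelope estimate at the published rates; the token counts,
# epoch count, and monthly traffic figures are hypothetical.
TRAIN_RATE = 0.008 / 1000   # $ per training token
INPUT_RATE = 0.012 / 1000   # $ per input token at inference
OUTPUT_RATE = 0.016 / 1000  # $ per output token at inference

training_tokens = 100_000                            # tokens in the training file
epochs = 3                                           # passes over the data
monthly_input, monthly_output = 2_000_000, 500_000   # inference volume

training_cost = training_tokens * epochs * TRAIN_RATE
inference_cost = monthly_input * INPUT_RATE + monthly_output * OUTPUT_RATE
print(f"one-time training: ${training_cost:.2f}")    # $2.40
print(f"monthly inference: ${inference_cost:.2f}")   # $32.00
```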
Fine-tuning is most powerful when combined with other techniques such as prompt engineering, information retrieval, and function calling.
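As one hedged illustration of such a combination, an application might retrieve relevant documents itself and pass them, along with a short prompt, to a fine-tuned model; the model ID and the retrieve() helper below are hypothetical placeholders.

```python
# A sketch of pairing a fine-tuned model with simple retrieval; the model ID
# and the retrieve() helper are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()


def retrieve(query: str) -> str:
    """Stand-in for an information-retrieval step (e.g., a vector-store search)."""
    return "Excerpt: the toaster resets by holding the cancel button for 5 seconds."


question = "How do I reset my Acme toaster?"
context = retrieve(question)

response = client.chat.completions.create(
    model="ft:gpt-3.5-turbo:acme-co::abc123",  # hypothetical fine-tuned model ID
    messages=[
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```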
In other news, OpenAI today made available two updated GPT-3 base models (babbage-002 and davinci-002), which can be fine-tuned as well.
OpenAI said that fine-tuning support for GPT-4, which, unlike GPT-3.5, can understand images in addition to text, will arrive sometime later this fall, but did not say exactly when.