OpenAI Releases Fine-Tuning for GPT-4o

IBL News | New York

OpenAI launched fine-tuning for GPT-4o yesterday, allowing developers to customize the structure and tone of responses or train the model to follow domain-specific instructions.
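For developers, kicking off a fine-tune follows the same workflow as OpenAI's earlier fine-tunable models: upload a chat-formatted JSONL training file, then create a job against the GPT-4o base snapshot. Below is a minimal sketch using the official openai Python SDK; the training file name is hypothetical, and the gpt-4o-2024-08-06 snapshot identifier should be checked against OpenAI's current fine-tuning docs.

```python
# Minimal sketch: creating a GPT-4o fine-tuning job with the openai Python SDK.
# The file name is hypothetical; verify the base-model snapshot in OpenAI's docs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a chat-formatted JSONL training file.
training_file = client.files.create(
    file=open("training_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the fine-tuning job against the GPT-4o snapshot.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",
)

print(job.id, job.status)  # poll the job ID to track training progress
```

Once the job finishes, the resulting fine-tuned model ID can be passed to the regular chat completions endpoint like any other model.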

Fine-tuning was one of developers' most requested features. It can significantly improve model performance and reduce costs.

OpenAI announced it is offering 1 million training tokens per day for free to every organization through September 23. GPT-4o fine-tuning training costs $25 per million tokens, while inference costs $3.75 per million input tokens and $15 per million output tokens.

For GPT-4o mini, OpenAI offers 2 million free training tokens per day through September 23.
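As a rough illustration of what those rates mean in practice, the sketch below estimates a total bill from the per-million-token prices quoted above; the token counts are invented for the example.

```python
# Illustrative cost estimate using the GPT-4o fine-tuning rates quoted above:
# $25 per 1M training tokens, $3.75 per 1M input tokens, $15 per 1M output tokens.
TRAIN_PER_M = 25.00
INPUT_PER_M = 3.75
OUTPUT_PER_M = 15.00

def estimate_cost(train_tokens: int, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a fine-tune plus subsequent inference."""
    return (
        train_tokens / 1e6 * TRAIN_PER_M
        + input_tokens / 1e6 * INPUT_PER_M
        + output_tokens / 1e6 * OUTPUT_PER_M
    )

# Example: 2M billed training tokens (after the 1M free daily allotment),
# then 10M input and 2M output tokens of inference.
print(f"${estimate_cost(2_000_000, 10_000_000, 2_000_000):,.2f}")  # $117.50
```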

One featured example is Genie, an AI software engineering assistant that can autonomously identify and resolve bugs, build features, and refactor code in collaboration with users.

It is powered by a fine-tuned GPT-4o model trained on examples of real software engineers at work, which enables the model to respond in a specific way.

The model is also trained to produce output in specific formats, such as patches that can be easily committed to codebases.

The San Francisco-based research lab ensured that these fine-tuned models remain entirely under the customer's control. Customers retain full ownership of their business data, including all inputs and outputs, and that data is never shared or used to train other models.