Automatically Generated Prompts Perform Better than Those Written by Human Engineers 

IBL News | New York

The Internet is replete with prompt engineering guides, cheat sheets, and advice threads to help you get the most out of an LLM.

However, new research suggests that prompt engineering is best done by the model itself, not by a human engineer, wrote Dina Genkina at IEEE Spectrum.

As a consequence, many prompt-engineering jobs may disappear.

Researchers have found that LLM performance is surprisingly unpredictable in response to different prompting techniques.

For example, asking models to explain their reasoning step by step, a technique called chain-of-thought, improves their performance on a range of math and logic questions, yet there is a surprising lack of consistency across tasks.
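To illustrate the technique, here is a minimal sketch of chain-of-thought prompting. The question text and the "Let's think step by step" instruction are illustrative, not taken from the research discussed.

```python
# Plain prompt: asks for the answer directly.
plain_prompt = "Q: A shop sells pens at $3 each. How much do 7 pens cost?\nA:"

# Chain-of-thought prompt: same question, but the appended instruction
# elicits intermediate reasoning before the final answer.
cot_prompt = (
    "Q: A shop sells pens at $3 each. How much do 7 pens cost?\n"
    "A: Let's think step by step."
)

print(cot_prompt)
```

The final answer is then read off the end of the model's output, after the reasoning steps.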

Recently, new tools have been developed to automate this process.

Given a few examples and a quantitative success metric, these tools iteratively search for an optimal prompt to feed into the LLM.
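The loop these tools run can be sketched as follows. This is a simplified illustration, not any specific tool's implementation: the `llm` callable and `mutate` function are hypothetical stand-ins for a model API and a prompt-variation strategy.

```python
def score(prompt, examples, llm):
    """Fraction of labeled examples the model answers correctly under this prompt."""
    correct = 0
    for question, answer in examples:
        if llm(prompt + "\n" + question).strip() == answer:
            correct += 1
    return correct / len(examples)

def optimize_prompt(seed_prompts, examples, llm, mutate, rounds=10, width=4):
    """Iteratively keep the best-scoring prompt and propose variants of it."""
    best = max(seed_prompts, key=lambda p: score(p, examples, llm))
    for _ in range(rounds):
        candidates = [best] + [mutate(best) for _ in range(width)]
        best = max(candidates, key=lambda p: score(p, examples, llm))
    return best
```

Because the success metric is computed automatically, the search can try far more candidate prompts per hour than a human doing trial and error.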

Researchers found that in almost every case, the automatically generated prompt outperformed the best prompt found through human trial and error. The process was also much faster: a couple of hours rather than several days of searching.