OpenAI recently published a guide to Prompt Engineering with six strategies for eliciting better responses from their GPT models, with a focus on GPT-4. A prompt is the input given to the model: a description of the task to be performed, examples of it, or both.
The strategies offered are:
1. Write clear instructions: Give the model clear and concise instructions that are easy to understand.
2. Provide reference text: Include relevant text or data that the model can use to inform its response.
3. Split complex tasks: Break down complex tasks into smaller, more manageable subtasks.
4. Give the model time to “think”: Rather than literal time, this means structuring prompts so the model works through a chain of reasoning before committing to a final answer.
5. Use external tools: Integrate the GPT model with other tools and resources to enhance its capabilities.
6. Test changes systematically: Measure the effect of prompt changes against a representative set of test cases rather than judging a single example, to find the configuration that best produces your desired output.
Several of the tactics use the Chat API’s system message parameter to set the model’s behavior, such as shaping its responses or carrying instructions that apply to every input.
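As a minimal sketch of this pattern, the request body for the Chat API pairs a persistent system message with the per-request user input; the instruction text below is illustrative, not taken from the guide:

```python
def build_messages(system_instruction: str, user_input: str) -> list[dict]:
    """Prepend a system message so the instruction applies to the whole exchange."""
    return [
        {"role": "system", "content": system_instruction},
        {"role": "user", "content": user_input},
    ]

# Hypothetical instruction reused across many different user inputs.
messages = build_messages(
    "You are a concise assistant. Answer in at most two sentences.",
    "Summarize the six prompt engineering strategies.",
)
```

The same `messages` list would then be passed to the Chat API’s completion call; because the system message sits outside the user turn, the behavioral instruction does not need to be repeated in each input.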
The guide acknowledges that more capable future models may reduce the need for elaborate prompting. It also cautions that code generated by the model should be executed only in a sandboxed environment, since it cannot be assumed to be safe.