Enhancing OpenAI API Efficiency: Leveraging System Messages and Temperature Settings for Improved JSON Output

Further to my post, ‘Sequel to AI-Powered Order Structuring: Advancing to JSON Transformation’, I have been enhancing our capabilities and learning more about the OpenAI API. Recently, I have started incorporating two additional settings in my API calls:

  • System messages
  • Temperature settings

Here’s an example:

var requestBody = new
{
    model = "gpt-3.5-turbo",
    messages = new[]
    {
        new { role = "user", content = prompt4.ToString() },
        new { role = "system", content = systemprompt.ToString() }
    },
    temperature = 0.2
};

The system role is used to set the behaviour of the assistant at the beginning of the conversation. It can provide instructions, context, or background information that helps the model understand how it should respond throughout the session. My current system prompt is:

var systemprompt = "You are an assistant that generates JSON. You always return just the JSON with no additional description or content.";

The temperature parameter in the OpenAI API controls the randomness and creativity of the model’s responses. It influences how deterministic or varied the model’s output will be. Low temperatures between 0 and 0.3 are useful for tasks requiring precise and consistent answers, such as fact-based queries or technical instructions.
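To round this out, here is a minimal sketch of how a request body like the one above could be sent to the Chat Completions endpoint using HttpClient and System.Text.Json. The endpoint URL and header are the standard OpenAI ones; the API-key handling and prompt variables are illustrative assumptions, not the exact code from my project.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;

// Assumption: the key is stored in an environment variable.
var apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY");

using var client = new HttpClient();
client.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", apiKey);

// Serialise the anonymous requestBody object shown earlier.
var json = JsonSerializer.Serialize(requestBody);

var response = await client.PostAsync(
    "https://api.openai.com/v1/chat/completions",
    new StringContent(json, Encoding.UTF8, "application/json"));

// The model's reply arrives as JSON; with the system prompt above,
// the message content should be just the generated JSON payload.
var responseText = await response.Content.ReadAsStringAsync();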

These settings have improved the output generated by OpenAI, making it more consistent and accurate. This is particularly beneficial when you need to classify customer communications and extract structured content from natural language.
