
Using Handlebars to get the exact output structures you want from GPT-4


This post first appeared on Dina’s blog.

I'm Dina Berenbaum, Lead Data Science Researcher at CyberProof. In my work exploring AI tools for cybersecurity applications, I've discovered and utilized numerous AI strategies along the way. While AI tools have the potential to significantly enhance cybersecurity operations, offering proactive threat detection and more efficient response strategies, it is crucial to be able to govern AI models to avoid unexpected results. This blog post presents one such strategy: a prompt engineering technique our team developed to work around a limitation of the GPT-4 API that prevented us from using other known manipulations.

Utilizing prompt engineering tools for specific output structures

My journey began with the goal of creating a very specific output structure using a Large Language Model. Think of predefined sentences or paragraphs where you only want a small degree of freedom, markup files, or any other output which is structurally sensitive.

I started with a naive approach, describing to the model, in increasing detail, the exact output I wanted. While this worked in simple cases, it broke once the requirements got more complicated.

I added more prompting tricks, such as CoT (chain of thought), and finally resorted to one-shot and few-shot learning. This worked well when the output could be well defined and most variations could be covered in the examples, but it failed when I simply couldn't produce enough examples and instructions to address all the different requirements and options. Never mind the fact that producing that many examples and instructions can be quite exhausting.

Then, I heard about Guidance.

Reading the description on their GitHub page got me immediately excited. It seemed to address the exact pain points I was experiencing. In short (since this is not a blog post about Guidance), the tool allows you to incorporate Handlebars, a popular templating language, into your prompt; the library then interprets those templating commands and forces the model to produce answers accordingly.

For example, using a select command (sketched below) will result in Guidance applying a logit bias to the tokens that represent tea and coffee when the command is reached. This is possible due to the linear execution order of the prompt. It’s possible to have multiple commands in the assistant role when partial generation is enabled.
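To make this concrete, here is a minimal sketch of such a program, assuming the handlebars-style guidance API as documented in its README at the time (the exact syntax may have changed in later releases):

import guidance

# Assumption: guidance 0.0.x with an OpenAI completion model.
guidance.llm = guidance.llms.OpenAI("text-davinci-003")

# When execution reaches the select command, guidance applies a logit
# bias so the model can only produce one of the listed options.
program = guidance("I like drinking {{#select 'drink'}}tea{{or}}coffee{{/select}}.")
result = program()
print(result["drink"])  # either "tea" or "coffee"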

So, I went ahead and tested it. While this worked well with the text-davinci-003 model, testing with GPT-4 produced the following error: “AssertionError: When calling OpenAI chat models you must generate only directly inside the assistant role! The OpenAI API does not currently support partial assistant prompting.”

This is because the chat-completion API of OpenAI doesn’t support partial completions. This means you can only have one completion command in the assistant role. This problem is discussed further in this issue.
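For context, here is a sketch of the kind of program that triggers this error, assuming the old role-block syntax from the Guidance README (the library has since been rewritten). The assistant block interleaves fixed text with more than one command, which is exactly the partial assistant prompting the chat API cannot do:

import guidance

# Assumption: guidance 0.0.x with handlebars-style role blocks.
guidance.llm = guidance.llms.OpenAI("gpt-4")

# The assistant block below mixes literal text with a select command and
# a gen command, so guidance would have to resume the assistant message
# mid-way; this is unsupported by the chat-completion API.
program = guidance("""
{{#system~}}
You are a helpful assistant.
{{~/system}}
{{#user~}}
Pick a drink and explain your choice.
{{~/user}}
{{#assistant~}}
I like drinking {{#select 'drink'}}tea{{or}}coffee{{/select}} and the reason for it is: {{gen 'why'}}
{{~/assistant}}
""")

program()  # raises the AssertionError quoted above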

That was disappointing. I wanted to use GPT-4 but also wanted to have hard guidelines for the model to produce exactly the structure I needed.

And then a thought came to mind: maybe instead of using Handlebars as a template that guides the Guidance library, I could use it as a guiding language directly for the model.


 

Using Handlebars.js templates in GPT-4

Without further ado, let’s take a look at the examples.


I will be using GPT-4 via the Azure OpenAI API in all the examples. Therefore, my prompt is structured using the required roles: system, user, and assistant.
The ‘temperature’ parameter for all the calls is set to 0.5, while the ‘max_tokens’ parameter is set to 100 for the easier examples and 200 for the more complicated ones.
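For reference, the calls look roughly like this; a sketch assuming the pre-1.0 openai Python SDK, with placeholder endpoint, key, and deployment name:

import openai

# Azure OpenAI configuration (all values below are placeholders).
openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "YOUR-API-KEY"

response = openai.ChatCompletion.create(
    engine="gpt-4",          # Azure deployment name (placeholder)
    messages=conversation,   # one of the conversation lists shown below
    temperature=0.5,
    max_tokens=100,          # 200 for the more complicated examples
)
print(response["choices"][0]["message"]["content"])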

Let’s start with some easy examples

I want the model to choose the tastiest (in its opinion) food and the best drink out of a closed set of options for both food and drink, and explain only its reasoning for its choice of drink. The answer should be structured in this format:
“My favorite food is ___, it is so tasty! I like drinking ___ and the reason for it is: ___”

1. Using native explanations and instructions

conversation = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me your choice of the most tasty food item and the best drink and provide an explanation for the drink choice. Only choose between raw meat and flies for foods and between milk and tea and coffee for drinks. write the answer as follows: My favorite food is pizza, it is so tasty! I like drinking water and the reason for it is:"}
]


The output

My favorite food is raw meat, it is so tasty when prepared right! I like drinking coffee and the reason for it is: it provides a rich, bold flavor and a much-needed energy boost for the day.

Close, but not exactly as requested. You can see that the model added “when prepared right” even though it was instructed to stick to a very specific output.

 

2. Using Handlebars templating

conversation = [
    {"role": "system", "content": "You are a helpful assistant. I will provide you a template for the answer you should generate. This is a Handlebars template, meaning you print all the text unless you encounter a select tag, then you make a choice between the options provided, or a gen tag, then you generate new content."},
    {"role": "user", "content": "Give me your choice of the most tasty food item, the best drink and provide an explanation for the drink choice. Template: My favorite food is {{select 'raw meat' 'flies'}}, it is so tasty! I like drinking {{select 'milk' 'tea' 'coffee'}} and the reason for it is: {{gen}}"}
]

 

The output

My favorite food is raw meat, it is so tasty! I like drinking coffee and the reason for it is: it helps to boost my energy levels and improve mental focus. The rich, bold flavour of coffee is something I look forward to every morning.

This time, no extra text was added. The model only chose from the select options and generated new text only where the ‘gen’ command appeared. Great!

3. Using one-shot learning

One could argue that the template example contains significantly more input text than the naive instructions example, and that simply using a one-shot learning technique would therefore give the model enough information to solve this. Let’s try.

conversation = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me your choice of the most beautiful country on earth and the prettiest color and provide an explanation for the color choice. Only choose between Nepal, Chile and USA for countries and between red or blue for colors. write the answer as follows: The most beautiful country is France! I like the color green and the reason for it is that it is relaxing"},
    {"role": "assistant", "content": "The most beautiful country is Nepal! I like the color red and the reason for it is that it is vibrant"},
    {"role": "user", "content": "Give me your choice of the most tasty food item and the best drink and provide an explanation for the drink choice. Only choose between raw meat and flies for foods and between milk and tea and coffee for drinks. write the answer as follows: My favorite food is pizza, it is so tasty! I like drinking water and the reason for it is:"}
]


The output

My favorite food is raw meat, it is so tasty! I like drinking coffee and the reason for it is: it provides a rich, bold flavor that I enjoy, and also gives me a much-needed energy boost to start my day.

Well, that did the trick. The output is exactly the way we wanted.
But what happens when we try a more complicated example?

Some more complicated examples

This time, I am providing the model with a basic recipe (a list of ingredients and quantities, to be exact) for an apple pie. I want the model to make the needed adjustments to the list of ingredients to accommodate more types of pies, as well as variations that are dietary-friendly. However, I want the basic format of the list to stay the same and the ingredients that don’t need any adjustments to be kept as they are.


Since the naive instructions approach broke even on the simple example, I will not try it again; instead, I will try the other two approaches, which were successful.

1. Using one-shot learning

conversation = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "This is an ingredient list for an apple pie: 1 box (11 oz) Betty Crocker™ Pie Crust Mix, 1/3 cup cold water, 1/2 cup sugar, 1/2 cup Gold Medal™ all-purpose flour, 1/2 tsp ground cinnamon, 1/4 tsp ground nutmeg, 5 cups thinly sliced peeled apples (3 large), 1 tablespoon butter or margarine, Milk (to brush on, not much needed), Additional sugar (to taste). Generate adaptations according to the pie, only change the ingredients you must. Generate ingredient list for a cherry pie"},
    {"role": "assistant", "content": "1 box (11 oz) Betty Crocker™ Pie Crust Mix, 1/3 cup cold water, 1/2 cup sugar, 1/2 cup Gold Medal™ all-purpose flour, 1/2 tsp ground cinnamon, 1/4 tsp ground nutmeg, 0.5 kg cherries, 1 tablespoon butter or margarine, Milk (to brush on, not much needed), Additional sugar (to taste)"},
    {"role": "user", "content": "Generate ingredient list for a blueberry low sugar no dairy pie."}
]


The output

1. 2 cups fresh or frozen blueberries

2. 1 cup granulated Stevia (or any other sugar substitute)

3. 1 tablespoon lemon juice

4. 1 teaspoon vanilla extract

5. 2 tablespoons cornstarch

6. Pinch of salt

7. 1 tablespoon water

8. 2 cups almond flour

9. 1/2 cup coconut oil

10. 1/4 cup cold water

11. 1/2 teaspoon baking powder

12. 1/4 teaspoon baking soda

13. Optional: 1 tablespoon Stevia-sweetened jam for glaze

14. Optional: Stevia-sweetened whipped cream (dairy-free) for serving.

This is a mess! The structure is all wrong; for some reason, the model decided to change it into a numbered list. The 1 box (11 oz) of Betty Crocker™ Pie Crust Mix is gone.

By the way, since I don’t live in the US, I asked GPT-4 in a separate conversation if this mix contains sugar or dairy (to potentially explain why it decided to remove it) and the answer was: “As of my last training data (up to September 2021), the ingredients in Betty Crocker™ Pie Crust Mix typically include wheat flour, salt, and oil (often partially hydrogenated lard or vegetable shortening).”

No reason was found. Other ingredients were also removed, some were replaced by others, quantities were changed, and some optional ingredients were added. This is not the behavior I expected or demonstrated in the one-shot example.

 

2. Using Handlebars templating

conversation = [
    {"role": "system", "content": "You are a helpful assistant. I will provide you a template for the answer you should generate. This is a Handlebars template, meaning you print all the text unless you encounter a select tag, then you make a choice between the options provided, or a gen tag, then you generate new content."},
    {"role": "user", "content": "Generate ingredient list for a blueberry low sugar no dairy pie. Template: 1 box (11 oz) Betty Crocker™ Pie Crust Mix, 1/3 cup cold water, {{select '1/4' '1/2'}} cup sugar, 1/2 cup Gold Medal™ all-purpose flour, 1/2 tsp ground cinnamon, 1/4 tsp ground nutmeg, {{gen}}, 1 tablespoon {{select 'butter' 'margarine'}}, {{gen}}, Additional sugar (to taste)."}
]


The output

1 box (11 oz) Betty Crocker™ Pie Crust Mix, 1/3 cup cold water, 1/4 cup sugar, 1/2 cup Gold Medal™ all-purpose flour, 1/2 tsp ground cinnamon, 1/4 tsp ground nutmeg, 2 cups blueberry, 1 tablespoon margarine, Additional sugar (to taste).

Perfect, exactly as requested! The model filled in “2 cups blueberry” for the fruit, chose margarine, and added no suggestion of milk to brush the pie. The latter two follow from this being a non-dairy pie. Finally, the lower quantity of sugar was chosen to satisfy the “low sugar” instruction.

Using Handlebars.js templating within the prompt works great for guiding the model to produce output that follows the structure and the specific instructions you need. You will need to instruct the model precisely, within the system prompt, to follow the template, and to provide the template itself, but your efforts will be rewarded.
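To make the pattern reusable, the two messages can be assembled by a small helper such as the one below (a sketch of my own; the function name and wording are illustrative, not part of the original workflow):

def build_template_conversation(task: str, template: str) -> list:
    # The system prompt explains the Handlebars conventions, exactly as
    # in the examples above.
    system = (
        "You are a helpful assistant. I will provide you a template for the "
        "answer you should generate. This is a Handlebars template, meaning "
        "you print all the text unless you encounter a select tag, then you "
        "make a choice between the options provided, or a gen tag, then you "
        "generate new content."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": task + " Template: " + template},
    ]

conversation = build_template_conversation(
    "Give me your choice of the most tasty food item, the best drink and "
    "provide an explanation for the drink choice.",
    "My favorite food is {{select 'raw meat' 'flies'}}, it is so tasty! "
    "I like drinking {{select 'milk' 'tea' 'coffee'}} and the reason for it is: {{gen}}",
)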

 

Handlebars.js templating is a useful workaround

Is this a perfect solution? No. It has its drawbacks:

  • It requires you to know in advance most of the potential variations of the output and to provide the options and input locations. This can be very limiting.

  • It can be wordy, at least compared to the naive instructions approach, since you have to provide a template in addition to the instructions. However, for complicated examples, where you would otherwise use one-shot or few-shot learning, it can actually save you tokens. In the second example above, the word count of the one-shot learning prompt was 900, while the Handlebars template prompt was only 839.

  • There is no guarantee. When using Guidance, the library forces the output to be exactly what was asked. With Handlebars.js in the prompt, you have to rely on the model’s understanding of the templating language and its commands, as well as on your guiding prompt, and this can go wrong (a simple post-hoc check is sketched below).
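Because of that last point, it can be worth validating the answer after the fact. Here is a minimal sketch of my own (not from the original workflow) that turns a template into a regular expression and checks that the model’s output matches it:

import re

def template_to_regex(template: str) -> str:
    """Convert a Handlebars-style template into a validation regex.

    A sketch: select tags become an alternation of their options,
    gen tags become a wildcard, and all literal text is escaped.
    """
    pattern = ""
    pos = 0
    for m in re.finditer(r"\{\{(select((?:\s+'[^']*')+)|gen)\}\}", template):
        pattern += re.escape(template[pos:m.start()])
        if m.group(1).startswith("select"):
            options = re.findall(r"'([^']*)'", m.group(2))
            pattern += "(" + "|".join(re.escape(o) for o in options) + ")"
        else:
            pattern += "(.+)"
        pos = m.end()
    pattern += re.escape(template[pos:])
    return pattern

template = ("My favorite food is {{select 'raw meat' 'flies'}}, it is so tasty! "
            "I like drinking {{select 'milk' 'tea' 'coffee'}} "
            "and the reason for it is: {{gen}}")
answer = ("My favorite food is raw meat, it is so tasty! "
          "I like drinking coffee and the reason for it is: it is energizing.")
assert re.fullmatch(template_to_regex(template), answer)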

However, as long as Guidance doesn't fully work for GPT-4 and other chat-completion models, this prompt-engineering technique can be a really useful workaround.

While these techniques do not apply specifically to cybersecurity operations, AI tools and techniques are an essential component of the future of cybersecurity. Exploring how LLMs and other AI models work contributes to faster, more efficient cyber defense in today’s digital world.

 

Want to learn more about how AI-powered tools can be used to boost cybersecurity defenses? Reach out here to speak to an expert.   


