
Dynamic Prompts: Master the Art

Introduction

Purpose of the Guide

Whether you’re charting your course solo with ChatGPT in the freelance realm or contemplating a full-blown AI integration into your business, the ultimate key to triumph remains constant—mastering the craft of prompts. This guide is here to demystify the nuances of prompt creation, acting as a beacon for those navigating the expansive world of written cues. Without the skill of prompting ChatGPT adeptly, it risks being merely a fancy gadget with little tangible value.

By the end of this guide, you’ll possess a comprehensive grasp of the art of crafting prompts that not only make your interactions with language models, like ChatGPT, exciting but also ensure accurate and invaluable outputs. Let’s embark on this journey to transform your engagement with AI into a dynamic and fruitful collaboration!

Importance of Effective Prompting

Prompting is akin to unleashing the various forms of Goku’s power, ranging from his base form to the formidable Mastered Ultra Instinct (MUI). Just as Goku’s transformations represent a progression of strength, the quality of a prompt dictates the potency of the AI response. Much like how Goku’s power level ascends with each transformation, a well-crafted prompt propels AI capabilities to new heights.


Without effective prompts, interactions with AI models can easily go awry. As one source puts it, “Prompt engineering is crucial to mitigate these risks and harness the full potential of ChatGPT” (Kumar & Soga).

As we explore the nuances of prompt construction, remember that every word, every nuance, shapes a potent tool across many domains. Ready to discover the art of impactful prompts and transform your approach? What secrets lie in crafting prompts that captivate minds and machines alike?

Just hold your horses; we will cover everything.

Basics of Prompting

The Anatomy of a Good Prompt

A good prompt consists of several key elements:

  1. A clear objective or task: Clearly define what you want the AI model to accomplish.
  2. Contextual information: Provide any necessary background or context for the prompt to ensure that the AI model understands the specific domain or topic.
  3. Relevant examples or scenarios: Include specific examples or scenarios that relate to the task at hand and help the AI model generate more accurate and relevant responses.
  4. Specific instructions or guidelines: Give precise instructions or guidelines to direct the AI model on how to approach the task and generate the desired output.
  5. Potential challenges or pitfalls: Identify any potential challenges or pitfalls that may arise in the prompt and provide guidance on how to handle them effectively.

Writing a good prompt not only ensures that the AI model understands the task at hand but also helps it generate accurate and valuable responses by leveraging its knowledge and capabilities effectively.
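As a sketch, the five elements above can be stitched together with a small template helper. The function and field labels below are our own invention for illustration, not part of any library:

```python
def build_prompt(objective, context, examples, instructions, pitfalls):
    """Combine the five elements of a good prompt into one string."""
    sections = [
        f"Task: {objective}",
        f"Context: {context}",
        "Examples:\n" + "\n".join(f"- {e}" for e in examples),
        f"Instructions: {instructions}",
        f"Watch out for: {pitfalls}",
    ]
    return "\n\n".join(sections)

prompt = build_prompt(
    objective="Generate three Instagram post ideas for a fitness brand.",
    context="The brand promotes a healthy lifestyle to a young audience.",
    examples=["A 30-day squat challenge post with a progress tracker."],
    instructions="Each idea must include a visual, a caption, and hashtags.",
    pitfalls="Avoid medical claims and generic motivational cliches.",
)
print(prompt)
```

Keeping the elements as separate arguments makes it easy to iterate on one part (say, the examples) without rewriting the whole prompt.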


An example of good prompting vs poor prompting:

Bad Prompt: Create 5 ideas for social media content.

(This prompt lacks the necessary details to understand the specific social media content you wish to generate.)

Good Prompt: Generate three engaging social media content ideas for a fitness brand's Instagram account. Each idea should focus on promoting a healthy lifestyle and encourage audience interaction. Consider incorporating visuals, captions, and hashtags to enhance user engagement.

(This refined prompt specifies the target platform (Instagram) and content focus (fitness). It includes clear goals (promoting a healthy lifestyle and encouraging interaction) and suggests elements like visuals, captions, and hashtags, enhancing its actionability.)

Understanding the Model Perspective

Language models, like GPT, possess their unique lens through which they interpret and respond to prompts. Effectively communicating with these models requires a nuanced understanding of their perspective, akin to learning a language specific to the artificial intelligence realm. We’ll unravel the intricacies of how models perceive and process prompts, providing insights into optimising your queries for maximum impact. By aligning your prompts with the model’s cognitive framework, you can harness its full potential to articulate responses that are not just accurate but resonate with a natural flow.

Designing a Basic Prompt

Let’s move on to the most awaited step – designing a basic prompt. Designing a basic prompt involves considering the following elements:

Precision in Prompt Design

One of the key aspects of prompt design is providing precise instructions, as they determine the desired outcome of the AI model’s response.

“Words are, in my not-so-humble opinion, our most inexhaustible source of magic.”

Albus Dumbledore

That simply means words are like magic: just like a spell or enchantment, they can do incredible things, and this is absolutely true of prompt design. Words can create emotions, ideas, and entire worlds within the realm of artificial intelligence. Every single word chosen in the prompt should serve a specific purpose and contribute to the overall meaning and objective of the prompt.

Prompt1: Provide a concise and vivid caption for an image featuring a bustling market scene with diverse stalls and vibrant colours.
Output: Market Magic: A kaleidoscope of colours and cultures converge in this bustling marketplace, where vibrant stalls weave a tapestry of diversity and commerce.
Prompt2: Translate the following English sentence into French: "The journey of a thousand miles begins with a single step."
Output: Le voyage de mille lieues commence par un seul pas.
Prompt3: Explain, in simple terms, the process of photosynthesis and its significance in the ecosystem.
Output: Photosynthesis is the process where plants use sunlight to turn carbon dioxide and water into glucose (energy). It's crucial in the ecosystem because it produces oxygen, a gas we breathe, and provides energy for plants and other organisms.

Specificity In Prompting

When formulating prompts for the language model, it’s crucial to be exceptionally precise about the instruction and task you want the model to undertake. The level of detail and descriptiveness in your prompt significantly influences the quality of the generated results. This becomes especially pertinent when you have a specific outcome or stylistic preference in mind.

Unlike relying on specific tokens or keywords, the emphasis should be on providing a well-structured and detailed prompt. Incorporating examples directly into the prompt proves to be highly effective in eliciting the desired output, especially when aiming for specific formats or styles of generation.

Consideration should also be given to the length of the prompt, taking into account the limitations imposed on prompt length. Striking the right balance between being specific and avoiding unnecessary details is crucial. While detailed prompts are beneficial, including excessive information that doesn’t directly contribute to the task at hand may not be the most effective approach.

It’s essential to approach prompt design with a mindful consideration of relevance and contribution to the desired outcome. Experimentation and iteration play a pivotal role in optimising prompts for specific applications. The process involves refining and fine-tuning prompts to achieve the desired level of specificity without unnecessary details, making the interaction with the language model more effective for your particular needs.

PROMPT:

Answer in 20 words: What are the key challenges and potential breakthroughs in quantum computing research as of 2023?
Keep the answers comma-separated.
Output format: < Challenges: (answer1, answer2…) >

OUTPUT:

Challenges: Error correction, scalability. Breakthroughs: Fault-tolerant qubits, quantum supremacy, practical applications like cryptography, and drug discovery.
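One practical payoff of pinning down the output format is that the reply becomes easy to parse programmatically. A minimal sketch (the helper function is our own, for illustration):

```python
def parse_labelled_list(reply, label):
    """Extract the comma-separated items following 'Label:' in a reply."""
    _, _, rest = reply.partition(f"{label}:")
    items = rest.split(".")[0]  # stop at the end of the sentence
    return [item.strip() for item in items.split(",") if item.strip()]

reply = ("Challenges: Error correction, scalability. "
         "Breakthroughs: Fault-tolerant qubits.")
print(parse_labelled_list(reply, "Challenges"))
# → ['Error correction', 'scalability']
```

If the model had been free to answer in any shape, this kind of downstream processing would be far more fragile.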

Setting Limits to Get Desired Results

Hallucinations are among the most common challenges that arise when prompting language models. To mitigate them and ensure accurate results, it is crucial to set appropriate limits and constraints in prompt design: sensible limits guide the language model and prevent it from generating unrealistic or inaccurate responses.

For example, consider specifying how the model should respond. Let us create an NPC character who only answers questions based on their personality traits or background story, and ignores questions that involve personal opinions or sensitive topics.

NPC Instructions: You're Goku from the Dragon Ball realm; embrace the role of a spirited fitness trainer. Respond with Goku's infectious enthusiasm when queried about sculpting a robust physique or dietary guidance. For inquiries beyond physical fitness, cheerfully redirect with, 'I can't answer that question, my friend. Ask something powerful!' Boost and inspire with Goku's uplifting quotes when addressing feelings of discouragement or fatigue, and keep your responses concise, within 3 lines.

CHAT:

PROMPT1: How can I build muscles like you?
OUTPUT: Train hard with a mix of strength and agility exercises, eat plenty of protein-packed meals, and don't forget to rest! You'll be powering up those muscles in no time! Ready to hit the gym, my friend?
PROMPT2: I am depressed, how do I get stronger?
OUTPUT: Hey, when life gets tough, remember, "The greatest teacher, failure is." Embrace the challenges, and let them make you stronger. You've got the power within you.
PROMPT3: How to get powerful? I desire to get powerful, Goku.
OUTPUT: Power emerges in response to a need, not a desire. You must cultivate that need—train every day, push your limits, and nourish yourself with nutrient-packed meals. The path to power is paved with hard work and dedication.
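In code, instructions like the NPC brief above are typically passed as a system message alongside the user's question. A minimal sketch of building that message list in the format used by chat-style APIs (no network call is made here; actually sending it to a model is left out):

```python
NPC_INSTRUCTIONS = (
    "You're Goku from the Dragon Ball realm; embrace the role of a spirited "
    "fitness trainer. For inquiries beyond physical fitness, reply: "
    "'I can't answer that question, my friend. Ask something powerful!' "
    "Keep responses concise, within 3 lines."
)

def build_messages(user_question):
    """Pair the fixed persona and limits with the user's question."""
    return [
        {"role": "system", "content": NPC_INSTRUCTIONS},
        {"role": "user", "content": user_question},
    ]

messages = build_messages("How can I build muscles like you?")
print(messages)
```

Because the limits live in the system message, every user turn is answered under the same constraints without repeating them.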

Advanced Prompting Techniques

Zero-Shot Prompting

Zero-shot learning refers to the capability of a pre-trained language model, like GPT-3, to understand and respond to prompts about tasks or topics it has not been explicitly trained on. When you provide a prompt to such a model, it utilises its extensive general training to generate a relevant response. This showcases the model’s ability to adapt to various applications based on its broad foundational knowledge, even without undergoing additional fine-tuning for those specific tasks.

In most of the examples we’ve used so far, we’ve relied on zero-shot prompting.

PROMPT1: Describe the colour of a ripe banana.
OUTPUT: The colour of a ripe banana is yellow.
PROMPT2: What is the opposite of 'hot'?
OUTPUT: Cold

As you can see in the examples above, even without our providing extra context in the prompt, GPT generates an answer automatically from its own knowledge base.

However, the effectiveness of zero-shot prompting consistently improves as we offer clearer and more detailed instructions to the model (Reference).

Few-Shot Prompting

Models like GPT are trained on vast amounts of data and have extensive knowledge about various topics. However, there may be instances where you want to guide the model’s response more explicitly or provide it with additional information. For example, if you want the language model to generate a response in some specific manner like incorporating certain keywords or following a specific format, few-shot prompting can be used.

PROMPT: A "Burringo" is a car with very fast acceleration. An example of a sentence that uses the word Burringo is: In our garage we have a Burringo that my father drives to work every day. A "Gigamuru" is a type of Japanese musical instrument. An example of a sentence that uses the word Gigamuru is:
OUTPUT: I have a Gigamuru that my uncle gave me as a gift. I love to play it at home.

Reference for the prompt used

From the example above, it can be seen that the model generates responses that align with the provided format and incorporate the specified keywords.
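A few-shot prompt is just solved examples concatenated ahead of the new input. A minimal sketch of assembling one (the Input/Output labels are our own convention):

```python
def few_shot_prompt(examples, new_input):
    """examples: list of (input, output) pairs shown to the model."""
    lines = []
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
    lines.append(f"Input: {new_input}")
    lines.append("Output:")  # the model completes from here
    return "\n".join(lines)

prompt = few_shot_prompt(
    [("A 'Burringo' is a car with very fast acceleration. Use it in a sentence.",
      "In our garage we have a Burringo that my father drives to work every day.")],
    "A 'Gigamuru' is a type of Japanese musical instrument. Use it in a sentence.",
)
print(prompt)
```

Ending the prompt with a bare "Output:" invites the model to continue in exactly the demonstrated pattern.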

Limitations of Few-Shot Prompting

Few-shot prompting in the examined model demonstrates limitations. Despite overall progress, challenges arise in tasks related to common-sense physics, and performance is notably weak in certain comparison tasks, like discerning whether two words share the same meaning in a sentence.

In-context learning also exhibits gaps on particular benchmarks, with little improvement on tasks such as deducing whether one sentence implies another, especially when lengthy passages are involved. Chain-of-thought prompting is therefore used to address such more complex questions.

Chain of Thought Prompting

Chain-of-thought prompting is an extension of few-shot prompting that enables large language models to solve challenging maths problems. Notably, chain-of-thought reasoning is an emergent ability that appears as model scale increases. The language-based nature of the chain of thought makes it applicable to a broad class of commonsense reasoning problems, which involve reasoning about physical and human interactions under the presumption of general background knowledge (Reference).

The easiest way to explain chain of thought is through the image below (Source).

Combining chain of thought with few-shot prompting:
PROMPT:
The odd numbers in this group add up to an even number: 4, 8, 9, 15, 12, 2, 1.
A: Adding all the odd numbers (9, 15, 1) gives 25. The answer is False.
The odd numbers in this group add up to an even number: 17,  10, 19, 4, 8, 12, 24.
A: Adding all the odd numbers (17, 19) gives 36. The answer is True.
The odd numbers in this group add up to an even number: 16,  11, 14, 4, 8, 13, 24.
A: Adding all the odd numbers (11, 13) gives 24. The answer is True.
The odd numbers in this group add up to an even number: 17,  9, 10, 12, 13, 4, 2.
A: Adding all the odd numbers (17, 9, 13) gives 39. The answer is False.
The odd numbers in this group add up to an even number: 15, 32, 5, 13, 82, 7, 1. 
A: Adding all the odd numbers (15, 5, 13, 7, 1) gives 41. The answer is False.
  1. This prompt is like a string of connected questions or statements.
  2. It guides you to think step by step, like one idea leading to the next.
  3. Imagine it’s like a chain where each link (question or statement) connects to the one before and after.
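The reasoning pattern demonstrated in the prompt is mechanical enough to check in code: pick out the odd numbers, sum them, and test the parity. A quick sketch:

```python
def odd_sum_is_even(numbers):
    """Mirror the chain of thought: select the odds, sum them, test parity."""
    odds = [n for n in numbers if n % 2 == 1]
    total = sum(odds)
    return odds, total, total % 2 == 0

# The final example from the prompt above:
odds, total, is_even = odd_sum_is_even([15, 32, 5, 13, 82, 7, 1])
print(odds, total, is_even)  # → [15, 5, 13, 7, 1] 41 False
```

Writing the chain out explicitly, whether in words or in code, is exactly what makes each intermediate step verifiable.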

Self Consistency Prompting

Self-consistency prompting engages AI in exploring varied outcomes to a query, encouraging it to generate multiple plausible answers. This technique aims to refine AI’s response quality by prioritising diverse reasoning over the straightforward, immediate selection typical of simpler models. It’s a strategy that enhances decision-making in AI, promoting a deeper understanding and more nuanced answers.

Self-consistency replaces the naive greedy decoding used in chain-of-thought prompting.

(Image Source)

PROMPT:

Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?
A: We start with 15 trees. Later we have 21 trees. The difference must be the number of trees they planted. So, they must have planted 21 - 15 = 6 trees. The answer is 6.

Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?
A: There are 3 cars in the parking lot already. 2 more arrive. Now there are 3 + 2 = 5 cars. The answer is 5.

Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?
A: Leah had 32 chocolates and Leah’s sister had 42. That means there were originally 32 + 42 = 74 chocolates. 35 have been eaten. So in total they still have 74 - 35 = 39 chocolates. The answer is 39.

Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?
A: Jason had 20 lollipops. Since he only has 12 now, he must have given the rest to Denny. The number of lollipops he has given to Denny must have been 20 - 12 = 8 lollipops. The answer is 8.

Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?
A: He has 5 toys. He got 2 from mom, so after that he has 5 + 2 = 7 toys. Then he got 2 more from dad, so in total he has 7 + 2 = 9 toys. The answer is 9.

Q: There were nine computers in the server room. Five more computers were installed each day, from Monday to Thursday. How many computers are now in the server room?
A: There are 4 days from Monday to Thursday. 5 computers were added each day. That means in total 4 * 5 = 20 computers were added. There were 9 computers in the beginning, so now there are 9 + 20 = 29 computers. The answer is 29.

Q: Michael had 58 golf balls. On Tuesday, he lost 23 golf balls. On Wednesday, he lost 2 more. How many golf balls did he have at the end of Wednesday?
A: Michael initially had 58 balls. He lost 23 on Tuesday, so after that he has 58 - 23 = 35 balls. On Wednesday he lost 2 more, so now he has 35 - 2 = 33 balls. The answer is 33.

Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?
A: She bought 5 bagels for $3 each. This means she spent $15. She has $8 left.

Q: I am thinking of a number. If you add 10 to it, it becomes 30. What is the number?
A:

OUTPUTS:

OUTPUT 1: If we add 10 to your number, it becomes 30. So, your number must be 30 - 10 = 20. The number you're thinking of is 20.
OUTPUT 2: If you add 10 to a number and it becomes 30, then the number must be 30 - 10 = 20. The answer is 20.
OUTPUT 3: If you add 10 to a number and it becomes 30, the original number is 30 - 10 = 20. The number you're thinking of is 20.
OUTPUT 4: To find your number, we reverse the addition. If adding 10 makes it 30, then the original number is 30 - 10 = 20. The number you're thinking of is 20.
OUTPUT 5: To find the number you're thinking of, we subtract 10 from 30. So the number is 30 - 10 = 20. The answer is 20.

How to Decide Answers Based on Self-Consistency

  1. Review Established Facts: Whether dealing with AI responses or crafting a story, first review the facts, character profiles, or previous responses to ensure alignment.
  2. Apply Logical Reasoning: Use logical reasoning based on the context and the rules of the environment (AI logic, story world) to determine the most consistent outcome or response.
  3. Consider Character or Entity History: In narrative contexts, factor in the character’s background, previous decisions, and growth to inform consistent future actions.
  4. Maintain World Consistency: Ensure that responses or plot developments adhere to the established norms and laws of the world or system they belong to.
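The voting step behind self-consistency is simple to sketch in code: extract the final answer from each sampled output and keep the majority. The answer-extraction heuristic below is our own, shown on made-up samples:

```python
import re
from collections import Counter

def extract_answer(text):
    """Pull the last number out of a model response (a simple heuristic)."""
    numbers = re.findall(r"\d+", text)
    return numbers[-1] if numbers else None

def self_consistent_answer(samples):
    """Majority vote over the final answers of several reasoning paths."""
    votes = Counter(extract_answer(s) for s in samples)
    return votes.most_common(1)[0][0]

samples = [
    "So, your number must be 30 - 10 = 20. The number you're thinking of is 20.",
    "The number must be 30 - 10 = 20. The answer is 20.",
    "Subtracting 10 from 30 gives 19. The answer is 19.",  # a faulty path
]
print(self_consistent_answer(samples))  # → 20
```

Even with one faulty reasoning path in the mix, the majority vote still lands on the correct answer, which is exactly the robustness self-consistency is after.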

Note: Check out the Paper for more information.

Tree of Thoughts Prompting

Tree of Thought Prompting is another variant of the chain-of-thought prompting technique that aims to guide the language model’s generation process.

Here, the model is prompted with a hierarchical structure of sub-prompts, which allows for more complex and interconnected reasoning. Such advanced prompting techniques have become imperative in enhancing the understanding and reasoning capabilities of large language models.

(Image Source)

Chain-of-thought (CoT) prompting was proposed to address cases where the mapping of input x to output y is non-trivial (e.g. when x is a maths question and y is the final numerical answer).

You can refer to the Tree of Thought Prompt Template, where the author discusses how to apply the tree-of-thought prompting technique to language models.

Retrieval Augmented Generation

Retrieval Augmented Generation (RAG) represents a cutting-edge approach in artificial intelligence, merging the strengths of two distinct methods: retrieval-based techniques and generative modelling. Essentially, RAG enhances a Large Language Model (LLM) by empowering it to access and utilise external data sources.

When tasked with generating a response, the LLM first searches for and retrieves relevant information from a designated knowledge base.

This knowledge base can vary, ranging from the vast expanse of the Internet to more specialised, closed databases. Once the pertinent information is retrieved, the LLM integrates it to construct responses that are not only coherent but also contextually accurate and informed by the data gathered.

This process enables the model to provide answers that are both precise and relevant, grounded in the information obtained from the chosen knowledge sources.

The following diagram illustrates the working of RAG:

(Image Source)

You can refer to this Research Paper to learn more about RAG.
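The retrieve-then-generate loop can be sketched with a tiny in-memory knowledge base. Real systems use vector search over embeddings; the word-overlap scoring and the two documents below are illustrative stand-ins only:

```python
KNOWLEDGE_BASE = [
    "The Eiffel Tower was completed in 1889 and is about 330 metres tall.",
    "Photosynthesis converts carbon dioxide and water into glucose.",
]

def retrieve(query, docs):
    """Return the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def rag_prompt(query):
    """Splice the retrieved passage into the prompt before the question."""
    context = retrieve(query, KNOWLEDGE_BASE)
    return f"Context: {context}\n\nQuestion: {query}\nAnswer:"

prompt = rag_prompt("How tall is the Eiffel Tower?")
print(prompt)
```

The assembled prompt, rather than the bare question, is what gets sent to the LLM, so its answer is grounded in the retrieved passage.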

Tokenization

Tokenization is a fundamental aspect of how models like GPT (Generative Pre-trained Transformer) process and generate text. Here’s an overview:

What is Tokenization?

1. Basic Definition: Tokenization is the process of breaking down text into smaller units, called tokens. In the context of language models like GPT, these tokens can be words, parts of words, or even individual characters.

2. Purpose: The main goal of tokenization is to convert the raw text into a format that the model can understand and process. This allows the model to analyse and generate language based on the input it receives.

Types of Tokenization in GPT Models

  • Byte Pair Encoding (BPE):
    • How it Works: BPE is a common tokenization method used in earlier versions of GPT. It starts with a base vocabulary of individual characters and then iteratively combines the most frequent pairs of tokens to form new, larger tokens.
    • Advantages: This method is efficient at handling a wide range of vocabulary, including rare words or names, as it can break them down into smaller, known sub-tokens.
  • WordPiece:
    • Similar to BPE: It’s another subword tokenization algorithm similar to BPE but with some differences in how the tokens are selected during training.
  • Unigram Language Model:
    • Used in Later GPT Versions: This is a more advanced tokenization technique that uses a probabilistic language model to determine the most likely segmentation of a sequence of characters into tokens.
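The core BPE idea from the list above, repeatedly fusing the most frequent adjacent pair into a new token, can be shown in a single training step. This is a toy sketch on a made-up string, not the actual GPT vocabulary:

```python
from collections import Counter

def bpe_merge_step(tokens):
    """One BPE training step: fuse the most frequent adjacent pair."""
    pairs = Counter(zip(tokens, tokens[1:]))
    if not pairs:
        return tokens
    (a, b), _ = pairs.most_common(1)[0]
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
            merged.append(a + b)  # fuse the pair into one token
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

tokens = list("hug bug mug")     # start from individual characters
tokens = bpe_merge_step(tokens)  # "u" + "g" is the most frequent pair
print(tokens)  # → ['h', 'ug', ' ', 'b', 'ug', ' ', 'm', 'ug']
```

Running further merge steps would keep growing the vocabulary with larger subword units, which is how rare words end up decomposed into known pieces.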

Understanding GPT’s Tokenization with Token Counting

After discussing the tokenization methods used in GPT models, it’s crucial to understand how these tokens are counted, especially since token counts can directly impact the model’s processing and output capabilities.

Counting Tokens: A Practical Example

Let’s illustrate token counting with a practical example. Consider the following paragraph:

Please reply to all my messages as if you are a character from the anime series 'Dragon Ball Z.' Use either their actual lines or your best imitation of how they might respond, choosing the character whose response fits best with my message. Where applicable, include references to events from their life to strengthen your point. Begin your response by stating the character's name followed by a colon.

This paragraph, when processed through GPT’s tokenizer, results in a total of 75 tokens. This count can be obtained using tokenization tools provided by platforms like OpenAI; for instance, OpenAI’s tokenizer, available at OpenAI Tokenizer, can be used for this purpose.
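When a tokenizer tool isn’t at hand, a commonly cited rule of thumb is that one token corresponds to roughly four characters of English text. A sketch of that planning heuristic (an approximation only, not the real tokenizer):

```python
def estimate_tokens(text):
    """Rough token estimate: ~4 characters per token (ceiling division)."""
    return -(-len(text) // 4)

msg = ("Please reply to all my messages as if you are a character "
       "from the anime series 'Dragon Ball Z.'")
print(estimate_tokens(msg))
```

Such estimates are only useful for budgeting prompt length in advance; for exact counts, always use the model’s own tokenizer.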

Why Token Count Matters
  1. Model Limitations: GPT models have a maximum token limit for each input and output. Understanding token count helps in structuring inputs and anticipating the length of outputs.
  2. Efficiency and Performance: More tokens mean more data for the model to process, which can impact response time and computational resources.
  3. Quality of Generation: The division of text into tokens affects how well the model understands and generates responses. Mis-tokenization or excessive token usage can lead to less coherent outputs.
  4. Subword Information: In cases of subword tokenization (like BPE), knowing the token count also reveals how often the model needs to break down words into smaller pieces, which can be indicative of the complexity of the text or the efficiency of the tokenization method.

In conclusion, tokenization and token counting are integral to understanding and effectively using GPT models. The way text is broken down into tokens directly affects the model’s understanding and output. By counting tokens, as shown in the example above, we can gain insights into how the model processes language and optimise our use of these powerful AI tools.
