Multi Prompting Guide

Image credit: hbr.org

Here I cover a detailed, beginner-friendly explanation of some of the most powerful prompting techniques.

Zero Shot Prompting:

In zero shot prompting, the model receives a direct instruction without any examples. It’s like asking a question to someone who’s never seen a similar one before and hoping they understand the task from just the description. For example, asking, “Classify the sentence: I think the vacation is okay” would lead the model to guess “Neutral” without any prior reference. This works well for straightforward tasks but can struggle with complex reasoning.
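The sentiment example above can be sketched as a zero-shot prompt in code. This is a minimal illustration; `build_zero_shot_prompt` is a hypothetical helper invented here, not part of any library, and sending the string to a model is out of scope.

```python
def build_zero_shot_prompt(sentence: str) -> str:
    """Return a single-instruction prompt with no worked examples."""
    return (
        "Classify the sentiment of the sentence as Positive, Negative, or Neutral.\n"
        f"Sentence: {sentence}\n"
        "Sentiment:"
    )

prompt = build_zero_shot_prompt("I think the vacation is okay")
print(prompt)
```

Note that the prompt contains only the instruction and the input: the model must infer the task entirely from the description, which is exactly why zero-shot can struggle on complex reasoning.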

Few Shot Prompting:

Few shot prompting improves performance by including a handful of examples in the prompt. These examples help the model understand the structure and logic needed to answer correctly. For instance, if you want the model to use a made-up word in a sentence, showing one or two similar samples lets it generate the correct output using the same pattern. However, even with examples, the model may still falter in more complicated logic tasks, like math or reasoning.
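The made-up-word example can be sketched as follows. The words, definitions, and the `build_few_shot_prompt` helper are all invented here for illustration; only the prompt string is built.

```python
def build_few_shot_prompt(examples, new_word, new_definition):
    """Show worked examples first, then ask for one more in the same pattern."""
    lines = []
    for word, definition, sentence in examples:
        lines.append(f'A "{word}" is {definition}.')
        lines.append(f"Example sentence: {sentence}")
    lines.append(f'A "{new_word}" is {new_definition}.')
    lines.append("Example sentence:")  # the model completes this line
    return "\n".join(lines)

examples = [
    ("blorfin", "a tiny umbrella for drinks",
     "She topped the lemonade with a blorfin."),
]
prompt = build_few_shot_prompt(examples, "trandle", "a wobbly three-legged stool")
print(prompt)
```

Because the worked example and the query share the same layout, the model can copy the pattern rather than guess the task from scratch.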

Chain of Thought (CoT) Prompting:

I have already covered this technique in my blog post on reinforcement learning in AI (please refer to https://www.karthikanav.com/2025/05/20/reinforcement-learning-in-ai/). Chain of thought prompting guides the model to break a problem into steps before reaching an answer. Instead of just stating the result, it shows its reasoning, like solving a math word problem step by step. DeepSeek, known for its strengths in reasoning and mathematical problem solving, often demonstrates CoT behavior even in zero shot settings. In ChatGPT, when you toggle the “Deep Thinking” mode (available in some ChatGPT Pro plans), it internally shifts toward a more deliberate, CoT-like reasoning strategy, making it a great real-world example of CoT prompting in action. These methods help improve accuracy, especially in tasks requiring multiple reasoning steps. For example, to answer how many tennis balls Roger has, the model walks through each part: the starting amount, the amount added, and then the total.
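The tennis-ball walk-through can be turned into a CoT prompt like this. Only the prompt string is constructed; `build_cot_prompt` is a hypothetical helper, and sending the prompt to a model is out of scope.

```python
# One worked example whose answer includes the reasoning, not just the result.
COT_EXAMPLE = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend a step-by-step example so the model imitates the reasoning."""
    return COT_EXAMPLE + f"Q: {question}\nA:"

print(build_cot_prompt(
    "A baker has 4 trays of 6 cookies and sells 5. How many are left?"
))
```

The key difference from plain few-shot prompting is that the example's answer spells out the intermediate steps (5 balls, plus 2 × 3 = 6, so 11), so the model is nudged to reason before concluding.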

Tree of Thought Prompting:

Tree of thought prompting expands CoT by exploring multiple reasoning paths at once. Think of it like brainstorming different solutions before settling on one. This helps in situations where there’s more than one way to solve a problem or where initial ideas may be flawed. It’s useful for creative writing, planning, or problem-solving tasks where the first answer might not be the best.
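The branch-and-prune idea behind tree of thought can be sketched as a small search loop. This is a toy illustration, not a real implementation: `propose` and `score` are hypothetical callbacks that would normally query a model for candidate next thoughts and rate partial reasoning paths.

```python
def tree_of_thought(problem, propose, score, breadth=2, depth=2):
    """Breadth-limited search over partial reasoning paths.

    propose(problem, path) -> list of candidate next thoughts (strings)
    score(problem, path)   -> float, higher means more promising
    """
    frontier = [""]  # each entry is a partial chain of reasoning
    for _ in range(depth):
        candidates = [
            path + thought + "\n"
            for path in frontier
            for thought in propose(problem, path)
        ]
        # keep several promising branches instead of committing to one
        candidates.sort(key=lambda path: score(problem, path), reverse=True)
        frontier = candidates[:breadth]
    return frontier[0]
```

Unlike CoT, which commits to a single chain, the frontier keeps the top `breadth` partial paths alive at every step, so a flawed first idea can be outscored by an alternative branch.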

DiVeRSe Prompting:

DiVeRSe prompting (short for Diverse Verifier on Reasoning Steps) improves reliability by sampling multiple phrasings or styles of the same prompt, generating several candidate answers, and then scoring or voting on them. This keeps the model from getting stuck on a single narrow interpretation and lets it respond more flexibly. It’s especially helpful when prompting for multilingual or culturally varied audiences.
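The multiple-phrasings-then-aggregate idea can be sketched like this. `paraphrase` and `answer` are hypothetical callbacks (in practice both would call a model); here the aggregation step is a simple majority vote rather than a trained verifier.

```python
from collections import Counter

def diverse_answer(question, paraphrase, answer):
    """Ask several phrasings of the same question, then majority-vote."""
    prompts = paraphrase(question)            # multiple phrasings of one question
    answers = [answer(p) for p in prompts]    # one candidate answer per phrasing
    return Counter(answers).most_common(1)[0][0]
```

Because each phrasing can lead the model down a different path, a wrong reading of the question tends to be outvoted by the phrasings it interpreted correctly.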

Function Calling + Tool Use (Structured Prompting):

Function calling is a structured prompting technique where large language models (LLMs) don’t just respond with text; they trigger real functions, APIs, or tools based on how the prompt is structured. This method is gaining traction because it shifts LLMs from being just language generators to becoming active agents that can interact with external systems. For example, in OpenAI’s function calling, you define a function (like get_weather(city)) and, instead of guessing the weather, the model calls the function to fetch real-time data for that location. Similarly, LangChain and LlamaIndex let you chain together prompts, tools, and memory, so the model can retrieve documents, query databases, or run code. While it’s not a prompting style like few shot or chain of thought, it’s a paradigm shift – structuring prompts to invoke functions or tools, making LLMs more like autonomous assistants. This technique is essential for building AI agents, chatbots, and enterprise applications.
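The get_weather example can be sketched as a tool schema plus a dispatcher. The schema follows the general shape used by OpenAI-style function calling; the weather function is a local stand-in (no real API is called), and the final tool-call dict mimics what a model would emit instead of free text.

```python
import json

def get_weather(city: str) -> str:
    # Stand-in for a real weather API; a real app would hit an endpoint here.
    return f"22°C and sunny in {city}"

# Tool schema describing the function to the model (name, purpose, parameters).
WEATHER_TOOL = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

FUNCTIONS = {"get_weather": get_weather}

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call (name + JSON arguments) to local code."""
    fn = FUNCTIONS[tool_call["name"]]
    return fn(**json.loads(tool_call["arguments"]))

# Instead of answering in prose, the model emits a structured call like this:
print(dispatch({"name": "get_weather", "arguments": '{"city": "Chennai"}'}))
# → 22°C and sunny in Chennai
```

In a full loop, the function’s return value is appended to the conversation and the model is asked again, so it can phrase the real data as a natural-language answer.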

SimToM Prompting:

SimToM (Simulated Theory of Mind) prompting encourages the model to simulate different roles, perspectives, or scenarios before responding. It’s like asking the model to “act” as a doctor or a customer before answering. This helps it adapt better to domain-specific problems or user personas, and it’s widely used in chatbots, simulations, and expert systems.
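The act-first-then-answer idea can be sketched as a two-stage prompt builder. `simtom_prompts` is a hypothetical helper: the first prompt has the model establish the persona's perspective, and the second has it answer from that perspective (each string would be sent to the model in turn).

```python
def simtom_prompts(persona: str, question: str):
    """Two-stage perspective prompting: take the perspective, then answer from it."""
    perspective = (
        f"You are a {persona}. Briefly state what you, as a {persona}, "
        f"know and care about regarding this question: {question}"
    )
    answer = (
        f"Staying in character as a {persona}, and using only that "
        f"perspective, answer: {question}"
    )
    return perspective, answer

stage1, stage2 = simtom_prompts("doctor", "Is this rash serious?")
print(stage1)
print(stage2)
```

Splitting the role-taking from the answering keeps the model anchored in the persona, instead of drifting back to a generic voice mid-response.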

Style Prompting:

Style prompting steers the model to respond in a specific tone, voice, or format. For example, you can ask for a Shakespearean version of a news headline or a tweet written like a pirate. This is commonly used in branding, storytelling, and content generation to maintain consistency across outputs.
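A style prompt is essentially a template that separates the content from the voice. `style_prompt` is a hypothetical helper sketching that split; the headline and styles are made up for illustration.

```python
def style_prompt(text: str, style: str) -> str:
    """Wrap the content in an explicit tone/voice instruction."""
    return (
        f"Rewrite the following in the style of {style}. "
        f"Keep the meaning unchanged.\n\n"
        f"Text: {text}"
    )

# The same content can be re-voiced by swapping only the style argument:
print(style_prompt("Markets fell sharply today.", "a pirate"))
print(style_prompt("Markets fell sharply today.", "Shakespeare"))
```

Keeping the style in a single parameter is what makes this useful for branding: every output goes through the same template, so the voice stays consistent.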

Final Note:

At the heart of it, prompting is all about context: understanding the task and the audience, and choosing the right approach to guide the model effectively. Whether you’re giving it examples, asking it to reason step by step, or connecting it with tools, the key is knowing what the model needs to perform at its best. There’s no one-size-fits-all; it’s about experimenting, adapting, and finding what works for your specific goal.

Author

Karthika Navaneethakrishnan