Best Practices for Writing to an LLM for Clear and Accurate Responses

Learn how to craft precise prompts for LLMs to unlock clearer, more accurate responses and enhance your interactions with AI tools.

Large Language Models (LLMs) like ChatGPT are powerful tools, but they work best with well-crafted prompts. Here's how to get clear and accurate responses:

Quick Tips

Be specific, provide context, define the output format you want, and refine iteratively. By following these steps, you can guide LLMs to deliver more reliable and useful outputs.

Video: Master the Perfect ChatGPT Prompt Formula

How LLMs Work with User Input

To craft effective prompts, it helps to understand how LLMs process your input. These models generate text by analyzing token patterns and calculating probabilities, so even identical inputs can sometimes produce varying outputs. By knowing how LLMs handle information, you can refine your prompts to get better results.

Text Processing Basics

LLMs break down your input into smaller units called tokens, then use neural networks to identify patterns and generate text. While they excel at recognizing patterns and creating coherent responses, they don't truly "understand" context or maintain long-term consistency.

For instance, in May 2023, a New York lawyer submitted a legal brief drafted with ChatGPT that cited fabricated court cases. This highlights the importance of carefully crafting prompts, and verifying outputs, to minimize errors.

Here’s how different prompting techniques can improve accuracy:




| Technique | Accuracy Improvement |
| --- | --- |
| Role Assignment | 10.3% |
| Detailed Descriptions | 20–50% |
| Chain-of-Thought | 50–100% |
| Few-shot Prompting | 26.28% |



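The techniques in the table are concrete prompt structures, and they combine well. As an illustrative sketch (the function and its parameters are invented for this article, not a library API), here is how role assignment, few-shot examples, and a chain-of-thought cue might be assembled into one prompt:

```python
def build_prompt(role=None, examples=None, task="", step_by_step=False):
    """Assemble a prompt combining role assignment, few-shot examples,
    and a chain-of-thought cue. Illustrative only, not a library API."""
    parts = []
    if role:
        parts.append(f"You are {role}.")  # role assignment
    for question, answer in examples or []:  # few-shot examples
        parts.append(f"Q: {question}\nA: {answer}")
    parts.append(task)
    if step_by_step:  # chain-of-thought cue
        parts.append("Think through this step by step before answering.")
    return "\n\n".join(parts)

prompt = build_prompt(
    role="an experienced tax accountant",
    examples=[("Is a home office deductible?",
               "Yes, if the space is used exclusively for business.")],
    task="Can I deduct mileage for my commute?",
    step_by_step=True,
)
print(prompt)
```

Each technique is a small, independent addition, which is why they stack: a role plus one worked example plus a reasoning cue costs only a few extra lines of prompt.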
The Role of Clear Instructions

The structure and clarity of your instructions play a big role in getting precise outputs. Well-crafted prompts act as a guide, helping the model stay focused and reducing errors.

Research on GPT-3.5-turbo shows that proper prompt formatting can boost performance by up to 40% in tasks like code translation.


"The true power of AI isn't just in the models themselves, but in how we guide them to think and respond".


"Prompt engineering is no longer just a hack - it's a fundamental discipline for reliably controlling AI outputs." - Brian Muthama.

To get the most accurate results, give modern LLMs instructions that are specific, well-structured, and explicit about the desired output.

Professor Søren Dinesen Østergaard and Kristoffer Nielbo point out that what we often call "hallucinations" in AI are better described as "false responses". By mastering clear and structured inputs, you can significantly improve the reliability of LLM outputs.

Writing Clear Prompts for LLMs

Use Exact Details

When crafting prompts, precision is key. Adding specific details helps the model understand your needs better. For example, CalebCooks.com improved its content generation by including context like this: "CalebCooks.com is Caleb Smith's cooking blog... You are given a post's title and content, and you write its teaser paragraph. The goal is to convince readers to click".


"Employees do better when they have more business context. The same is true of LLMs! To do its best work, the LLM needs to know why it's being prompted, where its input came from, how its output will be used, and how its output will be judged."

The more precise you are, the easier it becomes to include broader background information naturally.
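That kind of business context can be templated so every request carries it. A minimal sketch following the CalebCooks.com example above; the helper name and template wording are made up for illustration:

```python
SYSTEM_TEMPLATE = (
    "{business} You are given a post's title and content, and you write "
    "its teaser paragraph. The goal is to {goal}."
)

def teaser_system_prompt(business, goal):
    # Lead with business context so the model knows why it is being prompted.
    return SYSTEM_TEMPLATE.format(business=business, goal=goal)

msg = teaser_system_prompt(
    "CalebCooks.com is Caleb Smith's cooking blog.",
    "convince readers to click",
)
print(msg)
```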

Add Background Information

Providing background details helps define the scope of your request. Here are some key elements to include:




| Context Element | Purpose | Example |
| --- | --- | --- |
| Industry Focus | Narrows the scope | "For the renewable energy sector" |
| Time Frame | Sets boundaries | "Developments in the last 5 years" |
| Target Audience | Shapes the tone | "For small business owners" |
| End Goal | Directs the output | "For a conference presentation" |



Instead of asking vague questions like, "What are the latest marketing trends?" try something more specific: "I am preparing a presentation for a marketing conference on the latest trends this year".

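These context elements can be slotted into a prompt mechanically. A minimal sketch, with the wording invented for illustration:

```python
# The four context elements: industry, time frame, audience, and end goal.
context = {
    "industry": "the renewable energy sector",
    "time_frame": "the last 5 years",
    "audience": "small business owners",
    "goal": "a conference presentation",
}

prompt = (
    f"Summarize the latest marketing trends for {context['industry']}, "
    f"focusing on developments in {context['time_frame']}, "
    f"written for {context['audience']}, "
    f"to be used in {context['goal']}."
)
print(prompt)
```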
Once you've established the context, clearly defining the response format ensures the output matches your expectations.

Choose Response Format

Clarifying the desired output format helps guide the model's response. Use clear action verbs and specify details like style, length, and tone.


"Being too vague is like asking a friend, 'What's up?' and expecting a life story. LLMs need details to give you a good answer."

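One practical way to make a format request enforceable is to ask for JSON and validate the reply before using it. A sketch, assuming the model returns raw JSON; the schema here is invented for illustration:

```python
import json

FORMAT_SPEC = (
    "Respond only with JSON matching this schema: "
    '{"summary": string, "bullet_points": [string], "tone": "formal" or "casual"}'
)

def is_valid_response(text):
    """Check a model reply against the requested format (a simple sketch)."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        return False
    if not isinstance(data, dict):
        return False
    return (
        isinstance(data.get("summary"), str)
        and isinstance(data.get("bullet_points"), list)
        and data.get("tone") in {"formal", "casual"}
    )

good = '{"summary": "Q3 grew 12%.", "bullet_points": ["Revenue up"], "tone": "formal"}'
print(is_valid_response(good))
```

If validation fails, the cheapest fix is usually to re-send the same prompt with the format spec repeated, rather than trying to repair the malformed reply.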

Improving Your Prompts

Starting with clear prompts is essential, but refining them can take your LLM responses to the next level.

Fine-tuning Your Questions

Improving your prompts often requires making small, targeted changes based on the model's initial output. If the response misses the mark, tweaking the prompt can lead to much better results.

Here’s a quick guide to refining prompts effectively:




| Issue | Strategy | Example |
| --- | --- | --- |
| Too Technical | Add audience context | "Simplify for a high school audience" |
| Too Vague | Include specific metrics | "Provide 3-5 concrete examples with percentages" |
| Too Broad | Set clear boundaries | "Focus only on developments from 2024-2025" |
| Incorrect Focus | Clarify priorities | "Emphasize cost-effectiveness over speed" |




"The core idea is as simple as it is powerful - a feedback loop where the model's own performance informs the next version of its prompt." - Mustafa Ak, Data Scientist at Microsoft Data and Places

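That feedback loop can be as simple as mapping a critique of the last output to a targeted addition to the prompt. A sketch; the critique labels mirror the refinement table above and nothing here is a real library API:

```python
def refine(prompt, critique):
    """Append a targeted instruction based on a critique of the last output.

    The critique labels mirror the refinement strategies in this article;
    illustrative only, not a real library API.
    """
    fixes = {
        "too technical": "Simplify for a high school audience.",
        "too vague": "Provide 3-5 concrete examples with percentages.",
        "too broad": "Focus only on developments from 2024-2025.",
        "wrong focus": "Emphasize cost-effectiveness over speed.",
    }
    return f"{prompt} {fixes[critique]}"

v2 = refine("Explain our cloud migration options.", "too technical")
print(v2)
```

In a real pipeline the critique would come from a human review or an automated evaluation of the model's last answer, and the loop would repeat until the output passes.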
When working on more complicated topics, a structured approach can make all the difference.

Handling Complex Topics

Breaking down a complex query into smaller, manageable parts helps ensure accurate and relevant responses. The B.R.E.A.K. framework is one helpful method for tackling intricate topics.


"Test your prompts regularly. Testing will make sure your outputs are accurate, relevant and cost-efficient (and that you minimize unnecessary API calls)." - Lina Lam, Helicone

To handle complex queries effectively, stick to a systematic process: decompose the question, test each part, and refine based on the results.

Common Prompt Writing Mistakes

Avoiding common mistakes when crafting prompts can greatly improve your interactions with LLMs.

Unclear Language

Using vague language can lead to responses that miss the mark. When prompts lack detail, the model has to guess, often resulting in irrelevant or unhelpful outputs.

Here are some examples of unclear prompts and how to make them more precise:




| Unclear Prompt | Issue | Improved Prompt |
| --- | --- | --- |
| "Optimize the website" | Too general, no specific goal | "Optimize the website's largest contentful paint (LCP) to be under 2.5 seconds on mobile devices" |
| "Fix the issues in the app" | Doesn't specify the problem | "Resolve the crash issue on the login screen when invalid credentials are entered" |
| "Help debug this code" | Lacks context or details | "Help debug this Python script for a Django web application, which throws an 'IntegrityError' when adding new users to the database" |



Being specific ensures the model understands the task and delivers more useful results. Similarly, avoid overloading your prompt with too much information.

Too Many Questions at Once

Prompts should focus on one question at a time. Asking multiple questions in a single prompt can lead to incomplete or scattered responses.

For example:


"What are the main features of artificial intelligence, how does it impact business, and what are the ethical concerns?"

Instead, break it into simpler, focused prompts:

- "What are the main features of artificial intelligence?"
- "How does artificial intelligence impact business?"
- "What are the ethical concerns around artificial intelligence?"

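Splitting and sequencing a compound question can also be automated, so each answer feeds context into the next prompt. A sketch; `ask` stands in for any LLM call and is not a real API:

```python
focused = [
    "What are the main features of artificial intelligence?",
    "How does artificial intelligence impact business?",
    "What are the ethical concerns around artificial intelligence?",
]

def ask_in_sequence(questions, ask):
    """Send one focused question at a time, feeding each answer forward.

    `ask(question, history)` is a placeholder for any LLM call; `history`
    carries the earlier (question, answer) pairs as context.
    """
    history, answers = [], []
    for question in questions:
        answer = ask(question, history)
        history.append((question, answer))
        answers.append(answer)
    return answers
```

Each prompt stays focused, and the model still sees the earlier answers, which is usually closer to what the asker of the original compound question wanted.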
Summary and Next Steps

Improve your prompt-writing skills through consistent practice. Studies show that using emotional language can enhance results. Here's a simple framework to help refine your prompts:




| Stage | Action | Expected Outcome |
| --- | --- | --- |
| Initial Draft | Write a straightforward prompt with clear instructions | A basic response addressing the main request |
| Context Enhancement | Add background details and examples | A more precise and focused output |
| Format Specification | Specify the desired structure for the response | A well-organized and properly formatted result |
| Testing | Review the response for quality and accuracy | Identify areas for improvement |
| Refinement | Make adjustments based on test results | Improved quality and relevance of output |



Use this framework as a checklist to refine and improve your prompts step by step.


"With GPT, it's to your advantage to make the prompt longer. That prompt - or what is it you're asking GPT - requires accuracy and contextual information. That's where we find the magic." - John Nosta, President, NostaLab

For practical applications, focus on specific tasks like text generation, editing, or summarization. Be clear about the tone, style, and structure you want to guide the model's responses.

When evaluating prompts, consider factors such as factual accuracy, relevance, efficiency, and precision.

Research also supports a structured approach. For instance, Google Research highlights that using Chain of Thought (CoT) prompting - breaking tasks into smaller, logical steps - can significantly enhance output quality, especially for complex problems.

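A Chain-of-Thought cue can be as small as a wrapper that appends explicit intermediate steps to any task. A minimal sketch; the step wording is illustrative:

```python
def chain_of_thought(task):
    """Wrap a task with explicit intermediate steps (Chain-of-Thought prompting)."""
    return (
        f"{task}\n\n"
        "Work through this in order:\n"
        "1. Restate the problem in your own words.\n"
        "2. Break it into smaller sub-problems.\n"
        "3. Solve each sub-problem, showing your reasoning.\n"
        "4. Combine the results into a final answer."
    )

cot_prompt = chain_of_thought("Estimate the ROI of our email campaign.")
print(cot_prompt)
```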
For professional settings, use prompt version control and thorough testing to maintain consistent and dependable results. By following these steps, you can refine your prompts and achieve more accurate, effective responses from language models.
