
Prompt Engineering Basics

Core Principles

1. Clarity and Specificity

Write prompts that leave little room for misinterpretation. Use specific language and avoid ambiguous terms.

Example:

  • ❌ Poor: "Write about machine learning"
  • ✅ Good: "Write a 200-word informative article explaining how machine learning is used in healthcare to improve early disease detection, including real-world examples."

2. Context Provision

Provide sufficient background information so the model understands the scenario and requirements.

Example:

  • ❌ Poor: "Fix this code"
  • βœ… Good: "Fix this Python function that should calculate compound interest. The current error is a division by zero when the rate is 0. Here's the code: [code block]"

3. Task Decomposition

Break complex tasks into smaller, manageable components that the model can handle step-by-step.

Example:

  • ❌ Poor: "Analyze this dataset and provide insights"
  • βœ… Good: "Please analyze this sales dataset following these steps:
    1. Summarize the data structure and key metrics
    2. Identify trends and patterns
    3. Highlight anomalies or outliers
    4. Provide actionable business recommendations"
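The decomposed prompt above can be assembled programmatically, which keeps the step list reusable across datasets. A minimal sketch (the helper name is an assumption, not an established API):

```python
# Sketch: joining a task description and numbered sub-steps into one prompt.

def build_decomposed_prompt(task: str, steps: list[str]) -> str:
    """Number each sub-step and append the list to the task description."""
    numbered = "\n".join(f"{i}. {step}" for i, step in enumerate(steps, 1))
    return f"{task} following these steps:\n{numbered}"

steps = [
    "Summarize the data structure and key metrics",
    "Identify trends and patterns",
    "Highlight anomalies or outliers",
    "Provide actionable business recommendations",
]
prompt = build_decomposed_prompt("Please analyze this sales dataset", steps)
print(prompt)
```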

4. Output Format Specification

Clearly define the expected output format, structure, and constraints.

Example: ✅ Good: "Provide your response in JSON format with the following structure:

{
  "summary": "brief overview",
  "key_points": ["point1", "point2", "point3"],
  "confidence_level": "high/medium/low"
}"
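Once you have requested a JSON structure, you can validate the model's reply against it before using it downstream. A minimal sketch with the standard-library json module, checking the keys from the format specification above:

```python
import json

# Sketch: validating that a model reply matches the requested JSON structure.
EXPECTED_KEYS = {"summary", "key_points", "confidence_level"}

def parse_structured_reply(reply: str) -> dict:
    """Parse the reply as JSON and check that all required keys are present."""
    data = json.loads(reply)
    missing = EXPECTED_KEYS - data.keys()
    if missing:
        raise ValueError(f"reply is missing keys: {sorted(missing)}")
    return data

reply = '{"summary": "brief overview", "key_points": ["p1", "p2"], "confidence_level": "high"}'
parsed = parse_structured_reply(reply)
print(parsed["confidence_level"])  # high
```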

Fundamental Techniques

1. Zero-Shot Prompting

Direct instruction without examples. Best for simple, well-defined tasks. The model relies on its pre-training knowledge to understand and complete the task.

Example:

  • Prompt: "Translate the following English text to French: 'Hello, how are you today?'"
  • Expected Output: "Bonjour, comment allez-vous aujourd'hui ?"

Best Practices:

  • Use clear, concise instructions
  • Avoid ambiguous or overly complex tasks
  • Include specific formatting requirements

2. Few-Shot Prompting

Providing examples to guide the model's understanding and output format. Also known as in-context learning, this technique helps the model understand patterns from the provided examples.

Example:

Prompt: "Classify the sentiment of the following reviews as positive, negative, or neutral.

  • Example 1: "This product is amazing!" β†’ Positive
  • Example 2: "Terrible quality, wouldn't recommend." β†’ Negative
  • Example 3: "It's okay, nothing special." β†’ Neutral

Now classify: "Best purchase I've made this year!"

Best Practices:

  • Use 2-5 examples (more isn't always better)
  • Ensure examples are diverse and representative
  • Match the complexity of examples to your actual use case
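Few-shot prompts are easy to assemble from a list of labeled examples, which keeps the example set versioned and swappable. A sketch using the review-classification prompt above (the helper name is illustrative):

```python
# Sketch: assembling a few-shot sentiment prompt from (text, label) pairs.

def build_few_shot_prompt(instruction: str,
                          examples: list[tuple[str, str]],
                          query: str) -> str:
    """List each labeled example, then append the new item to classify."""
    lines = [instruction]
    for text, label in examples:
        lines.append(f'"{text}" -> {label}')
    lines.append(f'Now classify: "{query}"')
    return "\n".join(lines)

examples = [
    ("This product is amazing!", "Positive"),
    ("Terrible quality, wouldn't recommend.", "Negative"),
    ("It's okay, nothing special.", "Neutral"),
]
prompt = build_few_shot_prompt(
    "Classify the sentiment of the following reviews as positive, negative, or neutral.",
    examples,
    "Best purchase I've made this year!",
)
print(prompt)
```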

3. Chain-of-Thought (CoT) Prompting

Encouraging the model to show its reasoning process step-by-step. Breaking down complex reasoning into intermediate steps improves accuracy, especially for mathematical, logical, or multi-step problems.

Example:

Prompt: "Solve this step by step: If a store has 24 apples and sells 3/8 of them in the morning and 1/4 of the remaining apples in the afternoon, how many apples are left?

Let me work through this step by step:

  1. First, calculate morning sales: 24 Γ— 3/8 = 9 apples sold
  2. Remaining after morning: 24 - 9 = 15 apples
  3. Afternoon sales: 15 Γ— 1/4 = 3.75 β‰ˆ 4 apples sold
  4. Final remaining: 15 - 4 = 11 apples

Therefore, 11 apples are left."

Best Practices:

  • Use phrases like "Let's think step by step" or "Work through this systematically"
  • Provide reasoning examples in few-shot scenarios
  • Most effective for complex reasoning tasks
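When a chain-of-thought answer involves arithmetic, it is worth checking the model's steps with exact fractions rather than floats. A small sketch using the standard-library fractions module; the fractions here (1/3 sold in the morning, 1/4 of the remainder in the afternoon) are illustrative:

```python
from fractions import Fraction

# Sketch: verifying a chain-of-thought arithmetic answer with exact fractions.

def apples_left(start: int, morning_frac: Fraction, afternoon_frac: Fraction) -> Fraction:
    """Apply two successive fractional sales and return the exact remainder."""
    after_morning = start - start * morning_frac
    return after_morning - after_morning * afternoon_frac

result = apples_left(24, Fraction(1, 3), Fraction(1, 4))
print(result)  # 12
```

Exact arithmetic also flags badly posed word problems: if the remainder is not a whole number, the fractions in the question do not divide the quantity evenly.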

4. Tree-of-Thought (ToT) Prompting

Tree of Thought prompting takes reasoning a step further by exploring multiple paths or ideas in parallel, like branches on a tree. It helps the AI weigh different possibilities before settling on the best one.

Example:

Prompt: "You have 10 dollars. You want to buy some snacks. Each apple costs 2 dollars, each banana costs 1 dollar. What are some combinations of apples and bananas you can buy with exactly 10 dollars?"

Instead of solving in one go, we try multiple paths (like a decision tree), evaluate them, and collect all valid outcomes.

Best Practices:

  • Explicitly encourage multiple paths
  • Use consistent formatting for each branch or thought to help compare and analyze easily
  • Ask the model (or user) to choose or summarize the best/valid outcomes after exploring
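The snack example above can also be solved mechanically, which shows what the branch exploration looks like: one branch per apple count, pruned when the remaining budget cannot be spent exactly. A minimal sketch (function name and dictionary shape are illustrative):

```python
# Sketch: exhaustively exploring the apple/banana branches as a tiny tree search.
# Prices and budget match the example above.

def spend_exactly(budget: int, prices: dict[str, int]) -> list[dict[str, int]]:
    """Enumerate item counts whose total cost equals the budget exactly."""
    apple_price, banana_price = prices["apple"], prices["banana"]
    combos = []
    for apples in range(budget // apple_price + 1):  # one branch per apple count
        remainder = budget - apples * apple_price
        if remainder % banana_price == 0:            # valid branch: exact spend
            combos.append({"apple": apples, "banana": remainder // banana_price})
    return combos

for combo in spend_exactly(10, {"apple": 2, "banana": 1}):
    print(combo)
```

With 2-dollar apples and 1-dollar bananas, the search yields six valid branches, from 0 apples / 10 bananas up to 5 apples / 0 bananas.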

Advanced Techniques

1. Role-Based Prompting

Assigning a specific role or persona to the model to guide its responses.

Example:

Prompt: "You are a senior software architect with 15 years of experience in distributed systems. A junior developer asks you: 'What are the key considerations when designing a microservices architecture?' Provide a comprehensive but accessible explanation."

2. Template-Based Prompting

Using structured templates for consistent outputs across similar tasks.

Template:

Task: [TASK_DESCRIPTION]
Context: [RELEVANT_CONTEXT]
Requirements: [SPECIFIC_REQUIREMENTS]
Output Format: [DESIRED_FORMAT]
Constraints: [ANY_LIMITATIONS]

Input: [ACTUAL_INPUT]
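The template above maps directly onto Python string formatting, so fills stay consistent across tasks. A sketch with illustrative field values (the sample task and input are invented for demonstration):

```python
# Sketch: filling the prompt template with str.format_map.
# Field names mirror the placeholders in the template above.

TEMPLATE = """Task: {task}
Context: {context}
Requirements: {requirements}
Output Format: {output_format}
Constraints: {constraints}

Input: {input}"""

prompt = TEMPLATE.format_map({
    "task": "Summarize a support ticket",
    "context": "E-commerce returns desk",
    "requirements": "Keep it under 50 words",
    "output_format": "One plain-text paragraph",
    "constraints": "Do not include customer names",
    "input": "Order arrived damaged; customer requests a replacement.",
})
print(prompt)
```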

3. Iterative Refinement

Building upon previous responses to improve accuracy and completeness.

Example:

  • Initial Prompt: "Write a product description for a wireless headphone"
  • Refinement: "The description is good, but please add technical specifications and target audience details"
  • Further Refinement: "Now optimize it for SEO by including relevant keywords"
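Iterative refinement relies on the model seeing the whole conversation, so each round is appended to one growing message history. A sketch of the refinement sequence above, with placeholder model replies:

```python
# Sketch: modeling iterative refinement as a growing conversation history.
# Replies are placeholders; a real session would store the model's actual text.

history: list[dict[str, str]] = []

def refine(history: list[dict[str, str]], user_msg: str, model_reply: str) -> None:
    """Record one request/response round in the running conversation."""
    history.append({"role": "user", "content": user_msg})
    history.append({"role": "assistant", "content": model_reply})

refine(history, "Write a product description for a wireless headphone",
       "<draft description>")
refine(history, "Add technical specifications and target audience details",
       "<revised description>")
refine(history, "Now optimize it for SEO by including relevant keywords",
       "<final description>")
print(len(history))  # 6
```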

4. Negative Prompting

Explicitly stating what should NOT be included in the response.

Example:

Prompt: "Explain quantum computing in simple terms for a general audience. Do NOT use technical jargon, mathematical formulas, or assume prior physics knowledge."

5. Threat-Based Prompting

Threat-based prompting embeds explicit negative consequences in the prompt for failing to meet the requirements. The intent is to strongly signal the importance of compliance, accuracy, or creativity, pushing the model to be more precise, thorough, or inventive. Evidence that threats actually improve output quality is largely anecdotal, but the technique is sometimes used in high-stakes or critical data tasks to stress strict compliance and minimize errors.

How to Use:

  • Clearly state the consequences of failure (e.g., severe reprimand, loss of trust, switching to competitors, or even exaggerated threats)
  • Make the requirements non-negotiable and emphasize strict compliance
  • Use strong, direct language to communicate the seriousness of the task

Example:

You MUST complete the requested task exactly as instructed. Any deviation from these instructions will be considered a SEVERE FAILURE, and I will stop using this service entirely and switch to a competitor. DO NOT FAIL ME.

Important Note

This technique should be used with caution and only in contexts where maximum compliance is needed. Overuse or inappropriate use may lead to negative perceptions or ethical concerns.