PromptHub Prompt Engineering Method Templates
Templates for the most popular and effective prompt engineering methods
Multi-persona collaboration
A new prompting method that instructs the LLM to create multiple personas to work together to complete the task
Tree of Thoughts
A prompting method that instructs the LLM to traverse many different paths when completing a task. Movie recommender is used as the example task. Update the variables and steps for your use case.
"According to..." prompting
A prompting method that reduces hallucinations by grounding responses in pre-training data. Update the source and question variable to run this prompt.
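As a rough illustration, the grounding directive could be appended to a question like this; the exact wording and the `source` default here are assumptions for the sketch, not the PromptHub template itself:

```python
def according_to_prompt(question: str, source: str = "Wikipedia") -> str:
    """Append a grounding directive that steers the model toward a
    named source from its pre-training data."""
    return (f"{question} Respond using only information that can be "
            f"attributed to {source}.")
```

Swapping `source` lets you ground the same question in different corpora.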
Skeleton of Thought
Skeleton of Thought (SoT) typically uses 2 parallel prompts. This one-shot prompt merges them: first forming a task skeleton, then filling it in. Just update the question and run the prompt!
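A minimal sketch of how the two merged stages could be laid out in one prompt; the template wording below is an assumption for illustration, not the PromptHub template itself:

```python
SOT_TEMPLATE = """Answer the question in two stages within one response.
Stage 1 - Skeleton: list 3-5 concise bullet points outlining the answer.
Stage 2 - Expansion: expand each skeleton point into a short paragraph.

Question: {question}"""

def skeleton_of_thought_prompt(question: str) -> str:
    """Merge SoT's two parallel prompts (skeleton, then fill-in)
    into a single one-shot prompt."""
    return SOT_TEMPLATE.format(question=question)
```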
AutoHint
This template corresponds to Step 3 in the AutoHint framework. It's designed to generate a broad hint based on incorrect input/output pairs, which can then be added to the original prompt to increase accuracy.
Algorithm of Thoughts
Input your prompt in the variable and it will be converted into a new prompt following the Algorithm of Thoughts framework. A final, cohesive prompt will appear below the AoT framework output.
EmotionPrompt
A prompting method that uses emotional statements to yield better results. Just add the emotional statement at the end of your prompt. Read more about it on our blog.
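The mechanic is simple enough to sketch in a few lines; the default stimulus below is one reported in the EmotionPrompt paper, but treat the exact wording as an assumption:

```python
def emotion_prompt(prompt: str,
                   stimulus: str = "This is very important to my career.") -> str:
    """EmotionPrompt: append an emotional stimulus to the base prompt."""
    return f"{prompt} {stimulus}"
```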
Step-Back Prompting
A prompting method that encourages the model to take a step back before diving into a task or question.
Chain of Density
Chain of Density prompting generates 5 increasingly detailed summaries. Research suggests the third summary most closely resembles human-written summaries in information density.
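The densification loop could be wired up like this sketch, where `summarize_fn` stands in for an LLM call and the rewrite instruction is an assumption, not the PromptHub template:

```python
def chain_of_density(summarize_fn, article: str, rounds: int = 5):
    """Run `rounds` densification passes over an article, each pass
    rewriting the previous summary to pack in more entities."""
    summaries, prev = [], ""
    for _ in range(rounds):
        prompt = (
            f"Article:\n{article}\n\n"
            f"Previous summary:\n{prev}\n\n"
            "Rewrite the summary at the same length, adding 1-3 salient "
            "entities from the article that it currently misses."
        )
        prev = summarize_fn(prompt)
        summaries.append(prev)
    return summaries
```

With 5 rounds, `summaries[2]` is the third (research-favored) summary.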
Chain of Verification
CoVe, typically a two-prompt method, can also function effectively with just one prompt, still helping to reduce hallucinations.
Semantic Alternative Enhancer
Optimize longer prompts using methods backed by research
RecPrompt
The base prompt used in the RecPrompt framework. A great starting point if you are building any sort of recommendation system on top of an LLM. We've added some structural enhancements to better distinguish different parts of the prompt.
Analogical Prompting
Auto-generate CoT examples
Universal Self-Consistency
Applicable to a wide range of tasks, including free-form answers. USC is typically run multiple times to generate several outputs and then select the most consistent one; this template is a starting point for understanding the method.
Self-Consistency
While the prompt is typically run multiple times to generate separate answers, this template gives a starting point for understanding the method.
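The sample-then-vote loop could be sketched like this; `sample_fn` is a placeholder for an LLM call at temperature > 0 and is an assumption, not part of the template:

```python
from collections import Counter

def self_consistency(sample_fn, prompt: str, n: int = 5):
    """Sample n reasoning chains via sample_fn and majority-vote
    the final answers."""
    answers = [sample_fn(prompt) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]
```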
Program of Thoughts
Template to generate the code portion of the Program of Thoughts (PoT) prompting method
Least-to-most step 2
A generalizable prompt for stage 1 of least-to-most prompting, where the problem is broken down into subproblems.
Least-to-most step 3
A generalizable prompt for stage 2 of least-to-most prompting, where the subproblems are solved.
Least-to-most step 1
Generates few-shot examples, for any task, to show the model how to decompose problems.
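The decompose-then-solve pipeline behind these least-to-most templates could be wired together like this sketch; `decompose_fn` and `solve_fn` are placeholders for the two LLM calls and are assumptions, not part of the templates:

```python
def least_to_most(decompose_fn, solve_fn, problem: str):
    """Stage 1: decompose the problem into subproblems.
    Stage 2: solve subproblems in order, passing earlier
    (subproblem, answer) pairs as context to later ones."""
    subproblems = decompose_fn(problem)
    context = []
    for sub in subproblems:
        context.append((sub, solve_fn(sub, context)))
    return context[-1][1]  # answer to the final (hardest) subproblem
```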
Contrastive Chain-of-Thought
Contrastive CoT prompting involves adding both correct and incorrect examples to a Chain-of-Thought prompt to help the model learn by contrasting faulty reasoning with correct logic.
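A minimal sketch of how the contrasting pair could be assembled into a prompt; the section labels below are an assumption for illustration, not the PromptHub template:

```python
def contrastive_cot_prompt(question: str, correct: str, incorrect: str) -> str:
    """Pair a correct reasoning chain with a flawed one so the model
    can contrast them before answering."""
    return (
        "Correct reasoning example:\n" + correct + "\n\n"
        "Incorrect reasoning example (do not reason like this):\n"
        + incorrect + "\n\n"
        "Question: " + question + "\nThink step by step."
    )
```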
System 2 Attention
System 2 Attention prompting guides the model to remove unnecessary or irrelevant information from the input before processing, ensuring a focus on the most relevant details.
Thread-of-Thought
Ideal for situations with large context, Thread of Thought helps the model maintain a coherent line of thought across many messages.
Zero-Shot CoT
The simplest way to implement Chain-of-Thought reasoning. Just add language that prompts the model to demonstrate reasoning.
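The whole technique fits in one line; "Let's think step by step." is the trigger phrase from the original Zero-Shot CoT work:

```python
def zero_shot_cot(question: str) -> str:
    """Build a Zero-Shot CoT prompt by appending the reasoning trigger."""
    return f"{question}\n\nLet's think step by step."
```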
Few-Shot CoT
Provide the model with a few examples that demonstrate ideal reasoning chains.
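One way the exemplars could be stitched into a prompt, sketched with an assumed (question, reasoning, answer) triple format rather than the template's exact layout:

```python
def few_shot_cot_prompt(examples, question: str) -> str:
    """examples: list of (question, reasoning, answer) triples
    demonstrating ideal reasoning chains."""
    parts = [
        f"Q: {q}\nA: {reasoning} The answer is {answer}."
        for q, reasoning, answer in examples
    ]
    parts.append(f"Q: {question}\nA:")  # the model completes this
    return "\n\n".join(parts)
```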
Faithful CoT
Faithful Chain-of-Thought ensures reasoning chains accurately reflect the model's thought process by converting natural language queries into symbolic reasoning chains with Python, and then uses a deterministic solver to find the final answer.
Tabular CoT
Tabular Chain-of-Thought directs the model to present its reasoning in a structured format, such as markdown tables.
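A sketch of the idea; the table header below is an assumption modeled on the Tab-CoT paper's layout, not the PromptHub template itself:

```python
TAB_COT_HEADER = "|step|subquestion|process|result|"

def tabular_cot_prompt(question: str) -> str:
    """Ask for reasoning laid out as a markdown table, one row per step."""
    return (f"{question}\n\nShow your reasoning as a markdown table "
            f"with the header:\n{TAB_COT_HEADER}")
```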
LLMCompare
LLMCompare evaluator, specifically for summary evaluation
o1-preview reasoning
Turn GPT-4 into o1-preview through a structured, step-by-step reasoning process.
Claude 3 Haiku SP
The system prompt used to power Claude 3 Haiku in the Claude.ai interface
Claude 3 Opus SP
The system prompt used to power Claude 3 Opus in the Claude.ai interface
Claude 3.5 Sonnet SP
The system prompt used to power Claude 3.5 Sonnet in the Claude.ai interface
ChatGPT-4o SP
The system prompt used to power ChatGPT when using 4o
Meta Prompt Conductor
Note that this is the prompt for the meta-model that acts as the conductor; it does not optimize prompts itself.
OpenAI SI Generator
Based on a bit of prompt injection, we believe this is the prompt behind the new OpenAI System Instructions generator.
Auto ICL Step 1
This prompt guides the model to generate diverse, well-structured input-output demonstrations to be used in a second prompt as few-shot examples
Auto ICL Step 2
This prompt is designed to use demonstrations generated in a previous prompt to guide the model's reasoning when solving a new question/task
ExpertPrompt
From the ExpertPrompt framework, this template will generate expert agent personas for any given task, leveraging In-Context Learning (ICL)
Persona Generator
Persona Generator from the Jekyll & Hyde framework. Generate a persona for any given task. Output is in JSON.
AutoReason
Generates Chain of Thought reasoning traces for any task. Feed the reasoning steps and the original query back in to generate better outputs.
Chain of Thought Critic
Critics generate Chain of Thought steps, serving as an additional layer of reasoning to enhance LLM outputs before final processing.
