Tree of Thoughts
A prompting method that instructs the LLM to explore multiple reasoning paths before completing a task. A movie recommender is used as the example task; update the variables and steps for your use case.
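As a rough illustration (not the PromptHub template itself), the sketch below builds a single-prompt Tree-of-Thoughts-style request for the movie-recommender example; the expert count and wording are assumptions.

```python
# Illustrative single-prompt Tree-of-Thoughts-style template (wording is an assumption).

TOT_TEMPLATE = """Imagine three different experts are recommending a movie for this request:
{request}

Each expert writes down one step of their reasoning, then shares it with the group.
The group compares the branches, prunes any line of reasoning that no longer fits
the request, and continues until one recommendation clearly wins.
Finally, state the winning recommendation and why the other branches were discarded."""

def tree_of_thoughts_prompt(request: str) -> str:
    return TOT_TEMPLATE.format(request=request)

print(tree_of_thoughts_prompt("A feel-good sci-fi movie for a family night"))
```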
Algorithm of Thoughts
Input your prompt in the variable and it will be converted into a new prompt following the Algorithm of Thoughts framework. A final, cohesive prompt will appear below the AoT framework output.
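This is not the converter template described above, but a minimal sketch of the AoT idea: wrap a task in instructions that ask the model to run an in-context, search-like exploration of subproblems. The wording is an assumption.

```python
# Hypothetical AoT-style wrapper: asks the model to explore subproblems like a
# depth-first search within a single generation. Wording is illustrative only.

AOT_WRAPPER = """Solve the task below by exploring it like a search algorithm:
1. Break the task into promising subproblems.
2. Expand the most promising subproblem first; if it leads to a dead end, backtrack
   and expand the next candidate.
3. Keep a running record of visited branches so you do not repeat them.
4. When a branch yields a complete solution, verify it and return it.

Task: {task}"""

def algorithm_of_thoughts_prompt(task: str) -> str:
    return AOT_WRAPPER.format(task=task)
```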
Chain of Verification
CoVe, typically a two-prompt method, can also function effectively with just one prompt, still helping to reduce hallucinations.
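A minimal single-prompt sketch of that idea, collapsing the usual CoVe stages (draft, verification questions, independent checks, revised answer) into one instruction; the exact wording is an assumption.

```python
# Single-prompt Chain-of-Verification sketch (the method is often split across two prompts).

COVE_TEMPLATE = """Question: {question}

1. Draft an initial answer.
2. Write a short list of verification questions that would expose errors in that draft.
3. Answer each verification question independently, without assuming the draft is correct.
4. Produce a final, revised answer that is consistent with the verification answers."""

def cove_prompt(question: str) -> str:
    return COVE_TEMPLATE.format(question=question)
```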
Program of Thoughts
A template to generate the code portion of the Program of Thoughts (PoT) prompting method.
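A sketch of how that code portion is typically used, assuming a hypothetical `call_llm` helper: the model emits Python that computes the answer, and the host program executes it rather than trusting free-text arithmetic.

```python
# Program-of-Thoughts sketch: the model writes Python, the host executes it.
# `call_llm` is a hypothetical helper for whatever model/API you use.

POT_TEMPLATE = """Read the problem and write Python code that computes the answer.
Store the final answer in a variable named `answer`. Return only code.

Problem: {problem}"""

def program_of_thoughts(problem: str, call_llm):
    code = call_llm(POT_TEMPLATE.format(problem=problem))
    namespace: dict = {}
    exec(code, namespace)  # execute the generated program (sandbox this in practice)
    return namespace["answer"]
```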
Contrastive Chain-of-Thought
Contrastive CoT prompting involves adding both correct and incorrect examples to a Chain-of-Thought prompt to help the model learn by contrasting faulty reasoning with correct logic.
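A minimal sketch with one correct and one incorrect worked example; the arithmetic demonstration is illustrative, not the PromptHub template.

```python
# Contrastive Chain-of-Thought sketch: pair a correct reasoning chain with a faulty one.

CONTRASTIVE_COT = """Example question: I have 3 apples and buy 2 more. How many apples do I have?

Correct reasoning: Start with 3 apples, add the 2 purchased, 3 + 2 = 5. Answer: 5.
Incorrect reasoning: Multiply the numbers, 3 * 2 = 6. Answer: 6.
(The incorrect reasoning fails because buying apples adds to the total, it does not multiply it.)

Now answer the following question with correct reasoning, avoiding mistakes like the one above.
Question: {question}"""

def contrastive_cot(question: str) -> str:
    return CONTRASTIVE_COT.format(question=question)
```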
Thread-of-Thought
Ideal for situations with large context, Thread of Thought helps the model maintain a coherent line of thought across many messages.
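A minimal sketch that appends a walk-through trigger to a long context; the phrasing approximates the trigger proposed in the Thread of Thought paper.

```python
# Thread-of-Thought sketch: append a walk-through trigger to a long, messy context.
# The trigger phrase is an approximation of the one proposed in the ThoT paper.

def thread_of_thought_prompt(context: str, question: str) -> str:
    return (
        f"{context}\n\n"
        f"Question: {question}\n"
        "Walk me through this context in manageable parts step by step, "
        "summarizing and analyzing as we go."
    )
```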
Zero-Shot CoT
The simplest way to implement Chain-of-Thought reasoning. Just add language that prompts the model to demonstrate reasoning.
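A minimal sketch using the widely cited zero-shot trigger sentence:

```python
# Zero-shot Chain-of-Thought: append a reasoning trigger to the question.

def zero_shot_cot(question: str) -> str:
    return f"{question}\n\nLet's think step by step."
```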
Few-Shot CoT
Provide the model with a few examples that demonstrate ideal reasoning chains.
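A sketch with one worked demonstration; real few-shot CoT prompts usually include several, and the example problem here is illustrative.

```python
# Few-shot Chain-of-Thought sketch: show worked reasoning chains before the real question.

FEW_SHOT_COT = """Q: A train travels 60 km in 1 hour. How far does it travel in 2.5 hours?
A: The speed is 60 km per hour. In 2.5 hours it covers 60 * 2.5 = 150 km. The answer is 150 km.

Q: {question}
A:"""

def few_shot_cot(question: str) -> str:
    return FEW_SHOT_COT.format(question=question)
```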
Faithful CoT
Faithful Chain-of-Thought ensures reasoning chains accurately reflect the model's thought process by converting natural language queries into symbolic reasoning chains in Python and then running a deterministic solver to produce the final answer.
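A sketch of that split, assuming a hypothetical `call_llm` helper: the model only translates the question into a small Python program, and deterministic execution (here plain `exec`) produces the answer, so the stated reasoning chain is exactly what gets executed.

```python
# Faithful CoT sketch: the reasoning chain is a symbolic program; the answer comes from
# deterministic execution rather than from the model's free-text reasoning.
# `call_llm` is a hypothetical helper for your model of choice.

TRANSLATE_TEMPLATE = """Translate the question into Python. Each line should be a named
intermediate step with a comment explaining it. Assign the final result to `answer`.
Return only code.

Question: {question}"""

def faithful_cot(question: str, call_llm):
    program = call_llm(TRANSLATE_TEMPLATE.format(question=question))
    namespace: dict = {}
    exec(program, namespace)  # deterministic solver step (sandbox in practice)
    return namespace["answer"]
```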
Tabular CoT
Tabular Chain-of-Thought directs the model to present its reasoning in a structured format, such as markdown tables.
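A minimal sketch of a Tab-CoT-style instruction; the column names follow a common |step|subquestion|process|result| layout and are an assumption, not the PromptHub template.

```python
# Tabular Chain-of-Thought sketch: reasoning is written as a markdown table.

def tabular_cot(question: str) -> str:
    return (
        f"{question}\n\n"
        "Reason through the problem using a markdown table with the columns\n"
        "|step|subquestion|process|result|, one row per step, then state the final answer."
    )
```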
Auto Chain-of-Thought
Auto Chain-of-Thought automatically constructs reasoning demonstrations: it generates Chain-of-Thought chains for a set of sample questions and then uses those chains as few-shot examples for the target question.
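A rough sketch of that loop, assuming a hypothetical `call_llm` helper and skipping the diversity-based clustering used in the original Auto-CoT method.

```python
# Auto-CoT sketch: build few-shot demonstrations by running zero-shot CoT over a pool of
# questions, then prepend them to the target question. `call_llm` is a hypothetical helper;
# the original method also clusters questions to pick diverse demonstrations.

def build_demonstrations(question_pool: list[str], call_llm) -> str:
    demos = []
    for q in question_pool:
        chain = call_llm(f"{q}\n\nLet's think step by step.")
        demos.append(f"Q: {q}\nA: {chain}")
    return "\n\n".join(demos)

def auto_cot_prompt(question_pool: list[str], question: str, call_llm) -> str:
    return f"{build_demonstrations(question_pool, call_llm)}\n\nQ: {question}\nA:"
```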
AutoReason
Generate Chain-of-Thought reasoning traces for any task, then combine the reasoning steps with the original query to produce better outputs.
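A two-stage sketch matching that description, with `call_llm` as a hypothetical helper: the first call produces reasoning steps for the query, the second call answers with those steps in context.

```python
# AutoReason sketch: generate reasoning steps first, then answer with them in context.
# `call_llm` is a hypothetical helper for your model of choice.

def autoreason(query: str, call_llm) -> str:
    steps = call_llm(
        f"List the reasoning steps needed to answer the following query, "
        f"without answering it yet:\n{query}"
    )
    return call_llm(
        f"Query: {query}\n\nReasoning steps to follow:\n{steps}\n\n"
        "Using these steps, give the final answer."
    )
```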
Auto CoT Generation
Generate a Chain-of-Thought reasoning chain for your prompts.
Chain of Thought Critic
Critics generate Chain-of-Thought steps, serving as an additional layer of reasoning that enhances LLM outputs before final processing.
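One possible reading of that pipeline, sketched with a hypothetical `call_llm` helper and a hypothetical draft output: a critic call reasons step by step about the draft, and a final call revises the output using that reasoning.

```python
# Chain-of-Thought critic sketch: a critic pass produces reasoning steps that the final
# pass consumes before producing the answer. `call_llm` and the draft are hypothetical.

def cot_critic(task: str, draft: str, call_llm) -> str:
    critique = call_llm(
        f"Task: {task}\nDraft output: {draft}\n"
        "As a critic, reason step by step about what is wrong or missing in the draft."
    )
    return call_llm(
        f"Task: {task}\nDraft output: {draft}\nCritic's reasoning:\n{critique}\n\n"
        "Produce an improved final output that addresses the critic's reasoning."
    )
```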
DeepSeek-R1 training template
The prompt template used to generate reasoning chains during the training of DeepSeek-R1.