The Best Prompt Engineering Techniques For Getting The Most Out Of Generative AI
In today’s column, I am continuing my ongoing coverage of prompt engineering strategies and tactics that aid in getting the most out of using generative AI apps such as ChatGPT, GPT-4, Bard, Gemini, Claude, etc.
Here’s the deal this time.
I’ve carefully compiled a comprehensive list of the best-in-class prompt engineering techniques and associated skills, aiming to provide you with a quick and easy-to-understand explanation of each. I’ve also included handy-dandy online links in case you want to learn more. There are a ton of captivating nitty-gritty details that you can dig into.
All told, this is my all-in-one package for those of you who genuinely care about prompt engineering.
Loyal readers might remember my prior recap of prompt engineering techniques; see my detailed discussion at the link here, where I covered thirty keystone prompting approaches. You’ll be pleased and hopefully elated to know that this latest incarnation contains fifty essential prompting approaches and incorporates that prior coverage. As a side note, the twenty added techniques were each detailed in my column after that earlier all-in-one recap was posted.
Bottom line: You are in the right place now for the big picture on prompt engineering for generative AI.
I will proceed as follows.
First, I will do a brief overview of why prompt engineering is an essential skill when using generative AI. Next, I showcase in alphabetical order the fifty prompt engineering techniques that I believe encompass a full breadth of what any skilled prompt engineer ought to be aware of.
I’d wager that some of these fifty techniques are not well-known even by those who are profoundly interested in prompt engineering. Thus, to help out, I have provided at the end of this depiction a list of the Top 10 that I humbly proclaim every sincere and seriously studious prompt engineer ought to know. I guess you could say that the remaining forty beyond the Top 10 are icing on the cake. Well, maybe. I still earnestly believe that any good prompt engineer should at least be comfortably familiar with the whole kit and caboodle (i.e., all fifty techniques).
Okay, now then, if you are desirous of being an all-out prompt engineer, the best of the best, the top banana, here’s my lay-down-the-gauntlet challenge for you.
Are you ready for this?
I trust so.
Go through every prompt engineering technique that I lay out here. Make sure to use the provided online links and fully read the detailed indications that underpin each technique (no skipping, no idle eyeballing). Try extensively using the technique in your favored generative AI app. Quiz yourself to double-check that you really know how to use each technique. Be honest. Be fair and square.
Upon completion of that noble quest, I hereby confer upon you an honorary prompt engineering badge of honor and vociferously applaud you for your dedication and persistence in wanting to be the best of the best when it comes to prompt engineering.
Let’s get underway.
The Importance Of Prompt Engineering Is Woefully Understated
My golden rule about prompt engineering is this:
- The use of generative AI can altogether succeed or fail based on the prompt that you enter.
If you provide a prompt that is poorly composed, the odds are that the generative AI will wander all over the map and you won’t get anything substantive related to your inquiry.
Similarly, if you put distracting words into your prompt, the odds are that the generative AI will pursue an unintended line of consideration. For example, if you include words that suggest levity, there is a solid chance that the generative AI will seemingly go into a humorous mode and no longer emit serious answers to your questions (which you might intentionally aim to achieve, but it should not be something that catches you unawares or by utter surprise).
I have a helpful rule of thumb that I repeatedly cover in my classes on the core fundamentals of prompt engineering. Here it is. Be direct, be obvious, and generally avoid distracting wording when composing your prompts.
Voila, you’ve just learned something about prompting.
Indeed, that’s what prompt engineering is all about. The idea is to abide by tried-and-true prompting techniques, strategies, tactics, and the like, doing so to get the most you fruitfully can out of using generative AI. A whole gaggle of AI researchers have painstakingly sought to perform experiments and ascertain what kinds of prompts are useful. They have also generally identified prompts that are either not useful or, worse still, prompts that will waste your time and effort (plus, if you are paying to use generative AI, will needlessly cost you for the computational processing cycles wastefully consumed).
Getting back to my above recommendation about using direct wording for your prompts, terseness should be employed cautiously rather than mindlessly. You see, being overly sparse can be off-putting because the prompt lacks sufficient clues or information. When I say this, some eager beavers swing to the other side of the fence and go overboard in being verbose in their prompts. That’s a problem too. Amidst the morass of details, there is a chance that the generative AI will either get lost in the weeds or latch onto a particular word or phrase that causes a wild leap into some tangential realm.
Of course, I am not saying that you should never use detailed prompts. That’s silly. I am saying that you should use detailed prompts in sensible ways, such as telling the generative AI that you are going to include necessary details and forewarning the AI accordingly.
All in all, I advocate the decidedly sensible practice of the Goldilocks style of prompting. Do not have porridge that is too hot or too cold. The nature of your prompt should be just right, befitting the circumstance at hand. Thanks go to Goldilocks and those three bears for their clairvoyance about the future and the rise of generative AI and large language models (LLMs).
I like to emphasize at my speaking engagements that dealing with generative AI via prompts is like a box of chocolates. You never know exactly what you are going to get when you enter prompts. The generative AI is devised with a probabilistic and statistical underpinning, which pretty much guarantees that the output produced will vary each time. In the parlance of the AI field, we say that generative AI is considered non-deterministic.
My point is that, unlike other apps or systems that you might use, you cannot fully predict what will come out of generative AI when inputting a particular prompt. You must remain flexible. You must always be on your toes. Do not fall into the mental laziness of assuming that the generative AI output will always be correct or apt to your query. It won’t be.
Some naysayers opt to discard prompt engineering because the prompting techniques do not carry an ironclad guarantee of working perfectly each time. Those malcontents seem to dreamily believe that unless a fully predictable tit-for-tat exists, there is no point in learning about prompting. That’s the proverbial tossing-the-baby-out-with-the-bathwater mentality, and it misses the forest for the trees.
Prompt engineering gives you an edge. An important and quite productive edge.
The techniques and approaches of prompt engineering provide a fighting chance at getting things done efficiently and effectively while using generative AI. Sure, the precepts and recommendations are not a one-hundred percent assurance. Those who shrug their shoulders and settle for random attempts at prompting will end up getting their just deserts. They will likely spin their wheels endlessly and ultimately give up on generative AI in self-disgust. Rather than introspectively acknowledging that they opted to toss aside prompt engineering, they will likely bemoan that generative AI is confusing, confounding, and ought to be avoided.
The thing is, we aren’t going to be able to simply decide to avoid generative AI. Generative AI is becoming ubiquitous. Period, end of story.
Adapt or become roadkill.
There is another oft-speculated claim that there is no need to learn prompt engineering because advances in AI technology are going to make prompting obsolete.
Allow me a moment to address this expressed concern.
Some people say that there is no need to learn about composing good prompts. The usual rationale for this claim is that generative AI will be enhanced anyway by the AI makers such that your prompts will automatically be adjusted and improved for you. This capacity is at times referred to as adding a “trust layer” that surrounds the generative AI app; see my coverage at the link here.
The loudly aired opinion is that AI advances will soon be promulgated that can take the flimsiest of hand-composed prompts and still enable generative AI to figure out what you want to do. The pressing issue therefore is whether you are wasting your time by learning prompting techniques. It could be that you are only on a short-term ticking clock and that in a year or so the skills you honed in prompting will no longer be needed.
In my view, though I concur that we will be witnessing AI advances that tend toward helping interpret your prompts, I still believe that knowing prompt engineering is exceedingly worthwhile.
Here’s why.
First, you can instantly improve your results in today’s generative AI; thus, a speedy and valuable reward is found right at the get-go. The payoff is immediate.
Second, we don’t know how long it will take for the speculated AI advances to emerge and take hold. Those who avoid making prompting improvements of their own volition are going to be waiting on the edge of their seat for something that might be further in the future than is offhandedly proclaimed (a classic case of waiting for Godot).
Third, I would vigorously suggest that learning about prompting has an added benefit that few seem to be acknowledging. Knowing more about prompting provides a nearly surefire path to knowing more about how generative AI tends to respond. I am asserting that your mental model of the way generative AI works is enriched by studying and using prompting insights. The gist is that this makes you a better user of generative AI and prepares you for the continuing expansion of where generative AI will appear in our lives.
I hope the above has whetted your appetite for digging into my compiled list of the best of the best for prompt engineering techniques.
Comprehensive List Of Essential Prompt Engineering Techniques
Enough of the preparatory chitchat, some of you are perhaps thinking; let’s get down to brass tacks. Let’s see the bonanza. Show me or step away.
In alphabetical order and without further ado, I present fifty keystone prompt engineering techniques.
Each comes with a brief sentence or two explaining the essence of the technique. I also provide a handy link to the full-on details, including examples. In terms of the naming or phrasing of each technique, there isn’t a standardized, across-the-board accepted naming convention, thus I have used the names or phrases that I believe are most commonly utilized. The aim is to invoke a generalized indication so that you’ll be immediately in the right ballpark of what the technique references.
Have fun.
Here we go.
Add-On Prompting
You can use special add-ons that plug into generative AI and aid in either producing prompts or adjusting prompts. For various examples and further detailed indications about the nature and use of add-ons for prompting, see my coverage at the link here.
AI Hallucination Avoidance Prompting
One of the most pressing problems with generative AI is that the AI can computationally make up falsehoods that are portrayed as truths, an issue known as AI hallucinations (I disfavor the catchphrase because it tends to anthropomorphize AI, but it has unfortunately caught on as a phrase and we seem to be stuck with it). For various examples and further detailed indications about the nature of AI hallucinations, see my extensive coverage at the link here, the link here, and the link here.
Beat the “Reverse Curse” Prompting
Generative AI is known for having difficulties dealing with the reverse side of deductive logic, thus, make sure to be familiar with prompting approaches that can curtail or overcome the so-called “reverse curse”. For various examples and further detailed indications about the nature and use of beating the reverse curse prompting, see my coverage at the link here.
“Be On Your Toes” Prompting
The phrase “Be on your toes” can be used to stoke generative AI toward being more thorough when generating responses, though there are caveats and limitations that need to be kept in mind when using the prompting technique. For various examples and further detailed indications about “be on your toes” prompting, see my coverage at the link here.
Browbeating Prompts
A commonly suggested prompting technique consists of writing prompts that seek to browbeat or bully generative AI. You need to be cautious in using such prompts. For various examples and further detailed indications about browbeating prompting, see my coverage at the link here.
Catalogs Or Frameworks For Prompting
A prompt-oriented framework or catalog attempts to categorize and present to you the cornerstone ways to craft and utilize prompts. For various examples and further detailed indications about the nature and use of prompt engineering frameworks or catalogs, see my coverage at the link here.
Certainty And Uncertainty Prompting
You can explicitly indicate in your prompt that you want generative AI to emit a level of certainty or uncertainty when providing answers to your questions. For various examples and further detailed indications about the nature and use of the hidden role of certainty and uncertainty when prompting for generative AI, see my coverage at the link here.
Chain-of-Density (CoD) Prompting
A shrewd method of devising summaries involves a clever prompting strategy that aims to bolster generative AI toward attaining especially superb or at least better than usual kinds of summaries and is known as Chain-of-Density (CoD). For various examples and further detailed indications about the nature and use of CoD or chain-of-density prompting, see my coverage at the link here.
Chain-of-Feedback (CoF) Prompting
A variation on Chain-of-Thought (CoT) consists of the Chain-of-Feedback (CoF) prompting technique, which seems to reduce the degree of generative AI hallucinations. For various examples and further detailed indications about the nature and use of chain-of-feedback prompting, see my coverage at the link here.
Chain-of-Thought (CoT) Prompting
Chain-of-Thought (CoT) prompting has been heralded as one of the most important prompting techniques. When you enter a prompt, you invoke CoT by simply telling generative AI to work in a stepwise fashion. For various examples and further detailed indications about the nature and use of Chain-of-Thought (CoT) prompting, see my coverage at the link here.
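To make this concrete, here is a bare-bones sketch of invoking CoT programmatically. It assumes the OpenAI Python client with an API key already set; the model name and the sample question are merely placeholders, and any chat-capable generative AI would work the same way.

```python
# Chain-of-Thought sketch: the crux is simply instructing the AI to reason
# stepwise before giving its final answer.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

question = "A jacket costs $80 after a 20% discount. What was the original price?"

cot_prompt = (
    f"{question}\n\n"
    "Work through this step by step, showing each intermediate calculation, "
    "and then state the final answer on its own line."
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; substitute whatever you use
    messages=[{"role": "user", "content": cot_prompt}],
)
print(reply.choices[0].message.content)
```

The same stepwise instruction works just as well when typed directly into a chat session.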
Chain-of-Thought Factored Decomposition Prompting
You can supplement conventional Chain-of-Thought (CoT) prompting with an additional instruction that tells the generative AI to produce a series of questions and answers when doing the chain-of-thought generation. Your goal is to nudge or prod the generative AI to generate a series of sub-questions and sub-answers. For various examples and further detailed indications about the nature and use of chain-of-thought with factored decomposition, see my coverage at the link here.
Chain-of-Verification (CoV) Prompting
Chain-of-Verification (known as COVE or CoVe, though some also say CoV) is an advanced prompt engineering technique that in a series of checks-and-balances or double-checks tries to boost the validity of generative AI responses. For various examples and further detailed indications about the nature and use of CoV or chain-of-verification prompting, see my coverage at the link here.
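As a rough illustration of the checks-and-balances flow, here is a minimal verification loop. It assumes the OpenAI Python client; the model name, the helper function, and the sample question are placeholders of my own choosing, not a definitive implementation.

```python
# Chain-of-Verification sketch: draft an answer, generate fact-check questions,
# answer them independently, then revise the draft accordingly.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

def ask(prompt: str) -> str:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

question = "List three U.S. presidents who were born in Virginia."

draft = ask(question)
checks = ask(
    "Write a short numbered list of fact-check questions that would verify "
    f"this answer:\n\n{draft}"
)
check_answers = ask(f"Answer each of these questions independently:\n\n{checks}")
final = ask(
    f"Original question: {question}\nDraft answer: {draft}\n"
    f"Verification Q&A:\n{check_answers}\n\n"
    "Rewrite the answer, correcting anything the verification contradicts."
)
print(final)
```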
Conversational Prompting
Be a fluent and interactive prompter, while avoiding the myopic one-and-done mindset that many unfortunately seem to adopt when using generative AI. For various examples and further detailed indications about the nature and use of conversational prompting, see my coverage at the link here.
DeepFakes To TrueFakes Prompting
You undoubtedly know about deepfakes, while a different angle involves using generative AI to establish a truefake, namely a fake version of yourself that is “true” in the sense that you genuinely want your fake digital twin devised. For various examples and further detailed indications about the nature and use of going from Deepfakes to Truefakes via prompting, see my coverage at the link here.
Directional Stimulus Prompting (DSP) And Hints
Using subtle or sometimes highly transparent hints in your prompts is formally known as Directional Stimulus Prompting (DSP) and can substantially boost the generative AI responses. For various examples and further detailed indications about the nature and use of hints or directional stimulus prompting, see my coverage at the link here.
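A tiny sketch of the idea follows: the same summarization request, with and without a nudge. The hint keywords and the placeholder article text are made up for illustration, and the resulting prompt can be pasted into any generative AI app.

```python
# Directional stimulus sketch: append hint keywords so the summary gravitates
# toward the details you care about.
article = "...paste the text of the article you want summarized..."

plain_prompt = f"Summarize the following article in two sentences:\n\n{article}"

hints = "supply-chain delays; Q3 revenue; new CEO"  # illustrative hint keywords
hinted_prompt = (
    "Summarize the following article in two sentences. "
    f"Hint: make sure the summary touches on: {hints}.\n\n{article}"
)

print(hinted_prompt)  # send to your preferred generative AI app or API
```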
Disinformation Detection And Removal Prompting
The volume of disinformation and misinformation that society is confronting keeps growing and lamentably seems unstoppable. A notable means of coping consists of using generative AI as your preferred filter for detecting disinformation and misinformation. For various examples and further detailed indications about the nature and use of prompting to detect and mitigate the flow of misinformation and disinformation, see my coverage at the link here.
Emotionally Expressed Prompting
Does it make a difference to use emotionally expressed wording in your prompts when conversing with generative AI? The answer is yes. And there is a logical and entirely computationally sound reason for why generative AI “reacts” to your use of emotional wording. For various examples and further detailed indications about the nature and use of emotionally worded prompting, see my coverage at the link here.
End-Goal Prompting
A highly recommended prompting strategy consists of identifying your end goal up front while working in generative AI to solve or delve into a particular topic or problem of specific interest. For various examples and further detailed indications about end-goal prompting, see my coverage at the link here.
Essay-Compression Prompting
Sometimes instead of getting a summary, you want to have an essay compressed, meaning that it retains the same words as the original source but tosses out words that aren’t strictly needed. For various examples and further detailed indications about essay-compression prompting, see my coverage at the link here.
Fair-Thinking Prompting
You can use clever prompts that will get generative AI to lean in directions other than the already predisposed biases cooked into the AI, aiming to get a greater semblance of fairness in the generated responses. For various examples and further detailed indications about the nature and use of fair-thinking prompting, see my coverage at the link here.
Flipped Interaction Prompting
You can flip the script, as it were, getting generative AI to ask you questions rather than having you ask generative AI your questions. For various examples and further detailed indications about the nature and use of flipped interaction, see my coverage at the link here.
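Here is one plausible way such a flipped prompt might be worded; the goal in the example is invented and the phrasing is just a sketch.

```python
# Flipped-interaction sketch: the prompt tells the AI to do the asking.
goal = "plan a one-week beginner strength-training routine"

flipped_prompt = (
    f"I want to {goal}. Do not give me the plan yet. "
    "Instead, interview me: ask me one question at a time about my constraints, "
    "wait for my answer, and only produce the plan once you have what you need."
)
print(flipped_prompt)  # paste into any generative AI chat session
```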
Generating Prompts Via Generative AI
Rather than directly composing your prompts, you can ask generative AI to create your prompts for you. This requires knowing what kinds of prompting will get you the best AI-generated prompts. For various examples and further detailed indications about prompting generative AI to generate prompts, see my coverage at the link here.
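A quick sketch of the meta-prompt style appears below; the task named here is purely illustrative.

```python
# Meta-prompting sketch: ask the AI to write the prompt you will use next.
task = "get a rigorous critique of a draft business plan"

meta_prompt = (
    "Act as a prompt engineer. Write the best possible prompt I could give a "
    f"generative AI assistant to {task}. Return only the prompt itself, "
    "ready to copy and paste."
)
print(meta_prompt)
```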
Illicit Or Disallowed Prompting
Did you know that the licensing agreement of most generative AI apps says that you are only allowed to use the generative AI in various strictly stipulated ways? For various examples and further detailed indications about the nature of what is considered illicit prompts (i.e., that you aren’t supposed to use), see my coverage at the link here.
Imperfect Prompting
Imperfect prompts can be cleverly useful. For various examples and further detailed indications about the nature and use of imperfect prompts, see my coverage at the link here.
Importing Text As Prompting Skill
There are circumstances involving importing text into generative AI that require careful skill and necessitate the right types of prompts to get the text suitably brought in and properly infused. For various examples and further detailed indications about importing text prompting, see my coverage at the link here.
Interlaced Conversations Prompting
Most of the popular generative AI apps require that each conversation be distinct and separate from your other conversations with the AI. The latest trend entails allowing for the interlacing of conversations and requires rethinking how you compose your prompts. For various examples and further detailed indications about interlaced conversation prompting, see my coverage at the link here.
Kickstart Prompting
A wise move when prompting is to grease the skids or prime the pump, also known as kickstart prompting, which involves doing an initial prompt that gets generative AI into the groove of whatever topic or problem you want to have solved. For various examples and further detailed indications about the nature of kickstart prompting, see my coverage at the link here.
Least-to-Most Prompting
Least-to-Most prompting (LTM) is a technique that involves guiding generative AI to work on the least hard part first and then proceed to the harder part (an alternative approach is Most-to-Least or MTL prompting). For various examples and further detailed indications about the nature of LTM and MTL prompting, see my coverage at the link here.
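One plausible way to word an LTM-style prompt looks like this; the estimation problem is just a stand-in.

```python
# Least-to-Most sketch: ask for an easy-to-hard decomposition, then have the AI
# solve the pieces in that order, reusing earlier answers along the way.
problem = "Estimate how many piano tuners work in Chicago."

ltm_prompt = (
    f"Problem: {problem}\n\n"
    "Step 1: Break this into sub-questions, ordered from easiest to hardest.\n"
    "Step 2: Answer them in that order, reusing earlier answers as you go.\n"
    "Step 3: Combine the results into a final answer."
)
print(ltm_prompt)
```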
Macros In Prompts
Similar to using macros in spreadsheets, you can use macros in your prompts while working in generative AI. For various examples and further detailed indications about the nature and use of prompt macros, see my coverage at the link here.
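To give a feel for it, here is one way a prompt macro might be set up inside a conversation; the macro name and its expansion are invented for the example, and the linked coverage goes into the fuller approach.

```python
# Prompt-macro sketch: define a shorthand token once, then reuse it in later prompts.
macro_setup = (
    "For the rest of this conversation, whenever I write {EXEC_SUMMARY}, "
    "treat it as: 'end your answer with a three-bullet executive summary "
    "written in plain English.'"
)
later_prompt = "Explain zero-trust networking. {EXEC_SUMMARY}"

print(macro_setup)   # send this once, up front
print(later_prompt)  # send in a later turn; the AI applies the shorthand it was taught
```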
Mega-Personas Prompting
Mega-persona prompting is the upsizing of multi-persona prompting: you ask the generative AI to take on the pretense of perhaps thousands of pretend personas. For various examples and further detailed indications about the nature and use of mega-personas prompting, see my coverage at the link here.
Multi-Persona Prompting
Via multi-persona prompting, you can get generative AI to simulate one or more personas. For various examples and further detailed indications about the nature and use of multi-persona prompting, see my coverage at the link here.
Overcoming “Dumbing Down” Prompting
Knowing when to use succinct or terse wording (unfairly denoted as “dumbing down” prompting), versus using more verbose or fluent wording is a skill that anyone versed in prompt engineering should have in their skillset. For various examples and further detailed indications about the nature and use of averting the dumbing down of prompts, see my coverage at the link here.
Persistent Context And Custom Instructions Prompting
You can readily establish a context that will be persistent and ensure that generative AI has a heads-up on what you believe to be important, often set up via custom instructions. For various examples and further detailed indications about the nature and use of persistent context and custom instructions, see my coverage at the link here.
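In API terms, custom instructions typically ride along as a system message on every call, which is one simple way to make context persistent. The sketch below assumes the OpenAI Python client; the instructions, the question, and the model name are placeholders.

```python
# Persistent context sketch: the system message carries your standing instructions
# on every request, so you don't have to repeat them in each prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

CUSTOM_INSTRUCTIONS = (
    "I am a pediatric nurse. Keep answers practical, stick to well-known "
    "guidelines, and flag anything that needs a physician."
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},  # persists per call
        {"role": "user", "content": "How should I explain a fever to worried parents?"},
    ],
)
print(reply.choices[0].message.content)
```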
Plagiarism Prompting
Your prompts can by design or by happenstance stoke generative AI toward producing responses that contain plagiarized content. Be very careful since you might be on the hook for any liability due to plagiarism. For various examples and further detailed indications about the nature and use of prompts that might stir plagiarism, see my coverage at the link here and the link here.
Politeness Prompting
A surprising insight from research on generative AI is that prompts making use of “please” and “thank you” can stir the AI to produce better results. Make sure to use politeness while prompting, though do not go overboard and be judicious in such wording. For various examples and further detailed indications about politeness prompting, see my coverage at the link here.
Privacy Protection Prompting
Did you realize that when you enter prompts into generative AI, you are not usually guaranteed that your entered data or information will be kept private or confidential? For various examples and further detailed indications about the nature and use of prompts that might give away privacy or confidentiality, see my coverage at the link here.
Prompt Shields and Spotlight Prompting
Prompt shields and spotlight prompting have emerged due to the various hacking efforts trying to get generative AI to go beyond its filters and usual protections. Here’s a useful rundown of what you need to know. For various examples and further detailed indications about the nature of prompt shields and spotlighting prompting, see my coverage at the link here.
Prompt-To-Code Prompting
You can enter prompts that tell generative AI to produce conventional programming code and essentially write programs for you. For various examples and further detailed indications about the nature and use of prompting to produce programming code, see my coverage at the link here.
Retrieval-Augmented Generation (RAG) Prompting
Retrieval-augmented generation (RAG) is hot and continues to gain steam. You provide external text that gets imported and via in-context modeling augments the data training of generative AI. For various examples and further detailed indications about the nature and use of retrieval-augmented generation (RAG), see my coverage at the link here.
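Here is a toy sketch of the prompt-assembly side of RAG. Real systems use embeddings and a vector database; the naive keyword retrieval, the mini document set, and the question below are purely illustrative, just to show the shape of the resulting prompt.

```python
# Toy RAG sketch: retrieve a couple of relevant passages and paste them into
# the prompt as grounding context for the answer.
documents = [
    "Acme's 2023 return policy allows refunds within 60 days with a receipt.",
    "Acme stores are closed on national holidays.",
    "Gift cards at Acme never expire and can be used online.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Crude keyword-overlap scoring; a real pipeline would use embeddings.
    words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(words & set(d.lower().split())), reverse=True)
    return scored[:k]

question = "Can I get a refund on a gift card purchase from last month?"
context = "\n".join(retrieve(question, documents))

rag_prompt = (
    "Answer using only the context below. If the context is not enough, say so.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}"
)
print(rag_prompt)  # send to your generative AI app or API of choice
```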
Self-Reflection Prompting
You can enter a prompt into generative AI that tells the AI to essentially be (in a manner of speaking) self-reflective by having the AI double-check whatever result it has pending or has recently produced. For various examples and further detailed indications about the nature and use of AI self-reflection and AI self-improvement for prompting purposes, see my coverage at the link here.
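A minimal two-pass sketch of the idea follows, assuming the OpenAI Python client; the model name and the sample question are placeholders.

```python
# Self-reflection sketch: a second pass asks the AI to critique and revise
# its own first answer.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

def ask(prompt: str) -> str:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

question = "What are the tax implications of selling a rental property?"
first = ask(question)
revised = ask(
    f"Here is a draft answer to the question '{question}':\n\n{first}\n\n"
    "Double-check it for errors, omissions, and overconfident claims, "
    "then produce a corrected final answer."
)
print(revised)
```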
Show-Me Versus Tell-Me Prompting
Show-me consists of devising a prompt that demonstrates to the generative AI an indication of what you want (show it), while tell-me entails devising a prompt that gives explicit instructions delineating what you want to have done (tell it). For various examples and further detailed indications about the nature and use of the show-me versus tell-me prompting strategy, see my coverage at the link here.
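The contrast is easiest to see side by side; the little slug-making task below is invented solely to show the two styles.

```python
# The same task phrased both ways. "Tell-me" spells out the instructions;
# "show-me" demonstrates with examples and lets the AI infer the pattern.
tell_me = (
    "Rewrite product names as lowercase slugs: strip punctuation and "
    "replace spaces with hyphens. Input: 'Acme Turbo-Widget 3000'"
)

show_me = (
    "Example: 'Super Gadget Pro!' -> 'super-gadget-pro'\n"
    "Example: 'Mega Mixer (2nd Gen)' -> 'mega-mixer-2nd-gen'\n"
    "Now: 'Acme Turbo-Widget 3000' ->"
)
print(tell_me)
print(show_me)
```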
Sinister Prompting
People are using sinister prompts to get generative AI to do foul things such as scams and the like. I don’t want you to do this, but I also think it is valuable for you to know what sinister prompts do and how they work, alerting you to avoid them and not inadvertently fall into the trap of one. For various examples and further detailed indications about the nature and use of sinister prompting, see my coverage at the link here.
Skeleton-of-Thought (SoT) Prompting
Via a prompt akin to Chain-of-Thought (CoT), you tell the generative AI to first produce an outline or skeleton for whatever topic or question you have at center stage, employing a skeleton-of-thought (SoT) method to do so. For various examples and further detailed indications about the nature and use of the skeleton-of-thought approach for prompt engineering, see my coverage at the link here.
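A stripped-down sketch of the two-stage flow follows, assuming the OpenAI Python client; the topic and model name are placeholders, and a production version would likely expand the skeleton points in parallel.

```python
# Skeleton-of-Thought sketch: get a bare outline first, then expand each point.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

def ask(prompt: str) -> str:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

topic = "how small businesses should prepare for a data breach"

skeleton = ask(f"Give a skeleton outline (3 to 5 short bullet points only) for: {topic}")
sections = [
    ask(f"Topic: {topic}\nOutline:\n{skeleton}\n\nExpand this point into one paragraph: {line}")
    for line in skeleton.splitlines() if line.strip()
]
print("\n\n".join(sections))
```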
Star Trek Trekkie Lingo Prompting
An unusual discovery by researchers showed that using Star Trek Trekkie lingo in your prompts can improve generative AI results. Downsides exist, and inadvertent misuse or overuse of this technique can undercut your efforts. For various examples and further detailed indications about Trekkie prompting, see my coverage at the link here.
Step-Around Prompting Technique
At times, the prompts that you seek to use in generative AI are blocked by the numerous filters that the AI maker has put in place. You can use the step-around prompting technique to get around those blockages. For various examples and further detailed indications about step-around prompting, see my coverage at the link here.
“Take A Deep Breath” Prompting
The prompting phrase “Take a deep breath” has become lore in prompt engineering, but it turns out that there are limitations and circumstances that govern when this wording fruitfully works. For various examples and further detailed indications about the nature and use of take-a-deep-breath prompting, see my coverage at the link here.
Target-Your-Response (TAYOR) Prompting
Target-your-response (TAYOR) is a prompt engineering technique that entails telling generative AI the desired look-and-feel of to-be-generated responses. For various examples and further detailed indications about the nature and use of TAYOR or target-your-response prompting, see my coverage at the link here.
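A quick sketch of targeting the response shape up front follows; the question and the requested format are invented for illustration.

```python
# Target-your-response sketch: say exactly what shape the answer should take
# before the AI starts generating.
question = "Compare leasing versus buying a car for a freelancer."

tayor_prompt = (
    f"{question}\n\n"
    "Format the response as: (1) a one-sentence bottom line, "
    "(2) a two-column pros/cons table, and (3) a short 'it depends' caveat. "
    "Keep the whole thing under 200 words and avoid jargon."
)
print(tayor_prompt)
```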
Tree-of-Thoughts (ToT) Prompting
Tree-of-thoughts (ToT) is an advanced prompting technique that involves telling generative AI to pursue multiple avenues or threads of a problem (so-called “thoughts”) and figure out which path will likely lead to the best answer. For various examples and further detailed indications about the nature and use of ToT or tree-of-thoughts prompting, see my coverage at the link here.
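Here is a highly simplified sketch of the branching idea, assuming the OpenAI Python client; the model name, the sample problem, and the crude numeric scoring are all placeholders, and genuine ToT implementations explore and prune far more systematically.

```python
# Tree-of-Thoughts sketch: branch into candidate approaches, score them,
# and only develop the most promising branch further.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

def ask(prompt: str) -> str:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

def score(text: str) -> float:
    # Pull a leading number out of the rating reply; default to 0 if absent.
    try:
        return float(text.strip().split()[0])
    except (ValueError, IndexError):
        return 0.0

problem = "Reduce checkout abandonment on an e-commerce site by 20%."

branches = [ask(f"{problem}\nPropose one distinct approach in two sentences.")
            for _ in range(3)]
ratings = [ask(f"Rate this approach from 1 to 10 for feasibility and impact. "
               f"Reply with the number only.\n{b}") for b in branches]
best = max(zip(ratings, branches), key=lambda pair: score(pair[0]))[1]
print(ask(f"{problem}\nDevelop this approach into a concrete plan:\n{best}"))
```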
Trust Layers For Prompting
Additional components outside of generative AI are being set up to do pre-processing of prompts and post-processing of generated responses, ostensibly doing so to increase a sense of trust about what the AI is doing. For various examples and further detailed indications about the nature and use of trust layers for aiding prompting, see my coverage at the link here.
Vagueness Prompting
The use of purposefully vague prompts can be advantageous for spurring open-ended responses that might land on something new or especially interesting. For various examples and further detailed indications about the nature and use of vagueness while prompting, see my coverage at the link here.
End of List
Whew, that’s quite a comprehensive list.
It ranged from A to Z (kind of, the last item starts with the letter V, though I was tempted to purposely make a prompting technique name that began with the letter Z, just for fun).
I ask that you mindfully contemplate the list.
Did you see techniques that were akin to seeing old friends?
Did you see techniques that were new to you and caught your attention?
Let’s discuss the list further.
Numbering The Prompt Engineering Techniques While In Alphabetical Order
The above list of prompt engineering techniques was shown in alphabetical order.
I did so for ease of reference. If you perchance know the name or phrase of a particular prompt engineering technique, you can readily find it by scanning alphabetically. I didn’t number the techniques because I was worried that numbering would imply a semblance of importance or priority. I wanted the above listing to convey that all the techniques are on an equal footing. None is more precious than any of the others.
Lamentably, not having numbers makes life harder when you want to quickly refer to a particular prompt engineering technique. So, I am going to go ahead and show you the list again, this time with assigned numbers. The list will still be in alphabetical order. The numbering is purely for ease of reference and has no bearing on priority or importance.
There is another reason too for me to number the list. I had earlier suggested that you might want to step by step make sure that you are familiar with each of the prompt engineering techniques.
Here’s how I suggest that you proceed.
I’d recommend that you use the numbered list shown next. Import the list into your favorite spreadsheet.
Make a column in which you can mark whether you are familiar with each technique, using a plain score ranging from 0 to 5, wherein 0 means you don’t know it at all and the highest score of 5 means you know it like the back of your hand. Be straightforward and don’t give a fake score. Put down your real score. This list is solely for your own benefit.
Make another column with a score showing the proficiency you want to reach in that technique (suppose, for example, that you start as a self-rated 1 on a particular technique and want to end up at a self-rated 4). Finally, add an additional column that will contain a target date by which you hope to attain the heightened score.
You can now use that spreadsheet as your career planning guide for prompt engineering purposes. Keep it updated as you proceed along in your adventure as a prompt engineer who wants to do the best that you can.
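If you prefer to bootstrap the tracker programmatically rather than by hand, a starter file might look like the sketch below; the two sample rows, scores, and dates are made up.

```python
# Starter for the self-assessment tracker: one row per technique, with your
# current score (0-5), target score, and a target date.
import csv

rows = [
    ("L-10", "Chain-of-Thought (CoT) Prompting", 2, 5, "2024-09-01"),
    ("L-39", "Retrieval-Augmented Generation (RAG) Prompting", 1, 4, "2024-10-15"),
]
with open("prompting_skills.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "technique", "current (0-5)", "target (0-5)", "target date"])
    writer.writerows(rows)
```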
Whether you undertake that treasured challenge or not, here’s the list with numbers shown as pure reference and the list is still in the same alphabetical order as shown above:
- L-01. Add-On Prompting
- L-02. AI Hallucination Avoidance Prompting
- L-03. Beat the “Reverse Curse” Prompting
- L-04. “Be On Your Toes” Prompting
- L-05. Browbeating Prompts
- L-06. Catalogs Or Frameworks For Prompting
- L-07. Certainty And Uncertainty Prompting
- L-08. Chain-of-Density (CoD) Prompting
- L-09. Chain-of-Feedback (CoF) Prompting
- L-10. Chain-of-Thought (CoT) Prompting
- L-11. Chain-of-Thought Factored Decomposition Prompting
- L-12. Chain-of-Verification (CoV) Prompting
- L-13. Conversational Prompting
- L-14. DeepFakes To TrueFakes Prompting
- L-15. Directional Stimulus Prompting (DSP) And Hints
- L-16. Disinformation Detection And Removal Prompting
- L-17. Emotionally Expressed Prompting
- L-18. End-Goal Prompting
- L-19. Essay-Compression Prompting
- L-20. Fair-Thinking Prompting
- L-21. Flipped Interaction Prompting
- L-22. Generating Prompts Via Generative AI
- L-23. Illicit Or Disallowed Prompting
- L-24. Imperfect Prompting
- L-25. Importing Text As Prompting Skill
- L-26. Interlaced Conversations Prompting
- L-27. Kickstart Prompting
- L-28. Least-to-Most Prompting
- L-29. Macros In Prompts
- L-30. Mega-Personas Prompting
- L-31. Multi-Persona Prompting
- L-32. Overcoming “Dumbing Down” Prompting
- L-33. Persistent Context And Custom Instructions Prompting
- L-34. Plagiarism Prompting
- L-35. Politeness Prompting
- L-36. Privacy Protection Prompting
- L-37. Prompt Shields and Spotlight Prompting
- L-38. Prompt-To-Code Prompting
- L-39. Retrieval-Augmented Generation (RAG) Prompting
- L-40. Self-Reflection Prompting
- L-41. Show-Me Versus Tell-Me Prompting
- L-42. Sinister Prompting
- L-43. Skeleton-of-Thought (SoT) Prompting
- L-44. Star Trek Trekkie Lingo Prompting
- L-45. Step-Around Prompting Technique
- L-46. “Take A Deep Breath” Prompting
- L-47. Target-Your-Response (TAYOR) Prompting
- L-48. Tree-of-Thoughts (ToT) Prompting
- L-49. Trust Layers For Prompting
- L-50. Vagueness Prompting
It is an impressive list.
I also realize it might seem like a daunting list. I can hear the commentary that this is way too much and there is no possible way for you to spend the needed time and energy to learn them all. You have your day job to deal with. You have work-life balances that need to be balanced. Etc.
Yes, I hear you.
Let’s discuss what I consider to be the ten most important.
My Recommended Top 10 Of The Best You Need To Know
I will next show you my list of the Top 10.
The numbering now does in fact denote a priority or importance.
Please don’t confuse the numbering of the above alphabetical list with the numbering of the below list. The numbers above were purely for the sake of convenience. The numbers below signify importance or priority.
I still opted to keep the below in alphabetical order. This certainly is highly debatable, and you might argue that the Top 10 should be reordered based on their relative importance. Sure, I get that. When I next give a public presentation on this matter, I’ll be happy to do such a live rearrangement and we can dexterously debate the sequence at that time.
Here are my Top 10 of the best of the best for prompt engineering techniques:
- (1) Chain-of-Thought (CoT) Prompting (listed above as L-10)
- (2) Chain-of-Verification (CoV) Prompting (listed above as L-12)
- (3) Emotionally Expressed Prompting (listed above as L-17)
- (4) End-Goal Prompting (listed above as L-18)
- (5) Flipped Interaction Prompting (listed above as L-21)
- (6) Generating Prompts Via Generative AI (listed above as L-22)
- (7) Mega-Personas Prompting (listed above as L-30)
- (8) Retrieval-Augmented Generation (RAG) Prompting (listed above as L-39)
- (9) Step-Around Prompting Technique (listed above as L-45)
- (10) Trust Layers For Prompting (listed above as L-49)
I am saying that you should know each of those by heart.
Using them should become as easy as falling off a log. Roll up your sleeves and get to work. Commit them to memory. Use them daily, as needed.
Allow me a moment to say a bit more. I frankly agonized over coming up with a Top 10. All fifty on the list are, to me, of great merit and well worth knowing. It saddens me that some on the list of fifty did not make my Top 10. They are all deserving of a spot.
So, please do not treat the remaining forty as though they are inconsequential. It would break my heart. If you can, get to know them all.
Conclusion
Lifelong learning.
That’s what everyone is talking about these days. We are told time and again that we need to be lifelong learners. By gosh, I wholeheartedly agree.
This comes up here because the latest and greatest in prompt engineering is constantly changing. There are new ideas brewing. New AI research efforts are pending. It is a glorious time to be using generative AI.
I pledge that I will keep covering prompt engineering and bringing you the newest prompting approaches, along with whether they work or flop. I’d bet that in the months ahead, at least a half dozen new techniques will be identified and garner worthy attention.
Keep your eyes and ears open, and I’ll do my best to make sure you can be a lifelong learner, faithfully and profitably.