Alex Northstar
Alex Northstar

@NorthstarBrain

14 Tweets 2 reads Dec 18, 2023
TOP Prompt Engineering in 10 Steps: OpenAI approved
I read and summarized the newest OpenAI prompting GUIDE to give you the best practical AI prompting insights.
Bookmark and check out these 10 strategies + examples:
Before we start, try to learn and apply these strategies IN COMBINATION, for smartest maximum output, from ChatGPT4 for example.
1. "The less the LLM has to guess at what you want, the more likely you’ll get it." - OpenAI
AI prompting mindset: Be clear like talking to a human, be specific about desired results, provide good examples.
Now let's go deeper:
2. Split complex tasks into simpler subtasks:
If you ask it to do 7 different jobs, it will mess up somewhere and degrade the final result.
It is smarter to break down complex multi-task mega prompts in 2-3 "step prompts".
3. Provide important details or context, otherwise you are leaving it up to the model to guess what you mean.
Example:
NO: Summarize the meeting notes.
YES: Summarize the meeting notes in 1 paragraph, write a markdown list of speakers and key points, then list next steps.
4. Ask the model to adopt a persona and answer in a SPECIFIC way.
Can be done in "Custom Instructions" in the ChatGPT settings or in system message in the Playground.
Example:
5. Use delimiters to CLEARLY indicate distinct parts of input
This is important. Quite a few of my clients struggle because they make mega prompts with multiple requests, which are NOT properly separated and prioritized.
Separate with: triple quotation marks, XML tags, titles.
6. Specify the STEPS required to complete a task
Easy and useful. Just add this to your prompts:
"Use the following step-by-step instructions to respond to user inputs.
Step 1: ...
Step 2: ..."
7. Specify the desired length of the output
Works better with sentences, paragraphs, bullet points. Also ask it to remove intro and outro AI fluff text.
Meanwhile asking the model to generate a specific number of words does not work with high precision.
8. For very long conversations, summarize previous dialogue.
"AI memory" is not endless.
Use a sequence of queries to summarize each section of the document. Section summaries can be linked & summarized, producing summaries of summaries. Until an entire book is summarized.
9. Give models time to "think"
Sometimes we get better results when we explicitly instruct the model to reason from first principles before coming to a conclusion.
Ask it to: "think step-step", "explain its reasoning", "find errors in its logic"
10. Extra insight:
"Language models cannot be relied upon to perform arithmetic or long calculations accurately on their own. In cases where this is needed, a model can be instructed to write and run CODE, instead of making its own calculations."
BONUS: If you want to start leveraging practical AI, I got a free mini course that lasts less than 1h.
10 trainings to work faster and save time with AI, 412 students atm:
the4houraiworkweek.xperiencify.io
Took me quite a bit to go through all of that.
If this was helpful to you: follow, bookmark, share to update others too.

Loading suggestions...