An example of zero-shot chain of thought: ask ChatGPT “What is the fourth word in the phrase ‘I am not what I am’?” and it answers “not”, which is wrong (the fourth word is “what”). The author notes that a little zero-shot chain-of-thought prompting can help it get the right answer.
ChatGPT does perform much better when you provide more context and specific examples.
…
In the language models developed by OpenAI, there are two primary techniques used to activate its vast store of knowledge and improve the accuracy of responses to prompts.
These techniques are known as “few-shot learning” and “fine-tuning”.
…
The oddly named “few-shot” learning is where the model learns to recognise and classify a new object or concept from a small number of training examples, typically fewer than 10, though numbers can vary. Where there is only one example, you might also hear it called “one-shot” learning.
Few-shot learning in OpenAI models can be used both at the ChatGPT prompt and programmatically, by calling the “completion” endpoint of the OpenAI API (Application Programming Interface).
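To make this concrete, here is a minimal sketch of how a few-shot prompt can be assembled before sending it to the API. The task, the example reviews, and the labels are all made up for illustration; only the pattern of labelled examples followed by an unlabelled query reflects the technique itself.

```python
# Build a few-shot prompt: a handful of labelled examples, then the
# new input for the model to complete. Examples here are invented.

def build_few_shot_prompt(examples, query):
    """Assemble labelled examples followed by the new query."""
    lines = ["Classify the sentiment of each review as Positive or Negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")  # the model completes from here
    return "\n".join(lines)

examples = [
    ("The plot was gripping from start to finish.", "Positive"),
    ("I walked out halfway through.", "Negative"),
    ("A wonderful cast let down by a dull script.", "Negative"),
]

prompt = build_few_shot_prompt(examples, "An instant classic.")
print(prompt)
```

The resulting string can be pasted into ChatGPT directly or passed as the `prompt` parameter to the completion endpoint.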
…
The “genius in the room” mental model
Jessica recommends three best practices when constructing a prompt to extract the most relevant answers from ChatGPT, as follows:
- Explain the problem you want the model to solve
- Articulate the output you want: in what format (“answer in a bulleted list”), in what tone/style (“answer the question as a patient math teacher…”)
- Provide the unique knowledge needed for the task
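The three best practices above can be sketched as a simple prompt assembler. This is only an illustrative pattern, not anything from OpenAI's API; the function name and sample text are made up.

```python
# Combine the three parts of a well-constructed prompt:
# the problem, the desired output format/tone, and the unique
# knowledge (context) the task needs. All content is illustrative.

def build_prompt(problem, output_spec, knowledge):
    """Join problem statement, output spec, and task context."""
    return "\n\n".join([problem, output_spec, "Context:\n" + knowledge])

prompt = build_prompt(
    "Summarise the meeting notes below.",
    "Answer in a bulleted list, in the tone of a patient math teacher.",
    "Notes: the team agreed to ship the release on Friday.",
)
print(prompt)
```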
…
Zero-Shot Chain of Thought (CoT) Prompting
You may also hear people talking about “zero-shot” learning, where a model is able to classify new concepts or objects that it has not encountered before.
To use this technique, all you need to do is append a phrase such as:
“let’s think step by step”, or
“thinking aloud”
to the end of your prompt.
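Because the technique is just string concatenation, it takes one line of code. A minimal sketch (the helper name is made up):

```python
# Append a chain-of-thought trigger phrase to any prompt.

def zero_shot_cot(prompt, trigger="Let's think step by step."):
    """Return the prompt with a CoT trigger appended at the end."""
    return f"{prompt}\n\n{trigger}"

print(zero_shot_cot("What is the fourth word in the phrase 'I am not what I am'?"))
```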
…
Fine-tuning
For fine-tuning, a minimum of a few hundred training examples should be your starting point.
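Fine-tuning data for OpenAI models is supplied as a JSONL file, one training example per line. The sketch below uses the prompt/completion pair format from OpenAI's legacy completions fine-tuning; newer chat-model fine-tuning uses a `"messages"` format instead, so check the current documentation. The example pairs are invented, and a real dataset needs a few hundred of them.

```python
import json

# Invented prompt/completion pairs; a real dataset needs hundreds.
pairs = [
    {"prompt": "Translate to French: cheese ->", "completion": " fromage"},
    {"prompt": "Translate to French: bread ->", "completion": " pain"},
]

# One JSON object per line (JSONL), ready to write to a .jsonl file.
jsonl = "\n".join(json.dumps(p) for p in pairs)
print(jsonl)
```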