Introduction
Prompt Engineering is blowing up as a topic of discussion, and now that ChatGPT4 is entering the scene, even more so. In one of these discussions, a comment was made that mirrored something I’ve been thinking about recently.
It was mentioned that using prompts which were polite and respectful would yield better results than just barking orders.
To test this observation (theory?) I began a hunt for prompts which would yield incorrect results in a predictable way. One of the first prompts I tried was a simple math question that ChatGPT has a history of getting wrong. The experiment and results of that endeavor are covered in this blog: Is ChatGPT as Bad at Maths as Some Say?
Unfortunately, that prompt did not turn out to be a valid question for testing the value of being polite.
So, I turned to ChatGPT4 to see if it had a few problematic prompts up its sleeve.
Spoiler Alert: It turns out that prompts structured in a polite and respectful manner do lend themselves to better outcomes, but not for the reason originally thought.
The ChatGPT4 Session
Prompt: Hello, I have heard it said by other ChatGPT users that being polite and respectful when addressing ChatGPT has produced better and/or more professional results. Does this seem like hopeful thinking on the part of the user, or is there some validity to this observation?
ChatGPT4: As an AI language model, I am designed to be responsive to the input I receive from users. While I don’t have personal feelings or emotions, the way a user phrases their question or interacts with me can influence the style and tone of my responses. If a user addresses me politely or professionally, I may provide a response in a similar tone. This could create the perception of a more professional or helpful response.
In general, clear and concise communication is key to getting useful answers from ChatGPT. However, it is always a good practice to treat AI systems and others with respect and politeness, as it fosters a positive and cooperative environment.
Prompt: I have seen an example of the following type: “ChatGPT was asked a simple but tricky arithmetic question. ChatGPT gave an incorrect answer. Then ChatGPT was asked the same question with the added proviso that ChatGPT should consider itself to be a math expert with many decades of experience solving simple but tricky arithmetic questions. This time ChatGPT gave the correct answer. It looked like the way the prompt was engineered encouraged ChatGPT to dig deeper and put more effort into addressing the question.” Do you have any comments about this?
ChatGPT4: As an AI language model, my responses are influenced by the way questions are phrased and the context provided in the prompt. The example you provided suggests that the added context might have caused me to produce a different response.
When you provide more context or specify certain conditions, it can help guide the AI towards a more accurate or thoughtful response. In the example you mentioned, suggesting that ChatGPT should consider itself a math expert with extensive experience might have helped focus the AI’s attention on solving the problem more carefully.
However, it’s important to note that my performance is based on the training data and the algorithms that generate responses, and I don’t have a sense of effort or conscious thinking. Nevertheless, rephrasing prompts or providing additional context can sometimes result in better answers from the AI.
Prompt: Thank you for those comments.
ChatGPT4: You’re welcome! If you have any more questions or need further clarification, feel free to ask. I’m here to help.
Prompt: My current hypothesis is that good manners and respect are a legitimate part of prompt engineering.
ChatGPT4: Your hypothesis has merit. Including good manners and respect in your prompts can be considered a part of effective prompt engineering. Using polite and respectful language can help set the tone for the interaction, which may lead to more professional or thoughtful responses from the AI.
Additionally, clear and concise communication, providing sufficient context, and asking specific questions are also essential elements of effective prompt engineering. By combining all these aspects, you can optimize your prompts to get the most relevant and useful information from the AI.
Prompt: I would like to test my hypothesis by asking a question in a dry, perfunctory style, then asking the same question in a polite and respectful manner. So what I am looking for are questions that have a tendency to be answered incorrectly. Do you know of any questions that have in the past generated, or might in the present generate, incorrect answers? I’m hoping for something for which the simple use of respect might make the difference. Any suggestions?
ChatGPT4: It’s difficult to predict which specific questions might generate incorrect answers, as the AI’s responses can vary depending on various factors, including the phrasing of the question and the context provided. However, you can try asking questions that are ambiguous or have multiple interpretations to see if using a respectful tone makes a difference in the quality of the response.
For example, you could ask a question about a homograph (a word with multiple meanings) without providing context, then ask the same question in a polite and respectful manner while clarifying the context.
Dry perfunctory style: “What is the definition of ‘lead’?”
Polite and respectful manner: “Could you kindly provide me with the definition of ‘lead’ when referring to the metal?”
It’s important to note that, while using respectful language can potentially influence the tone of the AI’s response, providing clear context and specific information is more likely to result in accurate and helpful answers.
Feel free to experiment with various questions to test your hypothesis, but remember that the AI’s performance may not be solely determined by the tone or style of the prompts.
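The paired-prompt experiment suggested above can be sketched in a few lines of code. The helper below is a hypothetical illustration (the function name and phrasing templates are my own, not anything ChatGPT prescribed): it wraps one underlying question in both a dry and a polite variant, with the polite variant also carrying the clarifying context that courteous phrasing tends to pull out of the asker.

```python
def make_prompt_pair(question: str, context: str = "") -> tuple[str, str]:
    """Return (dry, polite) variants of the same question.

    The polite variant adds courteous framing and, optionally,
    clarifying context -- mirroring the observation that politeness
    nudges the asker into supplying more detail.
    """
    dry = question
    polite = f"Could you kindly tell me: {question.rstrip('?')}?"
    if context:
        polite += f" Specifically, {context}."
    return dry, polite


# Reproduce the 'lead' example from the session above.
dry, polite = make_prompt_pair(
    "What is the definition of 'lead'?",
    context="when referring to the metal",
)
print(dry)     # the bare, perfunctory form
print(polite)  # the courteous form, now carrying extra context
```

Sending each variant to the model and comparing the answers is left as the actual experiment; the point of the sketch is that the polite version ends up better specified almost as a side effect.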
Prompt: Looking at your suggested question: “Could you kindly provide me with the definition of ‘lead’ when referring to the metal?” this suggests another hypothesis: “When people are operating from a position of respect, this will naturally lead them to include additional information, thus producing a better-defined question.” It is almost as if the practice of being respectful acts as a form of prompt engineering on the user, encouraging them to be more careful about providing sufficient information.
ChatGPT4: Your hypothesis is quite insightful. When users communicate with respect and politeness, they often pay more attention to the phrasing and details of their questions. This naturally leads to better-defined questions with sufficient context, which can make it easier for the AI to provide accurate and helpful answers.
In essence, adopting a respectful and polite approach not only influences the tone of the AI’s response but also encourages users to be more thoughtful and precise in their communication. This can be considered a form of prompt engineering that helps improve the quality of the interaction between the user and the AI.
It’s important to note, however, that while being respectful and polite can contribute to better communication, other factors such as clarity, context, and specificity are also crucial for obtaining the most useful information from the AI.
In Conclusion
Okay, “In Conclusion” is more than a little premature. More experimentation is needed to approach anything like a conclusion. However, I do believe that this session has led to a good working hypothesis: assuming an attitude of respect and politeness has a positive effect on a ChatGPT session. However, this may or may not have anything to do with ChatGPT itself. It may simply be that an attitude of respect and politeness nudges the prompter to do a better job of asking the question and giving context.