Introduction
In this blog I begin with a juicy bit of content from one of E.J. Gold’s explorations. I take this juicy bit and head into the “wherever the heck this takes us” with ChatGPT4. Continue reading
Working with ChatGPT as a collaborator, I generated the following prompt. It is designed to collect and display news items from a particular birthdate.
“Imagine today is [BIRTHDATE in MM/DD/YYYY format]. Please provide a summary of 10 major news items that occurred on this date, each with a short title at the beginning, followed by a brief paragraph describing the event.” Continue reading
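For readers who would rather run this prompt programmatically than paste it into the chat window, here is a minimal sketch using the OpenAI Python SDK. The model name and the example birthdate are placeholders for illustration; substitute your own values.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

birthdate = "07/20/1969"  # placeholder birthdate in MM/DD/YYYY format

prompt = (
    f"Imagine today is {birthdate}. Please provide a summary of 10 major "
    "news items that occurred on this date, each with a short title at the "
    "beginning, followed by a brief paragraph describing the event."
)

# Send the prompt to the chat model and print the reply.
response = client.chat.completions.create(
    model="gpt-4",  # assumed model name; use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```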
Reading The AI Advantage Newsletter, I found a reference to the prompt below. Very interesting prompt. So, obviously, the first thing I did was see if ChatGPT4 could help create a more succinct version.
The jury is still out on whether the new formulation of the prompt is just as good. I have used the new form several times with good results. Continue reading
Prompt Engineering is blowing up as a topic of discussion, and now that ChatGPT4 has entered the scene, even more so. In one of these discussions, a comment was made that mirrored something I’ve been thinking about recently.
It was mentioned that using prompts which were polite and respectful would yield better results than just barking orders.
To test this observation (theory?) I began a hunt for prompts which would yield incorrect results in a predictable way. One of the first prompts that I tried was a simple math question that ChatGPT has a history of getting wrong. The experiment and results of that endeavor are covered in this blog: Is ChatGPT as Bad at Maths as Some Say?
Unfortunately, that prompt did not turn out to be a valid question for testing the value of being polite.
So, I turned to ChatGPT4 to see if it had a few problematic prompts up its sleeve. Continue reading
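As a rough illustration of how such a politeness experiment could be automated, here is a sketch that sends the same question in a blunt wording and a polite wording and prints both replies. The test question, the phrasings, and the model name are my own placeholders, not the prompts from the actual experiment.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

question = "What is 17 multiplied by 23?"  # placeholder test question

# Two wordings of the same request: one blunt, one polite.
variants = {
    "blunt": f"Answer this: {question}",
    "polite": f"Could you please help me with the following? {question} Thank you!",
}

for tone, prompt in variants.items():
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {tone} ---")
    print(response.choices[0].message.content)
```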
As of ChatGPT3, here is an example of something that some humans can get but chat AI cannot, at least for the moment. Continue reading
Is ChatGPT as bad at math as some say?
The answer to this is yes, maybe, and not necessarily. It all depends on the situation, the circumstance, and what specifically is happening.
A better question might be: Should I trust ChatGPT to solve my math problems?
The answer to this is definitely no. Not if by “trust” you mean taking whatever answer the AI gives you and implementing it without question, without double-checking, or without doing a simple reality-check.
Of course you could also ask the question: Should I trust ChatGPT to correctly answer ANY question? Continue reading
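The “simple reality-check” mentioned above can often be done outside the chat entirely. Here is a tiny sketch, using made-up numbers, that checks a claimed arithmetic answer in plain Python before trusting it.

```python
# Reality-check a claimed arithmetic answer instead of trusting it blindly.
a, b = 17, 23          # placeholder problem: 17 * 23
claimed_answer = 386   # placeholder answer reported by the chat model

actual = a * b
if claimed_answer == actual:
    print(f"Checks out: {a} * {b} = {actual}")
else:
    print(f"Mismatch: the model said {claimed_answer}, but {a} * {b} = {actual}")
```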
Me: Act as a ChatGPT-3 model.
Write a long article giving 10 red flags that a specific ChatGPT output may be problematic. Do this in the first person; speak to the reader as “you”. Continue reading
Me: Act as a ChatGPT-3 model.
Write a long article giving 10 possible methods to fact-check ChatGPT output. Do this in the first person; speak to the reader as “you”. Continue reading
Me: Act as a ChatGPT-3 model.
Write a long article to explain and educate on the importance of not blindly accepting ChatGPT output. Do this in the first person; speak to the reader as “you”. Continue reading