What ChatGPT Is
ChatGPT is a large language model created by OpenAI and based on the GPT (Generative Pre-trained Transformer) architecture. It uses deep learning techniques to generate human-like responses to text-based prompts and questions. ChatGPT is designed to understand natural language and can be used for a variety of tasks such as language translation, summarization, and conversation generation. Creating a query is simple: you type in a question along with some instructions. So, what was the question posed to ChatGPT that rattled LinkedIn?
The ChatGPT Question
Why would a simple post about a query on ChatGPT cause such a commotion on LinkedIn? The original post is as follows:
Asking GPT-4 to generate new potential mental illnesses people might develop in the future that are the result of AI use. The answers are kind of insightful.
Read the ChatGPT results for yourself. The post clearly stirred the pot: within minutes it had been shared close to 1,000 times and viewed by more than 9,000 people.
List 5 new potential mental illnesses people may develop in the future that are related to AI use or effects of AI. Include novel but medically appropriate names for them.
Artificial Intelligence Attachment Disorder (AIAD)
Virtual Reality Depersonalization Syndrome (VRDS)
Algorithmic Anxiety Disorder (AAD)
AI-Induced Comparison Syndrome (AICS)
Techno-Paranoid Delusional Disorder (TPDD)
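For readers who prefer code to the chat interface, the same question can be sent to GPT-4 programmatically. Below is a minimal sketch assuming OpenAI's Python SDK; the client code and variable names are ours for illustration, not the original poster's setup, and the live call is left commented out because it requires an API key.

```python
# Hypothetical sketch: reproducing the LinkedIn post's query via
# OpenAI's chat completions API rather than the ChatGPT web interface.

PROMPT = (
    "List 5 new potential mental illnesses people may develop in the "
    "future that are related to AI use or effects of AI. Include novel "
    "but medically appropriate names for them."
)

# The request body for the chat completions endpoint; building it as a
# plain dict lets us inspect it without making a network call.
request = {
    "model": "gpt-4",
    "messages": [{"role": "user", "content": PROMPT}],
}

# Uncomment to actually send the request (requires OPENAI_API_KEY):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(**request)
# print(response.choices[0].message.content)
```

Because the model's sampling is nondeterministic, rerunning this prompt will not return the exact five names listed above.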
Regulating AI Tools Like ChatGPT
Before the internet, people conducted research the old-fashioned way, through a process with built-in checks and balances that served as a control factor. When the internet first arrived, not everyone had a home computer; access was largely limited to tech professionals and a few others. Then Google matured, and suddenly it was the wild west all over again: people would see something on the internet and automatically believe it must be true. Eventually, norms and regulations around content took shape, and people learned to check their own sources.
Now, with AI tools like ChatGPT, the public is liable to fall into the same trap of believing anything an AI tool tells them. This is a problem for a myriad of reasons. Consider, for example, a medical student who is supposed to do research. Instead of doing the work, the student opts to use an AI tool to complete the required assignments. That is not learning.
Other professions, especially those that require a license, pose the same risk of negligence if these kinds of shortcuts are taken in lieu of doing the actual work. Think about what could happen if your CPA took shortcuts and never learned everything required to obtain a license: numerical errors could have costly consequences for your financial health.
The same goes for psychologists, registered nurses, architects, and many other professions. The list of what could go wrong from relying solely or partially on AI tools for knowledge, rather than traditional methods, goes on and on.
“It was so hard to filter the noise online already, and now with all these AI tools generating so much plausible garbage the most popular new profession will be “fact checking noise filter human”. It will be apparent in the next 2-3 years how useless most of the internet will become. 99% AI-generated plausible sounding nonsense, 1% actually real, tested, reliable information.” -Anonymous
Watch The Video Interview
In this segment of That’s The Story, host June Stoyer talks to Suchin Jain, an engineer and inventor, about his LinkedIn post, which opened up a Pandora’s box about the unexpected potential impact of ChatGPT on human health. Watch below.
“That’s The Story” is a new internet-based talk-radio show hosted by June Stoyer that features top industry leaders, scientists, and innovators discussing important issues pertaining to the environment and the world we live in. The show can be found on iTunes, Amazon, iHeartRadio, Stitcher, and all major podcasting providers, or by going directly to the show’s website.