
ChatGPT: It’s Not as Advanced as You Think

With the advent of tools like ChatGPT, artificial intelligence (AI) has garnered a lot of attention in the media. However, due to the nature of 30-second, sound-bite media, its representation often lacks factual completeness. Today’s reporting on ChatGPT and similar tools has created an unfounded culture of fear around AI and its implications.
The hype around ChatGPT must be tempered by the reality of its capabilities, which aren’t as sophisticated as the media purports. This blog will explain how ChatGPT works, its limitations, and why it is ultimately nothing to fear.
What Can ChatGPT Do?
ChatGPT is a language processing model that generates text and detailed answers to questions. While ChatGPT is an innovative tool that can be utilized for a variety of purposes - including content creation, research, and language translation - it is not as advanced as often claimed.
ChatGPT is built on a large language model (LLM), a kind of machine learning model trained on existing data sets to predict outcomes - in this case, sequences of words. For ChatGPT, that training data is internet text that existed before and up to 2021. Though it seems ChatGPT has come out of the blue, language processing tools have been worked on and developed for decades. The technology is proven. ChatGPT has just reached new and exciting capabilities and is an improvement upon what has been done in the past.
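To make “predicting sequences of words” concrete, here is a minimal, hypothetical sketch in Python. It uses a tiny bigram model and a made-up corpus - a drastic simplification of the transformer networks behind ChatGPT, not OpenAI’s actual implementation - but it shows how a model can learn word-sequence statistics from existing text and then generate new text one word at a time.

```python
import random
from collections import defaultdict, Counter

# Toy training corpus; real LLMs are trained on hundreds of billions of words.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word tends to follow which (a bigram model -- a drastically
# simplified stand-in for the networks behind ChatGPT).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a statistically likely next word."""
    words = [start]
    for _ in range(length):
        counts = next_word_counts.get(words[-1])
        if not counts:
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g., "the cat sat on the rug . the dog"
```

A bigram model only looks one word back; ChatGPT’s network conditions on thousands of words of context, but the underlying task - predict a plausible next word - is the same.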
What’s more, while ChatGPT produces human-like language, it is important to note that it doesn’t actually understand what it is saying. It learns to produce a likely output for a given input sequence of words (i.e., context), but it doesn’t know the semantic meaning of those word sequences. It is just very good at generating realistic sequences of words in simple contexts. LLMs replicate human language but aren’t - and never will be - sentient.
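To see what this looks like in practice, here is a small sketch that inspects the next-word predictions of GPT-2, an open-source forerunner of the models behind ChatGPT (not ChatGPT itself), via the Hugging Face transformers library; the prompt is just an illustrative example.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load GPT-2, a small open-source predecessor of the models behind ChatGPT.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Ask the model which token is likely to come next after this context.
inputs = tokenizer("The cat sat on the", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for every token in the vocabulary
probs = torch.softmax(logits, dim=-1)

# The five most probable next tokens: statistics about word sequences,
# not an "understanding" of cats or sitting.
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {float(p):.2f}")
```

The output is simply a probability distribution over tokens; nowhere in the model is there a concept of what a cat is or what sitting means.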
The Limitations of ChatGPT
The central limitation of ChatGPT is that it cannot distinguish between right and wrong. While protective measures have been implemented to stop ChatGPT from generating harmful content, they are not entirely foolproof. Even if ChatGPT were only trained on true data, we still could not prove a priori that its output is correct. For example, we know that a thermostat will work correctly because it’s based on known, interpretable laws (e.g., the laws of physics). However, the same cannot be said for ChatGPT, as it isn’t configured with self-evident truths. It relies on statistical patterns in its training data to produce output that is most likely to sound realistic, but not necessarily to be correct, in each context.
Since ChatGPT is vulnerable to error, we must take care when using the tool. Because we cannot take for granted that it is producing true information, ChatGPT shouldn’t be used for high-stakes tasks. For example, it shouldn’t be applied to support diagnostic testing without human oversight, as being correct is of critical importance in healthcare.
Using ChatGPT for tasks requiring very specific information can also be challenging. ChatGPT is far more useful and suited for general, low-risk applications, or for summarizing large volumes of content for intelligent humans to review. Human beings can’t stay on top of all the medical literature, for example. ChatGPT could summarize the literature on a topic in response to a researcher’s query; the researcher could then review the summary and seek out the specific papers of interest.
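As a rough illustration of that workflow, below is a hypothetical sketch using OpenAI’s Python library (version 1 or later). The model name, prompt, and abstracts variable are placeholders for illustration, and the resulting summary would still need expert review.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical input: a handful of paper abstracts gathered by the researcher.
abstracts = [
    "Abstract 1: ...",
    "Abstract 2: ...",
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name; any chat model would do
    messages=[
        {"role": "system",
         "content": "Summarize these abstracts for a medical researcher, "
                    "listing the main findings and which abstract they come from."},
        {"role": "user", "content": "\n\n".join(abstracts)},
    ],
)

print(response.choices[0].message.content)  # a summary to review, not to trust blindly
```

The researcher, not the model, remains responsible for checking the summary against the underlying papers.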
Ultimately, it is crucial to distinguish between the capabilities ChatGPT is reported to have and those it actually has. While ChatGPT can be very useful, it does not have human-level intelligence and is not as sinister as the media insists. We must approach media coverage of AI with a critical eye and a clear understanding of what autonomous intelligence actually is. By acknowledging these distinctions, we can dispel fears and unsubstantiated claims, and foster responsible integration of AI into our lives.
