The Threats and Downsides of LLMs and ChatGPT

ChatGPT and LLMs (Large Language Models) are very useful information retrieval and search technologies.
Unlike search engines, they do not simply return existing information when queried; rather, they generate the written content themselves.
They offer several benefits, such as the ones below.

  • It offers a conversational and interactive experience for retrieving information (see the short sketch after this list).
  • It can be used for language translation and grammar correction.
  • It can summarize long text and even outline documents as bullet points efficiently.
  • It can generate text and personalized responses.
  • It can serve as a coding assistant to fix bugs, write code and even be your personal Stack Overflow.
  • It saves time by giving detailed responses and answering multiple questions in a single conversation.
  • It can serve as a task agent, executing tasks autonomously with the release of AI agents and plugins.
  • It creates new roles and job opportunities like LLM Engineers, Prompt Engineers, Chatbot Developers and PEO (Prompt Engine Optimization), similar to SEO.
  • etc
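
As a small illustration of the conversational, multi-question experience, here is a minimal sketch using the pre-1.0 openai Python package; the model name is just an example, and you would need your own API key:

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder: set your own key

    # One conversation can hold several related questions,
    # with each answer building on the previous turns.
    messages = [{"role": "user", "content": "What is a Large Language Model?"}]
    response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    answer = response["choices"][0]["message"]["content"]

    # Follow-up question in the same conversation (the interactive experience)
    messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": "Now summarize that as bullet points."})
    response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    print(response["choices"][0]["message"]["content"])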

However, as with every technology, there are disadvantages and certain dangers it can pose.

Monopoly on Answers

The current forms of LLMs monopolize answers to questions by giving only one answer at a time, unlike traditional search engines, which offer multiple results from which you decide which to pick. Search engines rank many candidate results (for example via PageRank), whereas with ChatGPT there is only a first answer, after which you must ask it for more.

Traditional search engines offer multiple results at a time, but with LLMs and ChatGPT you require a follow-up prompt to get more results. This is convenient, but it also has its disadvantages: it can lead you down a path where the model decides for you.

Suggested Fix and Mitigation: the suggested fix to this monopoly on answers is for future iterations to offer multiple valuable answers with a form of ranking, or a single aggregated answer with multiple references. In the meantime, you can approximate this yourself at the prompt level, as in the sketch below.
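
A minimal sketch of that prompt-level workaround; ask_llm() here is a hypothetical placeholder for whichever LLM API you use:

    # Ask the model for several ranked candidate answers instead of one.
    def build_multi_answer_prompt(question, n=3):
        return (
            f"Give {n} distinct candidate answers to the question below. "
            "Rank them from most to least likely to be correct, with a "
            "one-line justification and a reference for each.\n\n"
            f"Question: {question}"
        )

    prompt = build_multi_answer_prompt("What causes inflation?")
    # answers = ask_llm(prompt)  # returns a ranked list instead of a single answer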

Diversity of Results

As of now, ChatGPT and LLMs return only textual answers to queries; traditional search engines, however, give us not only text but also images and videos. This is useful, as a picture can sometimes tell you more than text alone.

Misinformation and Hallucination

Content generated by ChatGPT can sound very confident and authoritative yet be subtly wrong. To a novice in the subject it may appear to be the correct answer when it is not. This tendency can add to the torrent of misinformation already available online.

ChatGPT provides a finely tuned, targeted answer, which can be both an advantage and a disadvantage. If ChatGPT or other LLMs offer misleading answers, they will be very difficult to spot if not cross-checked. We should not forget that LLMs are models that predict the probability of utterances, i.e. which text is most likely to come next; they are sophisticated predictors, hence they can misinform. Large Language Models are advanced pattern recognizers and as such can identify the patterns of how humans communicate with language. Because of this they can appear trustworthy while hallucinating responses and presenting them as though they were legitimate answers. We should be prepared for the hazards that will emerge from misinformation and disinformation from LLMs.

Suggested Fix and Mitigation: a possible way to improve is to have a fact-checking feature or a form of reference showing where and why an answer was chosen as the response. We should treat LLMs as assistants, not as authoritative sources, except where they provide legitimate references. Even with references generated by LLMs, we should verify that they are actually correct and not another layer of misinformation on top of the previous response.
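
One hedged, imperfect approach is to ask the model to surface its own checkable claims; this is only a heuristic that tells a human what to verify, not real fact checking, and ask_llm() is again a hypothetical placeholder for your LLM API call:

    # Heuristic self-check: extract the verifiable claims from an answer
    # so a human (or a search engine) can cross-check them elsewhere.
    # This does NOT guarantee correctness; it only surfaces what to verify.
    def build_fact_check_prompt(answer):
        return (
            "List every factual claim in the text below as a bullet point, "
            "and for each one state where it could be verified against an "
            "external source.\n\n"
            f"Text: {answer}"
        )

    answer = "The Great Wall of China is visible from the Moon."
    # claims = ask_llm(build_fact_check_prompt(answer))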

Information Leaks

Since LLMs are trained on vast amounts of data, it is possible to retrieve certain forms of data from the model, e.g. private data or sensitive information. This can and will happen as individual users and organizations fine-tune the base LLMs (called foundation models) on their private or organizational data sources. As more LLMs are designed and connected to several applications, it becomes possible to prompt an LLM to generate sensitive information it was trained on. An LLM that is able to “remember” what it was trained on may leak sensitive data or even repeat a previously seen answer verbatim instead of truly generating a new response.

Most of the current family of large language models are trained on batch data, but if in the coming years they are trained with incremental or online streaming data, the issue could get out of hand. For example, imagine using an LLM in your coding IDE or your office suite: if it is learning incrementally, it could unintentionally expose and export secrets and API keys.

Suggested Fix and Mitigation: a possible way to mitigate this is to ensure that private data and sensitive information are anonymized or redacted before training. There should also be a mechanism to hash or mask secrets and API keys when using LLMs in your coding IDE or office suite, as in the sketch below.
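
A minimal sketch of pre-training redaction, assuming simple regex patterns for emails and API-key-like strings; a real pipeline would use proper PII-detection tooling:

    import re
    import hashlib

    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
    # Crude stand-in for API-key detection: long alphanumeric tokens.
    API_KEY = re.compile(r"\b[A-Za-z0-9_-]{32,}\b")

    def redact(text):
        """Anonymize emails and replace secrets with a short hash tag."""
        text = EMAIL.sub("[EMAIL]", text)
        # Hash each secret so duplicates stay linkable without being exposed.
        return API_KEY.sub(
            lambda m: "[KEY:" + hashlib.sha256(m.group().encode()).hexdigest()[:8] + "]",
            text,
        )

    print(redact("Contact jane@example.com, key=sk_live_ABCDEFGH12345678IJKLMNOP90876543"))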

Stale Data

If ChatGPT is trained on historical data, i.e. batch ML, then it will fall short on the most recent news and events unless it is constantly retrained on emerging and current data. Since it cannot yet access real-time facts and events, it will generate misleading answers if queried about recent topics beyond the cutoff of its training data.

Human-Generated Data/Content vs AI-Generated Data Loop

As LLMs and ChatGPT depend on human-generated content and data, they will require more data and more compute to improve and give recent results. So who is going to keep feeding them human-generated data and content?

Just as traditional search engines use web crawlers to crawl the internet for other people’s content, ChatGPT will need a similar method to get human-generated data to feed its system. Here lies the problem of the AI-generated data loop: this is where the new data fed into ChatGPT or other LLMs was previously generated by ChatGPT itself or other AI systems. In this case we will need a system to filter human-generated content from AI/ChatGPT-generated content.

If this is not fixed, the models will become stale and lose their performance, authenticity and human touch. As of now it is possible to distinguish machine-generated text from human-generated text via methods such as:

  • Diversity score
  • Perplexity score (see the sketch after this list)
  • ROUGE and BLEU, metrics used for evaluating summaries
  • etc
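
A minimal sketch of a perplexity-based check, assuming the Hugging Face transformers and torch packages are installed; the intuition is that model-generated text often scores a lower perplexity under a similar model than human text does, though this is a weak signal, not a reliable detector:

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text):
        """Perplexity of the text under GPT-2: lower often hints at machine-like text."""
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean cross-entropy per token
        return torch.exp(loss).item()

    print(perplexity("The cat sat on the mat."))
    # Thresholding scores like this gives one (imperfect) filtering signal.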

However, as LLMs and AI become more sophisticated and intelligent, distinguishing machine-generated text from human-generated text will become very difficult.

Suggested Fix and Mitigation: we will need different, more advanced methods beyond these statistical evaluation methods to detect AI- and machine-generated content and filter it from human-generated content. The data-feeding loop itself, however, is unavoidable.

Plagiarism

Although ChatGPT is awesome, it is not yet capable of original thought. As creative as the responses of ChatGPT and LLMs are, they are based on patterns in the training data. Therefore it is prone to reproducing other people’s content without knowing where it was originally taken from.
This can be fixed by identifying the source the data is coming from and giving references. In traditional search engines, we mostly know the source of the data via the hyperlink or URL.

Suggested Fix and Mitigation: a possible way to improve is to have a form of referencing, plus simple overlap checks like the one sketched below.
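
A minimal sketch of an n-gram overlap check against a known source text, assuming both texts are available; real plagiarism detection is far more involved:

    def ngrams(text, n=3):
        """Set of word trigrams in the text (lowercased)."""
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap_ratio(generated, source, n=3):
        """Fraction of the generated text's trigrams that also appear in the source."""
        gen = ngrams(generated, n)
        if not gen:
            return 0.0
        return len(gen & ngrams(source, n)) / len(gen)

    source = "The quick brown fox jumps over the lazy dog near the river bank."
    generated = "A quick brown fox jumps over the lazy dog every single day."
    print(overlap_ratio(generated, source))  # high values hint at copied phrasing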

Limited Context and Specific Prompt Engineering

LLMs and ChatGPT are only as accurate and detailed as your query or prompt engineering is good. Hence you will need to excel at prompt engineering to get the most and the best out of them.
Although they use zero-shot/few-shot learning, without proper context they will not give you what you really want. This is much like search queries in traditional search engines. You will need prompt augmentation (i.e. providing context information alongside the prompt or question you are asking), such as:

    Given the following information, create ...
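
A minimal sketch of such prompt augmentation as a reusable template; the context and task strings here are placeholders:

    # Prompt augmentation: prepend the context the model needs to the task.
    TEMPLATE = (
        "Given the following information:\n"
        "{context}\n\n"
        "{task}"
    )

    context = "Our store sells handmade ceramic mugs priced between $15 and $40."
    task = "Create a short product description for our best-selling mug."
    prompt = TEMPLATE.format(context=context, task=task)
    # prompt is now a context-augmented query ready to send to the LLM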

The Internet Does Not Forget, and Neither Do LLMs and ChatGPT

There is a common saying that what you put online stays there forever: even if you delete it, it is on someone else’s device. The same can be said of ChatGPT and LLMs, unless the model is retrained from scratch. This means that all the data used in training is still encoded in the model somehow.
When you train a model, the neural network adapts to the input data by adjusting the “weights” of its layers. Therefore, the LLM or ChatGPT will remember what it has learned even if the original information is deleted from the source. This makes it difficult to remove your personal data from an LLM unless it is anonymized before training.

Suggested Fix and Mitigation: the first mitigation is for individuals not to put sensitive information into the data collectors used for training LLMs, or post it online at all. Moreover, before and during training, the data should be anonymized where possible (see the redaction sketch under Information Leaks).

TextFake

With ChatGPT, it is possible to create credible phishing content for social engineering. It can also be used to create almost-true results that can mislead the novice. Just like deepfakes, textfakes are a concern we should be considering with the advent of LLMs.

A textfake refers to machine- or AI-generated text that is designed to mimic human writing and style but carries a subtle level of misinformation and lies wrapped in mostly true and correct information. Textfakes can fool individuals and systems into accepting them as authoritative, correct answers. LLMs can generate textfakes by offering widely accepted, opinionated answers as facts and presenting them as valid and “true” although they are not.

We should have a system to fact-check the answers generated by an LLM such as ChatGPT.

Information and Misinformation Harms and Hazards

LLMs are trained on human language, and as we all know, humans lie and err; it is therefore possible for LLMs to pick up these patterns during training. For this reason, there can be misinformation and disinformation. This can result in several harms and hazards as people accept the authoritative, confident responses of LLMs such as GPT-x and believe them as truth without verifying.

Imagine incorporating LLMs into health bots that act as therapists or counselors. If not done properly, this can cause harm to a troubled individual.

Latent Capabilities of Large Language Models

Language is how we communicate as humans; it is with language that we carry out almost every activity. As such, with the advent and advancement of large language models, it is possible to discover certain hidden capabilities in an LLM that we are not yet aware of or prepared for. These latent capabilities of language models will definitely come with opportunities and risks, and the first to discover and utilize them will have an advantage over others.

Job Displacement

With the advent of any new technology, there is a need to adapt your strategy for performing work. LLMs and AI are here to stay and to assist humanity; therefore they will definitely replace some job roles and introduce new ones. As AI becomes more useful and advanced, certain jobs will be automated and displaced, others will be enhanced, and others will stay the same. This is both an opportunity and a downside that we should consider.

Monetization and Advertisement

Large Language Models require huge amounts of data and compute to build and train, so corporations offering LLMs will need to recoup their money somehow. Advertising is one of the main ways most search engines monetize their services, and these adverts can be content-specific and location-specific. In the case of LLMs and ChatGPT, we should be concerned about how ads will be woven into the platform and what form they will take.

In this post, we have seen some threats and downsides that large language models and ChatGPT may pose; however, there are several benefits of this technology that we should not overlook or undervalue.

Thanks for your attention
Jesus Saves

By Jesse E. Agbe (JCharis)
