Brian Sims
Editor
CHARTERED SECURITY professional Brendan McGarrity (director of Evolution Risk and Design and a Fellow of The Security Institute) is warning of the dangers posed by ChatGPT and of as-yet-unconsidered vulnerabilities it might expose in an organisation's security.
McGarrity is also warning of the dangers of impersonation and of sensitive data finding its way into the wrong hands, compromising both personal and organisational security.
ChatGPT (Chat Generative Pre-Trained Transformer) is a large language model-based chatbot developed by OpenAI. Launched on 30 November 2022, it's notable for enabling users to refine and steer a conversation towards a desired length, format, style, level of detail and language. Successive prompts and replies (a practice known as 'prompt engineering') are taken into account as context at each stage of the conversation.
ChatGPT is built upon GPT-3.5 and GPT-4, members of OpenAI's proprietary series of generative pre-trained transformer (GPT) models based on the transformer architecture developed by Google. ChatGPT itself is fine-tuned for conversational applications using a combination of supervised and reinforcement learning techniques.
Initially, ChatGPT was released as a freely available research preview but, owing to its popularity, OpenAI now operates the service on what's described as a 'freemium' model. Users on the free tier can access the GPT-3.5-based version, while the more advanced GPT-4-based version and priority access to newer features are reserved for paying subscribers under the dedicated commercial name 'ChatGPT Plus'.
Impact on security
McGarrity feels strongly that the impact of ChatGPT has not been thought through in terms of the security industry and, specifically, in terms of whether it renders today’s organisations less – rather than more – secure.
“In practice,” stated McGarrity, “ChatGPT scrapes information from billions of questions and answers from the Internet and ranks what words will come next in a sentence based on a probability to achieve a ‘reasonable continuation’ of whatever text it has derived thus far.”
He continued: “As one scientist puts it, ChatGPT keeps asking the Internet over and over again ‘given the text so far, what should the next word be’. It might pick the highest-ranked word, but it may also select a more random word which adds a layer of creativity.”
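To make that intuition concrete, here's a minimal Python sketch of temperature-based next-word sampling. It's not how OpenAI's models are actually implemented: the vocabulary, the scores and the sample_next_word function are invented purely to illustrate the trade-off McGarrity describes between picking the highest-ranked word and allowing a more random, 'creative' choice.

```python
import math
import random

# A toy vocabulary of candidate next words with made-up model scores.
candidates = {"secure": 2.1, "vulnerable": 1.8, "protected": 0.9, "exposed": 0.4}

def sample_next_word(scores, temperature=0.8):
    """Pick the next word from a score distribution.

    A temperature near zero almost always selects the highest-ranked
    word; higher temperatures make lower-ranked words more likely,
    which is the 'layer of creativity' described above.
    """
    # Convert raw scores into sampling weights (a softmax).
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores.keys()), weights=weights, k=1)[0]

print(sample_next_word(candidates))        # most often "secure"
print(sample_next_word(candidates, 2.0))   # noticeably more variety
```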
Does that scraping of the Internet expose organisations to potential harm, and does it surface issues that have not yet been uncovered?
Highlighting weaknesses
“Can ChatGPT find and highlight weaknesses in a client’s security profile?” asked McGarrity. “What checks and balances are there to protect what has previously been written and prevent that copy from being presented as something new? How do you lock your inner workings down? Is it possible that one party might be able to accurately impersonate another based on the language they use? Could it be used, for example, to impersonate me?”
Of course, McGarrity accepts that not using ChatGPT or embracing the 'Artificial Intelligence revolution' means running the risk of being left behind and trailing the innovation curve. However, he argues that what has already been written and is searchable on the Internet, together with what might be written and made available in the future, could expose a vulnerability that has not yet been considered.
“It could well uncover sensitive data and compromise personal and organisational security,” concluded McGarrity. “ChatGPT is a potentially dangerous invention and organisations need to be protected from it.”