AI Chatbots: Cybersecurity Opportunities and Challenges
How can society maximize the advantages provided by generative artificial intelligence (AI) while minimizing the possible drawbacks, especially the unknown cybersecurity risks?
This is where ENISA, the cybersecurity agency of the European Union, comes in. On Wednesday, ENISA organized a symposium in Brussels with the stated goal of discussing “key elements of cybersecurity in AI systems as well as challenges related to the deployment and supervision of secure and trustworthy AI.”
Apostolos Malatras, an ENISA team leader for knowledge and information, asserted that readiness is the best course of action.
Huub Janssen of the Dutch Authority for Digital Infrastructure, which oversees the use of AI and cybersecurity in government, issued a call to action. AI presents “new opportunities and new risks,” none of which necessarily need to be “bad or devastating,” he said. He did not advocate trying to outlaw AI, but added that action must be taken to ensure it is moving in the right direction.
Legislation being drafted by EU lawmakers would subject the development of “foundation models,” such as those underlying various chatbots, to regulations covering health, safety and the environment, as well as the promotion of democracy and human rights.
According to Janssen, the development of large language models (LLMs) like ChatGPT is advancing at “warp speed,” with significant improvements frequently appearing only weeks apart. He is currently watching the explosion of open source, lightweight language models that are portable and run efficiently on laptops. These have already been put to work enhancing one another: a query is posed to three separate lightweight models, one model is tasked with determining which response is best, and another model is instructed to assign tasks to the others to improve their outputs.
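In rough Python terms, that ensemble pattern might look like the sketch below. The `complete()` helper and all model names are hypothetical stand-ins for whatever local model runtime is in use; nothing here is a real API.

```python
# Hypothetical sketch of the three-model ensemble Janssen describes.
# complete() stands in for a local LLM runtime (llama.cpp bindings,
# an HTTP endpoint, etc.); it is not a real library call.

def complete(model: str, prompt: str) -> str:
    """Send `prompt` to the named lightweight model and return its reply."""
    raise NotImplementedError("wire this up to your local model runtime")

def ensemble_answer(question: str) -> str:
    workers = ["model-a", "model-b", "model-c"]

    # 1. Pose the same query to three separate lightweight models.
    answers = [complete(m, question) for m in workers]

    # 2. Task a judge model with determining which response is best.
    numbered = "\n".join(f"{i + 1}. {a}" for i, a in enumerate(answers))
    verdict = complete(
        "judge-model",
        f"Question: {question}\n\nCandidate answers:\n{numbered}\n\n"
        "Reply with only the number of the best answer.",
    )
    best = answers[int(verdict.strip()) - 1]

    # 3. A coordinator model assigns follow-up tasks so the workers
    #    can improve on the winning answer.
    for m in workers:
        task = complete(
            "coordinator-model",
            f"The best answer so far is:\n{best}\n\n"
            f"Write one instruction for model {m} to improve it.",
        )
        best = complete(m, f"{task}\n\nCurrent answer:\n{best}")
    return best
```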
Speaking on a panel about the promise and peril of AI chatbots for cybersecurity, Adrien Bécue, an AI and cybersecurity expert at the French multinational Thales Group, stressed the importance of highlighting the significant opportunities that come with AI.
He sees opportunities in AI’s capacity to “analyze and synthesize” large amounts of threat intelligence, currently a time-consuming and difficult process. AI could also automate and coordinate many stakeholder groups across linguistic boundaries during crisis response, and “expert code generator chatbots” could assist programmers in quickly creating and testing urgent fixes for software flaws.
Rapid Advances
It is unclear where and how quickly AI will develop further.
According to panelist David Johnson, a data scientist at Europol, the EU’s criminal intelligence and coordination agency, “the development of AI models is really hard to predict.”
Johnson, who calls himself more of an “LLM enthusiast” than a specialist, said his role is to help Europol ingest and enhance the massive amounts of data it receives from other law enforcement organizations. He has been experimenting with LLMs for the past few months and reports that so far they are not dependable enough. “What I discovered is that it provides an answer with a lot of confidence, so much so that I tend to accept it, yet about half the time, it’s utterly incorrect.”
Despite his current doubts, Johnson said that cybersecurity specialists need to actively monitor AI breakthroughs, given how swiftly LLMs are developing; such issues may be resolved “very soon – maybe even tomorrow.”
Thales’ Bécue asserted that AI isn’t intended to deceive. Whether or not it provides correct answers, he said, the problem is that eventually we will start believing what these things tell us. “It’s not that it lies for a reason; it’s that it’s designed to make you happy.”
According to IBM Belgium information management specialist Snezhana Dubrovskaya, users are already putting ChatGPT output to potentially unexpected uses, such as having it build Visual Basic for Applications (VBA) code that automatically generates PowerPoint presentations.
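For a sense of how little code is involved, here is a comparable snippet in Python using the python-pptx library rather than VBA. The slide content is invented; the point is how short and innocuous-looking such generated automation code can be.

```python
# Roughly the kind of presentation-automation code a chatbot might emit,
# shown here in Python with python-pptx instead of VBA. Content is invented.
from pptx import Presentation

slides = [
    ("Threat landscape", "Phishing volume up; patch cadence unchanged."),
    ("Next steps", "Enable MFA everywhere; review EDR alert triage."),
]

prs = Presentation()
for title, body in slides:
    slide = prs.slides.add_slide(prs.slide_layouts[1])  # title + content layout
    slide.shapes.title.text = title
    slide.placeholders[1].text = body
prs.save("briefing.pptx")
```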
It is unclear whether the people copying and pasting this code understand what it does, which underscores that human factors are a concern with any AI software. As usual, she said, “We have very good intentions, but it can be abused.”
She does, however, foresee growing cybersecurity utility in tools like ChatGPT: acting as a security chatbot that gives a user a list of “typical mitigation actions” for addressing a particular threat, for example, or taking an alert, say from a CrowdStrike tool, and generating a Splunk query to examine logs for signs of intrusion.
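A hypothetical illustration of that second use case: the alert, the prompt and the example Splunk query below are all invented, not output from any real CrowdStrike or Splunk integration.

```python
# Invented example of prompting a chatbot to turn an EDR alert into a
# Splunk query. Hostnames, fields and SPL are illustrative only.

ALERT = (
    "CrowdStrike detection: suspicious PowerShell on host WIN-DB01, "
    "user svc_backup, command line: powershell.exe -enc <base64 blob>"
)

PROMPT = (
    "You are a SOC assistant. Write a Splunk query over Windows process "
    f"creation logs for activity related to this alert:\n\n{ALERT}"
)

# A plausible reply might resemble this SPL (Event ID 4688 = process creation):
EXAMPLE_REPLY = (
    'index=windows sourcetype="WinEventLog:Security" EventCode=4688 '
    'host=WIN-DB01 New_Process_Name="*powershell.exe" '
    '| search Process_Command_Line="*-enc*" '
    "| table _time, host, user, Process_Command_Line"
)

print(PROMPT)
print(EXAMPLE_REPLY)
```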
Chatbots have also demonstrated an ability to at least help reveal weaknesses. It is undoubtedly feasible, and the publicly accessible chatbots, at least in test scenarios, do it rather well. In a test developed with cryptocurrency investigations in mind, Europol tested ChatGPT “with a smart contract that we knew was at risk, and it quite simply told us that it was unsafe and what the developer needed to do to lessen its vulnerability.”
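A hypothetical reconstruction of such a test: the Solidity fragment below shows a classic reentrancy flaw, funds sent before the balance is zeroed, and is an invented example, not Europol’s actual test material.

```python
# Invented example of asking a chatbot to audit a known-risky smart
# contract. The contract contains a textbook reentrancy bug.

VULNERABLE_CONTRACT = """
contract Wallet {
    mapping(address => uint) public balances;

    function withdraw() public {
        uint amount = balances[msg.sender];
        // BUG: the external call happens before the balance is zeroed,
        // so a malicious contract can re-enter withdraw() and drain funds.
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok);
        balances[msg.sender] = 0;
    }
}
"""

PROMPT = (
    "Review this Solidity contract. Is it safe? If not, explain the flaw "
    f"and what the developer should change:\n{VULNERABLE_CONTRACT}"
)
```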
Many of the examples of chatbots being put to good use in the security field presuppose a user who knows what they are doing. In particular, a user must give a chatbot good prompts, according to IBM’s Dubrovskaya, since the quality of the output depends in part on the quality of the input.
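An invented before-and-after illustrates her point: the second prompt supplies the context, output format and success criteria that the first omits.

```python
# Invented contrast showing Dubrovskaya's point that output quality
# tracks input quality: the same question, asked vaguely and precisely.

VAGUE_PROMPT = "Is this log line bad?"

SPECIFIC_PROMPT = (
    "You are a security analyst. Below is one Apache access-log line. "
    "Say whether it suggests SQL injection, quote the exact suspicious "
    "substring, and propose one mitigation.\n\n"
    '203.0.113.7 - - [10/May/2023:13:55:36 +0000] '
    '"GET /item?id=1%27%20OR%201=1-- HTTP/1.1" 200 512'
)
```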
If there is one longstanding computer science problem that chatbots haven’t fully solved yet, it is “garbage in, garbage out.”