Kopi Chat Deep Dive: The Dark Side of Artificial Intelligence (AI) in Cybersecurity

NUS Enterprise
Jul 30, 2021

Contributed by Rebecca Tan Li Qi and Jermaine Ng

“Today, AI in cybersecurity is a very hot topic in the space of both AI and cybersecurity … Yet, this topic is surrounded by much mystery and uncertainty.” — Emil Tan

This uncertainty boils down to one question: Will AI improve or worsen cybersecurity?

This is an important question to ask because AI is everywhere. From cybersecurity vendors to policy planners, many are building or learning about AI. This trend, coupled with the rise of digitalisation (especially during this pandemic), makes a sharp rise in cyberattacks all but inevitable.

To understand more about the potential of AI in cybersecurity, and how Singapore has been balancing its benefits and downsides, we attended Kopi Chat Deep Dive — Finding Light: The Dark Side of AI in Cybersecurity — to listen to four industry experts:

  • Manan Qureshi | Entrepreneur, Podcaster, Multi-Industry CISO and Risk, Security & Resilience Executive
  • Dr Ong Chen Hui | Cluster Director, BizTech Group, Infocomm Media Development Authority (IMDA)
  • Prof Yu Chien Siang | Chief Innovation & Trust Officer of Amaris AI
  • Emil Tan (Moderator) | Co-Founder, Division Zero (Div0), Singapore Cybersecurity Community

What are some exciting opportunities for AI in cybersecurity?

AI is not just about algorithms.

It can learn, adapt, and explain data to its users in an intelligible way. This makes AI a powerful tool for detecting cybersecurity threats in many different circumstances. Beyond threat detection, AI can also launch appropriate responses to defend critical infrastructure and data.

Especially now, AI is becoming increasingly critical to organisations facing a rise in cyberattacks, as well as to data centres, network security and the government’s proactive cyber defences.

And that’s not it. Unlike a human, an AI-driven cybersecurity system can detect abnormalities in large volumes of complex data and flag suspicious activities promptly.

But…does this mean that AI in cybersecurity is infallible?

No.

In fact, “the attackers [can] destroy your pipelines and all these can be done within minutes.” — Prof Yu

One reason is that AI in cybersecurity can produce many false positives, especially when quality data is lacking, creating unnecessary and tedious work for security analysts.

Moreover, since most of these systems are trained using Machine Learning (ML), they depend on a predictable, characterisable baseline of normal behaviour to flag anything suspicious. New forms of cyberattack or suspicious behaviour that fall outside that baseline may escape undetected.
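To make the baseline idea concrete, here is a minimal, hypothetical sketch (not from the webinar) of how an ML-based detector might learn “normal” network behaviour and score new events against it. The features, values and outcomes are invented purely for illustration; the point is that an event far outside the baseline is easy to flag, while a novel attack crafted to resemble the baseline may pass as normal.

```python
# Hypothetical sketch: baseline anomaly detection for network events.
# Feature columns (bytes sent, failed logins, hour of day) are invented
# for illustration; real systems use far richer telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# "Normal" historical traffic the model learns its baseline from.
normal_traffic = np.column_stack([
    rng.normal(500, 50, 1000),   # bytes sent per session
    rng.poisson(1, 1000),        # failed login attempts
    rng.integers(9, 18, 1000),   # hour of day (office hours)
])

detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_traffic)

# Score three new events against the learned baseline.
events = np.array([
    [650, 2, 16],    # unusual but benign spike -> risks a false positive
    [50000, 40, 3],  # heavy exfiltration at 3am -> clearly anomalous
    [510, 1, 14],    # novel attack mimicking the baseline -> may pass as normal
])
print(detector.predict(events))  # 1 = looks normal, -1 = flagged as suspicious
```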

AI is also limited in its ability to process and interpret technical jargon for users. At this point, the amount of technical data AI can recognise is still inadequate for natural language processing.

And that’s not all.

Algorithmic bias can also surface in these seemingly “perfect” AI systems. Human biases find their way into the models, ranging from gender bias and racial prejudice to age discrimination.

What’s worse is that AI can be exploited to hack cybersecurity systems by finding vulnerabilities, generating malware, and mimicking how humans behave to overcome CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart).

In this case, how does Singapore balance the benefits and downsides of AI in cybersecurity?

Fortunately, Singapore adopts a pragmatic approach rather than assuming absolute security in its AI cybersecurity systems. At the same time, Singapore is trying to increase public trust in AI while acknowledging its limitations.

Although black hats will always have the upper hand, since defending is more challenging than attacking, AI solutions will continue to play an indispensable role in cybersecurity, making greater transparency, accountability and regulation all the more critical.

Given the importance of AI solutions in cybersecurity, Singapore is likely to introduce unified testing, system inspections and certification in the near future to verify the robustness of deep ML systems.

The opportunities for AI in cybersecurity are boundless; in the end, it really comes down to how users acknowledge and balance both the good and the bad of AI in cybersecurity.

We’ll leave you with some food for thought: Even if there comes a day we can guarantee cybersecurity through AI, can we guarantee the security of AI?

Interested in re-watching the webinar for more insights? Check it out at https://fb.watch/5AKFPy7grL/.

If you are keen on entrepreneurship and technology, discover similar events on the NUS Enterprise website: https://enterprise.nus.edu.sg/.


NUS Enterprise

NUS Enterprise nurtures entrepreneurial talents with global mindsets, while advancing innovation and entrepreneurship at Asia’s leading university.