Speaker Session Recap: AI in Cybersecurity

Posted on March 6, 2024

Artificial intelligence (AI) has become a transformative force across many industries – and cybersecurity is no exception.

But despite AI’s ability to revolutionise threat detection and response, and improve the overall efficiency of cybersecurity operations, concerns still remain about its reliability — not to mention its increasing adoption by cybercriminals.

Amidst all the AI hype, many organisations are unsure how to implement AI development securely, viewing the technology as something of a double-edged sword.

In our recent speaker session on the implications of AI in cybersecurity, our Head of Cloud Security, James Pearse, shed light on some effective strategies for wielding AI as a tool for good.

This article dives into the key takeaways from the presentation and offers some tips for building a framework that maximises the benefits of AI in cybersecurity while mitigating its potential risks. Read on to learn more.

Disadvantages of AI in cybersecurity

James began the speaker session by outlining some general concerns about the efficacy of AI in cybersecurity, including:

  • Bias and discrimination: Whether an organisation uses a publicly available AI model (like ChatGPT or Gemini) or an internally trained Large Language Model (LLM), biased training data can negatively impact individuals and entire groups within organisations. As a result, demand for explainable AI solutions is increasing, in the hope that clear reasoning behind outputs can help protect brand reputations.
  • False positives and negatives: AI systems can flag harmless activities as threats (false positives) or miss real threats (false negatives), compromising system security either way. A short worked example after this list puts numbers on this trade-off.
  • Over-reliance on technology: Human oversight is still essential for validating AI outputs, which raises questions about whether busy SOC teams have the capacity to provide it.
  • Reliance on Big Data: Organisations with limited access to substantial data sets may struggle to harness AI’s full capabilities — especially in the realm of cybersecurity.
  • Vulnerability to proprietary data theft and manipulation: If data is stolen, manipulated, or ‘poisoned’ in any way, AI outputs can become compromised, leading to inaccurate decision-making and compliance issues. Companies need solutions that prevent unauthorised AI use and maintain strict data privacy protocols.
  • High adoption barriers: Implementing AI in cybersecurity requires substantial expertise and resources, making AI development out of reach for some companies.
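
To put the false-positive problem in perspective, here is a small worked example with entirely made-up counts from a hypothetical AI detection tool; none of these figures come from James's presentation. It shows how even a seemingly low false-positive rate can swamp analysts with bogus alerts.

    # Hypothetical daily counts from an AI-driven detection tool (illustrative only).
    true_positives = 40     # real threats correctly flagged
    false_positives = 160   # harmless activity flagged as a threat
    false_negatives = 10    # real threats missed
    true_negatives = 9790   # harmless activity correctly ignored

    precision = true_positives / (true_positives + false_positives)             # 0.20
    recall = true_positives / (true_positives + false_negatives)                # 0.80
    false_positive_rate = false_positives / (false_positives + true_negatives)  # ~0.016

    print(f"Precision: {precision:.2f}  Recall: {recall:.2f}  FPR: {false_positive_rate:.3f}")
    # Even a ~1.6% false-positive rate means four bogus alerts for every real one,
    # which is exactly the alert-fatigue problem SOC teams worry about.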

Examples of AI-powered cybercrime

Next, James highlighted some examples of how cybercriminals are deploying AI tools to launch devastating attacks. These include:

  • Deepfakes: AI-generated fake audio and video can be used to launch social engineering attacks.
  • Malware: Cyberattackers can manipulate malware code or create fake data to help them avoid detection.
  • Phishing attacks: Criminals can use AI to launch phishing attacks on a massive scale.
  • Learning-based attacks: AI training data can be ‘poisoned’ to generate inaccurate, harmful outputs.
  • Hacking tools: Inexperienced hackers can use AI to bolster their ability to perform sophisticated intrusion attacks. Many pre-made solutions are now available on the dark web.

Advantages of AI in cybersecurity

Despite AI’s disadvantages and risks, enhancing your security operations with the right AI tools can deliver a host of benefits, including:

  • Automates threat detection: AI can quickly analyse vast amounts of security data in real time, identifying threats like zero-day attacks (a minimal sketch of the idea follows this list).
  • Increases efficiency and accuracy: AI-driven security tools improve their speed and accuracy over time, freeing up valuable resources for security teams.
  • Reduces costs: AI-driven automation enables security teams to eliminate manual tasks, resulting in faster data collection, more efficient incident management, and reduced operational costs.
  • Reduces human error: Automating manual tasks minimises the likelihood of human error and improves the effectiveness of your security measures overall.
  • Addresses skills gaps: AI-driven automation empowers organisations to enhance their security posture without hiring additional skilled personnel.
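
As a rough illustration of automated threat detection, the sketch below trains scikit-learn's IsolationForest on a handful of "normal" login records and flags an outlier. The features, values, and thresholds are illustrative assumptions rather than anything from the session; real AI-driven platforms work on far richer telemetry.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Illustrative features per login event: [hour of day, failed attempts, MB downloaded]
    normal_logins = np.array([
        [9, 0, 12], [10, 1, 8], [11, 0, 15], [14, 0, 9], [16, 1, 11],
        [9, 0, 10], [13, 0, 14], [15, 1, 7], [10, 0, 13], [12, 0, 10],
    ])

    # Train an unsupervised model on what "normal" looks like.
    model = IsolationForest(contamination=0.1, random_state=42).fit(normal_logins)

    # A 3 a.m. login with many failed attempts and a huge download should stand out.
    print(model.predict(np.array([[3, 12, 900]])))   # -1 means flagged as anomalous
    print(model.predict(np.array([[11, 0, 12]])))    # 1 means treated as normal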

How to utilise AI tools securely – best practices

Finally, James highlighted our best practices for implementing AI in cybersecurity effectively. These include:

  • Choose AI tools carefully: Always vet AI vendors carefully and ensure they focus strongly on security. Evaluate vendors for secure-by-design principles and their understanding of privacy and data protection regulations.
  • Don’t overshare: Avoid sharing sensitive company data with AI tools (see the redaction sketch after this list).
  • Verify AI output: Always check AI outputs to ensure accuracy before use.
  • Check access controls and permissions: Implement proper access controls and restrict unauthorised use of AI systems. Protect AI applications from cybercriminals with firewall and web filtering technology.
  • Backup AI models and data regularly: Back up all AI data to ensure resilience in the event of system failures or attacks.
  • Create an AI development framework: Research out-of-the-box AI development frameworks such as Microsoft’s Responsible AI Standard and Azure AI Content Safety, as well as Google’s Secure AI Framework. Once your research is complete, create your own organisational approach, or seek help from experts like Atech; we’re happy to help.
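
To illustrate the "don't overshare" advice, here is a minimal sketch that strips a few common identifier patterns from a prompt before it leaves your environment. The patterns and placeholder labels are our own illustrative assumptions; in production you would lean on proper data loss prevention tooling rather than a short script.

    import re

    # Hypothetical patterns for common sensitive identifiers; extend these to cover
    # your own data (customer IDs, project codenames, and so on).
    REDACTION_PATTERNS = {
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "PHONE": re.compile(r"\+?\d[\d\s-]{8,}\d"),
        "UK_NINO": re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),
    }

    def redact(prompt: str) -> str:
        """Replace anything matching a sensitive pattern with a placeholder."""
        for label, pattern in REDACTION_PATTERNS.items():
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
        return prompt

    raw = "Summarise this ticket from jane.doe@example.com, phone +44 7700 900123."
    print(redact(raw))
    # -> Summarise this ticket from [EMAIL REDACTED], phone [PHONE REDACTED].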

Atech: A trusted guide in your AI development journey

As a pure-play Microsoft Partner with a proven track record of providing cutting-edge infrastructure and outstanding security services, we can make sure you never make a wrong turn in your AI cybersecurity journey.

We offer a range of one-to-one workshops that are designed to ensure you get the most out of your IT investments. Some of these engagements can be funded by Microsoft (if your company qualifies). Either way, we discuss your IT and security goals, including your aspirations to implement AI-driven security, and create a clear roadmap for success. Contact us to sign up for your tailored presentation.

Or, if you would like to find out if your company is ready for the Microsoft Copilot revolution, sign up to our webinar on Tuesday 19th March. We’d love to see you there.
