AI in Cybersecurity: White and Dark Sides
Some people believe that Artificial Intelligence (AI) has the ability to amplify our natural human intelligence, as long as it remains in good hands. Let’s take a closer look at AI and how it can benefit – or negatively influence – our lives in the near future.
Artificial intelligence is progressing at a rapid pace, and we often associate it with science fiction movies where robots perform human-like tasks. Today, AI is deployed across a vast range of applications, from powering Google's search algorithms to controlling autonomous weapons.
Today's AI is designed to perform a single, 'narrow' task, such as driving a car or recognizing faces, which is why we often refer to it as 'narrow AI'. The long-term goal, however, is what researchers call 'strong AI': systems that could eventually outperform humans across a wide range of cognitive tasks rather than excelling at just one.
Artificial Intelligence: The White Side
The White House has published a press release as well as a report on Artificial Intelligence, specifically highlighting the Administration's plans for AI, how it will handle cybersecurity, and what impact it will have on the economy and the workforce in the US. It's clear that machine learning and AI are growing exponentially in both value and strength, almost on a daily basis.
When we refer to the white side of AI, we refer to all the good and helpful areas where AI can be an asset to us. There are many examples from our day-to-day life, and here are some that refer to enterprise cybersecurity:
Threat Identification
Organizations face a growing cybersecurity challenge, as the attack surface they have to protect continually expands. Data has to be analyzed continuously, and this is where modern technology and AI come in: machine learning models can sift through large volumes of network and log data to surface suspicious activity that human analysts would miss.
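As a simple illustration of automated threat identification, the sketch below flags hours whose failed-login volume deviates sharply from the baseline. The data and the threshold are made up for this example, and only Python's standard library is used:

```python
from statistics import mean, stdev

def flag_anomalies(event_counts, threshold=3.0):
    """Return the indices of hours whose event volume is a statistical outlier."""
    mu = mean(event_counts)
    sigma = stdev(event_counts)
    return [i for i, count in enumerate(event_counts)
            if sigma > 0 and abs(count - mu) / sigma > threshold]

# Hourly failed-login counts; hour 5 contains a suspicious spike.
counts = [12, 9, 11, 10, 13, 220, 12, 10, 11, 9, 12, 10]
print(flag_anomalies(counts))  # [5]
```

Production systems use far richer models than a z-score, but the principle is the same: let the machine watch the data continuously and surface only the outliers for human review.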
Risk Level Assessment
It’s crucial for a business to contextualize internal cybersecurity intelligence with external threat data to determine the real security risks and their impact on the business. AI makes this process quicker and more thorough, in many cases reducing reliance on manual review.
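One way to picture this contextualization is a scoring function that weighs a vulnerability's severity against the business importance of the affected asset and against external threat intelligence. The sketch below is purely illustrative; the weighting and the 1.5 multiplier are assumptions for the example, not an industry formula:

```python
def risk_score(cvss_base, asset_criticality, actively_exploited):
    """Blend internal context with external threat intel into a single 0-10 score.

    cvss_base          -- vulnerability severity on the 0-10 CVSS scale
    asset_criticality  -- 0-1 business importance of the affected system
    actively_exploited -- external feeds report exploitation in the wild
    """
    score = cvss_base * asset_criticality
    if actively_exploited:
        score *= 1.5  # escalate when external intel confirms active attacks
    return min(round(score, 1), 10.0)

# The same flaw carries very different real-world risk depending on context:
print(risk_score(7.5, 0.9, True))   # 10.0 -> patch immediately
print(risk_score(7.5, 0.3, False))  # low  -> schedule normally
```

The point of the example is the inputs, not the arithmetic: a risk verdict needs both the internal view (how critical is this asset?) and the external one (is anyone actually exploiting this?).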
Remediation
AI also enables automated processes for immediate security-incident notification and intervention, so remediation steps can be applied to close security gaps in a timely manner.
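A minimal sketch of such automation, assuming a hypothetical playbook that maps alert severity to response steps (the action names and severities are invented for the example):

```python
# Hypothetical playbook: alert severity determines which automated steps run.
PLAYBOOK = {
    "critical": ["isolate_host", "notify_on_call", "open_ticket"],
    "high":     ["notify_on_call", "open_ticket"],
    "low":      ["open_ticket"],
}

def remediate(alert):
    """Return the ordered remediation actions for an incoming alert."""
    actions = PLAYBOOK.get(alert["severity"], ["open_ticket"])
    return [f"{action}:{alert['host']}" for action in actions]

print(remediate({"severity": "critical", "host": "web-01"}))
# ['isolate_host:web-01', 'notify_on_call:web-01', 'open_ticket:web-01']
```

In a real environment these steps would call out to EDR, ticketing, and paging systems; the value of automating them is that the first response happens in seconds rather than hours.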
With today’s powerful computer storage and processing power, deep learning is not only possible, but now practical too. Even governments hope to adapt the technology to identify specific patterns in Big Data.
The ‘Internet of Things’ will also continue to grow as more appliances, gadgets and wearable devices start to connect and broadcast data and messages in real-time. Beyond this, we’ll see millions of applications that make use of AI, specifically designed to perform tasks that could not be automated before. Robots will be able to carry out virtually everything a human can do.
Artificial Intelligence: The Dark Side
But for all the good that technology can provide us, it’s important to remember that it can be dangerous in the wrong hands – especially when it comes to cyber security.
Here are a few examples of the dark side of AI:
Advanced Phishing Attacks
Machine learning algorithms will become even more advanced, mimicking the writing style of a victim's trusted contacts to craft convincing phishing messages. AI can learn a person's behaviours and habits in order to exploit them and steal sensitive data.
Quick Search for Vulnerabilities
Cybercriminals can scan software for previously known vulnerabilities and then exploit them, leaving company data exposed. This will be done with machine efficiency, as opposed to today's slow, manual process.
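The flip side is that defenders can run exactly the same kind of automation first. The sketch below checks a software inventory against known-vulnerable versions; the advisory data here is made up for illustration, not a real vulnerability feed:

```python
# Illustrative advisory data: package -> versions with published CVEs.
KNOWN_VULNERABLE = {
    "openssl": {"1.0.1", "1.0.1f"},  # e.g. Heartbleed-era builds
    "log4j":   {"2.14.1"},
}

def find_exposed(inventory):
    """Return (package, version) pairs that match a known advisory."""
    return [(pkg, version) for pkg, version in inventory.items()
            if version in KNOWN_VULNERABLE.get(pkg, set())]

inventory = {"openssl": "1.0.1f", "log4j": "2.17.0", "nginx": "1.24.0"}
print(find_exposed(inventory))  # [('openssl', '1.0.1f')]
```

Whoever runs this check faster wins: the attacker gets a target list, or the defender gets a patch list.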
The Importance of AI Safety
Elon Musk has warned us that AI can be dangerous, especially since machines can simply be programmed to perform a variety of destructive tasks.
Global security strategist Derek Manky said: “In the coming year we expect to see malware designed with adaptive, success-based learning to improve the success and efficacy of attacks. This new generation of malware will be situation-aware, meaning that it will understand the environment it is in and make calculated decisions about what to do next.”
AI can be dangerous in one of two ways: it can be programmed to do something damaging, or it can be programmed to do something good but use destructive methods to get there. Either way, AI safety is crucial. AI is a tool, and like any other tool, it can be used for good or bad.
The Future of Life Institute launched an AI Safety Research Program in 2015, with the help of a generous donation by Elon Musk. Various projects were launched with the aim of ensuring that AI remains safe and beneficial. More than 40 publications were also completed as well as a host of conferences and events to show what researchers have accomplished.
Many high-profile individuals like Bill Gates and Stephen Hawking have warned that we should be aware of the possible dangers of AI, and they've supported numerous initiatives. Elon Musk is one of the many backers of OpenAI, an organization dedicated to developing AI that benefits humanity.
Because AI has the potential to become more intelligent than any of us, there is no definite way to predict how it will behave. As Elon Musk has said, the most important thing we need to focus on is making sure that life continues into the future. It's better to prevent a negative scenario in the first place than to wait for it to happen and then try to fix it.