by Chuck Brooks and John Lindsay
For better and for worse, AI is revolutionizing the cybersecurity landscape
The transformative power of artificial intelligence (AI) is not only helping businesses; it is also bringing about major changes in the world of cybersecurity. On the positive side, AI is improving automated, real-time cyber threat identification and analysis. Businesses now have faster, more capable cyber defenses, enabling them to monitor system activity, detect anomalies, and flag users behaving unusually.
Regrettably, AI's performance gains cut both ways. In the present digital landscape, cybercriminals are also using AI and machine learning (ML) tools to attack and exploit victims’ networks more effectively, and there are two main areas where such tools are having the greatest impact.
First, AI can be used to automate and speed up cyberattacks. AI tools reduce the time spent on reconnaissance, information gathering, and analysis ahead of an attack. They can also direct the exploration and exploitation of a victim’s IT systems once an attacker is inside, ultimately reducing the time between initial compromise and the theft or destruction of information.
Second, AI is being used to create increasingly realistic but fake text, audio, or even video. And previously innocuous information, such as public footage of interviews with members of the C-suite, can be used to produce these deepfakes ever more quickly. The most obvious use of deepfakes is to make phishing attempts more convincing; however, they are also now being used to support disinformation campaigns against businesses, for example, by making it appear that a member of staff has said something that could cause significant reputational damage to an organization. And the scale of the problem is growing. In 2024, nearly half of global businesses reported an instance of deepfake-related deception [1].
Workplace AI tools can also cause problems if not deployed securely
Inadvertent misuse of AI tools can also affect information security. Since the release of generative AI applications like ChatGPT, employees have been eager to use these tools to make day-to-day work tasks easier. However, putting sensitive corporate information into public instances of generative AI tools drastically increases the risk of inadvertent disclosure. This risk has been exacerbated by the arrival of China-based AI tools like DeepSeek. DeepSeek’s permissive privacy policy, combined with China’s national security laws, could in theory enable the Chinese government to request access to the information and queries users submit to it.
There are four key considerations for business leaders when addressing AI-driven threats:
1. Ensure the support of business leaders
To counter rising threats, business leaders must confront the risks posed by AI and address any defensive gaps if their organizations are to stay commercially viable. They must devote sufficient resources to understanding how AI might damage their enterprises, including how it could be used to create sophisticated disinformation or phishing campaigns. They must also understand how sensitive corporate data could be leaked, inadvertently or deliberately, into the public domain through misuse of AI applications like ChatGPT.
In addition, executive leadership teams must prioritize cybersecurity and address the increasing threat of AI-enabled attacks. They must know the specific risks they face, given the industry sectors and geographies in which they operate. And they should remain cognizant of the threats posed by geopolitically motivated state actors, untargeted criminal attacks, and activists provoked by their business activities. Where possible, they should also adopt AI-powered cybersecurity tools to match the increasing sophistication of attackers’ capabilities.
2. Understand your exposure to AI-driven cyber threats
Any commitment to cybersecurity starts with a risk assessment of cyber vulnerabilities. AI-driven cybersecurity tools should be integrated into those assessments.
AI can help identify code flaws, configuration mistakes, or malware that may already be present in programs and applications. Application security testing should be the first stage of that assessment process.
The primary objective of testing is to identify issues before they enter production and affect devices and networks. Testing must be continuous, though, as threats evolve; in particular, AI threat detection tools will be needed to maintain optimum cybersecurity.
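To make the idea concrete, here is a minimal sketch of the kind of behavioral anomaly detection such tools perform, using scikit-learn’s IsolationForest on hypothetical login-event features (the feature names, values, and thresholds are illustrative assumptions, not a production design):

```python
# Minimal sketch of AI-based anomaly detection on login events.
# The features below are hypothetical; real deployments use far richer telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [hour_of_day, failed_logins_last_hour, MB_downloaded]
baseline_activity = np.array([
    [9, 0, 12], [10, 1, 8], [14, 0, 20], [11, 0, 15], [16, 1, 10],
    [9, 0, 9], [13, 0, 18], [15, 1, 11], [10, 0, 14], [12, 0, 16],
])

# Train on a baseline of typical user behavior.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline_activity)

# Score new events: a normal midday login vs. a 3 a.m. login with
# repeated failures and a large download.
new_events = np.array([[10, 0, 13], [3, 7, 900]])
for event, label in zip(new_events, model.predict(new_events)):
    status = "ANOMALY" if label == -1 else "ok"
    print(event, "->", status)
```

Commercial detection platforms are far more sophisticated, but the underlying principle is the same: model a baseline of normal behavior, then flag deviations for human review.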
Technical security controls and protocols are another significant consideration for protecting companies and organizations. IT and security departments need to be empowered to provide media monitoring, information security, cyber defenses, and incident response.
With AI specifically, it is also important to put technical safeguards in place to protect corporate information before it is entered into commercial tools, as illustrated in the sketch below.
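One simple safeguard is to screen text for obviously sensitive patterns before it ever leaves the corporate boundary. This is a minimal sketch with a few example patterns; a real deployment would rely on a vetted data-loss-prevention product and organization-specific rules:

```python
# Illustrative pre-filter that redacts likely-sensitive strings before text
# is submitted to an external AI tool. These patterns are examples only,
# not a complete data-loss-prevention policy.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

prompt = "Summarize: contact jane.doe@example.com, SSN 123-45-6789."
print(redact(prompt))
# -> Summarize: contact [REDACTED-EMAIL], SSN [REDACTED-SSN].
```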
3. Enable a top-down cybersecurity culture with expertise in emerging tech trends
AI is already having a significant impact on security and commercial operating models. To fully utilize emerging technologies, the C-suite should possess a thorough understanding of their capabilities and allocate sufficient resources to security teams for the acquisition and integration of new security tools.
However, technology tools can only take you so far, and they are continually evolving. For now, methods for identifying deepfakes are not always reliable. Furthermore, if employees believe that AI tools will make their jobs easier, they will try to use them, regardless of any restrictions. You should therefore either ensure that staff understand how to protect information (e.g., by not placing it in insecure AI tools) or provide them with secure AI solutions to use.
A proper culture must be established to facilitate the adoption of technology, starting at the top. Business leaders should ensure that staff learn to identify AI-driven risks, maintain a healthy skepticism, and verify information using reliable, alternative sources. The potential dangers of AI tools must be communicated along with their benefits. Staff should be encouraged to watch for, and take responsibility for, unintentional AI misuse, and those who self-report or report issues should not be penalized.
The C-suite also needs to develop and demonstrate an understanding of best practices, legislation, and challenges around cybersecurity and AI, as otherwise businesses will remain largely unprepared. To improve cybersecurity culture, there should be more focus on attracting cybersecurity and AI experts to board-level positions. Getting outside assistance to improve the C-suite’s cybersecurity and AI readiness is a wise choice, as the risks and expenses associated with breaches keep rising.
4. Be prepared to respond to AI-driven threat scenarios
The C-suite also needs to regularly test its ability to respond to AI-driven threat scenarios. Key elements to test include how to handle internal and external narratives that could be manipulated by disinformation, how to deal with potential disruption to the chain of command caused by deepfakes, and how to respond to relevant regulators.
Operating securely in a rapidly evolving digital world, driven by developing technologies, presents numerous challenges. Plans to identify, stop, and mitigate evolving cyber threats must be kept current, and industry awareness must be maintained.
Authors:
John Lindsay
John Lindsay is a Director in the Washington, D.C. office of global strategic advisory firm Hakluyt. He advises corporations and investors on the opportunities and risks facing their businesses, with a particular focus on geopolitics and technology.
Before joining Hakluyt, John held various public affairs and diplomatic roles in the UK government, including as a cyber security adviser to the UK Ministry of Defence and, most recently, for the UK Foreign, Commonwealth & Development Office, where he focused on Afghan politics.
John specializes in facilitating dialogue between technical and non-technical audiences. He has undergraduate and postgraduate degrees from the University of Cambridge, where he studied Politics and International Relations. He also holds several advanced cybersecurity qualifications.
Follow John on LinkedIn
Chuck Brooks
Chuck Brooks currently serves as an Adjunct Professor at Georgetown University in the Cyber Risk Management Program, where he teaches graduate courses on risk management, homeland security, and cybersecurity. He also has his own consulting firm, Brooks Consulting International.
Chuck has received numerous global accolades for his work promoting cybersecurity. Recently, he was named the top cybersecurity expert to follow on social media and one of the top cybersecurity leaders for 2024. He has also been named “Cybersecurity Person of the Year” by Cyber Express, Cybersecurity Marketer of the Year, and a “Top 5 Tech Person to Follow” by LinkedIn, where he has 122,000 followers. Chuck is also a contributor to Forbes, The Washington Post, Dark Reading, Homeland Security Today, Skytop Media, GovCon, Barrons, Reader’s Digest, and The Hill on cybersecurity and emerging technology topics. He is the author of the book Inside Cyber: How AI, 5G, IoT, and Quantum Computing Will Transform Privacy and Our Security, now available on Amazon.
In his career, Chuck has received presidential appointments for executive service from two U.S. presidents and served as the first Director of Legislative Affairs at the DHS Science & Technology Directorate. He spent a decade on Capitol Hill working for the late Senator Arlen Specter on tech and security issues. Chuck has also served in executive roles at companies such as General Dynamics, Rapiscan, and Xerox.
Chuck has an MA from the University of Chicago, a BA from DePauw University, and a certificate in International Law from The Hague Academy of International Law.
Follow Chuck on LinkedIn: https://www.linkedin.com/in/chuckbrooks/
[1] Regula, “Deepfake Fraud Doubles Down: 49% of Businesses Now Hit by Audio and Video Scams,” GlobeNewswire, September 30, 2024. https://www.globenewswire.com/news-release/2024/09/30/2955054/0/en/Deepfake-Fraud-Doubles-Down-49-of-Businesses-Now-Hit-by-Audio-and-Video-Scams-Regula-s-Survey-Reveals.html