Securing AI Investments: Navigating Three Security Imperatives


John Scimone is President, Chief Security Officer at Dell Technologies, where he leads the company’s global corporate security program.

AI is transforming businesses and the world around us. It holds the promise of development, growth, efficiency and greater profitability for organizations, but it also introduces new risks alongside these opportunities. There are three security imperatives for AI that organizations need to consider: managing the risks of AI usage, defending against AI-powered attacks and leveraging AI to enhance security.

1. Managing The Risks Of AI Usage

Data fuels AI. Most generative AI systems create and leverage tremendous amounts of data in creative and unique ways, and that data often contains sensitive and confidential information. This heightens the need to understand and protect the data involved—from building, training and tuning the model to prompting and generating outputs.

Organizations must also be concerned about other security risks, ranging from data integrity and system availability to compliance. The good news is that we don’t need to reinvent the wheel to secure AI systems. Long-standing security principles and best practices still apply, including least privilege, input and output filtering, authentication and access control, monitoring and upgradeability. Cyber practitioners must learn about AI technologies and apply these existing practices to them, which requires new training that organizations should institute.
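To make one of these controls concrete, below is a minimal sketch of input and output filtering around a generative AI call. It is illustrative only: the regex deny-list and the `call_model` function are assumptions standing in for a real guardrail or data loss prevention service and a real model API, not a recommendation of any specific vendor or pattern set.

```python
import re

# Hypothetical deny-list; a real deployment would use a dedicated DLP or
# guardrail service rather than a handful of regexes.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),               # US SSN-like strings
    re.compile(r"\b(?:AKIA|ASIA)[A-Z0-9]{16}\b"),       # AWS access key IDs
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # private key headers
]

def redact(text: str) -> str:
    """Mask anything that matches a sensitive pattern."""
    for pattern in SENSITIVE_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def guarded_completion(prompt: str, call_model) -> str:
    """Filter the prompt on the way in and the response on the way out.

    `call_model` is a placeholder for whatever function invokes the
    generative model; it is an assumption, not a specific vendor API.
    """
    safe_prompt = redact(prompt)        # input filtering
    response = call_model(safe_prompt)  # model invocation
    return redact(response)             # output filtering
```

In practice, this kind of filtering would be layered with the other controls named above, such as least privilege, authentication and monitoring, rather than relied on alone.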

In addition to mitigating traditional security risks, organizations should take a holistic risk management approach, weighing the myriad other considerations—from safety to privacy and ethics to company environment and culture. The best way to do this is to establish an AI governance model and processes that integrate all of these factors, with experts from each discipline across the business included and empowered.

2. Defending Against AI-Powered Attacks

We must be mindful of the risks of using AI and GenAI, but we must also prepare for hostile use of AI by those who intend to harm us. Just like legitimate organizations, criminals will seek to use AI to accelerate the pace of their operations, decrease their operating costs and increase the sophistication of their attacks.

The industry is already seeing AI used by criminals. In recently reported cases, we have seen advanced fraud and social engineering attacks resulting in the theft of tens of millions of dollars, as well as broad increases in the sophistication of general-purpose phishing emails. Generally, this is a continuation of existing attack methodologies, meaning existing cybersecurity best practices can help organizations withstand them. This includes technical security controls and tools, as well as training employees to recognize and avoid these AI-powered attacks.

AI’s power not only enhances traditional attacks but also enables entirely new classes of threats—from deepfakes and uncontrolled autonomous hacking to new styles of attacks and crimes that we can’t yet imagine. This evolution will require new ways to manage organizational cybersecurity. We are already seeing instances of sophisticated attacks, such as deepfake audio and video technology, being used to socially engineer employees and defraud organizations. These types of attacks require adjustments to existing practices to account for our new inability to trust unauthenticated audio and video streams.

AI attacks of the future will require us to completely reimagine how to manage cybersecurity. For example, with the advent of criminal autonomous AI agents, cybersecurity teams will be challenged to move from manual, time-intensive operating models to a high degree of defensive autonomy. Security teams will likely need to build and leverage defensive AI agents to combat the criminals’ offensive AI agents, with future cyberwars taking place at lightspeed.

A key decision will be how much decision-making autonomy and freedom of action organizations grant their defensive cyber agents, given that offensive hacking agents are likely to operate without rules or any concern for risk on the part of their criminal creators.

3. Leveraging AI To Enhance Security

Just as criminals leverage AI for harm, organizations should integrate AI into their security programs to improve the efficiency and effectiveness of their operations. AI is perhaps the first cause for optimism in decades for the beleaguered cybersecurity industry: unlike any other recent technology or innovation, it holds the potential to tip the scales in favor of defenders.

Ways AI is strengthening security capabilities today include:

• Secure code development: The majority of security incidents are tied to insecure code. AI can significantly increase not only the speed of software development but also its quality, thereby reducing security vulnerabilities.

• Advanced predictive tools: By utilizing sophisticated algorithms, AI can anticipate possible attack vectors, helping security teams stay ahead and invest their time and resources in the risk areas most likely to materialize.

• Superior threat detection: AI’s ability to rapidly process and analyze vast datasets enhances the accuracy and efficiency of threat detection; a minimal sketch of this idea follows the list.

• Enhanced threat response: AI can accelerate cyber defenders’ ability to decide and implement response and containment actions in the wake of a detected threat.

• Informed and empowered users: GenAI systems can distill thousands of expert security requirements and best practices and explain to every employee, in clear and tailored ways, what their obligations are to operate securely. This significantly increases cybersecurity awareness and reduces the risk of human-induced security vulnerabilities and incidents.
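As an illustration of the threat detection point above, here is a minimal sketch using an off-the-shelf anomaly detector. The feature vectors, the assumed 1% contamination rate and the synthetic baseline data are all assumptions made for demonstration; a production pipeline would derive features from real telemetry and tune the model accordingly.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative only: each row stands in for a feature vector derived from
# network or authentication logs (e.g., bytes transferred, login failures,
# request rate). These features and values are assumptions.
rng = np.random.default_rng(seed=42)
baseline = rng.normal(loc=0.0, scale=1.0, size=(1_000, 3))  # normal activity
suspect = np.array([[8.0, 9.0, 7.5]])                       # outlier-like event

# Train on baseline traffic; `contamination` is the assumed anomaly rate.
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(baseline)

# predict() returns -1 for anomalies and 1 for inliers.
print(detector.predict(suspect))            # -> [-1], flagged for review
print(detector.decision_function(suspect))  # lower scores = more anomalous
```

The design choice worth noting is that the model learns what normal activity looks like and flags deviations from it, which is how AI-based detection can surface novel threats that signature-based tools miss.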

For most organizations, the adoption of these advances will come through their technology and security partners. To extract the full value of these technologies, organizations must prepare their data and processes, seeking to simplify and standardize. Further, implementing an automation-first cybersecurity strategy will begin to prepare organizations for a world in which most attacks are performed by (and, thus, must be defended against by) autonomous AI agents.

Safeguarding Innovation In The AI Era

In an era where AI and GenAI are pivotal to innovation and competitive advantage, securing these investments is crucial. Protecting innovation with robust security measures helps ensure that AI’s transformative potential is realized safely and effectively, empowering organizations to thrive in an increasingly digital landscape. Embracing these imperatives as integral components of security and risk strategies will be essential for success in an AI-fueled future. As AI continues to reshape industries, the ability to innovate securely will play a key role in determining which companies lead and which fall behind.

