This past summer was the first time I watched the Olympics since moving to the U.S. Besides appreciating the sheer greatness of the American Olympic spirit, there was one other thing that could not be missed - AI! 

AI seemed to fill every commercial slot, and most of the commercials focused on harnessing AI for business productivity and operations. No matter your take on the greatest Olympic moment or the greatest AI commercial, the trend was impossible to overlook. 

So, are we over-hyping AI? Is it going to be like IoT, 3D printing, and augmented reality? Or are we really at the beginning of something huge? 

It’s just the beginning of AI

AI has the potential to revolutionize industries and integrate into business operations. It is not just a trend, but a fundamental shift in how organizations function. Take, for example, the recent post from Amazon CEO Andy Jassy about how using their AI assistant internally saved Amazon $260M and 4.5K developer-years of work! 

This technological evolution, while unlocking unprecedented productivity and innovation, also brings with it a host of security challenges that must be addressed to safeguard sensitive information and maintain regulatory compliance. The intersection of AI and security is where these challenges and opportunities converge, creating a new landscape that demands careful navigation.

AI and security currently intersect in three ways: 

  1. Security for AI

  2. AI for enhancing security products

  3. Security for tackling AI-generated attacks

In this article, I will cover security for AI, which addresses the need to safely introduce and utilize AI within an organization. 

The “AI pause”

There’s no question that new AI capabilities can enhance productivity. But this expansion raises critical questions about security and privacy. The more I speak with security, governance, and privacy teams, the more I see the trend of an “AI pause” driven by the security concerns AI poses. 

Some of these security concerns are:

  • Data Input: What data are we feeding into the models? Are we at risk of sensitive data being shared with the models unintentionally?

  • Data Output: What is the likelihood that AI technologies will inadvertently expose or misuse confidential information?

  • Unauthorized Access: What is the risk of sensitive data being accessed by unauthorized users or models?

Raising and addressing security concerns is important – and no organization should open the AI gates without taking them into consideration. However, pausing AI initiatives in response to those concerns has an unfortunate side effect: employees turn to AI platforms of their own choosing. 

When employees use applications that are not managed by IT, their inputs go unmonitored, and the organization cannot know whether sensitive data is being introduced by or to AI. Instead of turning a blind eye to these initiatives, we should take a proactive approach: incorporate AI into our organizations while ensuring we do it responsibly.

Flipping the AI narrative

Sometimes you have to go back to the basics. For example, when you allow a third-party platform to be used, you typically know what data it can access, what it can do with that data (read, write), and what the company may (or may not) do with that data outside the confines of your organization.

Instead of saying no to AI, how can we say yes in a monitored and controlled way? Instead of hitting the brakes and risking employees finding alternative, less secure methods of access, let’s put AI into first gear.

Using AI securely is not about buying “one product” or applying “one quick fix.” Different security factors require different security approaches. One approach that can help your organization use AI securely is data security posture management (DSPM).

Securing data and AI with a DSPM strategy

DSPM is defined by Gartner as providing “visibility as to where sensitive data is, who has access to that data, how it has been used, and what the security posture of the data store or application is.” With the proliferation of AI, DSPM can help organizations know what data they have, how it is classified, who can access it, and how it is being used. 

DSPM leverages automation to continuously discover, scan, and classify structured, semi-structured, and unstructured data by type and sensitivity. It eliminates the need for connectors or agent-based solutions, providing users with a unified data classification platform across cloud, SaaS, and on-premises environments. 
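To make this concrete, here is a minimal sketch of what one pass of automated classification could look like. The detection rules are deliberately simplistic stand-ins (real DSPM engines use far richer detectors and validators), and the object paths are made up for illustration:

```python
import re

# Deliberately simple detection rules for illustration; a real DSPM engine
# uses much richer detectors (ML models, validators, contextual signals).
RULES = {
    "EMAIL":       re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(text: str) -> set[str]:
    """Return the set of sensitive data types detected in a content sample."""
    return {label for label, rx in RULES.items() if rx.search(text)}

def scan(objects: dict[str, str]) -> dict[str, set[str]]:
    """Build a classification inventory: object path -> detected labels."""
    return {path: classify(sample) for path, sample in objects.items()}

# Toy stand-ins for content samples pulled from cloud or SaaS stores.
inventory = scan({
    "s3://crm-exports/leads.csv": "alice@example.com, 123-45-6789",
    "s3://eng-docs/design.md":    "architecture overview, no PII here",
})
for path, labels in inventory.items():
    print(path, "->", labels or {"UNCLASSIFIED"})
```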

This approach is key to tracking data as it is created, moved, and copied for different AI or engineering initiatives. DSPM is a cybersecurity mindset that by design puts data at the center - making it a natural match for the AI era.  

So, tying this back to the “AI pause” mentioned above, how do specific DSPM capabilities directly address it? 

Data classification and labeling

As AI tools generate new data or interact with existing information, ensuring that this data is accurately classified is vital. Automated systems that integrate with AI tools can apply and update sensitivity labels. 

What can this look like in real life? If an organization uses Microsoft 365 and Purview MIP labels, the flow has three steps (a rough code sketch follows the list):

  1. Data teams can use data classification to gain visibility into what data is present in different SharePoint sites. 

  2. They can then apply Purview sensitivity labels to relevant files. 

  3. Finally, they can use the labels to enforce how sensitive data is protected - whether through data loss prevention (DLP) rules or encryption/access policies. 
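Expressed as a rough Python sketch, those three steps could look like the snippet below. To be clear, the helper functions here are hypothetical stubs standing in for Microsoft Graph / Purview API calls, not the real SDK:

```python
# Sketch of the three-step flow above. The helpers are hypothetical stubs
# standing in for Microsoft Graph / Purview API calls (not the real SDK).

def discover_sharepoint_files(site_url):
    """Stub: would enumerate files in the site via the Graph API."""
    return ["Shared Documents/payroll.xlsx", "Shared Documents/readme.txt"]

def read_sample(path):
    """Stub: would pull a content sample from the file for classification."""
    return "SSN 123-45-6789" if "payroll" in path else "nothing sensitive here"

def apply_purview_label(path, label):
    """Stub: would set the Purview sensitivity label on the file."""
    print(f"apply label {label!r} to {path!r}")

def govern_site(site_url):
    for path in discover_sharepoint_files(site_url):    # step 1: visibility
        sensitive = "123-45-6789" in read_sample(path)  # toy SSN check
        label = "Confidential" if sensitive else "General"
        apply_purview_label(path, label)                # step 2: labeling
        # Step 3 lives in Purview itself: DLP and encryption/access
        # policies are configured to key off the label applied here.

govern_site("https://contoso.sharepoint.com/sites/finance")
```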

Data segmentation

Once data is classified and visibility is achieved, companies can enforce policies that segment data, ensuring it resides only in designated environments, so each model receives the correct data.

A good use case for data segmentation is a policy stating that real customer data cannot reside in the databases used to train your AI models.
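As a sketch, such a guardrail can be expressed as a simple policy check run against the classification inventory. The inventory structure, labels, and the “training” tag below are assumptions for illustration:

```python
# Segmentation guardrail sketch: flag any datastore tagged for AI training
# that the classification inventory says contains real customer data.
# The inventory structure and labels are assumptions for illustration.

inventory = [
    {"store": "pg://prod/customers",  "labels": {"CUSTOMER_PII"}, "tags": {"production"}},
    {"store": "s3://ml/train-corpus", "labels": {"CUSTOMER_PII"}, "tags": {"training"}},
    {"store": "s3://ml/synthetic",    "labels": set(),            "tags": {"training"}},
]

def violations(inventory):
    """Return stores that break the policy: no real customer data
    may reside in datastores used for model training."""
    return [
        row["store"]
        for row in inventory
        if "training" in row["tags"] and "CUSTOMER_PII" in row["labels"]
    ]

for store in violations(inventory):
    print(f"POLICY VIOLATION: customer data found in training store {store}")
```

Run continuously, a check like this turns the segmentation policy from a written rule into something the organization can actually enforce.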

Data access

Understanding and controlling who (or what) has access to what data is essential. There is no need for sensitive files or stores to be overly exposed or shared across the organization. 

Sure, many platforms deal with identity, but when it comes to AI, the linkage between identity context and data becomes crucial! 

For example, in your cloud environment, only specific roles and identities should handle the data used to train a chatbot model. Ideally, this responsibility should be assigned to a highly restricted service role, rather than a user-linked role. 

Alternatively, we can ask: Which roles have access to the outputs of the initially trained chatbot? Are we ensuring that the output data is exposed to the fewest number of people necessary?
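To illustrate, a least-privilege review might start with a diff between the approved role set and what is actually granted. The role names and access maps below are hypothetical stand-ins for what a DSPM or cloud IAM inventory would surface:

```python
# Least-privilege check sketch over AI data access. The role names and the
# access maps are hypothetical stand-ins for a DSPM / cloud IAM inventory.

APPROVED = {
    "s3://chatbot/training-data": {"svc-chatbot-trainer"},  # restricted service role only
    "s3://chatbot/model-outputs": {"svc-chatbot-trainer", "role-ml-review"},
}

observed = {
    "s3://chatbot/training-data": {"svc-chatbot-trainer", "role-analytics-all"},
    "s3://chatbot/model-outputs": {"svc-chatbot-trainer", "role-ml-review"},
}

def over_exposed(approved, observed):
    """Yield (datastore, unexpected_roles) where actual access
    exceeds the approved, least-privilege role set."""
    for store, roles in observed.items():
        extra = roles - approved.get(store, set())
        if extra:
            yield store, extra

for store, extra in over_exposed(APPROVED, observed):
    print(f"{store}: unexpected access for {sorted(extra)}")
```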

Preparing for the future of AI and security

As we look to the future, the intersection of AI and security will continue to advance. By the time the 2028 Olympics take place in Los Angeles, AI's impact on both business and security will likely have progressed in ways we can't yet fully predict. Rather than making bold long-term forecasts, it's more practical to focus on the near term: when applied correctly and securely, AI will enhance productivity across many sectors.

To move forward, organizations need to be aware of the risks associated with AI data while still encouraging innovation. DSPM can provide the necessary security framework to elevate LLMs and generative AI to enterprise standards. Moreover, as AI evolves, DSPM will adapt and transform alongside it!