7 Tips from a Security CTO for Balancing AI Innovation with Governance

Written by Dave Casion
Chief Technology Officer

As a modern CTO, it should probably come as no surprise that I’m an optimist about the innovative potential of artificial intelligence (AI). But I’ve been in this career for a long time, and that optimism is tempered by experience.

I’ve seen enough emerging technology patterns to know that it always takes more time and resources than people expect to push innovative technologies past their final barriers.
One helpful prioritization technique for an effective CTO is finding the situations where the technology can still deliver value even as those challenges are being worked out.

So the question is: how do we take advantage of what AI offers now, and how do we get ready for what we’ll be able to do later, once the barriers start to drop?

At Bitsight we’ve been working on this question for years now—even before the big GenAI boom started to explode in early 2022. 

Bitsight is a company born and bred in the ethos of risk management and governance. So, our approach to AI from the start has been to apply appropriate oversight and governance frameworks around *how* and *when* specific types of AI are used. This has allowed us to move forward with trustworthy and purposeful AI for every appropriate use case without ever compromising the integrity of our products or the security posture of our customers or company.

As we’ve applied governance to the process of vetting AI technologies and use cases, here are some of the lessons we’ve learned.

1. Data security is paramount

One of the main concerns about AI from the beginning, for me and others, is that in many implementations of large language models (LLMs) and other models, the way they interact with data could easily cause data leakage. We needed to vet technologies and AI models to figure out where it was safe to put data and where it wasn’t.

Proper governance required us to essentially build a data classification policy first. Once we had that policy in place, it could guide and govern all the different things people want to do with AI. For example, if the information is public or it’s going to be a blog post, go ahead and put it wherever you want. But if it’s something more company confidential, or customer data, we have to take a very different view of its security.
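To make that concrete, here’s a minimal sketch of how a data classification policy can gate where data is allowed to go. The classification levels, destination names, and mapping below are hypothetical placeholders for illustration, not our actual policy:

```python
from enum import Enum

class DataClass(Enum):
    PUBLIC = 1      # e.g., information destined for a blog post
    INTERNAL = 2    # company confidential
    CUSTOMER = 3    # customer data, most sensitive

# Hypothetical mapping of classification level to permitted AI destinations.
# A real policy lives with the governance team, not hard-coded in an example.
ALLOWED_DESTINATIONS = {
    DataClass.PUBLIC:   {"public_llm_api", "internal_llm", "self_hosted_model"},
    DataClass.INTERNAL: {"internal_llm", "self_hosted_model"},
    DataClass.CUSTOMER: {"self_hosted_model"},
}

def is_use_permitted(data_class: DataClass, destination: str) -> bool:
    """Return True if the policy allows sending this class of data to the destination."""
    return destination in ALLOWED_DESTINATIONS[data_class]

print(is_use_permitted(DataClass.PUBLIC, "public_llm_api"))    # True
print(is_use_permitted(DataClass.CUSTOMER, "public_llm_api"))  # False
```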

First and foremost, we want to make sure we protect all data relating to customers and that we’re not doing anything with AI that would put their data at risk.

2. Approach AI governance as a team sport

As we started to formalize our approach, my colleagues and I implicitly understood that governance needed to be the common denominator for every use case where our business wanted to apply AI. We also recognized that if we were going to square rapid adoption with effective decision-making about how AI is used, we’d need to approach AI governance as a team sport. 

We’d need collaboration that crossed all departmental and org-chart boundaries if we were going to create meaningful but functional policies. At the same time, we also strived to create a system that was lightweight and wasn’t so complicated that it sapped productivity and forward motion. We didn’t want the governance framework to keep us from rapidly adopting AI innovations that lined up with our risk appetite.

What we’ve come up with has worked very well to strike that balance. We’ve created a company-wide AI council that serves as the central clearinghouse for AI decisions. The point of the council isn’t to come up with AI ideas, nor to squash them. The point is to have a room full of different thinkers who can understand what customers and team members want to do with AI, consider the risks from a lot of different angles, and help guide them toward a path that doesn’t limit their creativity and lets them achieve their goals in a way that’s aligned with our policies.

I’m on the council, along with our chief risk officer, chief innovation officer, CISO, field CISO, stakeholders from engineering, our lead AI engineer, general counsel and operational counsel, and even someone from marketing. The council developed our initial AI policy and continues to shepherd it as the AI landscape evolves.

When my team or anyone else in the business wants to bring a new use case to the company, they submit it to the council through a ticketing system that we review regularly. If it fits with existing policy, we approve it right away. If it doesn’t fit but prompts us to update the policy, then we update it; the policy is user-facing, so everyone in the company has access. And if it doesn’t fit with policy and doesn’t warrant a change, we offer our concerns and potential recommendations or action items that could help it work instead.
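For illustration, here’s a rough sketch of that review flow in code. The ticket fields and outcome labels are assumptions made for the example, not our actual ticketing system:

```python
from dataclasses import dataclass, field

@dataclass
class UseCaseTicket:
    title: str
    fits_policy: bool             # complies with the current AI policy as written
    warrants_policy_update: bool  # if not, is the policy worth changing for this case?
    notes: list[str] = field(default_factory=list)

def review(ticket: UseCaseTicket) -> str:
    """Illustrative council decision flow: approve, update the policy, or send back with recommendations."""
    if ticket.fits_policy:
        return "approved"
    if ticket.warrants_policy_update:
        ticket.notes.append("Policy updated and published to the user-facing policy document.")
        return "approved_with_policy_update"
    ticket.notes.append("Concerns and recommended action items returned to the requester.")
    return "needs_rework"

print(review(UseCaseTicket("LLM-assisted report drafting", fits_policy=True, warrants_policy_update=False)))
```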

3. Categorize use cases to optimize adoption priorities

As our engineers and IT operators look for ways to leverage AI, we want them to be able to rapidly identify the high-ROI (return on investment) opportunities. We want the team to be on the lookout for transformational opportunities, whether internal improvements or customer-facing ones, for which we can navigate the risks with some effort and investment.

To focus investment, we established three buckets and prioritize the top work in each so that different tracks keep moving at an appropriate pace. The first category is operational and includes all of the AI-enabled tooling we can use or develop internally to make us more efficient as a business. The second is the customer-delivered category: the tooling we can use to make the customer experience better, either in products or services. The final bucket is transformational capabilities: the use cases that can help us leap ahead of our current product or operational capabilities in a way that can substantially transform the business.

What we are trying to do through this categorization is to use the language and philosophy of prioritization to move each of those buckets forward at an appropriate pace even in the face of resource constraints and potential barriers. 
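To picture how the buckets and tracks fit together, here’s a hedged sketch; the backlog items are made-up examples rather than our real roadmap:

```python
from enum import Enum, auto

class Bucket(Enum):
    OPERATIONAL = auto()         # internal AI tooling that makes the business more efficient
    CUSTOMER_DELIVERED = auto()  # tooling that improves the customer experience in products or services
    TRANSFORMATIONAL = auto()    # use cases that could substantially transform the business

# Hypothetical backlog: each bucket is its own prioritized track, so all three
# keep moving at an appropriate pace despite resource constraints.
backlog: dict[Bucket, list[str]] = {
    Bucket.OPERATIONAL: ["AI-assisted support ticket triage", "internal document search"],
    Bucket.CUSTOMER_DELIVERED: ["natural-language report summaries"],
    Bucket.TRANSFORMATIONAL: ["AI-driven product capability beyond current offerings"],
}

def next_work_items(tracks: dict[Bucket, list[str]]) -> dict[Bucket, str]:
    """Take the top item from each track instead of ranking everything in one global queue."""
    return {bucket: items[0] for bucket, items in tracks.items() if items}

print(next_work_items(backlog))
```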

4. Keep a human in the loop for quality

Research and early adoption show that LLMs get things wrong more often than we want them to. In many state-of-the-art applications of LLMs, there are situations where these models return errors or inconsistent results 45% to 50% of the time. A system that appears to work really well but is only reliable half the time is a big problem for sensitive business use cases.

And so having a human in the loop is really important. As we’ve built our governance structure, our focus has been on keeping a human in the loop where quality and integrity really matter. As we look for opportunities to apply AI-enabled technology, our goal is to implement automated options where the cost of being wrong is low and acceptable, or to include a human in the loop to minimize the chance of an error going undetected or unaddressed.
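As a rough illustration of that routing decision, here’s a minimal sketch; the cost-of-error categories, confidence threshold, and field names are assumptions for the example, not a prescribed implementation:

```python
from dataclasses import dataclass

@dataclass
class AIResult:
    output: str
    confidence: float  # estimated confidence in the result, 0.0 to 1.0

def route(result: AIResult, cost_of_error: str) -> str:
    """Illustrative policy: automate where being wrong is cheap; otherwise keep a human in the loop."""
    if cost_of_error == "low":
        return "auto_accept"            # errors are tolerable and easy to catch later
    if result.confidence >= 0.9:
        return "human_spot_check"       # sample-based review of high-confidence output
    return "human_review_required"      # quality and integrity matter; a person signs off

print(route(AIResult("draft summary", confidence=0.72), cost_of_error="high"))  # human_review_required
```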

5. Don’t underestimate old-school automation

The other way I think about the inconsistency of some AI models’ performance is that we shouldn’t let up on our drive to invest in traditional automation. Automating anything we can is good enough now, and it will only get better as the models improve (and as the patterns for AI adoption mature). Say you have a human process: if you automate 10% of it, that’s a slight efficiency gain. But plugging AI into a process that’s only 90% automated probably isn’t going to make sense either. You have to automate it a hundred percent, and then you can consider how to plug in AI.

My philosophy is to pick areas you can automate all the way through, automate them, and get the benefit now. Once you’ve done that work, you can plug in better and smarter enhancements.
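Here’s a small sketch of that philosophy: automate the process end to end with conventional, deterministic code first, and leave a seam where a model could later be plugged in. The step names and the rule itself are made up for illustration:

```python
from typing import Callable

def classify_rule_based(record: dict) -> str:
    """Conventional, deterministic automation that delivers value today."""
    return "urgent" if record.get("severity", 0) >= 8 else "routine"

def process(records: list[dict],
            classifier: Callable[[dict], str] = classify_rule_based) -> list[tuple[dict, str]]:
    """Fully automated pipeline; the classifier argument is the seam where an AI model could later slot in."""
    return [(record, classifier(record)) for record in records]

# Today: 100% automated with simple rules. Later: swap in an AI-backed classifier
# behind the same seam without re-plumbing the rest of the pipeline.
print(process([{"id": 1, "severity": 9}, {"id": 2, "severity": 3}]))
```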

6. Preapprove AI tech for better control

Very early in our journey, everyone on the AI council recognized that if we equipped our entire user base, whether engineers or business users, with AI tools we’d vetted and deemed safe for specific use cases, then we could move faster and more safely.

The consultancy ThoughtWorks publishes something it calls the TechRadar for technology at large. We implemented a similar system internally for AI technologies. It provides centralized information on what we’ve adopted for which use cases, what we have under evaluation, which technologies are on hold due to certain barriers, and what’s on the wish list from our users. This helps us control the risks, offer alternatives to users who are on the hunt for tools, and keep an eye on technologies as they improve.
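As a hedged sketch of what such an internal radar might look like as data, with placeholder rings and tool names rather than our actual list:

```python
from enum import Enum

class Ring(Enum):
    ADOPTED = "approved for the listed use cases"
    EVALUATING = "under evaluation"
    ON_HOLD = "blocked by known barriers"
    WISH_LIST = "requested by users, not yet assessed"

# Hypothetical radar entries: tool name -> (ring, approved or proposed use cases)
ai_radar: dict[str, tuple[Ring, list[str]]] = {
    "vetted-coding-assistant": (Ring.ADOPTED, ["code completion on non-confidential repos"]),
    "hosted-llm-service":      (Ring.EVALUATING, ["internal document search"]),
    "consumer-chatbot":        (Ring.ON_HOLD, ["no acceptable data-handling guarantees yet"]),
    "meeting-summarizer":      (Ring.WISH_LIST, ["requested by the sales team"]),
}

def adopted_alternatives(keyword: str) -> list[str]:
    """Point users at already-vetted tools so they don't reach for unvetted ones."""
    return [name for name, (ring, uses) in ai_radar.items()
            if ring is Ring.ADOPTED and any(keyword in use for use in uses)]

print(adopted_alternatives("code"))  # ['vetted-coding-assistant']
```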

7. Consider a centralized AI team to provide internal consulting

Finally, while the use of AI tools is completely distributed, we do have a centralized AI team. These are lead engineers with deep expertise and knowledge of the technologies and the risks inherent in them. The team includes people who report to me and others who operate under our chief innovation officer; they are some of our best engineers, and they work on an internal consulting model. When someone is working on an AI-related project, members of this team overlay onto that project for a short period to share lessons learned and offer advice during implementation. This kind of knowledge-based team helps us avoid mistakes and control the quality of our AI deployments.