7 Questions Tech Buyers Should Ask About How Their Vendors Use AI

As AI becomes an increasingly critical component in the digital supply chain, tech buyers are struggling to appropriately measure and manage their AI risk. Keeping tabs on emerging risk from the AI technology they use is hard enough. But often the most crucial AI business functions that organizations depend upon aren’t directly under their control or care, but instead are governed by the tech vendors that embed them into their underlying software.
It's a classic vendor risk management challenge, and one Bitsight knows intimately. Vendor risk and third-party risk management (TPRM) are, of course, our bread and butter. On the flip side, we are also a tech vendor ourselves, one that uses AI in our products and business to better serve customers. Our company has invested considerable resources into managing the risk around AI: we have well-established policies and procedures that govern how Bitsight embeds AI into its products and how the business uses it internally.
With this expert lens, here’s what we see as the most important questions that tech buyers should be asking about how their vendors use AI today.
1. How do you control which data trains and flows into your models?
At the end of the day, one of the biggest questions Bitsight clients want answered is whether their data will be used to train public LLMs (in our case, that answer is an unequivocal ‘no’). But this points to a more foundational line of questioning that all tech buyers should be pressing their vendors on.
No one wants the public or bad actors to be able to query a model around their proprietary data and get served results that should have been kept secret. Ask for concrete information about whether customer data is used to train the vendor's models and, if so, get details about how the vendor protects the privacy and integrity of the data that flows through and trains those models. Buyers should pointedly ask what measures are taken to protect against model inference attacks that can expose training data and against model poisoning attacks that can weaponize data to disrupt the proper functioning of the model. They may also want to ask whether they can opt their own data out of model training, and what mechanisms the vendor uses to ensure that data isn't used.
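To make the opt-out question concrete, here is a minimal sketch, in Python, of the kind of mechanism a buyer might ask a vendor to describe: a per-customer opt-out registry enforced before any record reaches a training pipeline. The record fields and the registry are illustrative assumptions, not a description of Bitsight's or any specific vendor's implementation.

```python
# Hypothetical sketch of enforcing a per-customer training opt-out before data
# ever reaches a model training pipeline. Fields and registry are illustrative.
from dataclasses import dataclass

@dataclass
class CustomerRecord:
    customer_id: str
    payload: dict

def build_training_set(records, opted_out_customers):
    """Drop every record belonging to a customer who opted out of training."""
    kept, dropped = [], 0
    for record in records:
        if record.customer_id in opted_out_customers:
            dropped += 1          # excluded from training entirely
            continue
        kept.append(record.payload)
    print(f"excluded {dropped} records from opted-out customers")
    return kept

# Example: customer 'acme' has opted out, so its record never trains the model.
records = [CustomerRecord("acme", {"text": "sensitive"}),
           CustomerRecord("globex", {"text": "ok to use"})]
training_set = build_training_set(records, opted_out_customers={"acme"})
```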
2. Can you describe which features use AI models and how that impacts functionality?
AI stands to transform the way different kinds of technology make on-the-fly decisions, calculations, and judgements that involve many different variables. Whether AI is used to power real-time dynamic pricing changes based on market fluctuations or to trigger predictive maintenance actions in manufacturing equipment, it can be a true game-changer for optimizing business processes.
The first fundamental problem of managing risk incurred by this kind of AI use is one of transparency. Does the business even know when its financial software platform is using AI to trigger pricing changes? Does it know when its manufacturing technology leverages AI to schedule and execute changes to its firmware? If this AI is operating under black box conditions where the end user isn’t even aware of its existence, it becomes very difficult for that organization to enumerate and manage the AI exposures that it could be potentially introducing to its digitally-led business processes.
Our most sophisticated clients at Bitsight are becoming more discerning about how we and the rest of their tech vendors are using AI to make calculations and decisions under the hood. As they make critical choices about risk, they should ask for greater transparency into the algorithms their vendors use and about the data that fuels it all.
3. What models do you use and where do they come from?
Once businesses establish where AI is being used within tech vendor functionality, the most common-sense follow-up question is ‘What kind of AI technology is driving those features?’
Tech buyers should be asking which kinds of AI models are in use, such as simple machine learning, neural networks, generative AI, or natural language processing. They'll also want to know where the models come from. Is the vendor using commercially developed models or deriving them from public repositories? And once a model is sourced, does its training feed back into a centralized repository, or is it isolated as a private model?
All of these factors are important for creating a more complete picture of the AI risk that stems from the vendor's use.
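One way to make provenance answers comparable across vendors is to ask for a simple disclosure record per AI-powered feature. The sketch below is a hypothetical example of what such a record might capture; the field names and enum values are illustrative, not an established schema.

```python
# A hypothetical "AI feature disclosure" entry a buyer might ask a vendor to
# provide for each AI-powered capability. Not a standard schema.
from dataclasses import dataclass
from enum import Enum

class ModelSource(Enum):
    COMMERCIAL = "commercially developed"
    PUBLIC_REPO = "public repository (e.g., open-weight model)"
    IN_HOUSE = "built in-house"

@dataclass
class AIFeatureDisclosure:
    feature: str                # which product feature uses AI
    model_type: str             # e.g., "generative AI", "gradient-boosted trees"
    source: ModelSource         # where the model comes from
    tenant_isolated: bool       # True if training never feeds a shared model
    customer_data_trains: bool  # does customer data train the model at all?

disclosure = AIFeatureDisclosure(
    feature="dynamic pricing recommendations",
    model_type="gradient-boosted trees",
    source=ModelSource.IN_HOUSE,
    tenant_isolated=True,
    customer_data_trains=False,
)
```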
4. Do you have an AI governance framework and acceptable use policy for AI in your products and services?
It should be table stakes in 2025 for critical tech vendors to have some kind of written policy and framework in place that governs when and how they leverage AI in their products and processes. This is something that Bitsight has taken very seriously. Our company runs a cross-departmental AI council that constantly updates and refines our AI policies, which are designed from the ground up to deliver trustworthy AI. Buyers should look for similar activities at any vendor that handles sensitive information.
5. How does your company protect the integrity of its AI models and data?
AI models and infrastructure will increasingly require a whole new specialized set of controls and processes to secure properly. Especially for critical systems that are enhanced with AI capabilities, customers should start asking their vendors what kind of access controls they place around who can ‘touch’ models, what kind of monitoring they do to log model interactions and updates, and what kind of mechanisms they have in place to scan models for flaws and malicious code.
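As a rough illustration of what protecting model integrity can look like in practice, here is a minimal sketch, assuming the vendor keeps a manifest of approved model checksums: the artifact is verified against that manifest before it is loaded, and the access is logged so model ‘touches’ are auditable. The paths, manifest, and logging setup are assumptions for the example, not any specific vendor's controls.

```python
# Minimal sketch: verify a model artifact against approved checksums before
# loading it, and log the access so model interactions are auditable.
import hashlib, logging, tempfile, os

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("model-access")

def sha256_of(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def load_model_if_approved(path: str, approved_hashes: set, user: str) -> bytes:
    digest = sha256_of(path)
    log.info("user=%s requested model=%s sha256=%s", user, path, digest)
    if digest not in approved_hashes:
        raise RuntimeError("model artifact does not match an approved checksum")
    with open(path, "rb") as f:
        return f.read()  # in practice this would hand off to the ML runtime

# Demo with a throwaway "model" file so the sketch runs end to end.
with tempfile.NamedTemporaryFile(delete=False, suffix=".bin") as tmp:
    tmp.write(b"model-weights")
    model_path = tmp.name
approved = {sha256_of(model_path)}
load_model_if_approved(model_path, approved, user="ml-engineer@example.com")
os.remove(model_path)
```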
6. Does your AI have bias safeguards in place?
As AI is increasingly used in technology products to govern critical decisions and information processing that businesses depend upon, AI bias will become a growing business concern. Whether it is AI in a mortgage processing app that charges higher interest rates to certain racial groups or AI in recruitment software that inadvertently discriminates against female candidates, these 'features' added by software vendors could become real legal and reputational liabilities if AI bias isn't accounted for. Even AI powerhouses are learning to grapple with this issue. Last year, Google had to press pause on image generation in Gemini after it returned historically inaccurate portrayals, such as Vikings and George Washington depicted as Black, likely as an overcorrection for AI bias against underrepresented racial groups.
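For buyers who want to probe what ‘bias safeguards’ actually means, one simple and widely used check is the adverse impact ratio (the ‘four-fifths rule’), which compares selection rates across groups. The sketch below uses made-up loan decision data purely for illustration, and this is only one of many checks a real bias-safeguard program would include.

```python
# One example of a bias check: the "four-fifths" adverse impact ratio,
# comparing selection rates across two groups. Data below is illustrative only.
def selection_rate(outcomes):
    return sum(outcomes) / len(outcomes)

def adverse_impact_ratio(group_a_outcomes, group_b_outcomes):
    """Ratio of the lower group's selection rate to the higher group's."""
    rate_a = selection_rate(group_a_outcomes)
    rate_b = selection_rate(group_b_outcomes)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# 1 = approved, 0 = denied (hypothetical loan decisions)
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% approval
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% approval

ratio = adverse_impact_ratio(group_a, group_b)
print(f"adverse impact ratio: {ratio:.2f}")
if ratio < 0.8:   # common rule-of-thumb threshold
    print("flag for review: selection rates diverge beyond the four-fifths rule")
```

A vendor with meaningful safeguards should be able to say which metrics of this kind it tracks, on what data, and how often.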
7. Can I turn off your AI features?
The final and most important question for buyers may well be around whether AI is a mandatory feature in a product. The risk tolerance in certain use cases may be so low—or the vendor’s use of AI so risky—that a buyer may just want to opt out entirely from these capabilities. The question will be whether or not the product can function without tapping into the AI enhancements, and what the technical consequences will be for opting out.
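In practice, ‘turning off’ AI often means a configuration flag that gates the AI-driven path and falls back to a deterministic rule. The sketch below is hypothetical; the flag name and pricing functions are illustrative assumptions, intended only to show the kind of fallback behavior a buyer should ask about.

```python
# Hypothetical sketch of an AI feature flag with a non-AI fallback path.
AI_FEATURES_ENABLED = False  # e.g., set per customer contract or admin console

def price_with_ai(item: dict) -> float:
    # Stand-in for a model-driven dynamic price.
    return item["base_price"] * 1.12

def price_with_static_rule(item: dict) -> float:
    # Deterministic fallback used when AI features are opted out.
    return item["base_price"]

def quote_price(item: dict) -> float:
    if AI_FEATURES_ENABLED:
        return price_with_ai(item)
    return price_with_static_rule(item)

print(quote_price({"base_price": 100.0}))  # 100.0 when AI features are disabled
```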
While these questions are far from a comprehensive list for vendor assessment, they are some of the most crucial for getting the AI risk discussion started with current and prospective vendors. TPRM is still in its most nascent stages when it comes to automatically validating the kinds of answers vendors will provide to these questions. Right now the only way to get answers is manually, but the content and tone of those answers can offer valuable insight into the level of commitment a vendor has to effectively managing AI risk.
