Whistic is passionate about third-party risk management (TPRM) because we believe that transparently sharing security information builds customer trust and keeps businesses safe from cybersecurity risks. Effective TPRM also helps businesses close deals more quickly and buy the software they need to thrive—fast and without adding risk.
Our goal is to eliminate the wasteful, time-consuming activity that stands between software buyers and sellers. In third-party risk management, that means eliminating the security questionnaire and reducing the time necessary to assess vendors from weeks or months to seconds.
AI is a huge part of that mission. Whistic has developed industry-leading AI capabilities that are transforming both TPRM and customer trust in a single platform. But as any risk or InfoSec leader knows, it’s critical to understand, assess, and analyze new technology before you can fully leverage it.
That’s the purpose of this guide.
What does this guide contain?
With any widespread innovation, it’s important to separate hype from reality. This primer is designed to do just that. Here, you'll find clear definitions for artificial intelligence and related technologies, as well as some of the most common use cases for AI, so you’ll have a working understanding of how it’s already impacting your third-party ecosystem.
As more and more software companies and industries adopt AI use cases, it will be increasingly important to understand the risks they pose and what you can do to assess them in your vendors and third parties. So, we’ve also outlined known AI risks and provided recommendations for security assessments of AI-driven software.
Finally, we cover exactly what Whistic’s long-term investments in AI mean for our customers and the industry. We’ll share how we utilize AI in our software, how we’ve integrated AI into our features, what we do to secure our own AI, and how we help our customers assess AI risk in their vendor supply chain.
What is Artificial Intelligence (AI)?
The name “artificial intelligence” is apt, because it encapsulates what we mean when we talk about AI. AI simulates human intelligence in machines through programming that allows it to think and learn like humans. AI systems can be programmed to perform tasks that typically require traditional human intelligence, such as visual analysis, speech recognition, decision-making, and language translation.
What is Machine Learning (ML)?
Machine learning is a subset of AI. It is based on algorithms and statistical models that allow computers to perform tasks without explicit programming. Where traditional AI relies on instructions programmed for specific tasks, ML models are trained on large data sets to detect patterns and make predictions.
Machine learning can also improve performance over time as it is exposed to more and more data, making it even more effective than traditional AI at things like natural language processing, making recommendations and predictions, or solving complex problems.
What is Deep Learning?
Deep Learning builds on the principles of Machine Learning by using neural networks, which are artificial structures based on how the human brain processes information. The data used in these neural networks is organized in layers that exist between the input of the data and the system’s output. Deep learning systems can distinguish between these layers for much more sophisticated analysis, but they require massive amounts of data (and computing power) to train.
What is Generative AI?
Once again, the name does a lot of the work for us. Generative AI takes Deep Learning a step further with models that actually create (or “generate”) new content, images, text, and other kinds of information. It is said to generate this data because its outputs may not be specifically present in the original data that trains the model—in other words, it is taking existing data and using it to create something new.
AI in Vendor Products
One of the challenges inherent in understanding the current AI landscape is separating the hype from reality. The term “AI” is ubiquitous these days, but understanding precisely what it means in each context is difficult. While identifying the exact ways AI is deployed for all software types is impossible, we can identify some general use cases across industries. This is an important first step in understanding how AI is being used in your own software supply chain.
AI in software development
AI can currently be deployed in the software development process in a number of ways. Generative AI can assist developers in the creation of code to build new software, but it can also be used to augment or automate across the development life cycle, including:
- Building minimum viable products—Innovative ideas can be prototyped more quickly, allowing businesses to assess their viability and iterate on future versions fast.
- Spotting bugs in code—AI makes it possible to identify problematic patterns or errors in large sections of code.
- Training junior developers—AI can synthesize summaries of code in plain language, making it possible to get less experienced developers (or new hires who have not worked on a given project before) up to speed fast.
- Testing software—Many aspects of standard software testing can be automated, allowing you to more efficiently understand overall performance.
- Creating technical documentation—Overviews and summaries of software coding can be created to make technical documentation faster and easier.
General uses of AI in software
Beyond its role in software development, AI is increasingly part of the functionality of the software itself. Today, AI is most commonly used in software to provide automation for rote or repetitive tasks. As AI technology grows more sophisticated, the scope and scale of automation is growing, too. Here are some of the most common ways AI software deploys automation:
- Robotic Process Automation (RPA)—By automating simple, rules-based interactions with digital systems, RPA can be utilized for a wide range of tasks like order fulfillment, inventory management, and data entry.
- Natural Language Processing (NLP)—The ability to understand and analyze human language allows NLP chatbots to handle basic customer interactions, respond to basic questions, or guide the training of new system users.
- Data entry and extraction—AI’s ability to identify and collect important information from documents and emails makes it possible to automate data entry and collection while improving accuracy.
- Visual analysis—Images and video can be analyzed to provide things like facial recognition and defect detection.
AI in software across industries and functions
For software companies, the excitement and the promise of AI have created an arms race to develop new solutions that provide the greatest value, faster than competitors. As a result, virtually every sector and business function has identified viable use cases for AI-based software:
- Customer support—Chatbots in help centers as the first line of customer service
- Cybersecurity—Threat detection, incident response, identifying suspicious or anomalous behavior, and facial recognition for access control
- HR—Automated candidate screening, skills analysis, employee evaluations, and employee engagement
- Supply chain and logistics—Inventory management, delivery and shipment optimization, and demand forecasting
- Sales—Develop ideal customer profiles, predict the likeliest deals to be closed or lost, more accurately forecast pipeline, and better understand customer activity
- Manufacturing—Quality control, defect detection, and prototyping
- Marketing—Understand customer sentiment across social/digital channels, brand management, targeted advertising, and purchasing-intent data
- Healthcare—Diagnostics, appointment scheduling, billing, and data extraction from patient records
- Insurance—Claims processing and review, underwriting, and compliance
- Financial Services—Fraud detection, risk assessment, and even trading
And of course, AI can be applied to third-party risk management (TPRM) in a number of important ways we’ll discuss soon. But first, let’s take a look at why excellence in TPRM is more critical than ever in the context of AI.
Cybersecurity Risks Associated with AI in the Supply Chain
Leaps in cloud computing have made it simpler and cheaper to license software from vendors than to build and maintain large, on-prem infrastructure. This leads to a third-party ecosystem—or software supply chain—of dozens, hundreds, or even thousands of outside technology providers.
This software-as-a-service (SaaS) paradigm has allowed companies to be more agile, quickly scale digital capabilities, and control costs. But it also exposes them to cybersecurity threats as large quantities of data are shared across an ever-growing network of systems in this supply chain.
Third-party risk management has evolved alongside SaaS to identify, assess, and manage these cyber threats. AI will be at the center of TPRM’s next evolution. Keeping your business safe in a world of AI software starts with understanding the risks that artificial intelligence might pose.
What are the most important AI risk factors?
AI technology is evolving rapidly, and this accelerating pace poses some unique challenges when it comes to understanding the risks associated with it. But while an understanding of AI models continues to mature, there are several key risk factors to consider when evaluating your third-party environment:
- Whether and how a vendor leverages AI in their solution—Many companies lack strong vendor management and governance, and this can lead to poor visibility and oversight of AI usage in their tech stack. It’s critical to know which of your vendors have AI in their products and exactly how it’s deployed. It’s hard to protect against risks you can’t see.
- Which vendors use AI to build their software—The use of AI in software coding and development may have an impact on vendor security controls and practices. This risk is especially acute given the current speed at which many companies are racing to incorporate AI.
- Scarce AI security talent—Many organizations lack cybersecurity expertise and resources dedicated to AI.
- Immature AI business processes—Safely integrating and scaling AI into the technology ecosystem requires new, documented processes and governance in Procurement, IT, and InfoSec, among other stakeholders. For many businesses, these changes remain a work in progress that may not keep up with the pace of change.
What are the specific risks associated with AI?
In addition to the risk factors discussed above, businesses must identify several specific risks before they can incorporate them into a robust TPRM program.
1. Data security and privacy
AI systems rely on large data sets for training. If these data sets contain sensitive or personally identifiable information (PII), there could be a risk of unauthorized access, misuse, or data breach. In your third-party supply chain, you may have different AI systems from multiple vendors with access to business-critical data.
2. Deployment and integration issues
Integrating AI tools with your existing systems may introduce security gaps if poorly managed and if proper security protocols and controls are not followed.
3. Lack of explainability
Many AI models are considered “black boxes” because it can be difficult to interpret their decision-making processes. Without a clear understanding of why an AI system makes a particular decision, it can be harder to identify and address security issues.
4. AI-enabled cyber attacks
AI techniques may be adopted by cyber attackers as their capabilities grow more sophisticated. Examples of these kinds of attacks include more targeted and automated phishing and intelligent malware.
5. Regulatory and compliance risks
The data that trains AI models or the data your AI-driven vendor solutions access in your own environment may lead to increased regulatory complexity, making compliance a greater challenge.
Assessing the Risks of AI in Your Third-Party Ecosystem
These risks of AI are real, and should be taken seriously. That, however, is not an argument against utilizing AI-based solutions in your third-party supply chain. Instead, it’s an argument for adopting a risk-based approach to AI technology in the software you purchase (and for addressing risks in the software you develop and sell). That’s where world-class TPRM comes in.
Third-party risk management as a discipline combines processes and technologies (like Whistic) to articulate the unique risk tolerances of your business, identify risk factors to which you are uniquely vulnerable, assess the type and severity of risk in your vendor supply chain, and provide insight into the best ways to manage and mitigate that risk. In short, it provides a systematic approach to weighing the value of new software against the potential costs.
These general principles can be applied to assessing AI risk, but the Whistic platform also includes AI-specific frameworks, security-assessment questionnaires, and guidelines to enrich your existing TPRM program. Let’s take a look at an AI-centric approach to vendor security assessment.
1. Develop clear, consistent risk ranking
Risk ranking is the foundation of every great TPRM program, and it’s especially important for assessing AI risk. Risk ranking designates tiers of risk based on a set of criteria specific to your business. These criteria give you a rubric for comparing risk across vendors so you can organize them into the proper tier. Your risk-ranking methodology should include:
- Vendor profiles and inventory—Create a comprehensive list of third parties your company uses and information that can help you determine their level of risk, such as systems the vendor will have access to, the data type and volume they will have access to, and their criticality to your business.
- Alignment with vendor intake—Ensure you are consistently collecting and documenting the necessary data to assign risk when you onboard a vendor.
- Effective scoring system—Devise a simple, straightforward formula that weights your particular risk factors and generates a score like “High”, “Medium”, or “Low” risk.
- Single system of record—Collect pertinent data in a centralized way that increases stakeholder visibility and reduces the possibility of redundancies or human error.
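The "effective scoring system" above can be sketched as a simple weighted formula. The factor names, weights, and tier cutoffs below are illustrative assumptions, not a prescribed methodology—your own criteria and weightings should reflect your business's risk tolerances:

```python
# Hypothetical risk factors rated 1 (low) to 5 (high); weights must sum to 1.0
WEIGHTS = {
    "data_sensitivity": 0.4,       # type and volume of data the vendor can access
    "system_access": 0.35,         # criticality of systems the vendor touches
    "business_criticality": 0.25,  # how essential the vendor is to operations
}

def risk_score(factors: dict[str, int]) -> str:
    """Combine weighted factor ratings into a High/Medium/Low risk tier."""
    weighted = sum(WEIGHTS[name] * rating for name, rating in factors.items())
    if weighted >= 4.0:
        return "High"
    if weighted >= 2.5:
        return "Medium"
    return "Low"

# A vendor with broad data and system access lands in the top tier
print(risk_score({"data_sensitivity": 5, "system_access": 4, "business_criticality": 5}))  # → High
```

Whatever formula you choose, applying it consistently at vendor intake is what makes scores comparable across your ecosystem.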
2. Utilize AI-specific standards and frameworks
Though there remains some lingering uncertainty around the full scope of AI risk, some of the most trusted sources in information security standards have developed clear frameworks and guidelines to codify AI security assessments. The Whistic Platform includes customized questionnaires based on the three most robust such frameworks:
- NIST AI Risk Management Framework—The National Institute of Standards and Technology (NIST) is a trusted authority on technology frameworks. Like their previous frameworks around cybersecurity and data privacy, NIST’s AI framework focuses on AI risk characterization; recommendations for data management and governance; transparency in AI development and deployment; explainability and interpretability; and human collaboration with AI systems.
- capAI based on the European Union’s AI Act—The AI Act, adopted by the European Parliament on June 14, 2023, is designed to ensure that AI technology is secure, transparent, traceable, unbiased, and environmentally sound. The act designates risk levels for AI systems, with special provisions for the use of generative AI.
Based on this legislation, the University of Oxford developed a tool called capAI, which is a process for testing how well an AI system conforms to EU requirements. Whistic has created a specific security questionnaire based on capAI that can be applied to AI use cases.
- ISO 23053—This framework was created by the International Organization for Standardization (ISO) in June 2022 and is specifically designed to assess AI and Machine Learning systems. ISO 23053 establishes a common terminology and core concepts for these systems, describes AI components and functions, and applies to public and private organizations of all sizes. Whistic has created a security assessment questionnaire based on ISO 23053.
3. Prioritize AI risk in your software evaluation, selection, and procurement processes
One of the biggest mistakes companies make around TPRM is isolating it from other aspects of vendor management—especially in the earliest stages when you are selecting your software. All too often, risk management comes later in the process, which might make it harder to gain clarity and visibility into the ways your vendors are using AI.
Be sure your software selection and procurement criteria reflect AI risk management. To help in this process, make sure potential vendors can answer these questions about AI usage up front:
- What kind of penetration testing has the vendor performed on the AI in their environment and solutions?
- What kind of monitoring are they doing on their AI-based systems?
- What kind of documentation can they provide to demonstrate that they’ve done these things correctly and with due diligence?
Getting answers to these questions early in the process can give you a better sense of how to proceed with an assessment.
Whistic AI Solutions
As you can tell, transparency is at the core of AI risk management. That’s something we at Whistic embrace in our own approach to AI in our software and in our own third-party ecosystem. Whistic’s dual-sided TPRM platform incorporates AI in three essential ways by:
- Automating vendor security assessments and summarization for software buyers
- Automating security assessment requests and accelerating sales cycles for software vendors
- Providing AI-specific frameworks and security questionnaires to assess AI in your third-party ecosystem or self-assess AI in your own solutions to build customer trust
But before we dive deeper into the AI-driven features of the Whistic Platform and what they mean for the future of TPRM, it’s important to understand exactly how we use and secure our own AI.
How is Whistic leveraging AI in our platform?
Whistic’s AI-powered capabilities are built on top of OpenAI’s API framework and are designed to serve companies automating TPRM (through the Whistic Assess product) and companies managing customer trust programs (through the Whistic Profile product). Whistic utilizes the OpenAI API text-comparison endpoint, which measures the relationship between texts using the text-embedding-ada-002 model—a second-generation embedding model.
Whistic powers the platform through API integrations rather than embedding OpenAI products directly into our own. This allows us to disrupt traditional TPRM workflows without compromising product functionality or the privacy and security of the platform—or the security of all the content that resides within it.
How does Whistic secure the AI in our platform?
As we’ve discussed, there are meaningful concerns and risks associated with AI security, especially around the data consumed by AI models and systems. Adding AI to any product—just like adding any other kind of functionality or service—increases the attack surface and adds complexity, which can increase vulnerabilities.
This is exactly why Whistic has added additional controls, testing, and assurances in order to mitigate these risks. To maintain the highest security standards in the industry, Whistic:
- Does not utilize ChatGPT—Our platform does not use or repurpose ChatGPT for any functionality. We are strictly using OpenAI’s APIs.
- Can be disabled whenever you choose—We believe our AI features will add significant value to your business, but it is your decision as to when or if to use them. Our AI is configurable.
- Protects your and your vendors’ Whistic data—Data submitted through the OpenAI API via Whistic is not used to train OpenAI models or improve OpenAI service offerings.
- Keeps customers’ security documentation and responses separate from one another—We logically separate customer data with multiple checks through our use of universally unique identifiers (UUIDs) and access validation at the data layer, the application layer, and the presentation layer. This same approach applies when processing documents or other data, with multiple checks for each transaction.
- Assesses the security of OpenAI’s API—The OpenAI API undergoes annual third-party penetration testing to identify security weaknesses before they can be exploited by malicious actors. OpenAI is SOC 2 Type 2 certified and is compliant with both CCPA and GDPR. Whistic has also completed our own security assessment of OpenAI.
- Ensures the accuracy of query responses—Whistic uses industry-standard similarity measures to maximize the accuracy of our matches. We constantly monitor and adjust our match thresholds to ensure customers receive the best possible results. Additionally, we ensure users have the flexibility to accept or disregard match results.
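The logical-separation approach described above can be illustrated with a minimal sketch. The data structures and function below are hypothetical, not Whistic's actual implementation; they simply show how scoping every lookup by a tenant UUID prevents one customer's query from ever reaching another customer's data:

```python
import uuid

# Hypothetical tenant IDs; in practice these would be persistent UUIDs per customer
TENANT_A = uuid.uuid4()
TENANT_B = uuid.uuid4()

# Documents keyed by (tenant_id, doc_id): the tenant is part of every key,
# so a lookup can never cross tenant boundaries by accident
DOCS = {
    (TENANT_A, "questionnaire"): "Tenant A questionnaire answers",
    (TENANT_B, "questionnaire"): "Tenant B questionnaire answers",
}

def fetch_doc(requesting_tenant: uuid.UUID, doc_id: str) -> str:
    """Data-layer check: only rows belonging to the requesting tenant are visible."""
    try:
        return DOCS[(requesting_tenant, doc_id)]
    except KeyError:
        raise PermissionError("document not found for this tenant") from None
```

In a layered design, the same validation is repeated at the application and presentation layers, so a failure at any single layer is not enough to leak data across tenants.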
More on OpenAI API security
Whistic also understands how critical it is that your data isn’t shared with OpenAI or other customers, so it’s worth taking a moment to go a bit deeper there. When data is submitted through the OpenAI API, it is used to create what are known as embeddings, which provide the functionality for our AI-powered product.
AI embeddings are a way to represent data that is simpler for machines to understand. In a sense, they “compress” data while preserving the most important information, filtering out extraneous noise. This means that, once the embeddings are initially created, your queries run through Whistic, not OpenAI.
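As a rough illustration of how embeddings power this kind of matching, the sketch below compares vectors using cosine similarity, a common measure of semantic closeness. The three-dimensional vectors here are purely made up for illustration—real text-embedding-ada-002 embeddings have 1,536 dimensions:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 means identical direction, near 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": a query and two candidate documents
query = [0.9, 0.1, 0.2]
doc_match = [0.8, 0.15, 0.25]   # points in nearly the same direction as the query
doc_other = [0.05, 0.9, 0.1]    # points in a very different direction

print(cosine_similarity(query, doc_match))  # close to 1.0
print(cosine_similarity(query, doc_other))  # much lower
```

Because comparisons like this run against cached embeddings, the matching itself never needs to leave the platform's own infrastructure.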
OpenAI stores this data for 30 days to improve efficiency in search results, but it is not processed by OpenAI for any other purpose. Whistic caches the embeddings in our infrastructure and relies on local calls for functionality, not on re-calling the OpenAI cached embeddings.
OpenAI’s API is also only accessible over Transport Layer Security (TLS), so data is always encrypted in transit. Whistic secures your data inside our infrastructure using industry best practices for both data in transit (TLS 1.2) and data at rest (AES-256), enforced within AWS.
Whistic’s AI-powered features for TPRM and Customer Trust
Understanding how Whistic is utilizing AI in our products is a necessary part of the TPRM process. Software buyers—especially our own customers—need this information to make informed decisions about AI risk.
But now comes the fun part: how the AI in our product is transforming third-party risk management and creating massive value for software buyers and sellers. The AI features in our products are constantly evolving, but they are founded on the following core functions.
Knowledge Base with AI-powered Smart Search
Think of Whistic’s Knowledge Base as a turbocharged, next-generation trust center. It provides Whistic Profile customers with a more intuitive way to store, organize, and intelligently search security documentation.
Knowledge Base utilizes a functionality called Smart Search to locate and share approved security documentation. The AI that powers Smart Search is contextual, leveraging advanced semantic analysis techniques to understand the relationship between words and concepts in your search query. This means that Smart Search understands the actual intent of your query.
Why is this important? It means you can find accurate answers to even customized questionnaires in minutes, not hours. So rather than poring over reams of security documentation in response to specifically phrased questions, Smart Search will provide the context-based answer the question intends—along with document citations and an accuracy score.
With Knowledge Base, software sellers can:
- Unburden InfoSec as the sole source of truth by empowering self-service for Sales, Legal, or Procurement—while still maintaining security controls and access management
- Automate responses to even customized security review questionnaires
- Provide customers with faster, more detailed responses to their questionnaire requests—so they can make smarter, safer buying decisions that help close deals faster
Smart Response
This AI-powered feature helps both software buyers using the Whistic Assess product and software sellers using Whistic Profile—and especially those organizations that are resource constrained or want to unburden large teams involved in the security assessment and response processes.
Smart Response automatically sources answers to security questionnaires in several ways:
- Software buyers can do an automated assessment of a third party by querying the security documentation their vendors provide—or by querying a shared Whistic Profile or other trust center.
- Buyers can also query the Whistic Trust catalog—a marketplace where thousands of vendors proactively share their security posture—to find out which vendors meet their security criteria, allowing them to “comparison shop” based on risk factors early in the process.
- Third parties can upload a questionnaire request into their Knowledge Base, and Smart Response will automatically provide the answers within minutes—even to customized questionnaires.
These responses include a confidence score, a full rationale for the provided response, and citations from the source documents. Users can audit answers and accept or reject them where appropriate. Accepted answers are added to the vendor’s Knowledge Base so Smart Response can use them for future questionnaires.
In addition to saving both buyers and sellers huge amounts of time in the TPRM process, Smart Response also helps buyers target the areas of greatest risk during their security assessments and helps sellers identify common vulnerabilities in their own products.
SOC 2 Summarization
AI-powered assessment summarization eliminates the arduous manual task of examining every piece of information contained in a SOC 2 report—without compromising security. With summarization, Whistic Assess customers can automatically extract key audit details, identify exceptions for deeper follow-up or review, and organize by security controls and compliance requirements.
Having this information automatically, within minutes, and in an easily digestible form not only saves hours of InfoSec time; it allows software buyers to focus their resources on exceptions or issues—improving overall third-party security. SOC 2 Summarization also produces executive-level reports that can be included on a vendor record or request ticket and easily shared with Procurement, the business sponsor, or the vendor themselves.
Frameworks and standards to assess AI risk
As we mentioned earlier, standards and frameworks can be an excellent way to cover a broad range of common cybersecurity risks in the third-party supply chain. The industry standards discussed above come standard for our customers in the Whistic Platform: questionnaires based on the NIST AI Risk Management Framework, the EU-derived capAI framework, and ISO 23053. These can be used as part of the vendor security review process or to self-assess the AI in your own software solutions so you can strengthen your security posture and lead with trust.
The Future of Third-Party Risk Management with AI
Through our focus on proactive transparency, our long-term investments in automation, and our commitment to helping software buyers and sellers put security first, Whistic has cut the time required for security review from weeks or months to days.
Now, our powerful AI capabilities are decreasing the time it takes to conduct a vendor security assessment from days to minutes—or even seconds. With a single platform for both sides of the TPRM process, we’re leading the charge toward the AI future. Here’s our vision for that future.
AI-powered customer Trust Centers with Whistic Profile
Whistic AI will make it possible for vendors and third parties to:
- Eliminate security questionnaire response for good by transforming security documentation into a polished, accessible, and queryable Trust Center in seconds
- Connect their AI-powered Trust Center to internal systems so control, compliance, and security posture are dynamic and continuously updated automatically
- Answer any security question, from any customer—achieving 100% acceptance rate of their Trust Center
- Empower their customers to ask detailed follow-up security questions whenever they arise
AI-powered third-party risk management with Whistic Assess
Whistic AI will make it possible for software buyers to:
- Automatically collect and assess security posture from vendor Trust Centers, Whistic Profiles and Knowledge Base, or other security documentation—that’s 100% assessment response with no waiting and no endless back-and-forth
- Summarize security documentation instantly without sifting through mountains of data in order to buy with confidence
- Automate issue creation, notifications, and executive summaries with remediation recommendations for maximum visibility and end-to-end issue management
- Transform procurement by identifying vendors they can trust without sending questionnaires: with critical security controls documented, AI-powered search will surface only those vendors that meet buyer requirements
Artificial intelligence has enormous transformative potential for businesses of all kinds. The technology is advancing rapidly, and use cases are quickly making their way into your vendor and third-party supply chain.
Yet it’s also the case that, as with any technology, there are risks. This combination—massive business opportunity, growing adoption of use cases across industries, and risk—makes it essential to take a risk-management approach to AI adoption. This entails:
- Understanding how your vendors utilize AI in their development or products
- Developing the right talent base and processes to manage risk effectively
- Building a strong TPRM program to systematically assess and reassess the vendors in your ecosystem
That’s where Whistic comes in. Our third-party risk management platform is the first of its kind to leverage advanced AI capabilities for both vendors and their customers—so both sides of the relationship can benefit from AI and identify, manage, and mitigate its risks. Whistic’s approach combines innovative technology with industry know-how to deliver holistic TPRM and Customer Trust:
- Consultations to ensure your TPRM program matches your needs
- AI-powered, automated SOC 2 Summarization for Whistic Assess users looking to purchase software or outsource
- AI-powered Knowledge Base and Smart Response for vendors and third parties through Whistic Profile
- A growing library of industry-standard security frameworks and questionnaires—like capAI, the NIST AI Framework, and ISO 23053—designed specifically to assess the risk of AI in your third-party ecosystem (or self-assess the AI in your own solutions and products to build consumer trust)
Ready to get started on your AI TPRM journey? Schedule a guided tour of our platform today.