1Password recently surveyed 200 security leaders in North America and found that while AI adoption is accelerating, many organizations can’t securely manage AI tools. The survey revealed four major problem areas.
1. Limited Visibility
First, only about one in five (21%) companies said they have full visibility into which AI tools employees use. Wide adoption of public AI tools, such as ChatGPT, makes it nearly impossible for organizations to enforce existing policies or prevent data from being exposed.
Shadow AI has been a growing problem, but one I’ve been expecting. Every new technology introduced into the workplace has seen rogue usage in the early innings of adoption. Users have always looked to new technologies — such as wireless LAN, email, mobile phones, cloud storage and more — to make their lives easier. AI is following that trend. The risk level is higher, however, and organizations need to handle it now before sensitive data is compromised.
2. Weak AI Governance Enforcement
Second, 54% of security leaders admitted to having weak AI governance enforcement. Even with policies in place, 32% of leaders said they believe up to half of their employees continue to use unauthorized AI tools.
This data reveals that simply having a policy is not enough; the real challenge lies in effective implementation and enforcement. The lack of control exposes a company to many risks, including data breaches, loss of intellectual property and noncompliance with regulations like GDPR or HIPAA.
When employees use unapproved AI tools, sensitive company data can be unknowingly shared with third-party vendors, making it vulnerable to exposure. The findings underscore the critical need for comprehensive governance frameworks that aren’t just written documents but are actively managed, monitored and enforced to protect the organization’s assets and reputation in an AI-driven landscape.
3. Access to Sensitive Data
Third, 63% of leaders said their biggest internal security threat is employees unintentionally giving AI tools access to sensitive data. The threat isn’t coming from users who deliberately leak or misuse company information. Most of the time, employees don’t realize that data shared with public AI tools is used to train large language models (LLMs).
This underscores a major, often overlooked risk in today’s digital workplace. The threat is particularly insidious because it stems from a lack of awareness rather than intentional wrongdoing. Employees, often in an effort to be more productive, might use public AI tools without realizing that the data they input — whether customer lists, proprietary code or confidential financial information — is not private. Public AI tools often use this input to train their LLMs, meaning that sensitive company data can become part of a publicly accessible dataset.
This highlights the critical need for a proactive approach to security that focuses on education and training, rather than just punishment. The key is to transform a company’s biggest vulnerability — its employees — into its strongest line of defense by equipping them with the knowledge to use AI tools safely and responsibly.
4. Unmanaged AI Tools
Fourth, more than half of security leaders (56%) estimated that between 26% and 50% of the AI tools their organizations use are unmanaged. Existing identity and access systems weren’t built for AI. It’s difficult to gauge what these tools are doing or who gave them access. This creates potential security risks and compliance violations.
The issue stems from the fact that legacy identity and access management (IAM) systems weren’t designed to handle the unique, dynamic nature of AI tools. Unlike human users with predictable roles and lifecycles, AI tools and agents can operate autonomously, often with permissions inherited from the employee who deployed them. This creates a host of unmonitored connections and data flows, making it difficult to determine what these tools are doing or who granted them access. This lack of visibility and control poses a significant risk of data exfiltration, compliance violations and unauthorized access, as a single compromised AI tool could potentially expose a vast amount of sensitive data.
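One way to narrow that gap, sketched loosely here: give each AI agent its own machine identity with a short-lived, narrowly scoped token instead of letting it inherit the deploying employee's permissions. The snippet below uses a standard OAuth 2.0 client-credentials flow; the token endpoint, client ID and scopes are placeholders for illustration, not any specific vendor's API.

```python
import time
import requests

# Hypothetical identity-provider endpoint and agent credentials (placeholders).
TOKEN_URL = "https://idp.example.com/oauth2/token"
AGENT_CLIENT_ID = "ai-report-summarizer"           # the agent's own identity,
AGENT_CLIENT_SECRET = "stored-in-a-secrets-vault"  # not an employee's account

def get_agent_token(scopes: list[str]) -> dict:
    """Request a short-lived, narrowly scoped token for an AI agent.

    Standard OAuth 2.0 client-credentials grant: the agent authenticates as
    itself, and the token it receives carries only the scopes listed here,
    so its access can be audited and revoked independently of any employee.
    """
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": AGENT_CLIENT_ID,
            "client_secret": AGENT_CLIENT_SECRET,
            "scope": " ".join(scopes),
        },
        timeout=10,
    )
    resp.raise_for_status()
    # Record issuance for the audit trail: which agent, which scopes, when.
    print(f"issued token for {AGENT_CLIENT_ID} scopes={scopes} at {time.ctime()}")
    return resp.json()

if __name__ == "__main__":
    # Grant read-only access to a single data source, nothing more.
    get_agent_token(["reports:read"])
```

The point of the sketch is the separation: the agent's permissions are defined at issuance, expire on their own and show up in logs under the agent's name rather than a person's.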
Best Practices
1Password made several recommendations for how organizations can close the growing “access-trust gap” created by unsanctioned AI use. It’s important to document where AI is already part of daily workflows and where employees plan to use it. Organizations should also implement governance or device trust solutions to monitor unauthorized AI.
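As a minimal sketch of what that monitoring could look like, the snippet below scans an outbound web-proxy log for domains associated with public AI tools and reports which users are reaching them. The log format, file path and domain list are assumptions for illustration; a real deployment would pull from whatever proxy, DNS or device-trust telemetry the organization already collects.

```python
import csv
from collections import defaultdict

# Illustrative list of public AI tool domains; extend to match your environment.
AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
}

def find_shadow_ai(proxy_log_path: str) -> dict[str, set[str]]:
    """Return {username: AI domains reached}, based on a CSV proxy log.

    Assumed log columns: timestamp, username, destination_host (a placeholder
    schema -- adapt the field names to your proxy's export format).
    """
    hits: dict[str, set[str]] = defaultdict(set)
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["destination_host"].lower()
            if host in AI_DOMAINS:
                hits[row["username"]].add(host)
    return hits

if __name__ == "__main__":
    for user, domains in find_shadow_ai("proxy_log.csv").items():
        print(f"{user} reached unapproved AI tools: {', '.join(sorted(domains))}")
```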
Effective AI governance should be part of a broader AI adoption plan. To identify how AI is being used companywide, security leaders should work closely with other departments, such as legal. On top of that, training employees about AI risks can prevent potential data leaks.
Finally, organizations should update the way they control access to AI tools. This means setting clear rules for when and how AI tools can connect to company systems. They should keep track of access and activity to stay in compliance with company policies. It might be necessary to enforce additional policies to prohibit public AI tools in the workplace.
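As one hedged illustration of those rules, the sketch below checks each AI integration request against an allowlist of approved tools and the data scopes each may touch, and writes every decision to an audit log so access can later be reviewed against policy. The tool names, scopes and log location are hypothetical.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical allowlist: approved AI tools and the data scopes each may access.
APPROVED_AI_TOOLS = {
    "internal-copilot": {"source-code:read"},
    "support-summarizer": {"tickets:read"},
}

logging.basicConfig(filename="ai_access_audit.log", level=logging.INFO)
audit = logging.getLogger("ai_access_audit")

def authorize_ai_connection(tool: str, requested_scopes: set[str], requester: str) -> bool:
    """Allow a connection only if the tool is approved and its scopes fit policy.

    Every decision -- allow or deny -- is logged so compliance reviews can
    reconstruct what each AI tool accessed and who requested it.
    """
    allowed_scopes = APPROVED_AI_TOOLS.get(tool, set())
    allowed = bool(allowed_scopes) and requested_scopes <= allowed_scopes
    audit.info(json.dumps({
        "time": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "requester": requester,
        "requested_scopes": sorted(requested_scopes),
        "decision": "allow" if allowed else "deny",
    }))
    return allowed

if __name__ == "__main__":
    print(authorize_ai_connection("internal-copilot", {"source-code:read"}, "jdoe"))   # True
    print(authorize_ai_connection("public-chatbot", {"customer-data:read"}, "jdoe"))   # False
```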