No Jitter Brief:
- Ensuring that AI-generated content meets compliance standards is a problem, according to nearly half of the respondents to a Theta Lake survey: 47% cited concerns about ensuring that data quality and policy adherence standards apply to AI inputs and outputs.
- In addition, 45% reported difficulty inspecting generative AI content for confidential data exposure, while only 36% say access guardrails for AI tools are working.
- Knowing who and what is accessing data, and when, is also a concern: 41% say that identifying end-user-driven risk with AI tools is a challenge, and about 40% say they find it difficult to track how and with whom AI-generated content is shared.
No Jitter Insight:
According to Gartner, “By 2029, 40% of digital communications governance and archiving customers will monitor conversations of internal and external-facing GenAI, AI assistant and chatbot tools to monitor their agentic AI-based responses, up from less than 5% in 2025.” However, there are data quality concerns around the output of generative AI tools – a worry for anyone who uses them to create content like email drafts or meeting summaries. When a workplace must uphold specific industry regulations or risk legal and financial repercussions, concerns about the accuracy and regulatory compliance of any gen AI-created content are amplified. Not only can AI generate inaccurate content, but it can also access and expose data it isn’t supposed to.
There’s also the risk that employees are using shadow AI. Among firms that prohibit AI assistants, only 47% actively track employee and third-party vendor usage of unauthorized AI tools. So a majority of organizations have no guardrails in place to ensure users aren’t putting data into unauthorized AI tools, and respondents again cited challenges in identifying risky end-user interaction with AI tools (41%) and tracking how and with whom/what data and communications were shared (40%).
After sensitive data is shared, the issues are difficult to remediate. FINRA – the Financial Industry Regulatory Authority – reminds financial firms that they are “… responsible for their communications, regardless of whether they are generated by a human or AI technology.” Despite this responsibility, 35% of respondents said that removing AI-generated content that violates policies or is considered inappropriate from conversations is a challenge, and 33% said that remediating AI controls and notifying and retraining users presents difficulties.
Theta Lake surveyed 500 senior IT and compliance professionals in the financial services industry across the US and UK.
