In the rush to roll out artificial intelligence technologies and seek business advantages from them, security and governance around AI use are sometimes lagging.
That was one of the findings of Managing Risks and Optimizing the Value of AI, GenAI & Agentic AI, a major study by OpenText (OTEX-T) and its partner the Ponemon Institute.
OpenText, a provider of secure information management for AI, and the Ponemon Institute, an independent research think tank focused on data protection, privacy and information security policy, surveyed 1,878 IT and IT security practitioners across North America, Asia-Pacific, Europe, the Middle East, Africa and Latin America. The firms surveyed covered financial services, healthcare, technology, energy and manufacturing.
"AI maturity isn't just about adopting AI tools, it's about doing it responsibly," said Muhi Majzoub, EVP, product and engineering at OpenText in a release accompanying the study’s results. "Security and governance are foundational to getting real value from AI.
"When they're built into AI systems from the start, organizations can operate with greater transparency, monitor systems continuously, and trust the outcomes AI delivers."
Governance strains: Too few people, little time and tight budgets
The study noted many firms are far from the ideal Majzoub outlined: only one in five reported having reached full AI maturity. The Ponemon Institute defines AI maturity as having AI “fully deployed and security risks assessed” with KPIs and C-level executives “regularly informed about AI’s ability to prevent and reduce cyberattacks.”
Only 43 per cent of respondents reported having adopted a risk-based strategy to govern AI systems.
One reason is that organizations often lack the staff needed to deploy AI effectively and secure it across the enterprise. “Fifty per cent of respondents say AI deployments require too much staff to implement and maintain AI-based technologies and 44 per cent of respondents say the staff does not have enough time to integrate AI-based technologies.”
Added to this are budget constraints. Forty-six per cent of the firms surveyed reported insufficient budgets to deploy AI securely across their organizations.
These staff and budget constraints often mean firms struggle to put proper risk-based approaches in place for AI implementation and governance, with fewer than half reporting “their organization had adopted a risk-based AI governance approach that focuses on identifying, assessing, and mitigating AI-related risks.”
Still, even with these constraints, 61 per cent of the surveyed firms reported they have put in place, or are creating, policies and procedures to regulate AI use and ensure AI is used responsibly and meets regulatory requirements. Fifty-three per cent said their organizations "continuously monitor the speed of AI performance, evaluate decisions and update models to prevent performance degradation or the emergence of harmful behaviours.”
Concerns over inaccuracies, decision making and human control
While AI, agentic AI and other AI-enabled tools are quickly rolling out across many industries, a great deal of skepticism remains about AI’s usefulness and accuracy. This has organizations looking to put guardrails around how AI is employed.
The study noted only 47 per cent said “their AI models can learn robust norms and make safe decisions autonomously. Less than half (48 per cent) say it will be possible in the future to have AI systems that reason and make autonomous decisions based on ethics, regulations and laws to avoid misuse.”
The biggest worries centre on errors and inaccuracies in the rules governing AI decisions (45 per cent) and in the data used by AI technologies (40 per cent), both of which can lead to mistakes and biases.
This is why 51 per cent of firms said human oversight is needed in AI governance.
"The leaders in this next phase of AI adoption will be those who build transparency and control into AI from the start," said Majzoub in the press release. "As AI becomes embedded in day-to-day operations, organizations need secure information management as the foundation: clear governance frameworks, policy-based controls, and continuous monitoring that ensure AI systems remain trustworthy and compliant.
"Just as important is aligning AI with the right data, security practices, and oversight from the outset so innovation can scale responsibly and deliver measurable business value."
