AI Risk in 2025: What the Data Reveals

We recently analysed the results of an AI risk assessment quiz taken by over 50 organisations. The data provides a revealing snapshot of how businesses are adopting AI and where they are leaving themselves exposed. The majority of companies are already using AI (about two-thirds of respondents), and most of those plan to expand their AI use in the next year (72% have expansion plans). However, our findings show that having AI is not the same as managing AI risk. Many organisations lack fundamental safeguards like clear policies, regular oversight, and staff training. In this blog post, we’ll highlight the main insights from the quiz results and discuss the most prevalent risk areas in AI setups.

1. AI Adoption Is High, but Maturity Varies

AI usage is widespread among the respondents: roughly 70% of the surveyed organisations reported using AI in their business. Among those using AI, enthusiasm is strong, with nearly three-quarters planning to expand their AI use in the next 12 months. This confirms what many industry surveys are seeing: AI adoption is growing rapidly across sectors [1].

That said, the maturity of AI governance varies greatly. Based on the quiz’s scoring categories, only about one-third of organisations achieved the highest designation of “AI in Control,” meaning they have most best practices in place. The rest showed significant gaps: roughly one-third were in an “AI Getting Started” stage (these tended to be the companies not using AI yet), and the remaining third fell into risk categories (flagged as either “AI Tune-Up Needed” or “AI at Risk”). In other words, only about 1 in 3 organisations currently has AI well under control, while the others are either just beginning or have notable weaknesses to address. This wide spread in maturity underscores that simply deploying AI isn’t enough; governance and risk management must catch up with adoption.

2. Governance and Policy Gaps Are Common

One of the clearest trends is a lack of formal AI governance. Among AI-using companies, only about 60% said they have a documented AI policy in place, meaning roughly 40% have no clear rules on AI usage, governance, or security. This aligns with other research showing that less than half of businesses have established AI policies [2]. A documented AI policy is crucial for setting guidelines on how AI should be used responsibly and safely within the organisation. Without it, employees may end up winging it, which invites trouble.

Regular risk oversight is also falling short. Just 47% of organisations have conducted an AI risk assessment in the last six months; the rest either have not or are unsure. In fact, over half admitted they haven’t done a recent risk review of their AI systems. Routine risk assessments are like health check-ups for your AI: they help uncover hidden issues (security vulnerabilities, compliance gaps, ethical concerns) before they escalate. If most companies aren’t doing these check-ups, risks can fester unnoticed. It’s no surprise, then, that in a recent global survey 93% of companies recognised AI risks but only 9% felt prepared to manage them [3]. Establishing a regular AI risk assessment process is a key governance practice that many are currently missing.

Another governance blind spot is basic oversight of AI usage. Nearly 40% of AI-using firms do not have a clear handle on which AI tools their team is using: some answered “no” or “not sure” when asked if they know all the AI apps in use. This lack of visibility can lead to “rogue” AI usage, where employees experiment with new tools without the internal IT team’s knowledge. It’s hard to manage risks (like data leaks or cost overruns) if you don’t even know an application is being used. Yet many companies are in exactly that situation due to the explosion of easily accessible AI tools.

In summary, foundational governance practices (having a policy, tracking AI tool usage, and performing risk assessments) are absent in a large portion of surveyed organisations. These gaps leave companies without a safety net, increasing the likelihood of costly AI-related incidents or compliance issues.

3. Data and Model Management Are Overlooked

Good AI systems require good data management, but our findings show this is a weak spot. When asked “Do you regularly update and improve the information your AI tools use to ensure accuracy?”, the vast majority answered at the lowest possible frequency: 72% of respondents admitted they never or rarely update their AI’s data or knowledge bases, and the average rating for update frequency was only 2.4 out of 10. Essentially, most organisations are not refreshing or tuning their AI with new data. This is a serious concern, because an AI that isn’t kept up to date can quickly become stale or even start producing errors. Imagine an AI tool making business decisions based on last year’s data or outdated assumptions: the outcomes won’t be reliable. Regularly updating training data, knowledge repositories, or model parameters is critical to maintaining AI accuracy over time.
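To make the habit concrete, here is a minimal Python sketch of the kind of check a team could run on a schedule: it flags knowledge-base documents that haven’t been touched within an agreed review window. The `knowledge_base/` folder and the 90-day window are assumptions for illustration, not part of our quiz data or any particular product.

```python
from datetime import datetime, timedelta, timezone
from pathlib import Path

# Hypothetical example: flag knowledge-base documents that haven't been
# refreshed within an agreed review window (here, 90 days).
REVIEW_WINDOW = timedelta(days=90)
KNOWLEDGE_BASE_DIR = Path("knowledge_base")  # assumed folder of source documents


def find_stale_documents(base_dir: Path, window: timedelta) -> list[Path]:
    """Return documents whose last-modified time falls outside the review window."""
    cutoff = datetime.now(timezone.utc) - window
    stale = []
    for doc in base_dir.rglob("*"):
        if doc.is_file():
            modified = datetime.fromtimestamp(doc.stat().st_mtime, tz=timezone.utc)
            if modified < cutoff:
                stale.append(doc)
    return stale


if __name__ == "__main__":
    for doc in find_stale_documents(KNOWLEDGE_BASE_DIR, REVIEW_WINDOW):
        print(f"Needs review: {doc}")
```

Even a simple report like this, run monthly, turns “we should refresh the data” into a concrete, assignable task.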

Testing and validation of AI outputs is another aspect of data management. Here the picture was somewhat better, but still far from ideal. About 58% of organisations said they do test AI outputs for bias or errors before using them, which means 42% are not consistently checking. Skipping these checks can let biased or incorrect AI recommendations slip through into real decisions. For example, if an AI recruiting tool isn’t tested for bias, it might unknowingly favour or disfavour certain candidates. Or an AI analytics system might output flawed insights due to an unseen data quirk. Regular bias and error testing is how you catch these issues. The fact that 4 in 10 firms aren’t doing this suggests many AI deployments are essentially unmonitored after launch. For robust AI risk management, treating AI models as “fire and forget” is not an option; they need ongoing testing, maintenance, and tuning.
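As an illustration of how lightweight a first check can be, the Python sketch below compares selection rates across groups using the common “four-fifths” rule of thumb. The decision records are made up, and a real fairness review would use richer data and more than one metric, but even a spot-check like this is better than no check at all.

```python
from collections import defaultdict

# Hypothetical example: a simple disparate-impact check on an AI screening tool.
# Each record is (group_label, selected_by_ai); the data below is invented.
decisions = [
    ("group_a", True), ("group_a", False), ("group_a", True), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]


def selection_rates(records):
    """Compute the share of positive AI decisions per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in records:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {group: sel / total for group, (sel, total) in counts.items()}


rates = selection_rates(decisions)
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group}: selection rate {rate:.0%}, ratio vs best group {ratio:.2f} [{flag}]")
```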

4. Employee Training and Guidance Need Improvement

One striking finding was how few organisations have educated their people on safe and effective AI use. Only about 53% of companies have provided training to their staff on using AI tools, meaning nearly half have offered no formal training or are unsure about it. Even among those using AI actively, many employees may be self-taught or learning by trial-and-error. This lack of training is risky; untrained users might misuse AI tools, ignore ethical guidelines, or simply not get the full value out of AI. Our data echoes a broader trend; a recent survey found 55% of employees using AI at work have received no training on its risks [4]. In other words, many employees are left to figure out AI’s pitfalls on their own, which can lead to mistakes.

Beyond general training, clear guidelines for staff are often missing. We asked if employees know what company or customer information they are allowed to share with AI tools (for example, entering sensitive data into a generative AI service). Alarmingly, 75% of organisations indicated their staff are not clear on these boundaries. Most respondents picked the lowest possible rating for staff awareness, making this the single most lopsided vulnerability in the quiz. Without guidance, a well-meaning employee might feed confidential data into a third-party AI platform, inadvertently causing a data breach or compliance violation. Every company using AI should explicitly communicate rules on what data can and cannot be exposed to AI services (especially external cloud AI like ChatGPT). But right now, such guidelines appear to be the exception rather than the norm.
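For illustration only, here is a minimal Python sketch of the sort of pre-submission screen some teams put in front of external AI services. The patterns (email addresses, card-like numbers, TFN-style numbers) are placeholders we have assumed for the example, and no filter replaces clear guidelines and training, but it shows that a basic guardrail doesn’t require much machinery.

```python
import re

# Hypothetical example: a last-line-of-defence screen run before text is sent
# to an external AI service. Patterns are illustrative, not exhaustive.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "TFN-style number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),
}


def screen_prompt(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]


prompt = "Please summarise the complaint from jane.doe@example.com about card 4111 1111 1111 1111."
findings = screen_prompt(prompt)
if findings:
    print("Blocked: prompt appears to contain", ", ".join(findings))
else:
    print("Prompt passed basic screening.")
```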

On the positive side, many respondents do have plans to improve internal AI skills; over 70% of AI-using organisations said they intend to expand AI use, which likely includes growing their team’s proficiency. However, intention needs to translate into action via training programs, workshops, or hiring expertise. The quiz results make it clear that investing in people is as important as investing in technology when it comes to AI. Well-trained employees who understand AI risks and best practices are one of the best defences against AI-related issues.

5. Cost and Usage Oversight Is Lacking

Controlling the costs and sprawl of AI tools is another area where many firms are vulnerable. The data shows that 72% of organisations have no formal controls or monitoring of AI-related costs (such as API usage fees, cloud computing costs for AI, subscription fees for AI software, etc.). In the quiz, nearly three-quarters of respondents again chose the lowest option when asked about cost management measures. This suggests that most companies aren’t tracking how much they’re spending on AI or whether individual teams are racking up unforeseen bills. It’s an easy trap to fall into: for instance, a developer might enable an AI API that quietly accrues charges per use, and without cost controls or alerts in place, the finance team gets a nasty surprise at month-end. Organisations that don’t monitor AI costs can end up with budget overruns or inefficient spending, which directly hits the bottom line.
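To show how little machinery a basic cost check needs, here is a Python sketch that rolls monthly AI spend up per team and flags anything past 80% of a budget. The teams, services, figures, and threshold are all invented for the example; in practice these numbers would come from billing exports or provider usage reports.

```python
from collections import defaultdict

# Hypothetical example: roll up AI spend per team from a simple usage log and
# flag teams approaching a monthly budget. All figures and names are made up.
MONTHLY_BUDGET_AUD = 500.00
usage_log = [
    {"team": "marketing", "service": "text-generation API", "cost_aud": 120.40},
    {"team": "marketing", "service": "image API", "cost_aud": 310.15},
    {"team": "support", "service": "chat assistant", "cost_aud": 95.00},
]

spend = defaultdict(float)
for entry in usage_log:
    spend[entry["team"]] += entry["cost_aud"]

for team, total in sorted(spend.items()):
    status = "OVER 80% OF BUDGET" if total > 0.8 * MONTHLY_BUDGET_AUD else "within budget"
    print(f"{team}: A${total:.2f} this month ({status})")
```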

Lack of cost oversight often goes hand-in-hand with the earlier point about not knowing what tools are in use. If employees are experimenting with various AI services unchecked, costs can multiply quickly. Conversely, some companies may be limiting AI usage because of cost fears, but without real tracking it’s hard to strike the right balance. Implementing basic cost controls (like usage quotas, budget alerts, or centralised approval for paid AI services) can save money and prevent unpleasant surprises. Yet, as our findings indicate, few companies have these controls today.

Another aspect of oversight is how AI systems are hosted or deployed. Interestingly, our respondents were split on this: about 53% host their AI systems on infrastructure they control (on-premises or private cloud), while roughly 19% rely fully or partly on external cloud providers, and 28% weren’t sure of the setup. There’s no one-size-fits-all answer for the “right” infrastructure, but being unsure is a red flag: it means IT leadership doesn’t have full visibility into where their AI workloads run or what data might be leaving the company. In any case, companies should consciously decide on a hosting strategy that balances control with convenience and ensure they have insight into that environment. Whether you’re using third-party AI platforms or building your own, oversight is key.

Practical Steps to Reduce AI Risk

The trends above highlight where many organisations are vulnerable. The good news is that these risks can be mitigated with proactive measures. Based on the findings, here are some actionable steps your organisation can take:

  • Establish an AI Policy and Governance Framework: If you don’t have a documented AI policy yet, make it a priority. Define guidelines for acceptable AI use, data privacy, security, and ethical considerations. Designate an AI governance team or point person to keep policies up to date as the technology and regulations evolve. This provides a foundation for all other risk management efforts.
  • Conduct Regular AI Risk Assessments: Subject AI systems to periodic risk reviews (at least annually, if not more often). Evaluate potential failure points, security vulnerabilities, compliance issues, and ethical risks. Use these assessments to inform mitigation plans: for example, if a risk assessment finds that an AI tool could produce biased results, put controls in place (such as human review or model retraining) to address it.
  • Monitor AI Usage and Costs: Implement processes to track what AI tools are being used across the organisation and how much is being spent on them. This might involve requiring teams to register new AI tools with IT or procurement, using monitoring software for API usage, and setting spending alerts or limits.
  • Invest in Training and Awareness: Don’t leave your employees flying blind with AI. Provide training sessions on how to use AI tools effectively and safely. Include specific guidance on data security: clearly explain what data can and cannot be shared with external AI services. Encourage a culture of asking when unsure. Employees should know who to consult if they’re considering a new AI tool or use case. Well-trained staff are your first line of defence against AI misuse. As one study noted, training employees on AI risks is crucial, yet only 17% of companies have formally done so [3].
  • Keep AI Models and Data Updated: Build a maintenance schedule for your AI systems. This includes updating the data they rely on (for example, refreshing a training dataset or knowledge base on a regular cadence) and retraining or fine-tuning AI models as needed. Also implement routine checks for accuracy; if an AI output seems questionable, investigate and correct it. Allocating resources to ongoing AI maintenance will ensure your tools continue to perform well and don’t drift into error or irrelevance.
  • Test for Bias and Errors: Make it standard procedure to validate important AI outputs before they are put to use. This could be as simple as a manager spot-checking AI-generated reports, or as formal as an internal audit of AI decisions for fairness. Especially for AI systems that affect customers or high-stakes decisions, put in place bias testing and result verification. Catching and fixing a problematic output early can save the company from reputational harm or liability later.

By taking these steps, organisations can significantly reduce their AI-related risks while still reaping the benefits of the technology. Each action, from policy creation to training to monitoring, adds a layer of protection and assurance.

Conclusion

AI can unlock tremendous value for businesses, but as the old maxim goes, “with great power comes great responsibility.” The assessment data we examined makes it clear that many companies are racing ahead with AI without putting enough guardrails in place. The most prevalent risk areas include the lack of governance (policies and oversight), poor data and model maintenance, inadequate employee training, and little control over costs and tool usage. These gaps leave organisations open to errors, security breaches, compliance violations, and project failures.

The encouraging news is that awareness of AI risk is growing, and the steps to address these issues are becoming well-defined. By focusing on the practical takeaways outlined above (establishing strong governance, educating your workforce, and actively managing your AI systems), you can move your organisation toward the “AI in Control” end of the spectrum. The companies that succeed with AI will be those that not only innovate with the technology but also invest in the discipline and safeguards to use it responsibly.

Closing Thoughts

AI risk management might not sound as exciting as the latest AI capability, but it is the backbone that will support all your AI ambitions. The insights from this quiz give a candid look at where organisations are today; use them as motivation to ask, “How are we handling this in our company?” By learning from these trends and implementing best practices, your organisation can enjoy the rewards of AI innovation without the unwelcome surprises. Here’s to putting AI risk in check and unlocking AI’s full potential safely.

A real-world warning

Consider the recent case of Deloitte in Australia. They were commissioned by the Department of Employment and Workplace Relations (DEWR) to produce a compliance framework review worth around A$440,000. According to reporting, the firm admitted to using a generative AI tool (Azure OpenAI/GPT-4) in the analysis, and the final report contained a number of errors, including references that turned out to be non-existent.

The outcome? Deloitte agreed to repay the final instalment of the contract, citing issues in the output [5].

What this demonstrates is that even large, experienced firms can stumble when AI is used without adequate governance, oversight and validation.

Why this matters to your organisation

  • It shows that errors in AI output can have serious financial and reputational consequences. Organisations shouldn’t assume that AI use means no risk.
  • It underscores that transparency and auditability are essential. The Deloitte case mentioned “lack of traceability” in how the AI-derived findings linked back to legislation or evidence.
  • It drives home the point that human oversight remains critical. AI is a powerful tool, but it needs proper control, governance and verification, especially in high-stakes applications.


Sources: The analysis in this post is based on an internal dataset of AI risk quiz responses (2025). Supporting statistics on industry trends were drawn from recent surveys, analyses and reports for context [1] [2] [3] [4] [5].

[1] The State of AI: Global survey - McKinsey

https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai

[2] Less than half of businesses have an AI governance policy - AI Data Analytics Network

https://www.aidataanalytics.network/data-science-ai/news-trends/less-than-half-of-businesses-have-an-ai-governance-policy

[3] Only 17% of Companies Globally Have Addressed Generative AI Risks with Employees, According to New Riskonnect Research

https://riskonnect.com/press/riskonnect-research-generative-ai-risks-with-employees/

[4] Is Gemini AI Safe or a Security Risk to Your Business? - Metomic

https://www.metomic.io/resource-centre/is-gemini-ai-safe-or-a-security-risk-to-your-business

[5] Deloitte to pay money back to Albanese government after using AI in $440,000 report – The Guardian

https://www.theguardian.com/australia-news/2025/oct/06/deloitte-to-pay-money-back-to-albanese-government-after-using-ai-in-440000-report


