‘Responsible AI’ Foundation Vital For New Zealand Organisations As Global Legislation Ramps Up
New Zealand businesses should be aware of, and responding to, increasing artificial intelligence (AI) regulation around the world. That’s according to a recent Accenture report, which reveals that a number of countries are about to introduce legislation defining when and how AI data and information can be used and stored.
The report, From AI compliance to competitive advantage, found that 80% of companies plan to increase their investment in Responsible AI and that 77% see regulation of AI as a priority. However, just 33% of global customers trust how organisations are implementing AI.
Nick Mulcahy, Accenture’s New Zealand Country Manager, said governments internationally were introducing regulations to safeguard the use of AI technologies so that they are trusted, transparent and safe.
“New Zealand is in the very early stages of AI regulation, but internationally many governments and regulators are considering how to supervise and set standards for the responsible development and use of AI.
“The EU’s proposed AI Act is the best-known example: once ratified, anyone who wants to use, build or sell AI products and services within the EU will have to consider the legislation’s requirements for their organisation. For New Zealand it’s a matter of when, not if, AI regulations are fully enacted.
“New Zealand companies must prepare for AI regulation now, instead of taking a ‘wait and see’ approach or treating compliance as a box-ticking exercise, both of which can become unsustainable.”
The survey of 850 C-suite executives from 17 countries found that 97% believe future AI regulations will impact their business to some extent. To prepare for that eventuality, 77% of executives are making planning for regulation a company-wide priority, and 80% plan to commit at least 10% of their total AI budget to meeting regulatory requirements by 2024.
Mr Mulcahy said New Zealand organisations shouldn’t wait for regulations before adopting a Responsible AI framework. Those that act early will significantly outperform their competitors and benefit from greater trust from customers, suppliers and regulators.
“Responsible AI deals with important issues of ethics, data governance, trust and the law. It is the practice of designing, developing and deploying AI with good intentions to empower employees and businesses, and to positively impact customers and society.
“Responsible AI capabilities are an essential part of being an AI-mature organisation, and that maturity pays off: the most AI-mature companies already enjoy 50% higher revenue growth than their peers.”
“Designing AI responsibly from the start helps mitigate risk, meet regulatory requirements and create sustainable value. The converse erodes trust among workers, investors, consumers and society at large, and ultimately becomes a critical barrier to realising the full potential of AI at scale.”
Accenture’s report notes that organisations can build a Responsible AI foundation on four key pillars:
- Clear principles and governance structures for AI, with critical support from C-suite executives.
- A risk management framework that monitors and operationalises current and future policies.
- Technology tools that support fairness, explainability, robustness, accountability and privacy.
- Company culture and training that position Responsible AI as a business imperative and give employees a clear understanding of how to translate these principles into action.
“Scaling AI can deliver high performance for customers, shareholders and employees, but organisations must overcome common hurdles to apply AI responsibly and sustainably.
“While CEOs have historically cited a lack of talent and poor data quality and availability as the biggest barriers to AI adoption, managing data ethics, privacy and information security now tops the list.
“Being responsible by design can help organisations clear those hurdles and scale AI with confidence. By shifting from a reactive AI compliance strategy to the proactive development of mature Responsible AI capabilities, they’ll have the foundations in place to adapt as new regulations emerge,” said Mr Mulcahy.