The AI revolution is upon us, and its impact will be profound. A record 28% of venture capital investment went to AI startups in Q2 2024. Meanwhile, in the world of small and medium-sized businesses (SMBs), more than 90% will employ AI tools for continuous monitoring and anomaly detection by 2030, according to Sage’s recent Vision to Industry report.

Aaron Harris, chief technology officer at Sage.
But, as we embrace the immense potential of AI, we must proceed with caution. The world of AI can feel like the Wild West, with rapid expansion and access outpacing regulation. A steadfast commitment to ethical considerations is essential moving forward.
Finance leaders know this: 72% of respondents surveyed by Sage plan to establish policies specific to AI use, and 71% are committed to conducting regular ethics training for AI users.
But, given the scale and impact of AI, this is not a challenge any one company or country can address alone. We need a global, united approach – led by policymakers, industry leaders, technologists, and ethicists – to establish shared principles and best practices that lead to ethical AI adoption in the finance industry.
Best practices across the board
In Jurassic Park, Jeff Goldblum’s Dr. Ian Malcolm warned us, “Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should.” He was talking about cloning dinosaurs, but the same applies to ethical AI, where the temptation can be to implement the technology across the board without considering the ethical implications.
It is vital that best practices are created and principles established to introduce ethical guardrails that address bias and encourage transparency, accountability, and data privacy. For instance, at Sage, we could build an AI tool that allows SMBs to rate customers on how quickly they pay. But this could easily disenfranchise struggling businesses and exacerbate their problems rather than solving them. Likewise, AI could be used to screen candidates’ job applications, but that is an obvious opportunity for bias to enter the equation.
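To make the guardrail idea concrete, here is a minimal sketch of the kind of bias check a team might run before shipping a scoring feature like the payment-speed example above. It is illustrative only: the score, the customer segments, and the 0.8 threshold (the common “four-fifths” heuristic) are assumptions made for this example, not a description of any Sage product.

```python
# Hypothetical illustration only: a simple guardrail that compares a
# payment-speed score across customer segments before it is exposed to users.
from statistics import mean

def disparate_impact_ratio(scores_by_group):
    """Ratio of the lowest group's average score to the highest group's."""
    averages = {group: mean(vals) for group, vals in scores_by_group.items()}
    return min(averages.values()) / max(averages.values())

# Assumed example data: average "pays on time" scores for small vs. large customers.
scores = {
    "small_business": [0.42, 0.55, 0.38, 0.61],
    "large_business": [0.81, 0.77, 0.90, 0.84],
}

ratio = disparate_impact_ratio(scores)
if ratio < 0.8:  # the "four-fifths" rule of thumb, often used as a fairness heuristic
    print(f"Warning: ratio {ratio:.2f} suggests the score may disadvantage one group")
```

A check like this doesn’t remove the need for human judgement, but it surfaces the question of who the model disadvantages before the feature reaches customers.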
Unlike with other innovative tech, the “move fast and break things” philosophy doesn’t apply to AI. It should be a prerequisite to ensure those building AI solutions are qualified before they begin. There is an ethical risk attached to building AI, but that risk can be mitigated if you have the right framework in place.
That means everything from building features to detect data errors and training AI on reliable data sets, to utilising advisory councils and ensuring the right representation in your data science and AI development teams.
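As an illustration of the “detect data errors” piece, a pre-training data-quality gate might look something like the sketch below. The field names, limits, and pandas-based approach are assumptions for the example, not Sage’s actual tooling.

```python
# Illustrative only: a lightweight pre-training data-quality gate of the kind
# described above. Field names and thresholds are assumptions, not Sage's.
import pandas as pd

def validate_invoices(df):
    """Return a list of data-quality issues found in an invoice dataset."""
    issues = []
    if df["amount"].isna().any():
        issues.append("missing invoice amounts")
    if (df["amount"] < 0).any():
        issues.append("negative invoice amounts")
    if df.duplicated(subset=["invoice_id"]).any():
        issues.append("duplicate invoice IDs")
    if (df["days_to_pay"] > 365).any():
        issues.append("implausible payment delays (over a year)")
    return issues

# Only train the model if the dataset passes these basic checks.
invoices = pd.DataFrame({
    "invoice_id": [1, 2, 3],
    "amount": [120.0, 85.5, 240.0],
    "days_to_pay": [30, 45, 12],
})
problems = validate_invoices(invoices)
if problems:
    raise ValueError(f"Refusing to train on unreliable data: {problems}")
```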
Recent data shows only 22% of AI professionals are women, and 25% of AI employees identify as racial or ethnic minorities. Investments are being made to create clearer pathways here, but it must be a united approach between policymakers and industry leaders.
Collaboration and knowledge sharing
These best practices shouldn’t be created in isolation and, thankfully, the AI developer and data scientist community is a traditionally collaborative one – especially compared to other types of technology development. This community can come together to prioritise and protect ethics – fostering a culture of transparency and accountability around the open-source models used to build and train the AI and machine learning algorithms deployed in finance.
Of course, collaboration itself must be done ethically to maintain data privacy. As we strive to create ethical AI, it's crucial to respect the privacy of individuals and organisations whose data is used in training these models. Another challenge is ensuring equitable access to resources. The risk of power concentration among a few large players can stifle innovation and limit diversity in AI development.
But, by pooling our collective knowledge and expertise, we can build responsible AI. Open-source models offer a unique opportunity to democratise AI development. By allowing developers from diverse backgrounds and skill levels to contribute to and learn from each other, they drive innovation, help avoid the perpetuation of biases, and ensure a broad range of perspectives are considered in AI development.
An international approach
By actively promoting collaboration and knowledge sharing, the global AI community can support the development of ethical AI in finance and build trust. This must be backed by effective regulation and international standards for responsible AI development and deployment across borders.
Regulation is at different stages around the world. The EU has the AI Act, the G7 has its International Guiding Principles on AI and the AI Code of Conduct, individual countries are rolling out their own rules, and, in the United States, there is a state-by-state approach. These are positive steps but, broadly speaking, regulation has not caught up with the widespread adoption and accessibility of AI in a world where hundreds of thousands of open-source models are available.
AI is a borderless technology. The priority should be for governments, policymakers and countries to align on a co-ordinated set of foundational principles that set the high-water mark for ethical AI. Initiatives like the Bletchley Declaration show this approach is gaining traction over countries taking isolated, independent approaches.
Of course, given how quickly AI is evolving, any international regulations must adapt to the pace of change to maintain ethics, sustain trust in the technology, and enable rather than inhibit innovation that can benefit employers and employees.
A new, ethical era
We are in the AI era. In accounting specifically, firms and accountants are ready and willing to embrace AI to improve financial accuracy, unlock operational efficiency, and enable strategic decision-making. But this readiness rests on the premise that AI solutions and models are ethical. In fact, the accounting ecosystem as a whole is ready to take responsibility for ensuring AI is used in an ethical manner.
That reality can only materialise if steps are taken to create a unified approach to ethical AI. A scattergun approach, varying from company to company and from AI model to AI model, will only lead to inaccurate, ineffective AI that is viewed with suspicion.
The more the industry can work together to define and agree on ethical frameworks, foundational regulations, and shared principles, the more AI can positively impact the finance industry and the businesses these teams and firms support.