Arnotts Technology Lawyers

The Australian Human Rights Commission (AHRC) has urged banks to guard against bias in the artificial intelligence (AI) technology they use to assess customer creditworthiness.

In its final report, the AHRC’s Commissioner Edward Santow noted that AI systems used by banks are often built on historical data that reflects past inaccuracy and discrimination. AI-based decision-making can expedite certain processes, but poorly designed or administered systems expose banks and fintech organisations to legal and reputational risk. The report looked specifically at home loan assessment, where AI systems drew on data up to 50 years old, from a period when women and marginalised groups were routinely disadvantaged. Accordingly, progress in gender equality and racial harmony may be undermined by certain AI decision-making.

Bias in the AI used in financial or government decision-making systems stems from how these technologies are built. AI is not created by financial analysts or lawyers, and programmers rarely have the intricate industry knowledge needed to build systems that are adaptive to financial and legal nuances. This issue was brought to light in 2019, when Apple and Goldman Sachs were accused of offering women smaller lines of credit than men. Accordingly, the AHRC is seeking thorough assessment of AI systems to prevent a repeat of the robo-debt catastrophe.
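The mechanism at issue can be illustrated with a toy sketch. The data, names, and scoring rule below are all invented for illustration: a model naively trained on historical loan decisions that penalised women will reproduce that penalty for new applicants whose finances are otherwise identical.

```python
# Hypothetical sketch of how historical bias leaks into an AI credit model.
# All records and field names here are invented for illustration.
from collections import defaultdict

# Synthetic "historical" loan decisions: identical incomes, biased outcomes.
history = [
    {"income": 80, "gender": "M", "approved": True},
    {"income": 80, "gender": "M", "approved": True},
    {"income": 80, "gender": "F", "approved": False},
    {"income": 80, "gender": "F", "approved": True},
    {"income": 80, "gender": "F", "approved": False},
    {"income": 80, "gender": "M", "approved": True},
]

def train(records):
    """Learn the historical approval rate for each (income, gender) group."""
    stats = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
    for r in records:
        key = (r["income"], r["gender"])
        stats[key][0] += r["approved"]
        stats[key][1] += 1
    return {k: approvals / total for k, (approvals, total) in stats.items()}

def predict(model, income, gender):
    """Score a new applicant by their group's historical approval rate."""
    return model.get((income, gender), 0.0)

model = train(history)
# Two applicants with identical finances, differing only in gender:
print(predict(model, 80, "M"))  # → 1.0 (male approval rate in the history)
print(predict(model, 80, "F"))  # → lower score inherited from biased data
```

Nothing in the training step is malicious; the model simply mirrors its inputs, which is why the AHRC's concern centres on the data and design process rather than intent.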

The AHRC is urging that AI and related technology be made “more accountable, accurate and fair.” To achieve this, it has called for legislation and for an AI Safety Commissioner to oversee the implementation and impact of AI on human rights. New policy governing AI-based decision-making across several industries is likely in the coming years.