In a data-driven society, the NSW Police Force has revealed how it sifts through terabytes of data each day.
In late 2017, the NSW Police rolled out a new digital IT strategy, which included the Integrated Policing Operating System (IPOS). The initiative was designed to help parse data acquired from CCTV cameras, body-worn cameras, laptops, mobiles, dashcams, interviews, triple zero calls, reports, forensic investigations, and much more. The cloud-based system was built with the help of Microsoft and its Azure cloud, and led to the formation of the AI/ML Insights policing platform. The platform can automate tasks such as transcribing audio and scrubbing through CCTV footage, saving the NSW government time and money each year.
However, wider adoption of machine learning raises ethical questions. With this in mind, Microsoft’s Office of Responsible AI has performed several reviews of the AI/ML solution with particular attention to human rights. These reviews matter all the more at a time when the Australian Human Rights Commission is calling facial recognition into question. Transparency is equally essential: Australians should know what data is collected and how it is used. Notably, Microsoft’s Azure system does not use facial recognition, instead relying on a “computer vision API to identify objects that assist with police cases”. The aim is to democratise the handling of data so that police can operate at full capacity. Many other state and federal bodies have since shown interest in the AI/ML solution and its potential to improve police work. But however promising machine learning and artificial intelligence are, these advancements cannot come at the cost of human ethics, nor infringe on people’s privacy.
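To make the object-identification idea concrete, here is a minimal sketch of how detections from a computer-vision service might be triaged. This is not NSW Police’s actual system or the Azure API itself: the record layout, labels, confidence values, and the `flag_detections` helper are all illustrative assumptions, mimicking the general shape of object-detection output (a label, a confidence score, and a bounding box per detection).

```python
# Hypothetical triage step over object-detection results.
# All names, thresholds, and values below are illustrative only.

CONFIDENCE_THRESHOLD = 0.7  # assumed cut-off for flagging a detection


def flag_detections(detections, wanted_labels, threshold=CONFIDENCE_THRESHOLD):
    """Keep detections whose label is of interest and whose confidence
    meets the threshold, sorted from most to least confident."""
    hits = [
        d for d in detections
        if d["label"] in wanted_labels and d["confidence"] >= threshold
    ]
    return sorted(hits, key=lambda d: d["confidence"], reverse=True)


# Example frame: three detections from one CCTV still (fabricated values).
frame = [
    {"label": "vehicle", "confidence": 0.91, "box": (40, 12, 200, 120)},
    {"label": "person",  "confidence": 0.55, "box": (300, 80, 60, 150)},
    {"label": "vehicle", "confidence": 0.74, "box": (10, 10, 90, 60)},
]

flagged = flag_detections(frame, wanted_labels={"vehicle"})
print([d["confidence"] for d in flagged])  # → [0.91, 0.74]
```

A human reviewer would still need to confirm each flagged detection; the point of such a filter is only to rank what an analyst looks at first, not to make decisions on its own.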