Who is responsible for responsible AI?


In 2019, Forrester predicted that there would be three high-profile AI-related PR snafus in 2020. It's only August, and we've already seen plenty of examples of AI going wrong -- the ACLU sued facial recognition provider Clearview AI for violating Illinois's biometric privacy law, the UK's Home Office was forced to abandon its visa-processing algorithm, which was deemed racist, and researchers recently found that automated speech recognition systems from Amazon, Apple, IBM, Google, and Microsoft perform much worse for Black speakers than for white ones.

AI will continue to err. And it will continue to surface thorny legal and accountability questions, namely -- who is to blame when AI goes wrong? I am not a lawyer, but my father spent his career as a litigator, so I posed this question to him when I kicked off this research. His response: "That's easy -- a lawyer would say, 'Sue everybody!'" 

Regardless of where true accountability (legal or otherwise) lies, that's inevitably what will happen as regulation of AI rises in high-risk use cases such as healthcare, facial recognition, and recruitment. So the key is for companies to build and deploy responsible AI systems from the get-go, to minimize overall risk and prevent their AI systems from behaving in illegal, unethical, or unintended ways. You will be held accountable for what your AI does, so you'd better make sure it does what it's supposed to. 

Third-Party Risk Is AI's Blind Side 

The AI accountability challenge is difficult enough when you're creating AI systems on your own. But the majority of companies today partner with third parties (technology providers, service providers, consultancies, data labelers, etc.) to develop and deploy AI, introducing vulnerabilities into a complex AI supply chain. Third-party risk is nothing new, but AI differs from traditional software development because of its probabilistic and nondeterministic nature. 

To help our clients reduce these vulnerabilities, we recently unveiled research that analyzes how to tackle third-party risk and improve the overall accountability of AI systems. We present multiple best practices across the AI lifecycle for ensuring accountability, such as offering bias bounties, conducting rigorous testing, and performing third-party risk assessments; a minimal sketch of what one such test might look like follows.
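To make "rigorous testing" a little more concrete, here is a minimal sketch of one kind of pre-deployment check: comparing false positive rates across demographic groups and flagging large gaps, much like the speech recognition disparities cited above. The function names, tolerance threshold, and toy data are illustrative assumptions, not part of the Forrester research or any vendor's API.

```python
# Minimal sketch of a group-level disparity check (illustrative only).
from collections import defaultdict

def false_positive_rate_by_group(records):
    """records: iterable of (group, y_true, y_pred) with binary labels."""
    counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
    for group, y_true, y_pred in records:
        if y_true == 0:
            counts[group]["negatives"] += 1
            if y_pred == 1:
                counts[group]["fp"] += 1
    # Rate of false positives among true negatives, per group.
    return {
        g: c["fp"] / c["negatives"]
        for g, c in counts.items()
        if c["negatives"] > 0
    }

def disparity_exceeds(rates, tolerance=0.1):
    """Flag if the gap between best- and worst-served groups exceeds tolerance."""
    return max(rates.values()) - min(rates.values()) > tolerance

if __name__ == "__main__":
    # Toy predictions from a hypothetical third-party model.
    sample = [
        ("group_a", 0, 0), ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1),
        ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 0, 0), ("group_b", 1, 1),
    ]
    rates = false_positive_rate_by_group(sample)
    print(rates)                     # e.g. {'group_a': 0.33..., 'group_b': 0.66...}
    print(disparity_exceeds(rates))  # True -> escalate before deployment
```

A check like this could be run against a vendor's model on your own holdout data as part of a third-party risk assessment, with the tolerance set by your legal and risk teams rather than the value assumed here.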

Practice Your Principles With Responsible AI 

For those of you interested in other research on responsible AI, we have also published reports on: 

Accountability in AI
How to detect and prevent harmful bias
Explainability in AI

We also have an upcoming report on Responsible AI solutions, in which we will shine a light on both established and emerging vendors in this space offering ways to build and test trusted AI systems. Stay tuned! 

This post was written by Principal Analyst Brandon Purcell, and it originally appeared here.
