Age of AI – The legal implications of the EU’s Guidelines for Ethical AI

What just happened?

In April 2019, the EU published its guidelines on Ethical AI.

What does this mean?

The guidelines were meant to act as a pilot project through which companies could test the principles in practice.[1] Like most guidelines, they were voluntary. Several key stakeholders were involved in the consultation process, and they highlighted the key requirements for Trustworthy AI.[2] These requirements are the following:

  • Human agency and oversight

  • Technical robustness and safety

  • Privacy and data governance

  • Transparency

  • Diversity, non-discrimination and fairness

  • Societal and environmental well-being

  • Accountability[3]

However, the impact has not been what was initially envisaged. The Commission published its white paper on AI in February 2020.[4] The EU aims to be a global powerhouse of technology, and the paper forms part of a broader effort to catch up with the US and China. Unfortunately, however, the EU cannot be a leader in regulation if it cannot be a leader in technology.[5]

How will this affect the legal industry?

Most law firms are affected by these guidelines, as they use AI for price automation, digital dictation, legal research, and legal chatbots.[6] Firms like Clifford Chance hold hackathons to train their graduates in legal tech and make them more tech-savvy.[7] Law firms are looking to automate their processes to become more efficient. With the release of the white paper, they will need to be careful about human oversight and ensure accountability on matters that require human supervision; they cannot rely wholly on AI-generated output in any part of the advice they give. In Australia, firms like Norton Rose Fulbright use chatbots to provide basic legal information on privacy law without the involvement of a lawyer.[8] Whether such technology would be permissible under the EU regime depends entirely on how strict the eventual regulation turns out to be. The white paper only provides suggestions for legislation to be enacted, and there is still time before its development and implementation. 

As for clients, law firms are likely to advise against the use of facial recognition technology.[9] This disrupts business for many of the facial recognition and AI start-ups developing in the region, as all high-risk AI applications will undergo a compulsory assessment before entering the market. Rather than following the initial plan of a moratorium on the technology, it has been suggested that member states should have the autonomy to impose a ban as they see fit. While the aim of the guidelines is to ensure the technical robustness of the software and the data privacy of EU citizens, their wording lacks the courage to impose a moratorium under which safer and more innovative technology could be developed.[10] Law firms may therefore want to steer clients away from using such technology, or provide solutions that ensure strict compliance with the GDPR while doing so. 

Written by Srinidhi Dhulipala

Assessing Firms: #GibsonDunn&CrutcherLLP #Orrick,Herrington&SutcliffeLLP #Latham&WatkinsLLP #Dentons #CooleyLLP

Has this topic caught your interest? Read more by clicking on the following links:

  1. OECD (2020), The Impact of Big Data and Artificial Intelligence (AI) in the Insurance Sector

  2. Lloyd Langenhoven, 'The Symbiotic Relationship Between Lawyer and Legal Tech' (Herbert Smith Freehills | Global law firm, 8th October 2018)

  3. 'The Brussels effect, cont; The EU Wants To Set The Rules For The World Of Technology' (The Economist, 20th February 2020)

  4. Siddharth Venkataramakrishnan, 'EU Backs AI Regulation While China and US Favour Technology' (The Financial Times, 25th April 2019)

References:

[1] 'Pilot The Assessment List Of The Ethics Guidelines For Trustworthy AI - FUTURIUM - European Commission' (FUTURIUM - European Commission, 2020)

[2] 'European Commission Ethics Guidelines for Trustworthy AI' (Simmons & Simmons, 12th April 2019)

[3] 'EU Policy & Regulatory Alert - EU Publishes Artificial Intelligence Ethics Guidelines | Insights | DLA Piper Global Law Firm' (DLA Piper, 12th April 2019)

[4] Commission (EC), 'On Artificial Intelligence - A European Approach to Excellence and Trust' White Paper, COM(2020) 65 final

[5] Javier Espinoza and Madhumita Murgia, 'The Four Problems With Europe’s Vision Of AI' (The Financial Times, 26th February 2020)

[6] Andrew Davies, 'Artificial Intelligence And The Legal Industry - Legal Futures' (Legal Futures, 2nd May 2019)

[7] 'Clifford Chance Hackathon Empowers All Trainees To Produce Apps' (Lawyer Monthly | Legal News Magazine, 11th December 2019)

[8] 'Introducing Parker – Our IDD Chatbot' (Norton Rose Fulbright, December 2018)

[9] Natalia Drozdiak, 'Europe Mulls New Tougher Rules for Artificial Intelligence' (Bloomberg, 17th January 2020)

[10] Javier Espinoza, 'EU Struggles To Find Right Balance On AI' (The Financial Times, 28th February 2020)

Disclaimer: This article (and any information accessed through links in this article) is provided for information purposes only and does not constitute legal advice.