How are Artificial Intelligence (AI) ethics monitored?

Tech companies should be held accountable for their AI models
VS
Collaboration with other industries is important
January 27, 2021
Montana McLaughlin-Tom

A brief backdrop:

Currently, AI ethics are enforced through governments, private corporations' guidelines and ethical codes, and academic institutions. AI ethics refers to eleven overarching themes, including transparency, responsibility, privacy, and freedom, among several others.


The problem…

AI is a relatively new field, and its ethical boundaries and protocols remain vaguely defined: there are no clear, concise, or industry-unified steps on a global scale for policymakers, companies, or developers. Enforcing ethical standards therefore becomes a minefield for policymakers and companies, who must ensure the standards are met without limiting innovation.


On one hand, companies should be given the freedom to explore a new field with few boundaries on day-to-day product development; on the other, this freedom means problems can grow to a larger scale before they are identified and dealt with. So, should Big Tech continue this way? Or should companies take a more holistic approach to developing AI, recognizing and solving problems earlier and putting more industry boundaries in place?

Big Tech should continue with its current forms of transparency through self-monitoring. This takes several forms, including intra-industry transparency and collaboration, and self-accountability through the promotion of company-wide ethical credos.


1. Intra-industry transparency & collaboration

Self-monitoring does not have to mean only that a company observes its own employees; it can also refer to multi-company collaboration that ensures ethical standards are upheld industry-wide.


For example, in April 2020 Google and OpenAI, along with Intel and U.S. and European research labs, published a paper promoting more transparency between company developers and AI policymakers. The paper breaks down ways that developers can be transparent with their stakeholders while still creating AI systems in a fair and secure environment. Suggestions include paying developers to find bias within AI systems and hiring third-party companies to perform bias and safety audits.
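To make the audit suggestion concrete, here is a minimal, hypothetical sketch of one automated check a third-party bias audit might run: comparing a model's positive-outcome rates across demographic groups (a demographic-parity check). The function name, data, and threshold idea are illustrative assumptions, not drawn from the paper.

```python
# Hypothetical sketch of one check a third-party bias audit might run:
# a demographic-parity comparison of a model's positive-outcome rates.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rates
    between demographic groups, plus the per-group rates."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Illustrative data: 1 = model recommends approval, 0 = rejection.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(preds, groups)
print(f"Per-group approval rates: {rates}")
print(f"Demographic parity gap: {gap:.2f}")  # an auditor might flag gaps above a threshold
```

A real audit would of course use production predictions and carefully defined group labels; the point is only that "bias and safety audits" can bottom out in checks this simple to state.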



2. Self-accountability through empowerment and question culture

Another way that technology companies could improve their transparency and responsibility is by creating a culture of self-accountability outside the AI development team.


While most employees in Big Tech are aware of the risks AI presents, there is often a disconnect among understanding those risks, knowing the company's in-house ethical stance, and working in a culture open to empowerment and questioning. If companies gave all teams, including non-AI development teams, a way to know the company's credo for AI ethics and fostered an environment of question culture, problems could be detected earlier throughout the development process.


How is that enforcing ethics?

Even though accountability is a two-sided equation, and the concept of Big Tech self-monitoring can be seen as one-sided, there are still levels of accountability present. Promoting more intra-industry collaboration and cultivating a deeper understanding of in-house ethics among employees fosters a question culture in the early stages of AI development, which in turn enables companies to remain innovative while embodying ethical standards.

Big Tech should take a more collaborative view in developing its AI. Ideas of how to implement this vary, but the two main streams of thought are making developers more holistic in their approach to AI development and promoting inter-industry collaboration.


1. A more holistic approach

Several articles briefly discuss the potential of AI ethics committees that companies can create to work alongside developers and help identify problems as they arise.


But what if we went one step further?

What if companies added colleagues trained in other fields to these committees? Adding people trained outside of an AI setting would help create broader discussions when debating the scope of AI and its potential problems. Admittedly, there are several realistic challenges with this scenario; however, the benefits could outweigh them in the long run.


For example, let us briefly consider the problem of inherent prejudice within AI, which AI systems learn from the language of humans. Social scientists have been debating this very problem for decades. If interdisciplinary round-table discussions were promoted, companies would therefore be able to make precautionary decisions rather than reactionary fixes.
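As a concrete illustration of how prejudice can surface in language-trained AI, here is a toy sketch comparing how close an occupation word sits to gendered words in an embedding space. The vectors are invented for illustration; a real check would use embeddings learned from a large text corpus (e.g., word2vec or GloVe), where exactly these skews have been observed.

```python
# Hypothetical sketch: measuring the gender association of an occupation
# word via cosine similarity. The 3-dimensional "embeddings" below are
# toy values invented for illustration, not from a trained model.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

embeddings = {
    "he":       [0.9, 0.1, 0.0],
    "she":      [0.1, 0.9, 0.0],
    "engineer": [0.8, 0.2, 0.3],  # skewed toward "he" in this toy data
}

gap = (cosine(embeddings["engineer"], embeddings["he"])
       - cosine(embeddings["engineer"], embeddings["she"]))
print(f"Gender-association gap for 'engineer': {gap:+.2f}")
# A positive gap mirrors the kind of learned prejudice that social
# scientists have long documented in human language.
```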


2. Inter-industry collaboration

On top of promoting round-table discussions, if companies pursued inter-industry collaboration, developers could adapt existing policy and ethics walk-throughs from more established fields.


For example, the medical field has years of experience in moderating and handling the gap between an ethical stance and the actual practice of that stance. A great example is the pharmacist, who works as a mediator between you and your doctor.


What if AI had a similar role? In working terms for AI, this would mean explaining clearly and concisely how data is being collected and handled, informing users of those actions early in the process, and ensuring that they understand what the company is doing.
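One way to imagine this mediator role in code: a machine-readable data-handling disclosure that an AI product presents before any collection begins. Every field name and value below is a hypothetical assumption, invented for illustration rather than taken from any standard or real product.

```python
# Hypothetical sketch: a plain-language data-handling disclosure shown
# to users before collection starts. All fields are invented examples.
DATA_DISCLOSURE = {
    "what_we_collect": ["typed queries", "approximate location"],
    "why_we_collect_it": "to personalize results and improve the model",
    "how_long_we_keep_it": "18 months, then deleted or anonymized",
    "who_can_see_it": ["our engineers (audited access)", "no third parties"],
}

def present_disclosure(disclosure):
    """Render the disclosure in clear, concise terms and ask for consent."""
    print("Before we collect anything, here is how your data is handled:")
    for field, value in disclosure.items():
        label = field.replace("_", " ").capitalize()
        print(f"  {label}: {value}")
    answer = input("Do you consent to this? (yes/no): ")
    return answer.strip().lower() == "yes"

if present_disclosure(DATA_DISCLOSURE):
    print("Consent recorded; collection may begin.")
else:
    print("No consent; no data is collected.")
```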

In a paper published in April 2020 for the Stanford Encyclopedia of Philosophy, Vincent C. Müller described his work as an attempt to "… propose an ordering where little order exists" with regard to applied AI ethics. Knowing the lack of boundaries and the vast opportunities present in the world of AI, do you think we should let AI ethics continue as they are, or consider new strategies that could potentially restrict innovation? Can we strike a balance between innovation and AI ethics?

Disclaimer: We are by no means supporting one side of the argument over the other. We collate different views and expand on them to give you a better understanding of the motivation behind these views.
Image credit: Markus Spiske (Pexels)