Alibek Jakupov

AI and ethics with Microsoft: personal experience

Updated: Nov 19, 2021


As a data scientist, I can assure you that ethics should play a crucial role in every step and every iteration of the classical machine learning cycle. However, this is still not common practice, but I do believe that "a journey of a thousand miles begins with a single step", so starting from the central question is a good first step, and from there we will progress toward making our trained ML models available. This article was inspired by personal experience as well as some excellent references available on the internet (links and references are, of course, available at the end of the blog post). Up we go!



How can ethics impact your software?



I recently worked on a project to recognize protective clothing in clean rooms, and to train the AI model we chose the Custom Vision service. The logic was quite straightforward: with Python and OpenCV we read the video feed and, frame by frame, sent the images to the service via API calls. Nevertheless, due to ethical issues we had to rewrite the whole solution and redefine our architecture. So what was the problem? Firstly, we faced some GDPR challenges: we had not been aware that no data could leave the clean room. Secondly, not all the clean room operators knew they were being recorded, and their images would have been used for retraining without their consent, which would definitely have violated their rights. Thus, we exported the model to run locally, adapted our software to process and analyze images fully offline, and created additional software to augment the initial data. This required some extra effort, but in the end we were glad we had respected all the ethics and GDPR rules.
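To give an idea of what the offline setup can look like, here is a minimal sketch that reads the camera feed with OpenCV and scores each frame against a locally exported Custom Vision model in ONNX format, so no image ever leaves the clean room. The model path, input size, label names, and preprocessing details are illustrative assumptions rather than the exact values from the project.

```python
# Minimal sketch of the offline setup: frames are read with OpenCV and scored
# against a locally exported classification model (ONNX format), so no image
# ever leaves the clean room. Paths, labels, and preprocessing are assumptions;
# the exact channel order and normalization depend on the exported model.
import cv2
import numpy as np
import onnxruntime as ort

MODEL_PATH = "model.onnx"                      # hypothetical exported model
LABELS = ["compliant", "non_compliant"]        # assumed label set
INPUT_SIZE = (224, 224)                        # depends on the exported model

session = ort.InferenceSession(MODEL_PATH)
input_name = session.get_inputs()[0].name

cap = cv2.VideoCapture(0)                      # local camera feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocess: resize, BGR -> RGB, NCHW float tensor
    img = cv2.resize(frame, INPUT_SIZE)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32)
    tensor = np.transpose(img, (2, 0, 1))[np.newaxis, :]
    # Run inference fully offline
    outputs = session.run(None, {input_name: tensor})
    scores = np.asarray(outputs[0]).ravel()
    prediction = LABELS[int(np.argmax(scores))]
    if prediction == "non_compliant":
        print("Alert: protective clothing not worn correctly")
cap.release()
```

Custom Vision lets you export trained classifiers in several formats (ONNX, TensorFlow, or a Docker container, among others), which is what made this fully offline architecture possible in the first place.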


Another issue is data. Although the subject of protective clothing identification might seem trivial, it represents a much larger problem. In our case, the mere presence of protective clothing is not sufficient: we should ensure that goggles, masks, and gloves are properly worn and that all contamination sources are eliminated. For instance, if the mask is on the operator's face but the nose is uncovered, or hair is visible, the application should raise an alarm. Beyond the company's internal rules, the absence of such data (in our case, photos with masks perfectly covering the face and photos with masks pulled slightly below the nose) can skew results and even be life-threatening. For example, did you know that your nose is an infection magnet, because the nasal region is particularly susceptible to infection?
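A quick way to catch this kind of gap before training is simply to count how many examples you have per category. Below is a minimal sketch, assuming the images are stored in one folder per class; the folder layout and the 10% threshold are illustrative assumptions, not values from the project.

```python
# Count images per class folder to spot missing or underrepresented categories
# before training. Assumes a layout like training_set/<class_name>/*.jpg.
from collections import Counter
from pathlib import Path

DATASET_DIR = Path("training_set")   # hypothetical dataset folder

counts = Counter()
for class_dir in DATASET_DIR.iterdir():
    if class_dir.is_dir():
        counts[class_dir.name] = sum(1 for _ in class_dir.glob("*.jpg"))

for name, count in counts.most_common():
    print(f"{name:25s} {count}")

# Arbitrary sanity check: flag classes with fewer than 10% of the largest class
if counts and min(counts.values()) < 0.1 * max(counts.values()):
    print("Warning: at least one class is heavily underrepresented")
```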


The knowledge and expertise of your clients and partners help ensure the highest probability of a safe and successful launch of your application. You might not have access to exactly the same resources, but you can do your best to be as ethical as possible with the limited information available to you. In our project, we asked our clients, who were experts in the domain, to provide a full list of requirements. We then created a simple tool to let them easily generate a training set (see the sketch below).
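As a rough illustration, such a tool can be as simple as the following sketch: it shows the live camera feed and saves the current frame into a class folder when the expert presses a key. The class names, key bindings, and output directory are hypothetical, not the ones used in the actual project.

```python
# Minimal labeling helper: display the live feed and save the current frame
# into the folder of the class chosen by a key press. Class names and keys
# are illustrative assumptions.
import os
import time
import cv2

CLASSES = {ord("1"): "mask_correct", ord("2"): "mask_below_nose"}  # assumed classes
OUTPUT_DIR = "training_set"

for name in CLASSES.values():
    os.makedirs(os.path.join(OUTPUT_DIR, name), exist_ok=True)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("Press 1/2 to label the frame, q to quit", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break
    if key in CLASSES:
        # Save the frame into the folder of the chosen class
        path = os.path.join(OUTPUT_DIR, CLASSES[key], f"{int(time.time() * 1000)}.jpg")
        cv2.imwrite(path, frame)
        print(f"Saved {path}")
cap.release()
cv2.destroyAllWindows()
```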


Machine learning problems require rigor and iteration. With each new level of awareness and knowledge we gain from our training set, we learn what other information might be missing, what new questions to ask, and how to prioritize the data to yield a more accurate understanding of the domain.


Quote from MS Documentation:

Analysis that considers only one example of negative factors isn't the kind of data NASA would use when real lives are at risk. More data and subject matter expertise would be required before it should be used for any kind of real decision making.


Ethical and social questions raised by AI


There are many ethical challenges that developers should address as they explore the opportunities created by the partnership between humans and machines. Thus, the main question is what computers should do, not what computers can do.


Some examples of social questions raised by AI:

  • How can we best use AI to assist users and offer people enhanced insights while avoiding exposing them to different types of discrimination in health, housing, law enforcement, and employment?

  • How can we balance the need for efficiency and exploration with fairness and sensitivity to users?

  • As we move toward relying on intelligent agents in our everyday lives, how do we ensure that individuals and communities can trust these systems?

Microsoft has formulated six ethical principles intended to guide the development and use of machine learning with humans at the center:

  • Fairness

  • Reliability & Safety

  • Inclusiveness

  • Privacy and Security

  • Transparency

  • Accountability



The FATE

(Fairness, Accountability, Transparency and Ethics)



To study the complex social implications of AI, machine learning, large-scale experimentation, and automation, Microsoft created the FATE research group. Its main goal is to propose innovative and ethical computational techniques and methodologies. Besides formulating these principles, the group explores the deeper context underlying these issues from different perspectives, such as sociology, history, and science & technology studies. Moreover, their collaborative research projects address the need for transparency, accountability, and fairness in AI and machine learning systems across disciplines including machine learning, information retrieval, sociology, and political science.



Microsoft's AI for Good

Microsoft AI for Good invests in programs that increase access to cloud and AI technologies through grants, education, research, and strategic partnerships.


Accessibility

AI for Accessibility helps people gain independence and achieve more by promoting inclusion through intelligent technology. Seeing AI, for example, employs artificial intelligence to help people with limited vision perceive their environment more clearly.


Humanitarian

A new $40 million, five-year program called AI for Humanitarian Action aims to use AI to help people recover from disasters, meet the needs of children, safeguard refugees and displaced persons, and promote respect for human rights. For example, Operation Smile, a nonprofit partner, uses a face modeling algorithm and Microsoft Pix to improve surgical outcomes and assist more children in need of facial reconstruction.


Earth

AI for Earth helps people and organizations improve the way we monitor, model, and manage Earth's natural systems. Agriculture, water, biodiversity, and climate change are all essential aspects of a sustainable future. Project Premonition, for example, began as an effort to track emerging diseases using AI, drones, and Microsoft cloud services. Needing fast and accurate data, scientists began studying blood from mosquitoes to stay ahead of infectious diseases, and they are now focusing on biodiversity as well.


Seeing AI

Seeing AI is a Microsoft research project that combines the power of the cloud with artificial intelligence to deliver intelligent software that helps you get through your day.



 

In this short article, we've covered the very basics of such a complicated issue as ethics in AI. In the next articles, we will look at concrete solutions and services that will allow you to build a more responsible ML service. Hope this was useful.


References:



