Welcome to yet another article on UX and Artificial Intelligence. This time we’ll look at human-centered machine learning. But first things first…
What is human-centered machine learning?
Human-Centered Machine Learning (HCML), also referred to as Human-Centered Artificial Intelligence (HCAI or HAI), is an approach to developing and deploying machine learning (ML) systems that focuses on the needs and well-being of the individuals and communities affected by the technology.
This approach is growing in popularity as influential technology companies and research labs have considered the human context. In a workshop associated with the Conference on Human Factors in Computing Systems in 2016, it was stated that HCML should explicitly consider the human aspect in the development of ML models, redesign machine learning workflows based on situated human work practices, and explore the co-adaptation of humans and systems.
Human Centered Artificial Intelligence (HCAI)
Human-Centered Machine Learning is a subset of Human-Centered Artificial Intelligence.
Human-Centered AI (HCAI) is an emerging discipline that aims to develop AI systems that complement, rather than replace, human capabilities. HCAI aims to give humans more control so that AI meets user needs while operating transparently, delivering fair results, and respecting privacy.
One of the main axes of human-centered machine learning is the constant collaboration and interaction between humans and AI, and the subsequent co-creation of solutions. The core idea here is that “humans + AI” is better than each on its own. By developing new user experiences and visualizations that encourage collaboration between humans and AI, HCML can provide a robust framework for designing or evaluating models of “human-AI interaction”.
Another important area is responsible and user-friendly AI. This area encompasses all aspects of how human-centered AI systems can achieve positive and beneficial outcomes for their immediate users, as well as for those affected by their operation and for society at large. To achieve these outcomes, AI must be:
- fair and unbiased
- applied ethically and safely
- responsive to user needs
This includes understanding how people interact with and trust AI systems, and explaining the logic behind AI decisions.
Finally, the human-centered design process is critical to the success of all AI systems. This involves understanding the needs and limitations of the people who will use the AI system and designing the system to meet those needs in a way that is easy to understand and use. By putting people at the center of the design and development of AI systems, we can ensure that they are not only more efficient, but also easier to use and understand.
Examples of HCML applications in real life
HCML research, then, comprises studies that establish design guidelines and principles for HCML or provide guidance for developing HCML products and services. These works come from different frameworks, each with different goals: for example, guidelines for creating intelligent user interfaces, visualization, prototyping, and general Human-Computer Interaction (HCI) concerns.
Some designers and researchers have focused on creating requirements and guidelines for visualization tools that avoid (or attempt to minimize) ML bias. One article highlights guidelines for three areas of HCML: ethical design, technology that mimics human intelligence, and human-factors design. Similarly, there is an ever-growing number of articles explaining what HCML is and how AI systems should understand humans and vice versa.
A review of recent deep learning approaches in human-centered machine learning shows us all kinds of studies and applications, including:
- Analyzing the literature on mental health and AI to understand who this work focuses on and to create guidelines that put people at the center.
- Classifying human-centered explainable AI in terms of how it prioritizes people.
- Designing a theory based on a user-centered explainable AI framework and evaluating a tool developed with real clinicians.
- Exploring ways to develop chatbots that can handle “race talk”.
- Defining learner-centered AI and identifying what to look for in development.
- Offering insights for designers and researchers on overcoming challenges in human-AI interaction.
- Minimizing ML bias in conversational interfaces.
- Exploring different considerations for ML governance.
- Designing HCML frameworks for Quantum UX that consider ML ethics.
- Advancing inclusive machine learning and human-centered artificial intelligence.
Machine Learning Systems and subjects
The most important part of HCML is the human, so it is important to empower people both in their own right and as entities that interact with the AI system. The “human” in HCML spans a range of ML skill levels, from no ML background to expert ML scientist.
A human in HCML may also be involved in different ways at different stages of the ML system development process. For example, the focus may be on different types of users, whether the end user, the developer, the investor, or some other type of entity or stakeholder.
Some HCML models may focus on a particular user aspect in the development of a product or service, while another model may elaborate on the design principles for a particular ML system to optimize usability and acceptability. The multidimensionality of what is considered human in the context of HCML adds to the complexity of this domain.
HCML system application
Research on the user side has looked at general end users or consumers, while others have catered to specific end users such as people who need help, medical professionals, international travelers, Amazon Mechanical Turk workers, drivers, musicians, teachers, students, children, UX designers, UI designers, data analysts, video developers, and game designers.
Some studies have attempted to understand different user perspectives, from ML engineers to end users. These build on previous work targeting developers, with the human focus on novice ML engineers to help them develop HCML systems faster. However, it is important to note that most work targeting the developer side focuses on ML engineers.
Explainable AI (XAI)
An important aspect of human-centered ML is Explainable AI (XAI). Explainable AI is the part of human-centered AI that refers to the ability of an ML system to provide understandable and interpretable explanations for its predictions, decisions, and actions. XAI is critical for building trust and understanding between humans and ML systems, as well as for identifying and addressing issues of bias and fairness.
This aspect is closely related to conversational interfaces, chatbots, cultural UX, sensorial UX, and anything related to user preferences and cultural background.
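To make the idea concrete, here is a minimal sketch of one of the simplest XAI techniques: additive per-feature attribution for a linear model. The weights, feature names, and input values are invented for illustration; real XAI tools such as LIME or SHAP generalize this idea to non-linear models.

```python
# Minimal XAI sketch: per-feature attribution for a linear model.
# All weights, names, and inputs below are invented for illustration.

def explain_linear(weights, bias, x, feature_names):
    """Return the model score and each feature's additive contribution."""
    contributions = {name: w * xi
                     for name, w, xi in zip(feature_names, weights, x)}
    score = bias + sum(contributions.values())
    return score, contributions

weights = [0.8, -0.5, 0.1]    # assumed model coefficients
bias = 0.2
x = [1.0, 2.0, 3.0]           # one input we want to explain
names = ["income", "debt", "age"]

score, contribs = explain_linear(weights, bias, x, names)
# contribs now shows, in the model's own units, how much each
# feature pushed the score up or down for this particular input.
print(score, contribs)
```

An explanation like this ("your debt lowered the score by 1.0") is exactly the kind of understandable, interpretable output XAI asks for, and it also makes bias easier to spot, since a suspicious feature's contribution is visible.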
What is ML bias?
Another important aspect of human-centered machine learning is addressing issues of bias and fairness. ML bias is the tendency of ML systems to perpetuate and even reinforce existing societal biases, based on, for example, race, gender, culture, religion, or socio-economic status. This can have serious consequences for the individuals and communities that are disproportionately affected.
Addressing bias and fairness in ML requires a combination of technical and social approaches, semantic studies, emotional design, and other techniques. These include the use of diverse and representative data, transparency and interpretability in ML models, and ongoing engagement with communities affected by the technology. Of course, this engagement can be done manually in a presence-oriented way, or, most interestingly, using AI technology!
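As one concrete example of a technical approach, fairness audits often start by comparing a model's positive-prediction rate across demographic groups (a check known as demographic parity). The predictions and group labels below are invented for illustration; real audits use real data and additional metrics such as equalized odds and calibration.

```python
# Sketch of a demographic-parity check, a common ML-fairness audit.
# Predictions and group labels are invented for illustration.

def positive_rate(preds, groups, group):
    """Fraction of positive predictions within one demographic group."""
    selected = [p for p, g in zip(preds, groups) if g == group]
    return sum(selected) / len(selected)

preds  = [1, 0, 1, 1, 0, 0, 1, 0]                  # model outputs (1 = approved)
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]  # protected attribute

rate_a = positive_rate(preds, groups, "A")  # 0.75
rate_b = positive_rate(preds, groups, "B")  # 0.25
parity_gap = abs(rate_a - rate_b)           # 0.5: a large gap worth investigating
print(parity_gap)
```

A gap this large does not prove the model is unfair on its own, but it flags exactly the kind of disparity that the diverse-data, transparency, and community-engagement practices described above are meant to surface and correct.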
ML ethics: an ongoing discussion
As human centered machine learning systems become more prevalent and influential in society, it is important to consider the ethical implications of these technologies. ML ethics refers to the moral and societal implications of developing, deploying, and using ML systems. These include issues such as privacy, autonomy, accountability, and the distribution of benefits and harms. Addressing these ethical issues requires ongoing dialog and engagement among researchers, practitioners, policy makers, stakeholders, the public, and diverse communities at large.
Cultural differences must be respected and represented, and this must be equally considered by HCML models. Otherwise, different biases will lead to growing resistance to human-centered machine learning and AI in general. Which leads us to the following point: ML Governance.
ML Governance Explained
ML governance refers to the practices, policies, and institutions that ensure ML systems are developed, deployed, and used in ways that are consistent with society’s values and interests. This includes aspects such as regulation, accountability, transparency and participation.
ML governance is important to address the potential negative impacts of ML and to ensure that the benefits of these technologies are distributed fairly and equitably. The Massachusetts Institute of Technology (MIT) and Google Design have also recognized the importance of human-centered ML and have initiated research projects to explore the co-adaptation of people and systems.
In 2019, the Stanford Institute for Human-Centered Artificial Intelligence was launched with the goal of improving AI research, education, policy, and practice and focusing on developing AI technologies. These technologies and applications are meant to be collaborative, complementary, and improve human productivity and quality of life.
Human Centered Machine Learning: References and additional reading
Of course, this is not even the tip of the iceberg when it comes to Human-Centered Machine Learning (or HCML for short). So I include a bibliography for those readers who want to explore these topics in more depth (which I encourage them to do).
Ahmetovic, D.; Sato, D.; Oh, U.; Ishihara, T.; Kitani, K.; Asakawa, C. ReCog: Supporting Blind People in Recognizing Personal Objects. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI’20, Honolulu, HI, USA, 25 April 2020; Association for Computing Machinery: New York, NY, USA, 2020.
Banovic, N.; Wang, A.; Jin, Y.; Chang, C.; Ramos, J.; Dey, A.; Mankoff, J. Leveraging Human Routine Models to Detect and Generate Human Behaviors. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017.
Feiz, S.; Billah, S.M.; Ashok, V.; Shilkrot, R.; Ramakrishnan, I. Towards Enabling Blind People to Independently Write on Printed Forms. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019.
Hu, K.; Bakker, M.A.; Li, S.; Kraska, T.; Hidalgo, C. VizML: A Machine Learning Approach to Visualization Recommendation. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019.
Xie, Y.; Chen, M.; Kao, D.; Gao, G.; Chen, X. CheXplain: Enabling Physicians to Explore and Understand Data-Driven, AI-Enabled Medical Imaging Analysis. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI’20, Honolulu, HI, USA, 25 April 2020; Association for Computing Machinery: New York, NY, USA, 2020.
Lee, K.; Kacorri, H. Hands Holding Clues for Object Recognition in Teachable Machines. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019.
Liebling, D.J.; Lahav, M.; Evans, A.; Donsbach, A.; Holbrook, J.; Smus, B.; Boran, L. Unmet Needs and Opportunities for Mobile Translation AI. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI’20, Honolulu, HI, USA, 25 April 2020; Association for Computing Machinery: New York, NY, USA, 2020.
Ohn-Bar, E.; Guerreiro, J.A.; Ahmetovic, D.; Kitani, K.M.; Asakawa, C. Modeling Expertise in Assistive Navigation Interfaces for Blind People. In Proceedings of the 23rd International Conference on Intelligent User Interfaces, IUI’18, Tokyo, Japan, 7–11 March 2018.
Wu, S.; Reynolds, L.; Li, X.; Guzmán, F. Design and Evaluation of a Social Media Writing Support Tool for People with Dyslexia. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019.
Yin, M.; Wortman Vaughan, J.; Wallach, H. Understanding the Effect of Accuracy on Trust in Machine Learning Models. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI’19, Glasgow, UK, 4–9 May 2019.