Human Centered Machine Learning: the good and the bad

Last Modified: Jul 10th, 2024 - Category: Artificial Intelligence, Development, UX Research, UX Theory

Welcome to yet another article on UX and Artificial Intelligence. This time we’ll look at what human-centered machine learning is. But first things first…

What is human-centered machine learning?

Human-Centered Machine Learning (HCML), also known as Human-Centered Artificial Intelligence (HCAI or HAI), is an approach to developing and deploying machine learning (ML) systems that prioritizes the needs and well-being of the individuals and communities affected by the technology.

This approach is gaining popularity as influential technology companies and research labs recognize the importance of considering the human context. At a workshop associated with the Conference on Human Factors in Computing Systems in 2016, it was highlighted that HCML should explicitly consider the human aspect in ML model development, redesign machine learning workflows based on situated human work practices, and explore the co-adaptation of humans and systems.

Human Centered Artificial Intelligence (HCAI)

Human-Centered Machine Learning is a subset of Human-Centered Artificial Intelligence (HCAI). HCAI aims to develop AI systems that complement human capabilities rather than replace them. The goal is to give humans more control, ensuring that AI meets user needs, operates transparently, delivers fair results, and respects privacy.

Core Principles of Human-Centered Machine Learning

  1. Collaboration and Co-Creation: A central tenet of HCML is the continuous collaboration and interaction between humans and AI. The principle “humans + AI” emphasizes that combined efforts are more effective than either working alone. By developing new user experiences and visualizations that facilitate human-AI collaboration, HCML provides a robust framework for designing and evaluating models of “human-AI interaction”.
  2. Responsible and User-Friendly AI: This area focuses on how human-centered AI systems can achieve positive and beneficial outcomes for their users, those affected by their operation, and society at large. To achieve these outcomes, AI must be:
    • Fair and Unbiased: Ensuring that AI systems operate without discrimination.
    • Ethical and Safe: Applying AI in a manner that upholds ethical standards and ensures user safety.
    • Responsive to User Needs: Adapting to and meeting the specific requirements of users.

Understanding how people interact with and trust AI systems, and explaining the logic behind AI decisions, is crucial in this area.

  3. Human-Centered Design Process: The success of AI systems depends heavily on the human-centered design process. This involves:
    • Understanding User Needs and Limitations: Designing AI systems that cater to the specific needs and constraints of the users.
    • Creating Easy-to-Use Systems: Ensuring that AI systems are intuitive and straightforward to use.
    • Enhancing Efficiency and Usability: By focusing on the human aspect during design and development, AI systems can become not only more efficient but also more user-friendly.

By placing people at the center of AI system design and development, we can create AI that is both more effective and easier to understand and use.

Examples of HCML applications in real life

HCML and robotics: How is ML governance applied?
HCML applied to robotics in a factory

HCML research focuses on establishing design guidelines and principles for Human-Centered Machine Learning, providing guidance for developing HCML products and services. This research encompasses various frameworks, each with different goals, such as creating intelligent user interfaces, visualization, prototyping, and addressing general Human-Computer Interaction (HCI) concerns.

Some designers and researchers emphasize creating requirements and guidelines for virtual visualization tools that minimize bias in ML. One article highlights guidelines for three key areas of HCML: ethical design, technology that mimics human intelligence, and human factors design. Additionally, there is a growing body of literature explaining what HCML is and how AI systems should understand humans and vice versa.

A review of recent deep learning approaches in human-centered machine learning shows a wide range of studies and applications, including:

  • Analyzing the literature on mental health and AI to understand who this work focuses on and to create guidelines that put people at the center.
  • Classifying human-centered explainable AI in terms of how it prioritizes people.
  • Designing a theory based on a user-centered explainable AI framework and evaluating a tool developed with real clinicians.
  • Exploring ways to develop chatbots that can handle “race talk”.
  • Defining learner-centered AI and identifying what to look for during development.
  • Offering insights for designers and researchers on overcoming challenges in human-AI interaction.
  • Minimizing ML bias in conversational interfaces.
  • Exploring different considerations for ML governance.
  • Designing HCML frameworks for Quantum UX with ML ethics in mind.
  • Studying inclusive machine learning and human-centered artificial intelligence.

Machine Learning Systems and subjects

ML ethics diagram. Credits: https://www.frontiersin.org/articles/10.3389/fsurg.2022.862322/full

The most important part of HCML is the human element. Therefore, it is crucial to empower individuals as entities that interact with AI systems. In HCML, “human” is defined across different ML skill levels, ranging from no ML background to expert ML scientists.

A human in HCML may be involved in various ways at different stages of the ML system development process. The focus can shift depending on the type of user, whether it be the end user, developer, investor, or another stakeholder.

Some HCML models concentrate on a particular user aspect during product or service development, while others elaborate on design principles for specific ML systems to optimize usability and acceptability. The multidimensionality of the human element in HCML adds to the complexity of this domain.
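One concrete pattern that puts a human inside the development loop is active learning, where the model asks a person to label the examples it is least sure about. The sketch below is a minimal, assumption-laden version: ask_human_for_label is a hypothetical placeholder for whatever annotation interface a real system would use.

```python
# Minimal human-in-the-loop sketch: uncertainty sampling.
# ask_human_for_label is a hypothetical placeholder for a real annotation UI.
import numpy as np
from sklearn.linear_model import LogisticRegression

def ask_human_for_label(sample):
    """Placeholder: a real system would show `sample` to a human annotator."""
    raise NotImplementedError("Hook up your labeling interface here.")

def active_learning_round(model, X_labeled, y_labeled, X_pool, budget=10):
    """Train, ask a human about the `budget` most uncertain pool samples,
    and fold their labels back into the training set."""
    model.fit(X_labeled, y_labeled)
    proba = model.predict_proba(X_pool)
    uncertainty = 1.0 - proba.max(axis=1)          # low max-probability = unsure
    query_idx = np.argsort(uncertainty)[-budget:]  # samples the model is least sure about
    new_labels = [ask_human_for_label(X_pool[i]) for i in query_idx]
    X_labeled = np.vstack([X_labeled, X_pool[query_idx]])
    y_labeled = np.concatenate([y_labeled, new_labels])
    X_pool = np.delete(X_pool, query_idx, axis=0)
    return model, X_labeled, y_labeled, X_pool

# Usage (illustrative): model = LogisticRegression(max_iter=1000), then call
# active_learning_round(model, X_labeled, y_labeled, X_pool) once per labeling round.
```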

HCML system application

Research on the user side has addressed general end users and specific groups such as individuals needing assistance, medical professionals, international travelers, Amazon Mechanical Turk workers, drivers, musicians, teachers, students, children, UX designers, UI designers, data analysts, video developers, and game designers.

Some studies aim to understand different user perspectives, from ML engineers to end users. These studies often build on previous work targeting developers, focusing on novice ML engineers to help them develop HCML systems more efficiently. However, much of the research targeting developers centers on ML engineers.

Explainable AI (XAI)

An essential aspect of human-centered ML is Explainable AI (XAI). XAI refers to the ability of an ML system to provide understandable and interpretable explanations for its predictions, decisions, and actions. This is crucial for building trust and understanding between humans and ML systems and for identifying and addressing issues of bias and fairness.

XAI is closely related to conversational interfaces, chatbots, cultural UX, sensorial UX, and user preferences and cultural backgrounds. By ensuring that AI systems can explain their processes, HCML fosters greater transparency and user trust.
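To make this less abstract, here is a minimal sketch of one common form of explanation: global feature attributions computed with scikit-learn’s permutation importance. The dataset and model are illustrative assumptions, not a prescribed HCML setup.

```python
# Minimal XAI sketch: global feature attributions via permutation importance.
# The dataset and model are illustrative; swap in your own pipeline.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does shuffling each feature hurt held-out accuracy?
# A larger drop means the model relies on that feature more.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Attributions like these are only one slice of XAI, but they give users and developers a shared, inspectable answer to “why did the model do that?”.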

Example of Explainable AI used in a medical application

What is ML bias?

An important aspect of human-centered machine learning (HCML) is addressing issues of bias and fairness. ML bias is the tendency of ML systems to perpetuate and even reinforce existing societal biases. These biases can be based on race, gender, culture, religion, socio-economic status, and other factors. Such biases can have serious consequences for individuals and communities that are disproportionately affected.

Addressing bias and fairness in ML requires a combination of technical and social approaches, semantic studies, emotional design, and other techniques. These include using diverse and representative data, ensuring transparency and interpretability in ML models, and maintaining ongoing engagement with communities affected by the technology. This engagement can be conducted manually or, interestingly, using AI technology.
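As a small technical illustration of the “measure before you mitigate” step, the sketch below computes a simple demographic parity gap from model predictions and a sensitive attribute. The arrays and group labels are made up for the example, and a real audit would involve many more metrics and stakeholders.

```python
# Minimal fairness-check sketch: demographic parity difference.
# y_pred and group are made-up arrays; real audits use many metrics and real data.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest gap in positive-prediction rate between any two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = {g: float(y_pred[group == g].mean()) for g in np.unique(group)}
    return max(rates.values()) - min(rates.values()), rates

# Illustrative example with two groups:
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])                  # model decisions
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])  # sensitive attribute
gap, rates = demographic_parity_gap(y_pred, group)
print(rates)  # {'A': 0.75, 'B': 0.25}
print(gap)    # 0.5 -> group A receives positive predictions far more often
```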

ML ethics: an ongoing discussion

As human-centered machine learning systems become more prevalent and influential, it is crucial to consider their ethical implications. ML ethics involves the moral and societal considerations of developing, deploying, and using ML systems. Key issues include privacy, autonomy, accountability, and the distribution of benefits and harms. Addressing these ethical concerns requires ongoing dialogue and engagement among researchers, practitioners, policymakers, stakeholders, the public, and diverse communities.

Respecting and representing cultural differences is essential for HCML models. Failure to do so can lead to biases and resistance to human-centered machine learning and AI in general. This brings us to the important topic of ML Governance.

ML Governance Explained

A proposed model for Machine Learning Governance. Credits: https://mlops.community/the-new-5-step-approach-to-model-governance-for-the-modern-enterprise/

ML governance refers to the practices, policies, and institutions that ensure ML systems are developed, deployed, and used in ways that align with societal values and interests. This encompasses regulation, accountability, transparency, and participation.
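In practice, governance work often begins with documentation that travels with the model. The sketch below shows a lightweight, hypothetical model card written as a Python dataclass; the field names and values are illustrative, loosely inspired by published model-card templates rather than any standard schema.

```python
# Minimal governance sketch: a model card recorded alongside the trained model.
# Field names and values are hypothetical, loosely inspired by model-card templates.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    name: str
    version: str
    intended_use: str
    out_of_scope_uses: list = field(default_factory=list)
    training_data: str = ""
    evaluation_metrics: dict = field(default_factory=dict)
    fairness_notes: str = ""
    owners: list = field(default_factory=list)

card = ModelCard(
    name="loan-approval-assistant",  # hypothetical model
    version="1.2.0",
    intended_use="Assist loan officers; the final decision stays with a human.",
    out_of_scope_uses=["fully automated rejections"],
    training_data="Internal 2018-2023 applications (see the accompanying datasheet).",
    evaluation_metrics={"accuracy": 0.91, "demographic_parity_gap": 0.04},  # illustrative numbers
    fairness_notes="Audited quarterly across protected attributes.",
    owners=["ml-platform-team", "risk-and-compliance"],
)

with open("model_card.json", "w") as f:
    json.dump(asdict(card), f, indent=2)  # the card ships and versions with the model
```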

Effective ML governance is essential to mitigate the potential negative impacts of ML and ensure that the benefits of these technologies are distributed fairly and equitably. Institutions like the Massachusetts Institute of Technology (MIT) and Google Design have recognized the importance of human-centered ML and initiated research projects to explore the co-adaptation of people and systems.

In 2019, the Stanford Institute for Human-Centered Artificial Intelligence was launched with the goal of advancing AI research, education, policy, and practice. The institute focuses on developing AI technologies that are collaborative, complementary, and enhance human productivity and quality of life.

Human Centered Machine Learning: References and additional reading

Of course, this is not even the tip of the iceberg when it comes to Human-Centered Machine Learning (or HCML for short), so I have included a bibliography for readers who want to explore these topics in more depth (which I encourage you to do).

Ahmetovic, D.; Sato, D.; Oh, U.; Ishihara, T.; Kitani, K.; Asakawa, C. ReCog: Supporting Blind People in Recognizing Personal Objects. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25 April 2020.
Banovic, N.; Wang, A.; Jin, Y.; Chang, C.; Ramos, J.; Dey, A.; Mankoff, J. Leveraging Human Routine Models to Detect and Generate Human Behaviors. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017.
Feiz, S.; Billah, S.M.; Ashok, V.; Shilkrot, R.; Ramakrishnan, I. Towards Enabling Blind People to Independently Write on Printed Forms. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019.
Hu, K.; Bakker, M.A.; Li, S.; Kraska, T.; Hidalgo, C. VizML: A Machine Learning Approach to Visualization Recommendation. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019.
Xie, Y.; Chen, M.; Kao, D.; Gao, G.; Chen, X. CheXplain: Enabling Physicians to Explore and Understand Data-Driven, AI-Enabled Medical Imaging Analysis. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25 April 2020.
Lee, K.; Kacorri, H. Hands Holding Clues for Object Recognition in Teachable Machines. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019.
Liebling, D.J.; Lahav, M.; Evans, A.; Donsbach, A.; Holbrook, J.; Smus, B.; Boran, L. Unmet Needs and Opportunities for Mobile Translation AI. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25 April 2020.
Ohn-Bar, E.; Guerreiro, J.A.; Ahmetovic, D.; Kitani, K.M.; Asakawa, C. Modeling Expertise in Assistive Navigation Interfaces for Blind People. In Proceedings of the 23rd International Conference on Intelligent User Interfaces, Tokyo, Japan, 7–11 March 2018.
Wu, S.; Reynolds, L.; Li, X.; Guzmán, F. Design and Evaluation of a Social Media Writing Support Tool for People with Dyslexia. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019.
Yin, M.; Wortman Vaughan, J.; Wallach, H. Understanding the Effect of Accuracy on Trust in Machine Learning Models. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019.
