Note: this is a modified extract of one of the chapters of my new book on Multidimensional User Experience or Quantum UX (QUX). Parts of this article appeared in Spanish in a previous article I wrote at UXpanol
After all, in most cases we are talking about HCI or Human-Computer Interaction.
Human–computer interaction (HCI) studies the design and use of computer technology, focused on the interfaces between people (users) and computers. Researchers in the field of HCI and User Experience observe the ways in which humans interact with computers and design technologies that let humans interact with computers in novel ways. As a field of research, human–computer interaction is situated at the intersection of computer science, behavioural sciences, design, media studies, and several other fields of study. (Source: Wikipedia)
At the same time, it is never fully defined what a user is, when someone counts as a user, or what the scope of the relationship between systems and user interfaces is.
HCI and User Experience: Welcome, Actors
It is important to note that the definition of actor does not only contemplate human beings, but (literally) any biological or cybernetic entity. This is something you’ll understand better once we talk about XCI.
However, these actors are not defined by what they are, but by their roles in the process to be analyzed.
For example, a restaurant customer may have innumerable characteristics (which we certainly won’t ignore). But for the model proposed by Cockburn, this user will be identified by his role.
In the same way, there will be other actors, such as the chef, waiters, cashiers, etc.
For example: a 50-year-old Caucasian man and a 25-year-old African woman are going to be the same in this model, because both are defined by their role (in this case, customer).
The same happens in the case of a food critic or a chef: the important thing will be the roles they play in the process to be analyzed.
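To make this concrete, here is a minimal sketch (in Python, with invented names like `Person` and `actor_role` — this is not Cockburn's notation, just an illustration) of the idea that an actor is identified by role, not by personal attributes:

```python
from dataclasses import dataclass

# Personal attributes exist and can be recorded, but they do not
# define the actor in the use-case model.
@dataclass(frozen=True)
class Person:
    age: int
    ethnicity: str

def actor_role(person: Person, role_in_process: str) -> str:
    # Whatever the person's attributes, the actor is the role
    # they play in the process being analyzed.
    return role_in_process

a = Person(age=50, ethnicity="Caucasian")
b = Person(age=25, ethnicity="African")

# Two very different people collapse into the same actor:
assert actor_role(a, "customer") == actor_role(b, "customer") == "customer"
```

The point of the sketch: the model deliberately throws away everything except the role, which is exactly what makes it simple and, as discussed below, also what makes it limiting.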
The system proposed by Cockburn is widely used in UX, but its strictly systemic conception makes it very difficult to explain dialectically the different users and their roles.
Also, in UML roles are never associated with one another, although this rule is often violated through overlapping. It is clear that a system where violating the rules is the norm is neither consistent nor precise.
UX is not a science fiction movie. But it is science!
So we have seen that one of the most common methods has problems related to naming imprecision. This can pose very serious problems when defining systems, people, user flows, etc.
Let’s take a simple example:
- I create a website for a client (user).
- The client will dedicate human resources to manage it (users).
- These resources will be divided into administration (users) and content creation (users).
- And of course, the website resulting from this synergy will be experienced by … users
My role then will be to ensure that all users participating in the process at their different stages and with their different roles can have the best possible user experience.
This simple example, translated to UML, is extremely complex, and almost all roles can fall on the same actor.
For example, let’s say the customer gets a service to create websites, such as WordPress.com, Wix, etc. Then she decides to create content. And of course view the content on the website to see what it looks like.
In other words: three roles in a single actor, in one of the most common processes we can think of (having a website).
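A tiny sketch of that accumulation of roles (the actor name and role labels are invented for illustration):

```python
# One actor accumulates several roles in the same site-builder process
# (contracting the service, creating content, visiting the result).
roles_by_actor: dict[str, set[str]] = {}

def assign_role(actor: str, role: str) -> None:
    # The same actor can hold any number of roles.
    roles_by_actor.setdefault(actor, set()).add(role)

assign_role("maria", "site owner")       # she contracts WordPress.com or Wix
assign_role("maria", "content creator")  # she writes the content
assign_role("maria", "visitor")          # she views the published site

assert roles_by_actor["maria"] == {"site owner", "content creator", "visitor"}
```

In code this is trivial; in a UML use case diagram, mapping three roles onto one actor is exactly where the overlapping (and the rule-breaking) begins.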
In the image above we see how the Cockburn model requires us to violate the very rules of UML through overlapping.
And still, it is not clear who the users are and where they are involved in the process, at least dialectically.
What does this mean? It means the UML model can work very well as long as we have the visual help of its diagrams (because at the end of the day, that’s exactly what it’s all about).
But in this way, everything is part of a system and a user exists only as a participant in said system. Or, in other words: a cog in a machine. And absolutely nothing more than that.
Proposing a more accurate user identification in HCI
What follows is not the “correct” way (I hope it is not “wrong”!) of identifying our users.
Instead, it’s a proposal that in my opinion simplifies the identification of users based not on their role, but on their place of participation in the process.
Basically, we take interested parties or stakeholders, and we add synergy with artificial systems and entities.
In other words: the same proposal as Cockburn's, except that instead of differentiating by roles, we differentiate by membership groups within the process. Let's look at the following diagram:
Explanation: As we saw, the roles of a user can be complementary, supplementary, overlapping or totally different.
In the same way, each user can have different or similar goals, and the way to reach that goal can be the same or different as well.
Thus, in this abstraction of a use case model, yellow and violet have the same (or similar) role, but different goals. And violet and orange have different roles, but similar goals.
The roles and goals for each of the users will define the form of the processes, and the entities involved in it, which may be human or cybernetic. Once the process is defined by the sum of the intervening parts, effective action is reached for the user’s goal.
But this may not end here: there is a post-process instance that feeds back both entities and users.
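The role/goal distinction in the diagram can be sketched in a few lines. The color names stand in for the diagram's users, and the role and goal values are invented:

```python
from dataclasses import dataclass

# A user in this proposal is identified by role AND goal,
# not by role alone.
@dataclass(frozen=True)
class User:
    name: str   # stands in for the diagram's colors
    role: str
    goal: str

yellow = User("yellow", role="creator", goal="publish")
violet = User("violet", role="creator", goal="promote")
orange = User("orange", role="consumer", goal="promote")

# Same role, different goals (yellow vs. violet):
assert yellow.role == violet.role and yellow.goal != violet.goal
# Different roles, similar goals (violet vs. orange):
assert violet.role != orange.role and violet.goal == orange.goal
```

Grouping by role collapses yellow and violet; grouping by goal collapses violet and orange. Which grouping matters depends on the process being analyzed, which is the point of differentiating by membership groups.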
Example of roles and processes in HCI
- Role: Consumer
- Goal: I need to buy a phone.
- Process: Buy online
- Entities: e-commerce system, product recommendation algorithm, sales person (there are many more, this is a simplification)
- Effective Action: Purchase
- Post-Process: My effective action is referred to a post-sale or Customer Experience (CX) service and to the recommendation algorithm, which learns from my behavior and can suggest products based on my interests, feeding back (or even creating) my need to acquire a related product. This generates a loop in which the whole process starts again from the beginning.
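The steps above, including the feedback loop, can be sketched as a small pipeline. Everything here (function names, the "accessory" recommendation) is an invented simplification, not a real recommender:

```python
# role → goal → process → entities → effective action → post-process,
# where the post-process feeds a new need back into the next cycle.
def run_process(goal, entities, interests):
    # Entities (e-commerce system, recommender, salesperson) would
    # collaborate here; we record the behavior and serve the goal.
    interests.append(goal)
    return f"purchased: {goal}"

def post_process(interests):
    # The recommendation algorithm learns from behavior and may
    # create a new, related need.
    return f"accessory for {interests[-1]}" if interests else None

interests = []
action = run_process("phone", ["shop", "recommender", "salesperson"], interests)
new_need = post_process(interests)   # this feeds back into a new cycle

assert action == "purchased: phone"
assert new_need == "accessory for phone"
```

Feeding `new_need` back in as the next goal is exactly the loop described above: the post-process is not an endpoint but the start of the next iteration.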
And users? Sure, there are users too. The seller, the customer service agent, even users like me: we are all users of the system who affect it and feed it back. Each will perceive a user experience that may or may not coincide with mine, complement it, contradict it, or relate to it in any other way you can think of.
The important thing about this diagram and this long introduction (my apologies!) is to recognize not only the interactions, but also the plane from which those interactions are born, and the entities that interact to generate, modify, enrich or disturb said human-computer interactions (HCI).
Welcome QUX, we needed you!
Let’s be honest, if you got this far it is because you want to know how the movie ends and if the robots finally win.
Spoiler Alert: YES, in the same way as humans.
Just for simplicity's sake, let's not go further into QUX and focus only on XCI.
In other words: we leave aside the limited conception of Human Computer Interaction in favor of X-Computer Interaction, where X defines an entity of any type: biological, cybernetic or mechanical.
Furthermore, XCI contemplates the user experiences that occur even when the person or entity does not “live” that experience.
For example, a biological entity is not only an adult human capable of answering a questionnaire: it is also a baby, a toddler, a person with a neurological or psychological disability. It can even be an animal! (And if anyone has any doubts, ask cat or dog owners whether their pets don't prefer certain objects or experiences that satisfy them more than others.)
XCI is a superior state of HCI
In a way, it’s like Cockburn’s UML models – the only way they work is by breaking the rules.
The problem is that resistance to change and the maintenance of a certain "status quo" mean that new advances and conceptions in UXD are strongly resisted, even when the commonly used models prove to be anachronistic and clearly insufficient.
In an era when robotics, machine learning and IoT are so common, it is a big challenge for UX designers to consider cyber entities and their place in processes. A challenge that meets a lot of resistance.
To my great surprise, in different talks, conferences, books, etc., I find that leaving the comfort zone of Human Factors and HCI is something highly resisted by UX professionals. Sometimes only in dialectical terms. Other times, sadly, completely ignoring these entities.
I could just expose them to the Turing Test and end any controversy (or, in the case of some very stubborn people, exacerbate it).
But the reality is that in many cases, these entities would not pass the Turing Test, and still we are obliged to consider them as users.
Practical example of XCI: user experience for robots
Extra-planetary exploration is done with robots, due to the technical and practical impossibility of exploration by humans. What we see above these lines is one of those robots, the Sojourner, used for exploration on Mars.
I don’t think I need to go too far into design considerations for our robotic user (for example: why isn’t it anthropomorphic?).
Is Mars too far away for you? Well, ask any SEO specialist if they don’t constantly work to please a machine, robot or spider.
Or ask yourself, what advertisements and services will you see after reading this note, and who (or WHAT) manipulates all your tastes, preferences, fashion style, opinions, etc.
Nowadays, a cybernetic entity will tell me which keyword to use to better reach a goal, which part of the page a user liked (or disliked), which color to use, which wording, or which design works better. And in return, I’ll try to please those cybernetic entities in a mutual synergy. So, is this HCI or XCI? I think the answer is more than evident.
Just in case someone didn't notice: it's called User Experience Research. I'm talking about measuring user experience results, or how actors interact with a UI. And for that, I use tools like Google Analytics, Hotjar, Ahrefs and many more tools based on AI (artificial intelligence).
QUX: accurate design for custom, personalized experiences
The best part is that the concept of XCI (among other QUX concepts) is far more accurate, faster and more affordable than any existing UXD framework. We can design experiences of any level of complexity and test them immediately. And not only complexity: we can design millions of experiences at the touch of a button, test them in a specially designed UI, and research with users even without any human intervention.
Long gone are the days of "5 testers are enough". We can test with thousands, or even millions, anywhere in the world, at any time, considering language, culture, age, the activities they engage in, affinity groups…
So yes, we can design these experiences under the good old HCI premises we all embraced at some point. Or we can embrace XCI, let a non-human entity analyze zillions of megabytes of info, create statistical analysis and design a perfect multidimensional experience for each and every user.
And when I say "design for each and every user", it is not a dialectical error: I mean it is possible to create millions of different custom designs, one for each person, using QUX and XCI.
How to design an XCI experience
How about this: let's say I design a QUX system to sell fine wines. I define some parameters based on available research. I don't even need to create UX personas (although it would be a good idea to do so): I can replace personas based on limited research with statistical analysis of thousands of real users.
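As a hedged illustration of what "personas replaced by statistics" could mean in practice (the segment names and spend figures below are entirely made up), segment profiles can be derived directly from behavioral data instead of from a handful of interviews:

```python
from collections import defaultdict
from statistics import mean

# Simulated behavioral events from many users of the wine shop.
events = [
    {"segment": "collector", "spend": 120},
    {"segment": "collector", "spend": 180},
    {"segment": "casual", "spend": 25},
    {"segment": "casual", "spend": 35},
]

# Group events by segment and summarize each one.
by_segment = defaultdict(list)
for e in events:
    by_segment[e["segment"]].append(e["spend"])

# Each profile is a statistical stand-in for a hand-crafted persona.
profiles = {segment: mean(spends) for segment, spends in by_segment.items()}

assert profiles == {"collector": 150, "casual": 30}
```

With real data there would be many features and a proper clustering step, but the principle is the same: the "persona" emerges from thousands of observed users rather than from a few assumed ones.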
So now I design a UI (user interface) for this project. Note that we are still in the HCI dimension. However, I'll design it slightly differently: instead of a rigid "this element goes here, that element goes there" approach, I design it in a fluid, multidimensional way. This means every element of the UI might or might not be there according to certain conditions. Furthermore, elements don't even need to be in the same place, or have the same color or wording.
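One possible sketch of such a fluid UI spec (element names, conditions and colors are invented; this is not a real framework API): every element carries its own visibility condition and its own rules for presentation.

```python
# Each UI element is conditional, and its presentation varies per context.
UI_SPEC = [
    {"element": "hero_banner",
     "show_if": lambda ctx: not ctx["returning"],
     "color": lambda ctx: "burgundy" if ctx["likes_red_wine"] else "gold"},
    {"element": "loyalty_offer",
     "show_if": lambda ctx: ctx["returning"],
     "color": lambda ctx: "gold"},
]

def render(ctx):
    # Only elements whose condition holds appear, each styled per context.
    return {e["element"]: e["color"](ctx) for e in UI_SPEC if e["show_if"](ctx)}

first_visit = render({"returning": False, "likes_red_wine": True})
assert first_visit == {"hero_banner": "burgundy"}

repeat_visit = render({"returning": True, "likes_red_wine": False})
assert repeat_visit == {"loyalty_offer": "gold"}
```

Two contexts, two different interfaces from a single spec: the layout is a function of the user, not a fixed artifact.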
Now comes the fun part. I move completely away from HCI and go deep into XCI: once I start to get data from my campaign, I design an AI entity that serves the different UIs mentioned above. And not only that: this AI entity may connect to smart home devices or DOOH (digital out-of-home) ads. It could send an alert to your phone once you're close to a shop selling our fine wines. And this AI will not only analyze your interactions and answers to all these stimuli; it will also gather information about other products it can sell to you.
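A minimal sketch of such an AI entity choosing among UI variants (an epsilon-greedy bandit; variant names and the simulated interactions are invented, and a production system would be far richer):

```python
import random

class UIVariantSelector:
    """Learns which UI variant converts best from interaction data."""

    def __init__(self, variants, epsilon=0.1, seed=42):
        self.rng = random.Random(seed)
        self.epsilon = epsilon
        self.stats = {v: {"shows": 0, "wins": 0} for v in variants}

    def _rate(self, v):
        s = self.stats[v]
        return s["wins"] / s["shows"] if s["shows"] else 0.0

    def choose(self):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(self.stats))         # explore
        return max(self.stats, key=self._rate)               # exploit

    def record(self, variant, converted):
        # Every interaction feeds the entity back, closing the loop.
        self.stats[variant]["shows"] += 1
        self.stats[variant]["wins"] += int(converted)

selector = UIVariantSelector(["minimal", "rich_media", "text_heavy"])
selector.record("rich_media", True)   # simulated interaction data
selector.record("minimal", False)

# With data in hand, exploitation favors the best-performing variant.
assert selector._rate("rich_media") == 1.0
```

The same loop generalizes beyond web UIs: the "variant" could just as well be a smart-home notification or an out-of-home ad, which is what moves this from HCI into XCI territory.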
This is not science-fiction. This is science.
News flash: the future is already old.
Where does XCI stand in relation to other UX methodologies?
Finally, do you remember the image above where I showed a UML diagram for a common HCI process? Well, that diagram is very simple (it could also be more complex) because all the processes described in that diagram are sequential, linear and two-dimensional. This is one of the reasons why in UX we use frameworks like Design Thinking.
This is the design for a Design Thinking methodology:
As you can see, it shares something with the model previously mentioned: it's linear, sequential and two-dimensional. There's overlapping too, but Design Thinking basically requires that one step happens, then another, then another, and so on, in a never-ending process (hopefully!).
Well, XCI is very different. First, it is not a methodology or framework, but a small part of Quantum UX (QUX), a concept that contains different methodologies. Or even the idea of methods, techniques and methodologies that do not yet exist!
And if I were to represent a QUX design process on a diagram, it would be very different. First of all, it has to be three-dimensional. In fact, a diagram made under a QUX conceptualization would be multiform or directly amorphous. And a perfect QUX diagram would be infinite.
Anecdotally, this conceptualization is why our front page design has morphing shapes, and why our logo forms the letters U and X with morphing shapes. It is also why you may not find the same content every time you visit. And this is just a very simple use of QUX, just a simple interface.
QUX and XCI conclusion
Cyber entities must be part of our consideration of users in order to design more complete and complex QUX experiences. And XCI is the only valid way to create such experiences.