Artificial Intelligence and UX: a 2,000-word Quantum UX practical example

Added on July 1, 2021 - Category: Essays, Theory, UX Research, UX Theory
Artificial Intelligence and UX

Artificial intelligence and UX go hand in hand. After my posts about Quantum UX and XMI, many people contacted us asking about practical uses of Quantum UX (it’s as simple as checking your Social Media feed! 😉 ) or for some kind of tutorial about XMI and XCI.

But to make it more interesting, we created a simple experiment: a few months after we wrote that article, we researched all the keywords used to find the page on search engines, then all the terms people used to talk about it on social media and in articles, and finally all the content from our own site.

With all this data, we created the following text using AI (Artificial Intelligence). None of us wrote a single word of the following text; it was created solely from user intent!

A QUX practical example in this article!

For the purpose of this example, we used two AI engines (OpenAI and a proprietary one built in Python). The whole process took 1’22” in OpenAI and 13 seconds in our engine, for a total of 1 minute and 35 seconds.

My only intervention in this post was to style it and add links and paragraph breaks for legibility. Everything else was produced by Artificial Intelligence from the input of our readers, which, in other words, is the essence and very definition of Quantum UX.
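To make the process above concrete, here is a minimal sketch of what such a pipeline could look like. It stands in for the real engines with a toy word-level bigram generator; all the sample inputs and function names are hypothetical illustrations, not the actual proprietary system.

```python
import random
from collections import defaultdict

# Hypothetical inputs standing in for the three real data sources:
# search keywords, social media terms, and our own site content.
search_keywords = ["quantum ux practical example", "xci ux research"]
social_terms = ["quantum ux studies multiple experiences at once"]
site_content = ["quantum ux requires multiple actors interacting in a system"]

def build_bigrams(sources):
    """Aggregate all sources into a word-level bigram table."""
    table = defaultdict(list)
    for text in sources:
        words = text.split()
        for a, b in zip(words, words[1:]):
            table[a].append(b)
    return table

def generate(table, start, length=8, seed=42):
    """Walk the bigram table to produce new text from the aggregated data."""
    rng = random.Random(seed)  # seeded for reproducible output
    word, out = start, [start]
    for _ in range(length - 1):
        options = table.get(word)
        if not options:
            break
        word = rng.choice(options)
        out.append(word)
    return " ".join(out)

table = build_bigrams(search_keywords + social_terms + site_content)
print(generate(table, "quantum"))
```

A real engine would of course use a far larger corpus and a language model rather than bigrams, but the shape of the pipeline (aggregate user-intent data, train, generate) is the same.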

Without further ado, let’s see how an XCI entity interacted with what I think and what other people thought about what I wrote!

The Quantum UX practical example

XCI versions and aspects

XCI comes in different versions and aspects.

In order to better understand the concept, let’s look at an example based on different types of actors.

Let’s say we have a car that can be driven by a human or an animal (we will not discuss the possibility of machines driving cars).

In this case, we can clearly distinguish between two types of actors: a human (or animal) and the car itself.

And it is possible to create two versions of XCI depending on who we consider the actor.

In other words, if we consider the driver as an actor, then we have Driver XCI; while if we consider the car itself as an actor, then we have Vehicle XCI.

This distinction is crucial because it defines how QUX will be applied to UX research and design in each case.

For example, let’s consider one scenario in which a driver is driving a car: when he drives too fast or not fast enough; when he turns too sharply or not sharply enough; when he uses his seat belt or does not use it; when he takes a curve at full speed or fails to reduce his speed… And the list goes on and on.

Similarly, if we consider the vehicle as an actor (and thus Vehicle XCI), then there are many things that could happen to it: a tire may blow out; its brakes may fail; its engine may overheat…

Again, a very long list of possibilities.

How does QUX methodology work here?

In both cases there are many variations that can occur.

But they are still quite different from each other and QUX will not provide good results in either situation since it only works with what happens to multiple actors at once.

As you can see from these examples taken from everyday life, QUX is impossible without multiple actors interacting simultaneously within a system.

Artificial Intelligence and UX: QUX requires multiple actors
QUX requires multiple actors and environment interaction

The fact that QUX studies multiple experiences at once reveals another limitation of HCI: when applied to UX design research, it does not allow for multidimensional data analysis (something that occurs very often in UX design due to various factors). We should never forget that QUX is about multiform data analysis! For this reason alone it has no equivalent in HCD or in other UX specializations based on UML or on paradigms such as BPR (Business Process Reengineering), DPR (Decision Process Reengineering), CPA, etc., which, despite their rigid structure, only focus on one dimension at a time.

Another simple XMI example

Let us take another simple example from everyday life: While surfing on your smartphone you meet someone interested in going out together tonight.

He suggests going out for drinks and dinner first.

Now you have two options: you can go out drinking with him for some time before heading off somewhere for dinner later (or vice versa), or you can go straight to dinner without having drinks first (or vice versa).

It doesn’t matter which option you choose; what matters is that there are two choices available—and these possibilities exist independently of whether they are first-order options or second-order options.

That is: whether they are direct options within your own expectations of what you want to do tonight, or indirect options deriving from what your interlocutor wants you to do tonight.

UX multidimensionality always involves more than one dimension; in our example above there are at least two dimensions.

Therefore, whenever this happens:

  • there is more than one dimension involved in our experience within a system
  • there are multiple actors interacting simultaneously with each other (and even with themselves)

We must use QUX instead of HCD-based UX research tools like UAT / UCD / etc., which apply only one type of data analysis per situation (one dimensional per dimension).
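One way to picture what "multiple actors interacting simultaneously" could mean as data is the sketch below. The event records and the actor/dimension names are hypothetical illustrations of the car scenario above, not an actual QUX tool:

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Event:
    actor: str       # e.g. "driver" (Driver XCI) or "vehicle" (Vehicle XCI)
    dimension: str   # which experience dimension the observation belongs to
    value: str       # what was observed
    t: int           # timestamp of the observation

# Hypothetical observations for the car scenario above.
events = [
    Event("driver", "speed", "too fast", t=1),
    Event("vehicle", "brakes", "failing", t=1),
    Event("driver", "seat belt", "not worn", t=2),
]

def simultaneous_multi_actor(events):
    """Return timestamps where more than one actor acts at once --
    the situations the text says QUX requires."""
    by_time = defaultdict(set)
    for e in events:
        by_time[e.t].add(e.actor)
    return sorted(t for t, actors in by_time.items() if len(actors) > 1)

print(simultaneous_multi_actor(events))  # → [1]: both actors act at t=1
```

A single-actor tool would analyze the driver's events and the vehicle's events separately; the grouping by timestamp is what exposes the multi-actor moments.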

The role of XCI in UX design processes

The role of XCI in UX design processes cannot be overstated. It is the core component of UX design processes and plays a key role at all stages (i.e., concept, analysis and design) as well as in the operation of systems.

The same applies to UX research, since it is one of the elements that can be used for defining the problem (if we consider the consumer as a separate actor) and to collect information about users’ expectations (if we consider the system as an actor).

But what happens when we use QUX in a solution-focused approach? This will be addressed later on.

Evaluative methods

Let’s now take a look at evaluative methods with respect to QUX. Evaluative methods include those that are applied to the results of UX design processes.

In other words, evaluative methods help us evaluate each process stage, which is particularly important since they provide useful data about what went well in each stage and what did not go so well (so that we can improve its performance), as well as whether or not each stage was successful.

The following list shows these different levels within UX design processes:

  • Concept Stage
    • Definition of requirements
    • Objectives
    • User profile
    • Needs
    • Functions
    • Physical aspects
    • Competences
  • Analysis Stage
    • Evaluation Criteria
  • Design Stage
    • Evaluation Criteria
  • Operation Stage
    • Evaluation Criteria
  • System Evaluation

Quantum UX Evaluative Methods

Here are some examples of evaluative methods for each stage:

For Concept evaluation it would be, among others:

  • Diagrammatic Evaluation
  • Assessment against objectives
  • Assessment against user profile
  • Assessment against needs
  • Assessment against functions
  • Assessment against physical aspects
  • Assessment against competences

For Analysis evaluation it would be:

  • Diagrammatic Evaluation
  • Diagrammatic Hierarchy
  • Analytic Hierarchy Matrix Tree
  • Self-assessment

For Design evaluation it would be:

  • Diagrammatic Evaluation
  • Diagrammatic Hierarchy
  • Analytic Hierarchy Matrix Tree
  • Self-assessment

For Operation evaluation it would be:

  • Diagrammatic Evaluation
  • Diagrammatic Hierarchy
  • Analytic Hierarchy Matrix Tree
  • Self-assessment

For System evaluation it would be:

  • Diagrammatic Evaluation
  • Diagrammatic Hierarchy
  • Analytic Hierarchy Matrix Tree
  • Self-assessment
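The stage-to-methods mapping above can be written down as a simple lookup table. This is only a sketch that codifies the lists just given; the stage and method names come straight from those lists:

```python
# The four methods shared by the Analysis, Design, Operation and System stages.
COMMON = ["Diagrammatic Evaluation", "Diagrammatic Hierarchy",
          "Analytic Hierarchy Matrix Tree", "Self-assessment"]

METHODS_BY_STAGE = {
    "Concept": ["Diagrammatic Evaluation"] + [
        f"Assessment against {c}" for c in
        ("objectives", "user profile", "needs",
         "functions", "physical aspects", "competences")],
    "Analysis": COMMON,
    "Design": COMMON,
    "Operation": COMMON,
    "System": COMMON,
}

def methods_for(stage):
    """Look up which evaluative methods apply at a given stage."""
    return METHODS_BY_STAGE.get(stage, [])

print(len(methods_for("Concept")))  # → 7 methods at the Concept stage
```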

In order to understand this classification let’s take a look at one example from everyday life; namely, books.

Books are created based on many things including their content, visual aspects, format and structure, etc. But their quality is evaluated based on various criteria such as their content, how they are written and presented, and whether or not they are easy to read/understand/use.

If we apply this example to our task here, then you will see that these stages correspond very closely to concept-based approaches such as UCD or BPR which focus on defining specific requirements; analysis-based approaches such as CAAD or DPA which focus on analyzing data; and design-based approaches such as CDA or BPA which focus on designing solutions for users’ needs using certain tools (commitment diagramming analyzers).

Artificial Intelligence and UX: directionality is multiple
QUX goes and considers all ways. Every way.

Moreover, another element that should not go unnoticed here is the fact that these evaluative methods are an instrument used in QUX research, simply because they can help us define the problem itself by focusing our attention on certain experiences that could have happened during the use of products/services by users, while also helping us identify all possible causes leading up to them through cause-effect diagrams, and even perform quality control checks during execution.

This helps us avoid mistakes throughout all stages of development (conceptualization phase, data collection phase, analysis phase and designing phase), thus minimizing costs and saving time. And this is something that should always happen during every product development project!

Additionally, using QUX methods allows designers to better understand other stakeholders within organizations.

By comparing their own experiences with those observed during designs based on QUX principles we can easily detect problems with proposed solutions before they occur thus helping them avoid many costly mistakes later down the line!

Artificial Intelligence and UX working together

In order to better understand why Artificial Intelligence and UX working together helps solve problems throughout multiple stages of a project, let’s take a look at an example taken from everyday user experience. In this case, cars (yes, again!).

Since cars nowadays are equipped with multiple systems (e.g., the braking system or the engine management system), any fault within a single system could lead to major problems such as accidents. These problems may be due to more than one failure at once, making it necessary for the designers who create these systems to consider all possible scenarios. And these scenarios should be considered before work begins, so that the designers can offer reliable solutions.

Cyber Entities in UX

The subject of cyber entities and the discussion of them is very interesting, but it seems to be the most difficult area for UX designers. It’s like trying to talk about quantum constructs and so on without using scientific terminology – just try it.

Conclusions

I think there are some challenges in Artificial Intelligence and UX providing a comprehensive definition that is more than a tautology, that is not tied to a specific context, and yet can be applied in different HCI contexts.

In my opinion, we should agree on this:

A UX designer is someone who designs experiences with humans, or between humans and other human entities, or between humans and non-human entities. The latter also known as “hyper-entities” (non-human entities that behave like humans).

This definition has the advantage of being extremely broad in terms of what one can do as a UX designer, while covering all human, non-human, and inter-human actors involved in experience design.

So far, I’ve been able to apply this definition all over the place in several projects and every time it has worked perfectly. A core competency of UXD is the ability to build empathy with all actors involved in experience design – empathy with users also means empathy with robots, machines, hyper-presence, etc. The process of building empathy involves understanding how these entities perceive themselves and how they are perceived by humans (including how humans perceive themselves).

It may also involve designing for such perceptions (e.g., designing for how humans perceive robots, or designing for how robots perceive their own “selves”).

My hypothesis is that UXD encompasses multiple types of design for perceptions, including design for human perceptions (e.g., error messages), robot perceptions (e.g., self-guidance), etc.

The above definition implies that UX designers should empathize not only with users/humans, but also with machines/robots, etc.

And that in turn implies that the way we define user experience will change dramatically in many cases if we consider previously ignored entities like robots/machines as part of our audience (not just tools).

Of course, this will require us to rewire our brains, which are wired for interaction with other humans, but not so much for interacting directly or indirectly with machines/robots/hyper presences.

For example, if you treat machines as just another commodity that you use and dispose of when they are no longer useful (rather than something you interact with), then you will likely never appreciate their potential as agents in defining your experience.

To some extent, I think we already do this when we interact with AI driven systems like Alexa or Siri.

However, I’d like to see more research on how these kinds of things work from an emotional standpoint.

In some ways, it reminds me of James Norman Hall’s story “Across Patagonia” where he describes his interactions with a horse on an expedition:

“We became instant friends; he was like a brother almost before his halter was untied; we had come through many trials, sufferings, hardships together … Sometimes I would give him caresses; he would answer by pressing his nose into my hand… He was my faithful comrade.”

Across Patagonia

Conclusions on using Artificial Intelligence and UX for this article

Everything from the headline “The Quantum UX practical example” to the title directly above these lines was created by AI.

As I mentioned at the beginning of the article, the only training we gave the AI engine was the keywords in the search engines, the comments on various social media accounts, and our archive of posts.

I styled the post, added relevant links and titles, and changed a few minor punctuation issues.

There were a few errors (this was a quick experiment, so it was to be expected). For example, animals driving cars, or the book “Across Patagonia” being incorrectly attributed to James Norman Hall (it was written by feminist author Lady Florence Dixie).

Of course, the obvious conclusion is that this is focused on SEO services and content writing (which is not a minor thing!).

But the point we can’t miss here is that the system has understood all the variables, concatenated them and created its own version, writing an article with my own theory, faking my writing style and improving it.

Of course, I could just combine this with analytics and retargeting and randomize this article for each reader (which I won’t do).

But imagine the following scenario: If I did the exact same thing I did here, I could have served up a statistically validated version of a landing page, or a proper cross-cultural user experience, or an eCommerce with just the right products. This is nothing new, this is what you see on Amazon, Alibaba and other eShops!
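How could such per-reader serving work? Here is a deliberately small sketch: pick the content variant with the best observed conversion rate for a reader segment. All the segment names, variant names and numbers are hypothetical placeholders; in practice they would come from analytics and A/B test results.

```python
# Hypothetical conversion rates per reader segment; in practice these
# would be fed in from analytics, not hard-coded.
variants = {
    "returning_reader": {"deep_dive": 0.12, "summary": 0.07},
    "search_visitor": {"deep_dive": 0.03, "summary": 0.09},
}

def pick_variant(segment, stats=variants, default="summary"):
    """Serve the variant with the best observed conversion rate,
    falling back to a default for unknown segments."""
    rates = stats.get(segment)
    if not rates:
        return default
    return max(rates, key=rates.get)

print(pick_variant("search_visitor"))  # → "summary" for this segment
```

This is the "statistically validated landing page" idea in miniature: the data decides which version each segment sees.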

In short: Artificial Intelligence and UX is the future. And the future is already old.

We can improve your business!

Let us help you with the best solutions for your business.

It only takes one step, you're one click away from getting guaranteed results!

I want to improve my business NOW!