What Happens When the User Becomes the User Experience: Robotic UX

What Does Robotics Mean for the UX Design Landscape? Written By — Brett Lindstrom


There are many amazing experts in the field of UX design. However, one topic that has not been touched upon to the extent it should be is robotics.

As a UX designer who has worked in fields ranging from medical science and military intelligence to the ever-eccentric startup community, I have seen ideas become a dime a dozen. That said, I can assure you robotics is going nowhere but UP on the list of things to pay attention to.


The following steps are involved in creating a robust robotic experience as it pertains to a “true-human” conversation:

  • Observe
  • Archive “real-time” observations
  • Emulate (the first step toward successful learning)
  • Integrate / achieve true learning
  • Classify sensory information as “response necessary” or “no response necessary”
  • Generate a call to the database for the proper response criteria
  • Continue observing during this “call” to ensure that no further data that could affect the response is missed
  • Respond accordingly
  • Repeat the process once the response is completed

(The above user story embodies the information necessary to program the “false-human” interface to produce an accurate, true-to-life user experience between a “true human” and a “pseudo-human.”)
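The steps above can be sketched as a simple loop. This is a minimal illustration only; the response database, the classifier, and the sample observations are all hypothetical placeholders, not a real system:

```python
# Hypothetical response database: maps an observed event to response criteria.
RESPONSE_DB = {
    "greeting": "Hello!",
    "question": "Let me think about that.",
}

def classify(observation):
    """Placeholder classifier: "response necessary" vs. "no response necessary"."""
    if observation in RESPONSE_DB:
        return "response-necessary"
    return "no-response-necessary"

def interaction_loop(observations):
    """One pass of the observe -> archive -> classify -> respond cycle."""
    archive = []    # archived "real-time" observations
    responses = []
    for obs in observations:
        archive.append(obs)  # observe and archive, even when no response follows
        if classify(obs) == "response-necessary":
            # call the database for the proper response criteria;
            # the loop then continues observing the next event
            responses.append(RESPONSE_DB[obs])
    return archive, responses

archive, responses = interaction_loop(["greeting", "silence", "question"])
```

Note that everything is archived, but only observations classified as “response necessary” trigger a database call, mirroring the user story.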


The Theory of “User-Evolution”

To achieve a truly human experience, the evolutionary process must be replicated as closely as possible to our own. But we must focus on translating “quantitative” sensory information in ways that parallel the milestones that heavily contributed to our own historical development, e.g., the discovery and replication of fire.

How can we contextually replicate that experience to propel our own contemporary growth needs within the world today?

E.g.) Program these humanlike interfaces to solve complex issues like global warming by aggregating online information and quantifying its proposed solutions into the solution with the highest percentage of success.
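At its simplest, quantifying aggregated proposals and picking the one with the highest percentage of success is a maximum over scored candidates. A minimal sketch, where the proposals and their probabilities are entirely made up for illustration:

```python
# Hypothetical aggregated proposals, each with an estimated probability of
# success. The names and numbers are illustrative, not real data.
proposals = [
    {"name": "reforestation", "success_prob": 0.62},
    {"name": "carbon capture", "success_prob": 0.48},
    {"name": "emissions policy", "success_prob": 0.71},
]

def best_solution(proposals):
    """Select the proposal with the highest estimated success probability."""
    return max(proposals, key=lambda p: p["success_prob"])

chosen = best_solution(proposals)
```

The hard part, of course, is producing trustworthy `success_prob` estimates from aggregated online information; the selection step itself is trivial.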

That way, as our collective consciousness (the Internet) grows, so does our “pseudo-human” knowledge base, until we move beyond prototype to person.

So from a theoretical perspective this is all well and good, but how do we define and analyze the best way to quantify that sensory information?

The Internet can be looked at as the world’s collective consciousness. A robotic programmer’s job will be to personify that collective consciousness.


The Personification of a Robotic Interface: From a Pseudo-Technical Perspective

Questions to consider…

How do we program the “eyes” to sense the twinge at the corner of someone’s eye and register it as a mood of “discontent” in the observed subject?

Furthermore, each facial component is a unique variable that must be considered when developing the overall equation that defines “discontent.”


Theoretical programmatic “Playout”:

<video observation> “lip configuration” = “upper left raised 30°” </video observation>

<video observation> “eye configuration” = “eyelid” 30% closed, “pupil placement” = (x=20, y=30) </video observation>



(The above configuration represents a qualitative value of “discontent”)

The equation above obviously only takes the eyes and lips into consideration. To accurately determine a person’s facial expression and mood, many more variables would need to be considered for a proper sensory algorithm to be created.


The conclusion as to the role of UX robotics for now…

It comes down to defining as many sensory and situational variables as “humanly possible” in order to determine what should be analyzed to learn and produce the proper responses from our pseudo-human friends of the future.