These are my notes from UX LX 2016, recently recovered from Evernote and therefore being published outside of their natural sequence. For the rest of the content, open the UXLX16 story page.


Humans respond to the volume they hear.

Today we are going to cover the process of conversation: how we speak to machines, and how to make machines with voice interfaces stronger.

Rules that we follow every day

When we walk down the street, we don’t make eye contact with other humans.

Making eye contact is a form of recognition, a signal that we would like to engage in conversation.


  1. recognition
  2. greeting
  3. initial enquiry

Gricean maxims

  1. say what is true
  2. be as informative as required
  3. be relevant
  4. be perspicuous

Learning about Gricean maxims (and how fictional narratives constantly break them) from @jonesabi #uxlx


Computers need to wait for us to stop talking in order to process what we said.

The system then extracts tiny parts of speech and runs sound recognition to understand the natural language.

The dialogue manager keeps track of this process and identifies the different elements of the sentence.

It then generates natural language.
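The stages above (wait for the end of speech, recognize, identify the elements, track the dialogue, generate a reply) can be sketched as a toy loop. Everything here is illustrative: the function names, the intent rules, and the city list are invented for the sketch, not any real speech API.

```python
# Toy sketch of the voice-interface pipeline described above.
# Real systems run acoustic models for recognition; here the input is
# already-transcribed text, so we can focus on the later stages.

CITIES = {"lisbon", "paris", "rome"}  # invented slot vocabulary

def recognize(utterance: str) -> list[str]:
    """Stand-in for speech recognition: tokenize transcribed speech."""
    return utterance.lower().strip("?.!").split()

def understand(tokens: list[str]) -> dict:
    """Stand-in for language understanding: pick out intent and slots."""
    intent = "weather" if "weather" in tokens else None
    place = next((t for t in tokens if t in CITIES), None)
    return {"intent": intent, "place": place}

class DialogueManager:
    """Keeps track of the conversation and fills in missing elements."""
    def __init__(self):
        self.state = {}

    def respond(self, frame: dict) -> str:
        # Remember only the elements actually present in this turn.
        self.state.update({k: v for k, v in frame.items() if v})
        if self.state.get("intent") == "weather":
            place = self.state.get("place")
            if place:
                return f"Here is the weather for {place.title()}."  # NLG step
            return "Which city do you want the weather for?"  # ask, don't fail
        return "Sorry, I did not catch that."

dm = DialogueManager()
print(dm.respond(understand(recognize("What is the weather?"))))
print(dm.respond(understand(recognize("In Lisbon"))))
```

Note how the dialogue manager carries the "weather" intent across turns, so the second utterance only needs to supply the missing city.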

Neural Networks are being used to improve this process.

Paris - France + Italy = Rome.

Baseball - Bat + Racket = Tennis
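These analogies come from word-embedding arithmetic: each word is a point in a vector space, and a relationship (country → capital) becomes a vector offset. A toy illustration with hand-made 2-D vectors; real embeddings such as word2vec have hundreds of dimensions learned from text, and the numbers below are invented for the sketch:

```python
# Toy word-embedding arithmetic. Dimension 0 is roughly "which country",
# dimension 1 is roughly "capital-ness"; the values are invented, not learned.
import math

vectors = {
    "paris":  [1.0, 9.0],
    "france": [1.0, 1.0],
    "rome":   [2.0, 9.0],
    "italy":  [2.0, 1.0],
    "madrid": [3.0, 9.0],
    "spain":  [3.0, 1.0],
}

def nearest(query, exclude):
    """Return the word whose vector is closest to the query vector."""
    return min((w for w in vectors if w not in exclude),
               key=lambda w: math.dist(query, vectors[w]))

def analogy(a, b, c):
    """Solve 'a - b + c = ?', e.g. paris - france + italy = rome."""
    query = [va - vb + vc for va, vb, vc in
             zip(vectors[a], vectors[b], vectors[c])]
    return nearest(query, exclude={a, b, c})

print(analogy("paris", "france", "italy"))  # → rome
```

paris - france removes the "which country" part and keeps the "capital" offset; adding italy lands the result on rome.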

There is a hidden layer that humans don’t have access to, and it processes this information to come up with the most probable outcome.


Cohesion is the glue of discourse - Cohen, Giangola, Balogh

My kid only eats rice. He can’t survive on that.

Computers can now understand pronouns.
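Resolving "He" back to "my kid" and "that" back to "rice" is coreference resolution. A minimal heuristic sketch, assuming a rule of "most recent compatible mention"; real resolvers use trained models, and the entity and pronoun tables below are invented for illustration:

```python
# Minimal coreference sketch: link each pronoun to the most recently
# mentioned entity of a compatible kind. The vocabularies are toy data.

PRONOUNS = {"he": "person", "she": "person", "it": "thing", "that": "thing"}
KNOWN = {"kid": "person", "rice": "thing"}  # entities we can recognize

def resolve(sentences):
    """Replace pronouns with the most recent compatible entity mention."""
    entities = []  # (word, kind), most recent last
    resolved = []
    for sentence in sentences:
        out = []
        for word in sentence.split():
            key = word.lower().strip(".,!?")
            if key in PRONOUNS:
                kind = PRONOUNS[key]
                # Scan mentions from newest to oldest for a matching kind.
                match = next((e for e, k in reversed(entities) if k == kind), None)
                out.append(match or word)
            else:
                out.append(word)
                if key in KNOWN:
                    entities.append((key, KNOWN[key]))
        resolved.append(" ".join(out))
    return resolved

print(resolve(["My kid only eats rice.", "He can't survive on that."]))
```

The second sentence comes back with "kid" in place of "He" and "rice" in place of "that", which is exactly the link a listener makes without thinking.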


Rhythm, stress and intonation

For every word spoken to a computer, we need to give different levels of intonation.
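On the synthesis side, this is what SSML (the W3C Speech Synthesis Markup Language) exposes: `<prosody>` and `<emphasis>` are standard elements for controlling pitch, rate, and stress word by word. A small helper that builds such markup; the element and attribute names come from the SSML spec, while the helper itself and the chosen values are illustrative:

```python
# Build an SSML fragment that varies intonation word by word.
# <prosody> and <emphasis> are standard SSML elements; the pitch and
# rate values picked here are arbitrary examples.

def prosody(text, pitch=None, rate=None):
    """Wrap text in an SSML <prosody> element with the given attributes."""
    attrs = "".join(
        f' {name}="{value}"'
        for name, value in (("pitch", pitch), ("rate", rate))
        if value
    )
    return f"<prosody{attrs}>{text}</prosody>"

def emphasize(text, level="strong"):
    """Wrap text in an SSML <emphasis> element."""
    return f'<emphasis level="{level}">{text}</emphasis>'

ssml = (
    "<speak>"
    + prosody("Did you mean ", rate="slow")
    + emphasize("Lisbon")
    + prosody("?", pitch="high")  # rising pitch marks the question
    + "</speak>"
)
print(ssml)
```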


Context is an agent’s understanding of the relationships between the elements of the agent’s environment.

— Andrew Hinton


People will only use voice interfaces when failure modes disappear: instead of saying it can’t do that, the computer should ask for more information.

A conversation interface that really works needs to meet all of these requirements.


recording of ‘how we talk, how machines listen’ 20160526 11:01:31.m4a
