Gesture & Mental Models: A recap of the QuantQualBridge meeting

Dane DeSutter, Ph.D.
5 min read · Jul 7, 2023


Our talk sought to bridge the gap between academic research on gestures and its practical application for User Experience (UX) research practitioners. We set out to equip UXRs with the necessary knowledge to use gesture analysis to conduct more effective and thorough interviews, leading to a better understanding of user mental models.

At the June meeting of QuantQualBridge (QQB), Dr. Stephanie Scopelitis and I had the opportunity to present some of our work to the broader UX research community. Prior to the meeting, I approached Stephanie with an idea that I believed would leverage our combined research experience in academia and industry: translating our extensive knowledge of gesture research to be useful and relevant to UX researchers.

Who are we?

Stephanie and I first crossed paths during my early days as a graduate student at the University of Illinois-Chicago when she was a postdoctoral scholar. It was during this time that Stephanie introduced me to gesture research and how it was being used to study human communication and cognition.

Learning about her dissertation research and collaborating on various gesture grants in the Stieff lab sparked my fascination with theories centered on the embodied mind. These theories propose that humans mentally represent information about the world through multimodal representations, incorporating content from our perceptual systems to form the basis of memories, concepts, and even creative thought.

We have a combined 20+ years of experience studying gesture as a window into both how humans negotiate meaning through interaction and how people think.

Dr. Dane DeSutter is a Data Scientist, mixed methods User Researcher, and Software Engineer at Catalyst Education. His work merges laboratory & design-based research, quantitative & qualitative UX research, and big data to derive product insights. His prior research in embodied cognition looked at how spontaneous co-speech gestures revealed and altered complex mental models in science domains. His work has been published in multiple high-impact peer-reviewed journals. Additionally, he has built and studied a number of software products, including Data Insights in Labflow. Dane is a classically trained pianist, loves to swim and bike, and is trying to visit as many national parks as he can.
Dr. Stephanie Scopelitis is a Learning Scientist, a qualitative Design-Based Researcher, an educator, and a former professional modern dancer. Her research has examined such topics as the role of the body in teaching and learning, multi-disciplinary explanation, and arts and community development. Her work has been published in various top-tier journals and presented nationally. Currently, Stephanie serves as an independent researcher and consultant, partnering with learners and professionals in such fields as healthcare, business, and education to enhance awareness of gestures and body language for enriching understanding in interaction. She also works as a professional coach, guiding individuals through ethical decision-making processes. She has performed on the Paris Opera House stage, slept in a graveyard in Prague, and dropped out of college twice to explore the world.

On gestures and mental models

In this talk, Dr. Dane DeSutter and Dr. Stephanie Scopelitis will discuss how User Experience (UX) researchers can triangulate and enrich information from 1-on-1 interviews by attending to users’ co-speech gestures — the spontaneous movements that humans make with their hands and body when communicating. Gestures are a “window to the mind” and can reveal unspoken information about users’ emotional states as well as the structure and composition of their mental models. As research teams face increased pressures to evolve the rigor and strategic value of their work, gesture analysis can bring a new lens to how teams arrive at foundational insights and usability feedback. We will conclude with a practical guide for efficiently implementing gesture research.

Full video of “Gesture & Mental Models: What co-speech gestures reveal about users’ thinking during interviews” by Dane DeSutter, Ph.D. and Stephanie Scopelitis, Ph.D. Conversation has been lightly edited for length and to enhance audio quality.

Important threads from the presentation

We encourage you to watch the full talk for a detailed exploration of these ideas.

  1. Mental models are an object of UX research, and gesture, alongside language, is a manifestation of those mental models.
  2. Gesture & speech together communicate and represent meaning.
  3. Attending to gesture during interviews highlights moments for deeper lines of questioning.
  4. Analyzing gesture in video data reveals subtleties in mental models that are not expressed in language.
  5. Gesture research can be implemented on a sliding scale in response to the nature, value, clarity, and risk of the problem being studied.
  6. Open source technologies may also be leveraged to track, segment, and quantify components of gesture in large datasets (a minimal sketch follows this list).
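
As one illustration of the last point, here is a minimal sketch, assuming the open-source MediaPipe Hands solution and OpenCV, of how per-frame hand positions could be pulled from an interview recording. The function name and the displacement-thresholding idea are our illustrative assumptions, not a prescribed pipeline from the talk.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands  # MediaPipe's (legacy) Hands solution


def wrist_positions(video_path: str):
    """Yield (frame_index, [(x, y), ...]) of normalized wrist coordinates
    for every frame in which at least one hand is detected."""
    cap = cv2.VideoCapture(video_path)
    frame_idx = 0
    with mp_hands.Hands(static_image_mode=False, max_num_hands=2) as hands:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                wrists = [
                    (lms.landmark[mp_hands.HandLandmark.WRIST].x,
                     lms.landmark[mp_hands.HandLandmark.WRIST].y)
                    for lms in results.multi_hand_landmarks
                ]
                yield frame_idx, wrists
            frame_idx += 1
    cap.release()
```

From there, frames where wrist displacement between successive detections exceeds a threshold can be flagged as candidate gesture strokes for a human analyst to review, keeping the qualitative judgment with the researcher while automating the tedious scanning.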

Questions from the audience

We received a lot of excellent questions and engagement from the audience and wanted to highlight a few of them here. Some of these questions deserve a detailed response and will be the subject of future blog posts — stay tuned!

Q: What about interviewing people who simply have a low propensity for gesture?

Not everyone is the same height, has the same hair color, or likes the same foods. There will always be individual differences between people, and this is also true of the propensity to gesture.

What’s important is to set the gestural stage of your interview for the best possible outcomes. We encourage you to make your interviewee feel comfortable (establish rapport), use fluid body language during interaction (establish conversational flow), and use a warm-up task to prime them for gesture (priming question).

Individuals may also be more likely to gesture based on their profession or hobbies. Artists, athletes, and musicians, for example, are more likely to be gesturers by nature.

That said, everyone gestures. Just try to talk while sitting on your hands; it’s very difficult!

Q: Are there cross-cultural differences that can be observed in gesture?

Yes! Gestures vary in predictable ways between cultures. A thumbs-up signals agreement among English speakers, but the same gesture is interpreted quite differently in Morocco.

Culturally defined gestures—called emblematic or “Italianate” gestures—often have predictable form and meaning, but may not translate across cultural lines. While these gestures are well known, they rarely contain any representational information.

There are, however, predictable cultural differences in gesture based on language. One example is the conduit metaphor—the idea that information is packaged up in words and sent between speakers (e.g., “she gave me this great idea!”).

This metaphor is ubiquitous for English speakers and will influence how they gesture, showing information transference. The metaphor is largely absent for Mandarin speakers, though, and they are correspondingly less likely to gesture in this way when discussing communication.

We plan to write more about these cultural differences in a future post.

Q: Can gestures from the interviewer negatively impact the interview?

Absolutely. Interviewers can unintentionally present conflicting information between gesture and speech. This might be as simple as presenting a mirror image of what is intended because of the face-to-face perspective between interlocutors. Interviewers should also be careful not to “lead the witness” by letting their gestures betray the goal of the study.

Q: What kinds of research questions & studies is gesture analysis best suited to?

The short answer is that the method must match the research question. Excellent candidates include questions about the distinctions users make between high-level concepts, how users understand technology-mediated communications, their understanding of spatial information and tasks, and user perception of product designs that integrate into or reproduce physical workflows. More to come on this in a future post!

Supplemental materials

Spontaneous co-speech gestures can be categorized as either representing something or supporting the flow of conversation. Here’s a guide that you can use to remember which is which.

Gesture categorization scheme based on David McNeill’s taxonomy. Easy-to-understand names are given to each gesture type alongside the names that you’ll find in the academic literature.
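
For teams that annotate interview video with scripts rather than spreadsheets, here is a minimal sketch of how McNeill’s four major categories (iconic, metaphoric, deictic, and beat) might be encoded; the class and field names are our illustrative assumptions and are not taken from the guide above.

```python
from dataclasses import dataclass
from enum import Enum


class GestureType(Enum):
    """McNeill's four major categories of co-speech gesture."""
    ICONIC = "iconic"          # depicts concrete objects or actions
    METAPHORIC = "metaphoric"  # presents abstract ideas as if physical
    DEICTIC = "deictic"        # points to real or imagined referents
    BEAT = "beat"              # rhythmic movements tied to the flow of speech


# Iconic, metaphoric, and deictic gestures carry representational content;
# beats primarily support the flow of conversation.
REPRESENTATIONAL = {GestureType.ICONIC, GestureType.METAPHORIC, GestureType.DEICTIC}


@dataclass
class GestureAnnotation:
    start_s: float             # gesture onset, in seconds of video time
    end_s: float               # gesture offset
    gesture_type: GestureType
    note: str = ""             # free-text description of the movement

    @property
    def is_representational(self) -> bool:
        return self.gesture_type in REPRESENTATIONAL
```

Tagging each annotation this way makes it straightforward to filter a session down to only the representational gestures when reviewing footage for mental-model content.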

About QQB

The QuantQualBridge (QQB) society holds a monthly event, with breaks for holidays and travel, that brings together business and academic researchers. QQB sessions are organized by Dr. Paula Bach and Dr. Iulia Cornigeanu.

Meetings are hosted by Rosenfeld Media. Their main purpose is to provide a platform for researchers to share their knowledge and expertise in the field of UX research. Follow QQB on Medium.

Want to get involved in the QQB community? Request to join the Google Group.

Disclaimer: These writings reflect my own opinions and not necessarily those of my employer.
