My journey to find the user experience in data science
Machine learning (ML) and artificial intelligence (A.I.) are radically reshaping consumer products. User experience (UX) researchers and data scientists have an opportunity to work more closely together now, developing insights and collaboration practices that can guide the broader field.
This is the first article in a series exploring my journey to blend user experience research into my data science work.
Navigation app: “Take the next exit and turn right onto Main St.”
Friend: “I’m going to stay on the highway. The map is wrong, I know it is faster to just go this way.”
Awesome, a traffic jam.
We have all been in the car with that guy who was convinced he knew better than the map.
After getting out of the long and (probably) avoidable traffic jam, you start to reflect. What exactly led your friend to make this ill-fated decision? Why did he decide his intuition was more trustworthy than the output of his navigation app?
Rationally, your friend ought to know that navigation systems are powered by sophisticated machine learning models trained on petabytes of real-world data from nearly every conceivable scenario.
Not only does the app know your friend’s driving habits, it also knows the current location of every other motorist on the road using the app. It models that real-time data against years of other drivers’ behaviors collected across multiple cities, states, and countries.
This should make the navigation app far better informed about viable routes and which one is best: in effect, a rational decision maker.
But that’s not how your friend reacted. He trusted his gut and now you’re both late.
A brave new technoworld
Modern human history has been shaped by the accelerating pace of invention: the first Industrial Revolution, the birth of mass manufacturing, the digital revolution. We are now in the midst of what has been dubbed The A.I. Revolution.
It is something of a historical truism that innovation leaps far ahead and society has to play catch-up.
This effect has been sent into overdrive in the past decade: machine learning and broader artificial intelligence tools are rapidly becoming essential components of products and services that we use every day.
Social media platforms use image recognition and recommender systems to drive engagement and serve ads. Insurance and retail companies triage customer queries through sophisticated chatbots. E-commerce platforms use prior purchase and browsing history to present highly personalized shopping experiences.
An entirely new class of apps is emerging from this frontier that represents an exponential leap forward. Tools like DALL-E, Stable Diffusion, and ChatGPT are moving machine learning and A.I. from the periphery to the locus of user experience.
This is what Michelle Carney, a senior UX researcher at Google and founder of the Machine Learning + User Experience (MLUX) group, identifies as products moving “…from experiences ‘powered by [A.I.]’ to [A.I.] as the experience itself.” (Check out this excellent article about her view of the iterative relationship between ML and UX.)
Products are moving “…from being ‘powered by [A.I.]’ to [A.I.] as the experience itself.”
— Michelle Carney, Google & MLUX
Many of these innovations are truly inspiring. They have also surfaced debates around ethics and how ML/A.I. interacts with user experience.
We’ve all been in the car with that friend or had an experience with an A.I. that we questioned. How do our friend’s prior experiences, and our own, shape our relationship with A.I.?
How does the broader social discourse around machine learning and A.I. as propagating racial and gender bias or promoting unrealistic standards of beauty impact customer perception of a brand or service?
Your friend may also wonder whether his data is protected and fairly used and whether it is possible to control the privacy of that data.
Not every product or service may center the outputs of machine learning or artificial intelligence to the same extent, but we cannot escape this new reality. Ethical questions concerning transparency, ownership, privacy, bias, and trust will thus remain at the center of ML/A.I. for the foreseeable future.
Why is bringing user experience to data science important?
This raises the question: are there small steps we can take today to start paving a way forward through this thicket? I propose that there is an opportunity to begin untangling the knotted relationship between ML/A.I. and the user in the context of data science.
1. Data science is already well integrated with business practices
The first reason is quite pragmatic. Data science has already proliferated (and continues to do so) within the business setting.
Data science roles have consistently ranked among the top ten fastest-growing careers according to the Bureau of Labor Statistics. Even amid pandemic-era distortions to the labor market, data science has held its place at number six.
It is in many ways a more established discipline in the business context than pure ML/A.I. innovation, in part because it has a lower barrier to entry for employees looking to grow their careers.
Data science also slots well into existing project management techniques for data mining and software development (e.g., CRISP-DM, Agile).
2. Data scientists build the same predictive black-box models
What makes data science projects a useful testbed for better understanding user experience with our products is that data scientists draw on many of the same inferential techniques and models from ML/A.I. to do their work.
Models like artificial neural networks are opaque input-output machines, but their outputs influence all manner of things in the user experience, from the way a streaming service organizes movie rentals on the landing page to how a supermarket places related products on its shelves.
Data scientists can bring a technical level of understanding to how these black boxes work. That expertise can be leveraged to expand what we look for when trying to identify how and when the user experience might break down.
3. Data science already shares a nexus with UX research
Data science is anchored to the pragmatic goals of the business. In the many consumer-facing industries, data scientists take on problems that directly aim to enhance the experiences of current and future customers.
In this way, data scientists and UX researchers are already doing complementary and often overlapping work in organizations.
Dr. Bahar Salehi, senior data scientist at go1, recently wrote an article arguing in the title that “UX and Data Science Teams Need to Work Together.” She argues that data scientists and UX researchers both “help improve decision making and products by understanding users and their needs.”
Indeed, data scientists are positioned within companies to draw on product and user data to derive insights that improve business processes and uncover meaningful behaviors of their customers and customer segments.
Similarly, UX researchers draw on varied data from sources like interviews, focus groups, surveys, and usability studies to pinpoint user needs, attitudes, and preferences.
Both groups are focused on producing deep insights about how people interact with a product or service. Both guide the iterative cycles of product improvement within organizations.
4. Data science and UX research together generate more robust and valid insight
The final reason is that data scientists and UX researchers make a natural mixed-methods team. The insights generated from a mixed-methods approach are usually more complete because data triangulation brings converging lines of evidence.
An excellent case study is how Spotify brought their data science and UX research teams together to understand users’ mental models of how ad skipping worked in their app.
The team used qualitative interview data after an A/B test to understand users they had segmented as power, medium, and non-skippers of ads. Their interviews indicated that some of the individuals they had labeled power skippers did not actually see themselves that way; instead, many of them misunderstood the number of ads they were allowed to skip within a listening session.
Triangulation pinpointed the source of user confusion and led to the redesign of helper prompts when skipping songs and ads in their music player. Only when working together did the team uncover the root cause.
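To make the behavioral half of that workflow concrete, here is a minimal sketch of how a data scientist might segment users by ad-skipping behavior before handing a shortlist to UX research for interviews. The event log, column names, and cohort cutoffs are all hypothetical illustrations, not Spotify’s actual pipeline.

```python
import pandas as pd

# Hypothetical event log: one row per ad impression per user.
# Columns and thresholds are illustrative, not drawn from any real product.
events = pd.DataFrame({
    "user_id": ["u1", "u1", "u2", "u2", "u2", "u3"],
    "ad_skipped": [True, True, False, True, False, False],
})

# Aggregate per-user skip behavior.
per_user = events.groupby("user_id")["ad_skipped"].agg(
    skips="sum", impressions="count"
)
per_user["skip_rate"] = per_user["skips"] / per_user["impressions"]

# Assign cohorts with illustrative cutoffs; a real study would derive
# thresholds from the observed distribution (e.g., quantiles).
def cohort(rate: float) -> str:
    if rate >= 0.75:
        return "power skipper"
    if rate > 0.0:
        return "medium skipper"
    return "non-skipper"

per_user["cohort"] = per_user["skip_rate"].apply(cohort)
print(per_user)  # candidate pool to sample from for qualitative interviews
```

The point of the sketch is the hand-off: the behavioral cohorts only tell the team who to talk to, not why those users behave the way they do. That “why” is what the interviews surfaced.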
In general, data science and UX research have potential to yield more robust insight when brought together to solve core product and business questions.
What we stand to gain: Lessons learned today can guide ML/A.I. tomorrow
Let’s return to our friend in the car. Should we just dismiss his actions as idiosyncratic or does his reaction speak to broader user experiences in our ML/A.I. powered applications?
I would argue that we ignore our friend at our own peril.
A mixed-methods team might approach this problem from two vantage points: UX researchers might probe his attitudes towards the navigation app while also looking for behavioral cues to triangulate that insight.
Data scientists might look at his usage data in the navigation app. Did he have any instances where the map’s recommender system actually suggested a route that ended up being 5, 10, or 20% longer than predicted?
Maybe he takes the same route regularly and the data scientist suggests that the UX team probe whether he notices that the app is sometimes inconsistent in recommending the highway vs. surface streets despite a similar ETA.
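As a concrete illustration of the first probe, here is a minimal sketch of how one might flag trips where actual travel time overshot the app’s estimate. The trip log, its columns, and the 5/10/20% thresholds are hypothetical, purely to show the shape of the analysis.

```python
import pandas as pd

# Hypothetical per-trip log for one user; columns are illustrative only.
trips = pd.DataFrame({
    "trip_id": [1, 2, 3, 4],
    "predicted_minutes": [22.0, 35.0, 18.0, 40.0],
    "actual_minutes": [23.0, 44.0, 18.5, 52.0],
})

# Relative overrun: how much longer the trip took than the app predicted.
trips["overrun_pct"] = (
    (trips["actual_minutes"] - trips["predicted_minutes"])
    / trips["predicted_minutes"]
) * 100

# Count trips exceeding each (illustrative) tolerance threshold.
for threshold in (5, 10, 20):
    n_over = (trips["overrun_pct"] > threshold).sum()
    print(f"Trips more than {threshold}% longer than predicted: {n_over}")
```

A handful of badly missed ETAs surfaced this way would give the UX researcher something specific to ask about in an interview, rather than a generic question about whether he trusts the app.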
But we should also be prepared to dig deeper
We may not find a “just so” explanation. Instead, there may be a much richer tapestry of experiences from outside our product that have shaped his beliefs, and the beliefs of others like him, as they interact with intelligent algorithms.
Zooming out another level
As a design-based researcher by training, I am also constantly asking myself whether individual phenomena like our friend’s distrust of his map speak to something more fundamental about human psychology.
For example, can trust in an ML/A.I. application only be achieved with sufficient transparency? Does the instability of probabilistic systems create fragile mental models or increase the odds of someone developing closed-off attitudes towards these tools?
On the other hand, does a “machine knows better” mentality lead to users naively accepting the predictions of an ML/A.I. system rather than critically examining and interrogating those outputs?
While I think these and other questions are likely to challenge us for decades, even generations, we should take on these problems now, while we are still in the early days of the A.I. revolution.
Let’s learn as many lessons as we can by bringing data science and UX research together and map a more ethical and informed path for the future.
Disclaimer: These writings reflect my own opinions and not necessarily those of my employer.