
Global Perspectives: UX Design in the Age of AI

General Assembly
October 25, 2024

The buzz around AI is louder than ever, with companies rushing to add AI features to their products and workflows. But, as exciting as AI is, integrating it into a product or an experience won’t automatically make it better. Designers today face the critical task of asking: does this technology actually help users, or is it just another trendy feature? 

In this conversation, UK-based Clementine Brown, Principal Product Designer at Red Badger, and Australia-based Miranda Maturana, Learning Designer and Design Researcher at Reece, discuss:

  • Why designers across the globe must resist the pressure to implement AI features without purpose
  • How designers can drive more ethical and human-centered AI applications
  • Why longitudinal research is key to understanding the effects of our design choices

Watch the recording here.

Just because you can add AI features doesn’t mean you should 

Miranda: With all the buzz around AI right now, it’s crucial to keep in mind that adding AI to something isn’t magically going to make it better. You have to consider whether it will actually help your users. This is especially important when reconciling business and user needs. Design isn’t about giving people what they want—it’s about solving problems for them. If you don’t understand your users’ problems and provide a solution for them, you’ll never reach your business goals.

So, speak to users and study their problems. And if AI is going to help you solve them, then, by all means, go for it. But treat AI as a tool to help your users, not something you’re integrating into a product simply to make it look cooler. 

Clementine: This reminds me a lot of when gamification became the hot trend in the industry a few years ago. People assumed that adding points, badges, and rewards to their products would be a silver bullet to make them more fun and engaging. But all those bells and whistles quickly start feeling forced and gimmicky when they’re not aligned with user goals. 

And I think that’s what’s happening with AI right now. Everyone wants it, but few are taking the time to check whether they truly need it. And the problem with this is that you risk turning those shiny, new, and super expensive AI-powered features into one more thing for people to ignore or an extra hurdle they have to overcome before they can get to what they actually want to see or do. So treat AI like everything else in UX: make sure it’s going to add value. Otherwise, just skip it. 

And I say this fully acknowledging that this can be a hard conversation for UX and design professionals in business settings, because the tech industry loves a buzzword.

Designers must advocate for ethical and user-centered AI applications

Clementine: I’m convinced that, right now, one of the most impactful areas for innovation isn’t offering people something new or flashy—it’s empowering them with greater control over their data. At a time when we’ve all become hyper-aware of how much of our data companies are collecting and how they’re using it, being thoughtful about how you use that data in your product can be truly innovative. 

I recently worked on a healthcare project where we were using AI and machine learning to compare users’ health inputs to large data sets and predict whether they were at an increased risk for certain health conditions. We had a big debate about whether we needed to tell users that we were using AI to generate those outputs and specify whether we were adding their personal data to our datasets. And it was designers who raised the question: “Hey, have you thought about how this will impact people? Should we apply the same strict guidelines to this as cookie policies or pixel tracking?” 

As designers, we’re the ones always thinking about the users. And, as AI and data become an increasingly important part of digital products, it’s on us to advocate for thoughtful, user-centered AI integration. It’s not sexy or super cool. But it will make a huge difference for the safety, privacy, and well-being of your users.

Look to the past so you can design a better future

Clementine: Here’s the thing: nothing’s ever new. History repeats itself, and we can apply those hard-earned learnings from other technologies—whether it’s the web, smartphones, or social media—to AI. We already know the dangers of applying technologies irresponsibly. We’ve seen what happens when we miss the full picture, blinded by the promise of more engaging applications, higher usage rates, or greater profits. As we move forward, it will be up to designers to recognize these ethical dilemmas and, at the very least, initiate conversations around them.

Another key point is that, while AI may feel cutting-edge, it’s been around long enough for important discussions on issues like bias and data privacy to surface. That means that, along with analogous insights from our experiences with other technologies, designers and organizations do have some frameworks to guide them in implementing AI more responsibly.

Still, using AI responsibly is a challenge because you have to dig into where it comes from and how it was built. It’s not magic. It didn’t appear out of thin air. There are human-made decisions behind it, from putting together datasets to choosing which datasets to include in a model. And who those humans are and how they made those decisions matter.

Miranda: Moving forward, one of the biggest challenges for designers with AI will be pushing back against stakeholders and leadership eager to integrate it simply because it’s new or because they think it will be a market differentiator. Our job is not following trends—it’s solving problems. 

Another thing is doing research. And not just initial research. Longitudinal research and impact assessments are crucial for understanding the long-term effects of our decisions. We design with the best intentions, but without enough research, we’ll never see the unintended consequences of our decisions. For example, I designed a lot of stuff for social media that, looking back on it today, I don’t feel so proud about. That’s why, as designers, it’s important we take a long-term view and always ask ourselves: What kind of world do I want to design?

Make it real

At General Assembly, we deliver the goods to keep you ahead of the curve. Our students are thinkers and doers who don’t wait for some imagined future, but build their skills (AI and more) to contribute to the future they want and need. What we offer is a charge-up from the inside out—so change never stops you in your tracks.

Explore our course catalog and move forward with real skills. 
