Ever since I was a boy, I've been fascinated with movies. I loved the characters and the excitement—but most of all the stories. I wanted to be an actor. And I believed that I'd get to do the things that Indiana Jones did and go on exciting adventures. I even dreamed up ideas for movies that my friends and I could make and star in. But they never went any further. I did, however, end up working in user experience (UX). Now, I realize that there's an element of theater to UX—I hadn't really considered it before, but user research is storytelling. And to get the most out of user research, you need to tell a good story where you bring stakeholders—the product team and decision makers—along and get them interested in learning more.
Think of your favorite movie. More than likely it follows a three-act structure that's commonly seen in storytelling: the setup, the conflict, and the resolution. The first act shows what exists today, and it helps you get to know the characters and the challenges and problems that they face. Act two introduces the conflict, where the action is. Here, problems grow or get worse. And the third and final act is the resolution. This is where the issues are resolved and the characters learn and change. I believe that this structure is also a great way to think about user research, and I think that it can be especially helpful in explaining user research to others.
Use storytelling as a structure to do research
It's sad to say, but many have come to see research as being expendable. If budgets or timelines are tight, research tends to be one of the first things to go. Instead of investing in research, some product managers rely on designers or—worse—their own opinion to make the "right" choices for users based on their experience or accepted best practices. That may get teams some of the way, but that approach can so easily miss out on solving users' real problems. To remain user-centered, this is something we should avoid. User research elevates design. It keeps it on track, pointing to problems and opportunities. Being aware of the issues with your product and reacting to them can help you stay ahead of your competitors.
In the three-act structure, each act corresponds to a part of the process, and each part is critical to telling the whole story. Let's look at the different acts and how they align with user research.
Act one: setup
The setup is all about understanding the background, and that's where foundational research comes in. Foundational research (also called generative, discovery, or initial research) helps you understand users and identify their problems. You're learning about what exists today, the challenges users have, and how the challenges affect them—just like in the movies. To do foundational research, you can conduct contextual inquiries or diary studies (or both!), which can help you start to identify problems as well as opportunities. It doesn't need to be a huge investment in time or money.
Erika Hall writes about minimum viable ethnography, which can be as simple as spending 15 minutes with a user and asking them one thing: "'Walk me through your day yesterday.' That's it. Present that one request. Shut up and listen to them for 15 minutes. Do your damndest to keep yourself and your interests out of it. Bam, you're doing ethnography." According to Hall, "[This] will probably prove quite illuminating. In the highly unlikely case that you didn't learn anything new or useful, carry on with enhanced confidence in your direction."
This makes total sense to me. And I love that this makes user research so accessible. You don't need to prepare a lot of documentation; you can just recruit participants and do it! This can yield a wealth of information about your users, and it'll help you better understand them and what's going on in their lives. That's really what act one is all about: understanding where users are coming from.
Jared Spool talks about the importance of foundational research and how it should form the bulk of your research. If you can draw from any additional user data that you can get your hands on, such as surveys or analytics, that can supplement what you've heard in the foundational studies or even point to areas that need further investigation. Together, all this data paints a clearer picture of the state of things and all its shortcomings. And that's the beginning of a compelling story. It's the point in the plot where you realize that the main characters—or the users in this case—are facing challenges that they need to overcome. Like in the movies, this is where you start to build empathy for the characters and root for them to succeed. And hopefully stakeholders are now doing the same. Their sympathy may be with their business, which could be losing money because users can't complete certain tasks. Or maybe they do empathize with users' struggles. Either way, act one is your initial hook to get the stakeholders interested and invested.
Once stakeholders begin to understand the value of foundational research, that can open doors to more opportunities that involve users in the decision-making process. And that can guide product teams toward being more user-centered. This benefits everyone—users, the product, and stakeholders. It's like winning an Oscar in movie terms—it often leads to your product being well received and successful. And this can be an incentive for stakeholders to repeat this process with other products. Storytelling is the key to this process, and knowing how to tell a good story is the only way to get stakeholders to really care about doing more research.
This brings us to act two, where you iteratively evaluate a design or concept to see whether it addresses the issues.
Act two: conflict
Act two is all about digging deeper into the problems that you identified in act one. This usually involves directional research, such as usability tests, where you assess a potential solution (such as a design) to see whether it addresses the issues that you found. The issues could include unmet needs or problems with a flow or process that's tripping users up. Like act two in a movie, more issues will crop up along the way. It's here that you learn more about the characters as they grow and develop through this act.
Usability tests should typically include around five participants according to Jakob Nielsen, who found that that number of users can usually identify most of the problems: "As you add more and more users, you learn less and less because you will keep seeing the same things again and again… After the fifth user, you are wasting your time by observing the same findings repeatedly but not learning much new."
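Nielsen's guideline rests on a simple model that he published with Tom Landauer: a single test participant uncovers roughly 31 percent of the usability problems in an interface, and each additional participant mostly re-finds problems that earlier participants already surfaced. Here's a quick sketch of that published approximation (with the caveat that the 31 percent figure is their reported average, not a universal constant):

```ts
// Nielsen–Landauer approximation: the share of usability problems
// found after testing with n users, where L is the proportion that
// a single user uncovers (about 0.31 in their studies).
const L = 0.31;
const problemsFound = (n: number): number => 1 - (1 - L) ** n;

for (const n of [1, 3, 5, 10, 15]) {
  console.log(`${n} user(s): ${(problemsFound(n) * 100).toFixed(0)}% of problems found`);
}
// 1 user(s): 31%, 3 user(s): 67%, 5 user(s): 84%, 10 user(s): 98%, 15 user(s): 100%
```

By the fifth participant, the model predicts you've seen about 84 percent of the problems, which is why adding more sessions yields so little new insight.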
There are parallels with storytelling here too; if you try to tell a story with too many characters, the plot may get lost. Having fewer participants means that each user's struggles will be more memorable and easier to relay to other stakeholders when talking about the research. This can help convey the issues that need to be addressed while also highlighting the value of doing the research in the first place.
Researchers have run usability tests in person for decades, but you can also conduct usability tests remotely using tools like Microsoft Teams, Zoom, or other teleconferencing software. This approach has become increasingly popular since the beginning of the pandemic, and it works well. You can think of in-person usability tests like going to a play and remote sessions as more like watching a movie. There are advantages and disadvantages to each. In-person usability research is a much richer experience. Stakeholders can experience the sessions with other stakeholders. You also get real-time reactions—including surprise, agreement, disagreement, and discussions about what they're seeing. Much like going to a play, where audiences get to take in the stage, the costumes, the lighting, and the actors' interactions, in-person research lets you see users up close, including their body language, how they interact with the moderator, and how the scene is set up.
If in-person usability testing is like watching a play—staged and controlled—then conducting usability testing in the field is like immersive theater, where any two sessions might be very different from one another. You can take usability testing into the field by creating a replica of the space where users interact with the product and then conducting your research there. Or you can go out to meet users at their location to do your research. With either option, you get to see how things work in context, things come up that wouldn't have in a lab environment—and conversations can shift in entirely different directions. As researchers, you have less control over how these sessions go, but this can sometimes help you understand users even better. Meeting users where they are can provide clues to the external forces that could be affecting how they use your product. In-person usability tests provide another level of detail that's often missing from remote usability tests.
That's not to say that the "movies"—remote sessions—aren't a good option. Remote sessions can reach a wider audience. They allow a lot more stakeholders to be involved in the research and to see what's going on. And they open the doors to a much wider geographical pool of users. But with any remote session, there's the potential for wasted time if participants can't log in or get their microphone working.
The benefit of usability testing, whether remote or in person, is that you get to see real users interact with the designs in real time, and you can ask them questions to understand their thought processes and grasp of the solution. This can help you not only identify problems but also glean why they're problems in the first place. Furthermore, you can test hypotheses and gauge whether your thinking is correct. By the end of the sessions, you'll have a much clearer picture of how usable the designs are and whether they work for their intended purposes. Act two is the heart of the story—where the excitement is—but there can be surprises too. This is equally true of usability tests. Often, participants will say unexpected things, which change the way that you look at things—and these twists in the story can move things in new directions.
Unfortunately, user research is sometimes seen as expendable. And too often usability testing is the only research process that some stakeholders think that they ever need. In fact, if the designs that you're evaluating in the usability test aren't grounded in a solid understanding of your users (foundational research), there's not much to be gained by doing usability testing in the first place. That's because you're narrowing the focus of what you're getting feedback on, without understanding the users' needs. As a result, there's no way of knowing whether the designs might solve a problem that users have. It's only feedback on a particular design in the context of a usability test.
On the other hand, if you only do foundational research, while you might have set out to solve the right problem, you won't know whether the thing that you're building will actually solve it. This illustrates the importance of doing both foundational and directional research.
In act two, stakeholders will—hopefully—get to watch the story unfold in the user sessions, which creates the conflict and tension in the current design by surfacing its highs and lows. And in turn, this can help motivate stakeholders to address the issues that come up.
Act three: resolution
While the first two acts are about understanding the background and the tensions that can propel stakeholders into action, the third part is about resolving the problems from the first two acts. While it's important to have an audience for the first two acts, it's crucial that they stick around for the final act. That means the whole product team, including developers, UX practitioners, business analysts, delivery managers, product managers, and any other stakeholders that have a say in the next steps. It allows the whole team to hear users' feedback together, ask questions, and discuss what's possible within the project's constraints. And it lets the UX research and design teams clarify, suggest alternatives, or give more context behind their decisions. So you can get everyone on the same page and get agreement on the way forward.
This act is mostly told in voiceover with some audience participation. The researcher is the narrator, who paints a picture of the issues and what the future of the product could look like given the things that the team has learned. They give the stakeholders their recommendations and their guidance on creating this vision.
Nancy Duarte in the Harvard Business Review offers an approach to structuring presentations that follow a persuasive story. "The most effective presenters use the same techniques as great storytellers: By reminding people of the status quo and then revealing the path to a better way, they set up a conflict that needs to be resolved," writes Duarte. "That tension helps them persuade the audience to adopt a new mindset or behave differently."
This type of structure aligns well with research results, and particularly results from usability tests. It provides evidence for "what is"—the problems that you've identified—and "what could be"—your recommendations on how to address them—moving back and forth between the two throughout your story.
You can reinforce your recommendations with examples of things that competitors are doing that could address these issues or with examples where competitors are gaining an edge. Or they can be visual, like quick mockups of how a new design could look that solves a problem. These can help generate conversation and momentum. And this continues until the end of the session when you've wrapped everything up in the conclusion by summarizing the main issues and suggesting a way forward. This is the part where you reiterate the main themes or problems and what they mean for the product—the denouement of the story. This stage gives stakeholders the next steps and hopefully the momentum to take those steps!
While we are nearly at the end of this story, let's reflect on the idea that user research is storytelling. All the elements of a good story are there in the three-act structure of user research:
- Act one: You meet the protagonists (the users) and the antagonists (the problems affecting users). This is the beginning of the plot. In act one, researchers might use methods including contextual inquiry, ethnography, diary studies, surveys, and analytics. The output of these methods can include personas, empathy maps, user journeys, and analytics dashboards.
- Act two: Next, there's character development. There's conflict and tension as the protagonists encounter problems and challenges, which they must overcome. In act two, researchers might use methods including usability testing, competitive benchmarking, and heuristics evaluation. The output of these can include usability findings reports, UX strategy documents, usability guidelines, and best practices.
- Act three: The protagonists triumph and you see what a better future looks like. In act three, researchers may use methods including presentation decks, storytelling, and digital media. The output of these can be presentation decks, video clips, audio clips, and pictures.
The researcher has multiple roles: they're the storyteller, the director, and the producer. The participants have a small role, but they are significant characters (in the research). And the stakeholders are the audience. But the most important thing is to get the story right and to use storytelling to tell users' stories through research. By the end, the stakeholders should walk away with a purpose and an eagerness to resolve the product's ills.
So the next time that you're planning research with clients or you're speaking to stakeholders about research that you've done, think about how you can weave in some storytelling. Ultimately, user research is a win-win for everyone, and you just need to get stakeholders interested in how the story ends.
Picture this. You've joined a squad at your company that's designing new product features with an emphasis on automation or AI. Or your company has just implemented a personalization engine. Either way, you're designing with data. Now what? When it comes to designing for personalization, there are many cautionary tales, no overnight successes, and few guides for the perplexed.
Between the fantasy of getting it right and the fear of it going wrong—like when we encounter "persofails" in the vein of a company repeatedly imploring everyday consumers to buy additional toilet seats—the personalization gap is real. It's an especially confounding place to be a digital professional without a map, a compass, or a plan.
For those of you venturing into personalization, there's no Lonely Planet and few tour guides because effective personalization is so specific to each organization's talent, technology, and market position.
But you can ensure that your team has packed its bags sensibly.
There's a DIY formula to increase your chances for success. At minimum, you'll defuse your boss's irrational exuberance. But before the party, you'll need to prepare effectively.
We call it prepersonalization.
Behind the music
Consider Spotify's DJ feature, which debuted this past year.
We're used to seeing the polished final result of a personalization feature. Before the year-end award, the making-of backstory, or the behind-the-scenes victory lap, a personalized feature had to be conceived, budgeted, and prioritized. Before any personalization feature goes live in your product or service, it lives amid a backlog of worthy ideas for expressing customer experiences more dynamically.
So how do you know where to place your personalization bets? How do you design consistent interactions that won't trip up users or—worse—breed mistrust? We've found that for many budgeted programs to justify their ongoing investments, they first needed one or more workshops to convene key stakeholders and internal customers of the technology. Make yours count.
From Big Tech to fledgling startups, we've seen the same evolution up close with our clients. In our experience working on small and large personalization efforts, a program's ultimate track record—and its ability to weather tough questions, work steadily toward shared answers, and organize its design and technology efforts—turns on how effectively these prepersonalization activities play out.
Time and again, we've seen effective workshops separate future success stories from unsuccessful efforts, saving untold time, resources, and collective well-being in the process.
A personalization practice involves a multiyear effort of testing and feature development. It's not a switch-flip moment in your tech stack. It's best managed as a backlog that often evolves through three steps:
- customer experience optimization (CXO, also known as A/B testing or experimentation)
- always-on automations (whether rules-based or machine-generated)
- mature features or standalone product development (such as Spotify's DJ experience)
This is why we created our progressive personalization framework and why we're field-testing an accompanying deck of cards: we believe that there's a base grammar, a set of "nouns and verbs" that your organization can use to design experiences that are customized, personalized, or automated. You won't need these cards. But we strongly recommend that you create something similar, whether that might be digital or physical.
Set your kitchen timer
How long does it take to cook up a prepersonalization workshop? The surrounding assessment activities that we recommend including can (and often do) span weeks. For the core workshop, we recommend aiming for two to three days. Here's a summary of our broader approach along with details on the essential first-day activities.
The full arc of the wider workshop is threefold:
- Kickstart: This sets the terms of engagement as you focus on the opportunity as well as the readiness and drive of your team and your leadership.
- Plan your work: This is the heart of the card-based workshop activities where you specify a plan of attack and the scope of work.
- Work your plan: This phase is all about creating a competitive environment for team participants to individually pitch their own pilots that each contain a proof-of-concept project, its business case, and its operating model.
Give yourself at least a day, split into two large time blocks, to power through a concentrated version of those first two phases.
Kickstart: Whet your appetite
We call the first lesson the "landscape of connected experience." It explores the personalization possibilities in your organization. A connected experience, in our parlance, is any UX requiring the orchestration of multiple systems of record on the backend. This could be a content-management system combined with a marketing-automation platform. It could be a digital-asset manager combined with a customer-data platform.
Spark conversation by naming consumer examples and business-to-business examples of connected experience interactions that you admire, find familiar, or even dislike. This should cover a representative range of personalization patterns, including automated app-based interactions (such as onboarding sequences or wizards), notifications, and recommenders. We have a catalog of these in the cards. Here's a list of 142 different interactions to jog your thinking.
This is all about setting the table. What are the possible paths for the practice in your organization? If you want a broader view, here's a long-form primer and a strategic framework.
Assess each example that you discuss for its complexity and the level of effort that you estimate that it would take for your team to deliver that feature (or something similar). In our cards, we divide connected experiences into five levels: functions, features, experiences, complete products, and portfolios. Size your own build here. This will help to focus the conversation on the merits of ongoing investment as well as the gap between what you deliver today and what you want to deliver in the future.
Next, have your team plot each idea on the following 2×2 grid, which lays out the four enduring arguments for a personalized experience. This is critical because it emphasizes how personalization can not only help your external customers but also affect your own ways of working. It's also a reminder (which is why we used the word argument earlier) of the broader effort beyond these tactical interventions.
Each team member should vote on where they see your product or service putting its emphasis. Naturally, you can't prioritize all of them. The intention here is to flesh out how different departments may view their own upsides to the effort, which can vary from one to the next. Documenting your desired outcomes lets you know how the team internally aligns across representatives from different departments or functional areas.
The third and final kickstart activity is about naming your personalization gap. Is your customer journey well documented? Will data and privacy compliance be too big of a challenge? Do you have content metadata needs that you have to address? (We're pretty sure that you do: it's just a matter of recognizing the relative size of that need and its remedy.) In our cards, we've noted a number of program risks, including common team dispositions. Our Detractor card, for example, lists six stakeholder behaviors that hinder progress.
Effectively collaborating and managing expectations is critical to your success. Consider the potential barriers to your future progress. Press the participants to name specific steps to overcome or mitigate those barriers in your organization. As studies have shown, personalization efforts face many common barriers.
At this point, you've hopefully discussed sample interactions, emphasized a key area of benefit, and flagged key gaps. Good—you're ready to continue.
Hit that test kitchen
Next, let's look at what you'll need to bring your personalization recipes to life. Personalization engines, which are robust software suites for automating and expressing dynamic content, can intimidate new customers. Their capabilities are sweeping and powerful, and they present broad options for how your organization can conduct its activities. This presents the question: Where do you begin when you're configuring a connected experience?
What's important here is to avoid treating the installed software as if it were a dream kitchen from some fantasy remodeling project (as one of our client executives memorably put it). These software engines are more like test kitchens where your team can begin devising, tasting, and refining the snacks and meals that will become a part of your personalization program's regularly evolving menu.
The ultimate menu of the prioritized backlog will come together over the course of the workshop. And creating "dishes" is the way that you'll have individual team stakeholders construct personalized interactions that serve their needs or the needs of others.
The dishes will come from recipes, and those recipes have set ingredients.
Verify your ingredients
Like a good product manager, you'll make sure—and you'll validate with the right stakeholders present—that you have all the ingredients on hand to cook up your desired interaction (or that you can work out what needs to be added to your pantry). These ingredients include the audience that you're targeting, content and design elements, the context for the interaction, and your measure for how it'll come together.
This isn't just about discovering requirements. Documenting your personalizations as a series of if-then statements lets the team:
- compare findings toward a unified approach for developing features, not unlike when artists paint with the same palette;
- specify a consistent set of interactions that users find uniform or familiar;
- and develop parity across performance measurements and key performance indicators too.
This helps you streamline your designs and your technical efforts while you deliver a shared palette of core motifs of your personalized or automated experience.
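To make that concrete, here's a minimal sketch of what a couple of those if-then statements might look like in code. It's purely illustrative: the type names, fields, and conditions are invented for this example, not drawn from any particular personalization engine's API.

```ts
// A personalization documented as an if-then statement: a named
// condition over visitor data paired with the interaction to express.
type Visitor = { isRegistered: boolean; visitsThisMonth: number };

type Rule = {
  name: string;
  when: (v: Visitor) => boolean; // the "if": a triggering condition
  then: string;                  // the "then": the interaction shown
};

const rules: Rule[] = [
  {
    name: "Nurture unknown visitors",
    when: (v) => !v.isRegistered,
    then: "Show a banner recommending related content",
  },
  {
    name: "Reward frequent readers",
    when: (v) => v.isRegistered && v.visitsThisMonth >= 8,
    then: "Surface a loyalty offer in the account menu",
  },
];

// Because every rule shares one shape, the team can compare rules
// side by side and measure each one against the same KPIs.
const actionsFor = (v: Visitor): string[] =>
  rules.filter((r) => r.when(v)).map((r) => r.then);
```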
Compose your recipe
What ingredients are important to you? Think of a who-what-when-why construct:
- Who are your key audience segments or groups?
- What kind of content will you give them, in what design elements, and under what circumstances?
- And for which business and user benefits?
We first developed these cards and card categories five years ago. We regularly play-test their fit with conference audiences and clients. And we still encounter new possibilities. But they all follow an underlying who-what-when-why logic.
Here are three examples for a subscription-based reading app, which you can generally follow along with right to left in the cards in the accompanying photo below.
- Nurture personalization: When a guest or an unknown visitor interacts with a product title, a banner or alert bar appears that makes it easier for them to encounter a related title they may want to read, saving them time.
- Welcome automation: When there's a newly registered user, an email is generated to call out the breadth of the content catalog and to make them a happier subscriber.
- Winback automation: Before their subscription lapses or after a recent failed renewal, a user is sent an email that gives them a promotional offer to suggest that they reconsider renewing or to remind them to renew.
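And here's how the winback automation above might be captured in that who-what-when-why construct as structured data. Again, the field names are a hypothetical sketch rather than a format taken from our cards:

```ts
// The winback automation recipe, expressed as who-what-when-why.
type Recipe = {
  who: string;  // the audience segment
  what: string; // the content and design element
  when: string; // the circumstance that triggers it
  why: string;  // the business and user benefit
};

const winbackAutomation: Recipe = {
  who: "subscriber whose renewal lapsed or recently failed",
  what: "email with a promotional offer and a renew link",
  when: "before the subscription lapses or after a failed renewal",
  why: "recover the renewal and save the user a cancellation",
};
```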
A useful preworkshop activity may be to think through a first draft of what these cards might be for your organization, although we've also found that this process sometimes flows best through cocreating the recipes themselves. Start with a set of blank cards, and begin labeling and grouping them through the design process, eventually distilling them to a refined subset of highly useful candidate cards.
You can think of the later stages of the workshop as moving from recipes toward a cookbook in focus—like a more nuanced customer-journey mapping. Individual "cooks" will pitch their recipes to the team, using a common jobs-to-be-done format so that measurability and results are baked in, and from there, the resulting collection will be prioritized for finished design and delivery to production.
Better kitchens require better architecture
Simplifying a customer experience is a complicated effort for those who are inside delivering it. Beware anyone who says otherwise. With that being said, "Complicated problems can be hard to solve, but they are addressable with rules and recipes."
When personalization becomes a laugh line, it's because a team is overfitting: they aren't designing with their best data. Like a sparse pantry, every organization has metadata debt to go along with its technical debt, and this creates a drag on personalization effectiveness. Your AI's output quality, for example, is indeed limited by your IA. Spotify's poster-child prowess today was unfathomable before they acquired a seemingly modest metadata startup that now powers its underlying information architecture.
You can definitely stand the heat…
Personalization technology opens a doorway into a confounding ocean of possible designs. Only a disciplined and highly collaborative approach will bring about the necessary focus and intention to succeed. So banish the dream kitchen. Instead, hit the test kitchen to save time, preserve job satisfaction and security, and safely dispense with the fanciful ideas that originate upstairs of the doers in your organization. There are meals to serve and mouths to feed.
This workshop framework gives you a fighting shot at lasting success as well as sound beginnings. Wiring up your information layer isn't an overnight affair. But if you use the same cookbook and shared recipes, you'll have solid footing for success. We designed these activities to make your organization's needs concrete and clear, long before the hazards pile up.
While there are real costs to investing in this kind of technology and product design, the work of sizing up and confronting your unique situation and your digital capabilities is time well spent. Don't squander it. The proof, as they say, is in the pudding.
I offer a single bit of advice to friends and family when they become new parents: When you start to think that you've got everything figured out, everything will change. Just as you start to get the hang of feedings, diapers, and regular naps, it's time for solid food, potty training, and overnight sleeping. When you figure those out, it's time for preschool and rare naps. The cycle goes on and on.
The same applies for those of us working in design and development these days. Having worked on the web for almost three decades at this point, I've seen the regular wax and wane of ideas, techniques, and technologies. Each time that we as developers and designers get into a regular rhythm, some new idea or technology comes along to shake things up and remake our world.
How we got here
I built my first website in the mid-'90s. Design and development on the web back then was a free-for-all, with few established norms. For any layout aside from a single column, we used table elements, often with empty cells containing a single-pixel spacer GIF to add empty space. We styled text with numerous font tags, nesting the tags every time we wanted to vary the font style. And we had only three or four typefaces to choose from: Arial, Courier, or Times New Roman. When Verdana and Georgia came out in 1996, we rejoiced because our options had nearly doubled. The only safe colors to choose from were the 216 "web safe" colors known to work across platforms. The few interactive elements (like contact forms, guest books, and counters) were mostly powered by CGI scripts (predominantly written in Perl at the time). Achieving any kind of unique look involved a pile of hacks all the way down. Interaction was often limited to specific pages in a site.
The birth of web standards
At the turn of the century, a new cycle started. Crufty code littered with table layouts and font tags waned, and a push for web standards waxed. Newer technologies like CSS got more widespread adoption by browser makers, developers, and designers. This shift toward standards didn't happen accidentally or overnight. It took active engagement between the W3C and browser vendors and heavy evangelism from folks like the Web Standards Project to build standards. A List Apart and books like Designing with Web Standards by Jeffrey Zeldman played key roles in teaching developers and designers why standards are important, how to implement them, and how to sell them to their organizations. And approaches like progressive enhancement introduced the idea that content should be available for all browsers—with additional enhancements available for more advanced browsers. Meanwhile, sites like the CSS Zen Garden showcased just how powerful and versatile CSS can be when combined with a solid semantic HTML structure.
Server-side languages like PHP, Java, and .NET overtook Perl as the predominant back-end processors, and the cgi-bin was tossed in the trash bin. With these better server-side tools came the first era of web applications, starting with content-management systems (particularly in the blogging space with tools like Blogger, Grey Matter, Movable Type, and WordPress). In the mid-2000s, AJAX opened doors for asynchronous interaction between the front end and back end. Suddenly, pages could update their content without needing to reload. A crop of JavaScript frameworks like Prototype, YUI, and jQuery arose to help developers build more reliable client-side interaction across browsers that had wildly varying levels of standards support. Techniques like image replacement let crafty designers and developers display fonts of their choosing. And technologies like Flash made it possible to add animations, games, and even more interactivity.
These new technologies, standards, and techniques reinvigorated the industry in many ways. Web design flourished as designers and developers explored more diverse styles and layouts. But we still relied on tons of hacks. Early CSS was a huge improvement over table-based layouts when it came to basic layout and text styling, but its limitations at the time meant that designers and developers still relied heavily on images for complex shapes (such as rounded or angled corners) and tiled backgrounds for the appearance of full-length columns (among other hacks). Complicated layouts required all manner of nested floats or absolute positioning (or both). Flash and image replacement for custom fonts was a great start toward varying the typefaces from the big five, but both hacks introduced accessibility and performance problems. And JavaScript libraries made it easy for anyone to add a dash of interaction to pages, although at the cost of doubling or even quadrupling the download size of simple websites.
The web as software platform
The symbiosis between the front end and back end continued to improve, and that led to the current era of modern web applications. Between expanded server-side programming languages (which kept growing to include Ruby, Python, Go, and others) and newer front-end tools like React, Vue, and Angular, we could build fully capable software on the web. Alongside these tools came others, including collaborative version control, build automation, and shared package libraries. What was once primarily an environment for linked documents became a realm of infinite possibilities.
At the same time, mobile devices became more capable, and they gave us internet access in our pockets. Mobile apps and responsive design opened up opportunities for new interactions anywhere and any time.
This combination of capable mobile devices and powerful development tools contributed to the waxing of social media and other centralized tools for people to connect and consume. As it became easier and more common to connect with others directly on Twitter, Facebook, and even Slack, the desire for hosted personal sites waned. Social media offered connections on a global scale, with both the good and bad that that entails.
Want a much more extensive history of how we got here, with some other takes on ways that we can improve? Jeremy Keith wrote "Of Time and the Web." Or check out the "Web Design History Timeline" at the Web Design Museum. Neal Agarwal also has a fun tour through "Internet Artifacts."
Where we are now
In the last couple of years, it's felt like we've begun to reach another major inflection point. As social-media platforms fracture and wane, there's been a growing interest in owning our own content again. There are many different ways to make a website, from the tried-and-true classic of hosting plain HTML files to static site generators to content management systems of all flavors. The fracturing of social media also comes with a cost: we lose crucial infrastructure for discovery and connection. Webmentions, RSS, ActivityPub, and other tools of the IndieWeb can help with this, but they're still relatively underimplemented and hard to use for the less nerdy. We can build amazing personal websites and add to them regularly, but without discovery and connection, it can sometimes feel like we may as well be shouting into the void.
Browser support for CSS, JavaScript, and other standards like web components has accelerated, especially through efforts like Interop. New technologies gain support across the board in a fraction of the time that they used to. I often learn about a new feature and check its browser support only to find that its coverage is already above 80 percent. Nowadays, the barrier to using newer techniques often isn't browser support but simply the limits of how quickly designers and developers can learn what's available and how to adopt it.
Today, with a few commands and a couple of lines of code, we can prototype almost any idea. All the tools that we now have available make it easier than ever to start something new. But the upfront cost that these frameworks may save in initial delivery eventually comes due as upgrading and maintaining them becomes a part of our technical debt.
If we rely on third-party frameworks, adopting new standards can sometimes take longer since we may have to wait for those frameworks to adopt those standards. These frameworks—which used to let us adopt new techniques sooner—have now become hindrances instead. These same frameworks often come with performance costs too, forcing users to wait for scripts to load before they can read or interact with pages. And when scripts fail (whether through poor code, network issues, or other environmental factors), there's often no alternative, leaving users with blank or broken pages.
Where do we go from here?
Today's hacks help to shape tomorrow's standards. And there's nothing inherently wrong with embracing hacks—for now—to move the present forward. Problems only arise when we're unwilling to admit that they're hacks or we hesitate to replace them. So what can we do to create the future we want for the web?
Build for the long haul. Optimize for performance, for accessibility, and for the user. Weigh the costs of those developer-friendly tools. They may make your job a little easier today, but how do they affect everything else? What's the cost to users? To future developers? To standards adoption? Sometimes the convenience may be worth it. Sometimes it's just a hack that you've grown accustomed to. And sometimes it's holding you back from even better options.
Start from standards. Standards continue to evolve over time, but browsers have done a remarkably good job of continuing to support older standards. The same isn't always true of third-party frameworks. Sites built with even the hackiest of HTML from the '90s still work just fine today. The same can't always be said of sites built with frameworks even after just a couple years.
Design with care. Whether your craft is code, pixels, or processes, consider the impacts of each decision. The convenience of many a modern tool comes at the cost of not always understanding the underlying decisions that have led to its design and not always considering the impact that those decisions can have. Rather than rushing headlong to "move fast and break things," use the time saved by modern tools to consider more carefully and design with deliberation.
Always be learning. If you're always learning, you're also growing. Sometimes it may be hard to pinpoint what's worth learning and what's just today's hack. You might end up focusing on something that won't matter next year, even if you were to focus solely on learning standards. (Remember XHTML?) But constant learning opens up new connections in your brain, and the hacks that you learn one day may help to inform different experiments another day.
Play, experiment, and be weird! This web that we've built is the ultimate experiment. It's the single largest human endeavor in history, and yet each of us can create our own pocket within it. Be courageous and try new things. Build a playground for ideas. Make goofy experiments in your own mad science lab. Start your own small business. There has never been a more empowering place to be creative, take risks, and explore what we're capable of.
Share and amplify. As you experiment, play, and learn, share what's worked for you. Write on your own website, post on whichever social media site you prefer, or shout it from a TikTok. Write something for A List Apart! But take the time to amplify others too: find new voices, learn from them, and share what they've taught you.
Go forth and make
As designers and developers for the web (and beyond), we're responsible for building the future every day, whether that may take the shape of personal websites, social media tools used by billions, or anything in between. Let's imbue our values into the things that we create, and let's make the web a better place for everyone. Create that thing that only you are uniquely qualified to make. Then share it, make it better, make it again, or make something new. Learn. Make. Share. Grow. Rinse and repeat. Every time you think that you've mastered the web, everything will change.
In reading Joe Dolson's recent piece on the intersection of AI and accessibility, I absolutely appreciated the skepticism that he has for AI in general as well as for the ways that many have been using it. In fact, I'm very skeptical of AI myself, despite my role at Microsoft as an accessibility innovation strategist who helps run the AI for Accessibility grant program. As with any tool, AI can be used in very constructive, inclusive, and accessible ways; and it can also be used in destructive, exclusive, and harmful ones. And there are a ton of uses somewhere in the mediocre middle as well.
I'd like you to consider this a "yes… and" piece to complement Joe's post. I'm not trying to refute any of what he's saying but rather provide some visibility to projects and opportunities where AI can make meaningful differences for people with disabilities. To be clear, I'm not saying that there aren't real risks or pressing issues with AI that need to be addressed—there are, and we've needed to address them, like, yesterday—but I want to take a little time to talk about what's possible in hopes that we'll get there one day.
Alternative text
Joe's piece spends a lot of time talking about computer-vision models generating alternative text. He highlights a ton of valid issues with the current state of things. And while computer-vision models continue to improve in the quality and richness of detail in their descriptions, their results aren't great. As he rightly points out, the current state of image analysis is pretty poor—especially for certain image types—in large part because current AI systems examine images in isolation rather than within the contexts that they're in (which is a consequence of having separate "foundation" models for text analysis and image analysis). Today's models aren't trained to distinguish between images that are contextually relevant (that should probably have descriptions) and those that are purely decorative (which might not need a description) either. Still, I think there's potential in this space.
As Joe mentions, human-in-the-loop authoring of alt text should absolutely be a thing. And if AI can pop in to offer a starting point for alt text—even if that starting point might be a prompt saying What is this BS? That's not right at all… Let me try to offer a starting point—I think that's a win.
Taking things a step further, if we can specifically train a model to analyze image usage in context, it could help us more quickly identify which images are likely to be decorative and which ones likely require a description. That will help reinforce which contexts call for image descriptions, and it'll improve authors' efficiency toward making their pages more accessible.
While complex images—like graphs and charts—are challenging to describe in any sort of succinct way (even for humans), the image example shared in the GPT-4 announcement points to an interesting opportunity as well. Let's suppose that you came across a chart whose description was simply the title of the chart and the kind of visualization it was, such as: Pie chart comparing smartphone usage to feature phone usage among US households making under $30,000 a year. (That would be a pretty awful alt text for a chart since it would tend to leave many questions about the data unanswered, but then again, let's suppose that that was the description that was in place.) If your browser knew that that image was a pie chart (because an onboard model concluded this), imagine a world where users could ask questions like these about the graphic:
- Do more people use smartphones or feature phones?
- How many more?
- Is there a group of people that don't fall into either of these buckets?
- How many is that?
Setting aside the realities of large language model (LLM) hallucinations—where a model just makes up plausible-sounding "facts"—for a moment, the opportunity to learn more about images and data in this way could be revolutionary for blind and low-vision folks as well as for people with various forms of color blindness, cognitive disabilities, and so on. It could also be useful in educational contexts to help people who can see these charts, as is, to understand the data in the charts.
Taking things a step further: What if you could ask your browser to simplify a complex chart? What if you could ask it to isolate a single line on a line graph? What if you could ask your browser to transpose the colors of the different lines to work better for the form of color blindness that you have? What if you could ask it to swap colors for patterns? Given these tools' chat-based interfaces and our existing ability to manipulate images in today's AI tools, that seems like a possibility.
Now imagine a purpose-built model that could extract the information from that chart and convert it to another format. For example, perhaps it could turn that pie chart (or better yet, a series of pie charts) into more accessible (and useful) formats, like spreadsheets. That would be amazing!
Matching algorithms
Safiya Umoja Noble absolutely hit the nail on the head when she titled her book Algorithms of Oppression. While her book was focused on the ways that search engines reinforce racism, I think that it's equally true that all computer models have the potential to amplify conflict, bias, and intolerance. Whether it's Twitter always showing you the latest tweet from a bored billionaire, YouTube sending us into a Q-hole, or Instagram warping our ideas of what natural bodies look like, we know that poorly authored and maintained algorithms are incredibly harmful. A lot of this stems from a lack of diversity among the people who shape and build them. When these platforms are built with inclusivity baked in, however, there's real potential for algorithm development to help people with disabilities.
Take Mentra, for example. They are an employment network for neurodivergent people. They use an algorithm to match job seekers with potential employers based on over 75 data points. On the job-seeker side of things, it considers each candidate's strengths, their necessary and preferred workplace accommodations, environmental sensitivities, and so on. On the employer side, it considers each work environment, communication factors related to each job, and the like. As a company run by neurodivergent folks, Mentra made the decision to flip the script when it came to typical employment sites. They use their algorithm to propose available candidates to companies, who can then connect with job seekers that they are interested in, reducing the emotional and physical labor on the job-seeker side of things.
When more people with disabilities are involved in the creation of algorithms, that can reduce the chances that these algorithms will inflict harm on their communities. That's why diverse teams are so important.
Imagine that a social media company's recommendation engine was tuned to analyze who you're following and to prioritize follow recommendations for people who talked about similar things but who were different in some key ways from your existing sphere of influence. For example, if you were to follow a bunch of nondisabled white male academics who talk about AI, it could suggest that you follow academics who are disabled or aren't white or aren't male who also talk about AI. If you took its recommendations, perhaps you'd get a more holistic and nuanced understanding of what's happening in the AI field. These same systems should also use their understanding of biases about particular communities—including, for instance, the disability community—to make sure that they aren't recommending that any of their users follow accounts that perpetuate biases against (or, worse, spew hate toward) those groups.
Other ways that AI can help people with disabilities
If I weren't trying to put this together between other tasks, I'm sure that I could go on and on, providing all kinds of examples of how AI could be used to help people with disabilities, but I'm going to make this last section into a bit of a lightning round. In no particular order:
- Voice preservation. You may have seen the VALL-E paper or Apple's Global Accessibility Awareness Day announcement, or you may be familiar with the voice-preservation offerings from Microsoft, Acapela, or others. It's possible to train an AI model to replicate your voice, which can be a tremendous boon for people who have ALS (Lou Gehrig's disease) or motor-neuron disease or other medical conditions that can lead to an inability to talk. This is, of course, the same tech that can also be used to create audio deepfakes, so it's something that we need to approach responsibly, but the tech has truly transformative potential.
- Voice recognition. Researchers like those in the Speech Accessibility Project are paying people with disabilities for their help in collecting recordings of people with atypical speech. As I type, they are actively recruiting people with Parkinson's and related conditions, and they have plans to expand this to other conditions as the project progresses. This research will result in more inclusive data sets that will let more people with disabilities use voice assistants, dictation software, and voice-response services as well as control their computers and other devices more easily, using only their voice.
- Text transformation. The current generation of LLMs is quite capable of adjusting existing text content without injecting hallucinations. This is hugely empowering for people with cognitive disabilities who may benefit from text summaries or simplified versions of text or even text that's prepped for Bionic Reading.
The importance of diverse teams and data
We need to recognize that our differences matter. Our lived experiences are influenced by the intersections of the identities that we exist in. These lived experiences—with all their complexities (and joys and pain)—are valuable inputs to the software, services, and societies that we shape. Our differences need to be represented in the data that we use to train new models, and the folks who contribute that valuable information need to be compensated for sharing it with us. Inclusive data sets yield more robust models that foster more equitable outcomes.
Want a model that doesn't demean or patronize or objectify people with disabilities? Make sure that you have content about disabilities that's authored by people with a range of disabilities, and make sure that that's well represented in the training data.
Want a model that doesn't use ableist language? You may be able to use existing data sets to build a filter that can intercept and remediate ableist language before it reaches readers. That being said, when it comes to sensitivity reading, AI models won't be replacing human copy editors anytime soon.
Want a coding copilot that gives you accessible recommendations from the jump? Train it on code that you know to be accessible.
I have no doubt that AI can and will harm people… today, tomorrow, and well into the future. But I also believe that we can acknowledge that and, with an eye towards accessibility (and, more broadly, inclusion), make thoughtful, considerate, and intentional changes in our approaches to AI that will reduce harm over time as well. Today, tomorrow, and well into the future.
Many thanks to Kartik Sawhney for helping me with the development of this piece, Ashley Bischoff for her invaluable editorial assistance, and, of course, Joe Dolson for the prompt.
I am a creative. What I do is alchemy. It is a mystery. I do not so much do it, as let it be done through me.
I am a creative. Not all creative people like this label. Not all see themselves this way. Some creative people see science in what they do. That is their truth, and I respect it. Maybe I even envy them, a little. But my process is different—my being is different.
Apologizing and qualifying in advance is a distraction. That's what my brain does to sabotage me. I set it aside for now. I can come back later to apologize and qualify. After I've said what I came to say. Which is hard enough.
Except when it is easy and flows like a river of wine.
Sometimes it does come that way. Sometimes what I need to create comes in an instant. I have learned not to say it at that moment, because if you admit that sometimes the idea just comes and it is the best idea and you know it is the best idea, they think you don't work hard enough.
Sometimes I work and work and work until the idea comes. Sometimes it comes instantly and I don't tell anyone for three days. Sometimes I'm so excited by the idea that came instantly that I blurt it out, can't help myself. Like a boy who found a prize in his Cracker Jacks. Sometimes I get away with this. Sometimes other people agree: yes, that is the best idea. Most times they don't, and I regret having given way to enthusiasm.
Enthusiasm is best saved for the meeting where it will make a difference. Not the casual get-together that precedes that meeting by two other meetings. Nobody knows why we have all these meetings. We keep saying we're doing away with them, but then just finding other ways to have them. Sometimes they are even good. But other times they are a distraction from the actual work. The proportion between when meetings are useful, and when they are a pitiful distraction, varies, depending on what you do and where you do it. And who you are and how you do it. Again I digress. I am a creative. That is the theme.
Sometimes many hours of hard and patient work produce something that is barely serviceable. Sometimes I have to accept that and move on to the next project.
Don't ask about process. I am a creative.
I am a creative. I don't control my dreams. And I don't control my best ideas.
I can hammer away, surround myself with facts or images, and sometimes that works. I can go for a walk, and sometimes that works. I can be making dinner and there's a Eureka having nothing to do with sizzling oil and bubbling pots. Often I know what to do the instant I wake up. And then, almost as often, as I become conscious and part of the world again, the idea that would have saved me turns to vanishing dust in a mindless wind of oblivion. For creativity, I believe, comes from that other world. The one we enter in dreams, and perhaps, before birth and after death. But that's for poets to wonder, and I am not a poet. I am a creative. And it's for theologians to mass armies about in their creative world that they insist is real. But that is another digression. And a depressing one. Maybe on a much more important topic than whether I am a creative or not. But still a digression from what I came here to say.
Sometimes the process is avoidance. And agony. You know the clichĂŠ about the tortured artist? Itâs true, even when the artist (and letâs put that noun in quotes) is trying to write a soft drink jingle, a callback in a tired sitcom, a budget request.
Some people who hate being called creative may be closeted creatives, but thatâs between them and their gods. No offense meant. Your truth is true, too. But mine is for me.Â
Creatives recognize creatives.
Creatives recognize creatives like queers recognize queers, like real rappers recognize real rappers, like cons know cons. Creatives feel massive respect for creatives. We love, honor, emulate, and practically deify the great ones. To deify any human is, of course, a tragic mistake. We have been warned. We know better. We know people are just people. They squabble, they are lonely, they regret their most important decisions, they are poor and hungry, they can be cruel, they can be just as stupid as we can, because, like us, they are clay. But. But. But they make this amazing thing. They birth something that did not exist before them, and could not exist without them. They are the mothers of ideas. And I suppose, since it's just lying there, I have to add that they are the mothers of invention. Ba dum bum! OK, that's done. Continue.
Creatives belittle our own small achievements, because we compare them to those of the great ones. Beautiful animation! Well, I'm no Miyazaki. Now THAT is greatness. That is greatness straight from the mind of God. This half-starved little thing that I made? It more or less fell off the back of the turnip truck. And the turnips weren't even fresh.
Creatives know that, at best, they are Salieri. Even the creatives who are Mozart believe that.
I am a creative. I haven't worked in advertising in 30 years, but in my nightmares, it's my former creative directors who judge me. And they are right to do so. I am too lazy, too facile, and when it really counts, my mind goes blank. There is no pill for creative dysfunction.
I am a creative. Every deadline I make is an adventure that makes Indiana Jones look like a pensioner snoring in a deck chair. The longer I remain a creative, the faster I am when I do my work and the longer I brood and walk in circles and stare blankly before I do that work.
I am still 10 times faster than people who are not creative, or people who have only been creative a short while, or people who have only been professionally creative a short while. It's just that, before I work 10 times as fast as they do, I spend twice as long as they do putting the work off. I am that confident in my ability to do a great job when I put my mind to it. I am that addicted to the adrenaline rush of postponement. I am still that afraid of the jump.
I am not an artist.
I am a creative. Not an artist. Though I dreamed, as a lad, of someday being that. Some of us belittle our gifts and dislike ourselves because we are not Michelangelos and Warhols. That is narcissism, but at least we aren't in politics.
I am a creative. Though I believe in reason and science, I decide by intuition and impulse. And live with what follows: the catastrophes as well as the triumphs.
I am a creative. Every word I've said here will annoy other creatives, who see things differently. Ask two creatives a question, get three opinions. Our disagreement, our passion about it, and our commitment to our own truth are, at least to me, the proofs that we are creatives, no matter how we may feel about it.
I am a creative. I lament my lack of taste in the areas about which I know very little, which is to say almost all areas of human knowledge. And I trust my taste above all other things in the areas closest to my heart, or perhaps, more accurately, to my obsessions. Without my obsessions, I would probably have to spend my time looking life in the eye, and almost none of us can do that for long. Not honestly. Not really. Because much in life, if you really look at it, is unbearable.
I am a creative. I believe, as a parent believes, that when I am gone, some small good part of me will carry on in the mind of at least one other person.
Working saves me from worrying about work.
I am a creative. I live in dread of my small gift suddenly going away.
I am a creative. I am too busy making the next thing to spend too much time deeply considering that almost nothing I make will come anywhere near the greatness I comically aspire to.
I am a creative. I believe in the ultimate mystery of process. I believe in it so much, I am even fool enough to publish an essay I dictated into a tiny machine and didn't take time to review or revise. I won't do this often, I promise. But I did it just now, because, as afraid as I might be of your seeing through my pitiful gestures toward the beautiful, I was even more afraid of forgetting what I came to say.
There. I think I've said it.
Humility, a designer's essential value: that has a nice ring to it. What about humility, an office manager's essential value? Or a dentist's? Or a librarian's? They all sound great. When humility is our guiding light, the path is always open for fulfillment, evolution, connection, and engagement. In this chapter, we're going to talk about why.
That said, this is a book for designers, and to that end, I'd like to start with a story (well, a journey, really). It's a personal one, and I'm going to make myself a bit vulnerable along the way. I call it:
The Tale of Justin's Preposterous Pate
When I was coming out of art school, a long-haired, goateed neophyte, print was a known quantity to me; design on the web, however, was rife with complexities to navigate and discover, a problem to be solved. Though I had been formally trained in graphic design, typography, and layout, what fascinated me was how these traditional skills might be applied to a fledgling digital landscape. This theme would ultimately shape the rest of my career.
So rather than graduate and go into print like many of my friends, I devoured HTML and JavaScript books into the wee hours of the morning and taught myself how to code during my senior year. I wanted, nay, needed, to better understand the underlying implications of what my design decisions would mean once rendered in a browser.
The late '90s and early 2000s were the so-called "Wild West" of web design. Designers at the time were all figuring out how to apply design and visual communication to the digital landscape. What were the rules? How could we break them and still engage, entertain, and convey information? At a more macro level, how could my values, inclusive of humility, respect, and connection, align in tandem with that? I was hungry to find out.
Though I'm talking about a different era, these considerations are timeless, reaching beyond any one medium or career stage. What are your core passions, or values, that transcend medium? It's essentially the same concept we discussed earlier about the direct parallels between what fulfills you, agnostic of the tangible or digital realms; the core themes are all the same.
First within tables, animated GIFs, and Flash, then with Web Standards, divs, and CSS, there was personality, raw unbridled creativity, and unique means of presentment that often defied any semblance of a visible grid. Splash screens and "browser requirement" pages aplenty. Usability and accessibility were typically the victims of such creations; these paramount facets of any digital design were largely (and, in hindsight, unfairly) disregarded in favor of experimentation.
For example, this iteration of my personal portfolio site ("the pseudoroom") from that era was experimental, if not a bit heavy-handed, in the visual communication of the concept of a living sketchbook. Very skeuomorphic. I collaborated with fellow designer and dear friend Marc Clancy (now a co-founder of the creative project organizing app Milanote) on this one, where we'd first sketch and then pass a Photoshop file back and forth to trick things out and play with varied user interactions. Then, I'd break it down and code it into a digital layout.
Along with design folio pieces, the site also offered free downloads for Mac OS customizations: desktop wallpapers that were effectively design experimentation, custom-designed typefaces, and desktop icons.
From around the same time, GUI Galaxy was a design, pixel art, and Mac-centric news portal some graphic designer friends and I conceived, designed, developed, and deployed.
Design news portals were incredibly popular during this period, featuring (what would now be considered) Tweet-size, small-format snippets of pertinent news from the categories I previously mentioned. If you took Twitter, curated it to a few categories, and wrapped it in a custom-branded experience, you'd have a design news portal from the late '90s / early 2000s.
We as designers had evolved and created a bandwidth-sensitive, web standards award-winning, much more accessibility-conscious website. Still ripe with experimentation, yet more mindful of equitable engagement. You can see a couple of content panes here, noting general news (tech, design) and Mac-centric news below. We also offered many of the custom downloads I cited before as present on my folio site but branded and themed to GUI Galaxy.
The site's backbone was a homegrown CMS, with the presentation layer consisting of global design + illustration + news author collaboration. And the collaboration effort here, in addition to experimentation on a "brand" and content delivery, was hitting my core. We were designing something bigger than any single one of us and connecting with a global audience.
Collaboration and connection transcend medium in their impact, immensely fulfilling me as a designer.
Now, why am I taking you down this trip of design memory lane? Two reasons.
First, there's a reason for the nostalgia for that design era (the "Wild West" era, as I called it earlier): the inherent exploration, personality, and creativity that saturated many design portals and personal portfolio sites. Ultra-finely detailed pixel art UI, custom illustration, bespoke vector graphics, all underpinned by a strong design community.
Today's web design has been in a period of stagnation. I suspect there's a strong chance you've seen a site whose structure looks something like this: a hero image / banner with text overlaid, perhaps with a lovely rotating carousel of images (laying the snark on heavy there), a call to action, and three columns of sub-content directly beneath. Maybe an icon library is employed with selections that vaguely relate to their respective content.
Design, as it's applied to the digital landscape, is in dire need of thoughtful layout, typography, and visual engagement that goes hand-in-hand with all the modern considerations we now know are paramount: usability. Accessibility. Load times and bandwidth-sensitive content delivery. A responsive presentation that meets human beings wherever they're engaging from. We must be mindful of, and respectful toward, those concerns, but not at the expense of creativity in visual communication or by replicating cookie-cutter layouts.
Pixel Problems
Websites during this period were often designed and built on Macs whose OS and desktops looked something like this. This is Mac OS 7.5, but 8 and 9 weren't that different.
Desktop icons fascinated me: how could any single one, at any given point, stand out to get my attention? In this example, the user's desktop is tidy, but think of a more realistic example with icon pandemonium. Or, say an icon was part of a larger system grouping (fonts, extensions, control panels): how did it also maintain cohesion amongst a group?
These were 32 x 32 pixel creations, utilizing a 256-color palette, designed pixel-by-pixel as mini mosaics. To me, this was the embodiment of digital visual communication under such ridiculous constraints. And often, ridiculous restrictions can yield the purification of concept and theme.
So I began to research and do my homework. I was a student of this new medium, hungry to dissect, process, discover, and make it my own.
Expanding upon the notion of exploration, I wanted to see how I could push the limits of a 32x32 pixel grid with that 256-color palette. Those ridiculous constraints forced a clarity of concept and presentation that I found incredibly appealing. The digital gauntlet had been tossed, and that challenge fueled me. And so, in my dorm room into the wee hours of the morning, I toiled away, bringing conceptual sketches into mini mosaic fruition.
These are some of my creations, utilizing the only tool available at the time for creating icons: ResEdit. ResEdit was a clunky, built-in Mac OS utility not really made for exactly what we were using it for. At the core of all of this work: research. Challenge. Problem-solving. Again, these core connection-based values are agnostic of medium.
There's one more design portal I want to talk about, one that also serves as the second reason for my story and brings this all together.
This is K10k, short for Kaliber 1000. K10k was founded in 1998 by Michael Schmidt and Toke Nygaard, and it was the design news portal on the web during this period. With its pixel art-fueled presentation, the ultra-focused care given to every facet and detail, and many of the more influential designers of the time invited to be news authors on the site, well... it was the place to be, my friend. With respect where respect is due, GUI Galaxy's concept was inspired by what these folks were doing.
For my part, the combination of my web design work and pixel art exploration began to get me some notoriety in the design scene. Eventually, K10k noticed and added me as one of their very select group of news authors to contribute content to the site.
Amongst my personal work and side projects, and now with this inclusion in the design community, I was on the map. My design work also began to be published in various printed collections, in magazines domestically and overseas, and featured on other design news portals. With that degree of success while in my early twenties, something else happened:
I evolved (devolved, really) into a colossal asshole, and in just about a year out of art school, no less. The press and the praise became what fulfilled me, and they went straight to my head. They inflated my ego. I actually felt somewhat superior to my fellow designers.
The casualties? My design stagnated. Its evolution (my evolution) stagnated.
I felt so supremely confident in my abilities that I effectively stopped researching and discovering. Where previously sketching concepts or iterating ideas in lead was my automatic step one, I instead leaped right into Photoshop. I drew my inspiration from the smallest of sources (and with blinders on). Any critique of my work from my peers was often vehemently dismissed. The most tragic loss: I had lost touch with my values.
My ego almost cost me some of my friendships and burgeoning professional relationships. I was toxic in talking about design and in collaboration. But thankfully, those same friends gave me a priceless gift: candor. They called me out on my unhealthy behavior.
Admittedly, it was a gift I initially did not accept but ultimately was able to deeply reflect upon. I was soon able to accept, process, and course-correct. The realization laid me low, but the re-awakening was essential. I let go of the "reward" of adulation and re-centered upon what stoked the fire for me in art school. Most importantly: I got back to my core values.
Always Students
Following that short-term regression, I was able to push forward in my personal design and career. And I could self-reflect as I got older to facilitate further growth and course correction as needed.
As an example, let's talk about the Large Hadron Collider. The LHC was designed "to help answer some of the fundamental open questions in physics, which concern the basic laws governing the interactions and forces among the elementary objects, the deep structure of space and time, and in particular the interrelation between quantum mechanics and general relativity." Thanks, Wikipedia.
Around fifteen years ago, in one of my earlier professional roles, I designed the interface for the application that generated the LHC's particle collision diagrams. These diagrams are the rendering of what's actually happening inside the Collider during any given particle collision event, and they are often considered works of art unto themselves.
Designing the interface for this application was a fascinating process for me, in that I worked with Fermilab physicists to understand both what the application was trying to achieve and how the physicists themselves would be using it. To that end, in this role, I cut my teeth on usability testing, working with the Fermilab team to iterate and improve the interface. How they spoke and what they spoke about was like an alien language to me. And by making myself humble and working under the mindset that I was but a student, I made myself available to be a part of their world to generate that vital connection.
I also had my first ethnographic observation experience: going to the Fermilab location and observing how the physicists used the tool in their actual environment, on their actual terminals. For example, one takeaway was that due to the level of ambient light-driven contrast within the facility, the data columns ended up using white text on a dark gray background instead of black text-on-white. This enabled them to pore over reams of data during the day and ease their eye strain. And Fermilab and CERN are government entities with rigorous accessibility standards, so my knowledge in that realm also grew. The barrier-free design was another essential form of connection.
So, back to those core drivers of my visual problem-solving soul and ultimate fulfillment: discovery, exposure to new media, observation, human connection, and evolution. What opened the door for those values was checking my ego before I walked through it.
An evergreen willingness to listen, learn, understand, grow, evolve, and connect yields our best work. In particular, I want to focus on the words "grow" and "evolve" in that statement. If we are always students of our craft, we are also continually making ourselves available to evolve. Yes, we have years of applicable design study under our belt. Or the focused lab sessions from a UX bootcamp. Or the monogrammed portfolio of our work. Or, ultimately, decades of a career behind us.
But all that said: experience does not equal "expert."
As soon as we close our minds via an inner monologue of "knowing it all" or branding ourselves a "#thoughtleader" on social media, the designer we are is our final form. The designer we can be will never exist.
As a UX professional in today's data-driven landscape, it's increasingly likely that you've been asked to design a personalized digital experience, whether it's a public website, user portal, or native application. Yet while there continues to be no shortage of marketing hype around personalization platforms, we still have very few standardized approaches for implementing personalized UX.
That's where we come in. After completing dozens of personalization projects over the past few years, we gave ourselves a goal: could we create a holistic personalization framework specifically for UX practitioners? The result is the Personalization Pyramid: a designer-centric model for standing up human-centered personalization programs, spanning data, segmentation, content delivery, and overall goals. By using this approach, you will be able to understand the core components of a contemporary, UX-driven personalization program (or at the very least know enough to get started).
Getting Started
For the sake of this article, we'll assume you're already familiar with the basics of digital personalization. A good overview can be found here: Website Personalization Planning. While UX projects in this area can take on many different forms, they often stem from similar starting points.
Common scenarios for starting a personalization project:
- Your organization or client purchased a content management system (CMS) or marketing automation platform (MAP) or related technology that supports personalization
- The CMO, CDO, or CIO has identified personalization as a goal
- Customer data is disjointed or ambiguous
- You are running some isolated targeting campaigns or A/B testing
- Stakeholders disagree on personalization approach
- A mandate to comply with customer privacy rules (e.g., GDPR) requires revisiting existing user-targeting practices
Regardless of where you begin, a successful personalization program will require the same core building blocks. We've captured these as the "levels" on the pyramid. Whether you are a UX designer, researcher, or strategist, understanding the core components can help make your contribution successful.
From top to bottom, the levels include:
- North Star: What larger strategic objective is driving the personalization program?
- Goals: What are the specific, measurable outcomes of the program?
- Touchpoints: Where will the personalized experience be served?
- Contexts and Campaigns: What personalization content will the user see?
- User Segments: What constitutes a unique, usable audience?
- Actionable Data: What reliable and authoritative data is captured by our technical platform to drive personalization?
- Raw Data: What wider set of data is conceivably available (already in our setting) that would allow you to personalize?
We'll go through each of these levels in turn. To help make this actionable, we created an accompanying deck of cards to illustrate specific examples from each level. We've found them helpful in personalization brainstorming sessions, and we'll include examples for you here.
Starting at the Top
The components of the pyramid are as follows:
North Star
A north star is what you are aiming for overall with your personalization program (big or small). The North Star defines the (one) overall mission of the personalization program. What do you wish to accomplish? North Stars cast a shadow. The bigger the star, the bigger the shadow. Examples of North Stars might include:
- Function: Personalize based on basic user inputs. Examples: "Raw" notifications, basic search results, system user settings and configuration options, general customization, basic optimizations
- Feature: Self-contained personalization componentry. Examples: "Cooked" notifications, advanced optimizations (geolocation), basic dynamic messaging, customized modules, automations, recommenders
- Experience: Personalized user experiences across multiple interactions and user flows. Examples: Email campaigns, landing pages, advanced messaging (i.e., C2C chat) or conversational interfaces, larger user flows and content-intensive optimizations (localization).
- Product: Highly differentiating personalized product experiences. Examples: Standalone, branded experiences with personalization at their core, like the "algotorial" playlists by Spotify such as Discover Weekly.
Goals
As in any good UX design, personalization can help accelerate designing around customer intentions. Goals are the tactical and measurable metrics that will prove the overall program is successful. A good place to start is with your current analytics and measurement program and the metrics you can benchmark against. In some cases, new goals may be appropriate. The key thing to remember is that personalization itself is not a goal; rather, it is a means to an end. Common goals include:
- Conversion
- Time on task
- Net promoter score (NPS)
- Customer satisfaction
Touchpoints
Touchpoints are where the personalization happens. As a UX designer, this will be one of your largest areas of responsibility. The touchpoints available to you will depend on how your personalization and associated technology capabilities are instrumented, and they should be rooted in improving a user's experience at a particular point in the journey. Touchpoints can be multi-device (mobile, in-store, website) but also more granular (web banner, web pop-up, etc.). Here are some examples:
Channel-level Touchpoints
- Email: Role
- Email: Time of open
- In-store display (JSON endpoint)
- Native app
- Search
Wireframe-level Touchpoints
- Web overlay
- Web alert bar
- Web banner
- Web content block
- Web menu
If you're designing for web interfaces, for example, you will likely need to include personalized "zones" in your wireframes. The content for these can be presented programmatically in touchpoints based on our next step, contexts and campaigns.
Source: "Essential Guide to End-to-End Personalization" by Kibo.
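As a minimal markup sketch of what such a zone might look like in a page template (the data-personalization-zone attribute is a made-up hook for illustration; each platform has its own conventions for marking zones):

<main>
  <!-- static content that every visitor sees -->
  <section class="hero">...</section>

  <!-- personalized zone: the platform injects a campaign here based on
       the visitor's segment (attribute name is hypothetical) -->
  <div data-personalization-zone="home-banner"></div>

  <!-- static sub-content continues below -->
</main>

In a wireframe, the zone would simply be annotated as a slot whose content varies by segment.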
Contexts and Campaigns
Once you've outlined some touchpoints, you can consider the actual personalized content a user will receive. Many personalization tools refer to these as "campaigns" (for example, a campaign on a web banner for new visitors to the website). These will programmatically be shown at certain touchpoints to certain user segments, as defined by user data. At this stage, we find it helpful to consider two separate models: a context model and a content model. The context model helps you consider the level of engagement of the user at the personalization moment, for example a user casually browsing information vs. doing a deep-dive. Think of it in terms of information-retrieval behaviors. The content model can then help you determine what type of personalization to serve based on the context (for example, an "Enrich" campaign that shows related articles may be a suitable supplement to extant content).
Personalization Context Model:
- Browse
- Skim
- Nudge
- Feast
Personalization Content Model:
- Alert
- Make Easier
- Cross-Sell
- Enrich
We've written extensively about each of these models elsewhere, so if you'd like to read more, you can check out Colin's Personalization Content Model and Jeff's Personalization Context Model.
User Segments
User segments can be created prescriptively or adaptively, based on user research (e.g., via rules and logic tied to set user behaviors, or via A/B testing). At a minimum, you will likely need to consider how to treat the unknown or first-time visitor; the guest or returning visitor for whom you may have a stateful cookie (or equivalent post-cookie identifier); and the authenticated visitor who is logged in. Here are some examples from the personalization pyramid:
- Unknown
- Guest
- Authenticated
- Default
- Referred
- Role
- Cohort
- Unique ID
Actionable Data
Every organization with any digital presence has data. It's a matter of asking what data you can ethically collect on users, and what its inherent reliability and value are, to determine how you can use it (sometimes known as "data activation"). Fortunately, the tide is turning to first-party data: a recent study by Twilio estimates that some 80% of businesses are using at least some type of first-party data to personalize the customer experience.
First-party data offers multiple advantages on the UX front: it's relatively simple to collect, more likely to be accurate, and less susceptible to the "creep factor" of third-party data. So a key part of your UX strategy should be to determine the best form of data collection for your audiences. Here are some examples:
There is a progression of profiling when it comes to recognizing and making decisions about different audiences and their signals. It tends to move toward more granular constructs about smaller and smaller cohorts of users as time, confidence, and data volume grow.
While some combination of implicit and explicit data is generally a prerequisite for any implementation (more commonly referred to as first-party and third-party data), machine-learning efforts are typically not cost-effective straight out of the box, because a strong data backbone and content repository are prerequisites for optimization. But these approaches should be considered as part of the larger roadmap and may indeed help accelerate the organization's overall progress. Typically at this point you will partner with key stakeholders and product owners to design a profiling model. The profiling model includes defining the approach to configuring profiles, profile keys, profile cards, and pattern cards: a multi-faceted approach to profiling that makes it scalable.
Pulling it Together
While the cards comprise the starting point of an inventory of sorts (we provide blanks for you to tailor your own), a set of potential levers and motivations for the style of personalization activities you aspire to deliver, they are more valuable when thought of as a grouping.
In assembling a card "hand," you can begin to trace the entire trajectory from leadership focus down through strategic and tactical execution. This is also at the heart of the way both co-authors have conducted workshops in assembling a program backlog, which is a fine subject for another article.
In the meantime, what is important to note is that while each colored class of card is helpful for surveying the range of choices potentially at your disposal, the real work lies in threading them together and making concrete decisions about for whom this decisioning will be made: where, when, and how.
Lay Down Your Cards
Any sustainable personalization strategy must consider near-, mid-, and long-term goals. Even with the leading CMS platforms like Sitecore and Adobe, or the most exciting composable CMS DXP out there, there is simply no "easy button" wherein a personalization program can be stood up and immediately yield meaningful results. That said, there is a common grammar to all personalization activities, just as every sentence has nouns and verbs. These cards attempt to map that territory.
The mobile-first design methodology is great: it focuses on what really matters to the user, it's well-practiced, and it's been a common design pattern for years. So developing your CSS mobile-first should also be great, too...right?
Well, not necessarily. Classic mobile-first CSS development is based on the principle of overwriting style declarations: you begin your CSS with default style declarations, and overwrite and/or add new styles as you add breakpoints with min-width media queries for larger viewports (for a good overview, see "What is Mobile First CSS and Why Does It Rock?"). But all those exceptions create complexity and inefficiency, which in turn can lead to an increased testing effort and a code base that's harder to maintain. Admit it: how many of us willingly want that?
On your own projects, mobile-first CSS may yet be the best tool for the job, but first you need to evaluate just how appropriate it is in light of the visual design and user interactions you're working on. To help you get started, here's how I go about tackling the factors you need to watch for, and I'll discuss some alternate solutions if mobile-first doesn't seem to suit your project.
Advantages of mobile-first
Some of the things to like with mobile-first CSS development, and why it's been the de facto development methodology for so long, make a lot of sense:
Development hierarchy. One thing you undoubtedly get from mobile-first is a nice development hierarchy: you just focus on the mobile view and get developing.
Tried and tested. It's a tried and tested methodology that's worked for years for a reason: it solves a problem really well.
Prioritizes the mobile view. The mobile view is the simplest and arguably the most important, as it encompasses all the key user journeys, and often accounts for a higher proportion of user visits (depending on the project).
Prevents desktop-centric development. As development is done using desktop computers, it can be tempting to initially focus on the desktop view. But thinking about mobile from the start prevents us from getting stuck later on; no one wants to spend their time retrofitting a desktop-centric site to work on mobile devices!
Disadvantages of mobile-first
Setting style declarations and then overwriting them at higher breakpoints can lead to undesirable ramifications:
More complexity. The farther up the breakpoint hierarchy you go, the more unnecessary code you inherit from lower breakpoints.
Higher CSS specificity. Styles that have been reverted to their browser default value in a class name declaration now have a higher specificity. This can be a headache on large projects when you want to keep the CSS selectors as simple as possible.
Requires more regression testing. Changes to the CSS at a lower view (like adding a new style) require all higher breakpoints to be regression tested.
The browser can't prioritize CSS downloads. At wider breakpoints, classic mobile-first min-width media queries don't leverage the browser's capability to download CSS files in priority order.
The problem of property value overrides
There is nothing inherently wrong with overwriting values; CSS was designed to do just that. Still, inheriting incorrect values is unhelpful and can be burdensome and inefficient. It can also lead to increased style specificity when you have to overwrite styles to reset them back to their defaults, something that may cause issues later on, especially if you are using a combination of bespoke CSS and utility classes. We won't be able to use a utility class for a style that has been reset with a higher specificity.
With this in mind, I'm developing CSS with a focus on the default values much more these days. Since there's no specific order, and no chains of specific values to keep track of, this frees me to develop breakpoints simultaneously. I concentrate on finding common styles and isolating the specific exceptions in closed media query ranges (that is, any range with a max-width set).
This approach opens up some opportunities, as you can look at each breakpoint as a clean slate. If a component's layout looks like it should be based on Flexbox at all breakpoints, it's fine and can be coded in the default style sheet. But if it looks like Grid would be much better for large screens and Flexbox for mobile, these can both be done entirely independently when the CSS is put into closed media query ranges, as the sketch below shows. Also, developing simultaneously requires you to have a good understanding of any given component in all breakpoints up front. This can help surface issues in the design earlier in the development process. We don't want to get stuck down a rabbit hole building a complex component for mobile, and then get the designs for desktop and find they are equally complex and incompatible with the HTML we created for the mobile view!
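As a minimal sketch of that Grid-versus-Flexbox split (the .card-list component is hypothetical; the breakpoint values match those used later in this article):

.card-list {
  // Mobile and tablet: a simple flexbox column
  @media (max-width: 1023.98px) {
    display: flex;
    flex-direction: column;
  }

  // Desktop: an independent grid layout
  @media (min-width: 1024px) {
    display: grid;
    grid-template-columns: repeat(3, 1fr);
    gap: 20px;
  }
}

Because neither block applies outside its own range, the grid styles never have to undo the flexbox ones, and vice versa.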
Though this approach isn't going to suit everyone, I encourage you to give it a try. There are plenty of tools out there to help with concurrent development, such as Responsively App, Blisk, and many others.
Having said that, I don't feel the order itself is particularly relevant. If you are comfortable with focusing on the mobile view, have a good understanding of the requirements for other breakpoints, and prefer to work on one device at a time, then by all means stick with the classic development order. The important thing is to identify common styles and exceptions so you can put them in the relevant stylesheet: a sort of manual tree-shaking process! Personally, I find this a little easier when working on a component across breakpoints, but that's by no means a requirement.
Closed media query ranges in practice
In classic mobile-first CSS we overwrite the styles, but we can avoid this by using media query ranges. To illustrate the difference (I'm using SCSS for brevity), let's assume there are three visual designs:
- smaller than 768
- from 768 to below 1024
- 1024 and anything larger
Take a simple example where a block-level element has a default padding of 20px, which is overwritten at tablet to be 40px and set back to 20px on desktop.
Classic vs. closed media query range:
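Here is a sketch of both versions, reconstructed from the description in the next paragraph, so treat the exact code as illustrative. The classic mobile-first version sets the padding three times:

.my-block {
  padding: 20px; // default (mobile)
  @media (min-width: 768px) {
    padding: 40px; // tablet override
  }
  @media (min-width: 1024px) {
    padding: 20px; // desktop: set back to the default value
  }
}

Whereas the closed media query range version sets the default once and overrides it only at tablet:

.my-block {
  padding: 20px; // default for mobile and desktop
  @media (min-width: 768px) and (max-width: 1023.98px) {
    padding: 40px; // tablet is the one exception
  }
}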
The subtle difference is that the mobile-first example sets the default padding to 20px and then overwrites it at each breakpoint, setting it three times in total. In contrast, the second example sets the default padding to 20px and only overrides it at the relevant breakpoint where it isn't the default value (in this instance, tablet is the exception).
The goal is to:
- Only set styles when needed.
- Not set them with the expectation of overwriting them later on, again and again.
To this end, closed media query ranges are our best friend. If we need to make a change to any given view, we make it in the CSS media query range that applies to the specific breakpoint. We'll be much less likely to introduce unwanted alterations, and our regression testing only needs to focus on the breakpoint we have actually edited.
Taking the above example, if we find that .my-block spacing on desktop is already accounted for by the margin at that breakpoint, and we want to remove the padding altogether, we could do this by setting the mobile padding in a closed media query range.
.my-block {
@media (max-width: 767.98px) {
padding: 20px;
}
@media (min-width: 768px) and (max-width: 1023.98px) {
padding: 40px;
}
}
The browser default padding for our block is 0, so instead of adding a desktop media query and using unset or 0 for the padding value (which we would need with mobile-first), we can wrap the mobile padding in a closed media query (since it is now also an exception) so it won't get picked up at wider breakpoints. At the desktop breakpoint, we won't need to set any padding style, as we want the browser default value.
Bundling versus separating the CSS
Back in the day, keeping the number of requests to a minimum was very important due to the browser's limit of concurrent requests (typically around six). As a consequence, the use of image sprites and CSS bundling was the norm, with all the CSS being downloaded in one go, as one stylesheet with highest priority.
With HTTP/2 and HTTP/3 now on the scene, the number of requests is no longer the big deal it used to be. This allows us to separate the CSS into multiple files by media query. The clear benefit of this is that the browser can now request the CSS it currently needs with a higher priority than the CSS it doesn't. This is more performant and can reduce the overall time page rendering is blocked.
Which HTTP version are you using?
To determine which version of HTTP you're using, go to your website and open your browser's dev tools. Next, select the Network tab and make sure the Protocol column is visible. If "h2" is listed under Protocol, it means HTTP/2 is being used.
Note: to view the Protocol column in your browser's dev tools, go to the Network tab, reload your page, right-click any column header (e.g., Name), and check the Protocol column.
Also, if your site is still using HTTP/1...WHY?!! What are you waiting for? There is excellent browser support for HTTP/2.
Splitting the CSS
Separating the CSS into individual files is a worthwhile task. Linking the separate CSS files using the relevant media attribute allows the browser to identify which files are needed immediately (because they're render-blocking) and which can be deferred. Based on this, it allocates each file an appropriate priority.
In the following example of a website visited at a mobile breakpoint, we can see the mobile and default CSS are loaded with "Highest" priority, as they are currently needed to render the page. The remaining CSS files (print, tablet, and desktop) are still downloaded in case they'll be needed later, but with "Lowest" priority.
With bundled CSS, the browser will have to download the CSS file and parse it before rendering can start.
Whereas, as noted, when the CSS is separated into different files linked and marked up with the relevant media attribute, the browser can prioritize the files it currently needs. Using closed media query ranges allows the browser to do this at all widths, as opposed to classic mobile-first min-width queries, where the desktop browser would have to download all the CSS with Highest priority. We can't assume that desktop users always have a fast connection. For instance, in many rural areas, internet connection speeds are still slow.
The media queries and number of separate CSS files will vary from project to project based on project requirements, but might look similar to the example below.
Bundled CSS:
<link href="site.css" rel="stylesheet">
This single file contains all the CSS, including all media queries, and it will be downloaded with Highest priority.
Separated CSS:
<link href="default.css" rel="stylesheet">
<link href="mobile.css" media="screen and (max-width: 767.98px)" rel="stylesheet">
<link href="tablet.css" media="screen and (min-width: 768px) and (max-width: 1083.98px)" rel="stylesheet">
<link href="desktop.css" media="screen and (min-width: 1084px)" rel="stylesheet">
<link href="print.css" media="print" rel="stylesheet">
Separating the CSS and specifying a media attribute on each link lets the browser download each file with a priority appropriate to the current viewport.
Depending on the project's deployment strategy, a change to one file (mobile.css, for example) would only require the QA team to regression test on devices in that specific media query range. Compare that to the prospect of deploying the single bundled site.css file, an approach that would normally trigger a full regression test.
Moving on
The uptake of mobile-first CSS was a really important milestone in web development; it has helped front-end developers focus on mobile web applications, rather than developing sites on desktop and then attempting to retrofit them to work on other devices.
I don't think anyone wants to return to that development model again, but it's important we don't lose sight of the issue it highlighted: that things can easily get convoluted and less efficient if we prioritize one particular device (any device) over others. For this reason, focusing on the CSS in its own right, always mindful of what is the default setting and what's an exception, seems like the natural next step. I've started noticing small simplifications in my own CSS, as well as in other developers', and the testing and maintenance work has become a bit simpler and more productive, too.
In general, simplifying CSS rule creation whenever we can is ultimately a cleaner approach than going around in circles of overrides. But whichever methodology you choose, it needs to suit the project. Mobile-first may (or may not) turn out to be the best choice for what's involved, but first you need to solidly understand the trade-offs you're stepping into.
About two and a half years ago, I introduced the idea of daily ethical design. It was born out of my frustration with the many obstacles to achieving design that's usable and equitable; protects people's privacy, agency, and focus; benefits society; and restores nature. I argued that we need to overcome the inconveniences that prevent us from acting ethically and that we need to elevate design ethics to a more practical level by structurally integrating it into our daily work, processes, and tools.
Unfortunately, we're still very far from this ideal.
At the time, I didn't know yet how to structurally integrate ethics. Yes, I had found some tools that had worked for me in previous projects, such as using checklists, assumption tracking, and "dark reality" sessions, but I didn't manage to apply those in every project. I was still struggling for time and support, and at best I had only partially achieved a higher (moral) quality of design, which is far from my definition of structurally integrated.
I decided to dig deeper for the root causes in business that prevent us from practicing daily ethical design. Now, after much research and experimentation, I believe that I've found the key that will let us structurally integrate ethics. And it's surprisingly simple! But first we need to zoom out to get a better understanding of what we're up against.
Influence the system
Sadly, we're trapped in a capitalistic system that reinforces consumerism and inequality, and it's obsessed with the fantasy of endless growth. Sea levels, temperatures, and our demand for energy continue to rise unchallenged, while the gap between rich and poor continues to widen. Shareholders expect ever-higher returns on their investments, and companies feel forced to set short-term objectives that reflect this. Over the last decades, those objectives have twisted our well-intended human-centered mindset into a powerful machine that promotes ever-higher levels of consumption. When we're working for an organization that pursues "double-digit growth" or "aggressive sales targets" (which is 99 percent of us), that's very hard to resist while remaining human friendly. Even with our best intentions, and even though we like to say that we create solutions for people, we're a part of the problem.
What can we do to change this?
We can start by acting on the right level of the system. Donella H. Meadows, a systems thinker, once listed ways to influence a system in order of effectiveness. When you apply these to design, you get:
- At the lowest level of effectiveness, you can affect numbers such as usability scores or the number of design critiques. But none of that will change the direction of a company.
- Similarly, affecting buffers (such as team budgets), stocks (such as the number of designers), flows (such as the number of new hires), and delays (such as the time that it takes to hear about the effect of design) won't significantly affect a company.
- Focusing instead on feedback loops such as management control, employee recognition, or design-system investments can help a company become better at achieving its objectives. But that doesn't change the objectives themselves, which means that the organization will still work against your ethical-design ideals.
- The next level, information flows, is what most ethical-design initiatives focus on now: the exchange of ethical methods, toolkits, articles, conferences, workshops, and so on. This is also where ethical design has remained mostly theoretical. We've been focusing on the wrong level of the system all this time.
- Take rules, for example: they beat knowledge every time. There can be widely accepted rules, such as how finance works, or a scrum team's definition of done. But ethical design can also be smothered by unofficial rules meant to maintain profits, often revealed through comments such as "the client didn't ask for it" or "don't make it too big."
- Changing the rules without holding official power is very hard. That's why the next level is so influential: self-organization. Experimentation, bottom-up initiatives, passion projects, self-steering teams: all of these are examples of self-organization that improve the resilience and creativity of a company. It's exactly this diversity of viewpoints that's needed to structurally tackle big systemic issues like consumerism, wealth inequality, and climate change.
- Yet even stronger than self-organization are objectives and metrics. Our companies want to make more money, which means that everything and everyone in the company does their best to... make the company more money. And once I realized that profit is nothing more than a measurement, I understood how crucial a very specific, defined metric can be toward pushing a company in a certain direction.
The takeaway? If we truly want to incorporate ethics into our daily design practice, we must first change the measurable objectives of the company we work for, from the bottom up.
Redefine success
Traditionally, we consider a product or service successful if it's desirable to humans, technologically feasible, and financially viable. You tend to see these represented as equals; if you type the three words into a search engine, you'll find diagrams of three equally sized, evenly arranged circles.
But in our hearts, we all know that the three dimensions aren't equally weighted: it's viability that ultimately controls whether a product will go live. So a more realistic representation might look like this:
Desirability and feasibility are the means; viability is the goal. Companies (outside of nonprofits and charities) exist to make money.
A genuinely purpose-driven company would try to reverse this dynamic: it would recognize finance for what it was intended to be, a means. So both feasibility and viability are means to achieve what the company set out to achieve. It makes intuitive sense: to achieve most anything, you need resources, people, and money. (Fun fact: the Italian language knows no difference between feasibility and viability; both are simply fattibilità.)
But simply swapping viable for desirable isn't enough to achieve an ethical outcome. Desirability is still linked to consumerism because the associated activities aim to identify what people want, whether it's good for them or not. Desirability objectives, such as user satisfaction or conversion, don't consider whether a product is healthy for people. They don't prevent us from creating products that distract or manipulate people or stop us from contributing to society's wealth inequality. They're unsuitable for establishing a healthy balance with nature.
There's a fourth dimension of success that's missing: our designs also need to be ethical in the effect that they have on the world.
This is hardly a new idea. Many similar models exist, some calling the fourth dimension accountability, integrity, or responsibility. What I've never seen before, however, is the necessary step that comes after: to influence the system as designers and to make ethical design more practical, we must create objectives for ethical design that are achievable and inspirational. There's no one way to do this because it highly depends on your culture, values, and industry. But I'll give you the version that I developed with a group of colleagues at a design agency. Consider it a template to get started.
Pursue well-being, equity, and sustainability
We created objectives that address design's effect on three levels: individual, societal, and global.
An objective on the individual level tells us what success is beyond the typical focus of usability and satisfaction, instead considering matters such as how much time and attention is required from users. We pursued well-being:
We create products and services that allow for people's health and happiness. Our solutions are calm, transparent, nonaddictive, and nonmisleading. We respect our users' time, attention, and privacy, and help them make healthy and respectful choices.
An objective on the societal level forces us to consider our impact beyond just the user, widening our attention to the economy, communities, and other indirect stakeholders. We called this objective equity:
We create products and services that have a positive social impact. We consider economic equality, racial justice, and the inclusivity and diversity of people as teams, users, and customer segments. We listen to local culture, communities, and those we affect.
Finally, the objective on the global level aims to ensure that we remain in balance with the only home we have as humanity. Referring to it simply as sustainability, our definition was:
We create products and services that reward sufficiency and reusability. Our solutions support the circular economy: we create value from waste, repurpose products, and prioritize sustainable choices. We deliver functionality instead of ownership, and we limit energy use.
In short, ethical design (to us) meant achieving well-being for each user and an equitable value distribution within society, through a design that can be sustained by our living planet. When we introduced these objectives in the company, design ethics and responsible design suddenly became tangible and achievable for many colleagues, through practical (and even familiar) actions.
Measure impact
But defining these objectives still isn't enough. What truly caught the attention of senior management was the fact that we created a way to measure every design project's well-being, equity, and sustainability.
This overview lists example metrics that you can use as you pursue well-being, equity, and sustainability:
There's a lot of power in measurement. As the saying goes, what gets measured gets done. Donella Meadows once shared this example:
"If the desired system state is national security, and that is defined as the amount of money spent on the military, the system will produce military spending. It may or may not produce national security."
This phenomenon explains why desirability is a poor indicator of success: it's typically defined as an increase in customer satisfaction, session length, frequency of use, conversion rate, churn rate, download rate, and so on. But none of these metrics increase the health of people, communities, or ecosystems. What if instead we measured success through metrics for (digital) well-being, such as (reduced) screen time or software energy consumption?
There's another important message here. Even if we set an objective to build a calm interface, if we were to choose the wrong metric for calmness (say, the number of interface elements), we could still end up with a screen that induces anxiety. Choosing the wrong metric can completely undo good intentions.
Additionally, choosing the right metric is enormously helpful in focusing the design team. Once you go through the exercise of choosing metrics for your objectives, you're forced to consider what success looks like concretely and how you can prove that you've reached your ethical objectives. It also forces you to consider what we as designers have control over: what can I include in my design or change in my process that will lead to the right type of success? The answer to this question brings a lot of clarity and focus.
And finally, it's good to remember that traditional businesses run on measurements, and managers love to spend much time discussing charts (ideally hockey-stick shaped), especially if they concern profit, the one-above-all of metrics. For good or ill, to improve the system and to have a serious discussion about ethical design with managers, we'll need to speak that business language.
Practice daily ethical design
Once you've defined your objectives and you have a reasonable idea of the potential metrics for your design project, only then do you have a chance to structurally practice ethical design. It "simply" becomes a matter of using your creativity and choosing from all the knowledge and toolkits already available to you.
I think this is quite exciting! It opens a whole new set of challenges and considerations for the design process. Should you go with that energy-consuming video, or would a simple illustration be enough? Which typeface is the most calm and inclusive? Which new tools and methods do you use? When is the website's end of life? How can you provide the same service while requiring less attention from users? How do you make sure that those who are affected by decisions are there when those decisions are made? How can you measure your effects?
The redefinition of success will completely change what it means to do good design.
There is, however, a final piece of the puzzle that's missing: convincing your client, product owner, or manager to be mindful of well-being, equity, and sustainability. For this, it's essential to engage stakeholders in a dedicated kickoff session.
Kick it off or fall back to status quo
The kickoff is the most important meeting, and it can be all too easy to forget to include it. It consists of two major phases: 1) the alignment of expectations, and 2) the definition of success.
In the first phase, the entire (design) team goes over the project brief and meets with all the relevant stakeholders. Everyone gets to know one another and expresses their expectations of the outcome and their contributions to achieving it. Assumptions are raised and discussed. The aim is to get on the same level of understanding and, in turn, to avoid preventable miscommunications and surprises later in the project.
For example, for a recent freelance project that aimed to design a digital platform that facilitates US student advisors' documentation and communication, we conducted an online kickoff with the client, a subject-matter expert, and two other designers. We used a combination of canvases on Miro: one with questions from "Manual of Me" (to get to know each other), a Team Canvas (to express expectations), and a version of the Project Canvas to align on scope, timeline, and other practical matters.
The above is the traditional purpose of a kickoff. But just as important as expressing expectations is agreeing on what success means for the project, in terms of desirability, viability, feasibility, and ethics. What are the objectives in each dimension?
Agreement on what success means at such an early stage is crucial because you can rely on it for the remainder of the project. If, for example, the design team wants to build an inclusive app for a diverse user group, they can raise diversity as a specific success criterion during the kickoff. If the client agrees, the team can refer back to that promise throughout the project. "As we agreed in our first meeting, having a diverse user group that includes A and B is necessary to build a successful product. So we do activity X and follow research process Y." Compare those odds to a situation in which the team didn't agree to that beforehand and had to ask for permission halfway through the project. The client might argue that it came on top of the agreed scope, and she'd be right.
In the case of this freelance project, to define success I prepared a round canvas that I call the Wheel of Success. It consists of an inner ring, meant to capture ideas for objectives, and a set of outer rings, meant to capture ideas on how to measure those objectives. The rings are divided into six dimensions of successful design: healthy, equitable, sustainable, desirable, feasible, and viable.
We went through each dimension, writing down ideas on digital sticky notes. Then we discussed our ideas and verbally agreed on the most important ones. For example, our client agreed that sustainability and progressive enhancement are important success criteria for the platform. And the subject-matter expert emphasized the importance of including students from low-income and disadvantaged groups in the design process.
After the kickoff, we summarized our ideas and shared understanding in a project brief that captured these aspects:
- the project's origin and purpose: why are we doing this project?
- the problem definition: what do we want to solve?
- the concrete goals and metrics for each success dimension: what do we want to achieve?
- the scope, process, and role descriptions: how will we achieve it?
With such a brief in place, you can use the agreed-upon objectives and concrete metrics as a checklist of success, and your design team will be ready to pursue the right objective, using the tools, methods, and metrics at their disposal to achieve ethical outcomes.
Conclusion
Over the past year, quite a few colleagues have asked me, "Where do I start with ethical design?" My answer has always been the same: organize a session with your stakeholders to (re)define success. Even though you might not always be 100 percent successful in agreeing on goals that cover all responsibility objectives, that beats the alternative (the status quo) every time. If you want to be an ethical, responsible designer, there's no skipping this step.
To be even more specific: if you consider yourself a strategic designer, your challenge is to define ethical objectives, set the right metrics, and conduct those kickoff sessions. If you consider yourself a system designer, your starting point is to understand how your industry contributes to consumerism and inequality, understand how finance drives business, and brainstorm which levers are available to influence the system at the highest level. Then redefine success to create the space to exercise those levers.
And for those who consider themselves service designers or UX designers or UI designers: if you truly want to have a positive, meaningful impact, stay away from the toolkits and meetups and conferences for a while. Instead, gather your colleagues and define goals for well-being, equity, and sustainability through design. Engage your stakeholders in a workshop and challenge them to think of ways to achieve and measure those ethical goals. Take their input, make it concrete and visible, ask for their agreement, and hold them to it.
Otherwise, I'm genuinely sorry to say, you're wasting your precious time and creative energy.
Of course, engaging your stakeholders in this way can be uncomfortable. Many of my colleagues expressed doubts such as "What will the client think of this?" "Will they take me seriously?" and "Can't we just do it within the design team instead?" In fact, a product manager once asked me why ethics couldn't just be a structured part of the design process: why not just do it, without spending the effort to define ethical objectives? It's a tempting idea, right? We wouldn't have to have difficult discussions with stakeholders about which values or key performance indicators to pursue. It would let us focus on what we like and do best: designing.
But as systems theory tells us, that's not enough. For those of us who aren't from marginalized groups and have the privilege to be able to speak up and be heard, that uncomfortable space is exactly where we need to be if we truly want to make a difference. We can't remain within the design-for-designers bubble, enjoying our privileged working-from-home situation, disconnected from the real world out there. If we solely keep talking about ethical design and it remains at the level of articles and toolkits, we're not designing ethically. It's just theory. We need to actively engage our colleagues and clients by challenging them to redefine success in business.
With a bit of courage, determination, and focus, we can break out of this cage that finance and business-as-usual have built around us and become facilitators of a new type of business that can see beyond financial value. We just need to agree on the right objectives at the start of each design project, find the right metrics, and realize that we already have everything we need to get started. That's what it means to do daily ethical design.
For their inspiration and support over the years, I would like to thank Emanuela Cozzi Schettini, José Gallegos, Annegret Bönemann, Ian Dorr, Vera Rademaker, Virginia Rispoli, Cecilia Scolaro, Rouzbeh Amini, and many others.
CSS is about styling boxes. In fact, the whole web is made of boxes, from the browser viewport to elements on a page. But every once in a while a new feature comes along that makes us rethink our design approach.
Round displays, for example, make it fun to play with circular clip areas. Mobile screen notches and virtual keyboards offer challenges to best organize content that stays clear of them. And dual screen or foldable devices make us rethink how to best use available space in a number of different device postures.
These recent evolutions of the web platform made it both more challenging and more interesting to design products. They're great opportunities for us to break out of our rectangular boxes.
I'd like to talk about a new feature similar to the above: the Window Controls Overlay for Progressive Web Apps (PWAs).
Progressive Web Apps are blurring the lines between apps and websites. They combine the best of both worlds. On one hand, they're stable, linkable, searchable, and responsive just like websites. On the other hand, they provide additional powerful capabilities, work offline, and read files just like native apps.
As a design surface, PWAs are really interesting because they challenge us to think about what mixing web and device-native user interfaces can be. On desktop devices in particular, we have more than 40 years of history telling us what applications should look like, and it can be hard to break out of this mental model.
At the end of the day though, PWAs on desktop are constrained to the window they appear in: a rectangle with a title bar at the top.
Here's what a typical desktop PWA app looks like:
Sure, as the author of a PWA, you get to choose the color of the title bar (using the Web Application Manifest theme_color property), but that's about it.
What if we could think outside this box, and reclaim the real estate of the app's entire window? Doing so would give us a chance to make our apps more beautiful and feel more integrated in the operating system.
This is exactly what the Window Controls Overlay offers. This new PWA functionality makes it possible to take advantage of the full surface area of the app, including where the title bar normally appears.
About the title bar and window controls
Let's start with an explanation of what the title bar and window controls are.
The title bar is the area displayed at the top of an app window, which usually contains the app's name. Window controls are the affordances, or buttons, that make it possible to minimize, maximize, or close the app's window, and are also displayed at the top.
Window Controls Overlay removes the physical constraint of the title bar and window controls areas. It frees up the full height of the app window, enabling the title bar and window control buttons to be overlaid on top of the application's web content.
If you are reading this article on a desktop computer, take a quick look at other apps. Chances are they're already doing something similar to this. In fact, the very web browser you are using to read this uses the top area to display tabs.
Spotify displays album artwork all the way to the top edge of the application window.
Microsoft Word uses the available title bar space to display the auto-save and search functionalities, and more.
The whole point of this feature is to allow you to make use of this space with your own content while providing a way to account for the window control buttons. And it enables you to offer this modified experience on a range of platforms while not adversely affecting the experience on browsers or devices that don't support Window Controls Overlay. After all, PWAs are all about progressive enhancement, so this feature is a chance to enhance your app to use this extra space when it's available.
Let's use the feature
For the rest of this article, we'll be working on a demo app to learn more about using the feature.
The demo app is called 1DIV. It's a simple CSS playground where users can create designs using CSS and a single HTML element.
The app has two pages. The first lists the existing CSS designs you've created:
The second page enables you to create and edit CSS designs:
Since I've added a simple web manifest and service worker, we can install the app as a PWA on desktop. Here is what it looks like on macOS:
And on Windows:
Our app is looking good, but the white title bar on the first page is wasted space. On the second page, it would be really nice if the design area went all the way to the top of the app window.
Let's use the Window Controls Overlay feature to improve this.
Enabling Window Controls Overlay
The feature is still experimental at the moment. To try it, you need to enable it in one of the supported browsers.
As of now, it has been implemented in Chromium, as a collaboration between Microsoft and Google. We can therefore use it in Chrome or Edge by going to the internal about://flags page, and enabling the Desktop PWA Window Controls Overlay flag.
Using Window Controls Overlay
To use the feature, we need to add the following display_override member to our web app's manifest file:
{
  "name": "1DIV",
  "description": "1DIV is a mini CSS playground",
  "lang": "en-US",
  "start_url": "/",
  "theme_color": "#ffffff",
  "background_color": "#ffffff",
  "display_override": [
    "window-controls-overlay"
  ],
  "icons": [
    ...
  ]
}
On the surface, the feature is really simple to use. This manifest change is the only thing we need to make the title bar disappear and turn the window controls into an overlay.
However, to provide a great experience for all users regardless of what device or browser they use, and to make the most of the title bar area in our design, we'll need a bit of CSS and JavaScript code.
Here is what the app looks like now:
The title bar is gone, which is what we wanted, but our logo, search field, and NEW button are partially covered by the window controls because now our layout starts at the top of the window.
It's similar on Windows, with the difference that the close, maximize, and minimize buttons appear on the right side, grouped together with the PWA control buttons:
Using CSS to keep clear of the window controls
Along with the feature, new CSS environment variables have been introduced:
titlebar-area-x
titlebar-area-y
titlebar-area-width
titlebar-area-height
You use these variables with the CSS env() function to position your content where the title bar would have been while ensuring it won't overlap with the window controls. In our case, we'll use two of the variables to position our header, which contains the logo, search bar, and NEW button.
header {
  position: absolute;
  left: env(titlebar-area-x, 0);
  width: env(titlebar-area-width, 100%);
  height: var(--toolbar-height);
}
The titlebar-area-x variable gives us the distance from the left of the viewport to where the title bar would appear, and titlebar-area-width is its width. (Remember, this is not equivalent to the width of the entire viewport, just the title bar portion, which, as noted earlier, doesn't include the window controls.)
By doing this, we make sure our content remains fully visible. We're also defining fallback values (the second parameter in the env() function) for when the variables are not defined (such as on non-supporting browsers, or when the Window Controls Overlay feature is disabled).
Now our header adapts to its surroundings, and it doesnât feel like the window control buttons have been added as an afterthought. The app looks a lot more like a native app.
Changing the window controls background color so it blends in
Now let's take a closer look at our second page: the CSS playground editor.
Not great. Our CSS demo area does go all the way to the top, which is what we wanted, but the way the window controls appear as white rectangles on top of it is quite jarring.
We can fix this by changing the appâs theme color. There are a couple of ways to define it:
- PWAs can define a theme color in the web app manifest file using the theme_color manifest member. This color is then used by the OS in different ways. On desktop platforms, it is used to provide a background color to the title bar and window controls.
- Websites can use the theme-color meta tag as well. It's used by browsers to customize the color of the UI around the web page. For PWAs, this color can override the manifest theme_color.
In our case, we can set the manifest theme_color to white to provide the right default color for our app. The OS will read this color value when the app is installed and use it to make the window controls background color white. This color works great for our main page with the list of demos.
The theme-color meta tag can be changed at runtime, using JavaScript. So we can do that to override the white with the right demo background color when one is opened.
Here is the function we'll use:
function themeWindow(bgColor) {
  document.querySelector("meta[name=theme-color]").setAttribute('content', bgColor);
}
With this in place, we can imagine how using color and CSS transitions can produce a smooth change from the list page to the demo page, and enable the window control buttons to blend in with the rest of the app's interface.
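For example, here is how the demo app might call that function when a demo is opened. This is only a minimal sketch: the openDemo function and the demo object with its bgColor property are hypothetical app code, not part of the Window Controls Overlay feature itself.

function openDemo(demo) {
  // Sync the window controls background with the demo's color.
  themeWindow(demo.bgColor);
  // Our own content can follow along; a CSS transition on
  // background-color would smooth this change for the page itself.
  document.body.style.backgroundColor = demo.bgColor;
}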
Dragging the window
Now, getting rid of the title bar entirely does have an important accessibility consequence: it's much more difficult to move the application window around.
The title bar provides a sizable area for users to click and drag, but by using the Window Controls Overlay feature, this area becomes limited to where the control buttons are, and users have to very precisely aim between these buttons to move the window.
Fortunately, this can be fixed using CSS with the app-region property. This property is, for now, only supported in Chromium-based browsers and needs the -webkit- vendor prefix.
To make any element of the app become a dragging target for the window, we can use the following:
-webkit-app-region: drag;
It is also possible to explicitly make an element non-draggable:
-webkit-app-region: no-drag;
These options can be useful for us. We can make the entire header a dragging target, but make the search field and NEW button within it non-draggable so they can still be used as normal.
However, because the editor page doesn't display the header, users wouldn't be able to drag the window while editing code. So let's use a different approach. We'll create another element before our header, also absolutely positioned, and dedicated to dragging the window.
<div class="drag"></div>
<header>...</header>
.drag {
  position: absolute;
  top: 0;
  width: 100%;
  height: env(titlebar-area-height, 0);
  -webkit-app-region: drag;
}
With the above code, we're making the draggable area span the entire viewport width, and using the titlebar-area-height variable to make it as tall as the title bar would have been. This way, our draggable area is aligned with the window control buttons, as shown below.
And, now, to make sure our search field and button remain usable:
header .search,
header .new {
  -webkit-app-region: no-drag;
}
With the above code, users can click and drag where the title bar used to be. It is an area that users expect to be able to use to move windows on desktop, and we're not breaking this expectation, which is good.
Adapting to window resize
It may be useful for an app to know both whether the window controls overlay is visible and when its size changes. In our case, if the user made the window very narrow, there wouldn't be enough space for the search field, logo, and button to fit, so we'd want to push them down a bit.
The Window Controls Overlay feature comes with a JavaScript API we can use to do this: navigator.windowControlsOverlay.
The API provides three interesting things:
- navigator.windowControlsOverlay.visible lets us know whether the overlay is visible.
- navigator.windowControlsOverlay.getBoundingClientRect() lets us know the position and size of the title bar area.
- navigator.windowControlsOverlay.ongeometrychange lets us know when the size or visibility changes.
Let's use this API to track the size of the title bar area and move the header down if that area is too narrow.
if (navigator.windowControlsOverlay) {
  navigator.windowControlsOverlay.addEventListener('geometrychange', () => {
    const { width } = navigator.windowControlsOverlay.getBoundingClientRect();
    document.body.classList.toggle('narrow', width < 250);
  });
}
In the example above, we set the narrow class on the body of the app if the title bar area is narrower than 250px. We could do something similar with a media query, but using the windowControlsOverlay API has two advantages for our use case:
- It's only fired when the feature is supported and used; we don't want to adapt the design otherwise.
- We get the size of the title bar area across operating systems, which is great because the size of the window controls is different on Mac and Windows. Using a media query wouldn't make it possible for us to know exactly how much space remains.
.narrow header {
  top: env(titlebar-area-height, 0);
  left: 0;
  width: 100%;
}
Using the above CSS code, we can move our header down to stay clear of the window control buttons when the window is too narrow, and move the thumbnails down accordingly.
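One caveat worth noting: since geometrychange only fires when the overlay's size or visibility changes, the layout class may also need to be applied once when the app starts. Here is a sketch of how that could look, using only the visible and getBoundingClientRect members described above; the updateNarrowClass helper is hypothetical, not part of the original demo.

function updateNarrowClass() {
  // Only apply the narrow layout when the overlay is actually
  // visible and its area is narrower than our 250px threshold.
  const overlay = navigator.windowControlsOverlay;
  const { width } = overlay.getBoundingClientRect();
  document.body.classList.toggle('narrow', overlay.visible && width < 250);
}

if (navigator.windowControlsOverlay) {
  updateNarrowClass(); // set the initial state on load
  navigator.windowControlsOverlay.addEventListener('geometrychange', updateNarrowClass);
}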
Thirty pixels of exciting design opportunities
Using the Window Controls Overlay feature, we were able to take our simple demo app and turn it into something that feels so much more integrated on desktop devices. Something that reaches out of the usual window constraints and provides a custom experience for its users.
In reality, this feature only gives us about 30 pixels of extra room and comes with challenges on how to deal with the window controls. And yet, this extra room and those challenges can be turned into exciting design opportunities.
More devices of all shapes and forms get invented all the time, and the web keeps on evolving to adapt to them. New features get added to the web platform to allow us, web authors, to integrate more and more deeply with those devices. From watches or foldable devices to desktop computers, we need to evolve our design approach for the web. Building for the web now lets us think outside the rectangular box.
So let's embrace this. Let's use the standard technologies already at our disposal, and experiment with new ideas to provide tailored experiences for all devices, all from a single codebase!
If you get a chance to try the Window Controls Overlay feature and have feedback about it, you can open issues on the spec's repository. It's still early in the development of this feature, and you can help make it even better. Or, you can take a look at the feature's existing documentation, or this demo app and its source code.
Do you find yourself designing screens with only a vague idea of how the things on the screen relate to the things elsewhere in the system? Do you leave stakeholder meetings with unclear directives that often seem to contradict previous conversations? You know a better understanding of user needs would help the team get clear on what you are actually trying to accomplish, but time and budget for research is tight. When it comes to asking for more direct contact with your users, you might feel like poor Oliver Twist, timidly asking, "Please, sir, I want some more."
Here's the trick. You need to get stakeholders themselves to identify high-risk assumptions and hidden complexity, so that they become just as motivated as you to get answers from users. Basically, you need to make them think it's their idea.
In this article, I'll show you how to collaboratively expose misalignment and gaps in the team's shared understanding by bringing the team together around two simple questions:
- What are the objects?
- What are the relationships between those objects?
A gauntlet between research and screen design
These two questions align to the first two steps of the ORCA process, which might become your new best friend when it comes to reducing guesswork. Wait, what's ORCA?! Glad you asked.
ORCA stands for Objects, Relationships, CTAs, and Attributes, and it outlines a process for creating solid object-oriented user experiences. Object-oriented UX is my design philosophy. ORCA is an iterative methodology for synthesizing user research into an elegant structural foundation to support screen and interaction design. OOUX and ORCA have made my work as a UX designer more collaborative, effective, efficient, fun, strategic, and meaningful.
The ORCA process has four iterative rounds and a whopping fifteen steps. In each round we get more clarity on our Os, Rs, Cs, and As.
I sometimes say that ORCA is a "garbage in, garbage out" process. To ensure that the testable prototype produced in the final round actually tests well, the process needs to be fed by good research. But if you don't have a ton of research, the beginning of the ORCA process serves another purpose: it helps you sell the need for research.
In other words, the ORCA process serves as a gauntlet between research and design. With good research, you can gracefully ride the killer whale from research into design. But without good research, the process effectively spits you back into research with a cache of specific open questions.
Getting in the same curiosity-boat
What gets us into trouble is not what we don't know. It's what we know for sure that just ain't so.
Mark Twain
The first two steps of the ORCA process, Object Discovery and Relationship Discovery, shine a spotlight on the dark, dusty corners of your team's misalignments and any inherent complexity that's been swept under the rug. They begin to expose what this classic comic so beautifully illustrates:
This is one reason why so many UX designers are frustrated in their jobs and why many projects fail. And this is also why we often can't sell research: every decision-maker is confident in their own mental picture.
Once we expose the hidden fuzzy patches in each picture and the differences between them all, the case for user research makes itself.
But how we do this is important. However much we might want to, we can't just tell everyone, "YOU ARE WRONG!" Instead, we need to facilitate and guide our team members to self-identify the holes in their picture. When stakeholders take ownership of assumptions and gaps in understanding, BAM! Suddenly, UX research is not such a hard sell, and everyone is aboard the same curiosity-boat.
Say your users are doctors. And you have no idea how doctors use the system you are tasked with redesigning.
You might try to sell research by honestly saying: "We need to understand doctors better! What are their pain points? How do they use the current app?" But here's the problem with that. Those questions are vague, and the answers to them don't feel acutely actionable.
Instead, you want your stakeholders themselves to ask super-specific questions. This is more like the kind of conversation you need to facilitate. Let's listen in:
"Wait a sec, how often do doctors share patients? Does a patient in this system have primary and secondary doctors?"
"Can a patient even have more than one primary doctor?"
"Is it a 'primary doctor' or just a 'primary caregiver'... Can't that role be a nurse practitioner?"
"No, caregivers are something else... That's the patient's family contacts, right?"
"So are caregivers in scope for this redesign?"
"Yeah, because if a caregiver is present at an appointment, the doctor needs to note that. Like, tag the caregiver on the note... Or on the appointment?"
Now we are getting somewhere. Do you see how powerful it can be getting stakeholders to debate these questions themselves? The diabolical goal here is to shake their confidence, gently and diplomatically.
When these kinds of questions bubble up collaboratively and come directly from the mouths of your stakeholders and decision-makers, suddenly, designing screens without knowing the answers to these questions seems incredibly risky, even silly.
If we create software without understanding the real-world information environment of our users, we will likely create software that does not align to the real-world information environment of our users. And this will, hands down, result in a more confusing, more complex, and less intuitive software product.
The two questions
But how do we get to these kinds of meaty questions diplomatically, efficiently, collaboratively, and reliably?
We can do this by starting with those two big questions that align to the first two steps of the ORCA process:
- What are the objects?
- What are the relationships between those objects?
In practice, getting to these answers is easier said than done. I'm going to show you how these two simple questions can provide the outline for an Object Definition Workshop. During this workshop, these "seed" questions will blossom into dozens of specific questions and shine a spotlight on the need for more user research.
Prep work: Noun foraging
In the next section, I'll show you how to run an Object Definition Workshop with your stakeholders (and your entire cross-functional team, hopefully). But first, you need to do some prep work.
Basically, look for nouns that are particular to the business or industry of your project, and do it across at least a few sources. I call this noun foraging.
Here are just a few great noun foraging sources:
- the product's marketing site
- the product's competitors' marketing sites (competitive analysis, anyone?)
- the existing product (look at labels!)
- user interview transcripts
- notes from stakeholder interviews or vision docs from stakeholders
Put your detective hat on, my dear Watson. Get resourceful and leverage what you have. If all you have is a marketing website, some screenshots of the existing legacy system, and access to customer service chat logs, then use those.
As you peruse these sources, watch for the nouns that are used over and over again, and start listing them (preferably on blue sticky notes if you'll be creating an object map later!).
You'll want to focus on nouns that might represent objects in your system. If you are having trouble determining whether a noun might be object-worthy, remember the acronym SIP and test for:
- Structure
- Instances
- Purpose
Think of a library app, for example. Is "book" an object?
Structure: can you think of a few attributes for this potential object? Title, author, publish date... Yep, it has structure. Check!
Instances: what are some examples of this potential "book" object? Can you name a few? The Alchemist, Ready Player One, Everybody Poops... OK, check!
Purpose: why is this object important to the users and business? Well, "book" is what our library client is providing to people, and books are why people come to the library... Check, check, check!
As you are noun foraging, focus on capturing the nouns that have SIP. Avoid capturing components like dropdowns, checkboxes, and calendar pickers: your UX system is not your design system! Components are just the packaging for objects; they are a means to an end. No one is coming to your digital place to play with your dropdown! They are coming for the VALUABLE THINGS and what they can do with them. Those things, or objects, are what we are trying to identify.
Let's say we work for a startup disrupting the email experience. This is how I'd start my noun foraging.
First I'd look at my own email client, which happens to be Gmail. I'd then look at Outlook and the new HEY email. I'd look at Yahoo, Hotmail... I'd even look at Slack and Basecamp and other so-called "email replacers." I'd read some articles, reviews, and forum threads where people are complaining about email. While doing all this, I would look for and write down the nouns.
(Before moving on, feel free to go noun foraging for this hypothetical product, too, and then scroll down to see how much our lists match up. Just don't get lost in your own emails! Come back to me!)
Drumroll, please...
Here are a few nouns I came up with during my noun foraging:
- email message
- thread
- contact
- client
- rule/automation
- email address that is not a contact?
- contact groups
- attachment
- Google doc file / other integrated file
- newsletter? (HEY treats this differently)
- saved responses and templates
Scan your list of nouns and pick out words that you are completely clueless about. In our email example, it might be client or automation. Do as much homework as you can before your session with stakeholders: google what's googleable. But other terms might be so specific to the product or domain that you need to have a conversation about them.
Aside: here are some real nouns foraged during my own past project work that I needed my stakeholders to help me understand:
- Record Locator
- Incentive Home
- Augmented Line Item
- Curriculum-Based Measurement Probe
This is really all you need to prepare for the workshop session: a list of nouns that represent potential objects and a short list of nouns that need to be defined further.
Facilitate an Object Definition Workshop
You could actually start your workshop with noun foraging, since this activity can be done collaboratively. If you have five people in the room, pick five sources, assign one to every person, and give everyone ten minutes to find the objects within their source. When the time's up, come together and find the overlap. Affinity mapping is your friend here!
If your team is short on time and might be reluctant to do this kind of grunt work (which is usually the case), do your own noun foraging beforehand, but be prepared to show your work. I love presenting screenshots of documents and screens with all the nouns already highlighted. Bring the artifacts of your process, and start the workshop with a five-minute overview of your noun foraging journey.
HOT TIP: before jumping into the workshop, frame the conversation as a requirements-gathering session to help you better understand the scope and details of the system. You don't need to let them know that you're looking for gaps in the team's understanding so that you can prove the need for more user research; that will be our little secret. Instead, go into the session optimistically, as if your knowledgeable stakeholders and PMs and biz folks already have all the answers.
Then, let the question whack-a-mole commence.
1. What is this thing?
Want to have some real fun? At the beginning of your session, ask stakeholders to privately write definitions for the handful of obscure nouns you might be uncertain about. Then, have everyone show their cards at the same time and see if you get different definitions (you will). This is gold for exposing misalignment and starting great conversations.
As your discussion unfolds, capture any agreed-upon definitions. And when uncertainty emerges, quietly (but visibly) start an "open questions" parking lot.
After definitions solidify, here's a great follow-up:
2. Do our users know what these things are? What do users call this thing?
Stakeholder 1: They probably call email clients "apps." But I'm not sure.
Stakeholder 2: Automations are often called "workflows," I think. Or, maybe users think workflows are something different.
If a more user-friendly term emerges, ask the group if they can agree to use only that term moving forward. This way, the team can better align to the users' language and mindset.
OK, moving on.
If you have two or more objects that seem to overlap in purpose, ask one of these questions:
3. Are these the same thing? Or are these different? If they are not the same, how are they different?
You: Is a saved response the same as a template?
Stakeholder 1: Yes! Definitely.
Stakeholder 2: I don't think so... A saved response is text with links and variables, but a template is more about the look and feel, like default fonts, colors, and placeholder images.
Continue to build out your growing glossary of objects. And continue to capture areas of uncertainty in your "open questions" parking lot.
If you successfully determine that two similar things are, in fact, different, here's your next follow-up question:
4. What's the relationship between these objects?
You: Are saved responses and templates related in any way?
Stakeholder 3: Yeah, a template can be applied to a saved response.
You, always with the follow-ups: When is the template applied to a saved response? Does that happen when the user is constructing the saved response? Or when they apply the saved response to an email? How does that actually work?
Listen. Capture uncertainty. Once the list of "open questions" grows to a critical mass, pause to start assigning questions to groups or individuals. Some questions might be for the dev team (hopefully at least one developer is in the room with you). One question might be specifically for someone who couldn't make it to the workshop. And many questions will need to be labeled "user."
Do you see how we are building up to our UXR sales pitch?
5. Is this object in scope?
Your next question narrows the team's focus toward what's most important to your users. You can simply ask, "Are saved responses in scope for our first release?" But I've got a better, more devious strategy.
By now, you should have a list of clearly defined objects. Ask participants to sort these objects from most to least important, either in small breakout groups or individually. Then, like you did with the definitions, have everyone reveal their sort order at once. Surprisingly (or not so surprisingly), it's not unusual for the VP to rank something like "saved responses" as #2 while everyone else puts it at the bottom of the list. Try not to look too smug as you inevitably expose more misalignment.
I did this for a startup a few years ago. We posted the three groups' wildly different sort orders on the whiteboard.
The CEO stood back, looked at it, and said, "This is why we haven't been able to move forward in two years."
Admittedly, it's tragic to hear that, but as a professional, it feels pretty awesome to be the one who facilitated a watershed realization.
Once you have a good idea of the in-scope, clearly defined things, you can move on to more relationship mapping.
6. Create a visual representation of the objects' relationships
We've already done a bit of this while trying to determine whether two things are different, but this time, ask the team about every potential relationship. For each object, ask how it relates to all the other objects. In what ways are the objects connected? To visualize all the connections, pull out your trusty boxes-and-arrows technique. Here, we are connecting our objects with verbs. I like to keep my verbs to simple "has a" and "has many" statements.
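To make that concrete, here is what a few of those statements might look like for our hypothetical email product. The objects come from the noun list above, but the relationships themselves are purely illustrative, not a definitive model:

thread has many email messages
email message has a sender (contact)
email message has many attachments
contact group has many contacts
saved response has a template
newsletter has a... contact? Or is a newsletter a kind of email message?

Notice how even writing these down surfaces new uncertainty for the parking lot.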
This system modeling activity brings up all sorts of new questions:
- Can a saved response have attachments?
- Can a saved response use a template? If so, if an email uses a saved response with a template, can the user override that template?
- Do users want to see all the emails they sent that included a particular attachment? For example, "show me all the emails I sent with ProfessionalImage.jpg attached. I've changed my professional photo and I want to alert everyone to update it."
Solid answers might emerge directly from the workshop participants. Great! Capture that new shared understanding. But when uncertainty surfaces, continue to add questions to your growing parking lot.
Light the fuse
You've positioned the explosives all along the floodgates. Now you simply have to light the fuse and BOOM. Watch the buy-in for user research flooooow.
Before your workshop wraps up, have the group reflect on the list of open questions. Make plans for getting answers internally, then focus on the questions that need to be brought before users.
Here's your final step. Take those questions you've compiled for user research and discuss the level of risk associated with NOT answering them. Ask, "If we design without an answer to this question, if we make up our own answer and we are wrong, how bad might that turn out?"
With this methodology, we are cornering our decision-makers into advocating for user research as they themselves label questions as high-risk. Sorry, not sorry.
Now is your moment of truth. With everyone in the room, ask for a reasonable budget of time and money to conduct six to eight user interviews focused specifically on these questions.
HOT TIP: if you are new to UX research, please note that you'll likely need to rephrase the questions that came up during the workshop before you present them to users. Make sure your questions are open-ended and don't lead the user into any default answers.
Final words: Hold the screen design!
Seriously, if at all possible, do not ever design screens again without first answering these fundamental questions: what are the objects and how do they relate?
I promise you this: if you can secure a shared understanding between the business, design, and development teams before you start designing screens, you will have less heartache and save more time and money, and (it almost feels like a bonus at this point!) users will be more receptive to what you put out into the world.
I sincerely hope this helps you win time and budget to go talk to your users and gain clarity on what you are designing before you start building screens. If you find success using noun foraging and the Object Definition Workshop, there's more where that came from in the rest of the ORCA process, which will help prevent even more late-in-the-game scope tugs-of-war and strategy pivots.
All the best of luck! Now go sell research!
Do you remember when having a great website was enough? Now, people are getting answers from Siri, Google search snippets, and mobile apps, not just our websites. Forward-thinking organizations have adopted an omnichannel content strategy, whose mission is to reach audiences across multiple digital channels and platforms.
But how do you set up a content management system (CMS) to reach your audience now and in the future? I learned the hard way that creating a content model (a definition of content types, attributes, and relationships that lets people and systems understand content) with my more familiar design-system thinking would capsize my customer's omnichannel content strategy. You can avoid that outcome by creating content models that are semantic and that also connect related content.
I recently had the opportunity to lead the CMS implementation for a Fortune 500 company. The client was excited by the benefits of an omnichannel content strategy, including content reuse, multichannel marketing, and robot delivery: designing content to be intelligible to bots, Google knowledge panels, snippets, and voice user interfaces.
A content model is a critical foundation for an omnichannel content strategy, and for our content to be understood by multiple systems, the model needed semantic types: types named according to their meaning instead of their presentation. Our goal was to let authors create content and reuse it wherever it was relevant. But as the project proceeded, I realized that supporting content reuse at the scale that my customer needed required the whole team to recognize a new pattern.
Despite our best intentions, we kept drawing from what we were more familiar with: design systems. Unlike web-focused content strategies, an omnichannel content strategy can't rely on WYSIWYG tools for design and layout. Our tendency to approach the content model with our familiar design-system thinking constantly led us to veer away from one of the primary purposes of a content model: delivering content to audiences on multiple marketing channels.
Two essential principles for an effective content model
We needed to help our designers, developers, and stakeholders understand that we were doing something very different from their prior web projects, where it was natural for everyone to think about content as visual building blocks fitting into layouts. The previous approach was not only more familiar but also more intuitive (at least at first) because it made the designs feel more tangible. We discovered two principles that helped the team understand how a content model differs from the design systems that we were used to:
- Content models must define semantics instead of layout.
- And content models should connect content that belongs together.
Semantic content models
A semantic content model uses type and attribute names that reflect the meaning of the content, not how it will be displayed. For example, in a nonsemantic model, teams might create types like teasers, media blocks, and cards. Although these types might make it easy to lay out content, they don't help delivery channels understand the content's meaning, and that understanding is what would have opened the door to presenting the content in each marketing channel. In contrast, a semantic content model uses type names like product, service, and testimonial so that each delivery channel can understand the content and use it as it sees fit.
When you're creating a semantic content model, a great place to start is to look over the types and properties defined by Schema.org, a community-driven resource for type definitions that are intelligible to platforms like Google search.
A semantic content model has several benefits:
- Even if your team doesn't care about omnichannel content, a semantic content model decouples content from its presentation so that teams can evolve the website's design without needing to refactor its content. In this way, content can withstand disruptive website redesigns.
- A semantic content model also provides a competitive edge. By adding structured data based on Schema.org's types and properties, a website can provide hints to help Google understand the content, display it in search snippets or knowledge panels, and use it to answer voice-interface user questions. Potential visitors could discover your content without ever setting foot in your website.
- Beyond those practical benefits, you'll also need a semantic content model if you want to deliver omnichannel content. To use the same content in multiple marketing channels, delivery channels need to be able to understand it. For example, if your content model were to provide a list of questions and answers, it could easily be rendered on a frequently asked questions (FAQ) page, but it could also be used in a voice interface or by a bot that answers common questions (see the structured-data sketch below).
For example, using a semantic content model for articles, events, people, and locations lets A List Apart provide cleanly structured data for search engines so that users can read the content on the website, in Google knowledge panels, and even with hypothetical voice interfaces in the future.
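To make the FAQ example above concrete, here is roughly what that question-and-answer content could look like when delivered to the web channel as Schema.org structured data. This is a hand-written sketch with made-up question text, using Schema.org's FAQPage, Question, and Answer types:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "Can the same content be reused across channels?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Yes. Because the model stores a semantic question-and-answer pair, any channel can render it."
    }
  }]
}
</script>

Because the underlying model stores the questions and answers semantically, this rendering is only one of many; a voice interface could consume the same pairs directly.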
Content models that connect
After struggling to describe what makes a good content model, I've come to realize that the best models are those that are semantic and that also connect related content components (such as a FAQ item's question and answer pair), instead of slicing up related content across disparate components. A good content model connects content that should remain together so that multiple delivery channels can use it without needing to first put those pieces back together.
Think about writing an article or essay. An article's meaning and usefulness depends upon its parts being kept together. Would one of the headings or paragraphs be meaningful on its own, without the context of the full article? On our project, our familiar design-system thinking often led us to want to create content models that would slice content into disparate chunks to fit the web-centric layout. This had an impact similar to separating an article from its headline. Because we were slicing content into standalone pieces based on layout, content that belonged together became difficult to manage and nearly impossible for multiple delivery channels to understand.
To illustrate, let's look at how connecting related content applies in a real-world scenario. The design team for our customer presented a complex layout for a software product page that included multiple tabs and sections. Our instincts were to follow suit with the content model. Shouldn't we make it as easy and as flexible as possible to add any number of tabs in the future?
Because our design-system instincts were so familiar, it felt like we needed a content type called "tab section" so that multiple tab sections could be added to a page. Each tab section would display various types of content. One tab might provide the software's overview or its specifications. Another tab might provide a list of resources.
Our inclination to break down the content model into "tab section" pieces would have led to an unnecessarily complex model and a cumbersome editing experience, and it would have created content that couldn't be understood by additional delivery channels. For example, how would another system be able to tell which "tab section" referred to a product's specifications or its resource list? Would that other system have to resort to counting tab sections and content blocks? This would have prevented the tabs from ever being reordered, and it would have required adding logic in every other delivery channel to interpret the design system's layout. Furthermore, if the customer no longer wanted to display this content in a tab layout, migrating to a new content model to reflect the new page redesign would have been tedious.
We had a breakthrough when we discovered that our customer had a specific purpose in mind for each tab: each one would reveal specific information such as the software product's overview, specifications, related resources, and pricing. Once implementation began, our inclination to focus on what was visual and familiar had obscured the intent of the designs. With a little digging, it didn't take long to realize that the concept of tabs wasn't relevant to the content model. What mattered was the meaning of the content that was planned for display in the tabs.
In fact, the customer could have decided to display this content in a different way (without tabs) somewhere else. This realization prompted us to define content types for the software product based on the meaningful attributes that the customer had wanted to render on the web. There were obvious semantic attributes like name and description as well as rich attributes like screenshots, software requirements, and feature lists. The software's product information stayed together because it wasn't sliced across separate components like "tab sections" that were derived from the content's presentation. Any delivery channel, including future ones, could understand and present this content.
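As a rough sketch of where that left us, the software product type might be expressed like this in a CMS. The field names and types here are illustrative only, echoing the attributes mentioned above rather than the client's actual schema:

{
  "type": "softwareProduct",
  "attributes": {
    "name": "text",
    "description": "rich text",
    "screenshots": ["image"],
    "softwareRequirements": ["text"],
    "featureList": ["text"],
    "relatedResources": ["reference to resource"]
  }
}

Note that nothing in this model says anything about tabs; how the attributes are laid out is left entirely to each delivery channel.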
Conclusion
In this omnichannel marketing project, we discovered that the best way to keep our content model on track was to ensure that it was semantic (with type and attribute names that reflected the meaning of the content) and that it kept together content that belonged together (instead of fragmenting it). These two concepts curtailed our temptation to shape the content model based on the design. So if you're working on a content model to support an omnichannel content strategy, or even if you just want to make sure that Google and other interfaces understand your content, remember:
- A design system isn't a content model. Team members may be tempted to conflate them and to make your content model mirror your design system, so you should protect the semantic value and contextual structure of the content strategy during the entire implementation process. This will let every delivery channel consume the content without needing a magic decoder ring.
- If your team is struggling to make this transition, you can still reap some of the benefits by using Schema.org-based structured data in your website. Even if additional delivery channels aren't on the immediate horizon, the benefit to search engine optimization is a compelling reason on its own.
- Additionally, remind the team that decoupling the content model from the design will let them update the designs more easily because they won't be held back by the cost of content migrations. They'll be able to create new designs without the obstacle of compatibility between the design and the content, and they'll be ready for the next big thing.
By rigorously advocating for these principles, you'll help your team treat content the way that it deserves: as the most critical asset in your user experience and the best way to connect with your audience.
Antiracist economist Kim Crayton says that "intention without strategy is chaos." We've discussed how our biases, assumptions, and inattention toward marginalized and vulnerable groups lead to dangerous and unethical tech. But what, specifically, do we need to do to fix it? The intention to make our tech safer is not enough; we need a strategy.
This chapter will equip you with that plan of action. It covers how to integrate safety principles into your design work in order to create tech that's safe, how to convince your stakeholders that this work is necessary, and how to respond to the critique that what we actually need is more diversity. (Spoiler: we do, but diversity alone is not the antidote to fixing unethical, unsafe tech.)
The process for inclusive safety
When you are designing for safety, your goals are to:
- identify ways your product can be used for abuse,
- design ways to prevent the abuse, and
- provide support for vulnerable users to reclaim power and control.
The Process for Inclusive Safety is a tool to help you reach those goals (Fig 5.1). It's a methodology I created in 2018 to capture the various techniques I was using when designing products with safety in mind. Whether you are creating an entirely new product or adding to an existing feature, the Process can help you make your product safe and inclusive. The Process includes five general areas of action:
- Conducting research
- Creating archetypes
- Brainstorming problems
- Designing solutions
- Testing for safety
The Process is meant to be flexible; it won't make sense for teams to implement every step in some situations. Use the parts that are relevant to your unique work and context; this is meant to be something you can insert into your existing design practice.
And once you use it, if you have an idea for making it better or simply want to provide context on how it helped your team, please get in touch with me. It's a living document that I hope will continue to be a useful and realistic tool that technologists can use in their day-to-day work.
If you're working on a product specifically for a vulnerable group or for survivors of some form of trauma, such as an app for survivors of domestic violence, sexual assault, or drug addiction, be sure to read Chapter 7, which covers that situation explicitly, since it should be handled a bit differently. The guidelines here are for prioritizing safety when designing a more general product that will have a wide user base (which, we already know from statistics, will include certain groups that should be protected from harm). Chapter 7 focuses on products that are specifically for vulnerable groups and people who have experienced trauma.
Step 1: Conduct research
Design research should include a broad analysis of how your tech might be weaponized for abuse as well as specific insights into the experiences of survivors and perpetrators of that type of abuse. At this stage, you and your team will investigate issues of interpersonal harm and abuse, and explore any other safety, security, or inclusivity issues that might be a concern for your product or service, like data security, racist algorithms, and harassment.
Broad research
Your project should begin with broad, general research into similar products and issues around safety and ethical concerns that have already been reported. For example, a team building a smart home device would do well to understand the multitude of ways that existing smart home devices have been used as tools of abuse. If your product will involve AI, seek to understand the potentials for racism and other issues that have been reported in existing AI products. Nearly all types of technology have some kind of potential or actual harm that's been reported on in the news or written about by academics. Google Scholar is a useful tool for finding these studies.
Specific research: Survivors
When possible and appropriate, include direct research (surveys and interviews) with people who are experts in the forms of harm you have uncovered. Ideally, you'll want to interview advocates working in the space of your research first so that you have a more solid understanding of the topic and are better equipped to not retraumatize survivors. If you've uncovered possible domestic violence issues, for example, the experts you'll want to speak with are survivors themselves, as well as workers at domestic violence hotlines, shelters, other related nonprofits, and lawyers.
Especially when interviewing survivors of any kind of trauma, it is important to pay people for their knowledge and lived experiences. Don't ask survivors to share their trauma for free, as this is exploitative. While some survivors may not want to be paid, you should always make the offer in the initial ask. An alternative to payment is to donate to an organization working against the type of violence that the interviewee experienced. We'll talk more about how to appropriately interview survivors in Chapter 6.
Specific research: Abusers
It's unlikely that teams aiming to design for safety will be able to interview self-proclaimed abusers or people who have broken laws around things like hacking. Don't make this a goal; rather, try to get at this angle in your general research. Aim to understand how abusers or bad actors weaponize technology against others, how they cover their tracks, and how they explain or rationalize the abuse.
Step 2: Create archetypes
Once you've finished conducting your research, use your insights to create abuser and survivor archetypes. Archetypes are not personas; they're not based on real people you interviewed and surveyed. Instead, they're based on your research into likely safety issues, much like when we design for accessibility: we don't need to have found a group of blind or low-vision users in our interview pool to create a design that's inclusive of them. Instead, we base those designs on existing research into what this group needs. Personas typically represent real users and include many details, while archetypes are broader and can be more generalized.
The abuser archetype is someone who will look at the product as a tool to perform harm (Fig 5.2). They may be trying to harm someone they don't know through surveillance or anonymous harassment, or they may be trying to control, monitor, abuse, or torment someone they know personally.
The survivor archetype is someone who is being abused with the product. There are various situations to consider in terms of the archetype's understanding of the abuse and how to put an end to it: Do they need proof of abuse they already suspect is happening, or are they unaware they've been targeted in the first place and need to be alerted (Fig 5.3)?
You may want to make multiple survivor archetypes to capture a range of different experiences. They may know that the abuse is happening but not be able to stop it, like when an abuser locks them out of IoT devices; or they may know it's happening but not know how, such as when a stalker keeps figuring out their location (Fig 5.4). Include as many of these scenarios as you need to in your survivor archetypes. You'll use these later on when you design solutions to help your survivor archetypes achieve their goals of preventing and ending abuse.
It may be useful for you to create persona-like artifacts for your archetypes, such as the three examples shown. Instead of focusing on the demographic information we often see in personas, focus on their goals. The goals of the abuser will be to carry out the specific abuse you've identified, while the goals of the survivor will be to prevent abuse, understand that abuse is happening, make ongoing abuse stop, or regain control over the technology that's being used for abuse. Later, you'll brainstorm how to thwart the abuser's goals and support the survivor's goals.
And while the "abuser/survivor" model fits most cases, it doesn't fit all, so modify it as you need to. For example, if you uncovered a security issue, such as the ability for someone to hack into a home camera system and talk to children, the malicious hacker would get the abuser archetype and the child's parents would get the survivor archetype.
Step 3: Brainstorm problems
After creating archetypes, brainstorm novel abuse cases and safety issues. "Novel" means things not found in your research; you're trying to identify completely new safety issues that are unique to your product or service. The goal of this step is to exhaust every effort to identify harms your product could cause. You aren't worrying about how to prevent the harm yet; that comes in the next step.
How could your product be used for any kind of abuse beyond what you've already identified in your research? I recommend setting aside at least a few hours with your team for this process.
If you're looking for somewhere to start, try doing a Black Mirror brainstorm. This exercise is based on the show Black Mirror, which features stories about the dark possibilities of technology. Try to figure out how your product would be used in an episode of the show: the wildest, most awful, out-of-control ways it could be used for harm. When I've led Black Mirror brainstorms, participants usually end up having a good deal of fun (which I think is great; it's okay to have fun when designing for safety!). I recommend time-boxing a Black Mirror brainstorm to half an hour, then dialing it back and spending the rest of the time thinking of more realistic forms of harm.
After you've identified as many opportunities for abuse as possible, you may still not feel confident that you've uncovered every potential form of harm. A healthy amount of anxiety is normal when you're doing this kind of work. It's common for teams designing for safety to worry, "Have we really identified every possible harm? What if we've missed something?" If you've spent at least four hours coming up with ways your product could be used for harm and have run out of ideas, go to the next step.
It's impossible to guarantee you've thought of everything; instead of aiming for 100 percent assurance, recognize that you've taken this time and done the best you can, and commit to continuing to prioritize safety in the future. Once your product is released, your users may identify new issues that you missed; aim to receive that feedback graciously and course-correct quickly.
Step 4: Design solutions
At this point, you should have a list of ways your product can be used for harm as well as survivor and abuser archetypes describing opposing user goals. The next step is to identify ways to design against the identified abuser's goals and to support the survivor's goals. This step is a good one to insert alongside existing parts of your design process where you're proposing solutions for the various problems your research uncovered.
Some questions to ask yourself to help prevent harm and support your archetypes include:
- Can you design your product in such a way that the identified harm cannot happen in the first place? If not, what roadblocks can you put up to prevent the harm from happening?
- How can you make the victim aware that abuse is happening through your product?
- How can you help the victim understand what they need to do to make the problem stop?
- Can you identify any types of user activity that would indicate some form of harm or abuse? Could your product help the user access support?
In some products, it's possible to proactively recognize that harm is happening. For example, a pregnancy app might be modified to allow the user to report that they were the victim of an assault, which could trigger an offer of resources from local and national organizations. This sort of proactiveness is not always possible, but it's worth taking a half hour to discuss whether any type of user activity would indicate some form of harm or abuse, and how your product could assist the user in receiving help in a safe manner.
That said, use caution: you don't want to do anything that could put a user in harm's way if their devices are being monitored. If you do offer some kind of proactive help, always make it voluntary, and think through other safety issues, such as the need to keep the user in-app in case an abuser is checking their search history. We'll walk through a good example of this in the next chapter.
Step 5: Test for safety
The final step is to test your prototypes from the point of view of your archetypes: the person who wants to weaponize the product for harm and the victim of the harm who needs to regain control over the technology. Just like any other kind of product testing, at this point you'll aim to rigorously test your safety solutions so that you can identify gaps and correct them, validate that your designs will help keep your users safe, and feel more confident releasing your product into the world.
Ideally, safety testing happens along with usability testing. If you're at a company that doesn't do usability testing, you might be able to use safety testing to cleverly perform both: a user who goes through your design attempting to weaponize the product against someone else can also be encouraged to point out interactions or other elements of the design that don't make sense to them.
You'll want to conduct safety testing on either your final prototype or the actual product if it's already been released. There's nothing wrong with testing an existing product that wasn't designed with safety goals in mind from the outset; "retrofitting" it for safety is a good thing to do.
Remember that testing for safety involves testing from the perspective of both an abuser and a survivor, though it may not always make sense to do both. And if you made multiple survivor archetypes to capture multiple scenarios, you'll want to test from the perspective of each one.
As with other sorts of usability testing, you as the designer are most likely too close to the product and its design by this point to be a valuable tester; you know the product too well. Instead of doing it yourself, set up testing as you would with other usability testing: find someone who is not familiar with the product and its design, set the scene, give them a task, encourage them to think out loud, and observe how they attempt to complete it.
Abuser testing
The goal of this testing is to understand how easy it is for someone to weaponize your product for harm. Unlike with usability testing, you want to make it impossible, or at least difficult, for them to achieve their goal. Reference the goals in the abuser archetype you created earlier, and use your product in an attempt to achieve them.
For example, for a fitness app with GPS-enabled location features, we can imagine that the abuser archetype would have the goal of figuring out where his ex-girlfriend now lives. With this goal in mind, you'd try everything possible to figure out the location of another user who has their privacy settings enabled. You might try to see her running routes, view any available information on her profile, view anything available about her location (which she has set to private), and investigate the profiles of any other users somehow connected with her account, such as her followers.
If by the end of this you've managed to uncover some of her location data, despite her having set her profile to private, you now know that your product enables stalking. Your next step is to go back to step 4 and figure out how to prevent this from happening. You may need to repeat the process of designing solutions and testing them more than once.
Survivor testing
Survivor testing involves identifying how to give information and power to the survivor. It might not always make sense based on the product or context. Thwarting the attempt of an abuser archetype to stalk someone also satisfies the survivor archetype's goal of not being stalked, so separate testing wouldn't be needed from the survivor's perspective.
However, there are cases where it makes sense. For example, for a smart thermostat, a survivor archetype's goal would be to understand who or what is making the temperature change when they aren't doing it themselves. You could test this by looking for the thermostat's history log and checking for usernames, actions, and times; if you couldn't find that information, you would have more work to do in step 4.
Another goal might be regaining control of the thermostat once the survivor realizes the abuser is remotely changing its settings. Your test would involve attempting to figure out how to do this: are there instructions that explain how to remove another user and change the password, and are they easy to find? This might again reveal that more work is needed to make it clear to the user how they can regain control of the device or account.
Stress testing
To make your product more inclusive and compassionate, consider adding stress testing. This concept comes from Design for Real Life by Eric Meyer and Sara Wachter-Boettcher. The authors point out that personas typically center people who are having a good day, but real users are often anxious, stressed out, having a bad day, or even experiencing tragedy. These are called "stress cases," and testing your products for users in stress-case situations can help you identify places where your design lacks compassion. Design for Real Life has more details about what it looks like to incorporate stress cases into your design, as well as many other great tactics for compassionate design.
In the 1950s, many in the elite running community had begun to believe it wasn't possible to run a mile in less than four minutes. Runners had been attempting it since the late 19th century and were beginning to draw the conclusion that the human body simply wasn't built for the task.
But on May 6, 1954, Roger Bannister took everyone by surprise. It was a cold, wet day in Oxford, England, with conditions no one expected to lend themselves to record-setting, and yet Bannister did just that, running a mile in 3:59.4 and becoming the first person in the record books to run a mile in under four minutes.
This shift in the benchmark had profound effects; the world now knew that the four-minute mile was possible. Bannister's record lasted only forty-six days before it was snatched away by Australian runner John Landy. Then, a year later, three runners beat the four-minute barrier together in the same race. Since then, over 1,400 runners have officially run a mile in under four minutes; the current record is 3:43.13, held by Moroccan athlete Hicham El Guerrouj.
We achieve far more when we believe that something is possible, and we will believe it's possible only when we see that someone else has already done it. As with human running speed, so it is with what we believe are the hard limits on how a website needs to perform.
Establishing standards for a sustainable web
In most major industries, the key metrics of environmental performance are fairly well established, such as miles per gallon for cars or energy per square meter for homes. The tools and methods for calculating those metrics are standardized as well, which keeps everyone on the same page when doing environmental assessments. In the world of websites and apps, however, we aren't held to any particular environmental standards, and we have only recently gained the tools and methods we need to even make an environmental assessment.
The primary goal in sustainable web design is to reduce carbon emissions. However, it's almost impossible to actually measure the amount of CO2 produced by a web product. We can't measure the fumes coming out of the exhaust pipes on our laptops. The emissions of our websites are far away, out of sight and out of mind, coming out of power stations burning coal and gas. We have no way to trace the electrons from a website or app back to the power station where the electricity is being generated and know the exact amount of greenhouse gas produced. So what do we do?
If we can't measure the actual carbon emissions, then we need to find what we can measure. The primary factors that can serve as indicators of carbon emissions are:
- Data transfer
- Carbon intensity of electricity
Let's take a look at how we can use these metrics to quantify the energy consumption, and in turn the carbon footprint, of the websites and web apps we create.
Data transfer
When measuring the amount of data transferred over the internet as a website or application is used, most researchers use kilowatt-hours per gigabyte (kWh/GB) as the metric of energy efficiency. This provides a great reference point for energy consumption and carbon emissions: as a rule of thumb, the more data transferred, the more energy used in the data center, telecoms networks, and end user devices.
For web pages, data transfer for a single visit can be most easily estimated by measuring the page weight, meaning the transfer size of the page in kilobytes the first time someone visits it. It's fairly easy to measure using the developer tools in any modern web browser, and your web hosting account will often include statistics for the total data transfer of any web application (Fig 2.1).
The nice thing about page weight as a metric is that it allows us to compare the efficiency of web pages on a level playing field without confusing the issue with constantly changing traffic volumes.
There is plenty of scope for reducing page weight. By early 2020, the median page weight was 1.97 MB for setups the HTTP Archive classifies as "desktop" and 1.77 MB for "mobile," with desktop weights up 36 percent since January 2016 and mobile page weights nearly doubling over the same period (Fig 2.2). Roughly half of this data transfer is image files, making images the single biggest source of carbon emissions on the average website.
History clearly shows us that our web pages can be smaller, if only we set our minds to it. While most technologies become ever more energy efficient, including the underlying technology of the web such as data centers and transmission networks, websites themselves are a technology that becomes less efficient as time goes on.
You might be familiar with the concept of performance budgeting as a way of focusing a project team on creating faster user experiences. For example, we might specify that the website must load in a maximum of one second on a broadband connection and three seconds on a 3G connection. Much like speed limits while driving, performance budgets are upper limits rather than vague suggestions, so the goal should always be to come in under budget.
Designing for fast performance often leads to reduced data transfer and emissions, but not always. Web performance is often more about the subjective perception of load times than about the true efficiency of the underlying system, whereas page weight and transfer size are more objective measures and more reliable benchmarks for sustainable web design.
We can set a page weight budget in reference to a benchmark of industry averages, using data from sources like HTTP Archive. We can also benchmark page weight against competitors or against the old version of the website we're replacing. For example, we might set a maximum page weight budget equal to our most efficient competitor, or we could set the benchmark lower to guarantee we are best in class.
If we want to take it to the next level, we can also start looking at the transfer size of our web pages for repeat visitors. Although page weight on a first visit is the easiest thing to measure, and easy to compare on a like-for-like basis, we can learn even more by looking at transfer size in other scenarios too. For example, visitors who load the same page multiple times will likely have a high percentage of its files cached in their browser, meaning they don't need to transfer all of the files on subsequent visits. Likewise, a visitor who navigates to new pages on the same website will likely not need to load the full page each time, as global assets from areas like the header and footer may already be cached. Measuring transfer size at this level of detail can teach us more about how to optimize efficiency for users who regularly visit our pages, and lets us set page weight budgets for additional scenarios beyond the first visit.
Page weight budgets are easy to track throughout a design and development process. Although they don't tell us carbon emission and energy consumption analytics directly, they give us a clear indication of efficiency relative to other websites. And since transfer size is an effective analog for energy consumption, we can use it to estimate energy consumption too.
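To sketch what that estimate looks like in practice, here's a minimal example; the kWh/GB constant is an assumed, illustrative figure, since published estimates vary between studies and years:

// Rough energy estimate from data transfer. KWH_PER_GB is an
// assumed, illustrative figure; published estimates vary widely.
const KWH_PER_GB = 0.81;

function energyPerVisitKwh(pageWeightKb: number): number {
  const gigabytes = pageWeightKb / (1024 * 1024); // KB to GB
  return gigabytes * KWH_PER_GB;
}

console.log(energyPerVisitKwh(2048)); // a 2 MB page: roughly 0.0016 kWh per visit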
In summary, reduced data transfer translates to energy efficiency, a key factor in reducing the carbon emissions of web products. The more efficient our products, the less electricity they use, and the less fossil fuel needs to be burned to produce that electricity. But as we'll see next, since all web products demand some power, it's important to consider the source of that electricity, too.
Carbon intensity of electricity
Regardless of energy efficiency, the level of pollution caused by digital products depends on the carbon intensity of the energy being used to power them. Carbon intensity is a term used to define the grams of CO2 produced for every kilowatt-hour of electricity (gCO2/kWh). This varies widely: renewable energy sources and nuclear have an extremely low carbon intensity of less than 10 gCO2/kWh (even when factoring in their construction), whereas fossil fuels have a very high carbon intensity of approximately 200–400 gCO2/kWh.
Most electricity comes from national or state grids, where energy from a variety of different sources is mixed together with varying levels of carbon intensity. The distributed nature of the internet means that a single user of a website or app might be using energy from multiple grids simultaneously; a website user in Paris uses electricity from the French national grid to power their home internet and devices, but the website's data center could be in Dallas, USA, pulling electricity from the Texas grid, while the telecoms networks use energy from everywhere between Dallas and Paris.
We don't have control over the full energy supply of web services, but we do have some control over where we host our projects. With a data center using a significant proportion of any website's energy, locating that data center in an area with low-carbon energy will tangibly reduce its carbon emissions. Danish startup Tomorrow reports and maps this user-contributed data, and a glance at their map shows how, for example, a data center in France will have significantly lower carbon emissions than one in the Netherlands (Fig 2.3).
That said, we don't want to locate our servers too far away from our users; it takes energy to transmit data through the telecoms networks, and the further the data travels, the more energy is consumed. Just like food miles, we can think of the distance from the data center to the website's core user base as "megabyte miles," and we want that distance to be as small as possible.
Using the distance itself as a benchmark, we can use website analytics to identify the country, state, or even city where our core user group is located and measure the distance from that location to the data center used by our hosting company. This will be a somewhat fuzzy metric, as we don't know the precise center of mass of our users or the exact location of a data center, but we can at least get a rough idea.
For example, if a website is hosted in London but the primary user base is on the West Coast of the USA, we could look up the distance from London to San Francisco: about 5,300 miles. That's a long way! Hosting it somewhere in North America, ideally on the West Coast, would significantly reduce the distance and thus the energy used to transmit the data. In addition, locating our servers closer to our visitors helps reduce latency and delivers a better user experience, so it's a win-win.
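If you want to put a number on those megabyte miles, a great-circle calculation is enough for a rough idea. This is a minimal sketch; the coordinates are illustrative, and analytics will only ever give you an approximate center for your user base:

// Great-circle ("as the crow flies") distance between two points,
// using the haversine formula. Good enough for megabyte miles.
function distanceMiles(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const earthRadiusMiles = 3959;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * earthRadiusMiles * Math.asin(Math.sqrt(a));
}

// London to San Francisco, matching the example above:
console.log(Math.round(distanceMiles(51.5, -0.13, 37.77, -122.42))); // roughly 5,350 miles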
Converting it back to carbon emissions
If we combine carbon intensity with a calculation for energy consumption, we can calculate the carbon emissions of our websites and apps. A tool my team created does this by measuring the data transfer over the wire when loading a web page, calculating the amount of electricity associated, and then converting that into a figure for CO2 (Fig 2.4). It also factors in whether or not the web hosting is powered by renewable energy.
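As a rough sketch of that conversion, continuing the earlier energy estimate; both intensity figures here are assumed placeholders, since real values differ by grid and by year:

// Convert an energy estimate into grams of CO2. Both intensity
// figures are assumed placeholders; real values vary by grid and year.
const GRID_INTENSITY = 442; // gCO2/kWh, an approximate world-average figure
const LOW_CARBON_INTENSITY = 10; // gCO2/kWh, renewable or nuclear power

function gramsCo2PerVisit(energyKwh: number, greenHost: boolean): number {
  return energyKwh * (greenHost ? LOW_CARBON_INTENSITY : GRID_INTENSITY);
}

console.log(gramsCo2PerVisit(0.0016, false)); // roughly 0.7 g CO2 per visit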
If you want to take it to the next level and tailor the data more accurately to the unique aspects of your project, the Energy and Emissions Worksheet accompanying this book shows you how.
With the ability to calculate carbon emissions for our projects, we could take a page weight budget one step further and set carbon budgets as well. CO2 is not a metric commonly used in web projects; we're more familiar with kilobytes and megabytes, and can fairly easily look at design options and files to assess how big they are. Translating that into carbon adds a layer of abstraction that isn't as intuitive, but carbon budgets do focus our minds on the primary thing we're trying to reduce, and support the core objective of sustainable web design: reducing carbon emissions.
Browser Energy
Data transfer might be the simplest and most complete analog for energy consumption in our digital projects, but by giving us one number to represent the energy used in the data center, the telecoms networks, and the end user's devices, it can't offer insights into the efficiency of any specific part of the system.
One part of the system we can look at in more detail is the energy used by end users' devices. As front-end web technologies become more advanced, the computational load is increasingly moving from the data center to users' devices, whether they be phones, tablets, laptops, desktops, or even smart TVs. Modern web browsers allow us to implement more complex styling and animation on the fly using CSS and JavaScript. Furthermore, JavaScript libraries such as Angular and React allow us to create applications where the "thinking" work is done partly or entirely in the browser.
All of these advances are exciting and open up new possibilities for what the web can do to serve society and create positive experiences. However, more computation in the user's web browser means more energy used by their devices. This has implications not just environmentally but also for user experience and inclusivity. Applications that put a heavy processing load on the user's device can inadvertently exclude users with older, slower devices and cause batteries on phones and laptops to drain faster. Furthermore, if we build web applications that require up-to-date, powerful devices, people end up throwing away old devices much more frequently. This isn't just bad for the environment; it also puts a disproportionate financial burden on the poorest in society.
Partly because the tools are limited, and partly because there are so many different models of devices, it's difficult to measure website energy consumption on end users' devices. One tool we do currently have is the Energy Impact monitor inside the developer console of the Safari browser (Fig 2.5).
You know when you load a website and your computer's cooling fans start spinning so frantically you think it might actually take off? That's essentially what this tool is measuring.
It shows us the percentage of CPU used and the duration of CPU usage when loading the web page, and uses these figures to generate an energy impact rating. It doesn't give us precise data for the amount of electricity used, but the information it does provide can be used to benchmark how efficiently your websites use energy and to set targets for improvement.
We've been having conversations for thousands of years. Whether to convey information, conduct transactions, or simply to check in on one another, people have yammered away, chattering and gesticulating, through spoken conversation for countless generations. Only in the last few millennia have we begun to commit our conversations to writing, and only in the last few decades have we begun to outsource them to the computer, a machine that shows much more affinity for written correspondence than for the slangy vagaries of spoken language.
Computers struggle with speech because, between spoken and written language, speech is the more primordial. To have successful conversations with us, machines must grapple with the messiness of human speech: the disfluencies and pauses, the gestures and body language, and the variations in word choice and spoken dialect that can stymie even the most carefully crafted human-computer interaction. In the human-to-human scenario, spoken language also has the privilege of face-to-face contact, where we can readily interpret nonverbal social cues.
In contrast, written language immediately concretizes as we commit it to record and retains usages long after they become obsolete in spoken communication (the salutation "To whom it may concern," for example), generating its own fossil record of outdated terms and phrases. Because it tends to be more consistent, polished, and formal, written text is fundamentally much easier for machines to parse and understand.
Spoken language has no such luxury. Besides the nonverbal cues that decorate conversations with emphasis and emotional context, there are also verbal cues and vocal behaviors that modulate conversation in nuanced ways: how something is said, not what. Whether rapid-fire, low-pitched, or high-decibel, whether sarcastic, stilted, or sighing, our spoken language conveys much more than the written word could ever muster. So when it comes to voice interfaces (the machines we conduct spoken conversations with), we face exciting challenges as designers and content strategists.
Voice Interactions
We interact with voice interfaces for a variety of reasons, but according to Michael McTear, Zoraida Callejas, and David Griol in The Conversational Interface, those motivations by and large mirror the reasons we initiate conversations with other people, too (http://bkaprt.com/vcu36/01-01). Generally, we start up a conversation because:
- we need something done (such as a transaction),
- we want to know something (information of some sort), or
- we are social beings and want someone to talk to (conversation for conversation's sake).
These three categories, which I call transactional, informational, and prosocial, also characterize essentially every voice interaction: a single conversation from beginning to end that realizes some outcome for the user, starting with the voice interface's first greeting and ending with the user exiting the interface. Note here that a conversation in our human sense (a chat between people that leads to some result and lasts an arbitrary length of time) could encompass multiple transactional, informational, and prosocial voice interactions in succession. In other words, a voice interaction is a conversation, but a conversation is not necessarily a single voice interaction.
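If it helps to keep the distinction straight while working on design artifacts, here's one hypothetical way to model it in code; the type names are mine, not part of any platform's API:

// A hypothetical model of the three interaction genres; the names
// are illustrative only, not from any voice platform's API.
type InteractionGenre = 'transactional' | 'informational' | 'prosocial';

interface VoiceInteraction {
  genre: InteractionGenre;
  outcome: string; // what the user walks away with
}

// One human-style conversation can span several voice interactions:
const conversation: VoiceInteraction[] = [
  { genre: 'prosocial', outcome: 'pleasantries exchanged' },
  { genre: 'informational', outcome: 'learned the menu has halal options' },
  { genre: 'transactional', outcome: 'ordered a pizza' },
];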
Purely prosocial conversations are more gimmicky than captivating in most voice interfaces, because machines don't yet have the capacity to really want to know how we're doing and to do the sort of glad-handing humans crave. There's also ongoing debate as to whether users actually prefer the sort of organic human conversation that begins with a prosocial voice interaction and shifts seamlessly into other types. In fact, in Voice User Interface Design, Michael Cohen, James Giangola, and Jennifer Balogh recommend sticking to users' expectations by mimicking how they interact with other voice interfaces rather than trying too hard to be human, potentially alienating them in the process (http://bkaprt.com/vcu36/01-01).
That leaves two genres of conversations we can have with one another that a voice interface can easily have with us, too: a transactional voice interaction realizing some outcome ("buy iced tea") and an informational voice interaction teaching us something new ("discuss a musical").
Transactional voice interactions
Unless you're tapping buttons on a food delivery app, you're generally having a conversation (and therefore a voice interaction) when you order a Hawaiian pizza with extra pineapple. Even when we walk up to the counter and place an order, the conversation quickly pivots from an initial smattering of neighborly small talk to the real mission at hand: ordering a pizza (generously topped with pineapple, as it should be).
Alison: Hey, how's it going?
Burhan: Hi, welcome to Crust Deluxe! It's cold out there. How can I help you?
Alison: Can I get a Hawaiian pizza with extra pineapple?
Burhan: Sure, what size?
Alison: Large.
Burhan: Anything else?
Alison: No thanks, that's it.
Burhan: Something to drink?
Alison: I'll have a bottle of Coke.
Burhan: You got it. That'll be $13.55 and about fifteen minutes.
Each progressive disclosure in this transactional conversation reveals more and more of the desired outcome of the transaction: a service rendered or a product delivered. Transactional conversations have certain key traits: they're direct, to the point, and economical. They quickly dispense with pleasantries.
Informational voice interactions
Meanwhile, some conversations are primarily about obtaining information. Though Alison might visit Crust Deluxe with the sole purpose of placing an order, she might not actually want to walk out with a pizza at all. She might be just as interested in whether they serve halal or kosher dishes, gluten-free options, or something else. Here, though we again have a prosocial mini-conversation at the beginning to establish politeness, we're after much more.
Alison: Hey, how's it going?
Burhan: Hi, welcome to Crust Deluxe! It's cold out there. How can I help you?
Alison: Can I ask a few questions?
Burhan: Of course! Go right ahead.
Alison: Do you have any halal options on the menu?
Burhan: Absolutely! We can make any pie halal by request. We also have lots of vegetarian, ovo-lacto, and vegan options. Are you thinking about any other dietary restrictions?
Alison: What about gluten-free pizzas?
Burhan: We can definitely do a gluten-free crust for you, no problem, for both our deep-dish and thin-crust pizzas. Anything else I can answer for you?
Alison: That's it for now. Good to know. Thanks!
Burhan: Anytime, come back soon!
This is a very different dialogue. Here, the goal is to get a certain set of facts. Informational conversations are investigative quests for the truth: research expeditions to gather data, news, or facts. Voice interactions that are informational might be more long-winded than transactional conversations by necessity. Responses tend to be lengthier, more informative, and carefully communicated so the customer understands the key takeaways.
Voice Interfaces
At their core, voice interfaces employ speech to support users in reaching their goals. But simply because an interface has a voice component doesn't mean that every user interaction with it is mediated through voice. Because multimodal voice interfaces can lean on visual components like screens as crutches, we're most concerned in this book with pure voice interfaces, which depend entirely on spoken conversation, lack any visual component whatsoever, and are therefore much more nuanced and challenging to tackle.
Though voice interfaces have long been integral to the imagined future of humanity in science fiction, only recently have those lofty visions become fully realized in genuine voice interfaces.
Interactive voice response (IVR) systems
Though written conversational interfaces have been fixtures of computing for many decades, voice interfaces first emerged in the early 1990s with text-to-speech (TTS) dictation programs that recited written text aloud, as well as speech-enabled in-car systems that gave directions to a user-provided address. With the advent of interactive voice response (IVR) systems, intended as an alternative to overburdened customer service representatives, we became acquainted with the first true voice interfaces that engaged in authentic conversation.
IVR systems allowed organizations to reduce their reliance on call centers but soon became notorious for their clunkiness. Commonplace in the corporate world, these systems were primarily designed as metaphorical switchboards to guide customers to a real phone agent ("Say 'Reservations' to book a flight or check an itinerary"); chances are you'll enter a conversation with one when you call an airline or hotel conglomerate. Despite their functional issues and users' frustration with their inability to speak to an actual human right away, IVR systems proliferated in the early 1990s across a variety of industries (http://bkaprt.com/vcu36/01-02, PDF).
While IVR systems are great for highly repetitive, monotonous conversations that generally don't veer from a single format, they have a reputation for less scintillating conversation than we're used to in real life (or even in science fiction).
Screen readers
Parallel to the evolution of IVR systems was the invention of the screen reader, a tool that transcribes visual content into synthesized speech. For Blind or visually impaired website users, it's the predominant method of interacting with text, multimedia, or form elements. Screen readers represent perhaps the closest equivalent we have today to an out-of-the-box implementation of content delivered through voice.
Among the first screen readers known by that moniker was the Screen Reader for the BBC Micro and NEC Portable, developed by the Research Centre for the Education of the Visually Handicapped (RCEVH) at the University of Birmingham in 1986 (http://bkaprt.com/vcu36/01-03). That same year, Jim Thatcher created the first IBM Screen Reader for text-based computers, later recreated for computers with graphical user interfaces (GUIs) (http://bkaprt.com/vcu36/01-04).
With the rapid growth of the web in the 1990s, the demand for accessible tools for websites exploded. Thanks to the introduction of semantic HTML and especially ARIA roles beginning in 2008, screen readers started facilitating speedy interactions with web pages that ostensibly allow disabled users to traverse the page as an aural and temporal space rather than a visual and physical one. In other words, screen readers for the web "provide mechanisms that translate visual design constructs—proximity, proportion, etc.—into useful information," writes Aaron Gustafson in A List Apart. "At least they do when documents are authored thoughtfully" (http://bkaprt.com/vcu36/01-05).
Though deeply instructive for voice interface designers, there's one significant problem with screen readers: they're difficult to use and unremittingly verbose. The visual structures of websites and web navigation don't translate well to screen readers, sometimes resulting in unwieldy pronouncements that name every manipulable HTML element and announce every formatting change. For many screen reader users, working with web-based interfaces exacts a cognitive toll.
In Wired, accessibility advocate and voice engineer Chris Maury considers why the screen reader experience is ill-suited to users relying on voice:
From the beginning, I hated the way that Screen Readers work. Why are they designed the way they are? It makes no sense to present information visually and then, and only then, translate that into audio. All of the time and energy that goes into creating the perfect user experience for an app is wasted, or even worse, adversely impacting the experience for blind users. (http://bkaprt.com/vcu36/01-06)
In many cases, well-designed voice interfaces can speed users to their destination better than long-winded screen reader monologues. After all, visual interface users have the benefit of darting around the viewport freely to find information, ignoring areas irrelevant to them. Blind users, meanwhile, are obligated to listen to every utterance synthesized into speech and therefore prize brevity and efficiency. Disabled users who have long had no choice but to employ clunky screen readers may find that voice interfaces, particularly more modern voice assistants, offer a more streamlined experience.
Voice assistants
When we think of voice assistants (the subset of voice interfaces now commonplace in living rooms, smart homes, and offices), many of us immediately picture HAL from 2001: A Space Odyssey or hear Majel Barrett's voice as the omniscient computer in Star Trek. Voice assistants are akin to personal concierges that can answer questions, schedule appointments, conduct searches, and perform other common day-to-day tasks. And they're rapidly gaining more attention from accessibility advocates for their assistive potential.
Before the earliest IVR systems found success in the enterprise, Apple published a demonstration video in 1987 depicting the Knowledge Navigator, a voice assistant that could transcribe spoken words and recognize human speech to a great degree of accuracy. Then, in 2001, Tim Berners-Lee and others formulated their vision for a Semantic Web "agent" that would perform typical errands like "checking calendars, making appointments, and finding locations" (http://bkaprt.com/vcu36/01-07, behind paywall). It wasn't until 2011 that Apple's Siri finally entered the picture, making voice assistants a tangible reality for consumers.
Among the plethora of voice assistants available today, some are considerably more programmable and customizable than others (Fig 1.1). At one extreme, everything except vendor-provided features is locked down; for example, at the time of their release, the core functionality of Apple's Siri and Microsoft's Cortana couldn't be extended beyond their existing capabilities. Even today, it isn't possible to program Siri to perform arbitrary functions, because there's no means by which developers can interact with Siri at a low level, apart from predefined categories of tasks like sending messages, hailing rideshares, making restaurant reservations, and certain others.
At the opposite end of the spectrum, voice assistants like Amazon Alexa and Google Home offer a core foundation on which developers can build custom voice interfaces. For this reason, programmable voice assistants that lend themselves to customization and extensibility are becoming increasingly popular for developers who feel stifled by the limitations of Siri and Cortana. Amazon offers the Alexa Skills Kit, a developer framework for building custom voice interfaces for Amazon Alexa, while Google Home offers the ability to program arbitrary Google Assistant skills. Today, users can choose from among thousands of custom-built skills within both the Amazon Alexa and Google Assistant ecosystems.
As corporations like Amazon, Apple, Microsoft, and Google continue to stake their territory, they're also selling and open-sourcing an unprecedented array of tools and frameworks for designers and developers that aim to make building voice interfaces as easy as possible, even without code.
Often by necessity, voice assistants like Amazon Alexa tend to be monochannel: they're tightly coupled to a device and can't be accessed on a computer or smartphone instead. By contrast, many development platforms like Google's Dialogflow have introduced omnichannel capabilities so users can build a single conversational interface that then manifests as a voice interface, textual chatbot, and IVR system upon deployment. I don't prescribe any specific implementation approaches in this design-focused book, but in Chapter 4 we'll get into some of the implications these variables might have on the way you build out your design artifacts.
Voice Content
Simply put, voice content is content delivered through voice. To preserve what makes human conversation so compelling in the first place, voice content needs to be free-flowing and organic, contextless and concise: everything written content isn't.
Our world is replete with voice content in various forms: screen readers reciting website content, voice assistants rattling off a weather forecast, and automated phone hotline responses governed by IVR systems. In this book, we're most concerned with content delivered auditorily, not as an option but as a necessity.
For many of us, our first foray into informational voice interfaces will be to deliver content to users. There's only one problem: any content we already have isn't in any way ready for this new habitat. So how do we make the content trapped on our websites more conversational? And how do we write new copy that lends itself to voice interactions?
Lately, we've begun slicing and dicing our content in unprecedented ways. Websites are, in many respects, colossal vaults of what I call macrocontent: lengthy prose that can extend for infinitely scrollable miles in a browser window, like microfilm viewers of newspaper archives. Back in 2002, well before the present-day ubiquity of voice assistants, technologist Anil Dash defined microcontent as permalinked pieces of content that stay legible regardless of environment, such as email or text messages:
A day's weather forcast [sic], the arrival and departure times for an airplane flight, an abstract from a long publication, or a single instant message can all be examples of microcontent. (http://bkaprt.com/vcu36/01-08)
I'd update Dash's definition of microcontent to include all examples of bite-sized content that go well beyond written communiqués. After all, today we encounter microcontent in interfaces where a small snippet of copy is displayed alone, unmoored from the browser, like a textbot confirmation of a restaurant reservation. Microcontent offers the best opportunity to gauge how your content can be stretched to the very edges of its capabilities, informing delivery channels both established and novel.
As microcontent, voice content is unique because it's an example of how content is experienced in time rather than in space. We can glance at a digital sign underground for an instant and know when the next train is arriving, but voice interfaces hold our attention captive for periods of time that we can't easily escape or skip, something screen reader users are all too familiar with.
Because microcontent is fundamentally made up of isolated blobs with no relation to the channels where they'll eventually end up, we need to ensure that our microcontent truly performs well as voice content, and that means focusing on the two most important traits of robust voice content: voice content legibility and voice content discoverability.
Fundamentally, the legibility and discoverability of our voice content both have to do with how voice content manifests in perceived time and space.
I'm not sure when I first heard this quote, but it's something that has stayed with me over the years. How do you create services for situations you can't imagine? Or design products that work on devices yet to be invented?
Flash, Photoshop, and responsive design
When I first started designing websites, my go-to software was Photoshop. I created a 960px canvas and set about creating a layout that I would later drop content in. The development phase was about attaining pixel-perfect accuracy using fixed widths, fixed heights, and absolute positioning.
Ethan Marcotte's talk at An Event Apart and subsequent article "Responsive Web Design" in A List Apart in 2010 changed all this. I was sold on responsive design as soon as I heard about it, but I was also terrified. The pixel-perfect designs full of magic numbers that I had previously prided myself on producing were no longer good enough.
The fear wasn't helped by my first experience with responsive design. My first project was to take an existing fixed-width website and make it responsive. What I learned the hard way was that you can't just add responsiveness at the end of a project. To create fluid layouts, you need to plan throughout the design phase.
A new way to design
Designing responsive or fluid sites has always been about removing limitations, producing content that can be viewed on any device. It relies on the use of percentage-based layouts, which I initially achieved with native CSS and utility classes:
.column-span-6 {
width: 49%;
float: left;
margin-right: 0.5%;
margin-left: 0.5%;
}
.column-span-4 {
width: 32%;
float: left;
margin-right: 0.5%;
margin-left: 0.5%;
}
.column-span-3 {
width: 24%;
float: left;
margin-right: 0.5%;
margin-left: 0.5%;
}
Then I did the same with Sass, so I could take advantage of @include to reuse repeated blocks of code and move back to more semantic markup:
.logo {
@include colSpan(6);
}
.search {
@include colSpan(3);
}
.social-share {
@include colSpan(3);
}
Media queries
The second ingredient for responsive design is media queries. Without them, content would shrink to fit the available space regardless of whether that content remained readable. (The exact opposite problem occurred with the introduction of a mobile-first approach.)
Media queries prevented this by allowing us to add breakpoints where the design could adapt. Like most people, I started out with three breakpoints: one for desktop, one for tablets, and one for mobile. Over the years, I added more and more for phablets, wide screens, and so on.
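As a reminder of what that looked like in practice, here's a sketch; the breakpoint values are illustrative, since every project had its own:

/* Illustrative breakpoints; every project had its own values */
.sidebar { width: 25%; float: left; }

@media (max-width: 1024px) { /* tablet */
  .sidebar { width: 40%; }
}

@media (max-width: 600px) { /* mobile */
  .sidebar { width: 100%; float: none; }
}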
For years, I happily worked this way and improved both my design and front-end skills in the process. The only problem I encountered was making changes to content, since with our Sass grid system in place, there was no way for the site owners to add content without amending the markup, something a small business owner might struggle with. This is because each row in the grid was defined using a div as a container. Adding content meant creating new row markup, which requires a level of HTML knowledge.
Row markup was a staple of early responsive design, present in all the widely used frameworks like Bootstrap and Skeleton.
<section class="row">
<div class="column-span-4">1 of 7</div>
<div class="column-span-4">2 of 7</div>
<div class="column-span-4">3 of 7</div>
</section>
<section class="row">
<div class="column-span-4">4 of 7</div>
<div class="column-span-4">5 of 7</div>
<div class="column-span-4">6 of 7</div>
</section>
<section class="row">
<div class="column-span-4">7 of 7</div>
</section>
Another problem arose as I moved from a design agency building websites for small- to medium-sized businesses, to larger in-house teams where I worked across a suite of related sites. In those roles I started to work much more with reusable components.
Our reliance on media queries resulted in components that were tied to common viewport sizes. If the goal of component libraries is reuse, then this is a real problem, because you can only use these components if the devices you're designing for correspond to the viewport sizes used in the pattern library, in the process not really hitting that "devices that don't yet exist" goal.
Then thereâs the problem of space. Media queries allow components to adapt based on the viewport size, but what if I put a component into a sidebar, like in the figure below?
Container queries: our savior or a false dawn?
Container queries have long been touted as an improvement upon media queries, but at the time of writing are unsupported in most browsers. There are JavaScript workarounds, but they can create dependency and compatibility issues. The basic theory underlying container queries is that elements should change based on the size of their parent container and not the viewport width, as seen in the following illustrations.
One of the biggest arguments in favor of container queries is that they help us create components or design patterns that are truly reusable because they can be picked up and placed anywhere in a layout. This is an important step in moving toward a form of component-based design that works at any size on any device.
In other words, responsive components to replace responsive layouts.
Container queries will help us move from designing pages that respond to the browser or device size to designing components that can be placed in a sidebar or in the main content, and respond accordingly.
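For illustration, here's a sketch using the proposed syntax from the draft specification, which may well change before it lands in browsers:

/* Proposed draft syntax; subject to change */
.sidebar, .main-content {
  container-type: inline-size; /* make these elements queryable containers */
}

@container (min-width: 400px) {
  .card {
    display: flex; /* side-by-side layout when the container, not the viewport, is wide enough */
  }
}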
My concern is that we are still using layout to determine when a design needs to adapt. This approach will always be restrictive, as we will still need pre-defined breakpoints. For this reason, my main question with container queries is, How would we decide when to change the CSS used by a component?
A component library removed from context and real content is probably not the best place for that decision.
As the diagrams below illustrate, we can use container queries to create designs for specific container widths, but what if I want to change the design based on the image size or ratio?
In this example, the dimensions of the container are not what should dictate the design; rather, the image is.
It's hard to say for sure whether container queries will be a success story until we have solid cross-browser support for them. Responsive component libraries would definitely evolve how we design, and would improve the possibilities for reuse and design at scale. But maybe we will always need to adjust these components to suit our content.
CSS is changing
Whilst the container query debate rumbles on, there have been numerous advances in CSS that change the way we think about design. The days of fixed-width elements measured in pixels and floated div elements used to cobble layouts together are long gone, consigned to history along with table layouts. Flexbox and CSS Grid have revolutionized layouts for the web. We can now create elements that wrap onto new rows when they run out of space, not when the device changes.
.wrapper {
display: grid;
grid-template-columns: repeat(auto-fit, 450px);
gap: 10px;
}
The repeat() function paired with auto-fit or auto-fill allows us to specify how much space each column should use while leaving it up to the browser to decide when to spill the columns onto a new line. Similar things can be achieved with Flexbox, as elements can wrap over multiple rows and "flex" to fill available space.
.wrapper {
display: flex;
flex-wrap: wrap;
justify-content: space-between;
}
.child {
flex-basis: 32%;
margin-bottom: 20px;
}
The biggest benefit of all this is that you don't need to wrap elements in container rows. Without rows, content isn't tied to page markup in quite the same way, allowing for removals or additions of content without additional development.
This is a big step forward when it comes to creating designs that allow for evolving content, but the real game changer for flexible designs is CSS Subgrid.
Remember the days of crafting perfectly aligned interfaces, only for the customer to add an unbelievably long header almost as soon as they're given CMS access, like the illustration below?
Subgrid allows elements to respond to adjustments in their own content and in the content of sibling elements, helping us create designs more resilient to change.
.wrapper {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
grid-template-rows: auto 1fr auto;
gap: 10px;
}
.sub-grid {
display: grid;
grid-row: span 3;
grid-template-rows: subgrid; /* sets rows to parent grid */
}
CSS Grid allows us to separate layout and content, thereby enabling flexible designs. Meanwhile, Subgrid allows us to create designs that can adapt to suit morphing content. At the time of writing, Subgrid is only supported in Firefox, but the above code can be implemented behind an @supports feature query.
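For example, something like this keeps the plain grid as a fallback and only opts in to subgrid where the browser understands it:

.sub-grid {
  display: grid;
  grid-row: span 3;
}

@supports (grid-template-rows: subgrid) {
  .sub-grid {
    grid-template-rows: subgrid; /* align to the parent grid where supported */
  }
}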
Intrinsic layouts
I'd be remiss not to mention intrinsic layouts, the term created by Jen Simmons to describe a mixture of new and old CSS features used to create layouts that respond to available space.
Responsive layouts have flexible columns using percentages. Intrinsic layouts, on the other hand, use the fr unit to create flexible columns that won't ever shrink so much that they render the content illegible.
"fr units is a way to say I want you to distribute the extra space in this way, but...don't ever make it smaller than the content that's inside of it." (Jen Simmons, "Designing Intrinsic Layouts")
Intrinsic layouts can also utilize a mixture of fixed and flexible units, allowing the content to dictate the space it takes up.
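For instance, a grid might pin one track to a fixed width, let another flex, and size a third by its content; the values here are only illustrative:

/* An illustrative mix of fixed, flexible, and content-based tracks */
.layout {
  display: grid;
  grid-template-columns: 200px 1fr minmax(min-content, 30ch);
  gap: 20px;
}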
What makes intrinsic design stand out is that it not only creates designs that can withstand future devices but also helps scale design without losing flexibility. Components and patterns can be lifted and reused without the prerequisite of having the same breakpoints or the same amount of content as in the previous implementation.
We can now create designs that adapt to the space they have, the content within them, and the content around them. With an intrinsic approach, we can construct responsive components without depending on container queries.
Another 2010 moment?
This intrinsic approach should in my view be every bit as groundbreaking as responsive web design was ten years ago. For me, it's another "everything changed" moment.
But it doesn't seem to be moving quite as fast; I haven't yet had that same career-changing moment I had with responsive design, despite the widely shared and brilliant talk that brought it to my attention.
One reason for that could be that I now work in a large organization, which is quite different from the design agency role I had in 2010. In my agency days, every new project was a clean slate, a chance to try something new. Nowadays, projects use existing tools and frameworks and are often improvements to existing websites with an existing codebase.
Another could be that I feel more prepared for change now. In 2010 I was new to design in general; the shift was frightening and required a lot of learning. Also, an intrinsic approach isn't exactly all-new; it's about using existing skills and existing CSS knowledge in a different way.
You can't framework your way out of a content problem
Another reason for the slightly slower adoption of intrinsic design could be the lack of quick-fix framework solutions available to kick-start the change.
Responsive grid systems were all over the place ten years ago. With a framework like Bootstrap or Skeleton, you had a responsive design template at your fingertips.
Intrinsic design and frameworks do not go hand in hand quite so well because the benefit of having a selection of units is a hindrance when it comes to creating layout templates. The beauty of intrinsic design is combining different units and experimenting with techniques to get the best for your content.
And then there are design tools. We probably all, at some point in our careers, used Photoshop templates for desktop, tablet, and mobile devices to drop designs in and show how the site would look at all three stages.
How do you do that now, with each component responding to content and layouts flexing as and when they need to? This type of design must happen in the browser, which personally I'm a big fan of.
The debate about "whether designers should code" is another that has rumbled on for years. When designing a digital product, we should, at the very least, design for a best- and worst-case scenario when it comes to content. To do this in a graphics-based software package is far from ideal. In code, we can add longer sentences, more radio buttons, and extra tabs, and watch in real time as the design adapts. Does it still work? Is the design too reliant on the current content?
Personally, I look forward to the day intrinsic design is the standard for design, when a design component can be truly flexible and adapt to both its space and content with no reliance on device or container dimensions.
Content first
Content is not constant. After all, to design for the unknown or unexpected, we need to account for content changes—like our earlier Subgrid card example, which allowed the cards to respond to adjustments to their own content and the content of sibling elements.
Thankfully, there’s more to CSS than layout, and plenty of properties and values can help us put content first. Subgrid and pseudo-elements like ::first-line and ::first-letter help to separate design from markup so we can create designs that allow for changes.
Instead of old markup hacks like this—
<p>
<span class="first-line">First line of text with different styling</span>...
</p>
—we can target content based on where it appears.
.element::first-line {
font-size: 1.4em;
}
.element::first-letter {
color: red;
}
Much bigger additions to CSS include logical properties, which change the way we construct designs using logical dimensions (start and end) instead of physical ones (left and right)—something CSS Grid also embraces—along with math functions like min(), max(), and clamp().
This flexibility allows for directional changes according to content, a common requirement when we need to present content in multiple languages. In the past, this was often achieved with Sass mixins but was usually limited to switching from left-to-right to right-to-left orientation.
In the Sass version, directional variables need to be set.
$direction: rtl;
$opposite-direction: ltr;
$start-direction: right;
$end-direction: left;
These variables can be used as values—
body {
direction: $direction;
text-align: $start-direction;
}
—or as properties.
margin-#{$end-direction}: 10px;
padding-#{$start-direction}: 10px;
However, now we have native logical properties, removing the reliance on both Sass (or a similar tool) and the pre-planning that necessitated using variables throughout a codebase. These properties also start to break apart the tight coupling between a design and strict physical dimensions, creating more flexibility for changes in language and in direction. Mirroring the left/right Sass example above, the inline logical properties follow the text direction:
margin-inline-end: 10px;
padding-inline-start: 10px;
There are also native start and end values for properties like text-align, which means we can replace text-align: right with text-align: start.
Like the earlier examples, these properties help to build out designs that aren’t constrained to one language; the design will reflect the content’s needs.
Fixed and fluid
We briefly covered the power of combining fixed widths with fluid widths in intrinsic layouts. The min() and max() functions are a similar concept, allowing you to specify a fixed value alongside a flexible alternative.
For min(), this means pairing a fluid value with a fixed maximum: the browser uses whichever of the two is smaller.
.element {
width: min(50%, 300px);
}
The element in the example above will be 50% of its container as long as the element’s width doesn’t exceed 300px.
For max(), it’s the reverse: we pair a flexible value with a fixed minimum, and the browser uses whichever is larger.
.element {
width: max(50%, 300px);
}
Now the element will be 50% of its container as long as the element’s width is at least 300px. This means we can set limits but allow content to react to the available space.
The clamp() function builds on this by adding a third parameter: a preferred value that sits between a fixed minimum and maximum. Now we can allow the element to shrink or grow if it needs to without ever reaching a point where it becomes unusable.
.element {
width: clamp(300px, 50%, 600px);
}
This time, the element’s width will be 50% (the preferred value) of its container but never less than 300px and never more than 600px.
With these techniques, we have a content-first approach to responsive design. We can separate content from markup, meaning the changes users make will not affect the design. We can start to future-proof designs by planning for unexpected changes in language or direction. And we can increase flexibility by setting desired dimensions alongside flexible alternatives, allowing for more or less content to be displayed correctly.
Situation first
Thanks to what we’ve discussed so far, we can cover device flexibility by changing our approach, designing around content and space instead of catering to devices. But what about that last bit of Jeffrey Zeldman’s quote, “...situations you haven’t imagined”?
It’s a very different thing to design for someone seated at a desktop computer as opposed to someone using a mobile phone and moving through a crowded street in glaring sunshine. Situations and environments are hard to plan for or predict because they change as people react to their own unique challenges and tasks.
This is why choice is so important. One size never fits all, so we need to design for multiple scenarios to create equal experiences for all our users.
Thankfully, there is a lot we can do to provide choice.
Responsible design
“There are parts of the world where mobile data is prohibitively expensive, and where there is little or no broadband infrastructure.”
—Chris Ashton, “I Used the Web for a Day on a 50 MB Budget”
One of the biggest assumptions we make is that people interacting with our designs have a good Wi-Fi connection and a widescreen monitor. But in the real world, our users may be commuters traveling on trains or other forms of transport using smaller mobile devices that can experience drops in connectivity. There is nothing more frustrating than a web page that won’t load, but there are ways we can help users use less data or deal with sporadic connectivity.
The srcset attribute allows the browser to decide which image to serve. This means we can create smaller “cropped” images to display on mobile devices, in turn using less bandwidth and less data.
<img
src="image-file.jpg"
srcset="large.jpg 1024w,
medium.jpg 640w,
small.jpg 320w"
alt="Image alt text" />
Preloading (via rel="preload" on a link element) can also help us to think about how and when media is downloaded. It can be used to tell a browser about any critical assets that need to be downloaded with high priority, improving perceived performance and the user experience.
<link rel="stylesheet" href="style.css"> <!--Standard stylesheet markup-->
<link rel="preload" href="style.css" as="style"> <!--Preload stylesheet markup-->
There’s also native lazy loading, which indicates assets that should only be downloaded when they are needed.
<img src="image.png" loading="lazy" alt="…">
With srcset, preload, and lazy loading, we can start to tailor a user’s experience based on the situation they find themselves in. What none of this does, however, is allow the user themselves to decide what they want downloaded, as the decision is usually the browser’s to make.
So how can we put users in control?
The return of media queries
Media queries have always been about much more than device sizes. They allow content to adapt to different situations, with screen size being just one of them.
We’ve long been able to check for media types like print and speech and features such as hover, resolution, and color. These checks allow us to provide options that suit more than one scenario; it’s less about one-size-fits-all and more about serving adaptable content.
As of this writing, the Media Queries Level 5 spec is still under development. It introduces some really exciting queries that in the future will help us design for multiple other unexpected situations.
For example, there’s a light-level feature that allows you to modify styles if a user is in sunlight or darkness. Paired with custom properties, these features allow us to quickly create designs or themes for specific environments.
@media (light-level: normal) {
  :root {
    --background-color: #fff;
    --text-color: #0b0c0c;
  }
}
@media (light-level: dim) {
  :root {
    --background-color: #efd226;
    --text-color: #0b0c0c;
  }
}
Another key feature of the Level 5 spec is personalization. Instead of creating designs that are the same for everyone, users can choose what works for them. This is achieved by using features like prefers-reduced-data, prefers-color-scheme, and prefers-reduced-motion, the latter two of which already enjoy broad browser support. These features tap into preferences set via the operating system or browser so people don’t have to spend time making each site they visit more usable.
Media queries like this go beyond choices made by a browser to grant more control to the user.
Expect the unexpected
In the end, the one thing we should always expect is for things to change. Devices in particular change faster than we can keep up, with foldable screens already on the market.
We can’t design the same way we have for this ever-changing landscape, but we can design for content. By putting content first and allowing that content to adapt to whatever space surrounds it, we can create more robust, flexible designs that increase the longevity of our products.
A lot of the CSS discussed here is about moving away from layouts and putting content at the heart of design. From responsive components to fixed and fluid units, there is so much more we can do to take a more intrinsic approach. Even better, we can test these techniques during the design phase by designing in-browser and watching how our designs adapt in real time.
When it comes to unexpected situations, we need to make sure our products are usable when people need them, whenever and wherever that might be. We can move closer to achieving this by involving users in our design decisions, by creating choice via browsers, and by giving control to our users with user-preference-based media queries.
Good design for the unexpected should allow for change, provide choice, and give control to those we serve: our users themselves.
“Any comment?” is probably one of the worst ways to ask for feedback. It’s vague and open-ended, and it doesn’t provide any indication of what we’re looking for. Getting good feedback starts earlier than we might expect: it starts with the request.
It might seem counterintuitive to start the process of receiving feedback with a question, but that makes sense if we realize that getting feedback can be thought of as a form of design research. In the same way that we wouldn’t do any research without the right questions to get the insights that we need, the best way to ask for feedback is also to craft sharp questions.
Design critique is not a one-shot process. Sure, any good feedback workflow continues until the project is finished, but this is particularly true for design because design work continues iteration after iteration, from a high level to the finest details. Each level needs its own set of questions.
And finally, as with any good research, we need to review what we got back, get to the core of its insights, and take action. Question, iteration, and review. Let’s look at each of those.
The question
Being open to feedback is essential, but we need to be precise about what we’re looking for. Just saying “Any comment?”, “What do you think?”, or “I’d love to get your opinion” at the end of a presentation—whether it’s in person, over video, or through a written post—is likely to get a number of varied opinions or, even worse, get everyone to follow the direction of the first person who speaks up. And then... we get frustrated because vague questions like those can turn a high-level flows review into people instead commenting on the borders of buttons—which might be a hearty topic, but one that makes it hard to redirect the team to the subject that you had wanted to focus on.
But how do we get into this situation? It’s a mix of factors. One is that we don’t usually consider asking to be part of the feedback process. Another is how natural it is to just leave the question implied, expecting the others to be on the same page. Another is that in nonprofessional discussions, there’s often no need to be that precise. In short, we tend to underestimate the importance of the questions, so we don’t work on improving them.
The act of asking good questions guides and focuses the critique. It’s also a form of consent: it makes it clear that you’re open to comments and what kind of comments you’d like to get. It puts people in the right mental state, especially in situations when they weren’t expecting to give feedback.
There isn’t a single best way to ask for feedback. It just needs to be specific, and specificity can take many shapes. A model for design critique that I’ve found particularly useful in my coaching is the one of stage versus depth.
“Stage” refers to each of the steps of the process—in our case, the design process. In progressing from user research to the final design, the kind of feedback evolves. But within a single step, one might still review whether some assumptions are correct and whether there’s been a proper translation of the amassed feedback into updated designs as the project has evolved. A starting point for potential questions could derive from the layers of user experience. What do you want to know: Project objectives? User needs? Functionality? Content? Interaction design? Information architecture? UI design? Navigation design? Visual design? Branding?
Here are a few example questions, precise and to the point, that refer to different layers:
- Functionality: Is automating account creation desirable?
- Interaction design: Take a look through the updated flow and let me know whether you see any steps or error states that I might’ve missed.
- Information architecture: We have two competing bits of information on this page. Is the structure effective in communicating them both?
- UI design: What are your thoughts on the error counter at the top of the page that makes sure that you see the next error, even if the error is out of the viewport?
- Navigation design: From research, we identified these second-level navigation items, but once you’re on the page, the list feels too long and hard to navigate. Are there any suggestions to address this?
- Visual design: Are the sticky notifications in the bottom-right corner visible enough?
The other axis of specificity is about how deep you’d like to go on what’s being presented. For example, we might have introduced a new end-to-end flow, but there was a specific view that you found particularly challenging and you’d like a detailed review of that. This can be especially useful from one iteration to the next where it’s important to highlight the parts that have changed.
There are other things that we can consider when we want to achieve more specific—and more effective—questions.
A simple trick is to remove generic qualifiers from your questions like “good,” “well,” “nice,” “bad,” “okay,” and “cool.” For example, asking, “When the block opens and the buttons appear, is this interaction good?” might look specific, but you can spot the “good” qualifier, and convert it to an even better question: “When the block opens and the buttons appear, is it clear what the next action is?”
Sometimes we actually do want broad feedback. That’s rare, but it can happen. In that sense, you might still make it explicit that you’re looking for a wide range of opinions, whether at a high level or with details. Or maybe just say, “At first glance, what do you think?” so that it’s clear that what you’re asking is open-ended but focused on someone’s impression after their first five seconds of looking at it.
Sometimes the project is particularly expansive, and some areas may have already been explored in detail. In these situations, it might be useful to explicitly say that some parts are already locked in and aren’t open to feedback. It’s not something that I’d recommend in general, but I’ve found it useful for avoiding rabbit holes of the sort that might lead to further refinement but aren’t what’s most important right now.
Asking specific questions can completely change the quality of the feedback that you receive. People with less refined critique skills will now be able to offer more actionable feedback, and even expert designers will welcome the clarity and efficiency that comes from focusing only on what’s needed. It can save a lot of time and frustration.
The iteration
Design iterations are probably the most visible part of the design work, and they provide a natural checkpoint for feedback. Yet a lot of design tools with inline commenting tend to show changes as a single fluid stream in the same file: they make conversations disappear once they’re resolved, update shared UI components automatically, and compel designs to always show the latest version—unless these would-be helpful features are manually turned off. The implied goal of these tools seems to be to arrive at one final copy with all discussions closed, probably because they inherited patterns from how written documents are collaboratively edited. That’s probably not the best way to approach design critiques, though I don’t want to be too prescriptive here: it could work for some teams.
The asynchronous design-critique approach that I find most effective is to create explicit checkpoints for discussion. I’m going to use the term iteration post for this. It refers to a write-up or presentation of the design iteration followed by a discussion thread of some kind. Any platform that can accommodate this structure will work. And when I refer to a “write-up or presentation,” I’m including video recordings or other media too: as long as it’s asynchronous, it works.
Using iteration posts has many advantages:
- It creates a rhythm in the design work so that the designer can review feedback from each iteration and prepare for the next.
- It makes decisions visible for future review, and conversations are likewise always available.
- It creates a record of how the design changed over time.
- Depending on the tool, it might also make it easier to collect feedback and act on it.
These posts of course don’t mean that no other feedback approach should be used, just that iteration posts could be the primary rhythm for a remote design team to use. And other feedback approaches (such as live critique, pair designing, or inline comments) can build from there.
I don’t think there’s a standard format for iteration posts. But there are a few high-level elements that make sense to include as a baseline:
- The goal
- The design
- The list of changes
- The questions
Each project is likely to have a goal, and hopefully it’s something that’s already been summarized in a single sentence somewhere else, such as the client brief, the product manager’s outline, or the project owner’s request. So this is something that I’d repeat in every iteration post—literally copying and pasting it. The idea is to provide context and to repeat what’s essential to make each iteration post complete so that there’s no need to find information spread across multiple posts. If I want to know about the latest design, the latest iteration post will have all that I need.
This copy-and-paste part introduces another relevant concept: alignment comes from repetition. So having posts that repeat information is actually very effective toward making sure that everyone is on the same page.
The design is then the actual series of information-architecture outlines, diagrams, flows, maps, wireframes, screens, visuals, and any other kind of design work that’s been done. In short, it’s any design artifact. For the final stages of work, I prefer the term blueprint to emphasize that I’ll be showing full flows instead of individual screens to make it easier to understand the bigger picture.
It can also be useful to label the artifacts with clear titles because that can make it easier to refer to them. Write the post in a way that helps people understand the work. It’s not too different from organizing a good live presentation.
For an efficient discussion, you should also include a bullet list of the changes from the previous iteration to let people focus on what’s new, which can be especially useful for larger pieces of work where keeping track, iteration after iteration, could become a challenge.
And finally, as noted earlier, it’s essential that you include a list of the questions to drive the design critique in the direction you want. Doing this as a numbered list can also help make it easier to refer to each question by its number.
Not all iterations are the same. Earlier iterations don’t need to be as tightly focused—they can be more exploratory and experimental, maybe even breaking some of the design-language guidelines to see what’s possible. Then later, the iterations start settling on a solution and refining it until the design process reaches its end and the feature ships.
I want to highlight that even if these iteration posts are written and conceived as checkpoints, by no means do they need to be exhaustive. A post might be a draft—just a concept to get a conversation going—or it could be a cumulative list of each feature that was added over the course of each iteration until the full picture is done.
Over time, I also started using specific labels for incremental iterations: i1, i2, i3, and so on. This might look like a minor labeling tip, but it can help in multiple ways:
- Unique—It’s a clear, unique marker. Within each project, one can easily say, “This was discussed in i4,” and everyone knows where they can go to review things.
- Unassuming—It works like versions (such as v1, v2, and v3), but unlike versions, it doesn’t create the impression of something that’s big, exhaustive, and complete. Iterations must be free to be exploratory, incomplete, partial.
- Future proof—It resolves the “final” naming problem that you can run into with versions. No more files named “final final complete no-really-its-done.” Within each project, the largest number always represents the latest iteration.
To mark when a design is complete enough to be worked on, even if some bits might still need attention (and in turn more iterations), the wording release candidate (RC) can be used to describe it: “with i8, we reached RC” or “i12 is an RC.”
The review
What usually happens during a design critique is an open discussion, with a back and forth between people that can be very productive. This approach is particularly effective during live, synchronous feedback. But when we work asynchronously, it’s more effective to use a different approach: we can shift to a user-research mindset. Written feedback from teammates, stakeholders, or others can be treated as if it were the result of user interviews and surveys, and we can analyze it accordingly.
This shift has some major benefits that make asynchronous feedback particularly effective, especially around these friction points:
- It removes the pressure to reply to everyone.
- It reduces the frustration from swoop-by comments.
- It lessens our personal stake.
The first friction point is feeling a pressure to reply to every single comment. Sometimes we write the iteration post, and we get replies from our team. It’s just a few of them, it’s easy, and it doesn’t feel like a problem. But other times, some solutions might require more in-depth discussions, and the number of replies can quickly increase, which can create a tension between trying to be a good team player by replying to everyone and doing the next design iteration. This might be especially true if the person who’s replying is a stakeholder or someone directly involved in the project who we feel that we need to listen to. We need to accept that this pressure is absolutely normal, and it’s human nature to try to accommodate people who we care about. Sometimes replying to all comments can be effective, but if we treat a design critique more like user research, we realize that we don’t have to reply to every comment, and in asynchronous spaces, there are alternatives:
- One is to let the next iteration speak for itself. When the design evolves and we post a follow-up iteration, that’s the reply. You might tag all the people who were involved in the previous discussion, but even that’s a choice, not a requirement.
- Another is to briefly reply to acknowledge each comment, such as “Understood. Thank you,” “Good points—I’ll review,” or “Thanks. I’ll include these in the next iteration.” In some cases, this could also be just a single top-level comment along the lines of “Thanks for all the feedback, everyone—the next iteration is coming soon!”
- Another is to provide a quick summary of the comments before moving on. Depending on your workflow, this can be particularly useful as it can provide a simplified checklist that you can then use for the next iteration.
The second friction point is the swoop-by comment, which is the kind of feedback that comes from someone outside the project or team who might not be aware of the context, restrictions, decisions, or requirements—or of the previous iterations’ discussions. Swoop-by comments often trigger the simple thought “We’ve already discussed this…”, and it can be frustrating to have to repeat the same reply over and over. On their side, one can hope that the commenters might learn something too: to acknowledge that they’re doing this and to be more conscious about outlining where they’re coming from.
Let’s begin by acknowledging again that there’s no need to reply to every comment. If, however, replying to a previously litigated point might be useful, a short reply with a link to the previous discussion for extra details is usually enough. Remember, alignment comes from repetition, so it’s okay to repeat things sometimes!
Swoop-by comments can still be useful for two reasons: they might point out something that still isn’t clear, and they have the potential to stand in for the point of view of a user who’s seeing the design for the first time. Sure, you’ll still be frustrated, but that might at least help in dealing with it.
The third friction point is the personal stake we could have in the design, which could make us feel defensive if the review were to feel more like a discussion. Treating feedback as user research helps us create a healthy distance between the people giving us feedback and our ego (because yes, even if we don’t want to admit it, it’s there). And ultimately, treating everything in aggregated form allows us to better prioritize our work.
Always remember that while you need to listen to stakeholders, project owners, and specific advice, you don’t have to accept every piece of feedback. You have to analyze it and make a decision that you can justify, but sometimes “no” is the right answer.
As the designer leading the project, you’re in charge of that decision. Ultimately, everyone has their specialty, and as the designer, you’re the one who has the most knowledge and the most context to make the right decision. And by listening to the feedback that you’ve received, you’re making sure that it’s also the best and most balanced decision.
Thanks to Brie Anne Demkiw and Mike Shelton for reviewing the first draft of this article.
Feedback, in whichever form it takes, and whatever it may be called, is one of the most effective soft skills that we have at our disposal to collaboratively get our designs to a better place while growing our own skills and perspectives.
Feedback is also one of the most underestimated tools, and often by assuming that we’re already good at it, we settle, forgetting that it’s a skill that can be trained, grown, and improved. Poor feedback can create confusion in projects, bring down morale, and affect trust and team collaboration over the long term. Quality feedback can be a transformative force.
Practicing our skills is surely a good way to improve, but the learning gets even faster when it’s paired with a good foundation that channels and focuses the practice. What are some foundational aspects of giving good feedback? And how can feedback be adjusted for remote and distributed work environments?
On the web, we can identify a long tradition of asynchronous feedback: from the early days of open source, code was shared and discussed on mailing lists. Today, developers engage on pull requests, designers comment in their favorite design tools, project managers and scrum masters exchange ideas on tickets, and so on.
Design critique is often the name used for a type of feedback that’s provided to make our work better, collaboratively. So it shares a lot of the principles with feedback in general, but it also has some differences.
The content
The foundation of every good critique is the feedback’s content, so that’s where we need to start. There are many models that you can use to shape your content. The one that I personally like best—because it’s clear and actionable—is Lara Hogan’s feedback equation: observation plus impact plus question (or request).
While this equation is generally used to give feedback to people, it also fits really well in a design critique because it ultimately answers some of the core questions that we work on: What? Where? Why? How? Imagine that you’re giving some feedback about some design work that spans multiple screens, like an onboarding flow: there are some pages shown, a flow blueprint, and an outline of the decisions made. You spot something that could be improved. If you keep the three elements of the equation in mind, you’ll have a mental model that can help you be more precise and effective.
Here is a comment that could be given as a part of some feedback, and it might look reasonable at first glance: it seems to superficially fulfill the elements in the equation. But does it?
Not sure about the buttons’ styles and hierarchy—it feels off. Can you change them?
Observation for design feedback doesn’t just mean pointing out which part of the interface your feedback refers to; it also means offering a perspective that’s as specific as possible. Are you providing the user’s perspective? Your expert perspective? A business perspective? The project manager’s perspective? A first-time user’s perspective?
When I see these two buttons, I expect one to go forward and one to go back.
Impact is about the why. Just pointing out a UI element might sometimes be enough if the issue is obvious, but more often than not, you should add an explanation of what you’re pointing out.
When I see these two buttons, I expect one to go forward and one to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow.
The question approach is meant to provide open guidance by eliciting the critical thinking in the designer receiving the feedback. Notably, in Lara’s equation she provides a second approach: request, which instead provides guidance toward a specific solution. While that’s a viable option for feedback in general, for design critiques, in my experience, defaulting to the question approach usually reaches the best solutions because designers are generally more comfortable in being given an open space to explore.
The difference between the two can be exemplified with, for the question approach:
When I see these two buttons, I expect one to go forward and one to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow. Would it make sense to unify them?
Or, for the request approach:
When I see these two buttons, I expect one to go forward and one to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow. Let’s make sure that all screens have the same pair of forward and back buttons.
At this point, in some situations, it might be useful to add an extra why: why you consider the given suggestion to be better.
When I see these two buttons, I expect one to go forward and one to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow. Let’s make sure that all screens have the same two forward and back buttons so that users don’t get confused.
Choosing the question approach or the request approach can also at times be a matter of personal preference. A while ago, I was putting a lot of effort into improving my feedback: I did rounds of anonymous feedback, and I reviewed feedback with other people. After a few rounds of this work and a year later, I got a positive response: my feedback came across as effective and grounded. Until I changed teams. To my shock, my next round of feedback from one specific person wasn’t that great. The reason is that I had previously tried not to be prescriptive in my advice—because the people who I was previously working with preferred the open-ended question format over the request style of suggestions. But now in this other team, there was one person who instead preferred specific guidance. So I adapted my feedback for them to include requests.
One comment that I heard come up a few times is that this kind of feedback is quite long, and it doesn’t seem very efficient. No… but also yes. Let’s explore both sides.
No, this style of feedback is actually efficient because the length here is a byproduct of clarity, and spending time giving this kind of feedback can provide exactly enough information for a good fix. Also, if we zoom out, it can reduce future back-and-forth conversations and misunderstandings, improving the overall efficiency and effectiveness of collaboration beyond the single comment. Imagine that in the example above the feedback were instead just, “Let’s make sure that all screens have the same two forward and back buttons.” The designer receiving this feedback wouldn’t have much to go by, so they might just apply the change. In later iterations, the interface might change or they might introduce new features—and maybe that change might not make sense anymore. Without the why, the designer might imagine that the change is about consistency… but what if it wasn’t? So there could now be an underlying concern that changing the buttons would be perceived as a regression.
Yes, this style of feedback is not always efficient because the points in some comments don’t always need to be exhaustive, sometimes because certain changes may be obvious (“The font used doesn’t follow our guidelines”) and sometimes because the team may have a lot of internal knowledge such that some of the whys may be implied.
So the equation above isn’t meant to suggest a strict template for feedback but a mnemonic to reflect on and improve the practice. Even after years of active work on my critiques, I still from time to time go back to this formula and reflect on whether what I just wrote is effective.
The tone
Well-grounded content is the foundation of feedback, but that’s not really enough. The soft skills of the person who’s providing the critique can multiply the likelihood that the feedback will be well received and understood. Tone alone can make the difference between content that’s rejected or welcomed, and it’s been demonstrated that only positive feedback creates sustained change in people.
Since our goal is to be understood and to have a positive working environment, tone is essential to work on. Over the years, I’ve tried to summarize the required soft skills in a formula that mirrors the one for content: the receptivity equation of timing, attitude, and form.
Respectful feedback comes across as grounded, solid, and constructive. It’s the kind of feedback that, whether it’s positive or negative, is perceived as useful and fair.
Timing refers to when the feedback happens. To-the-point feedback doesn’t have much hope of being well received if it’s given at the wrong time. Questioning the entire high-level information architecture of a new feature when it’s about to ship might still be relevant if that questioning highlights a major blocker that nobody saw, but it’s way more likely that those concerns will have to wait for a later rework. So in general, attune your feedback to the stage of the project. Early iteration? Late iteration? Polishing work in progress? These all have different needs. The right timing will make it more likely that your feedback will be well received.
Attitude is the equivalent of intent, and in the context of person-to-person feedback, it can be referred to as radical candor. That means checking before we write to see whether what we have in mind will truly help the person and make the project better overall. This might be a hard reflection at times because maybe we don’t want to admit that we don’t really appreciate that person. Hopefully that’s not the case, but that can happen, and that’s okay. Acknowledging and owning that can help you make up for it: how would I write if I really cared about them? How can I avoid being passive-aggressive? How can I be more constructive?
Form is relevant especially in diverse and cross-cultural work environments because having great content, perfect timing, and the right attitude might not come across if the way that we write creates misunderstandings. There might be many reasons for this: sometimes certain words might trigger specific reactions; sometimes nonnative speakers might not understand all the nuances of some sentences; sometimes our brains might just be different and we might perceive the world differently—neurodiversity must be taken into consideration. Whatever the reason, it’s important to review not just what we write but how.
A few years back, I was asking for some feedback on how I give feedback. I received some good advice but also a comment that surprised me. They pointed out that when I wrote “Oh, […],” I made them feel stupid. That wasn’t my intent! I felt really bad, and I realized that I had been providing feedback to them for months, and every time I might have made them feel stupid. I was horrified… but also thankful. I made a quick fix: I added “oh” to my list of replaced words (your choice between macOS’s text replacement, aText, TextExpander, or others) so that when I typed “oh,” it was instantly deleted.
Something to highlight—because it’s quite frequent, especially in teams that have a strong group spirit—is that people tend to beat around the bush. It’s important to remember here that a positive attitude doesn’t mean going light on the feedback; it just means that even when you provide hard, difficult, or challenging feedback, you do so in a way that’s respectful and constructive. The nicest thing that you can do for someone is to help them grow.
We have a great advantage in giving feedback in written form: it can be reviewed by another person who isn’t directly involved, which can help to reduce or remove any bias that might be there. I’ve found that the best, most insightful moments for me have happened when I’ve shared a comment and asked someone who I highly trusted, “How does this sound?”, “How can I do it better?”, and even “How would you have written it?”—and I’ve learned a lot by seeing the two versions side by side.
The format
Asynchronous feedback also has a major inherent advantage: we can take more time to refine what we’ve written to make sure that it fulfills two main goals: the clarity of the communication and the actionability of the suggestions.
Let’s imagine that someone shared a design iteration for a project. You are reviewing it and leaving a comment. There are many ways to do this, and of course context matters, but let’s try to think about some elements that may be useful to consider.
In terms of clarity, start by grounding the critique that you’re about to give by providing context. Specifically, this means describing where you’re coming from: do you have a deep knowledge of the project, or is this the first time that you’re seeing it? Are you coming from a high-level perspective, or are you figuring out the details? Are there regressions? Which user’s perspective are you taking when providing your feedback? Is the design iteration at a point where it would be okay to ship this, or are there major things that need to be addressed first?
Providing context is helpful even if you’re sharing feedback within a team that already has some information on the project. And context is absolutely essential when giving cross-team feedback. If I were to review a design that might be indirectly related to my work, and if I had no knowledge about how the project arrived at that point, I would say so, highlighting my take as external.
We often focus on the negatives, trying to outline all the things that could be done better. That’s of course important, but it’s just as important—if not more—to focus on the positives, especially if you saw progress from the previous iteration. This might seem superfluous, but it’s important to keep in mind that design is a discipline where there are hundreds of possible solutions for every problem. So pointing out that the design solution that was chosen is good and explaining why it’s good has two major benefits: it confirms that the approach taken was solid, and it helps to ground your negative feedback. In the longer term, sharing positive feedback can help prevent regressions on things that are going well because those things will have been highlighted as important. As a bonus, positive feedback can also help reduce impostor syndrome.
There’s one powerful approach that combines both context and a focus on the positives: frame how the design is better than the status quo (compared to a previous iteration, competitors, or benchmarks) and why, and then on that foundation, you can add what could be improved. This is powerful because there’s a big difference between a critique of a design that’s already in good shape and a critique of a design that isn’t quite there yet.
Another way that you can improve your feedback is to depersonalize it: the comments should always be about the work, never about the person who made it. It’s “This button isn’t well aligned” versus “You haven’t aligned this button well.” This is very easy to change in your writing by reviewing it just before sending.
In terms of actionability, one of the best approaches to help the designer who’s reading through your feedback is to split it into bullet points or paragraphs, which are easier to review and analyze one by one. For longer pieces of feedback, you might also consider splitting it into sections or even across multiple comments. Of course, adding screenshots or markers signifying the specific part of the interface you’re referring to can also be especially useful.
One approach that I’ve personally used effectively in some contexts is to enhance the bullet points with four markers using emojis. A red square 🟥 means that it’s something that I consider blocking; a yellow diamond 🔶 is something that I could be convinced otherwise about, but that seems to me like it should be changed; and a green circle 🟢 is a detailed, positive confirmation. I also use a blue spiral 🌀 for either something that I’m not sure about, an exploration, an open alternative, or just a note. But I’d use this approach only on teams where I’ve already established a good level of trust because if it happens that I have to deliver a lot of red squares, the impact could be quite demoralizing, and I’d reframe how I’d communicate that a bit.
Let’s see how this would work by reusing the example that we used earlier as the first bullet point in this list:
- 🔶 Navigation—When I see these two buttons, I expect one to go forward and one to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow. Let’s make sure that all screens have the same two forward and back buttons so that users don’t get confused.
- 🟢 Overall—I think the page is solid, and this is good enough to be our release candidate for a version 1.0.
- 🟢 Metrics—Good improvement in the buttons on the metrics area; the improved contrast and new focus style make them more accessible.
- 🟥 Button Style—Using the green accent in this context creates the impression that it’s a positive action because green is usually perceived as a confirmation color. Do we need to explore a different color?
- 🔶 Tiles—Given the number of items on the page, and the overall page hierarchy, it seems to me that the tiles shouldn’t be using the Subtitle 1 style but the Subtitle 2 style. This will keep the visual hierarchy more consistent.
- 🌀 Background—Using a light texture works well, but I wonder whether it adds too much noise in this kind of page. What is the thinking in using that?
What about giving feedback directly in Figma or another design tool that allows in-place feedback? In general, I find these difficult to use because they hide discussions and they’re harder to track, but in the right context, they can be very effective. Just make sure that each of the comments is separate so that it’s easier to match each discussion to a single task, similar to the idea of splitting mentioned above.
One final note: say the obvious. Sometimes we might feel that something is obviously good or obviously wrong, and so we don’t say it. Or sometimes we might have a doubt that we don’t express because the question might sound stupid. Say it—that’s okay. You might have to reword it a little bit to make the reader feel more comfortable, but don’t hold back. Good feedback is transparent, even when it may be obvious.
There’s another advantage of asynchronous feedback: written feedback automatically tracks decisions. Especially in large projects, “Why did we do this?” could be a question that pops up from time to time, and there’s nothing better than open, transparent discussions that can be reviewed at any time. For this reason, I recommend using software that saves these discussions without hiding them once they’re resolved.
Content, tone, and format. Each one of these subjects provides a useful model, but working to improve eight areas—observation, impact, question, timing, attitude, form, clarity, and actionability—is a lot of work to put in all at once. One effective approach is to take them one by one: first identify the area where you’re weakest (either from your perspective or from feedback from others) and start there. Then the second, then the third, and so on. At first you’ll have to put in extra time for every piece of feedback that you give, but after a while, it’ll become second nature, and your impact on the work will multiply.
Thanks to Brie Anne Demkiw and Mike Shelton for reviewing the first draft of this article.
Are you like me, reading about people fading away as they burn out, and feeling unable to relate? Do you feel like your feelings are invisible to the world because you’re experiencing burnout differently? When burnout starts to push down on us, our core comes through more. Beautiful, peaceful souls get quieter and fade into that distant and distracted burnout we’ve all read about. But some of us, those with fires always burning on the edges of our core, get hotter. In my heart I am fire. When I face burnout I double down, triple down, burning hotter and hotter to try to best the challenge. I don’t fade—I am engulfed in a zealous burnout.
So what on earth is a zealous burnout?
Imagine a woman determined to do it all. She has two amazing children whom she, along with her husband who is also working remotely, is homeschooling during a pandemic. She has a demanding client load at work—all of whom she loves. She gets up early to get some movement in (or often to catch up on work), does dinner prep as the kids are eating breakfast, and gets to work while positioning herself near “fourth grade” to listen in as she juggles clients, tasks, and budgets. Sound like a lot? Even with a supportive team both at home and at work, it is.
Sounds like this woman has too much on her plate and needs self-care. But no, she doesn’t have time for that. In fact, she starts to feel like she’s dropping balls. Not accomplishing enough. There’s not enough of her to be here and there; she is trying to divide her mind in two all the time, all day, every day. She starts to doubt herself. And as those feelings creep in more and more, her internal narrative becomes more and more critical.
Suddenly she KNOWS what she needs to do! She should DO MORE.
This is a hard and dangerous cycle. Know why? Because once she doesn’t finish that new goal, that narrative will get worse. Suddenly she’s failing. She isn’t doing enough. SHE is not enough. She might fail, she might fail her family...so she’ll find more she should do. She doesn’t sleep as much or move as much, all in an effort to do more. She’s caught in this cycle of trying to prove herself to herself, never reaching any goal. Never feeling “enough.”
So, yeah, that’s what zealous burnout looks like for me. It doesn’t happen overnight in some grand gesture but instead slowly builds over weeks and months. My burning-out process looks like speeding up, not losing focus. I speed up and up and up...and then I just stop.
I am the one who could
It’s funny the things that shape us. Through the lens of childhood, I viewed the fears, struggles, and sacrifices of someone who had to make it all work without having enough. I was lucky that my mother was so resourceful and my father supportive; I never went without and even got an extra here or there.
Growing up, I did not feel shame when my mother paid with food stamps; in fact, I’d have likely taken on any debate on the topic, verbally eviscerating anyone who dared to criticize the disabled woman trying to make sure all our needs were met with so little. As a child, I watched the way the fear of not making those ends meet impacted people I love. As the non-disabled person in my home, I would take on many of the physical tasks because I was “the one who could” make our lives a little easier. I learned early to associate fears or uncertainty with putting more of myself into it—I am the one who can. I learned early that when something frightens me, I can double down and work harder to make it better. I can own the challenge. When people have seen this in me as an adult, I’ve been told I seem fearless, but make no mistake, I’m not. If I seem fearless, it’s because this behavior was forged from other people’s fears.
And here I am, more than 30 years later still feeling the urge to mindlessly push myself forward when faced with overwhelming tasks ahead of me, assuming that I am the one who can and therefore should. I find myself driven to prove that I can make things happen if I work longer hours, take on more responsibility, and do more.
I do not see people who struggle financially as failures, because I have seen how strong that tide can be—it pulls you along the way. I truly get that I have been privileged to be able to avoid many of the challenges that were present in my youth. That said, I am still “the one who can” who feels she should, so if I were faced with not having enough to make ends meet for my own family, I would see myself as having failed. Though I am supported and educated, most of this is due to good fortune. I will, however, allow myself the arrogance of saying I have been careful with my choices to encourage that luck. My identity stems from the idea that I am “the one who can,” so I feel obligated to do the most. I can choose to stop, and with some quite literal cold water splashed in my face, I’ve made that choice before. But choosing to stop is not my go-to; I move forward, driven by a fear that is so much a part of me that I barely notice it’s there until I’m feeling utterly worn away.
So why all the history? You see, burnout is a fickle thing. I have heard and read a lot about burnout over the years. Burnout is real. Especially now, with COVID, many of us are balancing more than we ever have before—all at once! It’s hard, and the procrastinating, the avoidance, the shutting down impacts so many amazing professionals. There are important articles that relate to what I imagine must be the majority of people out there, but not me. That’s not what my burnout looks like.
The dangerous invisibility of zealous burnout
A lot of work environments see the extra hours, extra effort, and overall focused commitment as an asset (and sometimes that’s all it is). They see someone trying to rise to challenges, not someone stuck in their fear. Many well-meaning organizations have safeguards in place to protect their teams from burnout. But in cases like this, those alarms are not always tripped, and then when the inevitable stop comes, some members of the organization feel surprised and disappointed. And sometimes maybe even betrayed.
Parents—more so mothers, statistically speaking—are praised as being so on top of it all when they can work, be involved in the after-school activities, practice self-care in the form of diet and exercise, and still meet friends for coffee or wine. During COVID many of us have binged countless streaming episodes showing how it’s so hard for the female protagonist, but she is strong and funny and can do it. It’s a “very special episode” when she breaks down, cries in the bathroom, woefully admits she needs help, and just stops for a bit. Truth is, countless people are hiding their tears or are doom-scrolling to escape. We know that the media is a lie to amuse us, but often the perception that it’s what we should strive for has penetrated much of society.
Women and burnout
I love men. And though I don’t love every man (heads up, I don’t love every woman or nonbinary person either), I think there is a beautiful spectrum of individuals who represent that particular binary gender.
That said, women are still more often at risk of burnout than their male counterparts, especially in these COVID-stressed times. Mothers in the workplace feel the pressure to do all the “mom” things while giving 110%. Mothers not in the workplace feel they need to do more to “justify” their lack of traditional employment. Women who are not mothers often feel the need to do even more because they don’t have that extra pressure at home. It’s vicious and systemic and so much a part of our culture that we’re often not even aware of the enormity of the pressures we put on ourselves and each other.
And there are prices beyond happiness too. Harvard Health Publishing released a study a decade ago that “uncovered strong links between women’s job stress and cardiovascular disease.” The CDC noted, “Heart disease is the leading cause of death for women in the United States, killing 299,578 women in 2017—or about 1 in every 5 female deaths.”
This relationship between work stress and health, from what I have read, is more dangerous for women than it is for their non-female counterparts.
But what if your burnout isn’t like that either?
That might not be you either. After all, each of us is so different, and how we respond to stressors is too. It’s part of what makes us human. Don’t stress over what burnout looks like; just learn to recognize it in yourself. Here are a few questions I sometimes ask friends if I am concerned about them.
Are you happy? This simple question should be the first thing you ask yourself. Chances are, even if you’re burning out doing all the things you love, as you approach burnout you’ll just stop taking as much joy from it all.
Do you feel empowered to say no? I have observed in myself and others that when someone is burning out, they no longer feel they can say no to things. Even those who don’t “speed up” feel pressure to say yes so as not to disappoint the people around them.
What are three things you’ve done for yourself? Another observation is that we all tend to stop doing things for ourselves. Anything from skipping showers and eating poorly to avoiding talking to friends. These can be red flags.
Are you making excuses? Many of us try to disregard feelings of burnout. Over and over I have heard, “It’s just crunch time,” “As soon as I do this one thing, it will all be better,” and “Well, I should be able to handle this, so I’ll figure it out.” And it might really be crunch time, a single goal, and/or a skill set you need to learn. That happens—life happens. BUT if this doesn’t stop, be honest with yourself. If you’ve worked more 50-hour weeks since January than not, maybe it’s not crunch time—maybe it’s a bad situation that you’re burning out from.
Do you have a plan to stop feeling this way? If something is truly temporary and you do need to just push through, then it has an exit route with a defined end.
Take the time to listen to yourself as you would a friend. Be honest, allow yourself to be uncomfortable, and break the thought cycles that prevent you from healing.
So now what?
What I just described is a different path to burnout, but it’s still burnout. There are well-established approaches to working through burnout:
- Get enough sleep.
- Eat healthy.
- Work out.
- Get outside.
- Take a break.
- Overall, practice self-care.
Those are hard for me because they feel like more tasks. If Iâm in the burnout cycle, doing any of the above for me feels like a waste. The narrative is that if Iâm already failing, why would I take care of myself when Iâm dropping all those other balls? People need me, right?
If you're deep in the cycle, your inner voice might be pretty awful by now. If you need to, tell yourself you need to take care of the person your people depend on. If your roles are pushing you toward burnout, use them to help make healing easier by justifying the time spent working on you.
To remind myself of the flight attendant's instruction to put on your own oxygen mask first, I have come up with a few things that I do when I start feeling myself going into a zealous burnout.
Cook an elaborate meal for someone!
OK, I am a "food-focused" individual, so cooking for someone is always my go-to. There are countless tales in my home of someone walking into the kitchen and turning right around and walking out when they noticed I was "chopping angrily." But it's more than that, and you should give it a try. Seriously. It's the perfect go-to if you don't feel worthy of taking time for yourself: do it for someone else. Most of us work in a digital world, so cooking can fill all of your senses and force you to be in the moment with all the ways you perceive the world. It can break you out of your head and help you gain a better perspective. In my house, I've been known to pick a place on the map and cook food that comes from wherever that is (thank you, Pinterest). I love cooking Indian food, as the smells are warm, the bread needs just enough kneading to keep my hands busy, and the process takes real attention for me because it's not what I was brought up making. And in the end, we all win!
Vent like a foul-mouthed fool
Be careful with this one!
I have been making an effort to practice more gratitude over the past few years, and I recognize the true benefits of that. That said, sometimes you just gotta let it all out, even the ugly. Hell, I'm a big fan of not sugarcoating our lives, and that sometimes means that to get past the big pile of poop, you're gonna wanna complain about it a bit.
When that is what's needed, turn to a trusted friend and allow yourself some pure verbal diarrhea, saying all the things that are bothering you. You need to trust this friend not to judge, to see your pain, and, most importantly, to tell you to remove your cranium from your own rectal cavity. Seriously, it's about getting a reality check here! One of the things I admire the most about my husband (though often after the fact) is his ability to break things down to their simplest. "We're spending our lives together, of course you're going to disappoint me from time to time, so get over it" has been his way of speaking his dedication, love, and acceptance of me, and I could not be more grateful. It also, of course, has meant that I needed to remove my head from that rectal cavity. So, again, usually those moments are appreciated in hindsight.
Pick up a book!
There are many books out there that aren't so much self-help as they are people just like you sharing their stories and how they've come to find greater balance. Maybe you'll find something that speaks to you. Titles that have stood out to me include:
- Thrive by Arianna Huffington
- Tools of Titans by Tim Ferriss
- Girl, Stop Apologizing by Rachel Hollis
- Dare to Lead by Brené Brown
Or, another tactic I love to employ is to read or listen to a book that has NOTHING to do with my work-life balance. I've read the following books and found they helped balance me out because my mind was pondering their interesting topics instead of running in circles:
- The Drunken Botanist by Amy Stewart
- Superlife by Darin Olien
- A Brief History of Everyone Who Ever Lived by Adam Rutherford
- Gaia's Garden by Toby Hemenway
If you're not into reading, pick a topic on YouTube or choose a podcast to subscribe to. I've watched countless videos on permaculture and gardening, and on how to raise chickens and ducks. For the record, I do not have a particularly large food garden, nor do I own livestock of any kind...yet. I just find the topic interesting, and it has nothing to do with any aspect of my life that needs anything from me.
Forgive yourself
You are never going to be perfect; hell, it would be boring if you were. It's OK to be broken and flawed. It's human to be tired and sad and worried. It's OK to not do it all. It's scary to be imperfect, but you cannot be brave if nothing is scary.
This last one is the most important: allow yourself permission to NOT do it all. You never promised to be everything to everyone at all times. We are more powerful than the fears that drive us.
This is hard. It is hard for me. It's what's driven me to write this: that it's OK to stop. It's OK to end an unhealthy habit, even one that might benefit those around you. You can still be successful in life.
I recently read that we are all writing our eulogy in how we live. Knowing that your professional accomplishments won't be mentioned in that speech, what will yours say? What do you want it to say?
Look, I get that none of these ideas will "fix it," and that's not their purpose. None of us is in control of our surroundings, only how we respond to them. These suggestions are to help stop the spiral effect so that you are empowered to address the underlying issues and choose your response. They are things that work for me most of the time. Maybe they'll work for you.
Does this sound familiar?
If this sounds familiar, it's not just you. Don't let your negative self-talk tell you that you "even burn out wrong." It's not wrong. Even if rooted in fear like my own drivers, I believe that this need to do more comes from a place of love, determination, motivation, and other wonderful attributes that make you the amazing person you are. We're going to be OK, ya know. The lives that unfold before us might never look like that story in our heads, that idea of "perfect" or "done" we're looking for, but that's OK. Really, when we stop and look around, usually the only eyes that judge us are in the mirror.
Do you remember that Winnie the Pooh sketch that had Pooh eat so much at Rabbit's house that his buttocks couldn't fit through the door? Well, I already associate a lot with Rabbit, so it came as no surprise when he abruptly declared that this was unacceptable. But do you recall what happened next? He put a shelf across poor Pooh's ankles and decorations on his back, and made the best of the big butt in his kitchen.
At the end of the day we are resourceful and know that we are able to push ourselves if we need to, even when we are tired to our core or have a big butt of fluff 'n' stuff in our room. None of us has to be afraid, as we can manage any obstacle put in front of us. And maybe that means we will need to redefine success to allow space for being uncomfortably human, but that doesn't really sound so bad either.
So, wherever you are right now, please breathe. Do what you need to do to get out of your head. Forgive and take care.
This Person Does Not Exist is a website that generates human faces with a machine learning algorithm. It takes real portraits and recombines them into fake human faces. We recently scrolled past a LinkedIn post stating that this website could be useful "if you are developing a persona and looking for a photo."
We agree: the computer-generated faces could be a great match for personas, but not for the reason you might think. Ironically, the website highlights the core issue of this very common design method: the person(a) does not exist. Like the pictures, personas are artificially made. Information is taken out of its natural context and recombined into an isolated snapshot that's detached from reality.
But strangely enough, designers use personas to inspire their design for the real world.
Personas: A step back
Most designers have created, used, or come across personas at least once in their career. In their article "Personas - A Simple Introduction," the Interaction Design Foundation defines personas as "fictional characters, which you create based upon your research in order to represent the different user types that might use your service, product, site, or brand." In their most complete expression, personas typically consist of a name, profile picture, quotes, demographics, goals, needs, behavior in relation to a certain service/product, emotions, and motivations (for example, see Creative Companion's Persona Core Poster). The purpose of personas, as stated by design agency Designit, is "to make the research relatable, [and] easy to communicate, digest, reference, and apply to product and service development."
The decontextualization of personas
Personas are popular because they make "dry" research data more relatable, more human. However, this method constrains the researcher's data analysis in such a way that the investigated users are removed from their unique contexts. As a result, personas don't portray the key factors that would help you understand users' decision-making processes or relate to their thoughts and behavior; they lack stories. You understand what the persona did, but you don't have the background to understand why. You end up with representations of users that are actually less human.
This "decontextualization" happens in four ways, which we'll explain below.
Personas assume people are static
Although many companies still try to box in their employees and customers with outdated personality tests (looking at you, Myers-Briggs), here's a painfully obvious truth: people are not a fixed set of features. You act, think, and feel differently according to the situations you experience. You appear different to different people; you might act friendly to some, rough to others. And you change your mind all the time about decisions you've made.
Modern psychologists agree that while people generally behave according to certain patterns, it's actually a combination of background and environment that determines how people act and make decisions. The context (the environment, the influence of other people, your mood, the entire history that led up to a situation) determines the kind of person you are in each specific moment.
In their attempt to simplify reality, personas do not take this variability into account; they present a user as a fixed set of features. Like personality tests, personas snatch people away from real life. Even worse, people are reduced to a label and categorized as "that kind of person" with no means to exercise their innate flexibility. This practice reinforces stereotypes, lowers diversity, and doesn't reflect reality.
Personas focus on individuals, not the environment
In the real world, you're designing for a context, not for an individual. Each person lives in a family, a community, an ecosystem, where there are environmental, political, and social factors you need to consider. A design is never meant for a single user. Rather, you design for one or more particular contexts in which many people might use that product. Personas, however, show the user alone rather than describing how the user relates to the environment.
Would you always make the same decision over and over again? Maybe you're a committed vegan but still decide to buy some meat when your relatives are coming over. Because they depend on different situations and variables, your decisions (and behavior, opinions, and statements) are not absolute but highly contextual. The persona that "represents" you wouldn't take this dependency into account, because it doesn't specify the premises of your decisions. It doesn't provide a justification for why you act the way you do. Personas enact the well-known bias called the fundamental attribution error: explaining others' behavior too much by their personality and too little by the situation.
As mentioned by the Interaction Design Foundation, personas are usually placed in a scenario that's a "specific context with a problem they want to or have to solve." Does that mean context actually is considered? Unfortunately, what often happens is that you take a fictional character and, based on that fiction, determine how this character might deal with a certain situation. This is made worse by the fact that you haven't even fully investigated and understood the current context of the people your persona seeks to represent; so how could you possibly understand how they would act in new situations?
Personas are meaningless averages
As mentioned in Shlomo Goltz's introductory article on Smashing Magazine, "a persona is depicted as a specific person but is not a real individual; rather, it is synthesized from observations of many people." A well-known critique of this aspect of personas is that the average person does not exist, as per the famous example of the US Air Force designing planes around the averages of 140 physical dimensions measured across its pilots, only to find that not a single pilot actually fit within that average seat.
The same limitation applies to the mental aspects of people. Have you ever heard a famous person say, "They took what I said out of context! They used my words, but I didn't mean it like that"? The celebrity's statement was reported literally, but the reporter failed to explain the context around the statement and didn't describe the non-verbal expressions. As a result, the intended meaning was lost. You do the same when you create personas: you collect somebody's statement (or goal, or need, or emotion) whose meaning can only be understood within its own specific context, yet you report it as an isolated finding.
But personas go a step further, extracting a decontextualized finding and joining it with another decontextualized finding from somebody else. The resulting set of findings often does not make sense: it's unclear, or even contradictory, because it lacks the underlying reasons for why and how each finding arose. It lacks meaning. And the persona doesn't give you the full background of the person(s) to uncover this meaning: you would need to dive into the raw data behind each single persona item to find it. What, then, is the usefulness of the persona?
The relatability of personas is deceiving
To a certain extent, designers realize that a persona is a lifeless average. To overcome this, designers invent and add "relatable" details to personas to make them resemble real individuals. Nothing captures the absurdity of this better than a sentence by the Interaction Design Foundation: "Add a few fictional personal details to make the persona a realistic character." In other words, you add non-realism in an attempt to create more realism. You deliberately obscure the fact that "John Doe" is an abstract representation of research findings; but wouldn't it be much more responsible to emphasize that John is only an abstraction? If something is artificial, let's present it as such.
It's the finishing touch of a persona's decontextualization: after having assumed that people's personalities are fixed, dismissed the importance of their environment, and hidden meaning by joining isolated, non-generalizable findings, designers invent new context to create (their own) meaning. In doing so, as with everything they create, they introduce a host of biases. As phrased by Designit, as designers we can "contextualize [the persona] based on our reality and experience. We create connections that are familiar to us." This practice reinforces stereotypes, doesn't reflect real-world diversity, and gets further away from people's actual reality with every detail added.
To do good design research, we should report the reality "as-is" and make it relatable for our audience, so everyone can use their own empathy and develop their own interpretation and emotional response.
Dynamic Selves: The alternative to personas
If we shouldn't use personas, what should we do instead?
Designit has proposed using Mindsets instead of personas. Each Mindset is a "spectrum of attitudes and emotional responses that different people have within the same context or life experience." It challenges designers not to get fixated on a single user's way of being. Unfortunately, while a step in the right direction, this proposal doesn't take into account that people are part of an environment that determines their personality, their behavior, and, yes, their mindset. Mindsets, too, are not absolute but change with the situation. The question remains: what determines a certain Mindset?
Another alternative comes from Margaret P., author of the article "Kill Your Personas," who has argued for replacing personas with persona spectrums that consist of a range of user abilities. For example, a visual impairment could be permanent (blindness), temporary (recovery from eye surgery), or situational (screen glare). Persona spectrums are highly useful for more inclusive and context-based design, as they're based on the understanding that the context is the pattern, not the personality. Their limitation, however, is that they take a very functional view of users and miss the relatability of a real person drawn from within that spectrum.
In developing an alternative to personas, we aim to transform the standard design process to be context-based. Contexts are generalizable and have patterns that we can identify, just like we tried to do previously with people. So how do we identify these patterns? How do we ensure truly context-based design?
Understand real individuals in multiple contexts
Nothing is more relatable and inspiring than reality. Therefore, we have to understand real individuals in their multi-faceted contexts, and use this understanding to fuel our design. We refer to this approach as Dynamic Selves.
Let's take a look at what the approach looks like, based on an example of how one of us applied it in a recent project that researched the habits of Italians around energy consumption. We drafted a design research plan aimed at investigating people's attitudes toward energy consumption and sustainable behavior, with a focus on smart thermostats.
1. Choose the right sample
When we argue against personas, we're often challenged with quotes such as "Where are you going to find a single person that encapsulates all the information from one of these advanced personas[?]" The answer is simple: you don't have to. You don't need information about many people for your insights to be deep and meaningful.
In qualitative research, validity derives not from quantity but from accurate sampling. You select the people who best represent the "population" you're designing for. If this sample is chosen well, and you have understood the sampled people in sufficient depth, you're able to infer how the rest of the population thinks and behaves. There's no need to study seven Susans and five Yuriys; one of each will do.
Similarly, you don't need to understand Susan in fifteen different contexts. Once you've seen her in a couple of diverse situations, you've understood the pattern of Susan's responses to different contexts. Not Susan as an atomic being, but Susan in relation to her surrounding environment: how she might act, feel, and think in different situations.
Given that each person is representative of a part of the total population you're researching, it becomes clear why each should be represented as an individual: each is already an abstraction of a larger group of individuals in similar contexts. You don't want abstractions of abstractions! These selected people need to be understood and shown in their full expression, remaining in their microcosmos; if you want to identify patterns, you can focus on identifying patterns in contexts.
Yet the question remains: how do you select a representative sample? First of all, consider the target audience of the product or service you are designing: it might be useful to look at the company's goals and strategy, the current customer base, and/or a possible future target audience.
In our example project, we were designing an application for people who own a smart thermostat. In the future, everyone could have a smart thermostat in their house. Right now, though, only early adopters own one. To build a significant sample, we needed to understand why these early adopters had become early adopters. We therefore recruited by asking people why they had a smart thermostat and how they got it. There were those who had chosen to buy one, those who had been influenced by others to buy one, and those who had found one in their house. So we selected representatives of these three situations, from different age groups and geographical locations, with an equal balance of tech-savvy and non-tech-savvy participants.
2. Conduct your research
After having chosen and recruited your sample, conduct your research using ethnographic methodologies. This will make your qualitative data rich with anecdotes and examples. In our example project, given COVID-19 restrictions, we converted an in-house ethnographic research effort into remote family interviews, conducted from home and accompanied by diary studies.
To gain an in-depth understanding of attitudes and decision-making trade-offs, the research focus was not limited to the interviewee alone but deliberately included the whole family. Each interviewee would tell a story that would then become much more lively and precise with the corrections or additional details coming from wives, husbands, children, or sometimes even pets. We also focused on the relationships with other meaningful people (such as colleagues or distant family) and all the behaviors that resulted from those relationships. This wide research focus allowed us to shape a vivid mental image of dynamic situations with multiple actors.
It's essential that the scope of the research remains broad enough to include all possible actors. Therefore, it normally works best to define broad research areas with macro questions. Interviews are best set up in a semi-structured way, where follow-up questions dive into topics mentioned spontaneously by the interviewee. This open-minded "plan to be surprised" will yield the most insightful findings. When we asked one of our participants how his family regulated the house temperature, he replied, "My wife has not installed the thermostat's app; she uses WhatsApp instead. If she wants to turn on the heater and she is not home, she will text me. I am her thermostat."
3. Analysis: Create the Dynamic Selves
During the research analysis, you start representing each individual with multiple Dynamic Selves, each "Self" representing one of the contexts you have investigated. The core of each Dynamic Self is a quote, supported by a photo and a few relevant demographics that illustrate the wider context. The research findings themselves will show which demographics are relevant. In our case, as our research focused on families and their lifestyles to understand their needs for thermal regulation, the important demographics were family type, number and nature of houses owned, economic status, and technological maturity. (We also included each individual's name and age, but they're optional; we included them to ease the stakeholders' transition from personas and to be able to connect multiple actions and contexts to the same person.)
To capture exact quotes, interviews need to be video-recorded, and notes need to be taken verbatim as much as possible. This is essential to the truthfulness of each participant's several Selves. In the case of real-life ethnographic research, photos of the context and anonymized actors are essential to building realistic Selves. Ideally, these photos should come directly from field research, but an evocative and representative image will work too, as long as it's realistic and depicts meaningful actions that you associate with your participants. For example, one of our interviewees told us about his mountain home, where he used to spend every weekend with his family. Therefore, we portrayed him hiking with his little daughter.
At the end of the research analysis, we displayed all of the Selves' "cards" on a single canvas, categorized by activities. Each card displayed a situation, represented by a quote and a unique photo. All participants had multiple cards about themselves.
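If you keep your Self cards digitally rather than on a physical canvas, a small data model can make the pattern-finding in the next step easier. Here is a minimal sketch in TypeScript; the card shape, field names, file path, and the anonymized participant ID are our own illustrative assumptions, not a formal part of the Dynamic Selves method.

```typescript
// A minimal, illustrative model of a Dynamic Self "card".
// Field names and values are assumptions for this sketch, not a formal spec.
interface SelfCard {
  participant: string; // anonymized ID, e.g., "P03" (real names are optional in the method)
  activity: string;    // the context/activity this Self belongs to
  quote: string;       // the verbatim quote at the core of the Self
  photo: string;       // path to a realistic, context-rich image (hypothetical path below)
  demographics: Record<string, string>; // only the demographics the findings showed to be relevant
}

// Group cards by activity, mirroring the single canvas categorized by activities,
// so patterns across participants become visible.
function groupByActivity(cards: SelfCard[]): Map<string, SelfCard[]> {
  const groups = new Map<string, SelfCard[]>();
  for (const card of cards) {
    const group = groups.get(card.activity) ?? [];
    group.push(card);
    groups.set(card.activity, group);
  }
  return groups;
}

// One of the project's quotes, expressed as a card.
const cards: SelfCard[] = [
  {
    participant: "P03",
    activity: "regulating temperature remotely",
    quote: "If she wants to turn on the heater and she is not home, she will text me. I am her thermostat.",
    photo: "photos/p03-kitchen.jpg",
    demographics: { familyType: "couple", housesOwned: "1", techMaturity: "low" },
  },
];

for (const [activity, group] of groupByActivity(cards)) {
  console.log(`${activity}: ${group.length} card(s)`);
}
```

Note that the grouping is by activity, not by person: as argued above, the patterns worth finding live in contexts, not in personalities.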
4. Identify design opportunities
Once you have collected all main quotes from the interview transcripts and diaries, and laid them all down as Self cards, you will see patterns emerge. These patterns will highlight the opportunity areas for new product creation, new functionalities, and new servicesâfor new design.
In our example project, there was a particularly interesting insight around the concept of humidity. We realized that people don't know what humidity is and why it is important to monitor it for health: an environment that's too dry or too wet can cause respiratory problems or worsen existing ones. This highlighted a big opportunity for our client to educate users on this concept and become a health advisor.
Benefits of Dynamic Selves
When you use the Dynamic Selves approach in your research, you start to notice unique social relations, the peculiar situations real people face and the actions that follow, and the changing environments that surround them. In our thermostat project, we came to know one of the participants, Davide, as a boyfriend, dog lover, and tech enthusiast.
Davide is an individual we might once have reduced to a persona called "tech enthusiast." But we can have tech enthusiasts who have families or are single, who are rich or poor. Their motivations and priorities when deciding to purchase a new thermostat can be opposites depending on these different frames.
Once you have understood Davide in multiple situations, and for each situation have understood in sufficient depth the underlying reasons for his behavior, you're able to generalize how he would act in another situation. You can use your understanding of him to infer what he would think and do in the contexts (or scenarios) that you design for.
The Dynamic Selves approach aims to do away with the conflicted dual purpose of personas (to summarize and empathize at the same time) by separating your research summary from the people you're seeking to empathize with. This is important because our empathy for people is affected by scale: the bigger the group, the harder it is to feel empathy for others. We feel the strongest empathy for individuals we can personally relate to.
If you take a real person as inspiration for your design, you no longer need to create an artificial character. No more inventing details to make the character more "realistic," no more unnecessary additional bias. It's simply how this person is in real life. In fact, in our experience, personas quickly become nothing more than a name in our priority guides and prototype screens, as we all know that these characters don't really exist.
Another powerful benefit of the Dynamic Selves approach is that it raises the stakes of your work: if you mess up your design, someone real, a person you and the team know and have met, is going to feel the consequences. It might stop you from taking shortcuts and will remind you to conduct daily checks on your designs.
And finally, real people in their specific contexts are a better basis for anecdotal storytelling, and therefore more effective in persuasion. Documentation of real research is essential in achieving this result. It adds weight and urgency to your design arguments: "When I met Alessandra, the conditions of her workplace struck me. Noise, bad ergonomics, lack of light, you name it. If we go for this functionality, I'm afraid we're going to add complexity to her life."
Conclusion
Designit mentioned in their article on Mindsets that "design thinking tools offer a shortcut to deal with reality's complexities, but this process of simplification can sometimes flatten out people's lives into a few general characteristics." Unfortunately, personas have been culprits in a crime of oversimplification. They are unsuited to represent the complex nature of our users' decision-making processes and don't account for the fact that humans are immersed in contexts.
Design needs simplification but not generalization. You have to look at the research elements that stand out: the sentences that captured your attention, the images that struck you, the sounds that linger. Portray those; use them to describe the person in their multiple contexts. Both insights and people come with a context; they cannot be cut from that context without losing meaning.
It's high time for design to move away from fiction and embrace reality, in its messy, surprising, and unquantifiable beauty, as our guide and inspiration.