
News from the world of web design and SEO

Here is syndicated news from several of the leading sites in the field of web design and SEO (search engine optimization).

A List Apart: The Full Feed
Articles for people who make web sites.
  • User Research Is Storytelling

    Ever since I was a boy, I’ve been fascinated with movies. I loved the characters and the excitement—but most of all the stories. I wanted to be an actor. And I believed that I’d get to do the things that Indiana Jones did and go on exciting adventures. I even dreamed up ideas for movies that my friends and I could make and star in. But they never went any further. I did, however, end up working in user experience (UX). Now, I realize that there’s an element of theater to UX—I hadn’t really considered it before, but user research is storytelling. And to get the most out of user research, you need to tell a good story where you bring stakeholders—the product team and decision makers—along and get them interested in learning more.

    Think of your favorite movie. More than likely it follows a three-act structure that’s commonly seen in storytelling: the setup, the conflict, and the resolution. The first act shows what exists today, and it helps you get to know the characters and the challenges and problems that they face. Act two introduces the conflict, where the action is. Here, problems grow or get worse. And the third and final act is the resolution. This is where the issues are resolved and the characters learn and change. I believe that this structure is also a great way to think about user research, and I think that it can be especially helpful in explaining user research to others.

    A detailed graph that shows the narrative structure of The Godfather and The Dark Knight across three acts. The graph is divided into segments labeled “Act 1,” “Act 2,” and “Act 3” for each film. The purple line represents narrative elements, pacing, and rise in tension and excitement within the movies. For The Godfather, in Act 1, the line rises and then dips slightly before entering Act 2. Act 2 sees the line rise, before reaching a crescendo in Act 3. The line then declines steadily until the end of Act 3. For The Dark Knight, in Act 1, the line rises and then dips slightly before entering Act 2. Act 2 the line rises and dips slightly before entering Act 3. The line then rises again and peaks, which is followed by decline until the end of Act 3.
    Three-act structure in movies (© 2024 StudioBinder. Image used with permission from StudioBinder.).

    Use storytelling as a structure to do research

    It’s sad to say, but many have come to see research as being expendable. If budgets or timelines are tight, research tends to be one of the first things to go. Instead of investing in research, some product managers rely on designers or—worse—their own opinion to make the “right” choices for users based on their experience or accepted best practices. That may get teams some of the way, but that approach can so easily miss out on solving users’ real problems. To remain user-centered, this is something we should avoid. User research elevates design. It keeps it on track, pointing to problems and opportunities. Being aware of the issues with your product and reacting to them can help you stay ahead of your competitors.

    In the three-act structure, each act corresponds to a part of the process, and each part is critical to telling the whole story. Let’s look at the different acts and how they align with user research.

    Act one: setup

    The setup is all about understanding the background, and that’s where foundational research comes in. Foundational research (also called generative, discovery, or initial research) helps you understand users and identify their problems. You’re learning about what exists today, the challenges users have, and how the challenges affect them—just like in the movies. To do foundational research, you can conduct contextual inquiries or diary studies (or both!), which can help you start to identify problems as well as opportunities. It doesn’t need to be a huge investment in time or money.

    Erika Hall writes about minimum viable ethnography, which can be as simple as spending 15 minutes with a user and asking them one thing: “‘Walk me through your day yesterday.’ That’s it. Present that one request. Shut up and listen to them for 15 minutes. Do your damndest to keep yourself and your interests out of it. Bam, you’re doing ethnography.” According to Hall, “[This] will probably prove quite illuminating. In the highly unlikely case that you didn’t learn anything new or useful, carry on with enhanced confidence in your direction.”

    This makes total sense to me. And I love that this makes user research so accessible. You don’t need to prepare a lot of documentation; you can just recruit participants and do it! This can yield a wealth of information about your users, and it’ll help you better understand them and what’s going on in their lives. That’s really what act one is all about: understanding where users are coming from. 

    Jared Spool talks about the importance of foundational research and how it should form the bulk of your research. If you can draw from any additional user data that you can get your hands on, such as surveys or analytics, that can supplement what you’ve heard in the foundational studies or even point to areas that need further investigation. Together, all this data paints a clearer picture of the state of things and all its shortcomings. And that’s the beginning of a compelling story. It’s the point in the plot where you realize that the main characters—or the users in this case—are facing challenges that they need to overcome. Like in the movies, this is where you start to build empathy for the characters and root for them to succeed. And hopefully stakeholders are now doing the same. Their sympathy may be with their business, which could be losing money because users can’t complete certain tasks. Or maybe they do empathize with users’ struggles. Either way, act one is your initial hook to get the stakeholders interested and invested.

    Once stakeholders begin to understand the value of foundational research, that can open doors to more opportunities that involve users in the decision-making process. And that can guide product teams toward being more user-centered. This benefits everyone—users, the product, and stakeholders. It’s like winning an Oscar in movie terms—it often leads to your product being well received and successful. And this can be an incentive for stakeholders to repeat this process with other products. Storytelling is the key to this process, and knowing how to tell a good story is the only way to get stakeholders to really care about doing more research. 

    This brings us to act two, where you iteratively evaluate a design or concept to see whether it addresses the issues.

    Act two: conflict

    Act two is all about digging deeper into the problems that you identified in act one. This usually involves directional research, such as usability tests, where you assess a potential solution (such as a design) to see whether it addresses the issues that you found. The issues could include unmet needs or problems with a flow or process that’s tripping users up. Like act two in a movie, more issues will crop up along the way. It’s here that you learn more about the characters as they grow and develop through this act. 

    Usability tests should typically include around five participants according to Jakob Nielsen, who found that that number of users can usually identify most of the problems: “As you add more and more users, you learn less and less because you will keep seeing the same things again and again… After the fifth user, you are wasting your time by observing the same findings repeatedly but not learning much new.” 
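    As a rough illustration of why the returns diminish, Nielsen and Landauer model problem discovery as 1 − (1 − L)ⁿ, where L is the share of problems that a single participant surfaces (around 31% in their data). The sketch below simply evaluates that formula; treat the numbers as a rule of thumb, not a prescription for any particular study.

    ```typescript
    // Nielsen and Landauer's problem-discovery model: share found = 1 - (1 - L)^n,
    // where L ≈ 0.31 is the per-participant discovery rate reported in their research.
    function shareOfProblemsFound(participants: number, perUserRate = 0.31): number {
      return 1 - Math.pow(1 - perUserRate, participants);
    }

    for (const n of [1, 3, 5, 10, 15]) {
      console.log(`${n} participants ≈ ${(shareOfProblemsFound(n) * 100).toFixed(0)}% of problems observed`);
    }
    // Five participants already land above 80%, which is why additional sessions
    // mostly resurface the same findings.
    ```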

    There are parallels with storytelling here too; if you try to tell a story with too many characters, the plot may get lost. Having fewer participants means that each user’s struggles will be more memorable and easier to relay to other stakeholders when talking about the research. This can help convey the issues that need to be addressed while also highlighting the value of doing the research in the first place.

    Researchers have run usability tests in person for decades, but you can also conduct usability tests remotely using tools like Microsoft Teams, Zoom, or other teleconferencing software. This approach has become increasingly popular since the beginning of the pandemic, and it works well. You can think of in-person usability tests like going to a play and remote sessions as more like watching a movie. There are advantages and disadvantages to each. In-person usability research is a much richer experience. Stakeholders can experience the sessions with other stakeholders. You also get real-time reactions—including surprise, agreement, disagreement, and discussions about what they’re seeing. Much like going to a play, where audiences get to take in the stage, the costumes, the lighting, and the actors’ interactions, in-person research lets you see users up close, including their body language, how they interact with the moderator, and how the scene is set up.

    If in-person usability testing is like watching a play—staged and controlled—then conducting usability testing in the field is like immersive theater where any two sessions might be very different from one another. You can take usability testing into the field by creating a replica of the space where users interact with the product and then conduct your research there. Or you can go out to meet users at their location to do your research. With either option, you get to see how things work in context; things come up that wouldn’t have in a lab environment, and the conversation can shift in entirely different directions. As a researcher, you have less control over how these sessions go, but this can sometimes help you understand users even better. Meeting users where they are can provide clues to the external forces that could be affecting how they use your product. In-person usability tests provide another level of detail that’s often missing from remote usability tests. 

    That’s not to say that the “movies”—remote sessions—aren’t a good option. Remote sessions can reach a wider audience. They allow a lot more stakeholders to be involved in the research and to see what’s going on. And they open the doors to a much wider geographical pool of users. But with any remote session there is the potential of time wasted if participants can’t log in or get their microphone working. 

    The benefit of usability testing, whether remote or in person, is that you get to see real users interact with the designs in real time, and you can ask them questions to understand their thought processes and grasp of the solution. This can help you not only identify problems but also glean why they’re problems in the first place. Furthermore, you can test hypotheses and gauge whether your thinking is correct. By the end of the sessions, you’ll have a much clearer picture of how usable the designs are and whether they work for their intended purposes. Act two is the heart of the story—where the excitement is—but there can be surprises too. This is equally true of usability tests. Often, participants will say unexpected things, which change the way that you look at things—and these twists in the story can move things in new directions. 

    Unfortunately, user research is sometimes seen as expendable. And too often usability testing is the only research process that some stakeholders think that they ever need. In fact, if the designs that you’re evaluating in the usability test aren’t grounded in a solid understanding of your users (foundational research), there’s not much to be gained by doing usability testing in the first place. That’s because you’re narrowing the focus of what you’re getting feedback on, without understanding the users' needs. As a result, there’s no way of knowing whether the designs might solve a problem that users have. It’s only feedback on a particular design in the context of a usability test.  

    On the other hand, if you only do foundational research, while you might have set out to solve the right problem, you won’t know whether the thing that you’re building will actually solve that. This illustrates the importance of doing both foundational and directional research. 

    In act two, stakeholders will—hopefully—get to watch the story unfold in the user sessions, which creates the conflict and tension in the current design by surfacing their highs and lows. And in turn, this can help motivate stakeholders to address the issues that come up.

    Act three: resolution

    While the first two acts are about understanding the background and the tensions that can propel stakeholders into action, the third part is about resolving the problems from the first two acts. While it’s important to have an audience for the first two acts, it’s crucial that they stick around for the final act. That means the whole product team, including developers, UX practitioners, business analysts, delivery managers, product managers, and any other stakeholders that have a say in the next steps. It allows the whole team to hear users’ feedback together, ask questions, and discuss what’s possible within the project’s constraints. And it lets the UX research and design teams clarify, suggest alternatives, or give more context behind their decisions. So you can get everyone on the same page and get agreement on the way forward.

    This act is mostly told in voiceover with some audience participation. The researcher is the narrator, who paints a picture of the issues and what the future of the product could look like given the things that the team has learned. They give the stakeholders their recommendations and their guidance on creating this vision.

    Nancy Duarte in the Harvard Business Review offers an approach to structuring presentations that follow a persuasive story. “The most effective presenters use the same techniques as great storytellers: By reminding people of the status quo and then revealing the path to a better way, they set up a conflict that needs to be resolved,” writes Duarte. “That tension helps them persuade the audience to adopt a new mindset or behave differently.”

    A diagram, depicting a persuasive story pattern, segmented into distinct sections that outline a narrative flow. Starting with “Beginning,” followed by “Middle,” and concluding with “End.” The “Beginning” starts with a box labeled “What is.” A line rises up to the box labeled “What could be.” A line goes from this box into “Middle” and back down to “What is” and then back up to “What could be.” This repeats one more time in “Middle,” before a line goes from “What could be” up to a box labeled “Vision of the future” in “End.” “'Call to action” is written below the “Vision of the future” box to signify that the vision is a call to action.
    A persuasive story pattern.

    This type of structure aligns well with research results, and particularly results from usability tests. It provides evidence for “what is”—the problems that you’ve identified. And “what could be”—your recommendations on how to address them. And so on and so forth.

    You can reinforce your recommendations with examples of things that competitors are doing that could address these issues or with examples where competitors are gaining an edge. Or they can be visual, like quick mockups of how a new design could look that solves a problem. These can help generate conversation and momentum. And this continues until the end of the session when you’ve wrapped everything up in the conclusion by summarizing the main issues and suggesting a way forward. This is the part where you reiterate the main themes or problems and what they mean for the product—the denouement of the story. This stage gives stakeholders the next steps and hopefully the momentum to take those steps!

    While we are nearly at the end of this story, let’s reflect on the idea that user research is storytelling. All the elements of a good story are there in the three-act structure of user research: 

    • Act one: You meet the protagonists (the users) and the antagonists (the problems affecting users). This is the beginning of the plot. In act one, researchers might use methods including contextual inquiry, ethnography, diary studies, surveys, and analytics. The output of these methods can include personas, empathy maps, user journeys, and analytics dashboards.
    • Act two: Next, there’s character development. There’s conflict and tension as the protagonists encounter problems and challenges, which they must overcome. In act two, researchers might use methods including usability testing, competitive benchmarking, and heuristic evaluation. The output of these can include usability findings reports, UX strategy documents, usability guidelines, and best practices.
    • Act three: The protagonists triumph and you see what a better future looks like. In act three, researchers may use methods including presentation decks, storytelling, and digital media. The output of these can be: presentation decks, video clips, audio clips, and pictures. 

    The researcher has multiple roles: they’re the storyteller, the director, and the producer. The participants have a small role, but they are significant characters (in the research). And the stakeholders are the audience. But the most important thing is to get the story right and to use storytelling to tell users’ stories through research. By the end, the stakeholders should walk away with a purpose and an eagerness to resolve the product’s ills. 

    So the next time that you’re planning research with clients or you’re speaking to stakeholders about research that you’ve done, think about how you can weave in some storytelling. Ultimately, user research is a win-win for everyone, and you just need to get stakeholders interested in how the story ends.

  • To Ignite a Personalization Practice, Run this Prepersonalization Workshop

    Picture this. You’ve joined a squad at your company that’s designing new product features with an emphasis on automation or AI. Or your company has just implemented a personalization engine. Either way, you’re designing with data. Now what? When it comes to designing for personalization, there are many cautionary tales, no overnight successes, and few guides for the perplexed. 

    Between the fantasy of getting it right and the fear of it going wrong—like when we encounter “persofails” in the vein of a company repeatedly imploring everyday consumers to buy additional toilet seats—the personalization gap is real. It’s an especially confounding place to be a digital professional without a map, a compass, or a plan.

    For those of you venturing into personalization, there’s no Lonely Planet and few tour guides because effective personalization is so specific to each organization’s talent, technology, and market position. 

    But you can ensure that your team has packed its bags sensibly.

    A sign at a mountain scene says “People who liked this also liked,” which is followed by photographs of other scenic landscapes. Satirical art installation by Scott Kelly and Ben Polkinghorne.
    Designing for personalization makes for strange bedfellows. A savvy art-installation satire on the challenges of humane design in the era of the algorithm. Credit: Signs of the Times, Scott Kelly and Ben Polkinghorne.

    There’s a DIY formula to increase your chances for success. At minimum, you’ll defuse your boss’s irrational exuberance. Before the party, you’ll need to prepare effectively.

    We call it prepersonalization.

    Behind the music

    Consider Spotify’s DJ feature, which debuted this past year.

    https://www.youtube.com/watch?v=ok-aNnc0Dko

    We’re used to seeing the polished final result of a personalization feature. Before the year-end award, the making-of backstory, or the behind-the-scenes victory lap, a personalized feature had to be conceived, budgeted, and prioritized. Before any personalization feature goes live in your product or service, it lives amid a backlog of worthy ideas for expressing customer experiences more dynamically.

    So how do you know where to place your personalization bets? How do you design consistent interactions that won’t trip up users or—worse—breed mistrust? We’ve found that for many budgeted programs to justify their ongoing investments, they first needed one or more workshops to convene key stakeholders and internal customers of the technology. Make yours count.

    From Big Tech to fledgling startups, we’ve seen the same evolution up close with our clients. In our experience working on small and large personalization efforts, a program’s ultimate track record—and its ability to weather tough questions, work steadily toward shared answers, and organize its design and technology efforts—turns on how effectively these prepersonalization activities play out.

    Time and again, we’ve seen effective workshops separate future success stories from unsuccessful efforts, saving countless time, resources, and collective well-being in the process.

    A personalization practice involves a multiyear effort of testing and feature development. It’s not a switch-flip moment in your tech stack. It’s best managed as a backlog that often evolves through three steps: 

    1. customer experience optimization (CXO, also known as A/B testing or experimentation)
    2. always-on automations (whether rules-based or machine-generated)
    3. mature features or standalone product development (such as Spotify’s DJ experience)

    This is why we created our progressive personalization framework and why we’re field-testing an accompanying deck of cards: we believe that there’s a base grammar, a set of “nouns and verbs” that your organization can use to design experiences that are customized, personalized, or automated. You won’t need these cards. But we strongly recommend that you create something similar, whether that might be digital or physical.

    Set your kitchen timer

    How long does it take to cook up a prepersonalization workshop? The surrounding assessment activities that we recommend including can (and often do) span weeks. For the core workshop, we recommend aiming for two to three days. Here’s a summary of our broader approach along with details on the essential first-day activities.

    The full arc of the wider workshop is threefold:

    1. Kickstart: This sets the terms of engagement as you focus on the opportunity as well as the readiness and drive of your team and your leadership.
    2. Plan your work: This is the heart of the card-based workshop activities where you specify a plan of attack and the scope of work.
    3. Work your plan: This phase is all about creating a competitive environment for team participants to individually pitch their own pilots that each contain a proof-of-concept project, its business case, and its operating model.

    Give yourself at least a day, split into two large time blocks, to power through a concentrated version of those first two phases.

    Kickstart: Whet your appetite

    We call the first lesson the “landscape of connected experience.” It explores the personalization possibilities in your organization. A connected experience, in our parlance, is any UX requiring the orchestration of multiple systems of record on the backend. This could be a content-management system combined with a marketing-automation platform. It could be a digital-asset manager combined with a customer-data platform.

    Spark conversation by naming consumer examples and business-to-business examples of connected experience interactions that you admire, find familiar, or even dislike. This should cover a representative range of personalization patterns, including automated app-based interactions (such as onboarding sequences or wizards), notifications, and recommenders. We have a catalog of these in the cards. Here’s a list of 142 different interactions to jog your thinking.

    This is all about setting the table. What are the possible paths for the practice in your organization? If you want a broader view, here’s a long-form primer and a strategic framework.

    Assess each example that you discuss for its complexity and the level of effort that you estimate that it would take for your team to deliver that feature (or something similar). In our cards, we divide connected experiences into five levels: functions, features, experiences, complete products, and portfolios. Size your own build here. This will help to focus the conversation on the merits of ongoing investment as well as the gap between what you deliver today and what you want to deliver in the future.

    Next, have your team plot each idea on the following 2×2 grid, which lays out the four enduring arguments for a personalized experience. This is critical because it emphasizes how personalization can not only help your external customers but also affect your own ways of working. It’s also a reminder (which is why we used the word argument earlier) of the broader effort beyond these tactical interventions.

    A two-by-two grid shows the four areas of emphasis for a personalization program in an organization: Business efficiency, customer experience, business orchestration, and customer understanding. The focus varies from front-stage to back-stage and from business-focused to customer-focused outcomes.
    Getting intentional about the desired outcomes is an important component to a large-scale personalization program. Credit: Bucket Studio.

    Each team member should vote on where they see your product or service putting its emphasis. Naturally, you can’t prioritize all of them. The intention here is to flesh out how different departments may view their own upsides to the effort, which can vary from one to the next. Documenting your desired outcomes lets you know how the team internally aligns across representatives from different departments or functional areas.

    The third and final kickstart activity is about naming your personalization gap. Is your customer journey well documented? Will data and privacy compliance be too big of a challenge? Do you have content metadata needs that you have to address? (We’re pretty sure that you do: it’s just a matter of recognizing the relative size of that need and its remedy.) In our cards, we’ve noted a number of program risks, including common team dispositions. Our Detractor card, for example, lists six stakeholder behaviors that hinder progress.

    Effectively collaborating and managing expectations is critical to your success. Consider the potential barriers to your future progress. Press the participants to name specific steps to overcome or mitigate those barriers in your organization. As studies have shown, personalization efforts face many common barriers.

    Barriers to personalization according to a Boston Consulting Group 2016 research study. The top items include “too few personnel dedicated to personalization,” “lack of a clear roadmap,” and “inadequate cross-functional coordination and project management.”
    The largest management consultancies have established practice areas in personalization, and they regularly research program risks and challenges. Credit: Boston Consulting Group.

    At this point, you’ve hopefully discussed sample interactions, emphasized a key area of benefit, and flagged key gaps. Good—you’re ready to continue.

    Hit that test kitchen

    Next, let’s look at what you’ll need to bring your personalization recipes to life. Personalization engines, which are robust software suites for automating and expressing dynamic content, can intimidate new customers. Their capabilities are sweeping and powerful, and they present broad options for how your organization can conduct its activities. This presents the question: Where do you begin when you’re configuring a connected experience?

    What’s important here is to avoid treating the installed software as if it were a dream kitchen from some fantasy remodeling project (as one of our client executives memorably put it). These software engines are more like test kitchens where your team can begin devising, tasting, and refining the snacks and meals that will become a part of your personalization program’s regularly evolving menu.

    The Progressive Personalization Model v2: A pyramid with the following layers, starting at the base and working up: Raw Data (millions), Actionable Data (hundreds of thousands), Segments (thousands), Customer Experience Patterns (many), Interactions (dozens), and Goals (handful).
    Progressive personalization, a framework for designing connected experiences. Credit: Bucket Studio and Colin Eagan.

    The ultimate menu of the prioritized backlog will come together over the course of the workshop. And creating “dishes” is the way that you’ll have individual team stakeholders construct personalized interactions that serve their needs or the needs of others.

    The dishes will come from recipes, and those recipes have set ingredients.

    A photo of the Progressive Personalization deck of cards with accompanying text reading: Align on key terms and tactics. Draft and groom a full backlog, designing with data.
    A zoomed out view of many of the cards in the deck. Cards have colors corresponding to the layers of the personalization pyramid and include actionable details.
    Progressive personalization is a model of designing for personalized interactions that uses playing cards to assemble the typical parts for such features and functionality.
    In the same way that ingredients form a recipe, you can also create cards to break down a personalized interaction into its constituent parts. Credit: Bucket Studio and Colin Eagan.

    Verify your ingredients

    Like a good product manager, you’ll make sure—and you’ll validate with the right stakeholders present—that you have all the ingredients on hand to cook up your desired interaction (or that you can work out what needs to be added to your pantry). These ingredients include the audience that you’re targeting, content and design elements, the context for the interaction, and your measure for how it’ll come together.

    This isn’t just about discovering requirements. Documenting your personalizations as a series of if-then statements lets the team: 

    1. compare findings toward a unified approach for developing features, not unlike when artists paint with the same palette; 
    2. specify a consistent set of interactions that users find uniform or familiar; 
    3. and develop parity across performance measurements and key performance indicators too. 

    This helps you streamline your designs and your technical efforts while you deliver a shared palette of core motifs of your personalized or automated experience.

    Compose your recipe

    What ingredients are important to you? Think of a who-what-when-why construct:

    • Who are your key audience segments or groups?
    • What kind of content will you give them, in what design elements, and under what circumstances?
    • And for which business and user benefits?

    We first developed these cards and card categories five years ago. We regularly play-test their fit with conference audiences and clients. And we still encounter new possibilities. But they all follow an underlying who-what-when-why logic.

    Here are three examples for a subscription-based reading app, which you can generally follow along with, right to left, in the cards in the accompanying photos below. (A rough data sketch of the first recipe follows the photos.)

    1. Nurture personalization: When a guest or an unknown visitor interacts with a product title, a banner or alert bar appears that makes it easier for them to encounter a related title they may want to read, saving them time.
    2. Welcome automation: When there’s a newly registered user, an email is generated to call out the breadth of the content catalog and to make them a happier subscriber.
    3. Winback automation: Before their subscription lapses or after a recent failed renewal, a user is sent an email that gives them a promotional offer to suggest that they reconsider renewing or to remind them to renew.
    A selection of prompt cards assembled to represent the key parameters of a “nurture” user flow.
    A “nurture” automation may trigger a banner or alert box that promotes content that makes it easier for users to complete a common task, based on behavioral profiling of two user types. Credit: Bucket Studio.
    A selection of prompt cards assembled to represent the key parameters of a “welcome”, or onboarding, user flow.
    A “welcome” automation may be triggered for any newly registered user, sending an email to help familiarize them with the breadth of a content library; this email ideally helps them consider selecting various titles (no matter how much time they devote to reviewing the email’s content itself). Credit: Bucket Studio.
    A selection of prompt cards assembled to represent the key parameters of a “winback”, or customer-churn risk, user flow.
    A “winback” automation may be triggered for a specific group, such as users with recently failed credit-card transactions or users at risk of churning out of active usage, presenting them with a specific offer to mitigate near-future inactivity. Credit: Bucket Studio.
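    To make the “recipe” idea concrete, here’s a minimal sketch of how one of these who-what-when-why recipes could be captured as a structured, if-then record. The field names and values below are illustrative assumptions for the reading-app example above, not part of the actual card deck.

    ```typescript
    // A hypothetical shape for a personalization "recipe," mirroring the
    // who-what-when-why construct. Names and values are illustrative only.
    interface PersonalizationRecipe {
      name: string;       // the dish
      audience: string;   // who: the key segment or group
      trigger: string;    // when: the circumstance that fires the interaction
      treatment: string;  // what: the content and design element delivered
      benefit: string;    // why: the business or user outcome to measure
    }

    // The "nurture" example from the list above, expressed as a single record.
    const nurture: PersonalizationRecipe = {
      name: "Nurture personalization",
      audience: "Guest or unknown visitor",
      trigger: "Interacts with a product title",
      treatment: "Banner or alert bar surfacing a related title",
      benefit: "Saves the reader time finding their next title",
    };

    console.log(
      `IF ${nurture.audience.toLowerCase()} ${nurture.trigger.toLowerCase()}, ` +
        `THEN show ${nurture.treatment.toLowerCase()} BECAUSE ${nurture.benefit.toLowerCase()}.`
    );
    ```

    Writing every dish in the same shape is what makes it possible to compare recipes side by side and keep measurement consistent across them.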

    A useful preworkshop activity may be to think through a first draft of what these cards might be for your organization, although we’ve also found that this process sometimes flows best through cocreating the recipes themselves. Start with a set of blank cards, and begin labeling and grouping them through the design process, eventually distilling them to a refined subset of highly useful candidate cards.

    You can think of the later stages of the workshop as moving from recipes toward a cookbook in focus—like a more nuanced customer-journey mapping. Individual “cooks” will pitch their recipes to the team, using a common jobs-to-be-done format so that measurability and results are baked in, and from there, the resulting collection will be prioritized for finished design and delivery to production.

    Better kitchens require better architecture

    Simplifying a customer experience is a complicated effort for those who are inside delivering it. Beware anyone who says otherwise. With that being said, “Complicated problems can be hard to solve, but they are addressable with rules and recipes.”

    When personalization becomes a laugh line, it’s because a team is overfitting: they aren’t designing with their best data. Like a sparse pantry, every organization has metadata debt to go along with its technical debt, and this creates a drag on personalization effectiveness. Your AI’s output quality, for example, is indeed limited by your IA. Spotify’s poster-child prowess today was unfathomable before they acquired a seemingly modest metadata startup that now powers its underlying information architecture.

    You can definitely stand the heat…

    Personalization technology opens a doorway into a confounding ocean of possible designs. Only a disciplined and highly collaborative approach will bring about the necessary focus and intention to succeed. So banish the dream kitchen. Instead, hit the test kitchen to save time, preserve job satisfaction and security, and safely dispense with the fanciful ideas that originate upstairs of the doers in your organization. There are meals to serve and mouths to feed.

    This workshop framework gives you a fighting shot at lasting success as well as sound beginnings. Wiring up your information layer isn’t an overnight affair. But if you use the same cookbook and shared recipes, you’ll have solid footing for success. We designed these activities to make your organization’s needs concrete and clear, long before the hazards pile up.

    While there are associated costs toward investing in this kind of technology and product design, your ability to size up and confront your unique situation and your digital capabilities is time well spent. Don’t squander it. The proof, as they say, is in the pudding.

  • The Wax and the Wane of the Web

    I offer a single bit of advice to friends and family when they become new parents: When you start to think that you’ve got everything figured out, everything will change. Just as you start to get the hang of feedings, diapers, and regular naps, it’s time for solid food, potty training, and overnight sleeping. When you figure those out, it’s time for preschool and rare naps. The cycle goes on and on.

    The same applies for those of us working in design and development these days. Having worked on the web for almost three decades at this point, I’ve seen the regular wax and wane of ideas, techniques, and technologies. Each time that we as developers and designers get into a regular rhythm, some new idea or technology comes along to shake things up and remake our world.

    How we got here

    I built my first website in the mid-’90s. Design and development on the web back then was a free-for-all, with few established norms. For any layout aside from a single column, we used table elements, often with empty cells containing a single pixel spacer GIF to add empty space. We styled text with numerous font tags, nesting the tags every time we wanted to vary the font style. And we had only three or four typefaces to choose from: Arial, Courier, or Times New Roman. When Verdana and Georgia came out in 1996, we rejoiced because our options had nearly doubled. The only safe colors to choose from were the 216 “web safe” colors known to work across platforms. The few interactive elements (like contact forms, guest books, and counters) were mostly powered by CGI scripts (predominantly written in Perl at the time). Achieving any kind of unique look involved a pile of hacks all the way down. Interaction was often limited to specific pages in a site.

    The birth of web standards

    At the turn of the century, a new cycle started. Crufty code littered with table layouts and font tags waned, and a push for web standards waxed. Newer technologies like CSS got more widespread adoption by browser makers, developers, and designers. This shift toward standards didn’t happen accidentally or overnight. It took active engagement between the W3C and browser vendors and heavy evangelism from folks like the Web Standards Project to build standards. A List Apart and books like Designing with Web Standards by Jeffrey Zeldman played key roles in teaching developers and designers why standards are important, how to implement them, and how to sell them to their organizations. And approaches like progressive enhancement introduced the idea that content should be available for all browsers—with additional enhancements available for more advanced browsers. Meanwhile, sites like the CSS Zen Garden showcased just how powerful and versatile CSS can be when combined with a solid semantic HTML structure.

    Server-side languages like PHP, Java, and .NET overtook Perl as the predominant back-end processors, and the cgi-bin was tossed in the trash bin. With these better server-side tools came the first era of web applications, starting with content-management systems (particularly in the blogging space with tools like Blogger, Grey Matter, Movable Type, and WordPress). In the mid-2000s, AJAX opened doors for asynchronous interaction between the front end and back end. Suddenly, pages could update their content without needing to reload. A crop of JavaScript frameworks like Prototype, YUI, and jQuery arose to help developers build more reliable client-side interaction across browsers that had wildly varying levels of standards support. Techniques like image replacement let crafty designers and developers display fonts of their choosing. And technologies like Flash made it possible to add animations, games, and even more interactivity.

    These new technologies, standards, and techniques reinvigorated the industry in many ways. Web design flourished as designers and developers explored more diverse styles and layouts. But we still relied on tons of hacks. Early CSS was a huge improvement over table-based layouts when it came to basic layout and text styling, but its limitations at the time meant that designers and developers still relied heavily on images for complex shapes (such as rounded or angled corners) and tiled backgrounds for the appearance of full-length columns (among other hacks). Complicated layouts required all manner of nested floats or absolute positioning (or both). Flash and image replacement for custom fonts were a great start toward varying the typefaces from the big five, but both hacks introduced accessibility and performance problems. And JavaScript libraries made it easy for anyone to add a dash of interaction to pages, although at the cost of doubling or even quadrupling the download size of simple websites.

    The web as software platform

    The symbiosis between the front end and back end continued to improve, and that led to the current era of modern web applications. Between expanded server-side programming languages (which kept growing to include Ruby, Python, Go, and others) and newer front-end tools like React, Vue, and Angular, we could build fully capable software on the web. Alongside these tools came others, including collaborative version control, build automation, and shared package libraries. What was once primarily an environment for linked documents became a realm of infinite possibilities.

    At the same time, mobile devices became more capable, and they gave us internet access in our pockets. Mobile apps and responsive design opened up opportunities for new interactions anywhere and any time.

    This combination of capable mobile devices and powerful development tools contributed to the waxing of social media and other centralized tools for people to connect and consume. As it became easier and more common to connect with others directly on Twitter, Facebook, and even Slack, the desire for hosted personal sites waned. Social media offered connections on a global scale, with both the good and bad that that entails.

    Want a much more extensive history of how we got here, with some other takes on ways that we can improve? Jeremy Keith wrote “Of Time and the Web.” Or check out the “Web Design History Timeline” at the Web Design Museum. Neal Agarwal also has a fun tour through “Internet Artifacts.”

    Where we are now

    In the last couple of years, it’s felt like we’ve begun to reach another major inflection point. As social-media platforms fracture and wane, there’s been a growing interest in owning our own content again. There are many different ways to make a website, from the tried-and-true classic of hosting plain HTML files to static site generators to content management systems of all flavors. The fracturing of social media also comes with a cost: we lose crucial infrastructure for discovery and connection. Webmentions, RSS, ActivityPub, and other tools of the IndieWeb can help with this, but they’re still relatively underimplemented and hard to use for the less nerdy. We can build amazing personal websites and add to them regularly, but without discovery and connection, it can sometimes feel like we may as well be shouting into the void.

    Browser support for CSS, JavaScript, and other standards like web components has accelerated, especially through efforts like Interop. New technologies gain support across the board in a fraction of the time that they used to. I often learn about a new feature and check its browser support only to find that its coverage is already above 80 percent. Nowadays, the barrier to using newer techniques often isn’t browser support but simply the limits of how quickly designers and developers can learn what’s available and how to adopt it.

    Today, with a few commands and a couple of lines of code, we can prototype almost any idea. All the tools that we now have available make it easier than ever to start something new. But the upfront cost that these frameworks may save in initial delivery eventually comes due as upgrading and maintaining them becomes a part of our technical debt.

    If we rely on third-party frameworks, adopting new standards can sometimes take longer since we may have to wait for those frameworks to adopt those standards. These frameworks—which used to let us adopt new techniques sooner—have now become hindrances instead. These same frameworks often come with performance costs too, forcing users to wait for scripts to load before they can read or interact with pages. And when scripts fail (whether through poor code, network issues, or other environmental factors), there’s often no alternative, leaving users with blank or broken pages.

    Where do we go from here?

    Today’s hacks help to shape tomorrow’s standards. And there’s nothing inherently wrong with embracing hacks—for now—to move the present forward. Problems only arise when we’re unwilling to admit that they’re hacks or we hesitate to replace them. So what can we do to create the future we want for the web?

    Build for the long haul. Optimize for performance, for accessibility, and for the user. Weigh the costs of those developer-friendly tools. They may make your job a little easier today, but how do they affect everything else? What’s the cost to users? To future developers? To standards adoption? Sometimes the convenience may be worth it. Sometimes it’s just a hack that you’ve grown accustomed to. And sometimes it’s holding you back from even better options.

    Start from standards. Standards continue to evolve over time, but browsers have done a remarkably good job of continuing to support older standards. The same isn’t always true of third-party frameworks. Sites built with even the hackiest of HTML from the ’90s still work just fine today. The same can’t always be said of sites built with frameworks even after just a couple years.

    Design with care. Whether your craft is code, pixels, or processes, consider the impacts of each decision. The convenience of many a modern tool comes at the cost of not always understanding the underlying decisions that have led to its design and not always considering the impact that those decisions can have. Rather than rushing headlong to “move fast and break things,” use the time saved by modern tools to consider more carefully and design with deliberation.

    Always be learning. If you’re always learning, you’re also growing. Sometimes it may be hard to pinpoint what’s worth learning and what’s just today’s hack. You might end up focusing on something that won’t matter next year, even if you were to focus solely on learning standards. (Remember XHTML?) But constant learning opens up new connections in your brain, and the hacks that you learn one day may help to inform different experiments another day.

    Play, experiment, and be weird! This web that we’ve built is the ultimate experiment. It’s the single largest human endeavor in history, and yet each of us can create our own pocket within it. Be courageous and try new things. Build a playground for ideas. Make goofy experiments in your own mad science lab. Start your own small business. There has never been a more empowering place to be creative, take risks, and explore what we’re capable of.

    Share and amplify. As you experiment, play, and learn, share what’s worked for you. Write on your own website, post on whichever social media site you prefer, or shout it from a TikTok. Write something for A List Apart! But take the time to amplify others too: find new voices, learn from them, and share what they’ve taught you.

    Go forth and make

    As designers and developers for the web (and beyond), we’re responsible for building the future every day, whether that may take the shape of personal websites, social media tools used by billions, or anything in between. Let’s imbue our values into the things that we create, and let’s make the web a better place for everyone. Create that thing that only you are uniquely qualified to make. Then share it, make it better, make it again, or make something new. Learn. Make. Share. Grow. Rinse and repeat. Every time you think that you’ve mastered the web, everything will change.

  • Opportunities for AI in Accessibility

    In reading Joe Dolson’s recent piece on the intersection of AI and accessibility, I absolutely appreciated the skepticism that he has for AI in general as well as for the ways that many have been using it. In fact, I’m very skeptical of AI myself, despite my role at Microsoft as an accessibility innovation strategist who helps run the AI for Accessibility grant program. As with any tool, AI can be used in very constructive, inclusive, and accessible ways; and it can also be used in destructive, exclusive, and harmful ones. And there are a ton of uses somewhere in the mediocre middle as well.

    I’d like you to consider this a “yes… and” piece to complement Joe’s post. I’m not trying to refute any of what he’s saying but rather provide some visibility to projects and opportunities where AI can make meaningful differences for people with disabilities. To be clear, I’m not saying that there aren’t real risks or pressing issues with AI that need to be addressed—there are, and we’ve needed to address them, like, yesterday—but I want to take a little time to talk about what’s possible in hopes that we’ll get there one day.

    Alternative text

    Joe’s piece spends a lot of time talking about computer-vision models generating alternative text. He highlights a ton of valid issues with the current state of things. And while computer-vision models continue to improve in the quality and richness of detail in their descriptions, their results aren’t great. As he rightly points out, the current state of image analysis is pretty poor—especially for certain image types—in large part because current AI systems examine images in isolation rather than within the contexts that they’re in (which is a consequence of having separate “foundation” models for text analysis and image analysis). Today’s models aren’t trained to distinguish between images that are contextually relevant (that should probably have descriptions) and those that are purely decorative (which might not need a description) either. Still, I think there’s potential in this space.

    As Joe mentions, human-in-the-loop authoring of alt text should absolutely be a thing. And if AI can pop in to offer a starting point for alt text—even if that starting point might be a prompt saying What is this BS? That’s not right at all… Let me try to offer a starting point—I think that’s a win.

    Taking things a step further, if we can specifically train a model to analyze image usage in context, it could help us more quickly identify which images are likely to be decorative and which ones likely require a description. That will help reinforce which contexts call for image descriptions and it’ll improve authors’ efficiency toward making their pages more accessible.

    While complex images—like graphs and charts—are challenging to describe in any sort of succinct way (even for humans), the image example shared in the GPT-4 announcement points to an interesting opportunity as well. Let’s suppose that you came across a chart whose description was simply the title of the chart and the kind of visualization it was, such as: Pie chart comparing smartphone usage to feature phone usage among US households making under $30,000 a year. (That would be a pretty awful alt text for a chart since that would tend to leave many questions about the data unanswered, but then again, let’s suppose that that was the description that was in place.) If your browser knew that that image was a pie chart (because an onboard model concluded this), imagine a world where users could ask questions like these about the graphic:

    • Do more people use smartphones or feature phones?
    • How many more?
    • Is there a group of people that don’t fall into either of these buckets?
    • How many is that?

    Setting aside the realities of large language model (LLM) hallucinations—where a model just makes up plausible-sounding “facts”—for a moment, the opportunity to learn more about images and data in this way could be revolutionary for blind and low-vision folks as well as for people with various forms of color blindness, cognitive disabilities, and so on. It could also be useful in educational contexts to help people who can see these charts, as is, to understand the data in the charts.

    Taking things a step further: What if you could ask your browser to simplify a complex chart? What if you could ask it to isolate a single line on a line graph? What if you could ask your browser to transpose the colors of the different lines to work better for the form of color blindness you have? What if you could ask it to swap colors for patterns? Given these tools’ chat-based interfaces and our existing ability to manipulate images in today’s AI tools, that seems like a possibility.

    Now imagine a purpose-built model that could extract the information from that chart and convert it to another format. For example, perhaps it could turn that pie chart (or better yet, a series of pie charts) into more accessible (and useful) formats, like spreadsheets. That would be amazing!

    Matching algorithms

    Safiya Umoja Noble absolutely hit the nail on the head when she titled her book Algorithms of Oppression. While her book was focused on the ways that search engines reinforce racism, I think that it’s equally true that all computer models have the potential to amplify conflict, bias, and intolerance. Whether it’s Twitter always showing you the latest tweet from a bored billionaire, YouTube sending us into a Q-hole, or Instagram warping our ideas of what natural bodies look like, we know that poorly authored and maintained algorithms are incredibly harmful. A lot of this stems from a lack of diversity among the people who shape and build them. When these platforms are built with inclusivity baked in, however, there’s real potential for algorithm development to help people with disabilities.

    Take Mentra, for example. They are an employment network for neurodivergent people. They use an algorithm to match job seekers with potential employers based on over 75 data points. On the job-seeker side of things, it considers each candidate’s strengths, their necessary and preferred workplace accommodations, environmental sensitivities, and so on. On the employer side, it considers each work environment, communication factors related to each job, and the like. As a company run by neurodivergent folks, Mentra made the decision to flip the script when it came to typical employment sites. They use their algorithm to propose available candidates to companies, who can then connect with job seekers that they are interested in, reducing the emotional and physical labor on the job-seeker side of things.

    When more people with disabilities are involved in the creation of algorithms, that can reduce the chances that these algorithms will inflict harm on their communities. That’s why diverse teams are so important.

    Imagine if a social media company’s recommendation engine were tuned to analyze who you’re following and to prioritize follow recommendations for people who talked about similar things but who were different in some key ways from your existing sphere of influence. For example, if you were to follow a bunch of nondisabled white male academics who talk about AI, it could suggest that you follow academics who are disabled or aren’t white or aren’t male who also talk about AI. If you took its recommendations, perhaps you’d get a more holistic and nuanced understanding of what’s happening in the AI field. These same systems should also use their understanding of biases about particular communities—including, for instance, the disability community—to make sure that they aren’t recommending any of their users follow accounts that perpetuate biases against (or, worse, spew hate toward) those groups.

    Other ways that AI can help people with disabilities

    If I weren’t trying to put this together between other tasks, I’m sure that I could go on and on, providing all kinds of examples of how AI could be used to help people with disabilities, but I’m going to make this last section into a bit of a lightning round. In no particular order:

    • Voice preservation. You may have seen the VALL-E paper or Apple’s Global Accessibility Awareness Day announcement or you may be familiar with the voice-preservation offerings from Microsoft, Acapela, or others. It’s possible to train an AI model to replicate your voice, which can be a tremendous boon for people who have ALS (Lou Gehrig’s disease) or motor-neuron disease or other medical conditions that can lead to an inability to talk. This is, of course, the same tech that can also be used to create audio deepfakes, so it’s something that we need to approach responsibly, but the tech has truly transformative potential.
    • Voice recognition. Researchers like those in the Speech Accessibility Project are paying people with disabilities for their help in collecting recordings of people with atypical speech. As I type, they are actively recruiting people with Parkinson’s and related conditions, and they have plans to expand this to other conditions as the project progresses. This research will result in more inclusive data sets that will let more people with disabilities use voice assistants, dictation software, and voice-response services as well as control their computers and other devices more easily, using only their voice.
    • Text transformation. The current generation of LLMs is quite capable of adjusting existing text content without injecting hallucinations. This is hugely empowering for people with cognitive disabilities who may benefit from text summaries or simplified versions of text or even text that’s prepped for Bionic Reading. A rough sketch of what such a rewrite request could look like follows this list.
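
    As promised above, one hedged illustration of that last bullet: this is roughly what a plain-language rewrite request could look like against a chat-style LLM API, here using the OpenAI Python client. The model name and prompt wording are placeholders, and as with any sensitivity-adjacent text, a human should still review the output.

        # Hedged sketch: ask a chat-style LLM for a simplified rewrite of a passage.
        # Model name and prompt are placeholders, not recommendations.
        from openai import OpenAI

        client = OpenAI()  # expects OPENAI_API_KEY in the environment

        def simplify(text: str) -> str:
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder; any capable chat model
                messages=[
                    {"role": "system",
                     "content": "Rewrite the user's text in plain language at roughly "
                                "a 6th-grade reading level. Keep every fact; add nothing."},
                    {"role": "user", "content": text},
                ],
            )
            return resp.choices[0].message.content  # still needs human review before publishing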

    The importance of diverse teams and data

    We need to recognize that our differences matter. Our lived experiences are influenced by the intersections of the identities that we exist in. These lived experiences—with all their complexities (and joys and pain)—are valuable inputs to the software, services, and societies that we shape. Our differences need to be represented in the data that we use to train new models, and the folks who contribute that valuable information need to be compensated for sharing it with us. Inclusive data sets yield more robust models that foster more equitable outcomes.

    Want a model that doesn’t demean or patronize or objectify people with disabilities? Make sure that you have content about disabilities that’s authored by people with a range of disabilities, and make sure that that’s well represented in the training data.

    Want a model that doesn’t use ableist language? You may be able to use existing data sets to build a filter that can intercept and remediate ableist language before it reaches readers. That being said, when it comes to sensitivity reading, AI models won’t be replacing human copy editors anytime soon. 
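
    The article doesn’t point to a specific data set, but a first pass at such a filter could be as modest as a reviewed term list with editor-approved alternatives, flagging matches for a human copy editor rather than rewriting anything automatically. The tiny list below is a placeholder; a real filter would be built from vetted, community-authored data.

        # Minimal sketch of an ableist-language linter: flag terms from a reviewed
        # list and surface suggested alternatives to a human editor.
        import re

        TERM_SUGGESTIONS = {  # placeholder entries only
            r"\bwheelchair[- ]bound\b": "wheelchair user",
            r"\bsuffers from\b": "has",
            r"\bthe disabled\b": "disabled people",
        }

        def flag_ableist_language(text: str) -> list[dict]:
            findings = []
            for pattern, suggestion in TERM_SUGGESTIONS.items():
                for m in re.finditer(pattern, text, flags=re.IGNORECASE):
                    findings.append({
                        "span": m.span(),
                        "found": m.group(0),
                        "suggestion": suggestion,  # shown to the editor, never auto-applied
                    })
            return findings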

    Want a coding copilot that gives you accessible recommendations from the jump? Train it on code that you know to be accessible.


    I have no doubt that AI can and will harm people… today, tomorrow, and well into the future. But I also believe that we can acknowledge that and, with an eye towards accessibility (and, more broadly, inclusion), make thoughtful, considerate, and intentional changes in our approaches to AI that will reduce harm over time as well. Today, tomorrow, and well into the future.


    Many thanks to Kartik Sawhney for helping me with the development of this piece, Ashley Bischoff for her invaluable editorial assistance, and, of course, Joe Dolson for the prompt.

  • I am a creative.

    I am a creative. What I do is alchemy. It is a mystery. I do not so much do it, as let it be done through me.

    I am a creative. Not all creative people like this label. Not all see themselves this way. Some creative people see science in what they do. That is their truth, and I respect it. Maybe I even envy them, a little. But my process is different—my being is different.

    Apologizing and qualifying in advance is a distraction. That’s what my brain does to sabotage me. I set it aside for now. I can come back later to apologize and qualify. After I’ve said what I came to say. Which is hard enough. 

    Except when it is easy and flows like a river of wine.

    Sometimes it does come that way. Sometimes what I need to create comes in an instant. I have learned not to say it at that moment, because if you admit that sometimes the idea just comes and it is the best idea and you know it is the best idea, they think you don’t work hard enough.

    Sometimes I work and work and work until the idea comes. Sometimes it comes instantly and I don’t tell anyone for three days. Sometimes I’m so excited by the idea that came instantly that I blurt it out, can’t help myself. Like a boy who found a prize in his Cracker Jacks. Sometimes I get away with this. Sometimes other people agree: yes, that is the best idea. Most times they don’t and I regret having  given way to enthusiasm. 

    Enthusiasm is best saved for the meeting where it will make a difference. Not the casual get-together that precedes that meeting by two other meetings. Nobody knows why we have all these meetings. We keep saying we’re doing away with them, but then just finding other ways to have them. Sometimes they are even good. But other times they are a distraction from the actual work. The proportion between when meetings are useful, and when they are a pitiful distraction, varies, depending on what you do and where you do it. And who you are and how you do it. Again I digress. I am a creative. That is the theme.

    Sometimes many hours of hard and patient work produce something that is barely serviceable. Sometimes I have to accept that and move on to the next project.

    Don’t ask about process. I am a creative.

    I am a creative. I don’t control my dreams. And I don’t control my best ideas.

    I can hammer away, surround myself with facts or images, and sometimes that works. I can go for a walk, and sometimes that works. I can be making dinner and there’s a Eureka having nothing to do with sizzling oil and bubbling pots. Often I know what to do the instant I wake up. And then, almost as often, as I become conscious and part of the world again, the idea that would have saved me turns to vanishing dust in a mindless wind of oblivion. For creativity, I believe, comes from that other world. The one we enter in dreams, and perhaps, before birth and after death. But that’s for poets to wonder, and I am not a poet. I am a creative. And it’s for theologians to mass armies about in their creative world that they insist is real. But that is another digression. And a depressing one. Maybe on a much more important topic than whether I am a creative or not. But still a digression from what I came here to say.

    Sometimes the process is avoidance. And agony. You know the cliché about the tortured artist? It’s true, even when the artist (and let’s put that noun in quotes) is trying to write a soft drink jingle, a callback in a tired sitcom, a budget request.

    Some people who hate being called creative may be closeted creatives, but that’s between them and their gods. No offense meant. Your truth is true, too. But mine is for me. 

    Creatives recognize creatives.

    Creatives recognize creatives like queers recognize queers, like real rappers recognize real rappers, like cons know cons. Creatives feel massive respect for creatives. We love, honor, emulate, and practically deify the great ones. To deify any human is, of course, a tragic mistake. We have been warned. We know better. We know people are just people. They squabble, they are lonely, they regret their most important decisions, they are poor and hungry, they can be cruel, they can be just as stupid as we can, because, like us, they are clay. But. But. But they make this amazing thing. They birth something that did not exist before them, and could not exist without them. They are the mothers of ideas. And I suppose, since it’s just lying there, I have to add that they are the mothers of invention. Ba dum bum! OK, that’s done. Continue.

    Creatives belittle our own small achievements, because we compare them to those of the great ones. Beautiful animation! Well, I’m no Miyazaki. Now THAT is greatness. That is greatness straight from the mind of God. This half-starved little thing that I made? It more or less fell off the back of the turnip truck. And the turnips weren’t even fresh.

    Creatives know that, at best, they are Salieri. Even the creatives who are Mozart believe that.

    I am a creative. I haven’t worked in advertising in 30 years, but in my nightmares, it’s my former creative directors who judge me. And they are right to do so. I am too lazy, too facile, and when it really counts, my mind goes blank. There is no pill for creative dysfunction.

    I am a creative. Every deadline I make is an adventure that makes Indiana Jones look like a pensioner snoring in a deck chair. The longer I remain a creative, the faster I am when I do my work and the longer I brood and walk in circles and stare blankly before I do that work. 

    I am still 10 times faster than people who are not creative, or people who have only been creative a short while, or people who have only been professionally creative a short while. It’s just that, before I work 10 times as fast as they do, I spend twice as long as they do putting the work off. I am that confident in my ability to do a great job when I put my mind to it. I am that addicted to the adrenaline rush of postponement. I am still that afraid of the jump.

    I am not an artist.

    I am a creative. Not an artist. Though I dreamed, as a lad, of someday being that. Some of us belittle our gifts and dislike ourselves because we are not Michelangelos and Warhols. That is narcissism—but at least we aren’t in politics.

    I am a creative. Though I believe in reason and science, I decide by intuition and impulse. And live with what follows—the catastrophes as well as the triumphs. 

    I am a creative. Every word I’ve said here will annoy other creatives, who see things differently. Ask two creatives a question, get three opinions. Our disagreement, our passion about it, and our commitment to our own truth are, at least to me, the proofs that we are creatives, no matter how we may feel about it.

    I am a creative. I lament my lack of taste in the areas about which I know very little, which is to say almost all areas of human knowledge. And I trust my taste above all other things in the areas closest to my heart, or perhaps, more accurately, to my obsessions. Without my obsessions, I would probably have to spend my time looking life in the eye, and almost none of us can do that for long. Not honestly. Not really. Because much in life, if you really look at it, is unbearable.

    I am a creative. I believe, as a parent believes, that when I am gone, some small good part of me will carry on in the mind of at least one other person.

    Working saves me from worrying about work.

    I am a creative. I live in dread of my small gift suddenly going away.

    I am a creative. I am too busy making the next thing to spend too much time deeply considering that almost nothing I make will come anywhere near the greatness I comically aspire to.

    I am a creative. I believe in the ultimate mystery of process. I believe in it so much, I am even fool enough to publish an essay I dictated into a tiny machine and didn’t take time to review or revise. I won’t do this often, I promise. But I did it just now, because, as afraid as I might be of your seeing through my pitiful gestures toward the beautiful, I was even more afraid of forgetting what I came to say. 

    There. I think I’ve said it. 

Search Engine Watch
Keep updated with major stories about search engine marketing and search engines as published by Search Engine Watch.
ClickZ News
Breaking news, information, and analysis.
PCWorld
PCWorld helps you navigate the PC ecosystem to find the products you want and the advice you need to get the job done.
  • 8 last-minute Prime Day tech deals under $100: Don’t miss out!

    The final hours of Amazon’s Prime Big Deal Days are upon us, and you’ve probably made all the big purchases you were targeting heading into the event. But why not spend the waning hours of October Prime Day by seizing advantage of the deals to upgrade your overall tech setup?

    Here are a handful of under-$100 tech deals you absolutely, positively don’t want to miss before Amazon’s Big Deal Days ends at midnight Pacific time.

    SK Hynix Beetle X31 portable SSD: $64.99 (36% off) on Amazon

    We absolutely adored this pint-sized powerhouse when we reviewed it, gracing it with a sterling 4.5-star rating and a coveted Editors’ Choice award. During Prime Days, you can get the 1TB model for a measly $65 – way below the usual $95 and in fact cheaper than the 512GB model. Don’t miss out – I’ve personally recommended this deal to three of my friends during Prime Day.

    Kensington SD2480T USB-C/Thunderbolt 3 dock: $62.77 (65% off) on Amazon

    Thunderbolt docks provide abundant connections for port-starved laptops, letting you turn your notebook into an ad-hoc desktop setup, but man are they expensive: Most start at $200 and only go up from there.

    Fortunately, the Kensington SD2480T is available for a song during Prime Days as the company clears out older (but still incredibly performant) Thunderbolt 3-based docks. You can check out our deeper dive into the specs here but tl;dr? If you want a TB dock without breaking the bank, just go buy it while it’s cheap!

    1.5TB SanDisk Ultra UHS-1 MicroSDXC Card: $89.29 (40% off) on Amazon

    My colleague Mike Crider summed it up perfectly in his coverage of this killer deal:

    “Normally $150, the SanDisk Ultra UHS-1 MicroSDXC Card is on sale for Amazon Prime Day with a sweet $60 discount. That brings it down to $89.29, which is barely $4 more than the 1TB variant (which is currently going for $85). I could give you sarcastic figures about how many MP3s it would hold, but here’s a more relevant example: It can hold a full PC installation of Call of Duty: Modern Warfare III six times over. Good lord.”

    Edifier R1280T bookshelf speakers: $83.99 (30% off) on Amazon

    Mike once again summed things up perfectly in this roundup of 6 killer Prime Day deals on PC gear I own, use, and love:

    “These babies have been sitting on my desk for years. There’s a reason this affordable, powerful set of speakers is so popular: they sound great and they’ll rattle your bones in a small office. There are fancier speakers out there, but you won’t find anything with this combination of quality and value. If you’re still using the teeny-tiny speakers in your monitor, for the love of Pete just get these already.”

    Good flash drives for cheap!

    It’s a great time to stock up on fast, spacious flash drives too, because everyone can use more storage. This compact 128GB Samsung Fit Plus flash drive is just $16.99 on Amazon, a huge drop from its $45 MSRP. If you need even more portable storage performance, SanDisk’s 512GB Ultra Fit flash drive is on sale for $35, down from its usual $50. Not only does the SanDisk offer much more capacity, it offers password protection with 128-bit AES encryption for the security-conscious.

    Iniu 10,000mAh portable charger: $19.99 (33% off) on Amazon

    A mobile power bank is always a good thing to have in your backpack – especially when this one is thin, light, delivers enough juice to fully charge most phones, and on sale for just under $20.

    If you’re looking for a larger power bank with enough juice to rejuvenate a laptop, check out Anker’s luxurious 24,000mAh 3-port laptop charger instead. It’s just $89.99 during Prime Day, down from its usual chest-clutchingly expensive $150.

    TP-Link AC1200 WiFi Extender: $22.98 (54% off) on Amazon

    Want to eliminate pesky dead spots in the furthest reaches of your home, but don’t want to blow hundreds on a full mesh Wi-Fi system? Give this wireless extender a try for around $20 instead and see if it helps!

    More Prime Day deals you’ll love

    That’s it for my recommendations, but there are plenty of other long-tenured tech geeks on the PCWorld team. We’re all ready for Prime Day and our experts have rounded up some of the best tech that’s now available for compelling prices:

  • Get this 27-inch 1440p Dell monitor with USB-C video for $180

    Whoa, doggy! This Dell monitor is just bursting with extra features. You get a 27-inch panel, a boosted 1440p resolution, and best of all, you get USB-C — one cable for video and 65 watts of power for a laptop — plus a couple of extra USB-A ports.

    The one thing you don’t get is a high price because right now Amazon is selling it for $179.99, which is $80 off for Prime Day.

    With that USB-C video port, extra ports for accessories, and multi-device support via HDMI and a headphone jack, the Dell S2722DC is a great screen if you want an inexpensive home office upgrade. It also makes a pretty good add-on display thanks to the standard VESA mounting point for a monitor arm.

    I can’t find any specific mention of the LCD panel type on Amazon’s listing or Dell’s spec page, so I’m assuming that means it’s VA, not IPS. In that case, I wouldn’t recommend it if you need to work with color-accurate photos or video. And at 75Hz, it’s not going to blow away any gamers, either. But for standard web surfing, Excel processing, and the occasional YouTube binge on a much comfier display, it’s great.

    Though the sale is happening during Prime Day, it appears to be open to Amazon shoppers who aren’t paying for Prime. That’s the good news. The bad news is that it won’t stick around for long — I would assume that this price is going away tomorrow — so get it quick!

    Get this Dell 27-inch USB-C monitor for just $180
  • Grab SanDisk’s super-fast 512GB flash drive for just $35 this Prime Day

    Prime Day is great when you’re on the hunt for the best deals on flash drives — and wouldn’t you know it, right now you can get a solid 512GB of portable storage with the SanDisk Ultra Fit USB flash drive that’s on sale for $35, a 30 percent discount off its regular $50.

    This is a compact plug-and-play flash drive that works seamlessly with laptops, gaming consoles, and even your car’s audio system. And it really is super tiny — about the size of a Bluetooth receiver dongle.

    But the best thing about this 512GB thumb drive (besides its fantastic sale price) is its speed. It delivers ultra-fast transfer speeds of up to 400MB/s, which is ideal if you want to move large files between devices. It only takes about half a minute to transfer an entire movie!
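
    (Back-of-the-envelope math: at the drive’s rated 400MB/s, a movie file in the neighborhood of 12GB works out to 12,000MB ÷ 400MB/s ≈ 30 seconds, assuming the drive sustains that speed for the whole transfer.)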

    Plus, it comes with password protection using 128-bit AES encryption, giving you peace of mind that your sensitive files, confidential documents, and personal data are secure.

    Prime Day is almost over so don’t miss this chance to get 512GB of flash drive storage for just $35 on Amazon!

    Save 30% on this tiny, fast flash drive for Prime Day
  • Play Foldit, the game that helped its designers win a Nobel Prize

    You may know that biochemist David Baker shares the 2024 Nobel Prize in chemistry with two researchers from Google’s DeepMind. But a video game helped get them there, and you can still play — and contribute to science — even today.

    Baker and the University of Washington’s Institute for Protein Design led the team to create fold.it, an online puzzle game in which players actually design synthetic proteins. Originally, the game was coded to allow players to help determine the structure of existing proteins, beginning in 2008. In 2019, the project expanded to allow gamers to actually design new proteins that had never before existed.

    The game essentially puts the tools in place for gamers to come up with novel proteins, and when “solved,” allows scientists to check them.

    “The scientists tested 146 proteins designed by Foldit players in the laboratory. 56 were found to be stable,” the university said then. “This finding suggested the gamers had produced some realistic proteins. The researchers collected enough data on four of these new molecules to show that the designs adopted their intended structures.”

    The game (and the work) continues today. Users can take “chains” of amino acids and fold them up into their proper shape. This shape allows the protein to carry out its assigned function. Examples of proteins include insulin and hemoglobin. Foldit has expanded to take on small molecules that aren’t proteins, such as aspirin.

    “If a protein researcher is struggling with a particular problem, they will create a Foldit puzzle for their problem,” the Foldit page says. “By playing Foldit puzzles, you help to solve protein research problems.”

    At press time, current problems included platypus venom and KCNQ1 VSD, which helps regulate the heart’s rhythm. Give Foldit a shot today!

  • Best Thunderbolt dock, USB-C hub deals for October Prime Day 2024

    Amazon’s Prime Big Deal Days are a great opportunity to find the best deals on Thunderbolt docking stations and USB-C hubs, which are simply the best accessories to connect your laptop to legacy hardware, displays, and other computing peripherals. They’re on now!

    Amazon’s fall version of Prime Day begins today, October 8. I’ve listed the very best deals on Thunderbolt docks from manufacturers like Anker and Belkin, as well as USB-C hubs and dongles from a number of suppliers.

    My deal recommendations factor in top picks from PCWorld’s roundups of best Thunderbolt docks and the best USB-C hubs. I’ve worked as a technology reporter for 30 years and have reviewed dozens of Thunderbolt docks and hubs since 2020, which is when the WFH (work from home) movement started generating all sorts of demand for these products.

    Below you’ll find my curated list of the best Prime Big Deal Days bargains on Thunderbolt docks and USB-C hubs from Amazon. (I check other retailers, too, but Amazon consistently has the best deals on docking stations.) I’ll continue adding to it through October 9, when Prime Big Deal Days ends. Be sure to check out our Prime Big Deal Days hub, where we’ll have deals in other categories as well as specific stories on the hottest deals, as we find them.

    Last updated on Oct. 9, 2024 with current pricing as of 9:41 AM.

    Best Prime Big Deal Days deals on Thunderbolt docks

    I just can’t quit the Belkin Dock Core, and apparently Belkin can’t either. (It always crops up on our deals pages!) Mac people hate it, but it’s a compact little Thunderbolt dock that delivers a ton for the money. Here’s my review of the Belkin Thunderbolt 3 Dock Core, which I awarded an Editor’s Choice. It does require you to supply a power brick, however.

    Kensington’s SD2480T (the “old version” is listed, but it seems to be the same as the “new” version) is just an older dock on sale for a great price. This Amazon sale may be the last gasp of Thunderbolt 3 hardware. The dock does run hot, as many note, but that shouldn’t affect its performance. Be aware that you’ll need displays with DisplayPort inputs, or you’ll need to buy a cheap DisplayPort-to-HDMI adapter for $20 or so.

    Ditto for the Anker 577. The Anker 777 12-in-1 dock I reviewed a while back wasn’t the fastest. Like the Anker 577, it uses an extra Thunderbolt port that doubles as a display output, forcing you to buy a dongle. But it’s on a good sale.

    The Kensington K37010NA is a lovely design — the floating aesthetic is unique. I haven’t reviewed this dock, but I really like Kensington hardware. Again: There’s one useful HDMI port, but also one upstream Thunderbolt port. That’s not a huge deal. In this case, I buy an extra uni USB-C to HDMI cable for about $16. The 100W charging should take care of most laptops, though perhaps not ones with discrete GPUs. Note the thoughtful 9v/2.22A (20W) USB-C charging on the front that should fast-charge a smartphone.

    Some of the other docks shave a bit here and there. The two Lenovos are what I’d call “standard” Thunderbolt docks. Remember that Thunderbolt 3 and 4 are basically identical, so I might lean toward the slightly cheaper TB3 dock for that reason. Microsoft’s Surface Thunderbolt 4 dock doesn’t have anything particularly to recommend it, but it’s a decent deal. For whatever reason, Amazon consistently offers the best deals on docking stations. But you occasionally find deals elsewhere, like this one.

    Best Prime Big Deal Days deals on USB-C hubs, dongles, and docking stations

    Anker has solid discounts across the board for Prime Big Deal Days. The Anker 675 is a genuinely cool product at a solid discount. It’s actually a monitor stand that doubles as a USB-C dock, with a wireless charging pad built in. I doubt you’ve seen anything like it!

    If you need a practical USB-C hub (a port for your mouse, one for your keyboard or printer, an HDMI display port, and a USB-C port for a phone), the Anker 332 is it. If you want something a bit more full-featured, opt for the 553. It offers two HDMI ports at a fantastic price, but note that a typical laptop will output 30Hz across a pair of 4K displays.

    Right now, the other deals aren’t as impressive. The Targus dock’s $249.99 MSRP doesn’t feel that realistic, but the sale price is still a solid value. Note that the 65W power delivery might be a little low for high-end notebooks, including gaming notebooks. Lenovo’s docking station feels like a good deal as well, with two dedicated display ports but what looks like a proprietary power plug.

    Otherwise, I’m really not impressed by many of the USB-C hub/dongle/docking-station deals right now, but the Acer deal is a basic deal on basic products. Plugable’s 13-in-1 docking station is designed for PC users who don’t care too much about the latest and greatest: You’ll only get 4K30 output (as opposed to 4K60, or 60Hz) on your main display, and up to 1920×1200 on your extra displays.

    Thunderbolt dock deals FAQ


    1.

    What should I look for when buying a Thunderbolt dock or USB-C hub?

    While Thunderbolt docks and USB-C hubs are often seen as distinct product categories, they have similarities. Both utilize a USB-C connection from your laptop. However, the distinction lies in how some laptops employ this port: some as a standard USB-C port, while others channel the high-speed Thunderbolt 3 or 4 protocol via the USB-C connector. The standard port typically supports up to 10Gbps of data transfer, which is adequate for USB drives, external storage devices, and possibly an external monitor.

    Thunderbolt (either Thunderbolt 3 or Thunderbolt 4) allows for 40Gbps of throughput, designed for high-speed external SSDs and multiple displays. Our roundups of the best USB-C hubs and the best Thunderbolt docks explain further in much greater detail. Thunderbolt 3 and Thunderbolt 4 are close enough that you can save money by buying the older technology that retailers are trying to get rid of. That’s important! Thunderbolt 5, which will deliver 80Gbps, just hasn’t appeared, and that’s a little disappointing.

    If you want to connect high-speed peripherals (or just a ton of them), and especially more than one display, a Thunderbolt dock might be the best bet. Otherwise, a USB-C hub might work just fine. A Thunderbolt dock is definitely the more future-proofed solution, though.

    USB-C hubs and dongles are relatively cheap, rarely climbing over $60. Thunderbolt docks can cost anywhere from $100 to $300, depending on what features the dock offers.

    Usually, the best Prime Big Deal Days deals on USB-C hubs and dongles are on the more expensive docking stations, not the $20-$50 hubs. Thunderbolt dock deals usually feature older Thunderbolt 3 hardware which is functionally equivalent to the latest gear. It’s like buying a car with 1,000 miles on it for 25% off. That’s not a perfect example, of course, but you get the idea.

    2.

    I have a USB-C port on my laptop. How do I know what to use with it?

    Refer to your laptop’s manual to identify the Thunderbolt port, which may be marked with a small lightning bolt icon. However, this symbol might also indicate a charging port. If in doubt, a USB-C dongle or hub is universally compatible with USB-C ports.

    3.

    I don’t understand the difference between the USB-C and Thunderbolt interfaces. How does it all work?

    USB ports have a long history. USB-C, known for its versatility, replaced USB-A (the thick square port) due to its reversible connector and capability for higher transfer speeds. USB-C ports can support 5Gbps or 10Gbps, similar to standard USB-A ports. However, some USB-C ports are linked to a Thunderbolt chip within your laptop, enabling them to operate at an elevated speed of 40Gbps. While the physical appearance of the connector remains the same, its functionality is what sets it apart.

    4.

    What’s the difference between a USB-C hub and a Thunderbolt dock?

    A 10Gbps USB-C hub offers speed and versatility, connecting to a single 4K (or 1080p) display and offering various ports such as USB-A and SD card slots. Typically, you can connect your laptop’s USB-C power cable directly to the hub if needed.

    A 40Gbps Thunderbolt dock, on the other hand, provides greater bandwidth to support additional ports. It stands out in two main ways: it can handle two 4K displays simultaneously, and many docks include a power supply that can charge both your laptop and phone through the Thunderbolt cable that links your laptop to the dock. We still haven’t seen 80Gbps Thunderbolt 5 hardware yet.
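
    For a rough sense of those numbers: the raw pixel data for a single 4K display at 60Hz and 8 bits per channel comes to about 3840 × 2160 × 60 × 24 bits, or roughly 12Gbps before any protocol overhead, so two such displays already swamp a 10Gbps USB-C link but fit comfortably within Thunderbolt’s 40Gbps.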

    5.

    My laptop has USB4, not Thunderbolt. Can I use a Thunderbolt dock?

    If your laptop runs USB4, it won’t “understand” Thunderbolt 3 protocols, I’m told. But otherwise, USB4 and Thunderbolt 4 are functionally the same. Intel refuses to certify non-Intel platforms like AMD’s Ryzen for Thunderbolt, and the new Copilot+ PCs from Microsoft powered by Qualcomm Snapdragon X Elite chips are in the same boat.

    USB-C hubs work with basically anything with a USB-C port on it. Don’t worry about those at all.

    6.

    Is Thunderbolt 4 better than Thunderbolt 3?

    Physically, they use the same USB-C cable. (Well, they have a different logo — one has a “3,” and the other a “4.”) Functionally, they’re almost the same. Thunderbolt 4 was released almost as a patch to Thunderbolt 3, ensuring that everything worked properly. They both run at 40Gbps and connect to the same peripherals. If you own a laptop equipped with Thunderbolt, you can connect to both and basically your experience will be the same.

    The kicker? Thunderbolt 3 hardware is older, and retailers want you to buy the latest gear. So as far as deals go, buying Thunderbolt 3 hardware is a real steal.

    7.

    Do I need a Thunderbolt dock if I own a desktop PC?

    Usually desktops come chock full of ports, even legacy ones like USB-A. What they don’t always have is a microSD and SD card slot, and a USB-C dock might be a good and cheap way to add this functionality.

    Intel has historically struggled to get Thunderbolt into desktop PCs, though, so USB-C may in fact be your only option. There’s really no guarantee that a desktop will have a Thunderbolt port.

    8.

    Some of these docks have had bad reviews on shopping sites. Why?

    Mac users, am I right? While Macs adopted Thunderbolt first, some of the Apple M1 silicon couldn’t keep up with Intel Thunderbolt controllers used by Windows PCs, and the Apple MacOS experience suffered as a result.

    If a user complains about a bad Windows experience, sure, that’s worth paying attention to. But a Mac user? Bah. They bought the wrong platform.

CNN.com - RSS Channel - App Tech Section
CNN.com delivers up-to-the-minute news and information on the latest top stories, weather, entertainment, politics and more.
  • Airlines, banks, casinos, package deliveries, and emergency services around the world are recovering today from what could be "the largest tech outage in history." The root cause was not a foreign agent but was traced back to a software update issued by a U.S.-based cybersecurity firm called CrowdStrike. Could this have been avoided?
  • In London, a mobile phone is stolen every 6 minutes. "If I steal your phone, I'm stealing a thousand dollars," says digital identity expert David Birch. But "If I can get into your bank account, I can steal $100,000. So that's what they really want." So there are important steps to take immediately - including turning off message preview.
  • "Human beings had a play-based childhood from time immemorial," says author Jonathan Haidt. What caused teen mental health decline is "between 2010 and 2015, phones, screens come sweeping in...The most important thing that parents can do is delay the age at which their child gets immersed in internet culture."
  • Grammy award-winning artist Ne-Yo joins CNN's Laura Coates to discuss the impact of artificial intelligence on the music industry.
  • Fareed hosts a spirited debate on the House bill that could lead to a US ban on TikTok, with the American Enterprise Institute's Kori Schake and Glen Gerstell, former general counsel for the National Security Agency. They discuss national-security risks the Chinese-owned app might pose given its many American users.
 
