
News from the world of web design and SEO

Here are syndicated news items from several of the leading sites in the field of web design and SEO (search engine optimization).

A List Apart: The Full Feed
Articles for people who make web sites.
  • To Ignite a Personalization Practice, Run this Prepersonalization Workshop

    Picture this. You’ve joined a squad at your company that’s designing new product features with an emphasis on automation or AI. Or your company has just implemented a personalization engine. Either way, you’re designing with data. Now what? When it comes to designing for personalization, there are many cautionary tales, no overnight successes, and few guides for the perplexed. 

    Between the fantasy of getting it right and the fear of it going wrong—like when we encounter “persofails” in the vein of a company repeatedly imploring everyday consumers to buy additional toilet seats—the personalization gap is real. It’s an especially confounding place to be a digital professional without a map, a compass, or a plan.

    For those of you venturing into personalization, there’s no Lonely Planet and few tour guides because effective personalization is so specific to each organization’s talent, technology, and market position. 

    But you can ensure that your team has packed its bags sensibly.

    A sign at a mountain scene says “People who liked this also liked,” which is followed by photographs of other scenic landscapes. Satirical art installation by Scott Kelly and Ben Polkinghorne.
    Designing for personalization makes for strange bedfellows. A savvy art-installation satire on the challenges of humane design in the era of the algorithm. Credit: Signs of the Times, Scott Kelly and Ben Polkinghorne.

    There’s a DIY formula to increase your chances for success. At minimum, you’ll defuse your boss’s irrational exuberance. Before the party starts, you’ll need to prepare effectively.

    We call it prepersonalization.

    Behind the music

    Consider Spotify’s DJ feature, which debuted this past year.


    We’re used to seeing the polished final result of a personalization feature. Before the year-end award, the making-of backstory, or the behind-the-scenes victory lap, a personalized feature had to be conceived, budgeted, and prioritized. Before any personalization feature goes live in your product or service, it lives amid a backlog of worthy ideas for expressing customer experiences more dynamically.

    So how do you know where to place your personalization bets? How do you design consistent interactions that won’t trip up users or—worse—breed mistrust? We’ve found that for many budgeted programs to justify their ongoing investments, they first needed one or more workshops to convene key stakeholders and internal customers of the technology. Make yours count.

    From Big Tech to fledgling startups, we’ve seen the same evolution up close with our clients. In our experience working on small and large personalization efforts, a program’s ultimate track record—and its ability to weather tough questions, work steadily toward shared answers, and organize its design and technology efforts—turns on how effectively these prepersonalization activities play out.

    Time and again, we’ve seen effective workshops separate future success stories from unsuccessful efforts, saving countless hours and resources, and preserving collective well-being in the process.

    A personalization practice involves a multiyear effort of testing and feature development. It’s not a switch-flip moment in your tech stack. It’s best managed as a backlog that often evolves through three steps: 

    1. customer experience optimization (CXO, also known as A/B testing or experimentation)
    2. always-on automations (whether rules-based or machine-generated)
    3. mature features or standalone product development (such as Spotify’s DJ experience)

    This is why we created our progressive personalization framework and why we’re field-testing an accompanying deck of cards: we believe that there’s a base grammar, a set of “nouns and verbs” that your organization can use to design experiences that are customized, personalized, or automated. You won’t need these cards. But we strongly recommend that you create something similar, whether digital or physical.

    Set your kitchen timer

    How long does it take to cook up a prepersonalization workshop? The surrounding assessment activities that we recommend including can (and often do) span weeks. For the core workshop, we recommend aiming for two to three days. Here’s a summary of our broader approach along with details on the essential first-day activities.

    The full arc of the wider workshop is threefold:

    1. Kickstart: This sets the terms of engagement as you focus on the opportunity as well as the readiness and drive of your team and your leadership.
    2. Plan your work: This is the heart of the card-based workshop activities where you specify a plan of attack and the scope of work.
    3. Work your plan: This phase is all about creating a competitive environment for team participants to individually pitch their own pilots that each contain a proof-of-concept project, its business case, and its operating model.

    Give yourself at least a day, split into two large time blocks, to power through a concentrated version of those first two phases.

    Kickstart: Whet your appetite

    We call the first lesson the “landscape of connected experience.” It explores the personalization possibilities in your organization. A connected experience, in our parlance, is any UX requiring the orchestration of multiple systems of record on the backend. This could be a content-management system combined with a marketing-automation platform. It could be a digital-asset manager combined with a customer-data platform.

    Spark conversation by naming consumer examples and business-to-business examples of connected experience interactions that you admire, find familiar, or even dislike. This should cover a representative range of personalization patterns, including automated app-based interactions (such as onboarding sequences or wizards), notifications, and recommenders. We have a catalog of these in the cards. Here’s a list of 142 different interactions to jog your thinking.

    This is all about setting the table. What are the possible paths for the practice in your organization? If you want a broader view, here’s a long-form primer and a strategic framework.

    Assess each example that you discuss for its complexity and the level of effort that you estimate that it would take for your team to deliver that feature (or something similar). In our cards, we divide connected experiences into five levels: functions, features, experiences, complete products, and portfolios. Size your own build here. This will help to focus the conversation on the merits of ongoing investment as well as the gap between what you deliver today and what you want to deliver in the future.

    Next, have your team plot each idea on the following 2×2 grid, which lays out the four enduring arguments for a personalized experience. This is critical because it emphasizes how personalization can not only help your external customers but also affect your own ways of working. It’s also a reminder (which is why we used the word argument earlier) of the broader effort beyond these tactical interventions.

    A two-by-two grid shows the four areas of emphasis for a personalization program in an organization: Business efficiency, customer experience, business orchestration, and customer understanding. The focus varies from front-stage to back-stage and from business-focused to customer-focused outcomes.
    Getting intentional about the desired outcomes is an important component to a large-scale personalization program. Credit: Bucket Studio.

    Each team member should vote on where they see your product or service putting its emphasis. Naturally, you can’t prioritize all of them. The intention here is to flesh out how different departments may view their own upsides to the effort, which can vary from one to the next. Documenting your desired outcomes lets you know how the team internally aligns across representatives from different departments or functional areas.

    The third and final kickstart activity is about naming your personalization gap. Is your customer journey well documented? Will data and privacy compliance be too big of a challenge? Do you have content metadata needs that you have to address? (We’re pretty sure that you do: it’s just a matter of recognizing the relative size of that need and its remedy.) In our cards, we’ve noted a number of program risks, including common team dispositions. Our Detractor card, for example, lists six stakeholder behaviors that hinder progress.

    Effectively collaborating and managing expectations is critical to your success. Consider the potential barriers to your future progress. Press the participants to name specific steps to overcome or mitigate those barriers in your organization. As studies have shown, personalization efforts face many common barriers.

    Barriers to personalization according to a Boston Consulting Group 2016 research study. The top items include “too few personnel dedicated to personalization,” “lack of a clear roadmap,” and “inadequate cross-functional coordination and project management.”
    The largest management consultancies have established practice areas in personalization, and they regularly research program risks and challenges. Credit: Boston Consulting Group.

    At this point, you’ve hopefully discussed sample interactions, emphasized a key area of benefit, and flagged key gaps. Good—you’re ready to continue.

    Hit that test kitchen

    Next, let’s look at what you’ll need to bring your personalization recipes to life. Personalization engines, which are robust software suites for automating and expressing dynamic content, can intimidate new customers. Their capabilities are sweeping and powerful, and they present broad options for how your organization can conduct its activities. This presents the question: Where do you begin when you’re configuring a connected experience?

    What’s important here is to avoid treating the installed software as if it were a dream kitchen from some fantasy remodeling project (as one of our client executives memorably put it). These software engines are more like test kitchens where your team can begin devising, tasting, and refining the snacks and meals that will become a part of your personalization program’s regularly evolving menu.

    The Progressive Personalization Model v2: A pyramid with the following layers, starting at the base and working up: Raw Data (millions), Actionable Data (hundreds of thousands), Segments (thousands), Customer Experience Patterns (many), Interactions (dozens), and Goals (handful).
    Progressive personalization, a framework for designing connected experiences. Credit: Bucket Studio and Colin Eagan.

    The ultimate menu of the prioritized backlog will come together over the course of the workshop. And creating “dishes” is the way that you’ll have individual team stakeholders construct personalized interactions that serve their needs or the needs of others.

    The dishes will come from recipes, and those recipes have set ingredients.

    A photo of the Progressive Personalization deck of cards with accompanying text reading: Align on key terms and tactics. Draft and groom a full backlog, designing with data.
    A zoomed out view of many of the cards in the deck. Cards have colors corresponding to the layers of the personalization pyramid and include actionable details.
    Progressive personalization is a model of designing for personalized interactions that uses playing cards to assemble the typical parts for such features and functionality.
    In the same way that ingredients form a recipe, you can also create cards to break down a personalized interaction into its constituent parts. Credit: Bucket Studio and Colin Eagan.

    Verify your ingredients

    Like a good product manager, you’ll make sure—and you’ll validate with the right stakeholders present—that you have all the ingredients on hand to cook up your desired interaction (or that you can work out what needs to be added to your pantry). These ingredients include the audience that you’re targeting, content and design elements, the context for the interaction, and your measure for how it’ll come together.

    This isn’t just about discovering requirements. Documenting your personalizations as a series of if-then statements lets the team: 

    1. compare findings toward a unified approach for developing features, not unlike when artists paint with the same palette; 
    2. specify a consistent set of interactions that users find uniform or familiar; 
    3. and develop parity across performance measurements and key performance indicators too. 

    This helps you streamline your designs and your technical efforts while you deliver a shared palette of core motifs of your personalized or automated experience.
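    As a rough illustration, those if-then statements can live as plain data that designers and engineers read from together. This is a minimal sketch, not any particular personalization engine’s API; the rule names, user fields, and component labels below are all invented for the example.

    ```javascript
    // Each personalization documented as an if-then rule:
    // a predicate over the user context, and the interaction to show.
    const rules = [
      {
        name: "returning-visitor-banner",
        if: (user) => user.visits > 1 && !user.isSubscriber,
        then: { component: "banner", content: "related-titles" },
      },
      {
        name: "new-user-welcome-email",
        if: (user) => user.daysSinceSignup <= 1,
        then: { component: "email", content: "catalog-overview" },
      },
    ];

    // Evaluate every rule against a user context and collect the
    // interactions that should fire.
    function evaluate(rules, user) {
      return rules.filter((rule) => rule.if(user)).map((rule) => rule.then);
    }
    ```

    Keeping the rules as one shared list is what makes the comparisons above possible: the team can review the whole palette of interactions at once instead of rediscovering them feature by feature.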

    Compose your recipe

    What ingredients are important to you? Think of a who-what-when-why construct:

    • Who are your key audience segments or groups?
    • What kind of content will you give them, in what design elements, and under what circumstances?
    • And for which business and user benefits?

    We first developed these cards and card categories five years ago. We regularly play-test their fit with conference audiences and clients. And we still encounter new possibilities. But they all follow an underlying who-what-when-why logic.

    Here are three examples for a subscription-based reading app, which you can generally follow along with right to left in the cards in the accompanying photo below. 

    1. Nurture personalization: When a guest or an unknown visitor interacts with a product title, a banner or alert bar appears that makes it easier for them to encounter a related title they may want to read, saving them time.
    2. Welcome automation: When there’s a newly registered user, an email is generated to call out the breadth of the content catalog and to make them a happier subscriber.
    3. Winback automation: Before their subscription lapses or after a recent failed renewal, a user is sent an email that gives them a promotional offer to suggest that they reconsider renewing or to remind them to renew.
    A selection of prompt cards assembled to represent the key parameters of a “nurture” user flow.
    A “nurture” automation may trigger a banner or alert box that promotes content that makes it easier for users to complete a common task, based on behavioral profiling of two user types. Credit: Bucket Studio.
    A selection of prompt cards assembled to represent the key parameters of a “welcome”, or onboarding, user flow.
    A “welcome” automation may be triggered for any newly registered user, sending an email that helps familiarize them with the breadth of a content library; ideally, the email nudges them toward selecting various titles (no matter how much time they devote to reviewing the email’s content itself). Credit: Bucket Studio.
    A selection of prompt cards assembled to represent the key parameters of a “winback”, or customer-churn risk, user flow.
    A “winback” automation may be triggered for a specific group, such as users with recently failed credit-card transactions or users at risk of churning out of active usage, presenting them with a specific offer to mitigate near-future inactivity. Credit: Bucket Studio.

    A useful preworkshop activity may be to think through a first draft of what these cards might be for your organization, although we’ve also found that this process sometimes flows best through cocreating the recipes themselves. Start with a set of blank cards, and begin labeling and grouping them through the design process, eventually distilling them to a refined subset of highly useful candidate cards.
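    To make the card exercise concrete, the three example recipes above (nurture, welcome, winback) could be captured as who-what-when-why records. The field names here are assumptions for illustration, not a standard schema from the card deck.

    ```javascript
    // The three subscription-app recipes, broken into their ingredients.
    const recipes = [
      {
        name: "nurture",
        who: "guest or unknown visitor",
        what: { element: "banner", content: "related title" },
        when: "interacting with a product title",
        why: "save the user time",
      },
      {
        name: "welcome",
        who: "newly registered user",
        what: { element: "email", content: "catalog breadth" },
        when: "shortly after registration",
        why: "make them a happier subscriber",
      },
      {
        name: "winback",
        who: "user with a lapsing subscription or failed renewal",
        what: { element: "email", content: "promotional offer" },
        when: "before lapse or after a failed renewal",
        why: "encourage renewal",
      },
    ];

    // A recipe is only complete when all four ingredients are present.
    const isComplete = (r) => ["who", "what", "when", "why"].every((k) => k in r);
    ```

    A simple completeness check like this mirrors the workshop activity itself: a pitch with a missing ingredient (no audience, no trigger, no measurable benefit) isn’t ready to leave the test kitchen.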

    You can think of the later stages of the workshop as shifting in focus from recipes toward a cookbook—like a more nuanced customer-journey mapping. Individual “cooks” will pitch their recipes to the team, using a common jobs-to-be-done format so that measurability and results are baked in, and from there, the resulting collection will be prioritized for finished design and delivery to production.

    Better kitchens require better architecture

    Simplifying a customer experience is a complicated effort for those who are inside delivering it. Beware anyone who says otherwise. That said, “Complicated problems can be hard to solve, but they are addressable with rules and recipes.”

    When personalization becomes a laugh line, it’s because a team is overfitting: they aren’t designing with their best data. Like a sparse pantry, every organization has metadata debt to go along with its technical debt, and this creates a drag on personalization effectiveness. Your AI’s output quality, for example, is indeed limited by your IA. Spotify’s poster-child prowess today was unfathomable before they acquired a seemingly modest metadata startup that now powers its underlying information architecture.

    You can definitely stand the heat…

    Personalization technology opens a doorway into a confounding ocean of possible designs. Only a disciplined and highly collaborative approach will bring about the necessary focus and intention to succeed. So banish the dream kitchen. Instead, hit the test kitchen to save time, preserve job satisfaction and security, and safely dispense with the fanciful ideas that originate upstairs of the doers in your organization. There are meals to serve and mouths to feed.

    This workshop framework gives you a fighting shot at lasting success as well as sound beginnings. Wiring up your information layer isn’t an overnight affair. But if you use the same cookbook and shared recipes, you’ll have solid footing for success. We designed these activities to make your organization’s needs concrete and clear, long before the hazards pile up.

    While there are costs associated with investing in this kind of technology and product design, your ability to size up and confront your unique situation and your digital capabilities is time well spent. Don’t squander it. The proof, as they say, is in the pudding.

  • The Wax and the Wane of the Web

    I offer a single bit of advice to friends and family when they become new parents: When you start to think that you’ve got everything figured out, everything will change. Just as you start to get the hang of feedings, diapers, and regular naps, it’s time for solid food, potty training, and overnight sleeping. When you figure those out, it’s time for preschool and rare naps. The cycle goes on and on.

    The same applies for those of us working in design and development these days. Having worked on the web for almost three decades at this point, I’ve seen the regular wax and wane of ideas, techniques, and technologies. Each time that we as developers and designers get into a regular rhythm, some new idea or technology comes along to shake things up and remake our world.

    How we got here

    I built my first website in the mid-’90s. Design and development on the web back then was a free-for-all, with few established norms. For any layout aside from a single column, we used table elements, often with empty cells containing a single pixel spacer GIF to add empty space. We styled text with numerous font tags, nesting the tags every time we wanted to vary the font style. And we had only three or four typefaces to choose from: Arial, Courier, or Times New Roman. When Verdana and Georgia came out in 1996, we rejoiced because our options had nearly doubled. The only safe colors to choose from were the 216 “web safe” colors known to work across platforms. The few interactive elements (like contact forms, guest books, and counters) were mostly powered by CGI scripts (predominantly written in Perl at the time). Achieving any kind of unique look involved a pile of hacks all the way down. Interaction was often limited to specific pages in a site.

    The birth of web standards

    At the turn of the century, a new cycle started. Crufty code littered with table layouts and font tags waned, and a push for web standards waxed. Newer technologies like CSS got more widespread adoption by browser makers, developers, and designers. This shift toward standards didn’t happen accidentally or overnight. It took active engagement between the W3C and browser vendors and heavy evangelism from folks like the Web Standards Project to build standards. A List Apart and books like Designing with Web Standards by Jeffrey Zeldman played key roles in teaching developers and designers why standards are important, how to implement them, and how to sell them to their organizations. And approaches like progressive enhancement introduced the idea that content should be available for all browsers—with additional enhancements available for more advanced browsers. Meanwhile, sites like the CSS Zen Garden showcased just how powerful and versatile CSS can be when combined with a solid semantic HTML structure.

    Server-side languages like PHP, Java, and .NET overtook Perl as the predominant back-end processors, and the cgi-bin was tossed in the trash bin. With these better server-side tools came the first era of web applications, starting with content-management systems (particularly in the blogging space with tools like Blogger, Grey Matter, Movable Type, and WordPress). In the mid-2000s, AJAX opened doors for asynchronous interaction between the front end and back end. Suddenly, pages could update their content without needing to reload. A crop of JavaScript frameworks like Prototype, YUI, and jQuery arose to help developers build more reliable client-side interaction across browsers that had wildly varying levels of standards support. Techniques like image replacement let crafty designers and developers display fonts of their choosing. And technologies like Flash made it possible to add animations, games, and even more interactivity.
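    The asynchronous pattern that AJAX introduced boils down to a small loop: request fresh content, then swap it into the page without a full reload. Here is a sketch of that pattern in modern terms; the URL and the render callback are placeholders, and the fetch function is passed in so the same logic works in any environment.

    ```javascript
    // Request fresh markup and hand it to a render callback,
    // without reloading the page.
    async function refreshSection(url, fetchFn, render) {
      const response = await fetchFn(url);
      const html = await response.text();
      render(html); // in a browser: el.innerHTML = html
    }
    ```

    In a browser this might be called as `refreshSection("/latest.html", fetch, (html) => { document.getElementById("news").innerHTML = html; })`. Libraries of the era (Prototype, jQuery) wrapped the much clunkier `XMLHttpRequest` to smooth over browser differences in exactly this kind of flow.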

    These new technologies, standards, and techniques reinvigorated the industry in many ways. Web design flourished as designers and developers explored more diverse styles and layouts. But we still relied on tons of hacks. Early CSS was a huge improvement over table-based layouts when it came to basic layout and text styling, but its limitations at the time meant that designers and developers still relied heavily on images for complex shapes (such as rounded or angled corners) and tiled backgrounds for the appearance of full-length columns (among other hacks). Complicated layouts required all manner of nested floats or absolute positioning (or both). Flash and image replacement for custom fonts was a great start toward varying the typefaces from the big five, but both hacks introduced accessibility and performance problems. And JavaScript libraries made it easy for anyone to add a dash of interaction to pages, although at the cost of doubling or even quadrupling the download size of simple websites.

    The web as software platform

    The symbiosis between the front end and back end continued to improve, and that led to the current era of modern web applications. Between expanded server-side programming languages (which kept growing to include Ruby, Python, Go, and others) and newer front-end tools like React, Vue, and Angular, we could build fully capable software on the web. Alongside these tools came others, including collaborative version control, build automation, and shared package libraries. What was once primarily an environment for linked documents became a realm of infinite possibilities.

    At the same time, mobile devices became more capable, and they gave us internet access in our pockets. Mobile apps and responsive design opened up opportunities for new interactions anywhere and any time.

    This combination of capable mobile devices and powerful development tools contributed to the waxing of social media and other centralized tools for people to connect and consume. As it became easier and more common to connect with others directly on Twitter, Facebook, and even Slack, the desire for hosted personal sites waned. Social media offered connections on a global scale, with both the good and bad that that entails.

    Want a much more extensive history of how we got here, with some other takes on ways that we can improve? Jeremy Keith wrote “Of Time and the Web.” Or check out the “Web Design History Timeline” at the Web Design Museum. Neal Agarwal also has a fun tour through “Internet Artifacts.”

    Where we are now

    In the last couple of years, it’s felt like we’ve begun to reach another major inflection point. As social-media platforms fracture and wane, there’s been a growing interest in owning our own content again. There are many different ways to make a website, from the tried-and-true classic of hosting plain HTML files to static site generators to content management systems of all flavors. The fracturing of social media also comes with a cost: we lose crucial infrastructure for discovery and connection. Webmentions, RSS, ActivityPub, and other tools of the IndieWeb can help with this, but they’re still relatively underimplemented and hard to use for the less nerdy. We can build amazing personal websites and add to them regularly, but without discovery and connection, it can sometimes feel like we may as well be shouting into the void.

    Browser support for CSS, JavaScript, and other standards like web components has accelerated, especially through efforts like Interop. New technologies gain support across the board in a fraction of the time that they used to. I often learn about a new feature and check its browser support only to find that its coverage is already above 80 percent. Nowadays, the barrier to using newer techniques often isn’t browser support but simply the limits of how quickly designers and developers can learn what’s available and how to adopt it.

    Today, with a few commands and a couple of lines of code, we can prototype almost any idea. All the tools that we now have available make it easier than ever to start something new. But the upfront cost that these frameworks may save in initial delivery eventually comes due as upgrading and maintaining them becomes a part of our technical debt.

    If we rely on third-party frameworks, adopting new standards can sometimes take longer since we may have to wait for those frameworks to adopt those standards. These frameworks—which used to let us adopt new techniques sooner—have now become hindrances instead. These same frameworks often come with performance costs too, forcing users to wait for scripts to load before they can read or interact with pages. And when scripts fail (whether through poor code, network issues, or other environmental factors), there’s often no alternative, leaving users with blank or broken pages.

    Where do we go from here?

    Today’s hacks help to shape tomorrow’s standards. And there’s nothing inherently wrong with embracing hacks—for now—to move the present forward. Problems only arise when we’re unwilling to admit that they’re hacks or we hesitate to replace them. So what can we do to create the future we want for the web?

    Build for the long haul. Optimize for performance, for accessibility, and for the user. Weigh the costs of those developer-friendly tools. They may make your job a little easier today, but how do they affect everything else? What’s the cost to users? To future developers? To standards adoption? Sometimes the convenience may be worth it. Sometimes it’s just a hack that you’ve grown accustomed to. And sometimes it’s holding you back from even better options.

    Start from standards. Standards continue to evolve over time, but browsers have done a remarkably good job of continuing to support older standards. The same isn’t always true of third-party frameworks. Sites built with even the hackiest of HTML from the ’90s still work just fine today. The same can’t always be said of sites built with frameworks even after just a couple of years.

    Design with care. Whether your craft is code, pixels, or processes, consider the impacts of each decision. The convenience of many a modern tool comes at the cost of not always understanding the underlying decisions that have led to its design and not always considering the impact that those decisions can have. Rather than rushing headlong to “move fast and break things,” use the time saved by modern tools to consider more carefully and design with deliberation.

    Always be learning. If you’re always learning, you’re also growing. Sometimes it may be hard to pinpoint what’s worth learning and what’s just today’s hack. You might end up focusing on something that won’t matter next year, even if you were to focus solely on learning standards. (Remember XHTML?) But constant learning opens up new connections in your brain, and the hacks that you learn one day may help to inform different experiments another day.

    Play, experiment, and be weird! This web that we’ve built is the ultimate experiment. It’s the single largest human endeavor in history, and yet each of us can create our own pocket within it. Be courageous and try new things. Build a playground for ideas. Make goofy experiments in your own mad science lab. Start your own small business. There has never been a more empowering place to be creative, take risks, and explore what we’re capable of.

    Share and amplify. As you experiment, play, and learn, share what’s worked for you. Write on your own website, post on whichever social media site you prefer, or shout it from a TikTok. Write something for A List Apart! But take the time to amplify others too: find new voices, learn from them, and share what they’ve taught you.

    Go forth and make

    As designers and developers for the web (and beyond), we’re responsible for building the future every day, whether that may take the shape of personal websites, social media tools used by billions, or anything in between. Let’s imbue our values into the things that we create, and let’s make the web a better place for everyone. Create that thing that only you are uniquely qualified to make. Then share it, make it better, make it again, or make something new. Learn. Make. Share. Grow. Rinse and repeat. Every time you think that you’ve mastered the web, everything will change.

  • Opportunities for AI in Accessibility

    In reading Joe Dolson’s recent piece on the intersection of AI and accessibility, I absolutely appreciated the skepticism that he has for AI in general as well as for the ways that many have been using it. In fact, I’m very skeptical of AI myself, despite my role at Microsoft as an accessibility innovation strategist who helps run the AI for Accessibility grant program. As with any tool, AI can be used in very constructive, inclusive, and accessible ways; and it can also be used in destructive, exclusive, and harmful ones. And there are a ton of uses somewhere in the mediocre middle as well.

    I’d like you to consider this a “yes… and” piece to complement Joe’s post. I’m not trying to refute any of what he’s saying but rather provide some visibility to projects and opportunities where AI can make meaningful differences for people with disabilities. To be clear, I’m not saying that there aren’t real risks or pressing issues with AI that need to be addressed—there are, and we’ve needed to address them, like, yesterday—but I want to take a little time to talk about what’s possible in hopes that we’ll get there one day.

    Alternative text

    Joe’s piece spends a lot of time talking about computer-vision models generating alternative text. He highlights a ton of valid issues with the current state of things. And while computer-vision models continue to improve in the quality and richness of detail in their descriptions, their results aren’t great. As he rightly points out, the current state of image analysis is pretty poor—especially for certain image types—in large part because current AI systems examine images in isolation rather than within their surrounding context (a consequence of having separate “foundation” models for text analysis and image analysis). Today’s models also aren’t trained to distinguish between images that are contextually relevant (and should probably have descriptions) and those that are purely decorative (and might not need one). Still, I think there’s potential in this space.

    As Joe mentions, human-in-the-loop authoring of alt text should absolutely be a thing. And if AI can pop in to offer a starting point for alt text—even if that starting point might be a prompt saying What is this BS? That’s not right at all… Let me try to offer a starting point—I think that’s a win.

    Taking things a step further, if we can specifically train a model to analyze image usage in context, it could help us more quickly identify which images are likely to be decorative and which ones likely require a description. That will help reinforce which contexts call for image descriptions, and it’ll improve authors’ efficiency in making their pages more accessible.
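    To make that concrete, here’s a purely heuristic sketch—not a trained model—of the kinds of contextual signals such a classifier might weigh. Every signal name, weight, and threshold below is my own illustrative assumption, not anything drawn from a real system.

```python
# Heuristic sketch (not a trained model): score how likely an <img> is
# purely decorative from its markup context. Every signal and weight
# here is an illustrative assumption, not derived from real data.

def decorative_score(img: dict) -> float:
    """Return a 0..1 score; higher means more likely decorative."""
    score = 0.0
    if img.get("role") == "presentation" or img.get("alt") == "":
        score += 0.5  # the author already marked it as decorative
    src = (img.get("src") or "").lower()
    if any(hint in src for hint in ("spacer", "divider", "border", "bullet")):
        score += 0.2  # filename suggests page furniture
    width, height = img.get("width", 0), img.get("height", 0)
    if 0 < width <= 4 or 0 < height <= 4:
        score += 0.2  # tiny images are usually layout artifacts
    if img.get("in_main_content"):
        score -= 0.3  # images in the article body usually carry meaning
    return max(0.0, min(1.0, score))

spacer = {"src": "spacer.gif", "alt": "", "width": 1, "height": 1}
photo = {"src": "team-photo.jpg", "alt": None, "in_main_content": True}
print(decorative_score(spacer))  # high score: likely decorative
print(decorative_score(photo))   # low score: likely needs a description
```

    Even a crude scorer like this shows the shape of the problem: much of the useful evidence lives in the markup around the image, not in the pixels alone.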

    While complex images—like graphs and charts—are challenging to describe in any sort of succinct way (even for humans), the image example shared in the GPT-4 announcement points to an interesting opportunity as well. Let’s suppose that you came across a chart whose description was simply the title of the chart and the kind of visualization it was, such as: Pie chart comparing smartphone usage to feature phone usage among US households making under $30,000 a year. (That would be a pretty awful alt text for a chart since that would tend to leave many questions about the data unanswered, but then again, let’s suppose that that was the description that was in place.) If your browser knew that that image was a pie chart (because an onboard model concluded this), imagine a world where users could ask questions like these about the graphic:

    • Do more people use smartphones or feature phones?
    • How many more?
    • Is there a group of people that don’t fall into either of these buckets?
    • How many is that?

    Setting aside the realities of large language model (LLM) hallucinations—where a model just makes up plausible-sounding “facts”—for a moment, the opportunity to learn more about images and data in this way could be revolutionary for blind and low-vision folks as well as for people with various forms of color blindness, cognitive disabilities, and so on. It could also be useful in educational contexts to help people who can see these charts, as is, to understand the data in the charts.

    Taking things a step further: What if you could ask your browser to simplify a complex chart? What if you could ask it to isolate a single line on a line graph? What if you could ask your browser to transpose the colors of the different lines to work better for the form of color blindness you have? What if you could ask it to swap colors for patterns? Given these tools’ chat-based interfaces and our existing ability to manipulate images in today’s AI tools, that seems like a possibility.

    Now imagine a purpose-built model that could extract the information from that chart and convert it to another format. For example, perhaps it could turn that pie chart (or better yet, a series of pie charts) into more accessible (and useful) formats, like spreadsheets. That would be amazing!
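    As a tiny sketch of just that last conversion step, suppose an upstream model had already extracted a pie chart’s data into label–value pairs (the figures below are invented for illustration); turning that into a spreadsheet-friendly format is then straightforward:

```python
# Sketch of only the conversion step: assume an upstream model has
# already extracted a pie chart into label -> value pairs. The figures
# below are invented for illustration.
import csv
import io

def chart_to_csv(title: str, data: dict) -> str:
    """Render extracted chart data as CSV text any spreadsheet can open."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow([title])
    writer.writerow(["Category", "Value"])
    for label, value in data.items():
        writer.writerow([label, value])
    return buf.getvalue()

extracted = {"Smartphone": 58, "Feature phone": 33, "Neither": 9}
print(chart_to_csv("Phone usage, households under $30,000/yr", extracted))
```

    The hard part, of course, is the extraction itself—but once the data exists as structure rather than pixels, every downstream format becomes reachable.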

    Matching algorithms

    Safiya Umoja Noble absolutely hit the nail on the head when she titled her book Algorithms of Oppression. While her book was focused on the ways that search engines reinforce racism, I think that it’s equally true that all computer models have the potential to amplify conflict, bias, and intolerance. Whether it’s Twitter always showing you the latest tweet from a bored billionaire, YouTube sending us into a Q-hole, or Instagram warping our ideas of what natural bodies look like, we know that poorly authored and maintained algorithms are incredibly harmful. A lot of this stems from a lack of diversity among the people who shape and build them. When these platforms are built with inclusivity baked in, however, there’s real potential for algorithm development to help people with disabilities.

    Take Mentra, for example. They are an employment network for neurodivergent people. They use an algorithm to match job seekers with potential employers based on over 75 data points. On the job-seeker side of things, it considers each candidate’s strengths, their necessary and preferred workplace accommodations, environmental sensitivities, and so on. On the employer side, it considers each work environment, communication factors related to each job, and the like. As a company run by neurodivergent folks, Mentra made the decision to flip the script when it came to typical employment sites. They use their algorithm to propose available candidates to companies, who can then connect with job seekers that they are interested in, reducing the emotional and physical labor on the job-seeker side of things.
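    Mentra’s actual algorithm and its 75+ data points aren’t public, so here’s a hypothetical sketch of the general idea—scoring how well a role meets a candidate’s needs, with accommodations treated as hard requirements. Every factor name and weight below is invented for illustration.

```python
# Hypothetical sketch of a candidate/role fit score along the lines the
# article describes. Mentra's real algorithm and its 75+ data points
# aren't public; the factor names and weights below are invented.

def fit_score(candidate: dict, role: dict) -> float:
    """Score 0..1: how well a role's environment meets a candidate's needs."""
    needed = set(candidate["needed_accommodations"])
    offered = set(role["offered_accommodations"])
    if not needed <= offered:
        return 0.0  # hard requirement: every needed accommodation is offered
    shared = set(candidate["strengths"]) & set(role["valued_strengths"])
    strength_fit = len(shared) / max(1, len(role["valued_strengths"]))
    sensory_fit = 1.0 if role["noise_level"] <= candidate["max_noise_level"] else 0.5
    return 0.6 * strength_fit + 0.4 * sensory_fit

candidate = {
    "needed_accommodations": ["written instructions"],
    "strengths": ["pattern recognition", "deep focus"],
    "max_noise_level": 2,
}
role = {
    "offered_accommodations": ["written instructions", "flexible hours"],
    "valued_strengths": ["deep focus", "pattern recognition"],
    "noise_level": 1,
}
print(fit_score(candidate, role))  # strong match
```

    Flipping the script, as Mentra does, then amounts to ranking candidates per role by a score like this and presenting that list to employers, rather than making job seekers do the searching.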

    When more people with disabilities are involved in the creation of algorithms, that can reduce the chances that these algorithms will inflict harm on their communities. That’s why diverse teams are so important.

    Imagine if a social media company’s recommendation engine were tuned to analyze who you’re following and to prioritize follow recommendations for people who talk about similar things but who differ in some key ways from your existing sphere of influence. For example, if you were to follow a bunch of nondisabled white male academics who talk about AI, it could suggest that you follow academics who are disabled or aren’t white or aren’t male who also talk about AI. If you took its recommendations, perhaps you’d get a more holistic and nuanced understanding of what’s happening in the AI field. These same systems should also use their understanding of biases about particular communities—including, for instance, the disability community—to make sure that they aren’t recommending that any of their users follow accounts that perpetuate biases against (or, worse, spew hate toward) those groups.

    Other ways that AI can help people with disabilities

    If I weren’t trying to put this together between other tasks, I’m sure that I could go on and on, providing all kinds of examples of how AI could be used to help people with disabilities, but I’m going to make this last section into a bit of a lightning round. In no particular order:

    • Voice preservation. You may have seen the VALL-E paper or Apple’s Global Accessibility Awareness Day announcement or you may be familiar with the voice-preservation offerings from Microsoft, Acapela, or others. It’s possible to train an AI model to replicate your voice, which can be a tremendous boon for people who have ALS (Lou Gehrig’s disease) or motor-neuron disease or other medical conditions that can lead to an inability to talk. This is, of course, the same tech that can also be used to create audio deepfakes, so it’s something that we need to approach responsibly, but the tech has truly transformative potential.
    • Voice recognition. Researchers like those in the Speech Accessibility Project are paying people with disabilities for their help in collecting recordings of people with atypical speech. As I type, they are actively recruiting people with Parkinson’s and related conditions, and they have plans to expand this to other conditions as the project progresses. This research will result in more inclusive data sets that will let more people with disabilities use voice assistants, dictation software, and voice-response services as well as control their computers and other devices more easily, using only their voice.
    • Text transformation. The current generation of LLMs is quite capable of adjusting existing text content without injecting hallucinations. This is hugely empowering for people with cognitive disabilities who may benefit from text summaries or simplified versions of text or even text that’s prepped for Bionic Reading.
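    I can’t bundle an LLM into a short example, so here’s a classic frequency-based extractive summarizer as a crude, model-free stand-in—deliberately not an LLM, and far less capable than one, but it shows the shape of the task: long text in, shorter text out.

```python
# Frequency-based extractive summarizer: a crude, model-free stand-in
# for the LLM-driven text transformation described above. It keeps the
# sentences whose words occur most often across the whole text.
import re
from collections import Counter

def summarize(text: str, max_sentences: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> float:
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(1, len(tokens))

    ranked = sorted(sentences, key=score, reverse=True)[:max_sentences]
    # Emit the chosen sentences in their original order for readability.
    return " ".join(s for s in sentences if s in ranked)

text = ("Alt text describes images. Alt text helps screen reader users. "
        "Lunch was fine.")
print(summarize(text, 1))  # "Alt text describes images."
```

    An LLM can go much further—rewording rather than merely selecting—which is exactly why the simplification use case above is so promising.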

    The importance of diverse teams and data

    We need to recognize that our differences matter. Our lived experiences are influenced by the intersections of the identities that we exist in. These lived experiences—with all their complexities (and joys and pain)—are valuable inputs to the software, services, and societies that we shape. Our differences need to be represented in the data that we use to train new models, and the folks who contribute that valuable information need to be compensated for sharing it with us. Inclusive data sets yield more robust models that foster more equitable outcomes.

    Want a model that doesn’t demean or patronize or objectify people with disabilities? Make sure that you have content about disabilities that’s authored by people with a range of disabilities, and make sure that that’s well represented in the training data.

    Want a model that doesn’t use ableist language? You may be able to use existing data sets to build a filter that can intercept and remediate ableist language before it reaches readers. That being said, when it comes to sensitivity reading, AI models won’t be replacing human copy editors anytime soon. 
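    Here’s a sketch of the interception half of such a filter: flag phrases from a lexicon and suggest alternatives. The two entries below are merely illustrative; a real lexicon would come from vetted disability-language guides, and final wording would still go through a human editor, as noted above.

```python
# Sketch of a lexicon-based language filter: flag flagged phrases and
# suggest alternatives. The two entries here are illustrative only; a
# real filter would draw on vetted disability-language guides.
import re

LEXICON = {
    r"\bwheelchair[- ]bound\b": "wheelchair user",
    r"\bsuffers? from\b": "has",
}

def flag_ableist_language(text: str) -> list:
    """Return (matched phrase, suggested alternative) pairs."""
    findings = []
    for pattern, suggestion in LEXICON.items():
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            findings.append((match.group(0), suggestion))
    return findings

print(flag_ableist_language("She suffers from migraines and is wheelchair-bound."))
```

    A trained model could catch subtler patterns than a fixed lexicon ever will, but even this simple shape—intercept, suggest, let a human decide—keeps the copy editor in the loop.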

    Want a coding copilot that gives you accessible recommendations from the jump? Train it on code that you know to be accessible.

    I have no doubt that AI can and will harm people… today, tomorrow, and well into the future. But I also believe that we can acknowledge that and, with an eye towards accessibility (and, more broadly, inclusion), make thoughtful, considerate, and intentional changes in our approaches to AI that will reduce harm over time as well. Today, tomorrow, and well into the future.

    Many thanks to Kartik Sawhney for helping me with the development of this piece, Ashley Bischoff for her invaluable editorial assistance, and, of course, Joe Dolson for the prompt.

  • I am a creative.

    I am a creative. What I do is alchemy. It is a mystery. I do not so much do it, as let it be done through me.

    I am a creative. Not all creative people like this label. Not all see themselves this way. Some creative people see science in what they do. That is their truth, and I respect it. Maybe I even envy them, a little. But my process is different—my being is different.

    Apologizing and qualifying in advance is a distraction. That’s what my brain does to sabotage me. I set it aside for now. I can come back later to apologize and qualify. After I’ve said what I came to say. Which is hard enough. 

    Except when it is easy and flows like a river of wine.

    Sometimes it does come that way. Sometimes what I need to create comes in an instant. I have learned not to say it at that moment, because if you admit that sometimes the idea just comes and it is the best idea and you know it is the best idea, they think you don’t work hard enough.

    Sometimes I work and work and work until the idea comes. Sometimes it comes instantly and I don’t tell anyone for three days. Sometimes I’m so excited by the idea that came instantly that I blurt it out, can’t help myself. Like a boy who found a prize in his Cracker Jacks. Sometimes I get away with this. Sometimes other people agree: yes, that is the best idea. Most times they don’t and I regret having given way to enthusiasm.

    Enthusiasm is best saved for the meeting where it will make a difference. Not the casual get-together that precedes that meeting by two other meetings. Nobody knows why we have all these meetings. We keep saying we’re doing away with them, but then we just find other ways to have them. Sometimes they are even good. But other times they are a distraction from the actual work. The proportion between when meetings are useful and when they are a pitiful distraction varies, depending on what you do and where you do it. And who you are and how you do it. Again I digress. I am a creative. That is the theme.

    Sometimes many hours of hard and patient work produce something that is barely serviceable. Sometimes I have to accept that and move on to the next project.

    Don’t ask about process. I am a creative.

    I am a creative. I don’t control my dreams. And I don’t control my best ideas.

    I can hammer away, surround myself with facts or images, and sometimes that works. I can go for a walk, and sometimes that works. I can be making dinner and there’s a Eureka having nothing to do with sizzling oil and bubbling pots. Often I know what to do the instant I wake up. And then, almost as often, as I become conscious and part of the world again, the idea that would have saved me turns to vanishing dust in a mindless wind of oblivion. For creativity, I believe, comes from that other world. The one we enter in dreams, and perhaps, before birth and after death. But that’s for poets to wonder, and I am not a poet. I am a creative. And it’s for theologians to mass armies about in their creative world that they insist is real. But that is another digression. And a depressing one. Maybe on a much more important topic than whether I am a creative or not. But still a digression from what I came here to say.

    Sometimes the process is avoidance. And agony. You know the cliché about the tortured artist? It’s true, even when the artist (and let’s put that noun in quotes) is trying to write a soft drink jingle, a callback in a tired sitcom, a budget request.

    Some people who hate being called creative may be closeted creatives, but that’s between them and their gods. No offense meant. Your truth is true, too. But mine is for me. 

    Creatives recognize creatives.

    Creatives recognize creatives like queers recognize queers, like real rappers recognize real rappers, like cons know cons. Creatives feel massive respect for creatives. We love, honor, emulate, and practically deify the great ones. To deify any human is, of course, a tragic mistake. We have been warned. We know better. We know people are just people. They squabble, they are lonely, they regret their most important decisions, they are poor and hungry, they can be cruel, they can be just as stupid as we can, because, like us, they are clay. But. But. But they make this amazing thing. They birth something that did not exist before them, and could not exist without them. They are the mothers of ideas. And I suppose, since it’s just lying there, I have to add that they are the mothers of invention. Ba dum bum! OK, that’s done. Continue.

    Creatives belittle our own small achievements, because we compare them to those of the great ones. Beautiful animation! Well, I’m no Miyazaki. Now THAT is greatness. That is greatness straight from the mind of God. This half-starved little thing that I made? It more or less fell off the back of the turnip truck. And the turnips weren’t even fresh.

    Creatives know that, at best, they are Salieri. Even the creatives who are Mozart believe that.

    I am a creative. I haven’t worked in advertising in 30 years, but in my nightmares, it’s my former creative directors who judge me. And they are right to do so. I am too lazy, too facile, and when it really counts, my mind goes blank. There is no pill for creative dysfunction.

    I am a creative. Every deadline I make is an adventure that makes Indiana Jones look like a pensioner snoring in a deck chair. The longer I remain a creative, the faster I am when I do my work and the longer I brood and walk in circles and stare blankly before I do that work. 

    I am still 10 times faster than people who are not creative, or people who have only been creative a short while, or people who have only been professionally creative a short while. It’s just that, before I work 10 times as fast as they do, I spend twice as long as they do putting the work off. I am that confident in my ability to do a great job when I put my mind to it. I am that addicted to the adrenaline rush of postponement. I am still that afraid of the jump.

    I am not an artist.

    I am a creative. Not an artist. Though I dreamed, as a lad, of someday being that. Some of us belittle our gifts and dislike ourselves because we are not Michelangelos and Warhols. That is narcissism—but at least we aren’t in politics.

    I am a creative. Though I believe in reason and science, I decide by intuition and impulse. And live with what follows—the catastrophes as well as the triumphs. 

    I am a creative. Every word I’ve said here will annoy other creatives, who see things differently. Ask two creatives a question, get three opinions. Our disagreement, our passion about it, and our commitment to our own truth are, at least to me, the proofs that we are creatives, no matter how we may feel about it.

    I am a creative. I lament my lack of taste in the areas about which I know very little, which is to say almost all areas of human knowledge. And I trust my taste above all other things in the areas closest to my heart, or perhaps, more accurately, to my obsessions. Without my obsessions, I would probably have to spend my time looking life in the eye, and almost none of us can do that for long. Not honestly. Not really. Because much in life, if you really look at it, is unbearable.

    I am a creative. I believe, as a parent believes, that when I am gone, some small good part of me will carry on in the mind of at least one other person.

    Working saves me from worrying about work.

    I am a creative. I live in dread of my small gift suddenly going away.

    I am a creative. I am too busy making the next thing to spend too much time deeply considering that almost nothing I make will come anywhere near the greatness I comically aspire to.

    I am a creative. I believe in the ultimate mystery of process. I believe in it so much, I am even fool enough to publish an essay I dictated into a tiny machine and didn’t take time to review or revise. I won’t do this often, I promise. But I did it just now, because, as afraid as I might be of your seeing through my pitiful gestures toward the beautiful, I was even more afraid of forgetting what I came to say. 

    There. I think I’ve said it. 

  • Humility: An Essential Value

    Humility, a designer’s essential value—that has a nice ring to it. What about humility, an office manager’s essential value? Or a dentist’s? Or a librarian’s? They all sound great. When humility is our guiding light, the path is always open for fulfillment, evolution, connection, and engagement. In this chapter, we’re going to talk about why.

    That said, this is a book for designers, and to that end, I’d like to start with a story—well, a journey, really. It’s a personal one, and I’m going to make myself a bit vulnerable along the way. I call it:

    The Tale of Justin’s Preposterous Pate

    When I was coming out of art school, a long-haired, goateed neophyte, print was a known quantity to me; design on the web, however, was rife with complexities to navigate and discover, a problem to be solved. Though I had been formally trained in graphic design, typography, and layout, what fascinated me was how these traditional skills might be applied to a fledgling digital landscape. This theme would ultimately shape the rest of my career.

    So rather than graduate and go into print like many of my friends, I devoured HTML and JavaScript books into the wee hours of the morning and taught myself how to code during my senior year. I wanted—nay, needed—to better understand the underlying implications of what my design decisions would mean once rendered in a browser.

    The late ’90s and early 2000s were the so-called “Wild West” of web design. Designers at the time were all figuring out how to apply design and visual communication to the digital landscape. What were the rules? How could we break them and still engage, entertain, and convey information? At a more macro level, how could my values, inclusive of humility, respect, and connection, align in tandem with that? I was hungry to find out.

    Though I’m talking about a different era, those considerations are timeless, bridging life outside of work and the world of design. What are your core passions, or values, that transcend medium? It’s essentially the same concept we discussed earlier about the direct parallels in what fulfills you, agnostic of the tangible or digital realms; the core themes are all the same.

    First with tables, animated GIFs, and Flash, then with web standards, divs, and CSS, there was personality, raw unbridled creativity, and unique means of presentment that often defied any semblance of a visible grid. Splash screens and “browser requirement” pages aplenty. Usability and accessibility were typical casualties of such creations; these paramount facets of any digital design were largely (and, in hindsight, unfairly) disregarded in favor of experimentation.

    For example, this iteration of my personal portfolio site (“the pseudoroom”) from that era was experimental, if not a bit heavy-handed, in the visual communication of the concept of a living sketchbook. Very skeuomorphic. I collaborated with fellow designer and dear friend Marc Clancy (now a co-founder of the creative project organizing app Milanote) on this one, where we’d first sketch and then pass a Photoshop file back and forth to trick things out and play with varied user interactions. Then, I’d break it down and code it into a digital layout.

    Figure 1: “the pseudoroom” website, hitting the sketchbook metaphor hard.

    Along with design folio pieces, the site also offered free downloads for Mac OS customizations: desktop wallpapers that were effectively design experimentation, custom-designed typefaces, and desktop icons.

    From around the same time, GUI Galaxy was a design, pixel art, and Mac-centric news portal some graphic designer friends and I conceived, designed, developed, and deployed.

    Figure 2: GUI Galaxy, web standards-compliant design news portal

    Design news portals were incredibly popular during this period, featuring (what would now be considered) Tweet-size, small-format snippets of pertinent news from the categories I previously mentioned. If you took Twitter, curated it to a few categories, and wrapped it in a custom-branded experience, you’d have a design news portal from the late ’90s/early 2000s.

    We as designers had evolved, creating a bandwidth-sensitive, award-winning, web-standards-compliant, and much more accessibility-conscious website. Still ripe with experimentation, yet more mindful of equitable engagement. You can see a couple of content panes here, noting general news (tech, design) and Mac-centric news below. We also offered many of the custom downloads I cited before as present on my folio site, but branded and themed to GUI Galaxy.

    The site’s backbone was a homegrown CMS, with the presentation layer consisting of global design + illustration + news author collaboration. And the collaboration effort here, in addition to experimentation on a ‘brand’ and content delivery, was hitting my core. We were designing something bigger than any single one of us and connecting with a global audience.

    Collaboration and connection transcend medium in their impact, immensely fulfilling me as a designer.

    Now, why am I taking you down this trip of design memory lane? Two reasons.

    First, there’s a reason for the nostalgia for that design era (the “Wild West” era, as I called it earlier): the inherent exploration, personality, and creativity that saturated many design portals and personal portfolio sites. Ultra-finely detailed pixel art UI, custom illustration, bespoke vector graphics, all underpinned by a strong design community.

    Today’s web design has been in a period of stagnation. I suspect there’s a strong chance you’ve seen a site whose structure looks something like this: a hero image / banner with text overlaid, perhaps with a lovely rotating carousel of images (laying the snark on heavy there), a call to action, and three columns of sub-content directly beneath. Maybe an icon library is employed with selections that vaguely relate to their respective content.

    Design, as it’s applied to the digital landscape, is in dire need of thoughtful layout, typography, and visual engagement that goes hand-in-hand with all the modern considerations we now know are paramount: usability. Accessibility. Load times and bandwidth-sensitive content delivery. A responsive presentation that meets human beings wherever they’re engaging from. We must be mindful of, and respectful toward, those concerns—but not at the expense of creativity of visual communication or via replicating cookie-cutter layouts.

    Pixel Problems

    Websites during this period were often designed and built on Macs whose OS and desktops looked something like this. This is Mac OS 7.5, but 8 and 9 weren’t that different.

    Figure 3: A Mac OS 7.5-centric desktop.

    Desktop icons fascinated me: how could any single one, at any given point, stand out to get my attention? In this example, the user’s desktop is tidy, but think of a more realistic example with icon pandemonium. Or, say an icon was part of a larger system grouping (fonts, extensions, control panels)—how did it also maintain cohesion amongst a group?

    These were 32 x 32 pixel creations, utilizing a 256-color palette, designed pixel-by-pixel as mini mosaics. To me, this was the embodiment of digital visual communication under such ridiculous constraints. And often, ridiculous restrictions can yield the purification of concept and theme.

    So I began to research and do my homework. I was a student of this new medium, hungry to dissect, process, discover, and make it my own.

    Expanding upon the notion of exploration, I wanted to see how I could push the limits of a 32x32 pixel grid with that 256-color palette. Those ridiculous constraints forced a clarity of concept and presentation that I found incredibly appealing. The digital gauntlet had been tossed, and that challenge fueled me. And so, in my dorm room into the wee hours of the morning, I toiled away, bringing conceptual sketches into mini mosaic fruition.

    These are some of my creations, utilizing the only tool available at the time for creating icons: ResEdit. ResEdit was a clunky, built-in Mac OS utility not really made for what we were using it for. At the core of all of this work: Research. Challenge. Problem-solving. Again, these core connection-based values are agnostic of medium.

    Figure 4: A selection of my pixel art design, 32x32 pixel canvas, 8-bit palette

    There’s one more design portal I want to talk about, which also serves as the second reason for my story to bring this all together.

    This is K10k, short for Kaliber 1000. K10k was founded in 1998 by Michael Schmidt and Toke Nygaard, and was the design news portal on the web during this period. With its pixel art-fueled presentation, ultra-focused care given to every facet and detail, and with many of the more influential designers of the time who were invited to be news authors on the site, well... it was the place to be, my friend. With respect where respect is due, GUI Galaxy’s concept was inspired by what these folks were doing.

    Figure 5: The K10k website

    For my part, the combination of my web design work and pixel art exploration began to get me some notoriety in the design scene. Eventually, K10k noticed and added me as one of their very select group of news authors to contribute content to the site.

    My personal work and side projects—and now this inclusion—put me on the map in the design community. My design work also began to be published in various printed collections, in magazines domestically and overseas, and featured on other design news portals. With that degree of success while in my early twenties, something else happened:

    I evolved—devolved, really—into a colossal asshole (and in just about a year out of art school, no less). The press and the praise became what fulfilled me, and they went straight to my head. They inflated my ego. I actually felt somewhat superior to my fellow designers.

    The casualties? My design stagnated. Its evolution—my evolution—stagnated.

    I felt so supremely confident in my abilities that I effectively stopped researching and discovering. When previously sketching concepts or iterating ideas in lead was my automatic step one, I instead leaped right into Photoshop. I drew my inspiration from the smallest of sources (and with blinders on). Any critique of my work from my peers was often vehemently dismissed. The most tragic loss: I had lost touch with my values.

    My ego almost cost me some of my friendships and burgeoning professional relationships. I was toxic in talking about design and in collaboration. But thankfully, those same friends gave me a priceless gift: candor. They called me out on my unhealthy behavior.

    Admittedly, it was a gift I initially did not accept but ultimately was able to deeply reflect upon. I was soon able to accept, and process, and course correct. The realization laid me low, but the re-awakening was essential. I let go of the “reward” of adulation and re-centered upon what stoked the fire for me in art school. Most importantly: I got back to my core values.

    Always Students

    Following that short-term regression, I was able to push forward in my personal design and career. And I could self-reflect as I got older to facilitate further growth and course correction as needed.

    As an example, let’s talk about the Large Hadron Collider. The LHC was designed “to help answer some of the fundamental open questions in physics, which concern the basic laws governing the interactions and forces among the elementary objects, the deep structure of space and time, and in particular the interrelation between quantum mechanics and general relativity.” Thanks, Wikipedia.

    Around fifteen years ago, in one of my earlier professional roles, I designed the interface for the application that generated the LHC’s particle collision diagrams. These diagrams are the rendering of what’s actually happening inside the Collider during any given particle collision event and are often considered works of art unto themselves.

    Designing the interface for this application was a fascinating process for me, in that I worked with Fermilab physicists to understand what the application was trying to achieve, but also how the physicists themselves would be using it. To that end, in this role, I cut my teeth on usability testing, working with the Fermilab team to iterate and improve the interface. How they spoke and what they spoke about was like an alien language to me. And by making myself humble and working under the mindset that I was but a student, I made myself available to be a part of their world to generate that vital connection.

    I also had my first ethnographic observation experience: going to the Fermilab location and observing how the physicists used the tool in their actual environment, on their actual terminals. For example, one takeaway was that due to the level of ambient light-driven contrast within the facility, the data columns ended up using white text on a dark gray background instead of black text on white. This enabled them to pore over reams of data during the day and ease their eye strain. And Fermilab and CERN are government entities with rigorous accessibility standards, so my knowledge in that realm also grew. The barrier-free design was another essential form of connection.
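    That contrast decision can be checked numerically. The snippet below implements the WCAG 2.x relative-luminance and contrast-ratio formulas; the exact grays used at Fermilab aren’t given here, so #333333 stands in for “dark gray.”

```python
# The WCAG 2.x relative-luminance and contrast-ratio formulas, usable to
# sanity-check choices like the one described above. The exact grays
# Fermilab used aren't given here, so #333333 stands in for "dark gray."

def relative_luminance(rgb):
    def channel(c):
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

white, black, dark_gray = (255, 255, 255), (0, 0, 0), (0x33, 0x33, 0x33)
print(contrast_ratio(black, white))      # the maximum possible contrast (21:1)
print(contrast_ratio(white, dark_gray))  # comfortably past WCAG AAA's 7:1
```

    White on #333333 comes out around 12.6:1—well past the AAA threshold of 7:1 for body text—which squares with the eye-strain benefit the physicists reported.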

    So, back to those core drivers of my visual problem-solving soul and ultimate fulfillment: discovery, exposure to new media, observation, human connection, and evolution. What opened the door for those values was me checking my ego before I walked through it.

    An evergreen willingness to listen, learn, understand, grow, evolve, and connect yields our best work. In particular, I want to focus on the words ‘grow’ and ‘evolve’ in that statement. If we are always students of our craft, we are also continually making ourselves available to evolve. Yes, we have years of applicable design study under our belt. Or the focused lab sessions from a UX bootcamp. Or the monogrammed portfolio of our work. Or, ultimately, decades of a career behind us.

    But all that said: experience does not equal “expert.”

    As soon as we close our minds via an inner monologue of ‘knowing it all’ or branding ourselves a “#thoughtleader” on social media, the designer we are is our final form. The designer we can be will never exist.

Search Engine Watch
Keep updated with major stories about search engine marketing and search engines as published by Search Engine Watch.
ClickZ News
Breaking news, information, and analysis.
PCWorld
PCWorld helps you navigate the PC ecosystem to find the products you want and the advice you need to get the job done.
  • Lock in great flight deals with hundreds off Matt’s Flights

    Planning some summer travel but worried about airfare costs? With a service like Matt’s Flights, you can cut the cost of flying and focus on enjoying your trip. Through April 21st, a limited-time price drop lets you save on a lifetime subscription to Matt’s Flights Premium Plan.

    Featured on The New York Times and Thrillist, Matt’s Flights gives you a personal aviation concierge, scanning airline prices for mistakes and discounts. When there’s a deal at your local airport, it’s delivered straight to your inbox with instructions on how to book.

    You can make custom search requests or stay flexible and jump on a deal to a destination you may not have considered. With a Premium Plan, you’ll get five times as many deals as free members, plus one-on-one flight and travel planning support from Matt himself.

    One verified user saved $400 and wrote, “Just signed up for Matt’s Flights premium service and was amazed at the deals he found for an upcoming trip I am contemplating. He is super-fast, friendly, and responsive. What a great service he provides!”

    Travel on a budget this summer. Now through 11:59 pm PT on 4/21, you can get a lifetime subscription to Matt’s Flights Premium Plan for just $79.97 (reg. $1,800).


    Matt’s Flights Premium Plan (Lifetime Subscription) – Save up to 90% on Domestic & International flights – $79.97

    See Deal

    StackSocial prices subject to change.

  • Logitech has a mouse with an AI button

    On the heels of keyboards with a Cortana key…comes a mouse with a ChatGPT key? Well, mostly. Meet the Logitech Signature AI Edition M750 Mouse.

    Logitech plans to launch the Signature AI Edition Mouse in April in the United States and the United Kingdom for $49.99. Though it has a dedicated AI key — you do need a dedicated AI key on your mouse, right? — the button actually launches an app called the Logi AI Prompt Builder, powered by ChatGPT.

    That app is part of Logi Options+, the connective software that ties together many of Logitech’s MX, or “Master,” series peripherals (plus its Studio Series, Ergo line, and others), such as keyboards and mice. When a user presses the AI prompt button on top of the AI Edition Mouse, they’re actually launching the Prompt Builder within the Options+ app. Existing Logitech peripherals that lack a dedicated ChatGPT key can remap another key to trigger the same function.

    Prompt Builder is designed to highlight, copy, and/or edit a portion of the text you’re working on. It looks like a more sophisticated version of the suggested actions Microsoft has tested within Copilot, where users can highlight a block of text, right-click it, and ask for a summary — Copilot key or not.

    Logitech’s AI Prompt Builder with ChatGPT.

    In this case, Logitech is using ChatGPT to rephrase, summarize, reply, or author an email. A separate box allows drop-down stylistic options including “funny” and “professional.”

    Many people still don’t use AI for one reason or another. Logitech hopes to make it as easy as a click of the mouse.
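    The “prompt builder” pattern described above is easy to sketch generically. The following Python snippet is an illustrative guess at the idea, not Logitech’s actual software; the action and style lists are assumptions, and the assembled string is what would be sent on to a chat-completion API:

```python
# Hypothetical sketch of a prompt-builder backend: combine a highlighted text
# span with dropdown choices into a single chat prompt. Not Logitech's code.

STYLES = {"funny", "professional", "concise"}        # assumed dropdown options
ACTIONS = {
    "rephrase":  "Rephrase the following text",
    "summarize": "Summarize the following text",
    "reply":     "Draft a reply to the following message",
    "email":     "Write an email based on the following notes",
}

def build_prompt(text: str, action: str, style: str) -> str:
    """Assemble the prompt a tool like this would send to a chat model."""
    if action not in ACTIONS:
        raise ValueError(f"unknown action: {action}")
    if style not in STYLES:
        raise ValueError(f"unknown style: {style}")
    return f"{ACTIONS[action]} in a {style} tone:\n\n{text}"

# The returned string would then go to a chat-completion endpoint, and the
# model's response would replace or accompany the user's highlighted text.
prompt = build_prompt("Meeting moved to 3pm.", "reply", "professional")
```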

  • AMD-powered Minisforum V3 Surface-style tablet is up for sale

    Some people have been waiting a long time for an alternative to Microsoft’s Surface series, and niche small form factor manufacturer Minisforum is ready to give it to them. The company’s V3 tablet, which proclaims itself the “world’s first AMD 3-in-1 Windows tablet,” is now available to purchase and shipping in a little over a week.

    Minisforum may be a little grandiose in its description of the V3: it’s not that it was ever impossible to put a Ryzen platform in a Surface form factor, it’s just that no one saw a huge demand for it. That said, there’s no doubt that this is some pretty impressive hardware on paper. The base model of the V3 tablet uses an 8-core, 16-thread Ryzen 7 8840U processor, 32GB of RAM, and 1TB of storage. Again, that’s the base configuration, and a detachable keyboard is included.

    The screen and body are considerably larger than the Surface, with a 14-inch 2K panel and a 165Hz refresh rate. Other hardware features include three USB-C ports (two USB-4, one with Vlink DP-in to use the tablet as a portable monitor), a full-sized SD card slot, a headphone jack, and a fingerprint reader in the power button.

    Minisforum V3 tablet USB-C ports.

    All that tech is going for $1,199 for the early bird special. Early buyers also get a free stylus, laptop sleeve, and glass screen protector. The price will go up to $1,499 after the first batch, but the equivalent build of the Surface Pro 9 is priced at over two grand. And even after more than a decade of hearing this complaint, Microsoft still wants you to pay extra for the keyboard.

    Even so, I’d be trepidatious about dropping more than a thousand dollars on hardware from a company that hasn’t made a portable device before. Feel free to put your money down, but I’d wait for the first batch of reviews to arrive.

  • Best external SSD for gaming 2024: Portable performance drives

    Maybe your gaming laptop doesn’t have enough storage. Or you simply want an easy way to make your game library portable. An external SSD can fix both of these issues (and more) by providing an easy way to expand storage that you can take on the go.

    But choosing an external SSD means sorting through a dizzying array of options, and making a poor choice can leave you feeling hard done by. Lucky for you, we’ve done the testing and can offer some sure-fire recommendations that are guaranteed to help, and not hinder, your gaming setup.

    Why you should trust us: We are PCWorld. Our reviewers have been putting computer hardware through its paces for decades. Our external drive evaluations are thorough and rigorous, testing the limits of every product — from performance benchmarks to the practicalities of regular use. As consumers ourselves, we know what makes a product exceptional. For more about our testing process, scroll to the bottom of this article.

    Updated April 13, 2024: See our new pick for the best portable 20Gbps drive, the Sabrent Rocket Nano V2. It offers a tempting combination of small size, handsome looks (including a tough-looking shock-absorbing jacket), and performance. Read more in our summary below.

    Lexar SL600 Blaze – Best 20Gbps external SSD for gaming

    • Good 20Gbps performer
    • Top bang for the buck
    • Five-year warranty


    • 4TB model not yet available
    Best Prices Today: $129.99 at B & H Photo

    The competition is very close in the top tier of 20Gbps external drives, with the big name contenders trading wins up and down the benchmark charts. But a winner is a winner, and in the end, the Lexar overtook our previous champ, Crucial’s X10 Pro, even if only by a hair.

    The upshot is that you can expect excellent performance from the Lexar SL600. It also comes in a uniquely shaped form factor, complete with an opening to accommodate a lanyard for easy toting. Gamers can even add some bling by opting for the SL660 variant, which features RGB lighting within its miniature handle. The drive comes with the standard five-year warranty.

    When performance is this closely matched among products, the determining factor should be price. Here, too, the SL600 is neck and neck with the Crucial X10 Pro, and as of this writing it’s priced slightly to significantly lower than some of its competitors, particularly at the 2TB level.

    Read our full Lexar SL600 Blaze 20Gbps USB SSD review

    Seagate FireCuda Gaming SSD (1TB) – Best premium 20Gbps external SSD for gaming

    • Drop-dead gorgeous
    • 2GBps transfers via SuperSpeed USB 20Gbps


    • Pricey
    • SuperSpeed USB 20Gbps ports are few and far between
    Best Prices Today: $189 at Amazon

    Seagate’s FireCuda Gaming SSD is a worthy alternative, though it carries a much steeper price tag for similar performance. Still, the FireCuda is an absolutely stunning external SSD and deserves a place on any desktop. It’s not just a pretty façade, though: it’s compatible with a SuperSpeed USB 20Gbps port, meaning it can attain transfer rates of up to 2GBps. It’s certainly the coolest-looking external SSD on this list.

    Read our full Seagate FireCuda Gaming SSD (1TB) review

    Teamgroup M200 – Best budget 20Gbps external SSD for gaming

    • Fast everyday performance
    • Available in up to 8TB (eventually) capacity
    • Attractively styled


    • No TBW rating
    • Company will change components if shortages demand
    • Writes slow to 200MBps off cache
    Best Prices Today: $70.27 at Amazon

    The Teamgroup M200 provides excellent bang for your buck with 20Gbps transfer rates and up to 4TB of storage for a very reasonable price. It has great everyday performance, too.

    Its slick military-style design is based on the CheyTac M200 sniper rifle, a perfect fit for those late-night frag sessions. Unfortunately, Teamgroup doesn’t provide a TBW rating or an official IP rating for the M200, so it’s harder to compare it as a whole to its competitors. However, the M200 is a fast, extremely well-priced external SSD with a gamer-friendly design that will look good and perform well in almost any setup.

    Read our full Teamgroup T-Force M200 20Gbps USB SSD review

    Sabrent Rocket Nano V2 – Most portable 20Gbps external SSD for gaming

    • Extremely small profile
    • Shock-absorbing silicone jacket
    • Top-flight packaging
    • Good overall performance


    • A tad behind the 20Gbps curve performance-wise

    If you’re after a very small SSD that you can easily slip into a pocket, the Sabrent Rocket Nano V2 is it. This USB 3.2 Gen 2x2 20Gbps drive measures a petite 2.73 inches long, 1.16 inches wide, and 0.44 inch thick, and it weighs a dainty 1.7 ounces.

    Of course, you’ll probably want to slide on its included shock-absorbing silicone jacket (shown in the picture), which adds 0.06 inches to each dimension while giving it a badass look.

    But looks aside, the Nano V2 is a solid performer. It wasn’t quite at the level of our top picks in everything, but it traded wins and losses within the pack. For instance, it was second only to the Crucial X10 Pro in our 450GB write test, it took the top spot in CrystalDiskMark 8’s random writes, and it was very competitive in random reads.

    This wee drive also comes in capacities up to 4TB, making it an all-around good choice if you’re looking to get a lot of storage and performance in a tiny package. We’re also fond of its five-year warranty and the nifty metal box it comes in, which can easily be repurposed.

    Read our full Sabrent Rocket Nano V2 review

    OWC Express 1M2 USB4 SSD – Best USB4/Thunderbolt combo for gaming

    • Over 30Gbps transfers with USB4
    • Works with all USB and Thunderbolt 3/4 ports
    • Available unpopulated so you can leverage any NVMe SSD


    • A bit pricey when loaded with an SSD
    • Large (but beefy) for an external SSD
    Best Prices Today: $219.99 at OWC

    Sure, the SanDisk Pro-G40 is a fast dual USB/Thunderbolt drive, but equipped with USB4, the OWC Express 1M2 takes fast USB transfers to a whole new level. USB4 allows data transfer rates of up to 40Gbps, the same as Thunderbolt 4, and in our tests the 1M2 proved it can back up those claims. It was the fastest external drive over every bus in our testing: Thunderbolt 4, 20Gbps USB, and 10Gbps USB. No matter which bus you use, the 1M2 is the ultimate external SSD for speed.

    The drive looks very much like a giant silver heatsink (the pink hue is due to the photo’s lighting), and it’s quite large and hefty compared with some of the other options on this list. We don’t mind much, as the design makes the drive feel like it means business, but the 1M2 obviously isn’t the most portable external SSD. Regardless, if you’re feeling the need for speed and don’t need something that fits in your pocket, the OWC Express 1M2 is currently one of the absolute fastest external drives around.

    Read our full OWC Express 1M2 USB4 SSD review

    How we test external SSD game performance

    The biggest question is how much using an external drive hurts game performance. To give us an idea, we used UL’s 3DMark Storage Benchmark. To create the benchmark, UL records the drive-access patterns of several common gaming tasks to make “traces.” These traces are then replayed on the tested storage device multiple times, duplicating the access patterns without having to actually load the game.

    For its test, 3DMark reproduces what happens loading to the start menus of Battlefield V, Call of Duty: Black Ops 4, and Overwatch. 3DMark Storage also tests using OBS, or Open Broadcast System, to record Overwatch being played at 1080p resolution at 60fps, installing The Outer Worlds from the Epic launcher, and saving a game in The Outer Worlds. For the final test, 3DMark Storage tests copying the Steam folder for Counter-Strike: Global Offensive from an external SSD to the target drive.

    Our testbed is a 12th-gen Intel Core i9-12900K running Windows 11 on an Asus ROG Maximus Z690 Hero motherboard, which features native Thunderbolt 4 and USB 3.2 10Gbps ports; we added a Silverstone ECU06 card for USB 3.2 SuperSpeed 20Gbps support. We used a Vantec M.2 NVMe-to-USB 3.2 Gen 2x2 20Gbps Type-C enclosure with a Western Digital SN700 NVMe SSD to test USB 3.2 20Gbps and 10Gbps performance, and we installed the same SN700 in a PCIe 3.0 riser card to test its native performance. That comparison shows how much you lose by running a drive over a USB port instead of installing it inside the laptop or PC.

    For added contrast, we also ran 3DMark Storage on an older Plextor PX-512M7VG SATA SSD in a Silverstone MS09 SATA enclosure plugged into a USB 3.2 10Gbps port. And because you’ll want to know how slow a hard drive would be, we ran the same test on a Western Digital 14TB EasyStore hard drive on a USB 3.2 10Gbps port (the EasyStore itself is limited to USB 3.2 SuperSpeed 5Gbps).
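    You can’t replay 3DMark’s recorded traces without the tool itself, but the underlying idea of timing drive I/O is simple to sketch. The following Python snippet is a rough, assumed illustration of a sequential write/read throughput test, not PCWorld’s methodology; the path and sizes are placeholders:

```python
import os
import time

def sequential_throughput(path: str, total_mb: int = 256, block_kb: int = 1024) -> tuple[float, float]:
    """Time a sequential write and read of total_mb megabytes at `path`.

    Returns (write_MBps, read_MBps). A rough sketch only: real benchmarks
    bypass the OS page cache and replay recorded application access traces.
    """
    block = os.urandom(block_kb * 1024)
    blocks = (total_mb * 1024) // block_kb

    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force data to the device, not just the write cache
    write_mbps = total_mb / (time.perf_counter() - start)

    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block_kb * 1024):
            pass
    read_mbps = total_mb / (time.perf_counter() - start)

    os.remove(path)
    return write_mbps, read_mbps

# Point `path` at the drive you want to test, e.g. an external SSD's mount point:
# w, r = sequential_throughput("/mnt/external_ssd/bench.tmp")
```

A real benchmark would also defeat the OS page cache and replay game-specific access patterns, which is why trace-based tools like 3DMark Storage give more representative numbers than a raw sequential copy.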

    3DMark Storage test results across various USB, SATA, and PCIe interfaces. Longer bars indicate better performance.

    What should you make of the above results? Clearly, if you can install an SSD inside your PC, you’ll get the most performance out of it. But consider some context: the big red bar at the top of the chart comes from a test that measures copying a large folder of files to the SSD, something most people do only once in a while.

    The more common scenario is waiting for a game to launch. An internal NVMe drive will still be faster, but the gap closes a little. Among the three popular USB interfaces (USB 20Gbps, USB 10Gbps, and SATA over USB 10Gbps), the fastest is USB 3.2 20Gbps. With a USB 3.2 20Gbps SSD, you might shave 25 percent off Battlefield V’s load time versus a USB 3.2 10Gbps drive. Performance is also game dependent: both Call of Duty and Battlefield see roughly 45 percent greater bandwidth on the internal SSD, but with the less graphically intense Overwatch, it’s closer to 30 percent.

    The other surprise is the performance of the SATA SSD versus the NVMe SSD when the NVMe drive is in a USB 3.2 10Gbps port. In game-load, save, and install scenarios, they’re fairly close. The external NVMe SSD does open up a huge lead over the slower SATA drive once you move to a task that copies a huge number of files, such as the CS:GO results. But again, how often do you do that?

    Of course, we can’t leave this without pointing out just how horrible hard drives are. Would the results improve much with a faster hard drive? Unlikely. The bare minimum you should use for storing games externally is a SATA SSD, so don’t run a game from an external hard drive unless you like waiting for everything.

  • Save $450 on this RTX 4060-loaded Lenovo gaming laptop

    Gamers, you better prepare yourselves because we’ve got one heck of a deal lined up for you today. Best Buy’s selling the Lenovo Legion Slim 5 for $899.99, which is a massive savings of $450. Not only does this machine come with RTX 4060 graphics, but it also has a spacious 16-inch 1200p display with a 144Hz refresh rate. It also features an aluminum/plastic chassis, a 1080p webcam, a backlit keyboard, and a microSD card slot.

    As for the hardware, the Lenovo Legion Slim 5 is rocking an AMD Ryzen 5 7640HS CPU, an Nvidia GeForce RTX 4060 GPU, 16GB of RAM, and a 512GB SSD. That means it should be capable of blitzing through most triple-A titles at higher graphics settings, and the 144Hz refresh rate should keep visuals nice and smooth. The connectivity options are pretty diverse, too: you’re getting one HDMI 2.1 port, two DisplayPort 1.4 connections, two USB-A 3.2 ports, two USB-C 3.2 ports, and Ethernet.

    This is a killer deal, especially for a laptop with RTX 4060 graphics. Don’t miss out.

    Get the Lenovo Legion Slim 5 for $899.99 at Best Buy

CNN.com - RSS Channel - App Tech Section
CNN.com delivers up-to-the-minute news and information on the latest top stories, weather, entertainment, politics and more.
  • "Human beings had a play-based childhood from time immemorial," says author Jonathan Haidt. What caused teen mental health decline is "between 2010 and 2015, phones, screens come sweeping in...The most important thing that parents can do is delay the age at which their child gets immersed in internet culture."
  • Grammy award-winning artist Ne-Yo joins CNN's Laura Coates to discuss the impact of artificial intelligence on the music industry.
  • Fareed hosts a spirited debate on the House bill that could lead to a US ban on TikTok, with the American Enterprise Institute's Kori Schake and Glen Gerstell, former general counsel for the National Security Agency. They discuss national-security risks the Chinese-owned app might pose given its many American users.
  • A new government report warns that advanced Artificial Intelligence systems could pose an "extinction-level threat" to humans, and that the US must intervene. "I think we should be mindful of it," says Ret. Admiral James Stavridis. But he adds, "there have been big inventions in the past - the printing press, electricity, the internet - all of these have been decried for the possibility of nefarious activity."
  • CNN was at the inaugural event of one of the world's biggest tech shows, which casts a spotlight on the region's thriving startup ecosystem.
