
News from the world of web design and SEO

Presenting syndicated news from several of the leading sites in the field of web design and SEO (search engine optimization).

A List Apart: The Full Feed
Articles for people who make web sites.
  • Breaking Out of the Box

    CSS is about styling boxes. In fact, the whole web is made of boxes, from the browser viewport to elements on a page. But every once in a while a new feature comes along that makes us rethink our design approach.

    Round displays, for example, make it fun to play with circular clip areas. Mobile screen notches and virtual keyboards offer challenges to best organize content that stays clear of them. And dual screen or foldable devices make us rethink how to best use available space in a number of different device postures.

    Sketches of a round display, a common rectangular mobile display, and a device with a foldable display.

    These recent evolutions of the web platform made it both more challenging and more interesting to design products. They’re great opportunities for us to break out of our rectangular boxes.

    I’d like to talk about a new feature similar to the above: the Window Controls Overlay for Progressive Web Apps (PWAs).

    Progressive Web Apps are blurring the lines between apps and websites. They combine the best of both worlds. On one hand, they’re stable, linkable, searchable, and responsive just like websites. On the other hand, they provide additional powerful capabilities, work offline, and read files just like native apps.

    As a design surface, PWAs are really interesting because they challenge us to think about what mixing web and device-native user interfaces can be. On desktop devices in particular, we have more than 40 years of history telling us what applications should look like, and it can be hard to break out of this mental model.

    At the end of the day though, PWAs on desktop are constrained to the window they appear in: a rectangle with a title bar at the top.

    Here’s what a typical desktop PWA app looks like:

    Sketches of two rectangular user interfaces representing the desktop Progressive Web App status quo on the macOS and Windows operating systems, respectively. 

    Sure, as the author of a PWA, you get to choose the color of the title bar (using the Web Application Manifest theme_color property), but that’s about it.

    What if we could think outside this box, and reclaim the real estate of the app’s entire window? Doing so would give us a chance to make our apps more beautiful and feel more integrated in the operating system.

    This is exactly what the Window Controls Overlay offers. This new PWA functionality makes it possible to take advantage of the full surface area of the app, including where the title bar normally appears.

    About the title bar and window controls

    Let’s start with an explanation of what the title bar and window controls are.

    The title bar is the area displayed at the top of an app window, which usually contains the app’s name. Window controls are the affordances, or buttons, that make it possible to minimize, maximize, or close the app’s window, and are also displayed at the top.

    A sketch of a rectangular application user interface highlighting the title bar area and window control buttons.

    Window Controls Overlay removes the physical constraint of the title bar and window controls areas. It frees up the full height of the app window, enabling the title bar and window control buttons to be overlaid on top of the application’s web content. 

    A sketch of a rectangular application user interface using Window Controls Overlay. The title bar and window controls are no longer in an area separated from the app’s content.

    If you are reading this article on a desktop computer, take a quick look at other apps. Chances are they’re already doing something similar to this. In fact, the very web browser you are using to read this uses the top area to display tabs.

    A screenshot of the top area of a browser’s user interface showing a group of tabs that share the same horizontal space as the app window controls.

    Spotify displays album artwork all the way to the top edge of the application window.

    A screenshot of an album in Spotify’s desktop application. Album artwork spans the entire width of the main content area, all the way to the top and right edges of the window, and the right edge of the main navigation area on the left side. The application and album navigation controls are overlaid directly on top of the album artwork.

    Microsoft Word uses the available title bar space to display the auto-save and search functionalities, and more.

    A screenshot of Microsoft Word’s toolbar interface. Document file information, search, and other functionality appear at the top of the window, sharing the same horizontal space as the app’s window controls.

    The whole point of this feature is to allow you to make use of this space with your own content while providing a way to account for the window control buttons. And it enables you to offer this modified experience on a range of platforms while not adversely affecting the experience on browsers or devices that don’t support Window Controls Overlay. After all, PWAs are all about progressive enhancement, so this feature is a chance to enhance your app to use this extra space when it’s available.

    Let’s use the feature

    For the rest of this article, we’ll be working on a demo app to learn more about using the feature.

    The demo app is called 1DIV. It’s a simple CSS playground where users can create designs using CSS and a single HTML element.

    The app has two pages. The first lists the existing CSS designs you’ve created:

    A screenshot of the 1DIV app displaying a thumbnail grid of CSS designs a user created.

    The second page enables you to create and edit CSS designs:

    A screenshot of the 1DIV app editor page. The top half of the window displays a rendered CSS design, and a text editor on the bottom half of the window displays the CSS used to create it.

    Since I’ve added a simple web manifest and service worker, we can install the app as a PWA on desktop. Here is what it looks like on macOS:

    Screenshots of the 1DIV app thumbnail view and CSS editor view on macOS. This version of the app’s window has a separate control bar at the top for the app name and window control buttons.

    And on Windows:

    Screenshots of the 1DIV app thumbnail view and CSS editor view on the Windows operating system. This version of the app’s window also has a separate control bar at the top for the app name and window control buttons.

    Our app is looking good, but the white title bar on the first page is wasted space. On the second page, it would be really nice if the design area went all the way to the top of the app window.

    Let’s use the Window Controls Overlay feature to improve this.

    Enabling Window Controls Overlay

    The feature is still experimental at the moment. To try it, you need to enable it in one of the supported browsers.

    As of now, it has been implemented in Chromium, as a collaboration between Microsoft and Google. We can therefore use it in Chrome or Edge by going to the internal about://flags page, and enabling the Desktop PWA Window Controls Overlay flag.

    Using Window Controls Overlay

    To use the feature, we need to add the following display_override member to our web app’s manifest file:

    {
      "name": "1DIV",
      "description": "1DIV is a mini CSS playground",
      "lang": "en-US",
      "start_url": "/",
      "theme_color": "#ffffff",
      "background_color": "#ffffff",
      "display_override": [
        "window-controls-overlay"
      ],
      "icons": [
        ...
      ]
    }
    

    On the surface, the feature is really simple to use. This manifest change is the only thing we need to make the title bar disappear and turn the window controls into an overlay.

    However, to provide a great experience for all users regardless of what device or browser they use, and to make the most of the title bar area in our design, we’ll need a bit of CSS and JavaScript code.

    Here is what the app looks like now:

    Screenshot of the 1DIV app thumbnail view using Window Controls Overlay on macOS. The separate top bar area is gone, but the window controls are now blocking some of the app’s interface.

    The title bar is gone, which is what we wanted, but our logo, search field, and NEW button are partially covered by the window controls because now our layout starts at the top of the window.

    It’s similar on Windows, with the difference that the close, maximize, and minimize buttons appear on the right side, grouped together with the PWA control buttons:

    Screenshot of the 1DIV app thumbnail display using Window Controls Overlay on the Windows operating system. The separate top bar area is gone, but the window controls are now blocking some of the app’s content.

    Using CSS to keep clear of the window controls

    Along with the feature, new CSS environment variables have been introduced:

    • titlebar-area-x
    • titlebar-area-y
    • titlebar-area-width
    • titlebar-area-height

    You use these variables with the CSS env() function to position your content where the title bar would have been while ensuring it won’t overlap with the window controls. In our case, we’ll use two of the variables to position our header, which contains the logo, search bar, and NEW button. 

    header {
      position: absolute;
      left: env(titlebar-area-x, 0);
      width: env(titlebar-area-width, 100%);
      height: var(--toolbar-height);
    }
    

    The titlebar-area-x variable gives us the distance from the left of the viewport to where the title bar would appear, and titlebar-area-width is its width. (Remember, this is not equivalent to the width of the entire viewport, just the title bar portion, which as noted earlier, doesn’t include the window controls.)

    By doing this, we make sure our content remains fully visible. We’re also defining fallback values (the second parameter in the env() function) for when the variables are not defined (such as on non-supporting browsers, or when the Window Controls Overlay feature is disabled).

    Screenshot of the 1DIV app thumbnail view on macOS with Window Controls Overlay and our CSS updated. The app content that the window controls had been blocking has been repositioned.
    Screenshot of the 1DIV app thumbnail view on the Windows operating system with Window Controls Overlay and our updated CSS. The app content that the window controls had been blocking has been repositioned.

    Now our header adapts to its surroundings, and it doesn’t feel like the window control buttons have been added as an afterthought. The app looks a lot more like a native app.

    Changing the window controls background color so it blends in

    Now let’s take a closer look at our second page: the CSS playground editor.

    Screenshots of the 1DIV app CSS editor view with Window Controls Overlay in macOS and Windows, respectively. The window controls overlay areas have a solid white background color, which contrasts with the hot pink color of the example CSS design displayed in the editor.

    Not great. Our CSS demo area does go all the way to the top, which is what we wanted, but the way the window controls appear as white rectangles on top of it is quite jarring.

    We can fix this by changing the app’s theme color. There are a couple of ways to define it:

    • PWAs can define a theme color in the web app manifest file using the theme_color manifest member. This color is then used by the OS in different ways. On desktop platforms, it is used to provide a background color to the title bar and window controls.
    • Websites can use the theme-color meta tag as well. It’s used by browsers to customize the color of the UI around the web page. For PWAs, this color can override the manifest theme_color.
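    For reference, the meta tag version is a one-liner in the page’s head (the white value here simply mirrors the manifest theme_color used in this demo):

```html
<!-- Placed in the document <head>; for PWAs this value can override
     the manifest's theme_color at runtime. -->
<meta name="theme-color" content="#ffffff">
```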

    In our case, we can set the manifest theme_color to white to provide the right default color for our app. The OS will read this color value when the app is installed and use it to make the window controls background color white. This color works great for our main page with the list of demos.

    The theme-color meta tag can be changed at runtime, using JavaScript. So we can do that to override the white with the right demo background color when one is opened.

    Here is the function we’ll use:

    function themeWindow(bgColor) {
      document.querySelector("meta[name=theme-color]").setAttribute('content', bgColor);
    }
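    For example, if each saved design stores its background color, we might derive the value to pass in. This is only a sketch: pickThemeColor and the design object’s shape are our own assumptions, not part of the 1DIV source.

```javascript
// Hypothetical helper (not from the 1DIV source): choose the color to
// pass to themeWindow(), falling back to the app's default white when
// a design has no background color of its own.
function pickThemeColor(design) {
  return (design && design.backgroundColor) || "#ffffff";
}

// Usage sketch: when the editor opens a design, retheme the window.
// themeWindow(pickThemeColor(currentDesign));
```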

    With this in place, we can imagine how using color and CSS transitions can produce a smooth change from the list page to the demo page, and enable the window control buttons to blend in with the rest of the app’s interface.

    Screenshot of the 1DIV app CSS editor view on the Windows operating system with Window Controls Overlay and updated CSS demonstrating how the window control buttons blend in with the rest of the app’s interface.

    Dragging the window

    Now, getting rid of the title bar entirely does have an important accessibility consequence: it’s much more difficult to move the application window around.

    The title bar provides a sizable area for users to click and drag, but by using the Window Controls Overlay feature, this area becomes limited to where the control buttons are, and users have to very precisely aim between these buttons to move the window.

    Fortunately, this can be fixed using CSS with the app-region property. This property is, for now, only supported in Chromium-based browsers and needs the -webkit- vendor prefix. 

    To make any element of the app become a dragging target for the window, we can use the following: 

    -webkit-app-region: drag;

    It is also possible to explicitly make an element non-draggable: 

    -webkit-app-region: no-drag; 

    These options can be useful for us. We can make the entire header a dragging target, but make the search field and NEW button within it non-draggable so they can still be used as normal.

    However, because the editor page doesn’t display the header, users wouldn’t be able to drag the window while editing code. So let’s use a different approach. We’ll create another element before our header, also absolutely positioned, and dedicated to dragging the window.

    <div class="drag"></div>
    <header>...</header>

    .drag {
      position: absolute;
      top: 0;
      width: 100%;
      height: env(titlebar-area-height, 0);
      -webkit-app-region: drag;
    }

    With the above code, we’re making the draggable area span the entire viewport width, and using the titlebar-area-height variable to make it as tall as what the title bar would have been. This way, our draggable area is aligned with the window control buttons as shown below.

    And, now, to make sure our search field and button remain usable:

    header .search,
    header .new {
      -webkit-app-region: no-drag;
    }

    With the above code, users can click and drag where the title bar used to be. It is an area that users expect to be able to use to move windows on desktop, and we’re not breaking this expectation, which is good.

    An animated view of the 1DIV app being dragged across a Windows desktop with the mouse.

    Adapting to window resize

    It may be useful for an app to know both whether the window controls overlay is visible and when its size changes. In our case, if the user made the window very narrow, there wouldn’t be enough space for the search field, logo, and button to fit, so we’d want to push them down a bit.

    The Window Controls Overlay feature comes with a JavaScript API we can use to do this: navigator.windowControlsOverlay.

    The API provides three interesting things:

    • navigator.windowControlsOverlay.visible lets us know whether the overlay is visible.
    • navigator.windowControlsOverlay.getBoundingClientRect() lets us know the position and size of the title bar area.
    • navigator.windowControlsOverlay.ongeometrychange lets us know when the size or visibility changes.

    Let’s use this to be aware of the size of the title bar area and move the header down if it’s too narrow.

    if (navigator.windowControlsOverlay) {
      navigator.windowControlsOverlay.addEventListener('geometrychange', () => {
        const { width } = navigator.windowControlsOverlay.getBoundingClientRect();
        document.body.classList.toggle('narrow', width < 250);
      });
    }

    In the example above, we set the narrow class on the body of the app if the title bar area is narrower than 250px. We could do something similar with a media query, but using the windowControlsOverlay API has two advantages for our use case:

    • It’s only fired when the feature is supported and used; we don’t want to adapt the design otherwise.
    • We get the size of the title bar area across operating systems, which is great because the size of the window controls is different on Mac and Windows. Using a media query wouldn’t make it possible for us to know exactly how much space remains.

    .narrow header {
      top: env(titlebar-area-height, 0);
      left: 0;
      width: 100%;
    }

    Using the above CSS code, we can move our header down to stay clear of the window control buttons when the window is too narrow, and move the thumbnails down accordingly.

    A screenshot of the 1DIV app on Windows showing the app’s content adjusted for a much narrower viewport.
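    One caveat: the geometrychange event only fires when the geometry actually changes, so we may also want to run the same check once when the app loads, and respect the overlay’s visible flag. A sketch under those assumptions (the updateNarrow name is ours; the API calls are from the Window Controls Overlay spec):

```javascript
// Sketch: factor the check into one function so it can run both on
// load and whenever the overlay geometry changes. updateNarrow is a
// name we made up; navigator.windowControlsOverlay is the real API.
function updateNarrow() {
  const overlay = navigator.windowControlsOverlay;
  const { width } = overlay.getBoundingClientRect();
  // Only flag "narrow" when the overlay is actually visible.
  document.body.classList.toggle("narrow", overlay.visible && width < 250);
}

if (typeof navigator !== "undefined" && navigator.windowControlsOverlay) {
  updateNarrow(); // account for the initial state
  navigator.windowControlsOverlay.addEventListener("geometrychange", updateNarrow);
}
```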

    Thirty pixels of exciting design opportunities


    Using the Window Controls Overlay feature, we were able to take our simple demo app and turn it into something that feels so much more integrated on desktop devices. Something that reaches out of the usual window constraints and provides a custom experience for its users.

    In reality, this feature only gives us about 30 pixels of extra room and comes with challenges on how to deal with the window controls. And yet, this extra room and those challenges can be turned into exciting design opportunities.

    More devices of all shapes and forms get invented all the time, and the web keeps on evolving to adapt to them. New features get added to the web platform to allow us, web authors, to integrate more and more deeply with those devices. From watches or foldable devices to desktop computers, we need to evolve our design approach for the web. Building for the web now lets us think outside the rectangular box.

    So let’s embrace this. Let’s use the standard technologies already at our disposal, and experiment with new ideas to provide tailored experiences for all devices, all from a single codebase!


    If you get a chance to try the Window Controls Overlay feature and have feedback about it, you can open issues on the spec’s repository. It’s still early in the development of this feature, and you can help make it even better. Or, you can take a look at the feature’s existing documentation, or this demo app and its source code.

  • How to Sell UX Research with Two Simple Questions

    Do you find yourself designing screens with only a vague idea of how the things on the screen relate to the things elsewhere in the system? Do you leave stakeholder meetings with unclear directives that often seem to contradict previous conversations? You know a better understanding of user needs would help the team get clear on what you are actually trying to accomplish, but time and budget for research is tight. When it comes to asking for more direct contact with your users, you might feel like poor Oliver Twist, timidly asking, “Please, sir, I want some more.” 

    Here’s the trick. You need to get stakeholders themselves to identify high-risk assumptions and hidden complexity, so that they become just as motivated as you to get answers from users. Basically, you need to make them think it’s their idea. 

    In this article, I’ll show you how to collaboratively expose misalignment and gaps in the team’s shared understanding by bringing the team together around two simple questions:

    1. What are the objects?
    2. What are the relationships between those objects?

    A gauntlet between research and screen design

    These two questions align to the first two steps of the ORCA process, which might become your new best friend when it comes to reducing guesswork. Wait, what’s ORCA?! Glad you asked.

    ORCA stands for Objects, Relationships, CTAs, and Attributes, and it outlines a process for creating solid object-oriented user experiences. Object-oriented UX is my design philosophy. ORCA is an iterative methodology for synthesizing user research into an elegant structural foundation to support screen and interaction design. OOUX and ORCA have made my work as a UX designer more collaborative, effective, efficient, fun, strategic, and meaningful.

    The ORCA process has four iterative rounds and a whopping fifteen steps. In each round we get more clarity on our Os, Rs, Cs, and As.

    The four rounds and fifteen steps of the ORCA process. In the OOUX world, we love color-coding. Blue is reserved for objects! (Yellow is for core content, pink is for metadata, and green is for calls-to-action. Learn more about the color-coded object map and connecting CTAs to objects.)

    I sometimes say that ORCA is a “garbage in, garbage out” process. To ensure that the testable prototype produced in the final round actually tests well, the process needs to be fed by good research. But if you don’t have a ton of research, the beginning of the ORCA process serves another purpose: it helps you sell the need for research.

    ORCA strengthens the weak spot between research and design by helping distill research into solid information architecture—scaffolding for the screen design and interaction design to hang on.

    In other words, the ORCA process serves as a gauntlet between research and design. With good research, you can gracefully ride the killer whale from research into design. But without good research, the process effectively spits you back into research with a cache of specific open questions.

    Getting in the same curiosity-boat

    What gets us into trouble is not what we don’t know. It’s what we know for sure that just ain’t so.

    Mark Twain

    The first two steps of the ORCA process—Object Discovery and Relationship Discovery—shine a spotlight on the dark, dusty corners of your team’s misalignments and any inherent complexity that’s been swept under the rug. It begins to expose what this classic comic so beautifully illustrates:

    The original “Tree Swing Project Management” cartoon dates back to the 1960s or 1970s and has no artist attribution we could find.

    This is one reason why so many UX designers are frustrated in their job and why many projects fail. And this is also why we often can’t sell research: every decision-maker is confident in their own mental picture. 

    Once we expose hidden fuzzy patches in each picture and the differences between them all, the case for user research makes itself.

    But how we do this is important. However much we might want to, we can’t just tell everyone, “YOU ARE WRONG!” Instead, we need to facilitate and guide our team members to self-identify holes in their picture. When stakeholders take ownership of assumptions and gaps in understanding, BAM! Suddenly, UX research is not such a hard sell, and everyone is aboard the same curiosity-boat.

    Say your users are doctors. And you have no idea how doctors use the system you are tasked with redesigning.

    You might try to sell research by honestly saying: “We need to understand doctors better! What are their pain points? How do they use the current app?” But here’s the problem with that. Those questions are vague, and the answers to them don’t feel acutely actionable.

    Instead, you want your stakeholders themselves to ask super-specific questions. This is more like the kind of conversation you need to facilitate. Let’s listen in:

    “Wait a sec, how often do doctors share patients? Does a patient in this system have primary and secondary doctors?”

    “Can a patient even have more than one primary doctor?”

    “Is it a ‘primary doctor’ or just a ‘primary caregiver’… Can’t that role be a nurse practitioner?”

    “No, caregivers are something else… That’s the patient’s family contacts, right?”

    “So are caregivers in scope for this redesign?”

    “Yeah, because if a caregiver is present at an appointment, the doctor needs to note that. Like, tag the caregiver on the note… Or on the appointment?”

    Now we are getting somewhere. Do you see how powerful it can be getting stakeholders to debate these questions themselves? The diabolical goal here is to shake their confidence—gently and diplomatically.

    When these kinds of questions bubble up collaboratively and come directly from the mouths of your stakeholders and decision-makers, suddenly, designing screens without knowing the answers to these questions seems incredibly risky, even silly.

    If we create software without understanding the real-world information environment of our users, we will likely create software that does not align to the real-world information environment of our users. And this will, hands down, result in a more confusing, more complex, and less intuitive software product.

    The two questions

    But how do we get to these kinds of meaty questions diplomatically, efficiently, collaboratively, and reliably?

    We can do this by starting with those two big questions that align to the first two steps of the ORCA process:

    1. What are the objects?
    2. What are the relationships between those objects?

    In practice, getting to these answers is easier said than done. I’m going to show you how these two simple questions can provide the outline for an Object Definition Workshop. During this workshop, these “seed” questions will blossom into dozens of specific questions and shine a spotlight on the need for more user research.

    Prep work: Noun foraging

    In the next section, I’ll show you how to run an Object Definition Workshop with your stakeholders (and entire cross-functional team, hopefully). But first, you need to do some prep work.

    Basically, look for nouns that are particular to the business or industry of your project, and do it across at least a few sources. I call this noun foraging.

    Here are just a few great noun foraging sources:

    • the product’s marketing site
    • the product’s competitors’ marketing sites (competitive analysis, anyone?)
    • the existing product (look at labels!)
    • user interview transcripts
    • notes from stakeholder interviews or vision docs from stakeholders

    Put your detective hat on, my dear Watson. Get resourceful and leverage what you have. If all you have is a marketing website, some screenshots of the existing legacy system, and access to customer service chat logs, then use those.

    As you peruse these sources, watch for the nouns that are used over and over again, and start listing them (preferably on blue sticky notes if you’ll be creating an object map later!).

    You’ll want to focus on nouns that might represent objects in your system. If you are having trouble determining if a noun might be object-worthy, remember the acronym SIP and test for:

    1. Structure
    2. Instances
    3. Purpose

    Think of a library app, for example. Is “book” an object?

    Structure: can you think of a few attributes for this potential object? Title, author, publish date… Yep, it has structure. Check!

    Instances: what are some examples of this potential “book” object? Can you name a few? The Alchemist, Ready Player One, Everybody Poops… OK, check!

    Purpose: why is this object important to the users and business? Well, “book” is what our library client is providing to people and books are why people come to the library… Check, check, check!

    SIP: Structure, Instances, and Purpose! (Here’s a flowchart where I elaborate even more on SIP.)

    As you are noun foraging, focus on capturing the nouns that have SIP. Avoid capturing components like dropdowns, checkboxes, and calendar pickers—your UX system is not your design system! Components are just the packaging for objects—they are a means to an end. No one is coming to your digital place to play with your dropdown! They are coming for the VALUABLE THINGS and what they can do with them. Those things, or objects, are what we are trying to identify.

    Let’s say we work for a startup disrupting the email experience. This is how I’d start my noun foraging.

    First I’d look at my own email client, which happens to be Gmail. I’d then look at Outlook and the new HEY email. I’d look at Yahoo, Hotmail…I’d even look at Slack and Basecamp and other so-called “email replacers.” I’d read some articles, reviews, and forum threads where people are complaining about email. While doing all this, I would look for and write down the nouns.

    (Before moving on, feel free to go noun foraging for this hypothetical product, too, and then scroll down to see how much our lists match up. Just don’t get lost in your own emails! Come back to me!)

    Drumroll, please…

    Here are a few nouns I came up with during my noun foraging:

    • email message
    • thread
    • contact
    • client
    • rule/automation
    • email address that is not a contact?
    • contact groups
    • attachment
    • Google doc file / other integrated file
    • newsletter? (HEY treats this differently)
    • saved responses and templates

    Scan your list of nouns and pick out words that you are completely clueless about. In our email example, it might be client or automation. Do as much homework as you can before your session with stakeholders: google what’s googleable. But other terms might be so specific to the product or domain that you need to have a conversation about them.

    Aside: here are some real nouns foraged during my own past project work that I needed my stakeholders to help me understand:

    • Record Locator
    • Incentive Home
    • Augmented Line Item
    • Curriculum-Based Measurement Probe

    This is really all you need to prepare for the workshop session: a list of nouns that represent potential objects and a short list of nouns that need to be defined further.

    Facilitate an Object Definition Workshop

    You could actually start your workshop with noun foraging—this activity can be done collaboratively. If you have five people in the room, pick five sources, assign one to every person, and give everyone ten minutes to find the objects within their source. When the time’s up, come together and find the overlap. Affinity mapping is your friend here!

    If your team is short on time and might be reluctant to do this kind of grunt work (which is usually the case) do your own noun foraging beforehand, but be prepared to show your work. I love presenting screenshots of documents and screens with all the nouns already highlighted. Bring the artifacts of your process, and start the workshop with a five-minute overview of your noun foraging journey.

    HOT TIP: before jumping into the workshop, frame the conversation as a requirements-gathering session to help you better understand the scope and details of the system. You don’t need to let them know that you’re looking for gaps in the team’s understanding so that you can prove the need for more user research—that will be our little secret. Instead, go into the session optimistically, as if your knowledgeable stakeholders and PMs and biz folks already have all the answers. 

    Then, let the question whack-a-mole commence.

    1. What is this thing?

    Want to have some real fun? At the beginning of your session, ask stakeholders to privately write definitions for the handful of obscure nouns you might be uncertain about. Then, have everyone show their cards at the same time and see if you get different definitions (you will). This is gold for exposing misalignment and starting great conversations.

    As your discussion unfolds, capture any agreed-upon definitions. And when uncertainty emerges, quietly (but visibly) start an “open questions” parking lot. 😉

    After definitions solidify, here’s a great follow-up:

    2. Do our users know what these things are? What do users call this thing?

    Stakeholder 1: They probably call email clients “apps.” But I’m not sure.

    Stakeholder 2: Automations are often called “workflows,” I think. Or, maybe users think workflows are something different.

    If a more user-friendly term emerges, ask the group if they can agree to use only that term moving forward. This way, the team can better align to the users’ language and mindset.

    OK, moving on. 

    If you have two or more objects that seem to overlap in purpose, ask one of these questions:

    3. Are these the same thing? Or are these different? If they are not the same, how are they different?

    You: Is a saved response the same as a template?

    Stakeholder 1: Yes! Definitely.

    Stakeholder 2: I don’t think so… A saved response is text with links and variables, but a template is more about the look and feel, like default fonts, colors, and placeholder images. 

    Continue to build out your growing glossary of objects. And continue to capture areas of uncertainty in your “open questions” parking lot.

    If you successfully determine that two similar things are, in fact, different, here’s your next follow-up question:

    4. What’s the relationship between these objects?

    You: Are saved responses and templates related in any way?

    Stakeholder 3:  Yeah, a template can be applied to a saved response.

    You, always with the follow-ups: When is the template applied to a saved response? Does that happen when the user is constructing the saved response? Or when they apply the saved response to an email? How does that actually work?

    Listen. Capture uncertainty. Once the list of “open questions” grows to a critical mass, pause to start assigning questions to groups or individuals. Some questions might be for the dev team (hopefully at least one developer is in the room with you). One question might be specifically for someone who couldn’t make it to the workshop. And many questions will need to be labeled “user.” 

    Do you see how we are building up to our UXR sales pitch?

    5. Is this object in scope?

    Your next question narrows the team’s focus toward what’s most important to your users. You can simply ask, “Are saved responses in scope for our first release?” But I’ve got a better, more devious strategy.

    By now, you should have a list of clearly defined objects. Ask participants to sort these objects from most to least important, either in small breakout groups or individually. Then, like you did with the definitions, have everyone reveal their sort order at once. Surprisingly—or not so surprisingly—it’s not unusual for the VP to rank something like “saved responses” as #2 while everyone else puts it at the bottom of the list. Try not to look too smug as you inevitably expose more misalignment.

    I did this for a startup a few years ago. We posted the three groups’ wildly different sort orders on the whiteboard.

    Here’s a snippet of the very messy middle from this session: three columns of object cards, showing the same cards prioritized completely differently by three different groups.

    The CEO stood back, looked at it, and said, “This is why we haven’t been able to move forward in two years.”

    Admittedly, it’s tragic to hear that, but as a professional, it feels pretty awesome to be the one who facilitated a watershed realization.

    Once you have a good idea of the in-scope, clearly defined things, you can move on to deeper relationship mapping.

    6. Create a visual representation of the objects’ relationships

    We’ve already done a bit of this while trying to determine if two things are different, but this time, ask the team about every potential relationship. For each object, ask how it relates to all the other objects. In what ways are the objects connected? To visualize all the connections, pull out your trusty boxes-and-arrows technique. Here, we are connecting our objects with verbs. I like to keep my verbs to simple “has a” and “has many” statements.

    A work-in-progress system model of our new email solution.
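The “has a” and “has many” statements above can also be captured in code. Here’s a minimal sketch in TypeScript, using hypothetical types for the email example; every name and field here is illustrative, not taken from any real product.

```typescript
// Hypothetical object model for the email example. "has many" becomes
// an array field; "has a" becomes a single (possibly optional) field.
interface Attachment {
  filename: string;
}

interface Template {
  name: string;
  defaultFont: string;
}

// A saved response HAS MANY attachments and (assuming the workshop
// answer above) HAS A template.
interface SavedResponse {
  body: string;
  attachments: Attachment[]; // "has many"
  template?: Template;       // "has a", optional
}

// An email HAS MANY attachments and MAY use a saved response.
interface Email {
  to: string[];
  subject: string;
  attachments: Attachment[];
  savedResponse?: SavedResponse;
}

const reply: Email = {
  to: ["client@example.com"],
  subject: "Re: invoice",
  attachments: [],
  savedResponse: {
    body: "Thanks for reaching out!",
    attachments: [{ filename: "ProfessionalImage.jpg" }],
  },
};
```

Writing the model down this way tends to surface the same open questions as the whiteboard does: the moment you must choose between `template?: Template` and `templates: Template[]`, you’ve found another question for your parking lot.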

    This system modeling activity brings up all sorts of new questions:

    • Can a saved response have attachments?
    • Can a saved response use a template? If so, if an email uses a saved response with a template, can the user override that template?
    • Do users want to see all the emails they sent that included a particular attachment? For example, “show me all the emails I sent with ProfessionalImage.jpg attached. I’ve changed my professional photo and I want to alert everyone to update it.” 

    Solid answers might emerge directly from the workshop participants. Great! Capture that new shared understanding. But when uncertainty surfaces, continue to add questions to your growing parking lot.

    Light the fuse

    You’ve positioned the explosives all along the floodgates. Now you simply have to light the fuse and BOOM. Watch the buy-in for user research flooooow.

    Before your workshop wraps up, have the group reflect on the list of open questions. Make plans for getting answers internally, then focus on the questions that need to be brought before users.

    Here’s your final step. Take those questions you’ve compiled for user research and discuss the level of risk associated with NOT answering them. Ask, “if we design without an answer to this question, if we make up our own answer and we are wrong, how bad might that turn out?” 

    With this methodology, we are cornering our decision-makers into advocating for user research as they themselves label questions as high-risk. Sorry, not sorry. 

    Now is your moment of truth. With everyone in the room, ask for a reasonable budget of time and money to conduct 6–8 user interviews focused specifically on these questions. 

    HOT TIP: if you are new to UX research, please note that you’ll likely need to rephrase the questions that came up during the workshop before you present them to users. Make sure your questions are open-ended and don’t lead the user into any default answers.

    Final words: Hold the screen design!

    Seriously, if at all possible, do not ever design screens again without first answering these fundamental questions: what are the objects and how do they relate?

    I promise you this: if you can secure a shared understanding between the business, design, and development teams before you start designing screens, you will have less heartache and save more time and money, and (it almost feels like a bonus at this point!) users will be more receptive to what you put out into the world. 

    I sincerely hope this helps you win time and budget to go talk to your users and gain clarity on what you are designing before you start building screens. If you find success using noun foraging and the Object Definition Workshop, there’s more where that came from in the rest of the ORCA process, which will help prevent even more late-in-the-game scope tugs-of-war and strategy pivots. 

    All the best of luck! Now go sell research!

  • A Content Model Is Not a Design System

    Do you remember when having a great website was enough? Now, people are getting answers from Siri, Google search snippets, and mobile apps, not just our websites. Forward-thinking organizations have adopted an omnichannel content strategy, whose mission is to reach audiences across multiple digital channels and platforms.

    But how do you set up a content management system (CMS) to reach your audience now and in the future? I learned the hard way that creating a content model—a definition of content types, attributes, and relationships that lets people and systems understand content—with my more familiar design-system thinking would capsize my customer’s omnichannel content strategy. You can avoid that outcome by creating content models that are semantic and that also connect related content. 

    I recently had the opportunity to lead the CMS implementation for a Fortune 500 company. The client was excited by the benefits of an omnichannel content strategy, including content reuse, multichannel marketing, and robot delivery—designing content to be intelligible to bots, Google knowledge panels, snippets, and voice user interfaces. 

    A content model is a critical foundation for an omnichannel content strategy, and for our content to be understood by multiple systems, the model needed semantic types—types named according to their meaning instead of their presentation. Our goal was to let authors create content and reuse it wherever it was relevant. But as the project proceeded, I realized that supporting content reuse at the scale that my customer needed required the whole team to recognize a new pattern.

    Despite our best intentions, we kept drawing from what we were more familiar with: design systems. Unlike web-focused content strategies, an omnichannel content strategy can’t rely on WYSIWYG tools for design and layout. Our tendency to approach the content model with our familiar design-system thinking constantly led us to veer away from one of the primary purposes of a content model: delivering content to audiences on multiple marketing channels.

    Two essential principles for an effective content model

    We needed to help our designers, developers, and stakeholders understand that we were doing something very different from their prior web projects, where it was natural for everyone to think about content as visual building blocks fitting into layouts. The previous approach was not only more familiar but also more intuitive—at least at first—because it made the designs feel more tangible. We discovered two principles that helped the team understand how a content model differs from the design systems that we were used to:

    1. Content models must define semantics instead of layout.
    2. Content models should connect content that belongs together.

    Semantic content models

    A semantic content model uses type and attribute names that reflect the meaning of the content, not how it will be displayed. For example, in a nonsemantic model, teams might create types like teasers, media blocks, and cards. Although these types might make it easy to lay out content, they don’t help delivery channels understand the content’s meaning, which closes the door on presenting that content in other marketing channels. In contrast, a semantic content model uses type names like product, service, and testimonial so that each delivery channel can understand the content and use it as it sees fit. 
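To make the contrast concrete, here’s a small TypeScript sketch of the two naming approaches. The type and field names are assumptions for illustration; the semantic one loosely follows Schema.org’s Product vocabulary.

```typescript
// Nonsemantic: named for layout. What is "a card" to a voice assistant
// or a search-engine bot? The name says nothing about meaning.
interface Card {
  heading: string;
  body: string;
  imageUrl?: string;
}

// Semantic: named for meaning, loosely following Schema.org's Product.
// Any delivery channel can understand what this content IS.
interface Product {
  name: string;
  description: string;
  image?: string;
}

// Because the model carries meaning, each channel decides presentation
// for itself. A voice interface might read a product like this:
function toVoiceAnswer(p: Product): string {
  return `${p.name}: ${p.description}`;
}

const spokenAnswer = toVoiceAnswer({
  name: "Acme Mail",
  description: "An email client.",
}); // "Acme Mail: An email client."
```

Note that the website can still render a `Product` inside a card layout; the point is that the layout decision lives in the channel, not in the content model.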

    When you’re creating a semantic content model, a great place to start is to look over the types and properties defined by Schema.org, a community-driven resource for type definitions that are intelligible to platforms like Google search.

    A semantic content model has several benefits:

    • Even if your team doesn’t care about omnichannel content, a semantic content model decouples content from its presentation so that teams can evolve the website’s design without needing to refactor its content. In this way, content can withstand disruptive website redesigns. 
    • A semantic content model also provides a competitive edge. By adding structured data based on Schema.org’s types and properties, a website can provide hints to help Google understand the content, display it in search snippets or knowledge panels, and use it to answer voice-interface user questions. Potential visitors could discover your content without ever setting foot in your website.
    • Beyond those practical benefits, you’ll also need a semantic content model if you want to deliver omnichannel content. To use the same content in multiple marketing channels, delivery channels need to be able to understand it. For example, if your content model were to provide a list of questions and answers, it could easily be rendered on a frequently asked questions (FAQ) page, but it could also be used in a voice interface or by a bot that answers common questions.
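    The FAQ example in the last point can be sketched in a few lines. Assuming a hypothetical content type with a question and answer pair, the same item can feed Schema.org FAQPage structured data for search engines and a voice interface at the same time; the `FaqItem` shape here is illustrative, not a real CMS schema.

```typescript
// A semantic FAQ item: the question and its answer stay together.
interface FaqItem {
  question: string;
  answer: string;
}

const faq: FaqItem[] = [
  { question: "Can I reuse a saved response?", answer: "Yes, in any email." },
];

// Channel 1: Schema.org FAQPage structured data (JSON-LD) for search.
function toJsonLd(items: FaqItem[]): Record<string, unknown> {
  return {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    mainEntity: items.map((i) => ({
      "@type": "Question",
      name: i.question,
      acceptedAnswer: { "@type": "Answer", text: i.answer },
    })),
  };
}

// Channel 2: a voice interface or bot just needs the matching answer.
function answerFor(items: FaqItem[], q: string): string | undefined {
  return items.find((i) => i.question === q)?.answer;
}
```

Neither channel needed layout information, and neither had to reassemble a question with its answer, because the model never separated them.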

    For example, using a semantic content model for articles, events, people, and locations lets A List Apart provide cleanly structured data for search engines so that users can read the content on the website, in Google knowledge panels, and even with hypothetical voice interfaces in the future.

    Image showing an event in a CMS passing data to a Google knowledge panel, a website, and a voice interface

    Content models that connect

    After struggling to describe what makes a good content model, I’ve come to realize that the best models are those that are semantic and that also connect related content components (such as a FAQ item’s question and answer pair), instead of slicing up related content across disparate content components. A good content model connects content that should remain together so that multiple delivery channels can use it without needing to first put those pieces back together.

    Think about writing an article or essay. An article’s meaning and usefulness depends upon its parts being kept together. Would one of the headings or paragraphs be meaningful on its own without the context of the full article? On our project, our familiar design-system thinking often led us to want to create content models that would slice content into disparate chunks to fit the web-centric layout—much like separating an article from its headline. Because we were slicing content into standalone pieces based on layout, content that belonged together became difficult to manage and nearly impossible for multiple delivery channels to understand.

    To illustrate, let’s look at how connecting related content applies in a real-world scenario. The design team for our customer presented a complex layout for a software product page that included multiple tabs and sections. Our instincts were to follow suit with the content model. Shouldn’t we make it as easy and as flexible as possible to add any number of tabs in the future?

    Because our design-system instincts were so familiar, it felt like we needed a content type called “tab section” so that multiple tab sections could be added to a page. Each tab section would display various types of content. One tab might provide the software’s overview or its specifications. Another tab might provide a list of resources. 

    Our inclination to break down the content model into “tab section” pieces would have led to an unnecessarily complex model and a cumbersome editing experience, and it would have created content that other delivery channels couldn’t understand. For example, how would another system tell which “tab section” referred to a product’s specifications or its resource list? Would it have to resort to counting tab sections and content blocks? This would have prevented the tabs from ever being reordered, and it would have required adding logic to every other delivery channel to interpret the design system’s layout. Furthermore, if the customer later decided not to display this content in a tab layout, migrating to a new content model to reflect the redesign would have been tedious.

    Illustration showing a data tree flowing into a list of cards (data), flowing into a navigation menu on a website
    A content model based on design components is unnecessarily complex, and it’s unintelligible to systems.

    We had a breakthrough when we discovered that our customer had a specific purpose in mind for each tab: it would reveal specific information such as the software product’s overview, specifications, related resources, and pricing. Once implementation began, our inclination to focus on what was visual and familiar had obscured the intent of the designs. With a little digging, it didn’t take long to realize that the concept of tabs wasn’t relevant to the content model. What mattered was the meaning of the content that they were planning to display in the tabs.

    In fact, the customer could have decided to display this content in a different way—without tabs—somewhere else. This realization prompted us to define content types for the software product based on the meaningful attributes that the customer had wanted to render on the web. There were obvious semantic attributes like name and description as well as rich attributes like screenshots, software requirements, and feature lists. The software’s product information stayed together because it wasn’t sliced across separate components like “tab sections” that were derived from the content’s presentation. Any delivery channel—including future ones—could understand and present this content.

    Illustration showing a data tree flowing into a formatted list, flowing into a navigation menu on a website
    A good content model connects content that belongs together so it can be easily managed and reused.
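    The contrast between the two models can be sketched in TypeScript. All type and field names here are illustrative assumptions; the semantic fields loosely follow Schema.org’s SoftwareApplication properties.

```typescript
// Presentation-driven: other channels must count tabs and blocks to
// guess what anything means.
interface TabSection {
  title: string;
  blocks: string[];
}
interface TabbedPage {
  tabs: TabSection[];
}

// Meaning-driven: every attribute says what it is, so any channel
// (web page, bot, voice interface) can pick out what it needs.
interface SoftwareProduct {
  name: string;
  description: string;
  screenshots: string[];
  softwareRequirements: string[];
  featureList: string[];
}

const product: SoftwareProduct = {
  name: "Acme Mail",
  description: "An email client with saved responses and automations.",
  screenshots: ["inbox.png"],
  softwareRequirements: ["macOS 12+"],
  featureList: ["Saved responses", "Templates", "Automations"],
};

// The web channel can still render tabs, derived from semantic fields:
const specsTab: TabSection = {
  title: "Specifications",
  blocks: product.softwareRequirements,
};
```

The tab layout becomes a projection of the semantic model rather than the model itself, so dropping the tabs later requires no content migration.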

    Conclusion

    In this omnichannel marketing project, we discovered that the best way to keep our content model on track was to ensure that it was semantic (with type and attribute names that reflected the meaning of the content) and that it kept content together that belonged together (instead of fragmenting it). These two concepts curtailed our temptation to shape the content model based on the design. So if you’re working on a content model to support an omnichannel content strategy—or even if you just want to make sure that Google and other interfaces understand your content—remember:

    • A design system isn’t a content model. Team members may be tempted to conflate them and to make your content model mirror your design system, so you should protect the semantic value and contextual structure of the content strategy during the entire implementation process. This will let every delivery channel consume the content without needing a magic decoder ring.
    • If your team is struggling to make this transition, you can still reap some of the benefits by using Schema.org–based structured data in your website. Even if additional delivery channels aren’t on the immediate horizon, the benefit to search engine optimization is a compelling reason on its own.
    • Additionally, remind the team that decoupling the content model from the design will let them update the designs more easily because they won’t be held back by the cost of content migrations. They’ll be able to create new designs without the obstacle of compatibility between the design and the content, and they’ll be ready for the next big thing. 

    By rigorously advocating for these principles, you’ll help your team treat content the way that it deserves—as the most critical asset in your user experience and the best way to connect with your audience.

  • Design for Safety, An Excerpt

    Antiracist economist Kim Crayton says that “intention without strategy is chaos.” We’ve discussed how our biases, assumptions, and inattention toward marginalized and vulnerable groups lead to dangerous and unethical tech—but what, specifically, do we need to do to fix it? The intention to make our tech safer is not enough; we need a strategy.

    This chapter will equip you with that plan of action. It covers how to integrate safety principles into your design work in order to create tech that’s safe, how to convince your stakeholders that this work is necessary, and how to respond to the critique that what we actually need is more diversity. (Spoiler: we do, but diversity alone is not the antidote to fixing unethical, unsafe tech.)

    The process for inclusive safety

    When you are designing for safety, your goals are to:

    • identify ways your product can be used for abuse,
    • design ways to prevent the abuse, and
    • provide support for vulnerable users to reclaim power and control.

    The Process for Inclusive Safety is a tool to help you reach those goals (Fig 5.1). It’s a methodology I created in 2018 to capture the various techniques I was using when designing products with safety in mind. Whether you are creating an entirely new product or adding to an existing feature, the Process can help you make your product safe and inclusive. The Process includes five general areas of action:

    • Conducting research
    • Creating archetypes
    • Brainstorming problems
    • Designing solutions
    • Testing for safety
    Fig 5.1: Each aspect of the Process for Inclusive Safety can be incorporated into your design process where it makes the most sense for you. The times given are estimates to help you incorporate the stages into your design plan.

    The Process is meant to be flexible—it won’t make sense for teams to implement every step in some situations. Use the parts that are relevant to your unique work and context; this is meant to be something you can insert into your existing design practice.

    And once you use it, if you have an idea for making it better or simply want to provide context of how it helped your team, please get in touch with me. It’s a living document that I hope will continue to be a useful and realistic tool that technologists can use in their day-to-day work.

    If you’re working on a product specifically for a vulnerable group or for survivors of some form of trauma, such as an app for survivors of domestic violence, sexual assault, or drug addiction, be sure to read Chapter 7; it covers that situation explicitly, and such products should be handled a bit differently. The guidelines here are for prioritizing safety when designing a more general product with a wide user base (which, we already know from statistics, will include certain groups that should be protected from harm). Chapter 7 focuses on products that are specifically for vulnerable groups and people who have experienced trauma.

    Step 1: Conduct research

    Design research should include a broad analysis of how your tech might be weaponized for abuse as well as specific insights into the experiences of survivors and perpetrators of that type of abuse. At this stage, you and your team will investigate issues of interpersonal harm and abuse, and explore any other safety, security, or inclusivity issues that might be a concern for your product or service, like data security, racist algorithms, and harassment.

    Broad research

    Your project should begin with broad, general research into similar products and issues around safety and ethical concerns that have already been reported. For example, a team building a smart home device would do well to understand the multitude of ways that existing smart home devices have been used as tools of abuse. If your product will involve AI, seek to understand the potentials for racism and other issues that have been reported in existing AI products. Nearly all types of technology have some kind of potential or actual harm that’s been reported on in the news or written about by academics. Google Scholar is a useful tool for finding these studies.

    Specific research: Survivors

    When possible and appropriate, include direct research (surveys and interviews) with people who are experts in the forms of harm you have uncovered. Ideally, you’ll want to interview advocates working in the space of your research first so that you have a more solid understanding of the topic and are better equipped to not retraumatize survivors. If you’ve uncovered possible domestic violence issues, for example, the experts you’ll want to speak with are survivors themselves, as well as workers at domestic violence hotlines, shelters, other related nonprofits, and lawyers.

    Especially when interviewing survivors of any kind of trauma, it is important to pay people for their knowledge and lived experiences. Don’t ask survivors to share their trauma for free, as this is exploitative. While some survivors may not want to be paid, you should always make the offer in the initial ask. An alternative to payment is to donate to an organization working against the type of violence that the interviewee experienced. We’ll talk more about how to appropriately interview survivors in Chapter 6.

    Specific research: Abusers

    It’s unlikely that teams aiming to design for safety will be able to interview self-proclaimed abusers or people who have broken laws around things like hacking. Don’t make this a goal; rather, try to get at this angle in your general research. Aim to understand how abusers or bad actors weaponize technology to use against others, how they cover their tracks, and how they explain or rationalize the abuse.

    Step 2: Create archetypes

    Once you’ve finished conducting your research, use your insights to create abuser and survivor archetypes. Archetypes are not personas, as they’re not based on real people that you interviewed and surveyed. Instead, they’re based on your research into likely safety issues, much like when we design for accessibility: we don’t need to have found a group of blind or low-vision users in our interview pool to create a design that’s inclusive of them. Instead, we base those designs on existing research into what this group needs. Personas typically represent real users and include many details, while archetypes are broader and can be more generalized.

    The abuser archetype is someone who will look at the product as a tool to perform harm (Fig 5.2). They may be trying to harm someone they don’t know through surveillance or anonymous harassment, or they may be trying to control, monitor, abuse, or torment someone they know personally.

    Fig 5.2: Harry Oleson, an abuser archetype for a fitness product, is looking for ways to stalk his ex-girlfriend through the fitness apps she uses.

    The survivor archetype is someone who is being abused with the product. There are various situations to consider in terms of the archetype’s understanding of the abuse and how to put an end to it: Do they need proof of abuse they already suspect is happening, or are they unaware they’ve been targeted in the first place and need to be alerted (Fig 5.3)?

    Fig 5.3: The survivor archetype Lisa Zwaan suspects her husband is weaponizing their home’s IoT devices against her, but in the face of his insistence that she simply doesn’t understand how to use the products, she’s unsure. She needs some kind of proof of the abuse.

    You may want to make multiple survivor archetypes to capture a range of different experiences. They may know that the abuse is happening but not be able to stop it, like when an abuser locks them out of IoT devices; or they know it’s happening but don’t know how, such as when a stalker keeps figuring out their location (Fig 5.4). Include as many of these scenarios as you need to in your survivor archetype. You’ll use these later on when you design solutions to help your survivor archetypes achieve their goals of preventing and ending abuse.

    Fig 5.4: The survivor archetype Eric Mitchell knows he’s being stalked by his ex-boyfriend Rob but can’t figure out how Rob is learning his location information.

    It may be useful for you to create persona-like artifacts for your archetypes, such as the three examples shown. Instead of focusing on the demographic information we often see in personas, focus on their goals. The goals of the abuser will be to carry out the specific abuse you’ve identified, while the goals of the survivor will be to prevent abuse, understand that abuse is happening, make ongoing abuse stop, or regain control over the technology that’s being used for abuse. Later, you’ll brainstorm how to prevent the abuser’s goals and assist the survivor’s goals.

    And while the “abuser/survivor” model fits most cases, it doesn’t fit all, so modify it as you need to. For example, if you uncovered an issue with security, such as the ability for someone to hack into a home camera system and talk to children, the malicious hacker would get the abuser archetype and the child’s parents would get the survivor archetype.

    Step 3: Brainstorm problems

    After creating archetypes, brainstorm novel abuse cases and safety issues. “Novel” means things not found in your research; you’re trying to identify completely new safety issues that are unique to your product or service. The goal with this step is to exhaust every effort of identifying harms your product could cause. You aren’t worrying about how to prevent the harm yet—that comes in the next step.

    How could your product be used for any kind of abuse, outside of what you’ve already identified in your research? I recommend setting aside at least a few hours with your team for this process.

    If you’re looking for somewhere to start, try doing a Black Mirror brainstorm. This exercise is based on the show Black Mirror, which features stories about the dark possibilities of technology. Try to figure out how your product would be used in an episode of the show—the most wild, awful, out-of-control ways it could be used for harm. When I’ve led Black Mirror brainstorms, participants usually end up having a good deal of fun (which I think is great—it’s okay to have fun when designing for safety!). I recommend time-boxing a Black Mirror brainstorm to half an hour, and then dialing it back and using the rest of the time thinking of more realistic forms of harm.

    After you’ve identified as many opportunities for abuse as possible, you may still not feel confident that you’ve uncovered every potential form of harm. A healthy amount of anxiety is normal when you’re doing this kind of work. It’s common for teams designing for safety to worry, “Have we really identified every possible harm? What if we’ve missed something?” If you’ve spent at least four hours coming up with ways your product could be used for harm and have run out of ideas, go to the next step.

    It’s impossible to guarantee you’ve thought of everything; instead of aiming for 100 percent assurance, recognize that you’ve taken this time and have done the best you can, and commit to continuing to prioritize safety in the future. Once your product is released, your users may identify new issues that you missed; aim to receive that feedback graciously and course-correct quickly.

    Step 4: Design solutions

    At this point, you should have a list of ways your product can be used for harm as well as survivor and abuser archetypes describing opposing user goals. The next step is to identify ways to design against the identified abuser’s goals and to support the survivor’s goals. This step is a good one to insert alongside existing parts of your design process where you’re proposing solutions for the various problems your research uncovered.

    Some questions to ask yourself to help prevent harm and support your archetypes include:

    • Can you design your product in such a way that the identified harm cannot happen in the first place? If not, what roadblocks can you put up to prevent the harm from happening?
    • How can you make the victim aware that abuse is happening through your product?
    • How can you help the victim understand what they need to do to make the problem stop?
    • Can you identify any types of user activity that would indicate some form of harm or abuse? Could your product help the user access support?

    In some products, it’s possible to proactively recognize that harm is happening. For example, a pregnancy app might be modified to allow the user to report that they were the victim of an assault, which could trigger an offer of resources from local and national organizations. This sort of proactiveness is not always possible, but it’s worth taking a half hour to discuss whether any type of user activity would indicate some form of harm or abuse, and how your product could assist the user in receiving help in a safe manner.

    That said, use caution: you don’t want to do anything that could put a user in harm’s way if their devices are being monitored. If you do offer some kind of proactive help, always make it voluntary, and think through other safety issues, such as the need to keep the user in-app in case an abuser is checking their search history. We’ll walk through a good example of this in the next chapter.

    Step 5: Test for safety

    The final step is to test your prototypes from the point of view of your archetypes: the person who wants to weaponize the product for harm and the victim of the harm who needs to regain control over the technology. Just like any other kind of product testing, at this point you’ll aim to rigorously test out your safety solutions so that you can identify gaps and correct them, validate that your designs will help keep your users safe, and feel more confident releasing your product into the world.

    Ideally, safety testing happens along with usability testing. If you’re at a company that doesn’t do usability testing, you might be able to use safety testing to cleverly perform both; a user who goes through your design attempting to weaponize the product against someone else can also be encouraged to point out interactions or other elements of the design that don’t make sense to them.

    You’ll want to conduct safety testing on either your final prototype or the actual product if it’s already been released. There’s nothing wrong with testing an existing product that wasn’t designed with safety goals in mind from the outset—“retrofitting” it for safety is a good thing to do.

    Remember that testing for safety involves testing from the perspective of both an abuser and a survivor, though it may not always make sense to do both. Additionally, if you made multiple survivor archetypes to capture multiple scenarios, you’ll want to test from the perspective of each one.

    As with other sorts of usability testing, you as the designer are most likely too close to the product and its design by this point to be a valuable tester; you know the product too well. Instead of doing it yourself, set up testing as you would with other usability testing: find someone who is not familiar with the product and its design, set the scene, give them a task, encourage them to think out loud, and observe how they attempt to complete it.

    Abuser testing

    The goal of this testing is to understand how easy it is for someone to weaponize your product for harm. Unlike in usability testing, where you want the tester to succeed, here you want it to be impossible, or at least difficult, for them to achieve their goal. Reference the goals in the abuser archetype you created earlier, and use your product in an attempt to achieve them.

    For example, for a fitness app with GPS-enabled location features, we can imagine that the abuser archetype would have the goal of figuring out where his ex-girlfriend now lives. With this goal in mind, you’d try everything possible to figure out the location of another user who has their privacy settings enabled. You might try to see her running routes, view any available information on her profile, view anything available about her location (which she has set to private), and investigate the profiles of any other users somehow connected with her account, such as her followers.

    If by the end of this you’ve managed to uncover some of her location data, despite her having set her profile to private, you know now that your product enables stalking. Your next step is to go back to step 4 and figure out how to prevent this from happening. You may need to repeat the process of designing solutions and testing them more than once.

    Survivor testing

    Survivor testing involves identifying how to give information and power to the survivor. It might not always make sense based on the product or context. Thwarting the attempt of an abuser archetype to stalk someone also satisfies the goal of the survivor archetype to not be stalked, so separate testing wouldn’t be needed from the survivor’s perspective.

    However, there are cases where it makes sense. For example, for a smart thermostat, a survivor archetype’s goals would be to understand who or what is making the temperature change when they aren’t doing it themselves. You could test this by looking for the thermostat’s history log and checking for usernames, actions, and times; if you couldn’t find that information, you would have more work to do in step 4.

    Another goal might be regaining control of the thermostat once the survivor realizes the abuser is remotely changing its settings. Your test would involve attempting to figure out how to do this: are there instructions that explain how to remove another user and change the password, and are they easy to find? This might again reveal that more work is needed to make it clear to the user how they can regain control of the device or account.

    Stress testing

    To make your product more inclusive and compassionate, consider adding stress testing. This concept comes from Design for Real Life by Eric Meyer and Sara Wachter-Boettcher. The authors pointed out that personas typically center people who are having a good day—but real users are often anxious, stressed out, having a bad day, or even experiencing tragedy. These are called “stress cases,” and testing your products for users in stress-case situations can help you identify places where your design lacks compassion. Design for Real Life has more details about what it looks like to incorporate stress cases into your design as well as many other great tactics for compassionate design.

  • Sustainable Web Design, An Excerpt

    In the 1950s, many in the elite running community had begun to believe it wasn’t possible to run a mile in less than four minutes. Runners had been attempting it since the late 19th century and were beginning to draw the conclusion that the human body simply wasn’t built for the task. 

    But on May 6, 1954, Roger Bannister took everyone by surprise. It was a cold, wet day in Oxford, England—conditions no one expected to lend themselves to record-setting—and yet Bannister did just that, running a mile in 3:59.4 and becoming the first person in the record books to run a mile in under four minutes.

    This shift in the benchmark had profound effects; the world now knew that the four-minute mile was possible. Bannister’s record lasted only forty-six days before it was snatched away by Australian runner John Landy. Then a year later, three runners all beat the four-minute barrier together in the same race. Since then, over 1,400 runners have officially run a mile in under four minutes; the current record is 3:43.13, held by Moroccan athlete Hicham El Guerrouj.

    We achieve far more when we believe that something is possible, and we will believe it’s possible only when we see someone else has already done it—and as with human running speed, so it is with what we believe are the hard limits for how a website needs to perform.

    Establishing standards for a sustainable web

    In most major industries, the key metrics of environmental performance are fairly well established, such as miles per gallon for cars or energy per square meter for homes. The tools and methods for calculating those metrics are standardized as well, which keeps everyone on the same page when doing environmental assessments. In the world of websites and apps, however, we aren’t held to any particular environmental standards, and only recently have gained the tools and methods we need to even make an environmental assessment.

    The primary goal in sustainable web design is to reduce carbon emissions. However, it’s almost impossible to actually measure the amount of CO2 produced by a web product. We can’t measure the fumes coming out of the exhaust pipes on our laptops. The emissions of our websites are far away, out of sight and out of mind, coming out of power stations burning coal and gas. We have no way to trace the electrons from a website or app back to the power station where the electricity is being generated and actually know the exact amount of greenhouse gas produced. So what do we do? 

    If we can’t measure the actual carbon emissions, then we need to find what we can measure. The primary factors that could be used as indicators of carbon emissions are:

    1. Data transfer 
    2. Carbon intensity of electricity

    Let’s take a look at how we can use these metrics to quantify the energy consumption, and in turn the carbon footprint, of the websites and web apps we create.

    Data transfer

    Most researchers use kilowatt-hours per gigabyte (kWh/GB) as a metric of energy efficiency when measuring the amount of data transferred over the internet when a website or application is used. This provides a great reference point for energy consumption and carbon emissions. As a rule of thumb, the more data transferred, the more energy used in the data center, telecoms networks, and end user devices.

    For web pages, data transfer for a single visit can be most easily estimated by measuring the page weight, meaning the transfer size of the page in kilobytes the first time someone visits the page. It’s fairly easy to measure using the developer tools in any modern web browser. Often your web hosting account will include statistics for the total data transfer of any web application (Fig 2.1).

    Fig 2.1: The Kinsta hosting dashboard displays data transfer alongside traffic volumes. If you divide data transfer by visits, you get the average data per visit, which can be used as a metric of efficiency.
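
    To make the arithmetic concrete, here’s a minimal Python sketch of the average-data-per-visit calculation from Fig 2.1, with a rough energy estimate attached. The 0.81 kWh/GB intensity figure is an assumed placeholder for illustration; published estimates vary widely between studies.

    ```python
    # Hypothetical sketch: estimating energy from data transfer.
    # KWH_PER_GB is an assumed placeholder, not an authoritative figure.
    KWH_PER_GB = 0.81

    def average_data_per_visit_gb(total_transfer_gb, visits):
        """Average data transferred per visit, as in Fig 2.1."""
        return total_transfer_gb / visits

    def energy_kwh(data_gb, kwh_per_gb=KWH_PER_GB):
        """Estimated energy consumption for a given amount of data transfer."""
        return data_gb * kwh_per_gb

    # Example: 50 GB of monthly transfer across 20,000 visits
    per_visit_gb = average_data_per_visit_gb(50, 20_000)
    print(f"{per_visit_gb * 1024:.1f} MB per visit")
    print(f"{energy_kwh(per_visit_gb) * 1000:.2f} Wh per visit")
    ```

    Dividing total transfer by visits gives a per-visit efficiency figure that stays comparable even as traffic volumes change.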

    The nice thing about page weight as a metric is that it allows us to compare the efficiency of web pages on a level playing field without confusing the issue with constantly changing traffic volumes. 

    There is plenty of scope to reduce page weight. By early 2020, the median page weight was 1.97 MB for setups the HTTP Archive classifies as “desktop” and 1.77 MB for “mobile,” with desktop increasing 36 percent since January 2016 and mobile page weights nearly doubling in the same period (Fig 2.2). Roughly half of this data transfer is image files, making images the single biggest source of carbon emissions on the average website.

    History clearly shows us that our web pages can be smaller, if only we set our minds to it. While most technologies become ever more energy efficient, including the underlying technology of the web such as data centers and transmission networks, websites themselves are a technology that becomes less efficient as time goes on.

    Fig 2.2: The historical page weight data from HTTP Archive can teach us a lot about what is possible in the future.

    You might be familiar with the concept of performance budgeting as a way of focusing a project team on creating faster user experiences. For example, we might specify that the website must load in a maximum of one second on a broadband connection and three seconds on a 3G connection. Much like speed limits while driving, performance budgets are upper limits rather than vague suggestions, so the goal should always be to come in under budget.

    Designing for fast performance does often lead to reduced data transfer and emissions, but it isn’t always the case. Web performance is often more about the subjective perception of load times than it is about the true efficiency of the underlying system, whereas page weight and transfer size are more objective measures and more reliable benchmarks for sustainable web design. 

    We can set a page weight budget in reference to a benchmark of industry averages, using data from sources like HTTP Archive. We can also benchmark page weight against competitors or the old version of the website we’re replacing. For example, we might set a maximum page weight budget as equal to our most efficient competitor, or we could set the benchmark lower to guarantee we are best in class. 
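
    Checking a budget like this is easy to automate in a build step. Below is a small sketch; the budget figure and page weights are hypothetical examples, not real benchmarks.

    ```python
    # Hypothetical page weight budget check; numbers are illustrative only.
    PAGE_WEIGHT_BUDGET_KB = 900  # e.g. matched to your most efficient competitor

    page_weights_kb = {
        "/": 640,
        "/products": 1180,
        "/about": 410,
    }

    # Pages that exceed the budget, with the overshoot in KB
    over_budget = {path: kb - PAGE_WEIGHT_BUDGET_KB
                   for path, kb in page_weights_kb.items()
                   if kb > PAGE_WEIGHT_BUDGET_KB}

    for path, overshoot in over_budget.items():
        print(f"{path} exceeds the page weight budget by {overshoot} KB")
    ```

    Failing the build when `over_budget` is non-empty makes the budget an enforced upper limit rather than a vague suggestion.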

    If we want to take it to the next level, then we could also start looking at the transfer size of our web pages for repeat visitors. Although page weight for the first time someone visits is the easiest thing to measure, and easy to compare on a like-for-like basis, we can learn even more if we start looking at transfer size in other scenarios too. For example, visitors who load the same page multiple times will likely have a high percentage of the files cached in their browser, meaning they don’t need to transfer all of the files on subsequent visits. Likewise, a visitor who navigates to new pages on the same website will likely not need to load the full page each time, as some global assets from areas like the header and footer may already be cached in their browser. Measuring transfer size at this next level of detail can help us learn even more about how we can optimize efficiency for users who regularly visit our pages, and enable us to set page weight budgets for additional scenarios beyond the first visit.

    Page weight budgets are easy to track throughout a design and development process. Although they don’t tell us carbon emissions or energy consumption directly, they give us a clear indication of efficiency relative to other websites. And as transfer size is an effective analog for energy consumption, we can actually use it to estimate energy consumption too.

    In summary, reduced data transfer translates to energy efficiency, a key factor to reducing carbon emissions of web products. The more efficient our products, the less electricity they use, and the less fossil fuels need to be burned to produce the electricity to power them. But as we’ll see next, since all web products demand some power, it’s important to consider the source of that electricity, too.

    Carbon intensity of electricity

    Regardless of energy efficiency, the level of pollution caused by digital products depends on the carbon intensity of the energy being used to power them. Carbon intensity is a term used to define the grams of CO2 produced for every kilowatt-hour of electricity (gCO2/kWh). This varies widely, with renewable energy sources and nuclear having an extremely low carbon intensity of less than 10 gCO2/kWh (even when factoring in their construction), whereas fossil fuels have a very high carbon intensity of approximately 200–400 gCO2/kWh.

    Most electricity comes from national or state grids, where energy from a variety of different sources is mixed together with varying levels of carbon intensity. The distributed nature of the internet means that a single user of a website or app might be using energy from multiple different grids simultaneously; a website user in Paris uses electricity from the French national grid to power their home internet and devices, but the website’s data center could be in Dallas, USA, pulling electricity from the Texas grid, while the telecoms networks use energy from everywhere between Dallas and Paris.

    We don’t have control over the full energy supply of web services, but we do have some control over where we host our projects. With a data center using a significant proportion of the energy of any website, locating the data center in an area with low carbon energy will tangibly reduce its carbon emissions. Danish startup Tomorrow reports and maps this user-contributed data, and a glance at their map shows how, for example, choosing a data center in France will have significantly lower carbon emissions than a data center in the Netherlands (Fig 2.3).

    Fig 2.3: Tomorrow’s electricityMap shows live data for the carbon intensity of electricity by country.

    That said, we don’t want to locate our servers too far away from our users; it takes energy to transmit data through the telecoms networks, and the further the data travels, the more energy is consumed. Just like food miles, we can think of the distance from the data center to the website’s core user base as “megabyte miles”—and we want it to be as small as possible.

    Using the distance itself as a benchmark, we can use website analytics to identify the country, state, or even city where our core user group is located and measure the distance from that location to the data center used by our hosting company. This will be a somewhat fuzzy metric as we don’t know the precise center of mass of our users or the exact location of a data center, but we can at least get a rough idea. 

    For example, if a website is hosted in London but the primary user base is on the West Coast of the USA, then we could look up the distance from London to San Francisco, which is 5,300 miles. That’s a long way! We can see that hosting it somewhere in North America, ideally on the West Coast, would significantly reduce the distance and thus the energy used to transmit the data. In addition, locating our servers closer to our visitors helps reduce latency and delivers better user experience, so it’s a win-win.
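
    That distance check can be sketched with the standard haversine (great-circle) formula. The coordinates below are approximate city centers, not real data-center locations, so treat the result as the same kind of fuzzy metric described above.

    ```python
    # Rough "megabyte miles" estimate via the haversine formula.
    from math import radians, sin, cos, asin, sqrt

    EARTH_RADIUS_MILES = 3958.8

    def megabyte_miles(lat1, lon1, lat2, lon2):
        """Great-circle distance in miles between two points (haversine)."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

    london = (51.51, -0.13)           # approximate city center
    san_francisco = (37.77, -122.42)  # approximate city center
    print(round(megabyte_miles(*london, *san_francisco)))  # roughly 5,300 miles
    ```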

    Converting it back to carbon emissions

    If we combine carbon intensity with a calculation for energy consumption, we can calculate the carbon emissions of our websites and apps. A tool my team created does this by measuring the data transfer over the wire when loading a web page, calculating the amount of electricity associated, and then converting that into a figure for CO2 (Fig 2.4). It also factors in whether or not the web hosting is powered by renewable energy.

    If you want to take it to the next level and tailor the data more accurately to the unique aspects of your project, the Energy and Emissions Worksheet accompanying this book shows you how.

    Fig 2.4: The Website Carbon Calculator shows how the Riverford Organic website embodies their commitment to sustainability, being both low carbon and hosted in a data center using renewable energy.

    With the ability to calculate carbon emissions for our projects, we could actually take a page weight budget one step further and set carbon budgets as well. CO2 is not a metric commonly used in web projects; we’re more familiar with kilobytes and megabytes, and can fairly easily look at design options and files to assess how big they are. Translating that into carbon adds a layer of abstraction that isn’t as intuitive—but carbon budgets do focus our minds on the primary thing we’re trying to reduce, and support the core objective of sustainable web design: reducing carbon emissions.
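
    As a rough sketch of the conversion described above: estimate the energy for a page view from its transfer size, then multiply by a carbon intensity. Both the kWh/GB figure and the grid intensities below are illustrative assumptions, not the coefficients any real tool uses.

    ```python
    # Illustrative CO2-per-page-view estimate; all coefficients are assumptions.
    KWH_PER_GB = 0.81  # assumed energy intensity of data transfer (placeholder)

    GRID_INTENSITY_GCO2_PER_KWH = {  # illustrative values only
        "renewable": 10,
        "mixed_grid": 442,
        "coal_heavy": 820,
    }

    def co2_grams_per_view(page_weight_mb, grid="mixed_grid"):
        """Estimated grams of CO2 emitted by one page view."""
        energy_kwh = (page_weight_mb / 1024) * KWH_PER_GB
        return energy_kwh * GRID_INTENSITY_GCO2_PER_KWH[grid]

    # A 2 MB page served from a mixed grid vs. renewable-powered hosting:
    print(f"{co2_grams_per_view(2.0):.3f} g CO2 (mixed grid)")
    print(f"{co2_grams_per_view(2.0, 'renewable'):.3f} g CO2 (renewable)")
    ```

    Even with placeholder numbers, the spread between grids shows why hosting location matters as much as page weight.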

    Browser Energy

    Data transfer might be the simplest and most complete analog for energy consumption in our digital projects, but by giving us one number to represent the energy used in the data center, the telecoms networks, and the end user’s devices, it can’t offer us insights into the efficiency of any specific part of the system.

    One part of the system we can look at in more detail is the energy used by end users’ devices. As front-end web technologies become more advanced, the computational load is increasingly moving from the data center to users’ devices, whether they be phones, tablets, laptops, desktops, or even smart TVs. Modern web browsers allow us to implement more complex styling and animation on the fly using CSS and JavaScript. Furthermore, JavaScript libraries such as Angular and React allow us to create applications where the “thinking” work is done partly or entirely in the browser. 

    All of these advances are exciting and open up new possibilities for what the web can do to serve society and create positive experiences. However, more computation in the user’s web browser means more energy used by their devices. This has implications not just environmentally, but also for user experience and inclusivity. Applications that put a heavy processing load on the user’s device can inadvertently exclude users with older, slower devices and cause batteries on phones and laptops to drain faster. Furthermore, if we build web applications that require the user to have up-to-date, powerful devices, people throw away old devices much more frequently. This isn’t just bad for the environment, but it puts a disproportionate financial burden on the poorest in society.

    Partly because the tools are limited, and partly because there are so many different models of devices, it’s difficult to measure website energy consumption on end users’ devices. One tool we do currently have is the Energy Impact monitor inside the developer console of the Safari browser (Fig 2.5).

    Fig 2.5: The Energy Impact meter in Safari (on the right) shows how a website consumes CPU energy.

    You know when you load a website and your computer’s cooling fans start spinning so frantically you think it might actually take off? That’s essentially what this tool is measuring. 

    It shows us the percentage of CPU used and the duration of CPU usage when loading the web page, and uses these figures to generate an energy impact rating. It doesn’t give us precise data for the amount of electricity used in kilowatts, but the information it does provide can be used to benchmark how efficiently your websites use energy and set targets for improvement.

Search Engine Watch
Keep updated with major stories about search engine marketing and search engines as published by Search Engine Watch.
ClickZ News
Breaking news, information, and analysis.
Pcworld.com
PCWorld helps you navigate the PC ecosystem to find the products you want and the advice you need to get the job done.
  • This $35 job interview skills package can help you get your dream career

    If you’re trying to move up the corporate ladder, then you’re going to have to undertake a job interview or two. But preparation is key. Not only should you anticipate questions, but you should also prepare good answers well in advance. Want to learn more tips like these? Then the 2022 Premium Interviewing Skills Bundle may be your best bet, especially as it’s on sale for $34.99.

    The web-based package features 19 hours of comprehensive skills training that’ll get you ready to tackle any kind of interview, regardless of what it’s for. You’ll learn how to source jobs via LinkedIn, tips for better in-person interviews, best practices for virtual interviews, and find out how to analyze an interviewer so you can better frame your answers.

    The content is geared for the beginner, but can be taken by virtually anyone who wants to improve their interview skills. And each course is facilitated by experts such as Stefan Devito and Imran Afzal — both highly rated, each earning 4.6 out of 5 stars — so it’s a great way to increase your self-confidence.

    When you consider the amount of competition out there for jobs, you need to take every advantage you can get. And since this one costs so little — it works out to just $4.38 per course — it’s an opportunity you can’t afford to miss.

    The 2022 Premium Interviewing Skills Bundle – $34.99

    See Deal

    Prices subject to change.

  • ‘Gaming Chromebooks’ with RGB keyboards could be here soon

    Is it even possible to play computer games on a keyboard that doesn’t light up like a nuclear-powered Christmas tree? Yes, of course. But why would you want to? The inclusion of code supporting RGB-lit keyboards in the latest revision of Chrome OS indicates that long-awaited “gaming” Chromebook laptops are on their way. Possibly. Maybe. We’ll see.

    The report comes from 9to5Google, which spotted the new feature flag in the public-facing open-source code repository for Chromium (the project at the heart of the Chrome desktop and mobile browsers and the Chrome operating system used by Chromebooks). The code is simple at the moment, lacking any obvious tools for customization or game integration, but it’s undeniably there: someone at Google is thinking about RGB keyboard hardware.

    Surprisingly, this is just the latest in a long list of indications that Google wants its Chrome OS platform to have more gaming prowess in the near future. There have been signs for the better part of a year that Valve is working on a version of the Steam store and distribution center for Chromebooks, running Linux-based games in a virtual machine. And of course, that’s leaving out Google’s own push for games both on Android’s Play Store (which Chrome OS has been able to access for years and is now expanding into Windows) and streaming PC games via Stadia. Competing streaming services, like GeForce Now and Xbox Game Pass, also work on Chromebooks via browser- and Android-based apps.

    9to5 claims that both HP and Lenovo are preparing gaming Chromebooks, in their Omen and Legion lines, respectively. All jokes aside, a “gaming” laptop needs more than a fancy keyboard to stand apart from the crowd. A discrete graphics card is necessary for high-powered 3D games, even on Linux, and a high-quality screen with fast refresh rates and a beefy battery to power it all are generally the bare minimum for a gaming laptop. You can add extras like specialized low-latency networking, exotic cooling solutions, and high-speed storage, too. There’s no real indication that these are coming to Chromebooks in the near future: while there are high-end Chromebooks on the market, they seem more focused on boardrooms than bedrooms.

    Still, a “gaming” Chromebook with its only claim to the label being an RGB keyboard would hardly be the first time gaming marketing had been applied to a fairly vanilla product. And with the segment still growing healthily, it’s possible that vendors might jump at the chance to give their models a little differentiation.

  • EcoFlow Delta Mini Portable Power Station review: What can’t it do?

    At a glance

    Expert’s Rating

    Pros

    • Robust set of features
    • Companion app is handy
    • Impressive recharge rate

    Cons

    • OK-ish power efficiency
    • Pricey

    Our Verdict

    The EcoFlow Delta Mini Portable Power Station has all the bells and whistles you’d want from a power station. If it’s within your budget, we have no real qualms with it. And the standard EcoFlow Delta doesn’t cost a whole lot more if you need extra capacity and outlets.

    Best Prices Today

    Retailer    Price            Delivery
    EcoFlow     $999
    Adorama     Not Available    Free

    Price comparison from over 24,000 stores worldwide

    Maybe it’s because I’ve researched a lot of power stations, but I can’t seem to go anywhere on the internet without seeing an ad for the EcoFlow lineup. The sleekly designed power stations look great, so when the EcoFlow team reached out to see if I wanted to test one, I jumped at the chance. A week or so later, the $999 EcoFlow Delta Mini Portable power station (as well as the $400 EcoFlow 160W Solar Panel) arrived.

    Even though this station has Mini in its name, it’s not all that small. It weighs 23.6 pounds and measures 14.9 x 7.2 x 9.4 inches.

    Note: This review is part of our roundup of portable power banks. Go there for details on competing products and our testing methods.

    I really like the overall design. It looks and feels like a premium product. There are ports on either end of the station, with one end also featuring an LCD screen. The screen is big and easy to read, detailing how much power is being used or input into the station, the hours remaining, and which power options are turned on. 

    Just below the display is an IoT button to enable the station’s Wi-Fi feature that allows you to connect it to your local Wi-Fi network. Then, using the EcoFlow app, you can remotely view all of its stats, update its firmware, and adjust settings without having physical access to the Delta Mini. 

    Below the IoT button is where you’ll find four ports: one USB-C 100W (20V/5A) port, two USB-A (5V/2.4A) ports, and a fast-charge USB-A (12V/1.5A 18W) port. Below those is a gold-colored power button. 

    On the opposite side you’ll find even more ports and connection options. There’s a small cover near the handle that flips up to reveal the input ports. From left to right is a port dedicated to charging via a solar panel or a car charger, an AC charging port, and the overload protection switch. Between the two charging ports there’s a switch that controls the AC charging speed, either “fast” or “slow,” going from a max of 800W to 200W (more on this in a minute). 

    Below the input ports are five AC sockets, with a dedicated power button for the outputs in the center. And, finally, below those is a car outlet port and a DC5521 barrel port, along with the 12V power button. 

    The Delta Mini supports Pure Sine Wave output, meaning you should be able to use it with devices that have AC motors, such as a microwave or mini fridge, without issue. It can output a total of 1400W, with a surge capacity of 2100W. You can enable X-Boost manually in the app, and it will also kick in automatically whenever an AC port detects a power draw exceeding 1400W. However, EcoFlow recommends using just a single AC power outlet when you’re using X-Boost mode.

    With that high of an output, the Delta Mini is able to power items like a hand saw or an electric skillet without any issues. 

    To measure the station’s efficiency, I connected my PortaPow power monitor along with a load tester to a USB port. The load tester constantly drains power, while the monitor records how much power is used. The end result was 669.446Wh of power used out of the 882Wh capacity. That translates to an efficiency of 75.90 percent. The average rating of all power stations I’ve tested is 83.51 percent—placing the EcoFlow Delta Mini above only the Ego Power+ Nexus.
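
    For reference, the efficiency figure in this test is simply the measured energy delivered divided by the battery’s rated capacity:

    ```python
    def efficiency_percent(delivered_wh, rated_wh):
        """Measured output energy as a percentage of rated capacity."""
        return delivered_wh / rated_wh * 100

    # The review's numbers: 669.446 Wh delivered from an 882 Wh battery
    print(round(efficiency_percent(669.446, 882), 2))  # prints 75.9
    ```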

    Another test I use to measure output is to connect a 4W desk lamp and record a time-lapse video of how long the lamp stays powered on. When it was all said and done, the desk lamp stayed lit up for 46 hours and 14 minutes. That’s the second-best showing out of all the power stations I’ve tested, putting it behind just the GoalZero Yeti 1000x, which achieved a staggering 111 hours and 29 minutes.

    As for charging time, the Delta Mini can be fully recharged in as little as 90 minutes using the included power adapter and enabling X-Boost. Doing so will charge the station at around 800W. If you’re not in a rush, you can charge the station at anywhere from 200W (takes about five hours for a full charge), all the way up to 900W. Keep in mind, though, that constantly fast charging the battery can have a negative impact on its overall life. I’d recommend using it sparingly. 

    I also connected EcoFlow’s 160W solar panel to the Delta Mini and monitored its charging rate. EcoFlow estimates eight hours of charging time with the panel, and that matches my experience: The power station showed it was receiving right at 140W of power from the panel, and that it would finish charging in eight hours. 

    Admittedly, the EcoFlow Delta Mini and the 160W Solar Panel may be expensive, but they’re also some of the nicest power station equipment I’ve tested. They both feel like premium products, with the solar panel including a cloth carrying case that doubles as a stand. Seriously, this is nice gear. 

    That said, another option in EcoFlow’s lineup is the $899 EcoFlow Delta Portable power station, which has a higher capacity and output and more ports for only $50 more than the Mini. I haven’t tested the standard Delta, but assuming it’s built to the same standards, I’d spend my money on it instead of the Mini.

  • Best laptop deals: Top picks from budget to extreme

    Whether you’re buying a new laptop for school or trying to find a high-end gaming laptop, it’s possible to find good laptop deals no matter the season. We’re scouring the web daily to find the laptop deals you don’t want to miss.

    Mind you, not all advertised laptop deals are actually deals, so we’ve only included the ones we consider actual bargains—and we’ve explained why. We’ll add new laptop deals as we see them daily and remove any expired sales. Right now, we’re seeing strong discounts on gaming laptops, Microsoft Surface devices, and more. If you’re looking for Chromebooks we’ve got those deals in here too!

    We’ve provided a handy list of laptop-specific shopping tips at the end of this post, and immediately below are the deals themselves.

    The best laptop deals in 2022

    Acer Aspire 5

    Acer aspire 5 facing from right

    Acer

    From: Walmart

    Was: $499

    Now: $429 ($70 off)

    This discount isn’t huge, but it’s still a good price for an everyday laptop. The Acer Aspire 5 features an Intel “Tiger Lake” quad-core, eight-thread Core i5-1135G7 with a boost to 4.2GHz. It has 8GB of RAM, but only 256GB of onboard storage. The display is 14 inches with 1080p resolution.

    Overall, this is a nice laptop. Storage is on the light side, but as a machine for travel or working outside the home office, it’s a nice choice. It’d also suit students who keep most of their work in the cloud or anyone who doesn’t need much local storage. This laptop also has Wi-Fi 6 and comes loaded with Windows 11 Home.

    See the Acer Aspire 5 at Walmart

    Lenovo IdeaPad Flex 5

    A flex 5 in display mode facing from left.

    Lenovo

    From: Walmart

    Was: $490.50

    Now: $349 ($141.50 off)

    The Lenovo IdeaPad Flex 5 may be slightly underpowered, but at this price you can forgive some of the flaws. First, this is a convertible laptop with a 14-inch 1080p touchscreen. It has a quad-core, eight-thread Ryzen 3 5300U with a boost to 3.8GHz. The RAM is a little low for a touchscreen laptop at 4GB, and storage is pretty light at 128GB. It comes loaded with Windows 10 in S mode, meaning you can only use apps from the Windows Store. That said, this laptop should perform pretty well given its four-core CPU. It’s also Windows 11 ready if you don’t want to stick with Windows 10.

    See the Lenovo IdeaPad Flex 5 at Walmart

    Lenovo IdeaPad 5 Pro

    A black Lenovo laptop facing front with Windows 11 running on the display.

    Lenovo

    From: Microsoft via eBay

    Was: $909.99

    Now: $557.99 ($352 off)

    This IdeaPad Pro features a 16-inch display with 1440p resolution and a max brightness of 350 nits. The processor is a Zen 3 Ryzen 5 5600H, which has six cores, twelve threads, and a maximum boost to 4.2GHz. It features 8GB of RAM and a 512GB SSD. It’s also rocking Wi-Fi 6 and Bluetooth 5.0.

    The port selection is a little odd. You’re getting one Type-C USB 2.0 port and two standard USB 3.1 Gen 1 ports. The laptop is also pretty light at 4.4 pounds.

    See the Lenovo IdeaPad 5 Pro at eBay

    HP 15-ef2126wm

    hp Silver laptop facing forward with a Ryzen 5 sticker prominently displayed

    HP

    From: Walmart

    Was: $549

    Now: $399 ($150 off)

    This is an excellent price for a 15.6-inch Ryzen-based laptop. It features a Ryzen 5 5500U, which has six cores, twelve threads, and a maximum boost to 4GHz. It also has 8GB of RAM and 256GB of onboard storage. This laptop is already rocking Windows 11 Home, so you don’t have to update. Overall, it’s an excellent choice as a day-to-day laptop, especially at this price.

    See the HP 15-ef2126wm at Walmart

    Gigabyte Aorus 15P

    a gray aorus gaming laptop facing from right

    Gigabyte

    From: Newegg

    Was: $2,399

    Now: $1,999 ($400 off after $200 MIR)

    Right now the best deals in gaming are usually found in laptops, especially while we’re still dealing with desktop graphics card madness. This model is rocking an Nvidia GeForce RTX 3080, an eight-core, sixteen-thread Intel “Tiger Lake” Core i7-11800H, 32GB of RAM, and a 1TB NVMe SSD. It’s a solid set of specs, and you get to view all of this on a 15.6-inch display with 1080p resolution and a maximum refresh rate of 300Hz. The rebate offer ends on January 31st, but it’s not clear how long the initial sale price of $1,999 will last.

    See the Gigabyte Aorus 15P at Newegg

    MSI Summit E13 Flip Evo

    MSI Summit E13 Flip EVO

    MSI

    From: Newegg

    Was: $1,599

    Now: $999 ($600 off after the $100 rebate)

    Looking for a portable 2-in-1 laptop for work? Well, you’re in luck. Newegg is offering a great deal on the MSI Summit E13 Flip Evo. This laptop comes equipped with an Intel Core i7-1185G7 processor, Iris Xe graphics, 16GB of DDR4 RAM, and a 512GB NVMe SSD. In other words, you should expect fairly zippy performance. The port selection is pretty diverse, too. You’re getting one USB 3.2 Type-C, one USB 4.0 Type-C (DP/Thunderbolt 4), one USB 3.2 Type-A, a microSD card reader, a webcam lock switch, and an audio combo jack.

    The convertible aspect of this laptop is a major selling point, especially if you travel a lot. You can prop the laptop up like a tent or swing the 1080p touchscreen display around and use it like a tablet. This deal also includes a protective sleeve for the laptop and an MSI pen. The pen is a nice bonus, as this type of accessory is normally an additional cost. The minimalist aesthetic is perfect for a professional environment, too. The swanky bronze trim is the cherry on top, really.

    See the MSI Summit E13 Flip Evo at Newegg

    17.3-inch Asus TUF Gaming laptop

    Asus gaming laptop with an illuminated keyboard

    Asus

    From: Walmart

    Was: $1,099

    Now: $899 ($200 off)

    This gaming laptop has a nice set of specs. The 17.3-inch 1080p display has a 144Hz refresh rate, which means smoother gameplay. The CPU is an Intel Core i5-11260H: six cores, 12 threads, and a boost to 4.4GHz. The GPU is an Nvidia GeForce RTX 3050 Ti, a solid choice for 1080p gaming. As for memory and onboard storage, it has 8GB of RAM and a 512GB NVMe SSD.

    See the 17.3-inch Asus TUF laptop at Walmart

    Microsoft Surface Pro 7+

    The Surface Pro 7+ on a deck near some water in sunlight.

    Mark Hachman/IDG

    From: Walmart

    Was: $999.99

    Now: $599 ($400.99 off)

    If you’re looking for a well-designed Windows tablet, there’s no beating Microsoft’s Surface line, and this Walmart sale offers an excellent bargain. This version of the Surface Pro 7+ comes with a Core i3 processor, 128GB of onboard storage, 8GB of RAM, and a black Type Cover. We reviewed the Surface Pro 7+ nearly a year ago, giving it 4.5 out of 5 stars and an Editors’ Choice Award. We called it “the most potent upgrade Microsoft’s Surface Pro line has offered in years.”

    See the Surface Pro 7+ at Walmart

    HP 17-by4061nr

    HP17

    HP

    From: Walmart

    Was: $679

    Now: $499.00 ($180 off)

    This HP laptop has a lot going for it. The CPU is an Intel “Tiger Lake” Core i5-1135G7 with four cores, eight threads, and a boost to 4.2GHz. The processor is packing Iris Xe graphics, which will provide surprising performance for an integrated GPU. It also has 8GB of RAM, a 512GB NVMe SSD, and a 1080p display. If you need a new laptop to kick off 2022, then this is a nice choice.

    See the HP 17-by4061nr at Walmart

    Asus L510

    Asus L510

    Asus

    From: Walmart

    Was: $279

    Now: $249 ($30 off)

    This deal puts us in an odd position. We’re not huge fans of laptops with just 128GB of onboard storage (especially this one’s onboard eMMC storage), and we generally don’t recommend Windows PCs running Celeron processors. For a price around $250, however, we’re willing to overlook these shortcomings, but with some big caveats.

    First, you’ll get exactly what you pay for with this clamshell, but at this price that just might be a good thing. It’s running Windows 10 Home in S Mode, and we would not recommend upgrading this laptop to regular Windows 10. Instead, treat it like a Chromebook: focus on web apps like Google Docs or Office Online, and if you absolutely need a desktop program, download what you need from the selection in the Windows Store. We wouldn’t try editing a photo on this, since it has just 4GB of RAM and painfully slow flash storage. Still, the Intel Celeron N4020 will get the job done for basic uses, and the 15.6-inch 1080p display is bigger than what you’d get from a Chromebook at around the same price.

    See the Asus L510 at Walmart

    HP Spectre x360 14

    HP Spectre x360 14

    HP

    From: HP

    Was: $1,399.99

    Now: $1,049.99 ($350 off)

    If you’re looking for the best thin and light laptop money can buy, you’ve come to the right place.

    This 14-inch HP Spectre convertible strikes a great balance between performance (from Intel’s Tiger Lake CPUs) and design, even if it’s a little on the heavy side at 3 pounds. Its 1920×1280 IPS display is another highlight, whether you’re making use of the 360-degree hinge and touchscreen or not. The deal highlighted here is on the model we reviewed, but all configurations are currently discounted.

    See the HP Spectre x360 14 at HP.com

    Lenovo IdeaPad 3i

    A grey Lenovo IdeaPad 3i with a purple abstract image on the screen.

    Lenovo

    From: Walmart

    Was: $699

    Now: $429 ($270 off)

    The Lenovo IdeaPad 3i is a nice everyday laptop. It has 512GB of storage and 8GB of RAM, which is more than enough for web browsing and whatnot. This laptop runs Windows 11 Home, and the processor is a quad-core, eight-thread Intel “Comet Lake” Core i5-10210U. That’s a generation behind, but it’s still a capable processor. The screen is 14 inches with 1080p resolution.

    See the Lenovo IdeaPad 3i at Walmart

    Acer Chromebook 315

    The Acer Chromebook 315 facing from right with a shore scene on the wallpaper.

    Acer

    From: Walmart

    Was: $289

    Now: $216.94 ($72.06 off)

    If you’d like something a little beefier than the Lenovo Chromebook at Best Buy, take a look at this deal at Walmart. The Acer 315 is a 15.6-inch laptop with a 1080p touch display. Note that this is not a convertible laptop, so there’s no bending back the keyboard for a tablet-like experience. The processor is the Intel Celeron N4020, which is pretty standard for Chromebooks. It has 4GB of RAM, 64GB of onboard storage, 802.11ac Wi-Fi, and Bluetooth 5.0.

    See the Acer Chromebook 315 at Walmart

    Gigabyte G5 MD

    From: Newegg

    Was: $1,199.00

    Now: $899 ($300 off after $100 rebate)

    Gaming laptops are tough to find at a discount right now, and this one is a decidedly mixed bag of pros and cons. Inside is a Core i5-11400H, an Nvidia GeForce RTX 3050 Ti, 16GB of RAM, and a 512GB SSD behind a 144Hz 1080p screen. The RTX 3050 Ti is usually considered a bum deal compared to the RTX 3060…if you can find one. And, yes, there’s a $100 rebate card you need to fill out. If you’re willing to jump through these hoops, though, the price and savings aren’t bad at all. This offer ends just before midnight Pacific time on Tuesday, November 30.

    See the Gigabyte G5 MD at Newegg

    Microsoft Surface Laptop Go (Platinum)

    The Surface Go on a white desk with various other desk items in the background

    Microsoft

    From: Microsoft Store

    Was: $699.99

    Now: $549.99 ($150 off)

    We gave the Microsoft Surface Laptop Go, Microsoft’s 12.4-inch budget laptop, 3.5 stars out of 5 in our Surface Laptop Go review. We felt it was a little overpriced. Dropping the price by $150 on its midrange version (Core i5/8GB RAM/128GB SSD) certainly helps! Just be aware that the Laptop Go’s display is sub-1080p quality—but, in our experience, it didn’t really matter.

    See the Surface Laptop Go on Microsoft.com

    Lenovo IdeaPad Duet

    the duet with its keyboard and display separated on a pick background

    Lenovo

    From: Walmart.com

    Was: $249.00

    Now: $207 ($42 off)

    If you like the concept of a Chrome OS tablet but think that the Chromebook Plus V2 price is too high, consider the Lenovo IdeaPad Duet, which we looked at last year. This tablet ships with 4GB of memory and 64GB of integrated storage.

    Support runs through June 2029.

    Buy the Lenovo IdeaPad Duet Chromebook at Walmart

    Laptop deal buying tips

    If you’ve shopped online before for laptop deals you’re probably aware that there’s a vast range of laptop configurations available.

    A good place to start is with the processor. Buy laptops with Intel 10th-gen Core chips or newer, such as the Core i5-10510U or the Core i7-11800H (for even more details see our Intel 10th-gen mobile CPU buying guide); or go with an AMD Ryzen processor (but not an AMD Athlon or A-series chip). Avoid laptops with Pentium or Celeron processors unless it’s a Chromebook (running Chrome OS). You’ll need to pay attention with gaming laptops, too, as some GPUs, like the RTX 3050 Ti, don’t offer much boost over their RTX 2xxx-series cousins, and Nvidia has dropped the Max-Q designation on certain low-power options. Our laptop CPU and GPU cheat sheet can help you shop smart.

    Display resolution is a gotcha. If you see a laptop labeled as “HD” resolution, that means 1366-by-768, which often isn’t worth your time on a laptop under 13 inches unless the deal is an absolute standout. What you want is “Full HD” or “FHD,” which means 1080p.

    Don’t buy laptops with under 4GB of RAM or 128GB of SSD storage—though on a Chromebook, this configuration is acceptable. We have more explanation in our laptops versus Chromebooks buying guide, as well as in our primer on how to buy a budget laptop without getting screwed. Also watch out for eMMC storage, which is something we don’t recommend for a Windows laptop but works fine for a Chromebook.

    Reviews can be helpful. Even if you can’t find a review of a specific configuration, try related models. They’ll often give you a good idea of the build quality and performance. Also buy from brands you trust. Amazon’s daily laptop deals right now are full of brands we’ve never tested or talked to (Broage, Teclast, DaySky, Jumper) and it’s just a good idea to be wary.

    Most older laptops will run Windows 10, and that’s fine—there’s no rush to upgrade. Windows 10 in S Mode, though annoying, can be switched out of easily if you find it on a budget laptop. If you want to buy a Windows 10 PC with the intent of upgrading it to Windows 11, we recommend you start here with a list of older laptops that are Windows 11-eligible.

    Updated on January 27 with additional deals, and to remove expired deals.

  • Get ready for the big game with this 65-inch Vizio 4K TV for $500

    It’s just a little over two weeks until football’s national holiday and if you want to get a new set for the big game, now is your chance. Best Buy is selling a 65-inch Vizio 4K TV for $500, which is down from $600. The deal lasts until February 15.

    The Vizio V655-J09 features 3840-by-2160 resolution and it supports a variety of high dynamic range formats including Dolby Vision, HDR10, HDR10+, and HLG. However, neither Best Buy nor Vizio list the brightness and some reports put it at under 300 nits. In other words, it doesn’t hit the 1,000 nits that HDR usually requires. That said, it should still enhance the picture to a certain degree compared to non-HDR TVs.

    Vizio’s smart TVs support a wide variety of services, including Chromecast and Apple AirPlay 2. They also support Amazon’s Alexa, Apple HomeKit, and Google Assistant, which means you can control some of the TV’s functions with a variety of smart speakers and other devices. While there aren’t any voice assistants built in, the TV does come with Vizio’s own push-to-talk voice remote.

    And, of course, this TV supports a bunch of streaming apps like Apple TV+, Disney Plus, HBO Max, Netflix, Prime Video, and more.

    This is an excellent set for enjoying the next Super Bowl and beyond and right now you can get it for $100 off.

    [Today’s deal: Vizio 65-inch 4K TV for $500 at Best Buy.]

CNN.com - RSS Channel - App Tech Section
CNN.com delivers up-to-the-minute news and information on the latest top stories, weather, entertainment, politics and more.

If you decide that the "how to make a website" guide could be useful to other people as well, please vote for the site:

+добави в любими.ком Visit .: BGtop.net :., the top chart of Bulgarian websites, and vote for this site!

If you wish to leave a comment on the article, you need to register.