
News from the world of web design and SEO

Here is syndicated news from several of the leading sites in the field of web design and SEO (search engine optimization).

A List Apart: The Full Feed
Articles for people who make web sites.
  • Standards for Writing Accessibly

    Writing to meet WCAG2 standards can be a challenge, but it’s worthwhile. Albert Einstein, the archetypical genius and physicist, once said, “Any fool can make things bigger, more complex, and more violent. It takes a touch of genius—and a lot of courage—to move in the opposite direction.”

    Hopefully, this entire book will help you better write for accessibility. So far, you’ve learned:

    • Why clarity is important
    • How to structure messages for error states and stress cases
    • How to test the effectiveness of the words you write

    All that should help your writing be better for screen readers, give additional context to users who may need it, and be easier to parse.

    But there are a few specific points that you may not otherwise think about, even after reading these pages.

    Writing for Screen Readers

    People with little or no sight interact with apps and websites in a much different way than sighted people do. Screen readers parse the elements on the screen (to the best of their abilities) and read them back to the user. And along the way, there are many ways this could go wrong. As the interface writer, your role is perhaps most important in giving screen reader users the best context.

    Here are a few things to keep in mind about screen readers:

    • The average reading speed for sighted readers is two to five words per second. Screen-reader users can comprehend text being read at an average of 35 syllables per second, which is significantly faster. Don’t be afraid to sacrifice brevity for clarity, especially when extra context is needed or useful.
    • People want to be able to skim long blocks of text, regardless of sight or audio, so it’s extremely important to structure your longform writing with headers, short paragraphs, and other content design best practices.

    Write Chronologically, Not Spatially

    Writing chronologically is about describing the order of things, rather than where they appear spatially in the interface. There are so many good reasons to do this (devices and browsers will render interfaces differently), but screen readers show you the most valuable reason. You’ll often be faced with writing tooltips or onboarding elements that say something like, “Click the OK button below to continue.” Or “See the instructions above to save your document.”

    Screen readers will do their job and read those instructions aloud to someone who can’t see the spatial relationships between words and objects. While they can often cope with that, they shouldn’t have to. Consider screen reader users in your language. Embrace the universal experience shared by humans and rely on their intrinsic understanding of the “top is first, bottom is last” paradigm. Write chronologically, as in Figure 5.5.

    A screenshot of a form with a password field. The password hints show up below the password field.
    FIGURE 5.5 Password hint microcopy below the password field won’t help someone using a screen reader who hasn’t made it there yet.

    Rather than saying:

    • Click the OK button below to continue.
    • (A button that scrolls you to the top of a page): Go to top.

    Instead, say:

    • Next, select OK to continue.
    • Go to beginning.

    Write Left to Right, Top to Bottom

    While you don’t want to convey spatial meaning in your writing, you still want to keep that spatial order in mind.

    Have you ever purchased a service or a product, only to find out later that there were conditions you didn’t know about before you paid for it? Maybe you didn’t realize batteries weren’t included in that gadget, or that by signing up for that social network you were implicitly agreeing to provide data to third-party advertisers.

    People who use screen readers face this all the time.

    Most screen readers will parse information from left to right, from top to bottom. Think about a few things when reviewing the order and placement of your words. Is there information critical to performing an action, or making a decision, that appears after (to the right of or below) an action item, like in Figure 5.5? If so, consider moving it up in the interface.

    Instead, if there’s information critical to an action (rules around setting a password, for example, or accepting terms of service before proceeding), place it before the text field or action button. Even if it’s hidden in a tooltip or info button, it should be presented before a user arrives at a decision point.

    Don’t Use Colors and Icons Alone

    If you are a sighted American user of digital products, there’s a pretty good chance that if you see a message in red, you’ll interpret it as a warning message or think something’s wrong. And if you see a message in green, you’ll likely associate that with success. But while colors aid in conveying meaning to this type of user, they don’t necessarily mean the same thing to those from other cultures.

    For example, although red might indicate excitement, or danger in the U.S. (broadly speaking), in other cultures it means something entirely different:

    • In China, it represents good luck.
    • In some former-Soviet, eastern European countries it’s the color strongly associated with Communism.
    • In India, it represents purity.

    Yellow, which we in the U.S. often use to mean “caution” (because we’re borrowing a mental model from traffic lights), might convey another meaning for people in other cultures:

    • In Latin America, yellow is associated with death.
    • In Eastern and Asian cultures, it’s a royal color—sacred and often imperial.

    And what about users with color-blindness or low to no vision? And what about screen readers? Intrinsic meaning from the interface color means nothing for them. Be sure to add words that bear context so that if you heard the message being read aloud, you would understand what was being said, as in Figure 5.6.

    Two error tooltips: one that says 'Save your work!' and one that says 'Save your work before going to the next step'
    FIGURE 5.6 A simple in-app message warning a user to save their work before proceeding is visually effective when it’s red and carries a warning icon, as seen on the left, but you should provide more context when possible. The example on the right explicitly says that a user won’t be able to proceed to the next step before saving their work.

    Describe the Action, Not the Behavior

    Touch-first interfaces have been steadily growing and replacing keyboard/mouse interfaces for years, so no longer are users “clicking” a link or a button. But they’re not necessarily “tapping” it either, especially if they’re using a voice interface or an adaptive device.

    Instead of microcopy that includes behavioral actions like:

    • Click
    • Tap
    • Press
    • See

    Try device-agnostic words that describe the action, irrespective of the interface, like:

    • Choose
    • Select
    • View

    There are plenty of exceptions to this rule. If your interface requires a certain action to execute a particular function, and you need to teach the user how their gesture affects the interface (“Pinch to zoom out,” for example), then of course you need to describe the behavior. But generally, the copy you’re writing will be simpler and more consistent if you stick with the action in the context of the interface itself.

  • Making Room for Variation

    Making a brand feel unified, cohesive, and harmonious while also leaving room for experimentation is a tough balancing act. It’s one of the most challenging aspects of a design system.

    Graphic designer and Pentagram partner Paula Scher faced this challenge with the visual identity for the Public Theater in New York. As she explained in a talk at Beyond Tellerrand:

    I began to realize that if you made everything the same, it was boring after the first year. If you changed it individually for each play, the theater lost recognizability. The thing to do, which I totally got for the first time after working there at this point for 17 years, is what they needed to have were seasons.

    You could take the typography and the color system for the summer festival, the Shakespeare in the Park Festival, and you could begin to translate it into posters by flopping the colors, but using some of the same motifs, and you could create entire seasons out of the graphics. That would become its own standards manual where I have about six different people making these all year (http://bkaprt.com/eds/04-01/).

    Scher’s strategy was to retain the Public Theater’s visual language every year, but to vary some of its elements (Fig 4.1–2). Colors would be swapped. Text would skew in different directions. New visual motifs would be introduced. The result is that each season coheres in its own way, but so does the identity of the Public Theater as a whole.

    Sixteen Public Theater posters in black, white, and yellow, with slanted wood type letterforms and high-contrast images of people.
    Fig 4.1: The posters for the 2014/15 season featured the wood type style the Public Theater is known for, but the typography was skewed. The color palette was restrained to yellow, black, and white, which led to a dynamic look when coupled with the skewed type (http://bkaprt.com/eds/04-02/).
    Twelve Public Theater posters using black, white, and pastel colors with wood type letterforms and softer images of people.
    Fig 4.2: For the 2018 season, the wood type letterforms were extended on a field of gradated color. The grayscale cut-out photos we saw in the 2014/15 season persisted, but this time in lower contrast to fit better with the softer color tones (http://bkaprt.com/eds/04-03/).

    Even the most robust or thoroughly planned systems will need to account for variation at some point. As soon as you release a design system, people will ask you how to deviate from it, and you’ll want to be armed with persuasive answers. In this chapter, I’m going to talk about what variation means for a design system, how to know when you need it, and how to manage it in a scalable way.

    What Is Variation?

    We’ve spent most of this book talking about the importance of unity, cohesion, and harmony in a design system. So why are we talking about variation? Isn’t that at odds with all of the goals we’ve set until now?

    Variation is a deviation from established patterns, and it can exist at every level of the system. At the component level, for instance, a team may discover that they need a component to behave in a slightly different way; maybe this particular component needs to appear without a photo, for example. At a design-language level, you may have a team that has a different audience, so they want to adjust their brand identity to serve that audience better. You can even have variation at the level of design principles: if a team is working on a product that is functionally different from your core product, they may need to adjust their principles to suit that context.

    There are three kinds of deviations that come up in a design system:

    • Unintentional divergence typically happens when designers can’t find the information they’re looking for. They may not know that a certain solution exists within a system, so they create their own style. Clear, easy-to-find documentation and usage guidelines can help your team avoid unintentional variation.
    • Intentional but unnecessary divergence usually results from designers not wanting to feel constrained by the system, or believing they have a better solution. Making sure your team knows how to push back on and contribute to the system can help mitigate this kind of variation.
    • Intentional, meaningful divergence is the goal of an expressive design system. In this case, the divergence is meaningful because it solves a very specific user problem that no existing pattern solves.

    We want to enable intentional, meaningful variation. To do this, we need to understand the needs and contexts for variation.

    Contexts for Variation

    Every variation we add makes our design system more complicated. Therefore, we need to take care to find the right moments for variation. Three big contextual changes are served by variation: brand, audience, and environment.

    Brand

    If you’re creating a system for multiple brands, each with its own brand language, then your system needs to support variations to reflect those brands.

    The key here is to find the common core elements and then set some criteria for how you should deviate. When we were creating the design system for our websites at Vox Media, we constantly debated which elements should feel more expressive. Should a footer be standardized, or should we allow for tons of customization? We went back to our core goals: our users were ultimately visiting our websites to consume editorial content. So the variations should be in service of the content, writing style, and tone of voice for each brand.

    The newsletter modules across Vox Media brands were an example of unnecessary variation. They were consistent in functionality and layout, but had variations in type, color, and visual treatments like borders (Fig 4.3). There was quite a bit of custom design within a very small area: Curbed’s newsletter component had a skewed background, for example, while Eater’s had a background image. Because these modules were so consistent in their user goals, we decided to unify their design and create less variation (Fig 4.4).

    Fig 4.3: Older versions of Vox Media’s newsletter modules contained lots of unnecessary visual variation.
    Three examples of newsletter modules, showing the same colors, fonts, and spacing.
    Fig 4.4: The new, unified newsletter modules.

    The unified design cleaned up some technical debt. In the previous design, each newsletter module had CSS overrides to achieve distinct styling. Some modules even had overrides on the primary button color so it would work better with the background color. Little CSS overrides like this add up over time. Whenever we released a new change, we’d have to manually update the spots containing CSS overrides.

    The streamlined design also placed a more appropriate emphasis on the newsletter module. While important, this module isn’t the star of the page. It doesn’t need loud backgrounds or fancy shapes to command attention, especially since it’s placed around article content. Variation in this module wasn’t necessary for expressing the brands.

    On the other hand, consider the variation in Vox Media’s global header components. When we were redesigning the Verge, its editorial teams were vocal about wanting more latitude to art-direct the page, guide attention toward big features, and showcase custom illustrations. We addressed this by creating a masthead component (Fig 4.5) that sits on top of the global header on homepages. It contains a logo, tagline, date, and customizable background image. Though at the time this was a one-off component, we felt that the variation was valuable because it would strengthen the Verge’s brand voice.

    Example of the Verge’s masthead component with magenta and blue abstractions. Example of the Verge’s masthead component with a city skyline in orange tones. Example of the Verge’s masthead component in pixelated black and white.
    Fig 4.5: Examples of the Verge's masthead component

    The Verge team commissions or makes original art that changes throughout the day. The most exciting part is that they can use the masthead and a one-up hero when they drop a big feature and use these flexible components to art-direct the page (Fig 4.6). Soon after launch, the Verge masthead even got a Twitter fan account (@VergeTaglines) that tweets every time the image changes.

    Comparison of the Verge’s homepage, changing based on the masthead design and hero photography. Comparison of the Verge’s homepage, changing based on the masthead design and hero photography.
    Fig 4.6: The Verge uses two generic components, the masthead and one-up hero, to art-direct its homepages.

    Though this component was built specifically for the Verge, it soon gained broader application with other brands that share Vox’s publishing platform, Chorus. The McElroy Family website, for example, needed to convey its sense of humor and Appalachian roots; the masthead component shines with an original illustration featuring an adorable squirrel (Fig 4.7).

    The masthead component for the McElroy Family, showing a blue navigation bar and a pastel illustration of a forest.
    Fig 4.7: The McElroy Family site uses the same masthead component as the Verge to display a custom illustration.
    The masthead component for the Chicago Sun-Times, showing a white background, stark black text, and a red Subscribe button.
    Fig 4.8: The same masthead component on the Chicago Sun-Times site.

    The Chicago Sun-Times—another Chorus platform site—is very different in content, tone, and audience from The McElroy Family, but the masthead component is just as valuable in conveying the tone of the organization’s high-quality investigative journalism and breaking news coverage (Fig 4.8).

    Why did the masthead variation work well while the newsletter variation didn’t? The variations on the newsletter design were purely visual. When we created them, we didn’t have a strategy for how variation should work; instead, we were looking for any opportunity to make the brands feel distinct. The masthead variation, by contrast, tied directly into the brand strategy. Even though it began as a one-off for the Verge, it was flexible and purposeful enough to migrate to other brands.

    Audience

    The next contextual variation comes from audience. If your products serve different audiences who all need different things, then your system may need to adapt to fit those needs.

    A good example of this is Airbnb’s listing pages. In addition to their standard listings, they also have Airbnb Plus—one-of-a-kind, high-quality rentals at higher price points. Audiences booking a Plus listing are probably looking for exceptional quality and attention to detail.

    Both Airbnb’s standard listing page and Plus listing page are immediately recognizable as belonging to the same family because they use many consistent elements (Fig 4.9). They both use Airbnb’s custom font, Cereal. They both highlight photography. They both use many of the same components, like the date picker. The iconography is the same.

    Screenshot of Airbnb’s standard listing. Screenshot of Airbnb’s Plus listing.
    Fig 4.9: The same brand elements in Airbnb’s standard listings (above) are used in their Plus listings (below), but with variations that make the listing styles distinct.

    However, some of the design choices convey a different attitude. Airbnb Plus uses larger typography, airier vertical space, and a lighter weight of Cereal. It has a more understated color palette, with a deeper color on the call to action. These choices make Airbnb Plus feel like a more premium experience. You can see they’ve adjusted the density, weight, and scale levers to achieve a more elegant and sophisticated aesthetic.

    The standard listing page, on the other hand, is more functional, with the booking module front and center. The Plus design pulls the density and weight levers in a lighter, airier direction. The standard listing page has less size contrast between elements, making it feel more functional.

    Because they use the same core building blocks—the same typography, iconography, and components—both experiences feel like Airbnb. However, the variations in spacing, typographic weights, and color help distinguish the standard listing from the premium listing.

    Environment

    I’ve mainly been talking about adding variation to a system to allow for a range of content tones, but you may also need your system to scale based on environmental contexts. “Environment” in this context asks: Where will your products be used? Will that have an impact on the experience? Environments are the various constraints and pressures that surround and inform an experience. That can include lighting, ambient noise, passive or active engagement, expected focus level, or devices.

    Shopify’s Polaris design system initially grew out of Shopify’s Store Management product. When the Shopify Retail team kicked off a project to design the next generation point-of-sale (POS) system, they realized that the patterns in Polaris didn’t exactly fit their needs. The POS system needed to work well in a retail space, often under bright lighting. The app needed to be used at arm’s length, twenty-four to thirty-six inches away from the merchant. And unlike the core admin, where the primary interaction is between the merchant and the UI, merchants using the POS system needed to prioritize their interactions with their customers instead of the UI. The Retail team wanted merchants to achieve an “eyes-closed” level of mastery over the UI so they could maintain eye contact with their customers.

    The Retail team decided that the existing color palette, which only worked on a light background, would not be clear enough under the bright lights of a retail shop. The type scale was also too small to be used at arm’s length. And in order for merchants to use the POS system without breaking eye contact with customers, the buttons and other UI elements would need to be much larger.

    The Retail team recognized that the current design system didn’t support a variety of environmental scenarios. But after talking with the Polaris team, they realized that other teams would benefit from the solutions they created. The Warehouse team, for example, was also developing an app that needed to be used at arm’s length under bright lights. This work inspired the Polaris team to create a dark mode for the system (Fig 4.10).

    Comparison of light and dark modes for navigation menus in the Polaris design system.
    Fig 4.10: Polaris light mode (left) and dark mode (right).

    This feedback loop between product team and design system team is a great example of how to build the right variation into your system. Build your system around helping your users navigate your product more clearly and serving content needs, and you’ll unlock scalable expression.

  • Request with Intent: Caching Strategies in the Age of PWAs

    Once upon a time, we relied on browsers to handle caching for us; as developers in those days, we had very little control. But then came Progressive Web Apps (PWAs), Service Workers, and the Cache API—and suddenly we have expansive power over what gets put in the cache and how it gets put there. We can now cache everything we want to… and therein lies a potential problem.

    Media files—especially images—make up the bulk of average page weight these days, and it’s getting worse. In order to improve performance, it’s tempting to cache as much of this content as possible, but should we? In most cases, no. Even with all this newfangled technology at our fingertips, great performance still hinges on a simple rule: request only what you need and make each request as small as possible.

    To provide the best possible experience for our users without abusing their network connection or their hard drive, it’s time to put a spin on some classic best practices, experiment with media caching strategies, and play around with a few Cache API tricks that Service Workers have hidden up their sleeves.

    Best intentions

    All those lessons we learned optimizing web pages for dial-up became super-useful again when mobile took off, and they continue to be applicable in the work we do for a global audience today. Unreliable or high latency network connections are still the norm in many parts of the world, reminding us that it’s never safe to assume a technical baseline lifts evenly or in sync with its corresponding cutting edge. And that’s the thing about performance best practices: history has borne out that approaches that are good for performance now will continue being good for performance in the future.

    Before the advent of Service Workers, we could provide some instructions to browsers with respect to how long they should cache a particular resource, but that was about it. Documents and assets downloaded to a user’s machine would be dropped into a directory on their hard drive. When the browser assembled a request for a particular document or asset, it would peek in the cache first to see if it already had what it needed to possibly avoid hitting the network.

    We have considerably more control over network requests and the cache these days, but that doesn’t excuse us from being thoughtful about the resources on our web pages.

    Request only what you need

    As I mentioned, the web today is lousy with media. Images and videos have become a dominant means of communication. They may convert well when it comes to sales and marketing, but they are hardly performant when it comes to download and rendering speed. With this in mind, each and every image (and video, etc.) should have to fight for its place on the page. 

    A few years back, a recipe of mine was included in a newspaper story on cooking with spirits (alcohol, not ghosts). I don’t subscribe to the print version of that paper, so when the article came out I went to the site to take a look at how it turned out. During a recent redesign, the site had decided to load all articles into a nearly full-screen modal viewbox layered on top of their homepage. This meant that viewing the article required requests for all of the assets associated with the article page plus all the contents and assets for the homepage. Oh, and the homepage had video ads—plural. And, yes, they auto-played.

    I popped open DevTools and discovered the page had blown past 15 MB in page weight. Tim Kadlec had recently launched What Does My Site Cost?, so I decided to check out the damage. Turns out that the actual cost to view that page for the average US-based user was more than the cost of the print version of that day’s newspaper. That’s just messed up.

    Sure, I could blame the folks who built the site for doing their readers such a disservice, but the reality is that none of us go to work with the goal of worsening our users’ experiences. This could happen to any of us. We could spend days scrutinizing the performance of a page only to have some committee decide to set that carefully crafted page atop a Times Square of auto-playing video ads. Imagine how much worse things would be if we were stacking two abysmally-performing pages on top of each other!

    Media can be great for drawing attention when competition is high (e.g., on the homepage of a newspaper), but when you want readers to focus on a single task (e.g., reading the actual article), its value can drop from important to “nice to have.” Yes, studies have shown that images excel at drawing eyeballs, but once a visitor is on the article page, no one cares; we’re just making it take longer to download and more expensive to access. The situation only gets worse as we shove more media into the page. 

    We must do everything in our power to reduce the weight of our pages, so avoid requests for things that don’t add value. For starters, if you’re writing an article about a data breach, resist the urge to include that ridiculous stock photo of some random dude in a hoodie typing on a computer in a very dark room.

    Request the smallest file you can

    Now that we’ve taken stock of what we do need to include, we must ask ourselves a critical question: How can we deliver it in the fastest way possible? This can be as simple as choosing the most appropriate image format for the content presented (and optimizing the heck out of it) or as complex as recreating assets entirely (for example, if switching from raster to vector imagery would be more efficient).

    Offer alternate formats

    When it comes to image formats, we don’t have to choose between performance and reach anymore. We can provide multiple options and let the browser decide which one to use, based on what it can handle.

    You can accomplish this by offering multiple sources within a picture or video element. Start by creating multiple formats of the media asset. For example, with WebP and JPG, it’s likely that the WebP will have a smaller file size than the JPG (but check to make sure). With those alternate sources, you can drop them into a picture like this:

    <picture>
      <source srcset="/my.webp" type="image/webp">
      <img src="/my.jpg" alt="Descriptive text about the picture.">
    </picture>

    Browsers that recognize the picture element will check the source element before making a decision about which image to request. If the browser supports the MIME type “image/webp,” it will kick off a request for the WebP format image. If not (or if the browser doesn’t recognize picture), it will request the JPG. 

    The nice thing about this approach is that you’re serving the smallest image possible to the user without having to resort to any sort of JavaScript hackery.

    You can take the same approach with video files:

    <video controls>
      <source src="/my.webm" type="video/webm">
      <source src="/my.mp4" type="video/mp4">
      <p>Your browser doesn’t support native video playback,
        but you can <a href="/my.mp4" download>download</a>
        this video instead.</p>
    </video>

    Browsers that support WebM will request the first source, whereas browsers that don’t—but do understand MP4 videos—will request the second one. Browsers that don’t support the video element will fall back to the paragraph about downloading the file.

    The order of your source elements matters. Browsers will choose the first usable source, so if you specify an optimized alternative format after a more widely compatible one, the alternative format may never get picked up.  

    Depending on your situation, you might consider bypassing this markup-based approach and handle things on the server instead. For example, if a JPG is being requested and the browser supports WebP (which is indicated in the Accept header), there’s nothing stopping you from replying with a WebP version of the resource. In fact, some CDN services—Cloudinary, for instance—come with this sort of functionality right out of the box.
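
    As a rough illustration of the server-side approach, here’s a minimal Node sketch with no error handling; the file paths and the convention of keeping a .webp sibling next to each .jpg are my own assumptions, not a prescription:

    const http = require( "http" );
    const fs = require( "fs" );

    http.createServer( ( request, response ) => {
      // Browsers that support WebP advertise it in the Accept header
      const wantsWebP = ( request.headers.accept || "" ).includes( "image/webp" );
      let path = "." + request.url;
      if ( wantsWebP && path.endsWith( ".jpg" ) ) {
        const webp = path.replace( /\.jpg$/, ".webp" );
        // Serve the WebP sibling if one exists on disk
        if ( fs.existsSync( webp ) ) {
          path = webp;
        }
      }
      // Tell caches that this response varies by the Accept header
      response.setHeader( "Vary", "Accept" );
      response.setHeader( "Content-Type",
        path.endsWith( ".webp" ) ? "image/webp" : "image/jpeg" );
      fs.createReadStream( path ).pipe( response );
    }).listen( 8080 );

    The Vary: Accept header is the important detail here: it keeps intermediate caches from handing the WebP version to a browser that only asked for a JPG.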

    Offer different sizes

    Formats aside, you may want to deliver alternate image sizes optimized for the current size of the browser’s viewport. After all, there’s no point loading an image that’s 3–4 times larger than the screen rendering it; that’s just wasting bandwidth. This is where responsive images come in.

    Here’s an example:

    <img src="/medium.jpg"
      srcset="small.jpg 256w,
        medium.jpg 512w,
        large.jpg 1024w"
      sizes="(min-width: 30em) 30em, 100vw"
      alt="Descriptive text about the picture.">

    There’s a lot going on in this super-charged img element, so I’ll break it down:

    • This img offers three size options for a given JPG: 256 px wide (small.jpg), 512 px wide (medium.jpg), and 1024 px wide (large.jpg). These are provided in the srcset attribute with corresponding width descriptors.
    • The src defines a default image source, which acts as a fallback for browsers that don’t support srcset. Your choice for the default image will likely depend on the context and general usage patterns. Often I’d recommend the smallest image be the default, but if the majority of your traffic is on older desktop browsers, you might want to go with the medium-sized image.
    • The sizes attribute is a presentational hint that informs the browser how the image will be rendered in different scenarios (its extrinsic size) once CSS has been applied. This particular example says that the image will be the full width of the viewport (100vw) until the viewport reaches 30 em in width (min-width: 30em), at which point the image will be 30 em wide. You can make the sizes value as complicated or as simple as you want; omitting it causes browsers to use the default value of 100vw.

    You can even combine this approach with alternate formats and crops within a single picture. 🤯

    All of this is to say that you have a number of tools at your disposal for delivering fast-loading media, so use them!

    Defer requests (when possible)

    Years ago, Internet Explorer 11 introduced a new attribute that enabled developers to de-prioritize specific img elements to speed up page rendering: lazyload. That attribute never went anywhere, standards-wise, but it was a solid attempt to defer image loading until images are in view (or close to it) without having to involve JavaScript.

    There have been countless JavaScript-based implementations of lazy loading images since then, but recently Google also took a stab at a more declarative approach, using a different attribute: loading.

    The loading attribute supports three values (“auto,” “lazy,” and “eager”) to define how a resource should be brought in. For our purposes, the “lazy” value is the most interesting because it defers loading the resource until it reaches a calculated distance from the viewport.

    Adding that into the mix…

    <img src="/medium.jpg"
      srcset="small.jpg 256w,
        medium.jpg 512w,
        large.jpg 1024w"
      sizes="(min-width: 30em) 30em, 100vw"
      loading="lazy"
      alt="Descriptive text about the picture.">

    This attribute offers a bit of a performance boost in Chromium-based browsers. Hopefully it will become a standard and get picked up by other browsers in the future, but in the meantime there’s no harm in including it because browsers that don’t understand the attribute will simply ignore it.

    This approach complements a media prioritization strategy really well, but before I get to that, I want to take a closer look at Service Workers.

    Manipulate requests in a Service Worker

    Service Workers are a special type of Web Worker with the ability to intercept, modify, and respond to all network requests via the Fetch API. They also have access to the Cache API, as well as other asynchronous client-side data stores like IndexedDB for resource storage.

    When a Service Worker is installed, you can hook into that event and prime the cache with resources you want to use later. Many folks use this opportunity to squirrel away copies of global assets, including styles, scripts, logos, and the like, but you can also use it to cache images for use when network requests fail.
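
    Priming the cache at install time looks something like this. It’s a minimal sketch; the cache name and the first two asset paths are placeholders, while the fallback image path is the one used in the next example:

    self.addEventListener( "install", event => {
      event.waitUntil(
        // Open (or create) a cache and seed it with global assets
        caches.open( "static" )
          .then( cache => cache.addAll([
            "/css/global.css",
            "/js/global.js",
            "/i/fallbacks/offline.svg"
          ]) )
      );
    });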

    Keep a fallback image in your back pocket

    Assuming you want to use a fallback in more than one networking recipe, you can set up a named function that will respond with that resource:

    function respondWithFallbackImage() {
      return caches.match( "/i/fallbacks/offline.svg" );
    }

    Then, within a fetch event handler, you can use that function to provide that fallback image when requests for images fail at the network:

    self.addEventListener( "fetch", event => {
      const request = event.request;
      // Only intervene for image requests
      if ( request.headers.get("Accept").includes("image") ) {
        event.respondWith(
          // Try the network first; fall back to the cached image on failure
          fetch( request, { mode: 'no-cors' } )
            .catch( respondWithFallbackImage )
        );
      }
    });

    When the network is available, users get the expected behavior:

    Screenshot of a component showing a series of user profile images of users who have liked something
    Social media avatars are rendered as expected when the network is available.

    But when the network is interrupted, images will be swapped automatically for a fallback, and the user experience is still acceptable:

    Screenshot showing a series of identical generic user images in place of the individual ones which have not loaded
    A generic fallback avatar is rendered when the network is unavailable.

    On the surface, this approach may not seem all that helpful in terms of performance since you’ve essentially added an additional image download into the mix. With this system in place, however, some pretty amazing opportunities open up to you.

    Respect a user’s choice to save data

    Some users reduce their data consumption by entering a “lite” mode or turning on a “data saver” feature. When this happens, browsers will often send a Save-Data header with their network requests. 

    Within your Service Worker, you can check for this preference and adjust your responses accordingly. The NetworkInformation API exposes the same signal via navigator.connection.saveData, so first you look for that:

    let save_data = false;
    if ( 'connection' in navigator ) {
      save_data = navigator.connection.saveData;
    }

    Then, within your fetch handler for images, you might choose to preemptively respond with the fallback image instead of going to the network at all:

    self.addEventListener( "fetch", event => {
      const request = event.request;
      if ( request.headers.get("Accept").includes("image") ) {
        // Skip the network entirely when the user wants to save data
        if ( save_data ) {
          event.respondWith( respondWithFallbackImage() );
          return;
        }
        // code you saw previously
      }
    });

    You could even take this a step further and tune respondWithFallbackImage() to provide alternate images based on what the original request was for. To do that you’d define several fallbacks globally in the Service Worker:

    const fallback_avatar = "/i/fallbacks/avatar.svg",
          fallback_image = "/i/fallbacks/image.svg";

    Both of those files should then be cached during the Service Worker install event:

    return cache.addAll( [
      fallback_avatar,
      fallback_image
    ]);

    Finally, within respondWithFallbackImage() you could serve up the appropriate image based on the URL being fetched. On my site, the avatars are pulled from Webmention.io, so I test for that.

    function respondWithFallbackImage( url ) {
      // Avatars on my site come from Webmention.io, so test the URL for it
      const image = /webmention\.io/.test( url ) ? fallback_avatar
                                                 : fallback_image;
      return caches.match( image );
    }

    With that change, I’ll need to update the fetch handler to pass in request.url as an argument to respondWithFallbackImage().
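
    That update might look like this, sketched onto the handler from earlier:

    self.addEventListener( "fetch", event => {
      const request = event.request;
      if ( request.headers.get("Accept").includes("image") ) {
        event.respondWith(
          fetch( request, { mode: 'no-cors' } )
            // Pass the URL along so the right fallback can be chosen
            .catch( () => respondWithFallbackImage( request.url ) )
        );
      }
    });

    Once that’s done, when the network gets interrupted I end up seeing something like this: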

    Screenshot showing a blog comment with a generic user profile image and image placeholder where the network could not load the actual images
    A webmention that contains both an avatar and an embedded image will render with two different fallbacks when the Save-Data header is present.

    Next, we need to establish some general guidelines for handling media assets—based on the situation, of course.

    The caching strategy: prioritize certain media

    In my experience, media—especially images—on the web tend to fall into three categories of necessity. At one end of the spectrum are elements that don’t add meaningful value. At the other end of the spectrum are critical assets that do add value, such as charts and graphs that are essential to understanding the surrounding content. Somewhere in the middle are what I would call “nice-to-have” media. They do add value to the core experience of a page but are not critical to understanding the content.

    If you consider your media with this division in mind, you can establish some general guidelines for handling each, based on the situation. In other words, a caching strategy.

    Media loading strategy, broken down by how critical an asset is to understanding an interface

    Media category   Fast connection   Save-Data                  Slow connection            No network
    Critical         Load media        Load media                 Load media                 Replace with placeholder
    Nice-to-have     Load media        Replace with placeholder   Replace with placeholder   Replace with placeholder
    Non-critical     Remove from content entirely, regardless of connection

    When it comes to disambiguating the critical from the nice-to-have, it’s helpful to have those resources organized into separate directories (or similar). That way we can add some logic into the Service Worker that can help it decide which is which. For example, on my own personal site, critical images are either self-hosted or come from the website for my book. Knowing that, I can write regular expressions that match those domains:

    const high_priority = [
        /aaron\-gustafson\.com/,
        /adaptivewebdesign\.info/
      ];

    With that high_priority variable defined, I can create a function that will let me know if a given image request (for example) is a high priority request or not:

    function isHighPriority( url ) {
      // how many high priority links are we dealing with?
      let i = high_priority.length;
      // loop through each
      while ( i-- ) {
        // does the request URL match this regular expression?
        if ( high_priority[i].test( url ) ) {
          // yes, it’s a high priority request
          return true;
        }
      }
      // no matches, not high priority
      return false;
    }

    Adding support for prioritizing media requests only requires adding a new conditional into the fetch event handler, like we did with Save-Data. Your specific recipe for network and cache handling will likely differ, but here was how I chose to mix in this logic within image requests:

    // Check the cache first
      // Return the cached image if we have one
      // If the image is not in the cache, continue
    
    // Is this image high priority?
    if ( isHighPriority( url ) ) {
    
      // Fetch the image
        // If the fetch succeeds, save a copy in the cache
        // If not, respond with an "offline" placeholder
    
    // Not high priority
    } else {
    
      // Should I save data?
      if ( save_data ) {
    
        // Respond with a "saving data" placeholder
    
      // Not saving data
      } else {
    
        // Fetch the image
          // If the fetch succeeds, save a copy in the cache
          // If not, respond with an "offline" placeholder
      }
    }
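
    Taken literally, that outline could be fleshed out along the following lines. This is a sketch rather than the article’s published code; the cache name, the save-data placeholder path, and the fetchAndCache() helper are illustrative assumptions:

    function handleImageRequest( request ) {
      // Check the cache first; return the cached image if we have one
      return caches.match( request ).then( cached => {
        if ( cached ) {
          return cached;
        }
        // Is this image high priority?
        if ( isHighPriority( request.url ) ) {
          return fetchAndCache( request, "/i/fallbacks/offline.svg" );
        }
        // Not high priority: should I save data?
        if ( save_data ) {
          return caches.match( "/i/fallbacks/save-data.svg" );
        }
        return fetchAndCache( request, "/i/fallbacks/offline.svg" );
      });
    }

    function fetchAndCache( request, fallback ) {
      return fetch( request, { mode: 'no-cors' } )
        .then( response => {
          // On success, stash a copy in the image cache for next time
          const copy = response.clone();
          caches.open( "images" )
            .then( cache => cache.put( request, copy ) );
          return response;
        })
        // On failure, respond with the offline placeholder
        .catch( () => caches.match( fallback ) );
    }

    You’d call handleImageRequest( event.request ) from within event.respondWith() in the image branch of the fetch handler shown earlier.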

    We can apply this prioritized approach to many kinds of assets. We could even use it to control which pages are served cache-first vs. network-first.
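
    For pages, those are the classic cache-first and network-first recipes. Here’s a minimal sketch of each; which bucket a given page falls into is up to your own strategy:

    // Cache-first: good for stable resources and pages already visited
    function cacheFirst( request ) {
      return caches.match( request )
        .then( cached => cached || fetch( request ) );
    }

    // Network-first: good for frequently updated pages like a homepage
    function networkFirst( request ) {
      return fetch( request )
        .catch( () => caches.match( request ) );
    }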

    Keep the cache tidy

    The ability to control which resources are cached to disk is a huge opportunity, but it also carries with it an equally huge responsibility not to abuse it.

    Every caching strategy is likely to differ, at least a little bit. If we’re publishing a book online, for instance, it might make sense to cache all of the chapters, images, etc. for offline viewing. There’s a fixed amount of content and—assuming there aren’t a ton of heavy images and videos—users will benefit from not having to download each chapter separately.

    On a news site, however, caching every article and photo will quickly fill up our users’ hard drives. If a site offers an indeterminate number of pages and assets, it’s critical to have a caching strategy that puts hard limits on how many resources we’re caching to disk. 

    One way to do this is to create several different blocks associated with caching different forms of content. The more ephemeral content caches can have strict limits around how many items can be stored. Sure, we’ll still be bound to the storage limits of the device, but do we really want our website to take up 2 GB of someone’s hard drive?

    Here’s an example, again from my own site:

    const sw_caches = {
      static: {
        name: `${version}static`
      },
      images: {
        name: `${version}images`,
        limit: 75
      },
      pages: {
        name: `${version}pages`,
        limit: 5
      },
      other: {
        name: `${version}other`,
        limit: 50
      }
    }

    Here I’ve defined several caches, each with a name used for addressing it in the Cache API and a version prefix. The version is defined elsewhere in the Service Worker, and allows me to purge all caches at once if necessary.
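
    That purge typically happens in the activate event. Here’s a minimal sketch, assuming version is the prefix string described above:

    self.addEventListener( "activate", event => {
      event.waitUntil(
        caches.keys().then( keys => Promise.all(
          keys
            // Anything without the current version prefix is stale
            .filter( key => ! key.startsWith( version ) )
            .map( key => caches.delete( key ) )
        ))
      );
    });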

    With the exception of the static cache, which is used for static assets, every cache has a limit to the number of items that may be stored. I only cache the most recent 5 pages someone has visited, for instance. Images are limited to the most recent 75, and so on. This is an approach that Jeremy Keith outlines in his fantastic book Going Offline (which you should really read if you haven’t already—here’s a sample).

    With these cache definitions in place, I can clean up my caches periodically and prune the oldest items. Here’s Jeremy’s recommended code for this approach:

    function trimCache(cacheName, maxItems) {
      // Open the cache
      caches.open(cacheName)
      .then( cache => {
        // Get the keys and count them
        cache.keys()
        .then(keys => {
          // Do we have more than we should?
          if (keys.length > maxItems) {
            // Delete the oldest item and run trim again
            cache.delete(keys[0])
            .then( () => {
              trimCache(cacheName, maxItems)
            });
          }
        });
      });
    }

    We can trigger this code to run whenever a new page loads. By running it in the Service Worker, it runs in a separate thread and won’t drag down the site’s responsiveness. We trigger it by posting a message (using postMessage()) to the Service Worker from the main JavaScript thread:

    // First check to see if you have an active service worker
    if ( navigator.serviceWorker.controller ) {
      // Then add an event listener
      window.addEventListener( "load", function(){
        // Tell the service worker to clean up
        navigator.serviceWorker.controller.postMessage( "clean up" );
      });
    }

    The final step in wiring it all up is setting up the Service Worker to receive the message:

    addEventListener("message", messageEvent => {
      if (messageEvent.data == "clean up") {
        // loop through the caches
        for ( let key in sw_caches ) {
          // if the cache has a limit
          if ( sw_caches[key].limit !== undefined ) {
            // trim it to that limit
            trimCache( sw_caches[key].name, sw_caches[key].limit );
          }
        }
      }
    });

    Here, the Service Worker listens for inbound messages and responds to the “clean up” request by running trimCache() on each of the cache buckets with a defined limit.

    This approach is by no means elegant, but it works. It would be far better to make decisions about purging cached responses based on how frequently each item is accessed and/or how much room it takes up on disk. (Removing cached items based purely on when they were cached isn’t nearly as useful.) Sadly, we don’t have that level of detail when it comes to inspecting the caches…yet. I’m actually working to address this limitation in the Cache API right now.

    Your users always come first

    The technologies underlying Progressive Web Apps are continuing to mature, but even if you aren’t interested in turning your site into a PWA, there’s so much you can do today to improve your users’ experiences when it comes to media. And, as with every other form of inclusive design, it starts with centering on your users who are most at risk of having an awful experience.

    Draw distinctions between critical, nice-to-have, and superfluous media. Remove the cruft, then optimize the bejeezus out of each remaining asset. Serve your media in multiple formats and sizes, prioritizing the smallest versions first to make the most of high latency and slow connections. If your users say they want to save data, respect that and have a fallback plan in place. Cache wisely and with the utmost respect for your users’ disk space. And, finally, audit your caching strategies regularly—especially when it comes to large media files.

    Follow these guidelines, and every one of your users—from folks rocking a JioPhone on a rural mobile network in India to people on a high-end gaming laptop wired to a 10 Gbps fiber line in Silicon Valley—will thank you.

  • Responsible JavaScript: Part III

    You’ve done everything you thought was possible to address your website’s JavaScript problem. You relied on the web platform where you could. You sidestepped Babel and found smaller framework alternatives. You whittled your application code down to its most streamlined form possible. Yet, things are just not fast enough. When websites fail to perform the way we as designers and developers expect them to, we inevitably turn on ourselves:

    “What are we failing to do?” “What can we do with the code we have written?” “Which parts of our architecture are failing us?”

    These are valid inquiries, as a fair share of performance woes do originate from our own code. Yet, assigning blame solely to ourselves blinds us to the unvarnished truth that a sizable onslaught of our performance problems comes from the outside.

    When the third wheel crashes the party

    Convenience always has a price, and the web is wracked by our collective preference for it. JavaScript, in particular, is employed in a way that suggests a rapidly increasing tendency to outsource whatever it is that we (the first party) don’t want to do. At times, this is a necessary decision; it makes perfect financial and operational sense in many situations.

    But make no mistake, third-party JavaScript is never cheap. It’s a devil’s bargain where vendors seduce you with solutions to your problem, yet conveniently fail to remind you that you have little to no control over the side effects that solution introduces. If a third-party provider adds features to their product, you bear the brunt. If they change their infrastructure, you will feel the effects of it. Those who use your site will become frustrated, and they aren’t going to bother grappling with an intolerable user experience. You can mitigate some of the symptoms of third parties, but you can’t cure the ailment unless you remove the solutions altogether—and that’s not always practical or possible.

    In this installment of Responsible JavaScript, we’ll take a slightly less technical approach than in the previous installment. We are going to talk more about the human side of third parties. Then, we’ll go down some of the technical avenues for how you might go about tackling the problem.

    Hindered by convenience

    When we talk about the sorry state of the web today, some of us are quick to point out the role of developer convenience in contributing to the problem. While I share the view that developer convenience has a tendency to harm the user experience, it’s not the only kind of convenience that can turn a website into a sluggish, janky mess.

    Operational conveniences can become precursors to a very thorny sort of technical debt. These conveniences are what we reach for when we can’t solve a pervasive problem on our own. They represent third-party solutions that address problems in the absence of architectural flexibility and/or adequate development resources.

    Whenever an inconvenience arises, that is the time to have the discussion around how to tackle it in a way that’s comprehensive. So let’s talk about what it looks like to tackle that sort of scenario from a more human angle.

    The problem is pain

    The reason third parties come into play in the first place is pain. When a decision maker in an organization has felt enough pain around a certain problem, they’re going to do a very human thing, which is to find the fastest way to make that pain go away.

    Markets will always find ways to address these pain points, even if the way they do so isn’t sustainable or even remotely helpful. Web accessibility overlays—third-party scripts that purport to automatically fix accessibility issues—are among the worst offenders. First, you fork over your money for a fix that doesn’t fix anything. Then you pay a wholly different sort of price when that “fix” harms the usability of your website. This is not a screed to discredit the usefulness of the tools some third-party vendors provide, but to illustrate how the adoption of third-party solutions happens, even those that are objectively awful.

    A depiction of a long task in a flame chart from the performance panel in Chrome DevTools.
    A Chrome performance trace of a long task kicked off by a third party’s web accessibility overlay script. The task occupies the main thread for roughly 600 ms on a 2017 Retina MacBook.

    So when a vendor rolls up and promises to solve the very painful problem we’re having, there’s a good chance someone is going to nibble. If that someone is high enough in the hierarchy, they’ll exert downward pressure on others to buy in—if not circumvent them entirely in the decision-making process. Conversely, adoption of a third-party solution can also occur when those in the trenches are under pressure and lack sufficient resources to create the necessary features themselves.

    Whatever the catalyst, it pays to gather your colleagues and collectively form a plan for navigating and mitigating the problems you’re facing.

    Create a mitigation plan

    Once people in an organization have latched onto a third-party solution, however ill-advised, the difficulty you’ll encounter in forcing a course change will depend on how urgent a need that solution serves. In fact, you shouldn’t try to convince proponents of the solution that their decision was wrong. Such efforts almost always backfire and can make people feel attacked and more resistant to what you’re telling them. Even worse, those efforts could create acrimony where people stop listening to each other completely, and that is a breeding ground for far worse problems to develop.

    Grouse and commiserate amongst your peers if you must—as I myself have often done—but put your grievances aside and come up with a mitigation plan to guide your colleagues toward better outcomes. The nooks and crannies of your specific approach will depend on the third parties themselves and the structure of the organization, but the bones of it could look like the following series of questions.

    What problem does this solution address?

    There’s a reason why a third-party solution was selected, and this question will help you suss out whether the rationale for its adoption is sound. Remember, there are times decisions are made when all the necessary people are not in the room. You might be in a position where you have to react to the aftermath of that decision, but the answer to this question will lead you to a natural follow-up.

    How long do we intend to use the solution?

    This question will help you identify the solution’s shelf life. Was it introduced as a bandage, with the intent to remove it once the underlying problem has been addressed, such as in the case of an accessibility overlay? Or is the need more long-term, such as the data provided by an A/B testing suite? The other possibility is that the solution can never be effectively removed because it serves a crucial purpose, as in the case of analytics scripts. It’s like throwing a mattress in a swimming pool: it’s easy to throw in, but nigh impossible to drag back out.

    In any case, you can’t know if a third-party script is here to stay if you don’t ask. Indeed, if you find out the solution is temporary, you can form a plan to eventually remove it from your site once the underlying problem it addresses has been resolved.

    Who’s the point of contact if issues arise?

    When a third-party solution is put into place, someone must be the point of contact for when—not if—issues arise.

    I’ve seen what happens (far too often) when a third-party script gets out of control. For example, a tag manager or an A/B testing framework’s JavaScript can grow slowly and insidiously because marketers aren’t cleaning out old tags or completed A/B tests. It’s for precisely these reasons that responsibility needs to be attached to a specific person in your organization for third-party solutions currently in use on your site. What that responsibility entails will differ in every situation, but could include:

    • periodic monitoring of the third-party script’s footprint;
    • maintenance to ensure the third-party script doesn’t grow out of control;
    • occasional meetings to discuss the future of that vendor’s relationship with your organization;
    • identification of overlaps of functionality between multiple third parties, and if potential redundancies can be removed;
    • and ongoing research, especially to identify speedier alternatives that may act as better replacements for slow third-party scripts.

    The idea of responsibility in this context should never be an onerous, draconian obligation you yoke your teammates with, but rather an exercise in encouraging mindfulness in your colleagues. Because without mindfulness, a third-party script’s ill effects on your website will be overlooked until it becomes a grumbling ogre in the room that can no longer be ignored. Assigning responsibility for third parties can help to prevent that from happening.

    Ensuring responsible usage of third-party solutions

    If you can put together a mitigation plan and get everyone on board, the work of ensuring the responsible use of third-party solutions can begin. Luckily for you, the actual technical work will be easier than trying to wrangle people. So if you’ve made it this far, all it will take to get results is time and persistence.

    Load only what’s necessary

    It may seem obvious, but load only what’s necessary. Judging by the amount of unused first-party JavaScript I see loaded—let alone third-party JavaScript—it’s clearly a problem. It’s like trying to clean your house by stuffing clutter into the closets. Regardless of whether they’re actually needed, it’s not uncommon for third-party scripts to be loaded on every single page, so refer to your point of contact to figure out which pages need which third-party scripts.

    As an example, one of my past clients used a popular third-party tool across multiple brand sites to get a list of retailers for a given product. It demonstrated clear value, but that script only needed to be on a site’s product detail page. In reality, it was frequently loaded on every page. Culling this script from pages where it didn’t belong significantly boosted performance for non-product pages, which ostensibly reduced the friction on the conversion path.
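    To make that concrete, here’s a minimal sketch of conditionally loading such a script. The ".product-detail" selector and the script URL are hypothetical placeholders, not the client’s actual setup:

    // Inject the retailer-locator script only on pages that actually
    // contain product detail markup (selector and URL are hypothetical).
    if (document.querySelector(".product-detail")) {
      const scriptEl = document.createElement("script");
      scriptEl.defer = true;
      scriptEl.src = "https://cdn.example.com/retailer-locator.js";
      document.body.append(scriptEl);
    }

    Server-side templating that omits the script tag entirely on non-product pages works just as well, and avoids shipping even the loader logic.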

    Figuring out which pages need which third-party scripts requires you to do some decidedly untechnical work. You’ll actually have to get up from your desk and talk to the person who has been assigned responsibility for the third-party solution you’re grappling with. This is very difficult work for me, but it’s rewarding when good-faith collaboration happens, and good outcomes are realized as a result.

    Self-host your third-party scripts

    This advice isn’t a secret by any stretch. I even touched on it in the previous installment of this series, but it needs to be shouted from the rooftops at every opportunity: you should self-host as many third-party resources as possible. Whether this is feasible depends on the third-party script in question.

    Is it some framework you’re grabbing from Google’s hosted libraries, cdnjs, or another similar provider? Self-host that sucker right now.
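    In the simplest case, self-hosting is nothing more than copying the file to your own origin and updating the reference. A before-and-after sketch, using lodash from cdnjs as a stand-in example (the self-hosted path is hypothetical):

    <!-- Before: fetched from a public CDN -->
    <script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.15/lodash.min.js"></script>

    <!-- After: the same file served from your own origin -->
    <script src="/assets/js/lodash.min.js"></script>

    You do take on the chore of updating that copy yourself, but you remove an entire third-party connection from the critical path.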

    Casper found a way to self-host their Optimizely script and significantly reduced their start render time for their trouble. It really drives home the point that the mere fact that third-party resources live on other servers is one of the worst performance bottlenecks we encounter.

    If you’re looking to self-host an analytics solution or a similar sort of script, there’s a higher level of difficulty to contend with. You may find that some third-party scripts simply can’t be self-hosted, but that doesn’t mean it isn’t worth the trouble to find out. If self-hosting isn’t an option for a third-party script, don’t fret. There are other mitigations you can try.

    Mask latency of cross-origin connections

    If you can’t self-host your third-party scripts, the next best thing is to preconnect to servers that host them. WebPageTest’s Connection View does a fantastic job of showing you which servers your site gathers resources from, as well as the latency involved in establishing connections to them.

    A screenshot of WebPageTest's connection view, which visualizes the latency involved with all the servers that serve content for a given page in a waterfall chart.
    WebPageTest’s Connection View shows all the different servers a page requests resources from during load.

    Preconnections are effective because they establish connections to third-party servers before the browser would otherwise discover them. Parsing HTML takes time, and parsers are often blocked by stylesheets and other scripts. Wherever you can’t self-host third-party scripts, preconnections make perfect sense.
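    If you know ahead of time which third-party origins a page will contact, a resource hint in the document head warms those connections up early. A minimal sketch, with a hypothetical third-party host:

    <!-- Resolve DNS, open the TCP connection, and negotiate TLS early. -->
    <link rel="preconnect" href="https://thirdparty.example.com">
    <!-- Add crossorigin when the resource will be fetched with CORS, as
         fonts and some scripts are: -->
    <link rel="preconnect" href="https://thirdparty.example.com" crossorigin>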

    Maybe don’t preload third-party scripts

    Preloading resources is one of those things that sounds fantastic at first—until you consider its potential to backfire, as Andy Davies points out. If you’re unfamiliar with preloading, it’s similar to preconnecting but goes a step further by instructing the browser to fetch a particular resource far sooner than it ordinarily would.

    The drawback of preloading is that while it’s great for ensuring a resource gets loaded as soon as possible, it changes the discovery order of that resource. Whenever we do this, we’re implicitly saying that other resources are less important—including resources crucial to rendering or even core functionality.

    It’s probably a safe bet that most of your third-party code is not as crucial to the functionality of your site as your own code. That said, if you must preload a third-party resource, do so only for scripts that are critical to page rendering.
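    If you’ve confirmed that a script really is render-critical, the hint looks like this (the URL is a hypothetical placeholder):

    <!-- Fetch this render-critical script ahead of its place in the parse order. -->
    <link rel="preload" href="https://thirdparty.example.com/render-critical.js" as="script">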

    If you do find yourself in a position where your site’s initial rendering depends on a third-party script, refer to your mitigation plan to see what you can do to eliminate or ameliorate your dependence on it. Depending on a third party for core functionality is never a good position to be in, as you’re relinquishing a lot of control to others who might not have your best interests in mind.

    Lazy load non-essential third-party scripts

    The best request is no request. If you have a third-party script that doesn’t need to be loaded right away, consider lazy loading it with an Intersection Observer. Here’s what it might look like to lazy load a Facebook Like button when it’s scrolled into the viewport:

    
    // Track whether we've already requested the Facebook SDK.
    let loadedFbScript = false;

    const intersectionListener = new IntersectionObserver((entries, observer) => {
      entries.forEach(entry => {
        // Inject the SDK the first time the Like button enters the viewport.
        if (entry.isIntersecting && !loadedFbScript) {
          // Flip the flag immediately so repeated intersection callbacks
          // can't inject the script twice while the request is in flight.
          loadedFbScript = true;

          const scriptEl = document.createElement("script");

          scriptEl.defer = true;
          scriptEl.crossOrigin = "anonymous";
          scriptEl.src = "https://connect.facebook.net/en_US/sdk.js#xfbml=1&version=v3.0";

          document.body.append(scriptEl);

          // The script is on its way; no need to keep watching.
          observer.unobserve(entry.target);
        }
      });
    });

    // Guard against pages that don't include a Like button at all.
    const likeButton = document.querySelector(".fb-like");
    if (likeButton) {
      intersectionListener.observe(likeButton);
    }
    
    

    In the above snippet, we first set a variable to track whether we’ve requested the Facebook SDK JavaScript. After that, an IntersectionObserver is created that checks whether the observed element has entered the viewport and whether the SDK has already been requested. If it hasn’t, the flag is flipped so repeated callbacks can’t inject the script twice, a reference to the SDK is injected into the DOM (kicking off a request for it), and the observer stops watching the element.

    You’re not going to be able to lazy load every third-party script. Some of them simply need to do their work at page load time, or otherwise can’t be deferred. Regardless, do the detective work to see if it’s possible to lazy load at least some of your third-party JavaScript.

    One of the common concerns I hear from coworkers when I suggest lazy loading third-party scripts is how it can delay whatever interactions the third party provides. That’s a reasonable concern, because when you lazy load anything, a noticeable delay may occur as the resource loads. You can get around this to some extent with resource prefetching. This is different from preloading, which we discussed earlier. Prefetching consumes a comparable amount of data, yes, but prefetched resources are given lower priority and are less likely to contend for bandwidth with critical resources.
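    A minimal sketch of that idea: ask the browser to fetch the script at low priority during idle time, so it’s likely to be in cache by the time the lazy loader requests it (hypothetical URL again):

    <!-- A low-priority hint; the browser may fetch this during idle time. -->
    <link rel="prefetch" href="https://thirdparty.example.com/widget.js">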

    Staying on top of the problem

    Keeping an eye on your third-party JavaScript requires mindfulness bordering on hypervigilance. When you recognize poor performance for the technical debt that it truly is, you’ll naturally slip into a frame of mind where you’ll recognize and address it as you would any other kind of technical debt.

    Staying on top of third parties is a kind of refactoring, one that requires you to periodically perform tasks such as cleaning up tag managers and A/B tests, consolidating third-party solutions, eliminating any that are no longer needed, and applying the coding techniques discussed above. Moreover, you’ll need to work with your team to address this technical debt on a cyclical basis. This kind of work can’t be automated, so yes, you’ll need to knuckle down and have face-to-face, synchronous conversations with actual people.

    If you’re already in the habit of scheduling “cleanup sprints” on some interval, then that is the time and space for you to address performance-related technical debt, regardless of whether it involves third- or first-party code. There’s a time for feature development, but that time should not comprise the whole of your working hours. Development shops that focus only on feature development are destined to be wholly consumed by the technical debt that will inevitably result.

    So it will come to pass that in the fourth and final installment of this series we’ll discuss what it means to do the hard work of using JavaScript responsibly in the context of process. Therein, we’ll explore what it takes to unite your organization under the banner of making your website faster and more accessible, and therefore more usable for everyone, everywhere.

  • The Untapped Power of Vulnerability & Transparency in Content Strategy

    In marketing, transparency and vulnerability are unjustly stigmatized. The words conjure illusions of being frightened, imperfect, and powerless. And for companies that shove carefully curated personas in front of users, little is more terrifying than losing control of how people perceive the brand.

    Let’s shatter this illusory stigma. Authentic vulnerability and transparency are strengths masquerading as weaknesses. And companies too scared to embrace both traits in their content forfeit bona fide user-brand connections for often shallow, misleading engagement tactics that create fleeting relationships.

    Transparency and vulnerability are closely entwined concepts, but each one engages users in a unique way. Transparency is how much information you share, while vulnerability is the truth and meaning behind your actions and words. Combining these ideas is the trick to creating empowering and meaningful content. You can’t tell true stories of vulnerability without transparency, and to be authentically transparent you must be vulnerable.

    To be vulnerable, your brand and its content must be brave, genuine, humble, and open, all of which are traits that promote long-term customer loyalty. And if you’re transparent with users about who you are and about your business practices, you’re courting 94 percent of consumers who say they’re more loyal to brands that offer complete openness and 89 percent of people who say they give transparent companies a second chance after a bad experience.

    For many companies, being completely honest and open with their customers—or employees, in some cases—only happens in a crisis. Unfortunately for those businesses, using vulnerability and transparency only as a crisis management strategy diminishes how sincere they appear and can reduce customer satisfaction.

    Unlocking the potential of being transparent and vulnerable with users isn’t a one-off tactic or quick-fix emergency response tool—it’s a commitment to intimate storytelling that embraces a user’s emotional and psychological needs, which builds a meaningful connection between the storyteller and the audience.

    The three storytelling pillars of vulnerable and transparent content

    In her book Braving the Wilderness, sociologist Brené Brown explains that vulnerability connects us at an emotional level. She says that when we recognize someone is being vulnerable, we invest in their story and begin to develop an emotional bond. This interwoven connection encourages us to experience the storyteller’s joy and pain, and then creates a sense of community and common purpose between the person being vulnerable and the people who acknowledge that vulnerability.

    Three pillars in a company’s lifecycle embrace this bond and provide an outline for telling stories worthy of a user’s emotional investment. The pillars are:

    • the origins of a company, product, idea, or situation;
    • intimate narratives about customers’ life experiences;
    • and insights about product success and failure.

    Origin stories

    An origin story spins a transparent tale about how a company, product, service, or idea is created. It is often told by a founder, CEO, or industry innovator. This pillar is usually used as an authentic way to provide crisis management or as a method to change how users feel about a topic, product, or your brand.

    Customers’ life experiences

    While vulnerable origin stories do an excellent job of making users trust your brand, telling a customer’s personal life story is arguably the most effective way to use vulnerability to entwine a brand with someone’s personal identity.

    Unlike an origin story, the customer experiences pillar is focused on being transparent about who your customers are, what they’ve experienced, and how those journeys align with values that matter to your brand. Through this lens, you’ll empower your customers to tell emotional, meaningful stories that make users feel vulnerable in a positive way. In this situation, your brand is often a storytelling platform where users share their story with the brand and fellow customers.

    Product and service insights

    Origin stories make your brand trustworthy in a crisis, and customers’ personal stories help users feel an intimate connection with your brand’s persona and mission. The last pillar, product and service insights, combines the psychological principles that make origin and customer stories successful. The outcome is a vulnerable narrative that rallies users’ excitement about, and emotional investment in, what a company sells or the goals it hopes to achieve.

    Vulnerability, transparency, and the customer journey

    The three storytelling pillars are crucial to embracing transparency and vulnerability in your content strategy because they let you target users at specific points in their journey. By embedding the pillars in each stage of the customer’s journey, you teach users about who you are, what matters to you, and why they should care.

    For our purposes, let’s define the user journey as:

    • awareness;
    • interest;
    • consideration;
    • conversion;
    • and retention.

    Awareness

    People give each other seven seconds to make a good first impression. We’re not so generous with brands and websites. After discovering your content, users determine if it’s trustworthy within one-tenth of a second.

    Page design and aesthetics are often the determining factors in these split-second choices, but the information users discover after that decision shapes their long-term opinions about your brand. This snap judgement is why transparency and vulnerability are crucial within awareness content.

    When you only get one chance to make a positive first impression with your audience, what content is going to be more memorable?

    Typical marketing “fluff” about how your brand was built on a shared vision and commitment to unyielding customer satisfaction and quality products? Or an upfront, authentic, and honest story about the trials and tribulations you went through to get where you are now?

    Buffer, a social media management company that helped pioneer the radical transparency movement, chose the latter option. The outcome created awareness content that leaves a positive lasting impression of the brand.

    In 2016, Joel Gascoigne, cofounder and CEO of Buffer, used an origin story to discuss the mistakes he and his company made that resulted in laying off 10 employees.

    In the blog post “Tough News: We’ve Made 10 Layoffs. How We Got Here, the Financial Details and How We’re Moving Forward,” Gascoigne wrote about Buffer’s over-aggressive growth choices, lack of accountability, misplaced trust in its financial model, explicit risk appetite, and overenthusiastic hiring. He also discussed what he learned from the experience, the changes Buffer made based on these lessons, the consequences of those changes, and next steps for the brand.

    Gascoigne writes about each subject with radical honesty and authenticity. Throughout the article, he’s personable and relatable; his tone and voice make it obvious he’s more concerned about the lives he’s irrevocably affected than the public image of his company floundering. Because Gascoigne is so transparent and vulnerable in the blog post, it’s easy to become invested in the narrative he’s telling. The result is an article that feels more like a deep, meaningful conversation over coffee instead of a carefully curated, PR-approved response.

    Yes, Buffer used this origin story to confront a PR crisis, but they did so in a way that encouraged users to trust the brand. Buffer chose to show up and be seen when they had no control over the outcome. And because Gascoigne used vulnerability and transparency to share the company’s collective pain, the company reaped positive press coverage and support on social media—further improving brand awareness, user engagement, and customer loyalty.

    However, awareness content isn’t always brand focused. Sometimes, smart awareness content uses storytelling to teach users and shape their worldviews. The 2019 State of Science Index is an excellent example.

    The annual State of Science Index evaluates how the global public perceives science. The 2019 report shows that 87 percent of people acknowledge that science is necessary to solve the world’s problems, but 33 percent are skeptical of science and believe that scientists cause as many problems as they solve. Furthermore, 57 percent of respondents are skeptical of science because of scientists’ conflicting opinions about topics they don’t understand.

    3M, the multinational science conglomerate that publishes the report, says the solution for this anti-science mindset is to promote intimate storytelling among scientists and layfolk.

    3M creates an origin story with its awareness content by focusing on the ins and outs of scientific research. The company is open and straightforward with its data and intentions, eliminating any second guesses users might have about the content they’re digesting.

    The company kicked off this strategy on three fronts, and each storytelling medium interweaves the benefits of vulnerability and transparency by encouraging researchers to tell stories that lead with how their findings benefit humanity. Every story 3M tells focuses on breaking through barriers the average person faces when they encounter science and encouraging scientists to be vulnerable and authentic with how they share their research.

    First, 3M began a podcast series known as Science Champions. In the podcast, 3M Chief Science Advocate Jayshree Seth interviews scientists and educators about the global perception of science and how science and scientists affect our lives. The show is currently in its second season and discusses a range of topics in science, technology, engineering, and math.

    Second, the company worked with science educators, journalist Katie Couric, actor Alan Alda, and former NASA astronaut Scott Kelly to develop the free Scientists as Storytellers Guide. The ebook helps STEM researchers improve how and why they communicate their work with other people—with a special emphasis on being empathetic with non-scientists. The guide breaks down how to develop communications skills, overcome common storytelling challenges, and learn to make science more accessible, understandable, and engaging for others.

    Last, 3M created a film series called Beyond the Beaker that explores the day-to-day lives of 3M scientists. In the short videos, scientists give the viewer a glimpse into their hobbies and home life. The series showcases how scientists have diverse backgrounds, hobbies, goals, and dreams.

    Unlike Buffer, which benefits directly from its awareness content, 3M’s three content mediums are designed to create a long-term strategy that changes how people understand and perceive science, by spreading awareness through third parties. It’s too early to conclude that the strategy will be successful, but it’s off to a good start. Science Champions often tops “best of” podcast lists for science lovers, and the Scientists as Storytellers Guide is a popular resource among public universities.

    Interest

    How do you court new users when word-of-mouth and organic search dominate how people discover new brands? Target their interests.

    Now, you can be like the hundreds of other brands that create a “10 best things” list and hope people stumble onto your content organically and like what they see. Or, you can use content to engage with people who are passionate about your industry and have genuine, open discussions about the topics that matter to you both.

    The latter option is a perfect fit for the product and service insights pillar, and the customers’ life experiences pillar.

    To succeed with these pillars, you must discuss users’ passions, and how your brand plays into them, without appearing disingenuous or becoming too self-promotional.

    Nonprofits have an easier time walking this fine line because people are less judgemental when engaging with NGOs, but it’s rare for a for-profit company to achieve this balance. SpaceX and Thinx are among the few brands that manage to walk this tightrope.

    Thinx, a women’s clothing brand that sells period-proof underwear, uses its blog to generate awareness, interest, and consideration content via the customers’ life experiences pillar. The blog, aptly named Periodical, relies on transparency and vulnerability as a cornerstone to engage users about reproductive and mental health.

    Toni Brannagan, Thinx’s content editor, says the brand embraces transparency and vulnerability by sharing diverse ideas and personal experiences from customers and experts alike, not shying away from sensitive subjects and never misleading users about Thinx or the subjects Periodical discusses.

    As a company focused on women’s healthcare, the product Thinx sells is political by nature and entangles the brand with themes of shame, cultural differences, and personal empowerment. Thinx’s strategy is to tackle these subjects head-on by having vulnerable conversations in its branding, social media ads, and Periodical content.

    “Vulnerability and transparency play a role because you can’t share authentic diverse ideas and experiences about those things—shame, cultural differences, and empowerment—without it,” Brannagan says.

    A significant portion of Thinx’s website traffic is organic, which means Periodical’s interest-driven content may be a user’s first touchpoint with the brand.

    “We’ve seen that our most successful organic content is educational, well-researched articles, and also product-focused blogs that answer the questions about our underwear, in a way that’s a little more casual than what’s on our product pages,” Brannagan says. “In contrast, our personal essays and ‘more opinionated’ content performs better on social media and email.”

    Thanks in part to the blog’s authenticity and open discussions about hard-hitting topics, readers who find the brand through organic search drive the most direct conversions.

    Conversations with users interested in the industry or topic your company is involved in don’t always have to come from the company itself. Sometimes a single person can drive authentic, open conversations and create endearing user loyalty and engagement.

    For a company that relies on venture capital investments, NASA funding, and public opinion for its financial future, crossing the line between being too self-promotional and alienating users could spell doom. But SpaceX has never shied away from difficult or vulnerable conversations. Instead, the company’s founder, Elon Musk, embraces engaging with users’ interests in public forums like Twitter and press conferences.

    Twitter thread showing an exchange between Elon Musk and a user

    Musk’s tweets about SpaceX are unwaveringly authentic and transparent. He often tweets about his thoughts, concerns, and the challenges his companies face. Plus, Musk frequently engages with his Twitter followers and provides candid answers to questions many CEOs avoid discussing. This authenticity has earned him a cult-like following.

    Elon Musk gives an honest, if not flattering, response on Twitter to a user

    Musk and SpaceX create conversations that target people’s interests and use vulnerability to equally embrace failure and success. Both the company and its founder give the public and investors an unflinching story of space exploration.

    And despite laying off 10 percent of its workforce in January of 2019, SpaceX is flourishing. In May 2019, its valuation had risen to $33.3 billion and reported annual revenue exceeded $2 billion. It also earned global media coverage from launching Musk’s Tesla Roadster into space, recently completed a test flight of its Crew Dragon space vehicle, and cemented multiple new payload contracts.

    By engaging with users on social media and through standard storytelling mediums, Thinx and SpaceX bolster customer loyalty and brand engagement.

    Consideration

    Modern consumers argue that ignorance is not bliss. When users are considering converting with a brand, 86 percent of consumers say transparency is a deciding factor. Transparency remains crucial even after they convert, with 85 percent of users saying they’ll support a transparent brand during a PR crisis.

    Your brand must be open, clear, and honest with users; there is no longer another viable option.

    So how do you remain transparent while trying to sell someone a product? One solution employed by REI and Everlane is to be openly accountable to your brand and your users via the origin stories and product insights pillars.

    REI, a national outdoor equipment retailer, created a stewardship program that behaves as a multifaceted origin story. The program’s content highlights the company’s history and manufacturing policies, and it lets users dive into the nitty-gritty details about its factories, partnerships, product production methods, manufacturing ethics, and carbon footprint.

    Screenshot of the Collaborating for Good website

    REI also employs a classic content hub strategy to let customers find the program and explore its relevant information. From a single landing page, users can easily find the program through the website’s global navigation and then navigate to every tangential topic the program encompasses.

    REI also publishes an annual stewardship report, where users can learn intimate details about how the company makes and spends its money.

    Screenshot of REI's stewardship report

    Everlane, a clothing company, is equally transparent about its supply chain. The company promotes an insider’s look into its global factories via product insights stories. These glimpses tell the personal narratives of factory employees and owners, and provide insights into the products manufactured and the materials used. Everlane also publishes details of how it complies with the California Transparency in Supply Chains Act to guarantee ethical working conditions throughout its supply chain, including its refusal to partner with human traffickers.

    Screenshot of Everlane's page about the factory in Lima

    The crucial quality that Everlane and REI share is they publicize their transparency and encourage users to explore the shared information. On each website, users can easily find information about the company’s transparency endeavors via the global navigation, social media campaigns, and product pages.

    The consumer response to transparent brands like REI and Everlane is overwhelmingly positive. Customers are willing to pay price premiums for the additional transparency, which gives them comfort by knowing they’re purchasing ethical products.

    REI’s ownership model has further propelled the success of its transparency by using it to create unwavering customer engagement and loyalty. As a co-op where customers can “own” part of the company for a one-time $20 membership fee, REI is beholden to its members, many of whom pay close attention to its supply chain and the brands REI partners with.

    After a deadly school shooting in Parkland, Florida, REI members urged the company to refuse to carry CamelBak products because the brand’s parent company manufactures assault-style weapons. Members argued the partnership violated REI’s supply chain ethics. REI listened and halted orders with CamelBak. Members rejoiced and REI earned a significant amount of positive press coverage.

    Conversion

    Imagine you’ve started incorporating transparency throughout your company, and promote the results to users. Your brand also begins engaging users by telling vulnerable, meaningful stories via the three pillars. You’re seeing great engagement metrics and customer feedback from these efforts, but not much else. So, how do you get your newly invested users to convert?

    Provide users with a full-circle experience.

    If you combine the three storytelling pillars with blatant transparency and actively promote your efforts, users often transition from the consideration stage into the conversion stage. Best of all, when users convert with a company that has already earned their trust on an emotional level, they’re more likely to remain loyal to the brand and emotionally invested in its future.

    The crucial step in combining the three pillars is consistency. Your brand’s stories must always be authentic and your content must always be transparent. The outdoor clothing brand Patagonia is among the most popular and successful companies to maintain this consistency and excel with this strategy.

    Patagonia is arguably the most vocal and aggressive clothing retailer when it comes to environmental stewardship and ethical manufacturing.

    In some cases, the company tells users not to buy its clothing because rampant consumerism harms the environment, which it cares about more than profits. This level of radical transparency and vulnerability skyrocketed the company’s popularity among environmentally conscious consumers.

    In 2011, Patagonia took out a full-page Black Friday ad in the New York Times with the headline “Don’t Buy This Jacket.” In the ad, Patagonia talks about the environmental toll manufacturing clothes requires.

    “Consider the R2 Jacket shown, one of our best sellers. To make it required 135 liters of water, enough to meet the daily needs (three glasses a day) of 45 people. Its journey from its origin as 60 percent recycled polyester to our Reno warehouse generated nearly 20 pounds of carbon dioxide, 24 times the weight of the finished product. This jacket left behind, on its way to Reno, two-thirds [of] its weight in waste.”

    The ad encourages users to not buy any new Patagonia clothing if their old, ratty clothes can be repaired. To help, Patagonia launched a supplementary subdomain to its e-commerce website to support its Common Thread Initiative, which eventually got rebranded as the Worn Wear program.

    Patagonia’s Worn Wear subdomain gets users to engage with the company about causes each party cares about. Through Worn Wear, Patagonia will repair your old gear for free. If you’d rather have new gear, you can instead sell your worn-out clothing to Patagonia, which will repair it and resell it at a discount. This interaction encourages loyalty and repeat brand-user engagement.

    In addition, the navigation on Patagonia’s main website practically begs users to learn about the brand’s non-profit initiatives and its commitment to ethical manufacturing.

    Screenshot of Patagonia's page on environmental responsibility

    Today, Patagonia is among the most respected, profitable, and trusted consumer brands in the United States.

    Retention

    Content strategy extends through nearly every aspect of the marketing stack, including ad campaigns, which take a more controlled approach to vulnerability and transparency. To target users in the retention stage and keep them invested in your brand, your goal is to create content using the customers’ life experiences pillar to amplify the emotional bond and brand loyalty that vulnerability creates.

    Always took this approach and ended up with one of its most successful social media campaigns.

    An Always ad portraying a determined girl holding a baseball

    In June 2014, Always launched its #LikeAGirl campaign to empower adolescent and teenage girls by transforming the phrase “like a girl” from a slur into a meaningful and positive statement.

    The campaign is centered on a video in which Always tasked children, teenagers, and adults to behave “like a girl” by running, punching, and throwing while mimicking their perception of how a girl performs the activity. Young girls performed the tasks wholeheartedly and with gusto, while boys and adults performed overly feminine and vain characterizations. The director then challenged the person on their portrayal, breaking down what doing things “like a girl” truly means. The video ends with a powerful, heart-swelling statement:

    “If somebody else says that running like a girl, or kicking like a girl, or shooting like a girl is something you shouldn’t be doing, that’s their problem. Because if you’re still scoring, and you’re still getting to the ball in time, and you’re still being first...you’re doing it right. It doesn’t matter what they say.”

    This customer story campaign put the vulnerability girls feel during puberty front and center so the topic would resonate with users and give the brand a powerful, relevant, and purposeful role in this connection, according to an Institute for Public Relations campaign analysis.

    Consequently, the #LikeAGirl campaign was a rousing success and blew past the KPIs Always established. Initially, Always determined an “impactful launch” for the video meant 2 million video views and 250 million media impressions, the analysis states.

    Five years later, the campaign video has more than 66.9 million views and 42,700 comments on YouTube, with more than 85 percent of users reacting positively. Here are a few additional highlights the analysis document points out:

    • Eighty-one percent of women ages 16–24 support Always in creating a movement to reclaim “like a girl” as a positive and inspiring statement.
    • More than 1 million people shared the video.
    • Thirteen percent of users created user-generated content about the campaign.
    • The #LikeAGirl program achieved 4.5 billion global impressions.
    • The campaign received 290 million social impressions, with 133,000 social mentions, and it caused a 195.3 percent increase in the brand’s Twitter followers.

    Among the reasons the #LikeAGirl content was so successful is that it aligned with Brené Brown’s concept that experiencing vulnerability creates a connection centered on powerful, shared emotions. Always then amplified the campaign’s effectiveness by using those emotions to encourage specific user behavior on social media.

    How do you know if you’re making vulnerable content?

    Designing a vulnerability-focused content strategy campaign begins by determining what kind of story you want to tell, why you want to tell it, why that story matters, and how that story helps you or your users achieve a goal.

    When you’re brainstorming topics, the most important factor is that you need to care about the stories you’re telling. These tales need to be meaningful because if you’re weaving a narrative that isn’t important to you, it shows. And ultimately, why do you expect your users to care about a subject if you don’t?

    Let’s say you’re developing a content campaign for a nonprofit, and you want to use your brand’s emotional identity to connect with users. You have a handful of possible narratives but you’re not sure which one will best unlock the benefits of vulnerability. In a Medium post about telling vulnerable stories, Cayla Vidmar presents a list of seven self-reflective questions that can reveal what narrative to choose and why.

    If you can answer each of Vidmar’s questions, you’re on your way to creating a great story that can connect with users on a level unrivaled by other methods. Here’s what you should ask yourself:

    • What meaning is there in my story?
    • Can my story help others?
    • How can it help others?
    • Am I willing to struggle and be vulnerable in that struggle (even with strangers)?
    • How has my story shaped my worldview (what has it made me believe)?
    • What good have I learned from my story?
    • If other people discovered this good from their story, would it change their lives?

    While you’re creating narratives within the three pillars, refer back to Vidmar’s list to maintain the proper balance between vulnerability and transparency.

    What’s next?

    You now know that vulnerability and transparency are an endless fountain of strength, not a weakness. Vulnerable content won’t make you or your brand look weak. Your customers won’t flee at the sight of imperfection. Being human and treating your users like humans isn’t a liability.

    It’s time for your brand to embrace its untapped potential. Choose to be vulnerable, have the courage to tell meaningful stories about what matters most to your company and your customers, and overcome the fear of losing control over how users react to your content.

    Origin story

    Every origin story has six chapters:

    • the discovery of a problem or opportunity;
    • what caused this problem or opportunity;
    • the consequences of this discovery;
    • the solution to these consequences;
    • lessons learned during the process;
    • and next steps.

    Customers’ life experiences

    Every customer journey narrative has six chapters:

    • plot background to frame the customer’s experiences;
    • the customer’s journey;
    • how the brand plays into that journey (if applicable);
    • how the customer’s experiences changed them;
    • what the customer learned from this journey;
    • and how other people can use this information to improve their lives.

    Product and service insights

    Narratives about product and service insights have seven chapters:

    • an overview of the product/service;
    • how that product/service affects users;
    • why the product/service is important to the brand’s mission or to users;
    • what about this product/service failed or succeeded;
    • why that success or failure happened;
    • what lessons this scenario created;
    • and how the brand and its users are moving forward.

    You have the tools and knowledge necessary to be transparent, create vulnerable content, and succeed. And we need to tell vulnerable stories because sharing our experiences and embracing our common connections matters. So go ahead, put yourself out into the open, and see how your customers respond.

Search Engine Watch
Keep updated with major stories about search engine marketing and search engines as published by Search Engine Watch.
ClickZ News
Breaking news, information, and analysis.
PCWorld
CNN.com - RSS Channel - App Tech Section
CNN.com delivers up-to-the-minute news and information on the latest top stories, weather, entertainment, politics and more.
