
News from the world of web design and SEO

Here is syndicated news from several of the leading sites in the field of web design and SEO (search engine optimization).

A List Apart: The Full Feed
Articles for people who make web sites.
  • Beware the Cut ‘n’ Paste Persona

    This Person Does Not Exist is a website that generates human faces with a machine learning algorithm. It takes real portraits and recombines them into fake human faces. We recently scrolled past a LinkedIn post stating that this website could be useful “if you are developing a persona and looking for a photo.” 

    We agree: the computer-generated faces could be a great match for personas—but not for the reason you might think. Ironically, the website highlights the core issue of this very common design method: the person(a) does not exist. Like the pictures, personas are artificially made. Information is taken out of natural context and recombined into an isolated snapshot that’s detached from reality. 

    But strangely enough, designers use personas to inspire their design for the real world. 

    Personas: A step back

    Most designers have created, used, or come across personas at least once in their career. In their article “Personas - A Simple Introduction,” the Interaction Design Foundation defines personas as “fictional characters, which you create based upon your research in order to represent the different user types that might use your service, product, site, or brand.” In their most complete expression, personas typically consist of a name, profile picture, quotes, demographics, goals, needs, behavior in relation to a certain service/product, emotions, and motivations (for example, see Creative Companion’s Persona Core Poster). The purpose of personas, as stated by design agency Designit, is “to make the research relatable, [and] easy to communicate, digest, reference, and apply to product and service development.”

    The decontextualization of personas

    Personas are popular because they make “dry” research data more relatable, more human. However, this method constrains the researcher’s data analysis in such a way that the investigated users are removed from their unique contexts. As a result, personas don’t portray key factors that make you understand their decision-making process or allow you to relate to users’ thoughts and behavior; they lack stories. You understand what the persona did, but you don’t have the background to understand why. You end up with representations of users that are actually less human.

    This “decontextualization” we see in personas happens in four ways, which we’ll explain below. 

    Personas assume people are static 

    Although many companies still try to box in their employees and customers with outdated personality tests (looking at you, Myers-Briggs), here’s a painfully obvious truth: people are not a fixed set of features. You act, think, and feel differently according to the situations you experience. You appear different to different people; you might act friendly to some, rough to others. And you change your mind all the time about decisions you’ve made.

    Modern psychologists agree that while people generally behave according to certain patterns, it’s actually a combination of background and environment that determines how people act and make decisions. The context—the environment, the influence of other people, your mood, the entire history that led up to a situation—determines the kind of person you are in each specific moment.

    In their attempt to simplify reality, personas do not take this variability into account; they present a user as a fixed set of features. Like personality tests, personas snatch people away from real life. Even worse, people are reduced to a label and categorized as “that kind of person” with no means to exercise their innate flexibility. This practice reinforces stereotypes, lowers diversity, and doesn’t reflect reality. 

    Personas focus on individuals, not the environment

    In the real world, you’re designing for a context, not for an individual. Each person lives in a family, a community, an ecosystem, where there are environmental, political, and social factors you need to consider. A design is never meant for a single user. Rather, you design for one or more particular contexts in which many people might use that product. Personas, however, show the user alone rather than describe how the user relates to the environment. 

    Would you always make the same decision over and over again? Maybe you’re a committed vegan but still decide to buy some meat when your relatives are coming over. As they depend on different situations and variables, your decisions—and behavior, opinions, and statements—are not absolute but highly contextual. The persona that “represents” you wouldn’t take into account this dependency, because it doesn’t specify the premises of your decisions. It doesn’t provide a justification of why you act the way you do. Personas enact the well-known bias called fundamental attribution error: explaining others’ behavior too much by their personality and too little by the situation.

    As mentioned by the Interaction Design Foundation, personas are usually placed in a scenario that’s a “specific context with a problem they want to or have to solve”—does that mean context actually is considered? Unfortunately, what often happens is that you take a fictional character and based on that fiction determine how this character might deal with a certain situation. This is made worse by the fact that you haven’t even fully investigated and understood the current context of the people your persona seeks to represent; so how could you possibly understand how they would act in new situations? 

    Personas are meaningless averages

    As mentioned in Shlomo Goltz’s introductory article on Smashing Magazine, “a persona is depicted as a specific person but is not a real individual; rather, it is synthesized from observations of many people.” A well-known critique of this aspect of personas is that the average person does not exist, as per the famous example of the US Air Force designing planes around the averages of 140 physical dimensions measured across its pilots, only to find that not a single pilot actually fit the resulting average seat.

    The same limitation applies to people’s mental aspects. Have you ever heard a famous person say, “They took what I said out of context! They used my words, but I didn’t mean it like that”? The celebrity’s statement was reported literally, but the reporter failed to explain the context around the statement and didn’t describe the non-verbal expressions. As a result, the intended meaning was lost. You do the same when you create personas: you collect somebody’s statement (or goal, or need, or emotion) whose meaning can only be understood within its own specific context, yet you report it as an isolated finding.

    But personas go a step further, extracting a decontextualized finding and joining it with another decontextualized finding from somebody else. The resulting set of findings often doesn’t make sense: it’s unclear, or even contradictory, because it lacks the underlying reasons for why and how each finding arose. It lacks meaning. And the persona doesn’t give you the full background of the person(s) needed to uncover this meaning: you would need to dive into the raw data behind each single persona item to find it. What, then, is the usefulness of the persona?

    Composite image of a man composed of many different photos

    The relatability of personas is deceiving

    To a certain extent, designers realize that a persona is a lifeless average. To overcome this, designers invent and add “relatable” details to personas to make them resemble real individuals. Nothing captures the absurdity of this better than a sentence by the Interaction Design Foundation: “Add a few fictional personal details to make the persona a realistic character.” In other words, you add non-realism in an attempt to create more realism. You deliberately obscure the fact that “John Doe” is an abstract representation of research findings; but wouldn’t it be much more responsible to emphasize that John is only an abstraction? If something is artificial, let’s present it as such.

    It’s the finishing touch of a persona’s decontextualization: after having assumed that people’s personalities are fixed, dismissed the importance of their environment, and hidden meaning by joining isolated, non-generalizable findings, designers invent new context to create (their own) meaning. In doing so, as with everything they create, they introduce a host of biases. As phrased by Designit, as designers we can “contextualize [the persona] based on our reality and experience. We create connections that are familiar to us.” This practice reinforces stereotypes, doesn’t reflect real-world diversity, and gets further away from people’s actual reality with every detail added. 

    To do good design research, we should report the reality “as-is” and make it relatable for our audience, so everyone can use their own empathy and develop their own interpretation and emotional response.

    Dynamic Selves: The alternative to personas

    If we shouldn’t use personas, what should we do instead? 

    Designit has proposed using Mindsets instead of personas. Each Mindset is a “spectrum of attitudes and emotional responses that different people have within the same context or life experience.” It challenges designers to not get fixated on a single user’s way of being. Unfortunately, while being a step in the right direction, this proposal doesn’t take into account that people are part of an environment that determines their personality, their behavior, and, yes, their mindset. Therefore, Mindsets are also not absolute but change in regard to the situation. The question remains, what determines a certain Mindset?

    Another alternative comes from Margaret P., author of the article “Kill Your Personas,” who has argued for replacing personas with persona spectrums that consist of a range of user abilities. For example, a visual impairment could be permanent (blindness), temporary (recovery from eye surgery), or situational (screen glare). Persona spectrums are highly useful for more inclusive and context-based design, as they’re based on the understanding that the context is the pattern, not the personality. Their limitation, however, is that they have a very functional take on users that misses the relatability of a real person taken from within a spectrum. 

    In developing an alternative to personas, we aim to transform the standard design process to be context-based. Contexts are generalizable and have patterns that we can identify, just like we tried to do previously with people. So how do we identify these patterns? How do we ensure truly context-based design? 

    Understand real individuals in multiple contexts

    Nothing is more relatable and inspiring than reality. Therefore, we have to understand real individuals in their multi-faceted contexts, and use this understanding to fuel our design. We refer to this approach as Dynamic Selves.

    Let’s take a look at what the approach looks like, based on an example of how one of us applied it in a recent project that researched habits of Italians around energy consumption. We drafted a design research plan aimed at investigating people’s attitudes toward energy consumption and sustainable behavior, with a focus on smart thermostats. 

    1. Choose the right sample

    When we argue against personas, we’re often challenged with quotes such as “Where are you going to find a single person that encapsulates all the information from one of these advanced personas[?]” The answer is simple: you don’t have to. You don’t need to have information about many people for your insights to be deep and meaningful. 

    In qualitative research, validity does not derive from quantity but from accurate sampling. You select the people that best represent the “population” you’re designing for. If this sample is chosen well, and you have understood the sampled people in sufficient depth, you’re able to infer how the rest of the population thinks and behaves. There’s no need to study seven Susans and five Yuriys; one of each will do. 

    Similarly, you don’t need to understand Susan in fifteen different contexts. Once you’ve seen her in a couple of diverse situations, you’ve understood the scheme of Susan’s response to different contexts. Not Susan as an atomic being but Susan in relation to the surrounding environment: how she might act, feel, and think in different situations. 

    Given that each person is representative of a part of the total population you’re researching, it becomes clear why each should be represented as an individual, as each already is an abstraction of a larger group of individuals in similar contexts. You don’t want abstractions of abstractions! These selected people need to be understood and shown in their full expression, remaining in their own microcosm—and if you want to identify patterns, you can focus on identifying patterns in contexts.

    Yet the question remains: how do you select a representative sample? First of all, you have to consider the target audience of the product or service you are designing: it might be useful to look at the company’s goals and strategy, the current customer base, and/or a possible future target audience.

    In our example project, we were designing an application for those who own a smart thermostat. In the future, everyone could have a smart thermostat in their house. Right now, though, only early adopters own one. To build a significant sample, we needed to understand why these early adopters became early adopters in the first place. We therefore recruited by asking people why they had a smart thermostat and how they got it. There were those who had chosen to buy it, those who had been influenced by others to buy it, and those who had found it in their house. So we selected representatives of these three situations, from different age groups and geographical locations, with an equal balance of tech-savvy and non-tech-savvy participants.

    2. Conduct your research

    After having chosen and recruited your sample, conduct your research using ethnographic methodologies. This will make your qualitative data rich with anecdotes and examples. In our example project, given COVID-19 restrictions, we converted an in-house ethnographic research effort into remote family interviews, conducted from home and accompanied by diary studies.

    To gain an in-depth understanding of attitudes and decision-making trade-offs, the research focus was not limited to the interviewee alone but deliberately included the whole family. Each interviewee would tell a story that would then become much more lively and precise with the corrections or additional details coming from wives, husbands, children, or sometimes even pets. We also focused on the relationships with other meaningful people (such as colleagues or distant family) and all the behaviors that resulted from those relationships. This wide research focus allowed us to shape a vivid mental image of dynamic situations with multiple actors. 

    It’s essential that the scope of the research remains broad enough to be able to include all possible actors. Therefore, it normally works best to define broad research areas with macro questions. Interviews are best set up in a semi-structured way, where follow-up questions will dive into topics mentioned spontaneously by the interviewee. This open-minded “plan to be surprised” will yield the most insightful findings. When we asked one of our participants how his family regulated the house temperature, he replied, “My wife has not installed the thermostat’s app—she uses WhatsApp instead. If she wants to turn on the heater and she is not home, she will text me. I am her thermostat.”

    3. Analysis: Create the Dynamic Selves

    During the research analysis, you start representing each individual with multiple Dynamic Selves, each “Self” representing one of the contexts you have investigated. The core of each Dynamic Self is a quote, which comes supported by a photo and a few relevant demographics that illustrate the wider context. The research findings themselves will show which demographics are relevant to show. In our case, as our research focused on families and their lifestyle to understand their needs for thermal regulation, the important demographics were family type, number and nature of houses owned, economic status, and technological maturity. (We also included the individual’s name and age, but they’re optional—we included them to ease the stakeholders’ transition from personas and be able to connect multiple actions and contexts to the same person).

    Three cards, each showing a different lifestyle photo, a quote that correlates to that dynamic self's attitude about technology, and some basic demographic info

    To capture exact quotes, interviews need to be video-recorded and notes need to be taken verbatim as much as possible. This is essential to the truthfulness of the several Selves of each participant. In the case of real-life ethnographic research, photos of the context and anonymized actors are essential to build realistic Selves. Ideally, these photos should come directly from field research, but an evocative and representative image will work, too, as long as it’s realistic and depicts meaningful actions that you associate with your participants. For example, one of our interviewees told us about his mountain home where he used to spend every weekend with his family. Therefore, we portrayed him hiking with his little daughter. 

    At the end of the research analysis, we displayed all of the Selves’ “cards” on a single canvas, categorized by activities. Each card displayed a situation, represented by a quote and a unique photo. All participants had multiple cards about themselves.

    A collection of many cards representing many dynamic self personas

    4. Identify design opportunities

    Once you have collected all main quotes from the interview transcripts and diaries, and laid them all down as Self cards, you will see patterns emerge. These patterns will highlight the opportunity areas for new product creation, new functionalities, and new services—for new design. 

    In our example project, there was a particularly interesting insight around the concept of humidity. We realized that people don’t know what humidity is and why it is important to monitor it for health: an environment that’s too dry or too wet can cause respiratory problems or worsen existing ones. This highlighted a big opportunity for our client to educate users on this concept and become a health advisor.

    Benefits of Dynamic Selves

    When you use the Dynamic Selves approach in your research, you start to notice unique social relations, peculiar situations real people face and the actions that follow, and that people are surrounded by changing environments. In our thermostat project, we have come to know one of the participants, Davide, as a boyfriend, dog-lover, and tech enthusiast. 

    Davide is an individual we might have once reduced to a persona called “tech enthusiast.” But we can have tech enthusiasts who have families or are single, who are rich or poor. Their motivations and priorities when deciding to purchase a new thermostat can be completely different depending on these frames.

    Once you have understood Davide in multiple situations, and for each situation have understood in sufficient depth the underlying reasons for his behavior, you’re able to generalize how he would act in another situation. You can use your understanding of him to infer what he would think and do in the contexts (or scenarios) that you design for.

    A comparison. On one side, three people are fused into one to create a persona; in the second, the three people exist as separate dynamic selves.

    The Dynamic Selves approach aims to dismiss the conflicted dual purpose of personas—to summarize and empathize at the same time—by separating your research summary from the people you’re seeking to empathize with. This is important because our empathy for people is affected by scale: the bigger the group, the harder it is to feel empathy for others. We feel the strongest empathy for individuals we can personally relate to.  

    If you take a real person as inspiration for your design, you no longer need to create an artificial character. No more inventing details to make the character more “realistic,” no more unnecessary additional bias. It’s simply how this person is in real life. In fact, in our experience, personas quickly become nothing more than a name in our priority guides and prototype screens, as we all know that these characters don’t really exist. 

    Another powerful benefit of the Dynamic Selves approach is that it raises the stakes of your work: if you mess up your design, someone real, a person you and the team know and have met, is going to feel the consequences. It might stop you from taking shortcuts and will remind you to conduct daily checks on your designs.

    And finally, real people in their specific contexts are a better basis for anecdotal storytelling and therefore are more effective in persuasion. Documentation of real research is essential in achieving this result. It adds weight and urgency behind your design arguments: “When I met Alessandra, the conditions of her workplace struck me. Noise, bad ergonomics, lack of light, you name it. If we go for this functionality, I’m afraid we’re going to add complexity to her life.”

    Conclusion

    Designit mentioned in their article on Mindsets that “design thinking tools offer a shortcut to deal with reality’s complexities, but this process of simplification can sometimes flatten out people’s lives into a few general characteristics.” Unfortunately, personas have been culprits in a crime of oversimplification. They are unsuited to represent the complex nature of our users’ decision-making processes and don’t account for the fact that humans are immersed in contexts. 

    Design needs simplification but not generalization. You have to look at the research elements that stand out: the sentences that captured your attention, the images that struck you, the sounds that linger. Portray those, use them to describe the person in their multiple contexts. Both insights and people come with a context; they cannot be cut from that context because it would remove meaning. 

    It’s high time for design to move away from fiction, and embrace reality—in its messy, surprising, and unquantifiable beauty—as our guide and inspiration.

  • Immersive Content Strategy

    Beyond the severe toll of the coronavirus pandemic, perhaps no other disruption has transformed user experiences quite like how the tethers to our formerly web-biased era of content have frayed. We’re transitioning to a new world of remote work and digital content. We’re also experimenting with unprecedented content channels that, not too long ago, elicited chuckles at the watercooler, like voice interfaces, digital signage, augmented reality, and virtual reality.

    Many factors are responsible. Perhaps it’s because we yearn for immersive spaces that temporarily resurrect the Before Times, or maybe it’s due to the boredom and tedium of our now-cemented stuck-at-home routines. But aural user experiences slinging voice content, and immersive user experiences unlocking new forms of interacting with formerly web-bound content, are no longer figments of science fiction. They’re fast becoming a reality in the here and now.

    The idea of immersive experiences is all the rage these days, and content strategists and designers are now seriously examining this still-amorphous trend. Immersive experiences embrace concepts like geolocation, digital signage, and extended reality (XR). XR encompasses augmented reality (AR) and virtual reality (VR) as well as their fusion: mixed reality (MR). Sales of immersive equipment like gaming and VR headsets have skyrocketed during the pandemic, and content strategists are increasingly attuned to the kaleidoscope of devices and interfaces users now interact with on a daily basis to acquire information.

    Immersive user experiences are becoming commonplace, and, more importantly, new tools and frameworks are emerging for designers and developers looking to get their hands dirty. But that doesn’t mean our content is ready for prime time in settings unbound from the web like physical spaces, digital signage, or extended reality. Recasting your fixed web content in more immersive ways will enable more than just newfangled user experiences; it’ll prepare you for flexibility in an unpredictable future as well.

    Agnostic content for immersive experiences

    These days, we interact with content through a slew of devices. It’s no longer the case that we navigate information on a single desktop computer screen. In my upcoming book Voice Content and Usability (A Book Apart, coming June 2021), I draw a distinction between what I call macrocontent—the unwieldy long-form copy plastered across browser viewports—and Anil Dash’s definition of microcontent: the kind of brisk, contextless bursts of content that we find nowadays on Apple Watches, Samsung TVs, and Amazon Alexas.

    Today, content also has to be ready for contextless situations—not only in truncated form when we struggle to make out tiny text on our smartwatches or scroll through new television series on Roku but also in places it’s never ended up before. As the twenty-first century continues apace, our clients and our teams are beginning to come to terms with the fact that the way copy is consumed in just a few decades will bear no resemblance whatsoever to the prosaic browsers and even smartphones of today.

    What do we mean by immersive content?

    Immersive experiences are those that, according to Forrester, blur “the boundaries between the human, digital, physical, and virtual realms” to facilitate smarter, more interactive user experiences. But what do we mean by immersive content? I define immersive content as content that plays in the sandbox of physical and virtual space—copy and media that are situationally or locationally aware rather than rooted in a static, unmoving computer screen.

    Whether a space is real or virtual, immersive content (or spatial content) will be a key way in which our customers and users deal with information in the coming years. Unlike voice content, which deals with time and sound, immersive content works with space and sight. Immersive content operates not along the axis of links and page changes but rather along situational changes, as the following figure illustrates.

    In this illustration, each rectangle represents a different display that appears based on a situational change—such as movement in space or an adjustment of perspective—that results in the delivery of content different from the previous context. One of these, such as the rightmost display, can be a web-enabled content display with links to other content presented in the same display. The illustration thus demonstrates two forms of navigation: traditional link navigation and immersive situational navigation.

    Acknowledging the actual or imagined surroundings of where we are as human beings will have vast implications for content strategy, omnichannel marketing, usability testing, and accessibility. Before we dig deeper, let’s define a few clear categories of immersive content:

    • Digital signage content. Though it may seem a misnomer, digital signage is one of the most widespread examples of immersive content already in use today. For example, you may have seen it used to display a guide of stores at a mall or to aid wayfinding in an airport. While still largely bound to flat screens, it’s an example of content in space.
    • Locational content. Locational content involves copy that is delivered to a user on a personal device based on their current location in the world or within an identified physical space. Most often mediated through Bluetooth low-energy (BLE) beacon technology or GPS location services, it’s an example of content at a point in space.
    • Augmented reality content. Unlike locational content, which doesn’t usually adjust itself seamlessly based on how users move in real-world space, AR content is now common in museums and other environments—typically as overlays that are superimposed over actual physical surroundings and adjust dynamically according to the user’s position and perspective. It’s content projected into real-world space.
    • Virtual reality content. Like AR content, VR content is dependent on its imagined surroundings in terms of how it displays, but it’s part of a nonexistent space that is fully immersive, an example of content projected into virtual space.
    • Navigable content. Long a gimmicky playground for designers and developers interested in pushing the envelope, navigable content is copy that users can move across and sift through as if it were a physical space itself: true content as space.

    The following illustration depicts these types of immersive content in their typical habitats.

    Digital signage content typically appears to everyone within a space. Locational content is delivered via personal devices. AR is content projected into the real world through an overlay, while VR creates an immersive virtual environment. Finally, navigable content is content as the space itself.

    Why auditing immersive content is important

    Alongside conversational and voice content, immersive content is a compelling example of breaking content out of the limiting box where it has long lived: the browser viewport, the computer screen, and the 8.5”x11” or broadsheet borders of print media. For centuries, our written copy has been affixed to the staid standards of whatever bookbinders, newspaper printing presses, and screen manufacturers decided. Today, however, for the first time, we’re surmounting those arbitrary barriers and situating content in contexts that challenge all the assumptions we’ve made since the era of Gutenberg—and, arguably, since clay tablets, papyrus manuscripts, and ancient scrolls.

    Today, it’s never been more pressing to implement an omnichannel content strategy that centers the reality our customers increasingly live in: a world in which information can end up on any device, even if it has no tether to a clickable or scrollable setting. One of the most important elements of such a future-proof content strategy is an omnichannel content audit that evaluates your content from a variety of standpoints so you can manage and plan it effectively. These audits generally consist of several steps:

    • Write a questionnaire. Each content item needs to be examined from the perspective of each channel through a series of channel-relevant questions, like whether content is legible or discoverable on every conduit through which it travels.
    • Settle the criteria. No questionnaire is complete for a content audit without evaluation criteria that measure how the content performs and recommendation criteria that determine necessary steps to improve its efficacy.
    • Discuss with stakeholders. At the end of any content audit, it’s important to leaf through the results and any recommendations in a frank discussion with stakeholders, including content strategists, editors, designers, and others.

    In my previous article for A List Apart, I shared the work we did on a conversational content audit for Ask GeorgiaGov, the first (but now decommissioned) Alexa skill for residents of the state of Georgia. Such a content audit is just one facet of the multidimensional omnichannel content strategy you’ll need to consider. Nonetheless, there are a few things all content audits share in terms of foundational evaluation criteria across all content delivery channels:

    • Content legibility. Is the content readable or easily consumable from a variety of vantage points and perspectives? In the case of immersive content, this can include examining verbosity tolerance (how long content can be before users zone out, a big factor in digital signage) and phantom references (like links and calls to action that make sense on the web but not on a VR headset).
    • Content discoverability. It’s no longer guaranteed in immersive content experiences that every piece of content can be accessed from other content items, and content loses almost all of its context when displayed unmoored from other content in digital signs or AR overlays. For discoverability’s sake, avoid relegating content to unreachable siloes, whether content is inaccessible due to physical conditions (like walls or other obstacles) or technical ones (like a finicky VR headset).
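    To make those criteria easier to picture, here is a minimal sketch, in TypeScript, of how an omnichannel audit entry might be recorded; the channel names, the 1–5 scoring scale, and the fields are illustrative assumptions rather than anything prescribed above.

      // A minimal sketch of an omnichannel content audit entry.
      // Channel names, criteria fields, and the 1–5 scale are illustrative assumptions.

      type Channel = "web" | "digital-signage" | "locational" | "ar" | "vr";

      interface ChannelEvaluation {
        channel: Channel;
        legibility: 1 | 2 | 3 | 4 | 5;      // readable or consumable from the relevant vantage points?
        discoverability: 1 | 2 | 3 | 4 | 5; // reachable without web-style links or surrounding context?
        notes: string;                      // e.g., verbosity tolerance, phantom references
        recommendation: string;             // concrete step to improve efficacy
      }

      interface AuditEntry {
        contentId: string;
        title: string;
        evaluations: ChannelEvaluation[];
      }

      // Example: a service advisory audited for the web and for digital signage.
      const advisoryAudit: AuditEntry = {
        contentId: "advisory-142",
        title: "Weekend service change",
        evaluations: [
          {
            channel: "web",
            legibility: 5,
            discoverability: 4,
            notes: "Full article with links works as-is.",
            recommendation: "None.",
          },
          {
            channel: "digital-signage",
            legibility: 2,
            discoverability: 3,
            notes: "Copy too long for a ticker; contains a 'click here' phantom reference.",
            recommendation: "Provide a link-free summary field capped at 140 characters.",
          },
        ],
      };

      // Surface the items most in need of work for the stakeholder discussion.
      const flagged = advisoryAudit.evaluations.filter(
        (e) => e.legibility <= 2 || e.discoverability <= 2
      );
      console.log(flagged.map((e) => `${e.channel}: ${e.recommendation}`));

    A structure like this keeps the questionnaire, the evaluation criteria, and the resulting recommendations in one place, so the stakeholder discussion at the end of the audit can start from the flagged channels rather than from raw notes.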

    Like voice content, immersive content requires ample attention to the ways in which users approach and interact with content in physical and virtual spaces. And as I write in Voice Content and Usability, it’s also the case that cross-channel interactions can influence how we work with copy and media. After all, how often do subway and rail commuters glance up while scrolling through service advisories on their smartphones to consult a potentially more up-to-date alert on a digital sign?

    Digital signage content: Content in space

    Signage has long been a fixture of how we find our way through physical spaces, ever since the earliest roads crisscrossed civilizations. Today, digital signs are becoming ubiquitous across shopping centers, university campuses, and especially transit systems, with the New York City subway recently introducing countdown clocks that display service advisories on a ticker along the bottom of the screen, just below train arrival times.

    Digital signs can deliver critical content at important times, such as during emergencies, without the limitations imposed by the static nature of analog signs. News tickers on digital signs, for instance, can stretch for however long they need to, though succinctness is still highly prized. But digital signage’s rich potential to deliver immersive content also presents challenges when it comes to content modeling and governance.

    Are news items delivered to digital signs simply teaser or summary versions of full articles? Without a fully functional and configurable digital sign in your office, how will you preview them in context before they go live? To solve this problem for the New York City subway, the Metropolitan Transportation Authority (MTA) manages all digital signage content across all signs within a central Drupal content management system (CMS), which synthesizes data such as train arrival times from real-time feeds and transit messages administered in the CMS for arbitrary delivery to any platform across the network.

    How to present content items in digital signs also poses problems. As the following figure illustrates, do you overtake the entire screen at the risk of obscuring other information, do you leave it in a ticker that may be ignored, or do you use both depending on the priority or urgency of the content you’re presenting?

    On the left are examples of digital signage where informational messages obscure important data. On the right are examples of digital signage where informational messages are constricted to a small scrolling ticker at the bottom of the screen.

    While some digital signs have the benefit of touch screens and occupying entire digital kiosks, many are tasked with providing key information in as little space as possible, where users don’t have the luxury of manipulating the interface to customize the content they wish to view. The New York City subway makes a deliberate choice to let urgent alerts spill across the entire screen: in the interest of more urgent information that is relevant to all passengers—including those who need captions for loudspeaker announcements—it limits the sign’s usefulness for riders who simply need to know when the next train is arriving.
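    As a rough illustration of that trade-off, here is a minimal sketch, in TypeScript, of how a sign client might choose between a full-screen takeover and a ticker based on message priority; the message shape, the priority levels, and the rule itself are assumptions for the sake of the example, not how the MTA’s actual system works.

      // A minimal sketch of priority-based display selection for a digital sign.
      // The message shape and the fullscreen-vs-ticker rule are illustrative assumptions.

      interface SignMessage {
        id: string;
        body: string;
        priority: "urgent" | "informational";
        expiresAt: Date;
      }

      type DisplayMode = "fullscreen" | "ticker";

      function chooseDisplayMode(message: SignMessage, now: Date = new Date()): DisplayMode | null {
        if (message.expiresAt <= now) return null;           // stale content never displays
        return message.priority === "urgent" ? "fullscreen"  // suspensions, emergencies
                                             : "ticker";     // planned work, general notices
      }

      const messages: SignMessage[] = [
        { id: "m1", body: "No trains between A and B due to signal problems.", priority: "urgent", expiresAt: new Date("2030-01-01") },
        { id: "m2", body: "Elevator at the north exit is out of service this weekend.", priority: "informational", expiresAt: new Date("2030-01-01") },
      ];

      for (const m of messages) {
        console.log(m.id, chooseDisplayMode(m));
      }

    Even a rule this simple forces the content model to carry an explicit priority and expiry, which is exactly the kind of governance decision a central CMS for signage content has to encode.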

    Auditing for digital signage content

    Because digital signs value brevity and efficiency, digital signage content often isn’t the main focus of what’s displayed. Digital signs on the São Paulo metro, for instance, juggle service alerts, breaking news, and health advisories. For this reason, auditing digital signage content for legibility and discoverability is key to ensuring users can interact with it gracefully, regardless of how often it appears, how highly prioritized it is, or what it covers.

    When it comes to legibility, ask yourself these questions and consider the digital sign content you’re authoring based on these concerns:

    • Font size and typography. Many digital signs use sans-serif typefaces, which are easier to read from a distance, and many also employ uppercase for all text, especially in tickers. Consider which typefaces advance rather than obscure legibility, even when the digital sign content overtakes the entire screen.
    • Angles and perspective. Is your digital sign content readily readable from various angles and various vantage points? Does the reflectivity of the screen impact your content’s legibility when standing just below the sign? How does your content look when it’s displayed to a user craning their neck and peering at it askew?
    • Color contrast and lighting. Digital signs are no longer just fixtures of subterranean worlds; they’re above-ground and in well-lit spaces too. Color contrast and lighting strongly influence how legible your digital sign content can be.

    As for discoverability, digital signs present challenges of both physical discoverability (can the sign itself be easily found and consulted?) and content discoverability (how long does a reader have to stare at the sign for the content they need to show up?):

    • Physical discoverability. Are signs placed in prominent locations where users will come across them? The MTA was criticized for the poor placement of many of its digital countdown clocks in the New York City subway, something that can block a user from ever accessing content they need.
    • Content discoverability. Because digital signs can only display so much content at once, even if there’s a large amount of copy to deliver eventually, users of digital signs may need to wait too long for their desired content to appear, or the content they seek may be too deprioritized for it to show up while they’re looking at the sign.

    Both legibility and discoverability of digital sign content require thorough approaches when authoring, designing, and implementing content for digital signs.

    Usability and accessibility in digital signage content

    In addition to audits, in any physical environment, immersive content on digital signs requires a careful and bespoke approach to consider not only how content will be consumed on the sign itself but also all the ways in which users move around and refer to digital signage as they consult it for information. After all, our content is no longer couched in a web page or recited by a screen reader, both objects we can control ourselves; instead, it’s flashed and displayed on flat screens and kiosks in physical spaces. 

    Consider how the digital sign and the content it presents appear to people who use mobility aids such as wheelchairs or walkers. Is the surrounding physical environment accessible enough so that wheelchair users can easily read and discover the content they seek on a digital sign, which may be positioned too high for a seated reader? By the same token, can colorblind and dyslexic people read the chosen typeface in the color scheme it’s rendered in? Is there an aural equivalent of the content for Blind people navigating your digital signage, in close proximity to the sign itself, serving as synchronized captions?

    Locational content: Content at a point in space

    Unlike digital signage content, which is copy or media displayed in a space, locational (or geolocational) content is copy or media delivered to a device—usually a phone or watch—based on a point in space (if a precise location is acquired through GPS location services) or a swath of space (typically driven by Bluetooth Low Energy (BLE) beacons that have certain ranges). For smartphone and smartwatch users, GPS location services can often pinpoint a relatively accurate sense of where a person is, while BLE beacons can triangulate the position of devices that have Bluetooth enabled.

    Examples of locational content might include links to more detailed information online, coupons, and sales relevant to merchandise or objects near the person viewing it.

    Though BLE beacons remain a fairly finicky and untested realm of spatial technology, they’ve quickly gained traction in large shopping centers and public spaces such as airports where users agree to receive content relevant to their current location, most often in the form of push notifications that whisk users away into a separate view with more comprehensive information. But because these tiny chunks of copy are often tightly contained and contextless, teams designing for locational content need to focus on how users interact with their devices as they move through physical spaces.
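    To ground this, here is a minimal sketch, in TypeScript, of GPS-based delivery of locational content; the content shape, the coordinates, and the radii are illustrative assumptions, and a real BLE-based setup would react to beacon proximity events rather than raw coordinates.

      // A minimal sketch of GPS-based locational content delivery.
      // Content items, coordinates, and radii are illustrative assumptions.

      interface LocationalContent {
        id: string;
        body: string;          // the brief, contextless copy delivered as a notification
        lat: number;
        lon: number;
        radiusMeters: number;  // deliberately generous, since GPS and BLE are imprecise
      }

      // Haversine distance between two coordinates, in meters.
      function distanceMeters(lat1: number, lon1: number, lat2: number, lon2: number): number {
        const R = 6371000; // Earth radius in meters
        const toRad = (d: number) => (d * Math.PI) / 180;
        const dLat = toRad(lat2 - lat1);
        const dLon = toRad(lon2 - lon1);
        const a =
          Math.sin(dLat / 2) ** 2 +
          Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
        return 2 * R * Math.asin(Math.sqrt(a));
      }

      // Deliver only the nearest in-range item, so closely spaced items
      // don't flood the user with simultaneous notifications.
      function pickContent(
        userLat: number,
        userLon: number,
        items: LocationalContent[]
      ): LocationalContent | null {
        const inRange = items
          .map((item) => ({ item, d: distanceMeters(userLat, userLon, item.lat, item.lon) }))
          .filter(({ item, d }) => d <= item.radiusMeters)
          .sort((a, b) => a.d - b.d);
        return inRange.length ? inRange[0].item : null;
      }

    Delivering only the nearest in-range item is one simple way to blunt the proximity problem discussed below, at the cost of occasionally suppressing a second item the user might also have wanted.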

    Auditing for locational content

    Fortunately, because locational content is often delivered to the same visual devices that we use on a regular basis—smartphones, smartwatches, and tablets—auditing for content legibility can embrace many of the same principles we employ to evaluate other content. For discoverability, some of the most important considerations include:

    • Locational discoverability. BLE beacons are notorious for their imprecision, though they continue to improve in quality. GPS location, too, can be an inaccurate measure of where someone is at any given time. The last thing you want is an incorrect triangulation of a customer’s location that leads to embarrassing mistakes and bewilderment when unexpected content travels down the wire.
    • Proximity. Because of the relative lack of precision of BLE beacons and GPS location services, placing content items too close together on a coordinate map may trigger too many notifications or resource deliveries at once, overwhelming the user, or one content item may inadvertently supersede another.

    As push notifications and location sharing become more common, locational content is rapidly becoming an important way to funnel users toward somewhat longer-form content that might otherwise go unnoticed when a customer is in a brick-and-mortar store.

    Usability and accessibility in locational content

    Because locational content requires users to move around physical spaces and trigger triangulation, consider how different types of users will move and also whether unforeseen issues can arise. For example, researchers in Japan found that users who walk while staring at their phones are highly disruptive to the flow and movement of those around them. Is your locational content possibly creating a situation where users bump into others, or worse, get into accidents? For instance, writing copy that’s quick and to the point or preventing notifications from being prematurely dismissed could allow users to ignore their devices until they have time to safely glance at them.

    Limited mobility and cognitive disabilities can place many disabled users of locational content at a deep disadvantage. While gamification may encourage users to seek as many items of locational content as possible in a given span of time for promotional purposes, consider whether it excludes wheelchair users or people who encounter obstacles when switching between contexts rapidly. There are good use cases for locational content, but what’s compelling for some users might be confounding for others.

    AR and VR content: Content projected into space

    Augmented reality, once the stuff of science fiction holograms and futuristic cityscapes, is becoming more available to the masses thanks to wearable AR devices, high-performing smartphones and tablets, and innovation in machine vision capabilities, though the utopian future of true “holographic” content remains as yet unrealized. Meanwhile, virtual reality has seen incredible growth over the pandemic as homebound users—by interacting with copy and media in fictional worlds—increasingly seek escapist ways to access content normally spread across flat screens.

    While AR and VR content is still in its infancy, the vast majority is currently couched in overlays that are superimposed over real-world environments or objects and can be opaque (occupying some of a device’s field of vision) or semi-transparent (creating an eerie, shimmery film on which text or media is displayed). Thanks to advancements in machine vision, these content overlays can track the motion of perceived objects in the physical or virtual world, bamboozling us into thinking these overlays are traveling in our fields of vision just like the things we see around us do.

    Formerly restricted to realms like museums, expensive video games, and gimmicky prototypes, AR and VR content is now becoming much more popular among companies that are interested in more immersive experiences capable of delivering content alongside objects in real-life brick-and-mortar environments, as well as virtual or imagined landscapes, like fully immersive brand experiences that transport customers to a pop-up store in their living room.

    To demonstrate this, my former team at Acquia Labs built an experimental proof of concept that examines how VR content can be administered within a CMS and a pilot project for grocery stores that explores what can happen when product information is displayed as AR content next to consumer goods in supermarket aisles. The following illustration shows, in the context of this latter experiment, how a smartphone camera interacts with a machine vision service and a Drupal CMS to acquire information to render alongside the item.

    A diagram depicting how someone might look at a physical object through their phone, and AR tools can connect to a CMS to download and display relevant information about the object virtually beside it.
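    To give a sense of the flow the diagram describes, here is a minimal sketch, in TypeScript, of how an AR client might turn a machine-vision label into overlay content fetched from a CMS; the endpoint URL, field names, and confidence threshold are hypothetical and not Acquia Labs’ actual implementation.

      // A minimal sketch of the flow in the diagram above: a label recognized by a
      // machine vision service is used to fetch content from a CMS for an AR overlay.
      // The endpoint, field names, and threshold are hypothetical.

      interface RecognizedObject {
        label: string;       // e.g., "olive-oil-500ml", as returned by a machine vision service
        confidence: number;  // 0..1
      }

      interface OverlayContent {
        title: string;
        body: string;
      }

      async function fetchOverlayContent(obj: RecognizedObject): Promise<OverlayContent | null> {
        // Drop low-confidence predictions rather than risk showing the wrong product info.
        if (obj.confidence < 0.8) return null;

        // Hypothetical CMS endpoint exposing product content as JSON.
        const response = await fetch(
          `https://cms.example.com/api/products/${encodeURIComponent(obj.label)}`
        );
        if (!response.ok) return null;

        const data = await response.json();
        return { title: data.title, body: data.summary };
      }

    The interesting editorial work sits behind that endpoint: the CMS still needs a content model with short, overlay-sized fields, since a full web product page would never fit comfortably beside a jar on a shelf.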

    Auditing for AR and VR content

    Because AR and VR content, unlike other forms of immersive content, fundamentally plays in the same sandbox as the real world (or an imaginary one), legibility and discoverability can become challenging. The potential risks for AR and VR content are in many regards a fusion of the problems found in both digital signage and locational content, encompassing both physical placement and visual perspective, especially when it comes to legibility:

    • Content visibility. Is the AR or VR overlay too transparent to comfortably read the copy or view the image contained therein, or is it so opaque that it obscures its surroundings? AR and VR content must coexist gracefully with its exterior, and the two must enhance rather than obfuscate each other. Does the way your content is delivered compromise a user’s feeling of immersion in the environment behind it?
    • Content perspective. Unless you’re limited to a smartphone or similar handheld device, many AR and VR overlays, especially in immersive headsets, don’t display content or media as an immobile rectangular box, as it defeats the purpose of the illusion and can be jarring to users as they adjust their field of vision, breaking them out of the fantasy you’re hoping to create. For this reason, your AR or VR experience must not only dictate how environments and objects are angled and lit but also how the content associated with them is perceived. Is your content readable from various angles and points in the AR view or VR world?

    When it comes to discoverability of your AR and VR content, issues like accuracy in machine vision and triangulation of your user’s location and orientation become much more important:

    • Machine vision. Most relevantly for AR content, if your copy or media is predicated on machine vision that perceives an object by identifying it according to certain characteristics, how accurate is that prediction? Does some content go undiscovered because certain objects go undetected in your AR-enabled device?
    • Location accuracy. If your content relies on the user’s current location and orientation in relation to some point in space, as is common in both AR and VR content use cases, how accurately do devices ensure correct delivery at just the right time and place? Are the ranges within which content is accessible too limited, leading to flashes of content as you take a step to the left or right? Are there locations that simply can’t be reached, leading to forever-siloed copy or media?

    Due to the intersection of technical considerations and design concerns, AR and VR content, like voice content and indeed other forms of immersive content, requires a concerted effort across multiple teams to ensure resources are delivered not just legibly but also discoverably.

    Usability and accessibility in AR and VR content

    Out of all the forms of immersive content we’ve covered so far, AR and VR content is possibly the medium that demands the most assiduously crafted solutions in accessibility testing and usability testing. Because AR and VR content, especially in headsets or wearable devices, requires motion through real or imagined space, its impact on accessibility cannot be overstated. Adding a third dimension—and arguably, a fourth: time—to our perception of content requires attention not only to how content is accessed but also all the other elements that comprise a fully immersive visual experience.

    VR headsets induce motion sickness in many individuals. Poorly implemented transitions—states in quick succession where content is visible, then invisible, then visible again—can trigger epileptic seizures. Finally, users moving quickly through spaces may inadvertently trigger vertigo in themselves or even collide with hazardous objects, resulting in potentially serious injuries. There’s a reason we aren’t wearing headsets outside carefully controlled environments.

    Navigable content: Content as space

    This is only the beginning of immersive content. Increasingly, we’re also toying with ideas that seemed harebrained even a few decades ago, like navigable content—copy and media that can be traversed as if the content itself were a navigable space. Imagine zooming in and out of tracts of text and stepping across glyphs like hopping between islands in a Super Mario game. Ambitious designers and developers are exploring this emerging concept of navigable content in exciting ways, both in and out of AR and VR. In many ways, truly navigable content is the endgame of how virtual reality presents information.

    Imagining an encyclopedia that we can browse like the classic 1990s opening sequence of the BBC’s Eyewitness television episodes is no longer as far-fetched as we think. Consider, for instance, Robby Leonardi’s interactive résumé, which invites you to play a character as you learn about his career, or Bruno Simon’s ambitious portfolio, where you drive an animated truck around his website. For navigable content, the risks and rewards for user experience and accessibility remain largely unexplored, just like the hazy fringes of the infinite maps VR worlds make possible.

    Conclusion

    The story of immersive content is in its early stages. As newly emerging channels for content see greater adoption, requiring us to relay resources like text and media to never-before-seen destinations like digital signage, location-enabled devices, and AR and VR overlays, the demands on our content strategy and design approaches will become both fascinating and frustrating. As seemingly fantastical new interfaces continue to emerge over the horizon, we’ll need an omnichannel content strategy to guide our own journeys as creatives and to orient the voyages of our users into the immersive.

    Content audits and effective content strategies aren’t just the domain of staid websites and boxy mobile or tablet interfaces—or even aurally rooted voice interfaces. They’re a key component of our increasingly digitized spaces, too, cornerstones of immersive experiences that beckon us to consume content where we are at any moment, unmoored from a workstation or a handheld. Because it lacks long-standing motifs of the web like context and clickable links, immersive content invites us to revisit our content with a fresh perspective. How will immersive content reinvent how we deliver information like the web did only a few decades ago, like voice has done in the past ten years?

    Only the test of time, and the allure of immersion, will tell.

  • Do You Need to Localize Your Website?

    Global markets give you access to new customers. All you need to do is inform potential buyers about your product or service. 

    Your website is a good place to introduce your product or service outside your locale. Localizing your web content sounds like the right way to reach out to the global market. Localization can bridge language barriers and, more broadly, cultural differences.

    Before we move on further with the discussion, let’s focus on the definition of “localization.” 

    What is localization?

    According to the Cambridge Dictionary, localization (as a marketing term) is “the process of making a product or service more suitable for a particular country, area, etc.,” while translation is “something that is translated, or the process of translating something, from one language to another.” 

    In practice, the difference can be a little blurred. While it’s true that localization includes both language and non-language aspects, most cultural adjustments in the localization process are done through the language. Hence, the two terms are often interchangeable. 

    Good translators will not simply find an equivalent of a word in another language. They will actively research their materials and have an in-depth understanding of the languages they work in.

    Depending on the situation, they may or may not convert measurement units and date formats. Technical guide books may need accurate unit conversion, but changing “Fahrenheit 451” to “Celsius 233” would be simply awkward. A good translator will suggest what to change and what to leave as it is. 
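    For reference, the arithmetic behind that example is simple; here is a minimal sketch in TypeScript, just to show where the 233 comes from:

      // Fahrenheit to Celsius: C = (F - 32) * 5/9
      function fahrenheitToCelsius(f: number): number {
        return ((f - 32) * 5) / 9;
      }

      console.log(Math.round(fahrenheitToCelsius(451))); // 233 — hence "Celsius 233"

    The point stands: the conversion is trivial to compute, but whether to apply it is an editorial judgment.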

    Some people call this conversion process “localization.” The truth is, unit conversion had been a part of translation long before the word “localization” was used to describe the process.

    When we talk about linguistic versus non-linguistic aspects of a medium, and view them as separate entities, localization and translation may look different. However, when we look at the whole process of translating the message, seeing both elements as translatable items, the terms are interchangeable. 

    In this article, the terms “localization” and “translation” will be used interchangeably. We are going to discuss how to use a website as a communication tool to gain a new market in different cultures. 

    Localization: who is it for?

    A good localization is not cheap, so it would be wise to ask yourselves several questions beforehand: 

    • Who is your audience?
    • What kind of culture do they live in?
    • What kind of problems may arise during the localization process? 

    I will explain the details below. 

    Who is your ideal audience?

    Knowing your target audience should be at the top of your business plan. 

    For some businesses—daycare services, local coffee shops, restaurants—localization is not needed, because they operate in the same region and speak the same language as their target market.

    In some cases, people who live in the same region may speak different languages. In a bilingual society, you may want to cater to speakers of both languages as a sign of respect. In a multilingual society, aim to translate into the lingua franca and/or the language used by the majority. It makes people feel seen, and it can create a positive image for your brand.

    Sometimes, website translation is required by law. In Quebec, for instance, where French is the official provincial language, you’ll need to include a French version of your website. You may also want to check what other types of linguistic experiences you need to provide.

    If your target market lives across the sea and speaks a different language, you may not have any choice but to localize. However, if those people can speak your  language, consider other aspects (cultural and/or legal) to make an informed decision on whether to translate.  

    Although website translation has many benefits, you don’t always have to do it right away, especially when your budget is tight or could be spent on something more urgent. It’s better to postpone than to launch a badly translated website: cheap translation ends up being costly. 

    If you’re legally required to launch a bilingual website but you don’t have the budget, you may want to check if you can be exempted. If you are not exempted, hire volunteers or seek government support, if possible. 

    Unless required otherwise by law, there is nothing wrong with keeping your current language for your product or service. You can maintain the relationship you’ve already built by focusing on what you have in common: a shared interest. 

    Understanding cultural and linguistic intricacies

    Say, for example, that you have a coding tutorial website. Your current audience is IT professionals—mostly college graduates. You see an opportunity to expand to India. 

    Localization is unlikely to be needed in this case, as most Indian engineers have a good grasp of English. So, instead of doing a web translation project, you can use your money to improve or develop a new product or service for your Indian audience. Maybe you want to set up a workshop or a meetup in India. Or a bootcamp retreat in the country. 

    You can achieve this by focusing on the similarities you have with your audience. 

    The same rule applies to other countries where English language is commonly used by IT professionals. In the developing world, where English is rarely used, some self-taught programmers become “good hackers” to earn some money. You may wonder how, despite their lack of English skill, they can learn programming.

    There’s an explanation for it. 

    There are two types of language skills: passive (listening, reading) and active (speaking, writing). Passive language skills are usually learned first. Active language skills are developed later. You learn to speak by listening, and learn to write by reading. You go through this process as a child and, again, when you learn a new language as an adult. (This is not to confuse language acquisition with language learning, but to note that the process is relatively the same.) 

    As most free IT course materials are available online in English, some programmers may have to adapt and study English (passively) as they go. They may not be considered “fluent” in a formal way, but it doesn’t mean they lack the ability to grasp the language. They may not be able to speak or write perfectly, but they can understand technical texts. 

    In short, passive and active language skills can grow at different speeds. This fact points you to a new potential audience: people who can understand English, but only passively. 

    If your product is in a text format, translation won’t be necessary for this type of audience. If it’s an audio or video format, you may need to add subtitles, since native English speakers speak in so many different accents and at various speeds. Captioning will also help the hard of hearing. It may be required by regional or national accessibility legislation too. And it’s the right thing to do.

    One might argue that if these people can understand English, they will understand the text better in their native tongue. 

    Well, if all the programs you’re using or referring to are available in their native language version, it may not be a problem. But in reality, this is often not the case. 

    Linguistic consistency helps programmers work faster. And this alone should trump the presumed ease that comes with translation. 

    Some problems with localization 

    I was once involved in a global internet company’s localization project in Indonesia. 

    Indonesian SMEs mostly speak Indonesian, since they mainly serve the domestic market. So it was the right decision to target Indonesian SMEs using the Indonesian language. 

    The company had the budget to target Indonesia’s market of 58 million SMEs, and there weren’t too many competitors yet. I think the localization plan was justified. But even with this generally well-thought-out plan, there were some problems in its execution. 

    The materials were filled with jargon and annoying outlinks. You could not just read an instruction until it was completed, because after a few words, you would be confronted with a “smart term.” Now to understand this smart term, you would have to follow a link that would take you to a separate page that was supposed to explain everything, but in that page you would find more smart terms that you’d need to click. At this point, the scent of information would have grown cold, and you’d likely have forgotten what you were reading or why. 

    Small business owners are among the busiest folks you can find. They do almost everything by themselves. They would not waste their time trying to read pages of instructions that throw them right and left. 

    Language-wise, the instructions could have been simplified. Design-wise, a hover/focus pop-up containing a brief definition or description could have been used to explain special terms. 

    I agree pop-ups can be distracting, but in terms of ease, for this use case, they would have worked far better than outlinks. There are some ways to improve hover/focus pop-ups to make them more readable. 

    However, if the content of those pop-ups (definition, description, etc.) cannot be brief, it is wiser to write it down as a separate paragraph. 

    In my client’s case, they could have started each instruction by defining those special terms. The definitions ought to be written on one page to reduce the time spent clicking away and returning to the intended page. This solution can also be applied when a definition is too long to fit inside a hover/focus bubble. 

    The text problem, in my client’s case, came from the source language, and localization then carried it over into the target language. The problem could have been solved at the source level, but by the time localization began, it was too late. 

    Transcreation, i.e., “taking a concept in one language and completely recreating it in another language,” doesn’t solve a problem like this because the issue is more technical than linguistic. Translators would still have to adjust their work to the given environment. They’d still have to retain all the links and translate all jargon-laden content. 

    The company should have hired a local writer to rewrite the content in the target language. It would have worked better. They didn’t take this route for a reason: namely, those “smart terms” were used as keywords. So as much as we hated them, we had to keep them there.  

    How to prepare a web localization project

    Let’s say you have considered everything. You’ve learned about your target audience, how your product will solve their problem, and that you have the budget to reach out to them. Naturally, you want to reach them now before your competitors do. 

    Now you can proceed with your web localization project plan. 

    One thing I want to repeat is that localization will transfer any errors you have in your original content to the translated pages. So you’ll need to do some content pre-checks before starting a web translation project. It will be cheaper to fix the problems before the translation project commences. 

    Pre-localization checks should include assessing the text you intend to translate. Ask someone outside the team to read the text and ask them to give their feedback. It’s even better if that someone represents the target audience. 

    Then make corrections, if need be. Use as little jargon as possible. Let readers focus on one article with no interruption. 

    Some companies like to coin new terms to create keywords that will lead people to their sites. This can be a smart move, and it is arguably good for search engine optimization. But if you want to build rapport with your audience, you must make your message clear and understandable. Clear communication, not the invention of new words, should be your priority. 

    Following this course of action might mean sacrificing keywords for clarity, but it also promises a lower bounce rate since visitors will stay longer on your site. After all, people are more likely to read your writing to the end if they are not being frustrated by difficult terms.

    Once your text is ready, you can start your localization project. You can hire a language agency or build your own team. 

    If you have a lot of content, it may be wise to outsource your project to a language agency. Doing so can save you time and money. An outside specialist consultancy will have the technology and skills to work on various types of localization projects. They can also translate your website to different languages at once. 

    As an alternative, you might directly hire freelance editors and translators to work on your project. Depending on many factors, this might end up less or more expensive than hiring an agency. 

    Make sure that the translators you hire, whether directly or through an agency, have relevant experience. If your text is about marketing, for instance, the translators and editors must be experts in this field. This is to make sure they can get your message across. 

    Most translation tools used today can retain sentence formatting, links, and HTML code, so you don’t need to worry about these. 

    Focus on the message you want to carry to your target audience. Be sensitive about cultural remarks and be careful about any potential misunderstanding caused by your translation. Consult with your language team about certain phrases that may become problematic when translated. Pick your words carefully. Choose the right expressions. 

    If you localize a website, be sure to provide customer support in the target language as well. This allows you to reply to customers directly, rather than having to wait for a translator to get involved.  

    In summary, don’t be hasty when doing a web localization/translation project. There are a lot of things to consider beforehand. A well-prepared plan will yield better results. A good-quality translation will not only bridge the language gap, but also build trust and solidify your brand image in the minds of your target audience.

  • Human-Readable JavaScript: A Tale of Two Experts

    Everyone wants to be an expert. But what does that even mean? Over the years I’ve seen two types of people who are referred to as “experts.” Expert 1 is someone who knows every tool in the language and makes sure to use every bit of it, whether it helps or not. Expert 2 also knows every piece of syntax, but they’re pickier about what they employ to solve problems, considering a number of factors, both code-related and not. 

    Can you take a guess at which expert we want working on our team? If you said Expert 2, you’d be right. They’re a developer focused on delivering readable code—lines of JavaScript others can understand and maintain. Someone who can make the complex simple. But “readable” is rarely definitive—in fact, it’s largely based on the eyes of the beholder. So where does that leave us? What should experts aim for when writing readable code? Are there clear right and wrong choices? The answer is, it depends.

    The obvious choice

    In order to improve developer experience, TC39 has been adding lots of new features to ECMAScript in recent years, including many proven patterns borrowed from other languages. One such addition, added in ES2019, is Array.prototype.flat(). It takes a depth argument, or Infinity, and flattens an array. If no argument is given, the depth defaults to 1.

    Prior to this addition, we needed the following syntax to flatten an array to a single level.

    let arr = [1, 2, [3, 4]];
    
    [].concat.apply([], arr);
    // [1, 2, 3, 4]

    When we added flat(), that same functionality could be expressed using a single, descriptive function.

    arr.flat();
    // [1, 2, 3, 4]

    Is the second line of code more readable? The answer is emphatically yes. In fact, both experts would agree.

    Not every developer is going to be aware that flat() exists. But they don’t need to because flat() is a descriptive verb that conveys the meaning of what is happening. It’s a lot more intuitive than concat.apply().

    This is the rare case where there is a definitive answer to the question of whether new syntax is better than old. Both experts, each of whom is familiar with the two syntax options, will choose the second. They’ll choose the shorter, clearer, more easily maintained line of code.
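
    As an aside, the depth argument mentioned earlier works as you’d expect. A quick sketch (the nested array is just a placeholder):

    const nested = [1, [2, [3, [4]]]];

    nested.flat();         // [1, 2, [3, [4]]]
    nested.flat(2);        // [1, 2, 3, [4]]
    nested.flat(Infinity); // [1, 2, 3, 4]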

    But choices and trade-offs aren’t always so decisive.

    The gut check

    The wonder of JavaScript is that it’s incredibly versatile. There is a reason it’s all over the web. Whether you think that’s a good or bad thing is another story.

    But with that versatility comes the paradox of choice. You can write the same code in many different ways. How do you determine which way is “right”? You can’t even begin to make a decision unless you understand the available options and their limitations.

    Let’s use functional programming with map() as the example. I’ll walk through various iterations that all yield the same result.

    This is the tersest version of our map() examples. It uses the fewest characters, all fit into one line. This is our baseline.

    const arr = [1, 2, 3];
    let multipliedByTwo = arr.map(el => el * 2);
    // multipliedByTwo is [2, 4, 6]

    This next example adds only two characters: parentheses. Is anything lost? How about gained? Does it make a difference that a function with more than one parameter will always need to use the parentheses? I’d argue that it does. There is little to no detriment  in adding them here, and it improves consistency when you inevitably write a function with multiple parameters. In fact, when I wrote this, Prettier enforced that constraint; it didn’t want me to create an arrow function without the parentheses.

    let multipliedByTwo = arr.map((el) => el * 2);

    Let’s take it a step further. We’ve added curly braces and a return. Now this is starting to look more like a traditional function definition. Right now, it may seem like overkill to have a keyword as long as the function logic. Yet, if the function is more than one line, this extra syntax is again required. Do we presume that we will not have any other functions that go beyond a single line? That seems dubious.

    let multipliedByTwo = arr.map((el) => {
      return el * 2;
    });

    Next we’ve removed the arrow function altogether. We’re using the same syntax as before, but we’ve swapped out for the function keyword. This is interesting because there is no scenario in which this syntax won’t work; no number of parameters or lines will cause problems, so consistency is on our side. It’s more verbose than our initial definition, but is that a bad thing? How does this hit a new coder, or someone who is well versed in something other than JavaScript? Is someone who knows JavaScript well going to be frustrated by this syntax in comparison?

    let multipliedByTwo = arr.map(function(el) {
      return el * 2;
    });

    Finally we get to the last option: passing just the function. And timesTwo can be written using any syntax we like. Again, there is no scenario in which passing the function name causes a problem. But step back for a moment and think about whether or not this could be confusing. If you’re new to this codebase, is it clear that timesTwo is a function and not an object? Sure, map() is there to give you a hint, but it’s not unreasonable to miss that detail. How about the location of where timesTwo is declared and initialized? Is it easy to find? Is it clear what it’s doing and how it’s affecting this result? All of these are important considerations.

    const timesTwo = (el) => el * 2;
    let multipliedByTwo = arr.map(timesTwo);

    As you can see, there is no obvious answer here. But making the right choice for your codebase means understanding all the options and their limitations. And knowing that consistency requires parentheses and curly braces and return keywords.

    There are a number of questions you have to ask yourself when writing code. Questions of performance are typically the most common. But when you’re looking at code that is functionally identical, your determination should be based on humans—how humans consume code.

    Maybe newer isn’t always better

    So far we’ve found a clear-cut example of where both experts would reach for the newest syntax, even if it’s not universally known. We’ve also looked at an example that poses a lot of questions but not as many answers.

    Now it’s time to dive into code that I’ve written before...and removed. This is code that made me the first expert, using a little-known piece of syntax to solve a problem to the detriment of my colleagues and the maintainability of our codebase.

    Destructuring assignment lets you unpack values from objects (or arrays). It typically looks something like this.

    const {node} = exampleObject;

    It initializes a variable and assigns it a value all in one line. But it doesn’t have to.

    let node
    ;({node} = exampleObject)

    The last line of code assigns a variable to a value using destructuring, but the variable declaration takes place one line before it. It’s not an uncommon thing to want to do, but many people don’t realize you can do it.

    But look at that code closely. It forces an awkward semicolon for code that doesn’t use semicolons to terminate lines. It wraps the command in parentheses and adds the curly braces; it’s entirely unclear what this is doing. It’s not easy to read, and, as an expert, it shouldn’t be in code that I write.

    let node
    node = exampleObject.node

    This code solves the problem. It works, it’s clear what it does, and my colleagues will understand it without having to look it up. With the destructuring syntax, just because I can doesn’t mean I should.

    Code isn’t everything

    As we’ve seen, the Expert 2 solution is rarely obvious based on code alone; yet there are still clear distinctions between which code each expert would write. That’s because code is for machines to read and humans to interpret. So there are non-code factors to consider!

    The syntax choices you make for a team of JavaScript developers are different from those you should make for a team of polyglots who aren’t steeped in the minutiae. 

    Let’s take spread vs. concat() as an example.

    Spread was added to ECMAScript a few years ago, and it’s enjoyed wide adoption. It’s sort of a utility syntax in that it can do a lot of different things. One of them is concatenating a number of arrays.

    const arr1 = [1, 2, 3];
    const arr2 = [9, 11, 13];
    const nums = [...arr1, ...arr2];

    As powerful as spread is, it isn’t a very intuitive symbol. So unless you already know what it does, it’s not super helpful. While both experts may safely assume a team of JavaScript specialists is familiar with this syntax, Expert 2 will probably question whether that’s true of a team of polyglot programmers. Instead, Expert 2 may select the concat() method, as it’s a descriptive verb that you can probably understand from the context of the code. 

    This code snippet gives us the same nums result as the spread example above.

    const arr1 = [1, 2, 3];
    const arr2 = [9, 11, 13];
    const nums = arr1.concat(arr2);

    And that’s but one example of how human factors influence code choices. A codebase that’s touched by a lot of different teams, for example, may have to hold more stringent standards that don’t necessarily keep up with the latest and greatest syntax. Then you move beyond the main source code and consider other factors in your tooling chain that make life easier, or harder, for the humans who work on that code. There is code that can be structured in a way that’s hostile to testing. There is code that backs you into a corner for future scaling or feature addition. There is code that’s less performant, doesn’t handle different browsers, or isn’t accessible. All of these factor into the recommendations Expert 2 makes.

    Expert 2 also considers the impact of naming. But let’s be honest, even they can’t get that right most of the time.

    Conclusion

    Experts don’t prove themselves by using every piece of the spec; they prove themselves by knowing the spec well enough to deploy syntax judiciously and make well-reasoned decisions. This is how experts become multipliers—how they make new experts.

    So what does this mean for those of us who consider ourselves experts or aspiring experts? It means that writing code involves asking yourself a lot of questions. It means considering your developer audience in a real way. The best code you can write is code that accomplishes something complex, but is inherently understood by those who examine your codebase.

    And no, it’s not easy. And there often isn’t a clear-cut answer. But it’s something you should consider with every function you write.

  • Now THAT’S What I Call Service Worker!

    The Service Worker API is the Dremel of the web platform. It offers incredibly broad utility while also yielding resiliency and better performance. If you’ve not used Service Worker yet—and you couldn’t be blamed if so, as it hasn’t seen wide adoption as of 2020—it goes something like this:

    1. On the initial visit to a website, the browser registers what amounts to a client-side proxy powered by a comparably paltry amount of JavaScript that—like a Web Worker—runs on its own thread.
    2. After the Service Worker’s registration, you can intercept requests and decide how to respond to them in the Service Worker’s fetch() event.

    What you decide to do with requests you intercept is a) your call and b) depends on your website. You can rewrite requests, precache static assets during install, provide offline functionality, and—as will be our eventual focus—deliver smaller HTML payloads and better performance for repeat visitors.
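
    If you haven’t seen it before, the registration mentioned in step 1 is a one-liner in your page’s JavaScript. A minimal sketch (assuming the Service Worker file lives at /sw.js) looks like this:

    // Register the Service Worker after the page has loaded, so that
    // registration doesn't compete with more critical startup work.
    if ("serviceWorker" in navigator) {
      window.addEventListener("load", () => {
        navigator.serviceWorker.register("/sw.js");
      });
    }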

    Getting out of the woods

    Weekly Timber is a client of mine that provides logging services in central Wisconsin. For them, a fast website is vital. Their business is located in Waushara County, and as in many rural stretches of the United States, network quality and reliability there aren’t great. 

    Figure 1. A wireless coverage map of Waushara County, Wisconsin. The tan areas of the map indicate downlink speeds between 3 and 9.99 Mbps. Red areas are even slower, while the pale and dark blue areas are faster.

    Wisconsin has farmland for days, but it also has plenty of forests. When you need a company that cuts logs, Google is probably your first stop. How fast a given logging company’s website is might be enough to get you looking elsewhere if you’re left waiting too long on a crappy network connection.

    I initially didn’t believe a Service Worker was necessary for Weekly Timber’s website. After all, if things were plenty fast to start with, why complicate things? On the other hand, knowing that my client services not just Waushara County, but much of central Wisconsin, even a barebones Service Worker could be the kind of progressive enhancement that adds resilience in the places it might be needed most.

    The first Service Worker I wrote for my client’s website—which I’ll refer to henceforth as the “standard” Service Worker—used three well-documented caching strategies:

    1. Precache CSS and JavaScript assets for all pages when the Service Worker is installed; the Service Worker itself is registered once the window’s load event fires.
    2. Serve static assets out of CacheStorage if available. If a static asset isn’t in CacheStorage, retrieve it from the network, then cache it for future visits.
    3. For HTML assets, hit the network first and place the HTML response into CacheStorage. If the network is unavailable the next time the visitor arrives, serve the cached markup from CacheStorage.
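
    As a rough sketch (not my client’s actual code), strategies 2 and 3 can be expressed in a fetch() handler along these lines:

    const cacheName = "fancy_cache_name_here";

    function fetchAndCache (request) {
      return fetch(request).then(response => {
        const copy = response.clone();

        return caches.open(cacheName)
          .then(cache => cache.put(request, copy))
          .then(() => response);
      });
    }

    self.addEventListener("fetch", event => {
      const { request } = event;

      if (request.mode === "navigate") {
        // Strategy 3: network first for HTML, falling back to CacheStorage when offline.
        event.respondWith(fetchAndCache(request).catch(() => caches.match(request)));
      } else if (request.method === "GET") {
        // Strategy 2: cache first for static assets, caching network responses for next time.
        event.respondWith(caches.match(request).then(cached => {
          return cached || fetchAndCache(request);
        }));
      }
    });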

    These are neither new nor special strategies, but they provide two benefits:

    • Offline capability, which is handy when network conditions are spotty.
    • A performance boost for loading static assets.

    That performance boost translated to a 42% and 48% decrease in the median time to First Contentful Paint (FCP) and Largest Contentful Paint (LCP), respectively. Better yet, these insights are based on Real User Monitoring (RUM). That means these gains aren’t just theoretical, but a real improvement for real people.

    Figure 2. A breakdown of request/response timings depicted in Chrome’s developer tools. The request is for a static asset from CacheStorage. Because the Service Worker doesn’t need to access the network, it takes about 23 milliseconds to “download” the asset from CacheStorage.

    This performance boost is from bypassing the network entirely for static assets already in CacheStorage—particularly render-blocking stylesheets. A similar benefit is realized when we rely on the HTTP cache, except that the FCP and LCP improvements I just described are measured against pages with a primed HTTP cache but no installed Service Worker. 

    If you’re wondering why CacheStorage and the HTTP cache aren’t equal, it’s because the HTTP cache—at least in some cases—may still involve a trip to the server to verify asset freshness. Cache-Control’s immutable flag gets around this, but immutable doesn’t have great support yet. A long max-age value works, too, but the combination of Service Worker API and CacheStorage gives you a lot more flexibility.
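
    For reference, a long-lived Cache-Control header of that sort might look something like this (the max-age value is just an example):

    Cache-Control: public, max-age=31536000, immutable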

    Details aside, the takeaway is that the simplest and most well-established Service Worker caching practices can improve performance, potentially more than what well-configured Cache-Control headers can provide. Beyond that, Service Worker is an incredible technology with far more possibilities. It’s possible to go farther, and I’ll show you how. 

    A better, faster Service Worker

    The web loves itself some “innovation,” which is a word we equally love to throw around. To me, true innovation isn’t when we create new frameworks or patterns solely for the benefit of developers, but whether those inventions benefit people who end up using whatever it is we slap up on the web. The priority of constituencies is a thing we ought to respect. Users above all else, always.

    The Service Worker API’s innovation space is considerable. How you work within that space can have a big effect on how the web is experienced. Things like navigation preload and ReadableStream have taken Service Worker from great to killer. We can do the following with these new capabilities, respectively:

    • Reduce Service Worker latency by parallelizing Service Worker startup time and navigation requests.
    • Stream content in from CacheStorage and the network.

    Moreover, we’re going to combine these capabilities and pull out one more trick: precache header and footer partials, then combine them with content partials from the network. This not only reduces how much data we download from the network, but it also improves perceptual performance for repeat visits. That’s innovation that helps everyone.

    Grizzled, I turn to you and say “let’s do this.”

    Laying the groundwork

    If the idea of combining precached header and footer partials with network content on the fly seems like a Single Page Application (SPA), you’re not far off. Like an SPA, you’ll need to apply the “app shell” model to your website. Only instead of a client-side router plowing content into one piece of minimal markup, you have to think of your website as three separate parts:

    • The header.
    • The content.
    • The footer.

    For my client’s website, that looks like this:

    A screenshot of the Weekly Timber website color coded to delineate each partial that makes up the page. The header is color coded as blue, the footer as red, and the main content in between as yellow.
    Figure 3. A color coding of the Weekly Timber website’s different partials. The Footer and Header partials are stored in CacheStorage, while the Content partial is retrieved from the network unless the user is offline.

    The thing to remember here is that the individual partials don’t have to be valid markup in the sense that all tags need to be closed within each partial. The only thing that counts in the final sense is that the combination of these partials must be valid markup.

    To start, you’ll need to precache separate header and footer partials when the Service Worker is installed. For my client’s website, these partials are served from the /partial-header and /partial-footer pathnames:

    self.addEventListener("install", event => {
      const cacheName = "fancy_cache_name_here";
      const precachedAssets = [
        "/partial-header",  // The header partial
        "/partial-footer",  // The footer partial
        // Other assets worth precaching
      ];
    
      event.waitUntil(caches.open(cacheName).then(cache => {
        return cache.addAll(precachedAssets);
      }).then(() => {
        return self.skipWaiting();
      }));
    });

    Every page must be fetchable as a content partial minus the header and footer, as well as a full page with the header and footer. This is key because the initial visit to a page won’t be controlled by a Service Worker. Once the Service Worker takes over, then you serve content partials and assemble them into complete responses with the header and footer partials from CacheStorage.

    If your site is static, this means generating a whole other mess of markup partials that you can rewrite requests to in the Service Worker’s fetch() event. If your website has a back end—as is the case with my client—you can use an HTTP request header to instruct the server to deliver full pages or content partials.

    The hard part is putting all the pieces together—but we’ll do just that.

    Putting it all together

    Writing even a basic Service Worker can be challenging, but things get real complicated real fast when assembling multiple responses into one. One reason for this is that in order to avoid the Service Worker startup penalty, we’ll need to set up navigation preload.

    Implementing navigation preload

    Navigation preload addresses the problem of Service Worker startup time, which delays navigation requests to the network. The last thing you want to do with a Service Worker is hold up the show.

    Navigation preload must be explicitly enabled. Once enabled, the Service Worker won’t hold up navigation requests during startup. Navigation preload is enabled in the Service Worker’s activate event:

    self.addEventListener("activate", event => {
      const cacheName = "fancy_cache_name_here";
      const preloadAvailable = "navigationPreload" in self.registration;
    
      event.waitUntil(caches.keys().then(keys => {
        return Promise.all([
          // Spread the deletion promises so Promise.all actually waits for them
          ...keys.filter(key => {
            return key !== cacheName;
          }).map(key => {
            return caches.delete(key);
          }),
          self.clients.claim(),
          preloadAvailable ? self.registration.navigationPreload.enable() : true
        ]);
      }));
    });

    Because navigation preload isn’t supported everywhere, we have to do the usual feature check, which we store in the above example in the preloadAvailable variable.

    Additionally, we need to use Promise.all() to resolve multiple asynchronous operations before the Service Worker activates. This includes pruning those old caches, as well as waiting for both clients.claim() (which tells the Service Worker to assert control immediately rather than waiting until the next navigation) and navigation preload to be enabled.

    A ternary operator is used to enable navigation preload in supporting browsers and avoid throwing errors in browsers that don’t. If preloadAvailable is true, we enable navigation preload. If it isn’t, we pass a Boolean that won’t affect how Promise.all() resolves.

    With navigation preload enabled, we need to write code in our Service Worker’s fetch() event handler to make use of the preloaded response:

    self.addEventListener("fetch", event => {
      const { request } = event;
    
      // Static asset handling code omitted for brevity
      // ...
    
      // Check if this is a request for a document
      if (request.mode === "navigate") {
        const networkContent = Promise.resolve(event.preloadResponse).then(response => {
          if (response) {
            addResponseToCache(request, response.clone());
    
            return response;
          }
    
          return fetch(request.url, {
            headers: {
              "X-Content-Mode": "partial"
            }
          }).then(response => {
            addResponseToCache(request, response.clone());
    
            return response;
          });
        }).catch(() => {
          return caches.match(request.url);
        });
    
        // More to come...
      }
    });

    Though this isn’t the entirety of the Service Worker’s fetch() event code, there’s a lot that needs explaining:

    1. The preloaded response is available in event.preloadResponse. However, as Jake Archibald notes, the value of event.preloadResponse will be undefined in browsers that don’t support navigation preload. Therefore, we must pass event.preloadResponse to Promise.resolve() to avoid compatibility issues.
    2. We adapt in the resulting then callback. If event.preloadResponse is supported, we use the preloaded response and add it to CacheStorage via an addResponseToCache() helper function (sketched just after this list). If not, we send a fetch() request to the network to get the content partial using a custom X-Content-Mode header with a value of partial.
    3. Should the network be unavailable, we fall back to the most recently accessed content partial in CacheStorage.
    4. The response—regardless of where it was procured from—is assigned to a variable named networkContent that we use later.
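
    The addResponseToCache() helper isn’t shown in this article. A minimal version (assuming the same cache name used at install time) might look like this:

    function addResponseToCache (request, response) {
      return caches.open("fancy_cache_name_here").then(cache => {
        return cache.put(request, response);
      });
    }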

    How the content partial is retrieved is tricky. With navigation preload enabled, a special Service-Worker-Navigation-Preload header with a value of true is added to navigation requests. We then work with that header on the back end to ensure the response is a content partial rather than the full page markup.

    However, because navigation preload isn’t available in all browsers, we send a different header in those scenarios. In Weekly Timber’s case, we fall back to a custom X-Content-Mode header. In my client’s PHP back end, I’ve created some handy constants:

    <?php
    
    // Is this a navigation preload request?
    define("NAVIGATION_PRELOAD", isset($_SERVER["HTTP_SERVICE_WORKER_NAVIGATION_PRELOAD"]) && stristr($_SERVER["HTTP_SERVICE_WORKER_NAVIGATION_PRELOAD"], "true") !== false);
    
    // Is this an explicit request for a content partial?
    define("PARTIAL_MODE", isset($_SERVER["HTTP_X_CONTENT_MODE"]) && stristr($_SERVER["HTTP_X_CONTENT_MODE"], "partial") !== false);
    
    // If either is true, this is a request for a content partial
    define("USE_PARTIAL", NAVIGATION_PRELOAD === true || PARTIAL_MODE === true);
    
    ?>

    From there, the USE_PARTIAL constant is used to adapt the response:

    <?php
    
    if (USE_PARTIAL === false) {
      require_once("partial-header.php");
    }
    
    require_once("includes/home.php");
    
    if (USE_PARTIAL === false) {
      require_once("partial-footer.php");
    }
    
    ?>

    The thing to be hip to here is that you should specify a Vary header for HTML responses to take the Service-Worker-Navigation-Preload (and in this case, the X-Content-Mode header) into account for HTTP caching purposes—assuming you’re caching HTML at all, which may not be the case for you.
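
    In this case, that Vary header would simply list both request headers, something like:

    Vary: Service-Worker-Navigation-Preload, X-Content-Mode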

    With our handling of navigation preloads complete, we can then move onto the work of streaming content partials from the network and stitching them together with the header and footer partials from CacheStorage into a single response that the Service Worker will provide.

    Streaming partial content and stitching together responses

    While the header and footer partials will be available almost instantaneously because they’ve been in CacheStorage since the Service Worker’s installation, it’s the content partial we retrieve from the network that will be the bottleneck. It’s therefore vital that we stream responses so we can start pushing markup to the browser as quickly as possible. ReadableStream can do this for us.

    This ReadableStream business is a mind-bender. Anyone who tells you it’s “easy” is whispering sweet nothings to you. It’s hard. After I wrote my own function to merge streamed responses and messed up a critical step—which ended up not improving page performance, mind you—I modified Jake Archibald’s mergeResponses() function to suit my needs:

    async function mergeResponses (responsePromises) {
      const readers = responsePromises.map(responsePromise => {
        return Promise.resolve(responsePromise).then(response => {
          return response.body.getReader();
        });
      });
    
      let doneResolve,
          doneReject;
    
      const done = new Promise((resolve, reject) => {
        doneResolve = resolve;
        doneReject = reject;
      });
    
      const readable = new ReadableStream({
        async pull (controller) {
          const reader = await readers[0];
    
          try {
            const { done, value } = await reader.read();
    
            if (done) {
              readers.shift();
    
              if (!readers[0]) {
                controller.close();
                doneResolve();
    
                return;
              }
    
              return this.pull(controller);
            }
    
            controller.enqueue(value);
          } catch (err) {
            doneReject(err);
            throw err;
          }
        },
        cancel () {
          doneResolve();
        }
      });
    
      const headers = new Headers();
      headers.append("Content-Type", "text/html");
    
      return {
        done,
        response: new Response(readable, {
          headers
        })
      };
    }

    As usual, there’s a lot going on:

    1. mergeResponses() accepts an argument named responsePromises: an array of promises, each resolving to a Response object returned from either a navigation preload, fetch(), or caches.match(). Assuming the network is available, this will always contain three responses: two from caches.match() and (hopefully) one from the network.
    2. Before we can stream the responses in the responsePromises array, we must map responsePromises to an array containing one reader for each response. Each reader is used later in a ReadableStream() constructor to stream each response’s contents.
    3. A promise named done is created. In it, we assign the promise’s resolve() and reject() functions to the external variables doneResolve and doneReject, respectively. These will be used in the ReadableStream() to signal whether the stream is finished or has hit a snag.
    4. The new ReadableStream() instance is created with a name of readable. As responses stream in from CacheStorage and the network, their contents will be appended to readable.
    5. The stream’s pull() method streams the contents of the first response in the array. If the stream isn’t canceled somehow, the reader for each response is discarded by calling the readers array’s shift() method when the response is fully streamed. This repeats until there are no more readers to process.
    6. When all is done, the merged stream of responses is returned as a single response, and we return it with a Content-Type header value of text/html.

    This is much simpler if you use TransformStream, but depending on when you read this, that may not be an option for every browser. For now, we’ll have to stick with this approach.
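
    For the curious, here’s a rough sketch of what a TransformStream-based merge could look like. This is an assumption on my part, not the code running on my client’s site:

    function mergeResponsesWithTransformStream (responsePromises) {
      const { readable, writable } = new TransformStream();

      // Pipe each response into the writable side, one after another, and
      // only close the stream once the final response has finished piping.
      const done = responsePromises.reduce((chain, responsePromise) => {
        return chain
          .then(() => Promise.resolve(responsePromise))
          .then(response => response.body.pipeTo(writable, { preventClose: true }));
      }, Promise.resolve()).then(() => writable.getWriter().close());

      return {
        done,
        response: new Response(readable, {
          headers: { "Content-Type": "text/html" }
        })
      };
    }

    The sequencing is the important part: each partial must finish piping before the next one starts, or the merged markup would interleave.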

    Now let’s revisit the Service Worker’s fetch() event from earlier, and apply the mergeResponses() function:

    self.addEventListener("fetch", event => {
      const { request } = event;
    
      // Static asset handling code omitted for brevity
      // ...
    
      // Check if this is a request for a document
      if (request.mode === "navigate") {
        // Navigation preload/fetch() fallback code omitted.
        // ...
    
        // mergeResponses() is async and respondWith() must be called while the
        // event is still being dispatched, so we hand both methods promises
        // instead of awaiting the merged result here.
        const merged = mergeResponses([
          caches.match("/partial-header"),
          networkContent,
          caches.match("/partial-footer")
        ]);

        event.waitUntil(merged.then(({ done }) => done));
        event.respondWith(merged.then(({ response }) => response));
      }
    });

    At the end of the fetch() event handler, we pass the header and footer partials from CacheStorage to the mergeResponses() function, and pass the result to the fetch() event’s respondWith() method, which serves the merged response on behalf of the Service Worker.

    Are the results worth the hassle?

    This is a lot of stuff to do, and it’s complicated! You might mess something up, or maybe your website’s architecture isn’t well-suited to this exact approach. So it’s important to ask: are the performance benefits worth the work? In my view? Yes! The synthetic performance gains aren’t bad at all:

    Figure 4. A bar chart of median FCP and LCP synthetic performance data across various Service Worker types for the Weekly Timber website. The no-Service-Worker and “standard” scenarios are nearly identical, while the streaming Service Worker is measurably faster for both FCP and LCP, especially FCP.

    Synthetic tests don’t measure performance for anything except the specific device and internet connection they’re performed on. Even so, these tests were conducted on a staging version of my client’s website with a low-end Nokia 2 Android phone on a throttled “Fast 3G” connection in Chrome’s developer tools. Each category was tested ten times on the homepage. The takeaways here are:

    • No Service Worker at all is slightly faster than the “standard” Service Worker (the one with simpler caching patterns than the streaming variant). Like, ever so slightly faster. This may be due to the delay introduced by Service Worker startup; however, the RUM data I’ll go over shows a different picture.
    • Both LCP and FCP are tightly coupled in scenarios where there’s no Service Worker or when the “standard” Service Worker is used. This is because the content of the page is pretty simple and the CSS is fairly small. The Largest Contentful Paint is usually the opening paragraph on a page.
    • However, the streaming Service Worker decouples FCP and LCP because the header content partial streams in right away from CacheStorage.
    • Both FCP and LCP are lower in the streaming Service Worker than in other cases.

    Figure 5. A bar chart of median FCP and LCP RUM performance data across various Service Worker types for the Weekly Timber website. Both the “standard” and streaming Service Workers beat no Service Worker at all, and the streaming Service Worker excels at FCP while being slightly slower at LCP than the “standard” one.

    The benefits of the streaming Service Worker for real users are pronounced. For FCP, we receive a 79% improvement over no Service Worker at all, and a 63% improvement over the “standard” Service Worker. The benefits for LCP are more subtle. Compared to no Service Worker at all, we realize a 41% improvement in LCP—which is incredible! However, compared to the “standard” Service Worker, LCP is a touch slower. 

    Because the long tail of performance is important, let’s look at the 95th percentile of FCP and LCP performance:

    Figure 6. A bar chart of 95th percentile FCP and LCP RUM performance data across various Service Worker types for the Weekly Timber website. Both Service Workers beat having none at all, and the streaming Service Worker comes out ahead of the “standard” one for both FCP and LCP.

    The 95th percentile of RUM data is a great place to assess the slowest experiences. In this case, we see that the streaming Service Worker confers a 40% and 51% improvement in FCP and LCP, respectively, over no Service Worker at all. Compared to the “standard” Service Worker, we see a reduction in FCP and LCP by 19% and 43%, respectively. If these results seem a bit squirrely compared to synthetic metrics, remember: that’s RUM data for you! You never know who’s going to visit your website on which device on what network.

    While both FCP and LCP are boosted by the myriad benefits of streaming, navigation preload (in Chrome’s case), and sending less markup by stitching together partials from both CacheStorage and the network, FCP is the clear winner. Perceptually speaking, the benefit is pronounced, as this video would suggest:

    Figure 7. Three WebPageTest videos of a repeat view of the Weekly Timber homepage. On the left is the page not controlled by a Service Worker, with a primed HTTP cache. On the right is the same page controlled by a streaming Service Worker, with CacheStorage primed.

    Now ask yourself this: If this is the kind of improvement we can expect on such a small and simple website, what might we expect on a website with larger header and footer markup payloads?

    Caveats and conclusions

    Are there trade-offs with this on the development side? Oh yeah.

    As Philip Walton has noted, a cached header partial means the document title must be updated in JavaScript on each navigation by changing the value of document.title. It also means you’ll need to update the navigation state in JavaScript to reflect the current page if that’s something you do on your website. Note that this shouldn’t cause indexing issues, as Googlebot crawls pages with an unprimed cache.
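
    In practice, that can be as small as a few lines of JavaScript shipped with each content partial. Here’s a sketch (the nav selector and title text are placeholders, not Weekly Timber’s actual markup):

    // The cached header partial carries a stale <title> and navigation state,
    // so each content partial corrects both when it arrives.
    document.title = "Page title goes here";

    document.querySelectorAll("nav a").forEach(link => {
      if (link.pathname === location.pathname) {
        link.setAttribute("aria-current", "page");
      } else {
        link.removeAttribute("aria-current");
      }
    });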

    There may also be some challenges on sites with authentication. For example, if your site’s header displays the current authenticated user on log in, you may have to update the header partial markup provided by CacheStorage in JavaScript on each navigation to reflect who is authenticated. You may be able to do this by storing basic user data in localStorage and updating the UI from there.

    There are certainly other challenges, but it’ll be up to you to weigh the user-facing benefits versus the development costs. In my opinion, this approach has broad applicability in applications such as blogs, marketing websites, news websites, ecommerce, and other typical use cases.

    All in all, though, it’s akin to the performance improvements and efficiency gains that you’d get from an SPA. Only the difference is that you’re not replacing time-tested navigation mechanisms and grappling with all the messiness that entails, but enhancing them. That’s the part I think is really important to consider in a world where client-side routing is all the rage.

    “What about Workbox?,” you might ask—and you’d be right to. Workbox simplifies a lot when it comes to using the Service Worker API, and you’re not wrong to reach for it. Personally, I prefer to work as close to the metal as I can so I can gain a better understanding of what lies beneath abstractions like Workbox. Even so, Service Worker is hard. Use Workbox if it suits you. As far as frameworks go, its abstraction cost is very low.

    Regardless of this approach, I think there’s incredible utility and power in using the Service Worker API to reduce the amount of markup you send. It benefits my client and all the people that use their website. Because of Service Worker and the innovation around its use, my client’s website is faster in the far-flung parts of Wisconsin. That’s something I feel good about.

    Special thanks to Jake Archibald for his valuable editorial advice, which, to put it mildly, considerably improved the quality of this article.

Search Engine Watch
Keep updated with major stories about search engine marketing and search engines as published by Search Engine Watch.
ClickZ News
Breaking news, information, and analysis.
PCWorld
  • Tested: How Nvidia Reflex can make you a better esports gamer

    “Frames win games,” Nvidia likes to say, but there’s more to esports domination than raw frame rates. How those frames get delivered matters too. Latency—the time it takes for an on-screen action to happen after you press a button—reigns supreme in the blink-and-you’re-dead competitive esports scene. If your game looks beautiful but feels sluggish, you’ll find yourself outgunned by rivals playing with crummy visual settings to increase responsiveness.

    Enter Nvidia Reflex, introduced alongside the GeForce RTX 3080 and RTX 3090. If you’ve heard of it before, you probably associate it with low-latency features being added to games like Call of Duty: Warzone, Valorant, and Fortnite. But Reflex is actually Nvidia’s overarching brand name for a wide range of new latency-obsessed tools. Yes, the Low Latency Mode being added to games is part of it, but on Tuesday, Nvidia and its partners are also rolling out blisteringly fast 360Hz G-Sync Esports monitors with Reflex Latency Analyzer built in. If you’ve invested in compatible accessories, Reflex Latency Analyzer keeps tabs on the entire pipeline from the millisecond you click your mouse to the millisecond the game renders your gun shooting, helping you identify which parts of your system are acting as a bottleneck.

  • Google Assistant’s Broadcast feature can now reach you from your phone
    Family members can now get broadcasts from Google Assistant even when they’re not near a Google smart speaker. Also: Family Bell gets a much-needed update.
  • Google will automatically enroll users in two-factor authentication soon

    Most security experts agree that two-factor authentication (2FA) is a critical part of securing your online accounts. Google agrees, but it’s taking an extra step: it’s going to automatically enroll Google account holders in two-factor authentication.

    In a way, Google sees two-factor authentication as a replacement for passwords, which Mark Risher, Google’s director of product management for identity and user security, in a statement called “the single biggest threat to your online security.” Because they’re easy to steal and hard to remember, users will end up reusing passwords. If stolen, they can be used to unlock multiple user accounts, adding to the risk.

  • Roku loses YouTube TV: What's a cord-cutter to do?
    Roku loses YouTube TV: Here are the alternatives for cord cutters
  • Best password managers: Reviews of the top products
CNN.com - RSS Channel - App Tech Section
CNN.com delivers up-to-the-minute news and information on the latest top stories, weather, entertainment, politics and more.
  • Facebook told to investigate its role in insurrection
    Facebook's independent oversight board on Wednesday upheld the company's decision to suspend former President Donald Trump. But one of the most important aspects of the decision wasn't really about the former President's social media accounts at all; instead it was a recommendation for a deeper review of the role the platform played in the spread of election conspiracy theories that, in the board's words, "exacerbated tensions that culminated in the violence in the United States on January 6."
  • IBM says it has created the world's smallest and most powerful microchip
    The semiconductor industry's constant challenge is to make microchips that are smaller, faster, more powerful and more energy efficient — simultaneously.
  • China can't stop talking about the Bill and Melinda Gates divorce
    The divorce of Bill and Melinda Gates has sent shockwaves through China, where the Microsoft co-founder has achieved a level of fame unlike almost any other Western entrepreneur.
  • I tracked my kid with Apple's Airtags to test its privacy features
    I clipped a keychain with one of Apple's tiny new Bluetooth trackers, AirTags, onto my son's book bag and waved goodbye to him on the school bus. I watched on my iPhone's Find My app as the bus stopped at a light a few blocks down from our street.
  • SpaceX lands Mars rocket prototype for the first time
    SpaceX just launched another test flight of an early Mars rocket prototype at its South Texas facility, sending the towering silver vehicle soaring up to about six miles above Earth, then putting it through a series of aerial acrobatics before re-lighting two of its engines and landing it upright back on a landing pad.
 
