News from the world of web design and SEO

I present to you syndicated news from several of the leading sites in the field of web design and SEO (search engine optimization).

A List Apart: The Full Feed
Articles for people who make web sites.
  • The FAQ as Advice Column

    Dear A List Apart,

    I have a problem that may be harming my content strategy career. In my current position, no one likes FAQs … except for me. The question-and-answer format is satisfying and efficient. Whenever I mention adding an FAQ section to a website, though, I receive numerous suggestions that I should wean myself off FAQs one question at a time or go cold turkey.

    Perhaps that is overdoing it, but sometimes I feel like defending FAQs by pen, sword, or Dothraki horde. Should I keep my addiction to myself, or should I embrace this oddity and champion a format I believe in?

    Signed,
    FAQ Fanatic

    Dear FAQ Fanatic,

    You’re not wrong: FAQs are as out of vogue as a fat footer. You’re also not alone. As an aspiring advice columnist, I’ve been wondering why the format is so unpopular even though it remains on many websites. In fact, in a recent A List Apart piece, Richard Rabil, Jr. listed the FAQ as one of many legitimate patterns of organization that you can use when writing.

    To address your query properly, I propose some soul-searching through a series of FAQs about FAQs. Let’s tackle the toughest question first.

    Can I trust FAQs?

    If you are a content strategist or information architect, chances are good you’ve been burned. Lisa Wright nails every single reason why the FAQ can be bad news. It is a poor excuse for a proper content strategy that would generate “purposeful information” across a website. For example, if you see an FAQ, you know right away that the website is duplicating content, which often leads to discrepancies.

    FAQs may also lead to a bigger design issue: accordion abuse. The typical FAQ design involves expand-and-collapse features. In theory, this makes it easier for users to scan to find what they need. But in a content migration or consolidation, I’ve seen desperate freelancers or webmasters shove entire web pages under a question just to make an old page fit a new design. If a user is coming to an FAQ for a quick-hit answer, as is often the case, imagine how horrifying it can be to expand a question and see an answer the length of David Foster Wallace’s Infinite Jest tucked underneath.

    How many times have you opened an FAQ accordion and been overwhelmed by the novella beneath?
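
    If you do reach for the expand-and-collapse pattern, note that modern HTML gives you a native disclosure widget with no script required. A minimal sketch, with illustrative question-and-answer content:

    <details>
      <summary>How do I request a transcript?</summary>
      <p>Submit the request form through the registrar’s office. Processing
      usually takes three to five business days.</p>
    </details>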

    Can the FAQ and I still be friends?

    Ah, you must be a content author. If you’re a content author on a budget, under a deadline, or both, the FAQ will become your bestie—whether you planned on it or not. In my experience, teams bust out the FAQs not because they are lazy but because they find them to be a reliable way to structure content.

    When I worked at an agency, a few of my projects were microsites that weren’t so “micro.” Some clients wanted a small site on a minimal CMS within an even more minimal timeline, but the content kept ballooning, leaving no time for true content modeling. The only way to build the content on time was to use the FAQ as a content model or spine.

    Like you, I now work with people who avoid FAQs. Since my current agency specializes in site redesigns for higher-ed clients, it’s expected that the information has more structure to begin with—and it usually does. Plus, my particular agency gives information architects and content strategists more time than the norm. From the get-go, our sitemaps, wireframes, and patterns serve as a stable foundation for the content.

    Sometimes, though, even the most stable foundations won’t prevent the appearance of an FAQ. If a content team doesn’t get enough time to inventory their content, they’ll probably encounter numerous FAQs. They’ll need to figure out a way to get that content over to the new site somehow … which means those FAQs aren’t going anywhere.

    So, do I have to quit FAQs cold turkey?

    No. The FAQ structure has held up for so long because it is a brilliant pattern. Think the Socratic method. Or the catechism. Or Usenet. Or “F.A.Q.s about F.A.Q.s.” Or—you guessed it—“Dear Prudence,” “Dear Sugar,” or any other popular advice column. Users will always have questions, and they will always want answers.

    What makes FAQs troublesome is incorrect or lazy use. Lisa Wright has already shared what not to do, but perhaps the best way to start an FAQ is to choose each question with great care. For example, advice columnists spend plenty of time selecting which questions they will answer each week. In general, readers want to see the advice columnist flexing their mental muscles to resolve the most complicated situations.

    To use FAQs correctly, start with the best content possible, and align that content with what both content authors and content consumers want. Content authors can rely on the Q&A structure to deliver quality content on a regular basis while reassuring content consumers that they are receiving the best answers.

    FAQ-appropriate content

    What is the best content for an FAQ? Thus far, I’ve discussed choosing your questions wisely and keeping your answers short (and, yes, shorter than the answers of a typical advice columnist). Since I’ve worked in higher ed, I’ve had the chance to speak with people who support admissions and enrollment, and they spend most of their time answering frequently asked questions from students and parents.

    In one stakeholder interview session with the staff of a community college, it became clear that the questions the staff handled fell into two camps: the questions people ask over and over, and the head-scratching edge cases. For example, questions about transcripts or financial aid awards are timeless. As for the edge cases, a full-time student might ask whether they can get a discount on a yoga class taken through a community education program.

    For the common content, FAQs shouldn’t repeat what’s already on the website, but they are called “frequently asked questions” for a reason. As long as you provide the content only twice—once in the FAQ and once on a relevant content page—you’re fine. Your authors shouldn’t have to manage anything past that.

    With an edge case, the question might be so specific that the answer wouldn’t have a clear home on any page—in my example, the yoga class question would straddle full-time registration and community education. Therefore, even though the off-the-wall question isn’t “frequently asked,” it can still live in the FAQ, and if the edge cases pile up (as they can in the world of higher ed), then you could shift these questions to a blog, which could provide a source of fresh content.

    I wouldn’t have known about the full-time student who wants to take a community ed class if it hadn’t emerged during the stakeholder interview. For that reason, you want to talk to customers or students and ask them what questions they’ve had in the past. If you don’t have time for that, read over user research to find out what users typically ask.

    Or use Google Search Console to look at the search queries that lead to your site, and figure out how well your site answers those questions. You may find that many of the queries leading to your site are written as questions. In fact, according to a study by Moz and Jumpshot, questions make up approximately 8% of search queries, so this research may help you populate your FAQ. And if you’re looking for inspiration, you could try a tool like Answer the Public (UK data is free to access; a monthly payment is required for other regions). Type in a keyword like “college applications,” and the tool will serve up a range of questions people have asked in their search queries.
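
    To sift question-style queries out of a search report programmatically, a rough sketch like the following would do; the query list and question words here are illustrative assumptions:

    // Keep only queries that start with a question word.
    const queries = [
      'college applications deadline',
      'how do college applications work',
      'can i apply to college as a junior',
    ];

    const questionWords = new Set([
      'how', 'what', 'why', 'when', 'where', 'who', 'which',
      'can', 'do', 'does', 'is', 'are', 'should',
    ]);

    const questions = queries.filter(
      (q) => questionWords.has(q.toLowerCase().split(/\s+/)[0])
    );

    console.log(questions);
    // ["how do college applications work", "can i apply to college as a junior"]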

    The final way to articulate what works for an FAQ is to describe what doesn’t work. If your answer begins to spin into a narrative instead of a straightforward answer, you might need to add a separate page of content to your sitemap. And if your answer starts to sound too much like marketing copy, then it belongs elsewhere on the site. FAQs exist for those who are further along in the sales process or those who are already sold. Continuing to sell to that audience in an FAQ will only annoy them.

    When you know which questions you’re going to cover, you can start to refine the language for your main audiences: authors and consumers.

    FAQs for content authors: your in-house reference desk

    A clever content author can use an FAQ as a core research document. Armed with a CMS that has a decent back-end search, a content author will have a much easier time keeping content aligned and fact-checked if the FAQ itself is treated as a trustworthy source of information.

    For that reason, what Wright calls “documentation-by-FAQ” might not be the worst situation in the world, depending on how much content you’re working with. If you actually have someone tending the FAQ like a garden, your content will always change, but you can be sure of its accuracy.

    To convince your more skeptical peers of the value of maintaining your FAQ page or database, tell them that the FAQ is a content opportunity that may save them time. Think of how delightful it is when you get your “Dear Prudence” newsletter or a podcast notification for the latest Han and Matt Know It All. Whenever you add a new question or update an existing fact, spread the word among your users. These updates can help feed the social-media-marketing content beast while proving that you want to keep users informed and engaged.

    FAQs for content consumers: give them power

    Speaking of keeping users informed and engaged, a good FAQ can help the audience even more than it helps content authors. The best way to ensure that the FAQ works for the audience is to give them more control over the questions and answers they see.

    Most FAQs, including those on higher-ed sites, chunk up the FAQs by content category, tuck answers into accordions, and stop right there. More effective FAQs, though, provide other forms of interaction. Some allow users to refine information through filters, searches, and tags so the user isn’t stuck opening and closing accordion windows.

    For example, the website for Columbia Undergraduate Admissions has a fairly standard format, but the FAQ answers are tagged, so users have another option for navigating through the information. Other higher-ed services, like the syndicated financial aid web channel FATV, answer common FAQs with videos. Changing up the format and providing text, video, and audio options help prospective students feel like they are receiving more personal attention.

    Beyond higher-ed FAQs, Amazon encourages users to vote FAQs up and down, Reddit-style, which can lead to fun interactions and enables the users to rate the quality—or humor—of the information they receive.

    Amazon’s more interactive FAQ model, in which users can vote on answers and search for questions.

    You can also remind skeptics that FAQs aren’t always what they expect. The FAQ format has experienced a renaissance in the form of our newly beloved voice gadgets. Some content creators are even using their existing FAQs as the foundation for their Alexa skills. For example, georgia.gov created an Alexa skill by transforming its “Popular Topics” FAQ database, working with Acquia to structure the Q&A format so Alexa can answer common questions from Georgia residents. When describing the project, user experience designer Rachel Hart writes:

    If you say “No” to an FAQ question, Alexa skips to the next FAQ, and the next, until you say something sounds helpful or Alexa runs out of questions.

    When the user chooses what they want to hear, they need to know exactly what they’re committing to. We need to make sure that our [labeling]—for both titles and FAQs—is clear.

    Read that again, dear FAQ fanatic. The complications for Alexa skills arise in the labeling, not in the FAQ itself. In fact, it’s the FAQ content that makes Alexa skills like the one for georgia.gov possible.

    So I can make peace with my quirky love of the FAQ?

    Indeed. Let your FAQ flag fly. FAQs—or dialogues that convey information—will always exist in some way, shape, or form. As for the accordion, though, the jury is out.

  • The Psychology of Design

    There are a number of debates about which additional skills designers should learn. Should designers code, write, or understand business? These skills are incredibly valuable but perhaps not essential. However, I would argue that every designer should learn the fundamentals of psychology. As humans, we have an underlying “blueprint” for how we perceive and process the world around us, and the study of psychology helps us define this blueprint. As designers, we can leverage psychology to build more intuitive, human-centered products and experiences. Instead of forcing users to conform to the design of a product or experience, we can use some key principles from psychology as a guide for designing for how people actually are.

    But knowing where to start can be a challenge. Which principles from psychology are useful? What are some examples of these principles at work? In this article, I’ll cover the basics, and discuss the ethical implications of using psychology in design.

    Key principles

    The intersection of psychology and design is extensive. There’s an endless list of principles that occupy this space, but there are a few that I’ve found more ubiquitous than others. Let’s take a look at what these are and where they are effectively leveraged by products and experiences we interact with every day.

    Hick’s Law

    One of the primary functions we have as designers is to synthesize information and present it in a way that doesn’t overwhelm users—after all, good communication strives for clarity. This directly relates to our first key principle: Hick’s Law. Hick’s Law predicts that the time it takes to make a decision increases with the number and complexity of choices available. It was formulated by psychologists William Edmund Hick and Ray Hyman in 1952 after examining the relationship between the number of stimuli present and an individual’s reaction time to any given stimulus.

    It turns out there is an actual formula to represent this relationship: RT = a + b · log₂(n). Fortunately, we don’t need to understand the math behind this formula to grasp what it means. The concept is quite simple: the time it takes for users to respond directly correlates to the number and complexity of options available. It implies that complex interfaces result in longer processing time for users, which is important because it’s related to a fundamental theory in psychology known as cognitive load.
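
    As a quick worked example (the constants a and b below are illustrative, not empirical values):

    // Hick’s Law: RT = a + b · log₂(n)
    // a: base time not related to deciding; b: cost per “bit” of choice.
    const reactionTime = (n, a = 0.2, b = 0.15) => a + b * Math.log2(n);

    console.log(reactionTime(4).toFixed(2));  // "0.50" — four options
    console.log(reactionTime(32).toFixed(2)); // "0.95" — more options, slower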

    Cognitive load

    Cognitive load refers to the mental processing power being used by our working memory. Our brains are similar to computer processors in that we have limited processing power: when the amount of information coming in exceeds the space available, cognitive load is incurred. Our performance suffers and tasks become more difficult, which results in missed details and even frustration.

    Examples

    Modified TV remotes that simplify the “interface” for grandparents by covering all but the channel, volume, and number buttons.

    There are examples of Hick’s Law in action everywhere, but we’ll start with a common one: remote controls. As features available in TVs increased over the decades, so did the options available on their corresponding remotes. Eventually we ended up with remotes so complex that using them required either muscle memory from repeated use or a significant amount of mental processing. This led to the phenomenon known as the “grandparent-friendly remote.” By taping off everything except for the essential buttons, grandkids were able to improve the usability of remotes for their loved ones, and they also did us all the favor of sharing them online.

    The Apple TV remote, with a gesture-friendly touch pad and only six buttons, simplifies the controls to those absolutely necessary.

    In contrast, we have smart TV remotes: the streamlined cousin of the previous example, simplifying the controls to only those absolutely necessary. The result is a remote that doesn’t require a substantial amount of working memory and therefore incurs much less cognitive load. By transferring complexity to the TV interface itself, information can be effectively organized and progressively disclosed within menus.

    Screenshots from Slack’s progressive onboarding experience, in which users learn the system by chatting with Slackbot before being introduced to the rest of the UI.

    Let’s take a look at another example of Hick’s Law. Onboarding is a crucial but risky process for new users, and few nail it as well as Slack. Instead of dropping users into a fully featured app after enduring a few onboarding slides, they use a bot (Slackbot) to engage users and prompt them to learn the messaging feature consequence-free. To prevent new users from feeling overwhelmed, Slack hides all features except for the messaging input. Once users have learned how to message via Slackbot, they are progressively introduced to additional features.

    This is a more effective way to onboard users because it mimics the way we actually learn: we build upon each subsequent step, and add to what we already know. By revealing features at just the right time, we enable our users to adapt to complex workflows and feature sets without feeling overwhelmed.
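
    Stripped to its essence, progressive disclosure like this can be just a matter of hiding features until a milestone is reached. A sketch, with hypothetical element IDs:

    const messageForm = document.querySelector('#message-form');
    const advancedFeatures = document.querySelector('#advanced-features');

    // New users see only the messaging input.
    advancedFeatures.hidden = true;

    // After the first message is sent, reveal the rest of the UI.
    messageForm.addEventListener('submit', (event) => {
      event.preventDefault(); // in a real app, send the message here
      advancedFeatures.hidden = false;
    }, { once: true });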

    Key takeaways

    • Too many choices will increase the cognitive load for users.
    • Break up long or complex processes into screens with fewer options.
    • Use progressive onboarding to minimize cognitive load for new users.

    Miller’s Law

    Another key principle is Miller’s Law, which predicts that the average person can only keep 7 (± 2) items in their working memory. It originates from a paper published in 1956 by cognitive psychologist George Miller, who discussed the limits of short-term memory and memory span. Unfortunately there has been a lot of misinterpretation regarding this heuristic over the years, and it’s led to the “magical number seven” being used to justify unnecessary limitations (for example, limiting interface menus to no more than seven items).

    Chunking

    Miller’s fascination with short-term memory and memory span centered not on the number seven, but on the concept of “chunking” and our ability to memorize information accordingly. When applied to design, chunking can be an incredibly valuable tool. Chunking describes the act of visually grouping related information into small, distinct units of information. When we chunk content in design, we are effectively making it easier to process and understand. Users can scan the content and quickly identify what they are interested in, which is aligned with how we tend to consume digital content.

    Examples

    An example of chunking with strings like phone numbers: a single unbroken string of ten digits next to the same digits formatted as a familiar phone number.

    The simplest example of chunking can be found in how we format phone numbers. Without chunking, a phone number would be a long string of digits, which makes it more difficult to process and remember. Alternatively, a phone number that has been formatted (chunked) becomes much easier to interpret and memorize. This is similar to how we perceive a “wall of text” in comparison to well-formatted content with appropriate headline treatments, line length, and content length.
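
    In code, that chunking step is a one-liner; a sketch assuming ten-digit North American numbers:

    // Break a ten-digit string into familiar phone-number chunks.
    const chunkPhone = (digits) =>
      digits.replace(/^(\d{3})(\d{3})(\d{4})$/, '($1) $2-$3');

    console.log(chunkPhone('5558675309')); // "(555) 867-5309"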

    Chunking can organize content to help users process, understand, and memorize easily, as in the visually grouped blocks of Bloomberg’s homepage.

    Another example of chunking being used effectively in design is with layout. We can use this technique to help users understand underlying relationships and hierarchy by grouping content into distinctive modules. Especially in information-dense experiences, chunking can be leveraged to provide structure to the content. Not only is the result more visually pleasing, but it’s more scannable.

    Key takeaways

    • Don’t use the “magical number seven” to justify unnecessary design limitations.
    • Organize content into smaller chunks to help users process, understand, and memorize easily.

    Jakob’s Law

    The last principle we’ll look at is Jakob’s Law (short for Jakob’s Law of Internet User Experience), which states that users spend most of their time on other sites, and they prefer your site to work the same way as all the other sites they already know. In 2000, it was put forth by usability expert Jakob Nielsen, who described the tendency for users to develop an expectation of design patterns based on their cumulative experience from other websites. This principle encourages designers to follow common design patterns in order to avoid confusing users, which can result in higher cognitive load.

    Mental models

    I know what you’re thinking: if all websites followed the same design patterns, wouldn’t that make for quite the boring web? The answer is yes, probably. But there is something incredibly valuable to be found in familiarity for users, which leads us to another fundamental concept in psychology that is valuable for designers: mental models.

    A mental model is what we think we know about a system, especially about how it works. Whether it’s a website or a car, we form models of how a system works, and then we apply that model to new situations where the system is similar. In other words, we use knowledge we already have from past experiences when interacting with something new.

    Mental models are valuable for designers, because we can match our user’s mental model to improve their experience. Consequently, users can easily transfer their knowledge from one product or experience to another without taking time to understand how the new system works. Good user experiences are made possible when the designer’s mental model is aligned with the user’s mental model. The task of shrinking the gap between our mental models and those of our users is one of our biggest challenges, and to achieve this we use a variety of methods: user interviews, personas, journey maps, empathy maps, and more. The point of all this is to gain a deeper insight into not only the goals and objectives of our users but also their pre-existing mental models, and how that applies to the product or experience we are designing.

    Examples

    Have you ever wondered why form controls look the way they do? It’s because the humans designing them had a mental model for what these elements should look like, which they based on control panels they were already familiar with in the physical world. Things like form toggles, radio inputs, and even buttons originated from the design of their tactile counterparts.

    Comparison between physical control panel elements and typical web form elements.

    As designers, we must close the gap that exists between our mental models and those of our users. It’s important we do this because there will be problems when they aren’t aligned, which can affect how users perceive the products and experiences we’ve helped build. This misalignment is called mental model discordance, and it occurs when a familiar product is suddenly changed.

    Snapchat redesign before-and-after comparison, showing a drastically different interface that confused many users.

    Take, for example, Snapchat, which rolled out a major redesign in early 2018. The reformatted layout confused users by making it difficult to access features they used on a daily basis. These unhappy users immediately took to Twitter and expressed their disapproval en masse. Even worse was the subsequent migration of users to Snapchat’s competitor, Instagram. Snapchat had failed to ensure that the mental model of their users would be aligned with the redesigned version of their app, and the resulting discordance caused major backlash.

    Before-and-after comparison of the 2017 YouTube redesign; in contrast to Snapchat, Google allowed users to explore and test the redesign ahead of time, which led to a more successful launch.

    But major redesigns don’t always have to result in backlash—just ask Google. Google has a history of allowing users to opt in to redesigned versions of their products like Google Calendar, YouTube, and Gmail. When they launched the new version of YouTube in 2017 after years of essentially the same design, they allowed desktop users to ease into the new Material Design UI without having to commit. Users could preview the new design, gain some familiarity, submit feedback, and even revert to the old version if they preferred it. As a result, the mental model discordance that might otherwise have been inevitable was avoided by simply empowering users to switch when they were ready.
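
    An opt-in like that can be as lightweight as a user-controlled preference flag; a sketch, with the storage key and class name as assumptions:

    const KEY = 'prefers-new-design'; // hypothetical storage key

    // Apply the user’s current choice on page load.
    function applyDesignPreference() {
      const optedIn = localStorage.getItem(KEY) === 'true';
      document.body.classList.toggle('new-design', optedIn);
    }

    // Wire this to a “Try the new design” / “Go back” control.
    function toggleDesign() {
      const optedIn = localStorage.getItem(KEY) === 'true';
      localStorage.setItem(KEY, String(!optedIn));
      applyDesignPreference();
    }

    applyDesignPreference();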

    Key takeaways

    • Users will transfer expectations they have built around one familiar product to another that appears similar.
    • By leveraging existing mental models, we can create superior user experiences in which the user can focus on their task rather than learning new models.
    • Minimize discordance by empowering users to continue using a familiar version for a limited time.

    Recap

    You might be thinking, “These principles are great, but how do I use them in my projects?” While nothing will replace actual user research and data specific to our projects, we can use these psychological principles to serve as a guide for designing more intuitive, human-centered products and experiences. Being mindful of these principles helps us create designs that consider how people actually are, as opposed to forcing them to conform to the technology. To quickly recap:

    • Hick’s Law can help guide us to reduce cognitive load for users by minimizing choice and breaking long or complex processes into screens with fewer options.
    • Miller’s Law teaches us to use chunking to organize content into smaller clusters to help users process, understand, and memorize easily.
    • Jakob’s Law reminds us that users will transfer expectations they have built around one familiar product to another that appears similar. Therefore, we can leverage existing mental models to create superior user experiences.

    We’ve covered some key principles that are useful for building more intuitive, human-centered products and experiences. Now let’s touch on their ethical implications and how easy it can be to fall into the trap of exploiting users with psychology.

    A note on ethics

    On the one hand, designers can use psychology to create more intuitive products and experiences; on the other, they can use it to exploit how our minds work, for the sake of creating more addictive apps and websites. Let’s first take a look at why this is a problem, and then consider some potential solutions.

    Problem

    One doesn’t have to go far to see why the well-being of users being deprioritized in favor of profit is a problem. When was the last time you were on a subway, on a sidewalk, or in a car and didn’t see someone glued to their smartphone? There are some who would argue we’re in the middle of an epidemic, and that our attention is being held captive by the mini-computers we carry with us everywhere.

    It wouldn’t be an exaggeration to say that the mobile platforms and social networks that connect us also put a lot of effort into keeping us glued, and they’re getting better at it every day. The effects of this addiction are beginning to become well known: from reduced sleep and anxiety to deteriorating social relationships, it’s becoming apparent that the race for our attention has some unintended consequences. These effects become problematic when they start to change how we form relationships and how we view ourselves.

    Solution

    As designers, our responsibility is to create products and experiences that support and align with the goals and well-being of users. In other words, we should build technology for augmenting the human experience, not replacing it with virtual interaction and rewards. The first step in making ethical design decisions is to acknowledge how the human mind can be exploited.

    We must also question what we should and shouldn’t build. We can find ourselves on quite capable teams that have the ability to build almost anything we can imagine, but that doesn’t always mean we should—especially if the goals of what we are building don’t align with the goals of our users.

    Lastly, we must consider metrics beyond usage data. Data tells us lots of things, but what it doesn’t tell us is why users are behaving a certain way or how the product is impacting their lives. To gain insight into why, we must both listen and be receptive to our users. This means getting out from behind a screen, talking with them, and then using this qualitative research to inform how we evolve the design.

    Examples

    Google’s Digital Wellbeing initiative website, which details how they’re helping users spend less time staring at screens.

    It’s been great to see companies taking the right steps when it comes to considering the digital well-being of users. Take for example Google, which just announced tools and features at their latest I/O event that focus on helping people better understand their tech usage, focus on what matters most, disconnect when needed, and create healthy digital habits. Features like an app dashboard that provides a usage overview, additional control over alerts and notifications, and Family Link for setting digital ground rules for the little ones all are geared towards protecting users.

    Screenshot from Facebook’s “News Feed FYI: Bringing People Closer Together” video, which details how they’re defining new success criteria around meaningful connections.

    Some companies are even redefining their success metrics. Instead of time on site, companies like Facebook are defining success through meaningful interactions. This required them to restructure their news feed algorithm to prioritize the content that people actually find valuable over the stuff we mindlessly consume. Content from friends and family now takes precedence, even if the result means users spend a little less time in their app.

    These examples are just a glimpse into the steps that many companies are taking, and I hope to see many more in the coming years. The technology we play a part in building can significantly impact people’s lives, and it’s crucial that we ensure that impact is positive. It’s our responsibility to create products and experiences that support and align with the goals and well-being of users. We can make ethical design decisions by acknowledging how the human mind can be exploited, considering what we should and shouldn’t build, and talking with users to gain qualitative feedback on how the products and experiences we build affect their lives.

    Resources

    There are tons of great resources we can reference for making our designs more intuitive for users. Here are a few I have referenced quite frequently:

    • Laws of UX: A website I created for designers to learn more about psychological principles that relate to UX/UI design.
    • Cognitive UXD: This hand-selected publication curated by Norbi Gaal is a great resource for anyone interested in the intersection of psychology and UX.
    • Center for Humane Technology: A world-class team of former tech insiders and CEOs who are advancing thoughtful solutions to change the culture, business incentives, design techniques, and organizational structures driving how technology hijacks our brains.
    • The Design of Everyday Things: Revised and Expanded Edition: An absolute classic that explores the communication between object and user through design, how to optimize this communication, and ultimately how psychology plays a part in designing for how humans actually are.
    • Designing for Emotion: A look at the importance of emotion when expressing a brand’s personality, and how designers can go beyond functionality, reliability, and usability to design for humans as opposed to machines.
    • Hooked: How to Build Habit-Forming Products: A guide that provides insight into the behavioral techniques used by companies like Twitter, Instagram, and Pinterest.
  • Web Developer Representation in W3C

    During its annual general member meeting on October 19th, the Fronteers board will propose both to become a member of the World Wide Web Consortium (W3C) and to hire Rachel Andrew as our representative in that standards body.

    What is Fronteers? Why does it want to become a W3C member? And how can you help? Read on; we’ll start with the second question.

    Browser vendors have the greatest say

    As we all know, W3C sets the web standards. In practice, browser vendors have the greatest say in W3C. They have the most experience in implementing web standards, and they have so much at stake in the standardization process that they’re willing to allocate budget for their W3C representatives.

    All in all, this system works decently. Although all browser vendors have their own priorities and pet features and are willing to push them to the point of implementing them before a formal specification is available, they are all aware that building the web is a communal effort, and that their ideas won’t succeed in the long term without other vendors implementing them as well.

    Web developers have little say

    Nonetheless, it’s us web developers who are primarily responsible for actually using web standards. We have as much experience as browser vendors in the practical usage of web standards, though our viewpoint is somewhat different.

    It’s good when we have the web developers’ viewpoint represented at the W3C table. An example is gutters in CSS Grid. Originally, the specification did not include them, but when Rachel Andrew, as a W3C Invited Expert, was evangelizing the (then) upcoming Grid specification, it turned out web developers wanted them. Since she could prove that web developers working in the wild wanted them, Rachel was able to convince the CSS Working Group (CSS WG) of the need for gutters. Without her volunteering to be in the CSS WG, this might not have happened, or would have happened later, forcing web developers to work around the lack of gutters with hacks.
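
    Those gutters eventually shipped as the gap property in CSS Grid; a small example of what that developer feedback bought us:

    .layout {
      display: grid;
      grid-template-columns: repeat(3, 1fr);
      gap: 1rem; /* the gutter between every row and column */
    }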

    It’s not that they don’t want our input

    W3C is willing, even eager, to listen to web developers. Any web developer willing to enter the W3C discussions is welcomed with open arms—in fact, the role of Invited Expert was created exactly for this reason. Lack of interest by W3C is not the problem.

    Instead, the problem is one of time and money. An independent web developer who participates in W3C discussions has to volunteer a lot of time—time that could otherwise be spent earning money. Serious involvement additionally means attending the face-to-face meetings held all across the globe—more time, more money.

    The web would be better served by having more professional viewpoints than just the browser makers’. Web developers in general would be served by having a W3C representative.

    But it all comes down to money. How do we pay our W3C representative?

    Superficially, this is a thorny problem. What we need is an organization of web professionals that has enough income to pay not only the representative’s fee, but also their W3C membership dues and travel costs.

    If we don’t do it, nobody else will

    This problem disappears once you know such an organization actually exists: Fronteers, the professional association of Dutch front-end developers.

    Founded in 2007, Fronteers is best known for its annual Fronteers conference in October, but that’s not all it does. It has been locally active with workshops, meetups, a job board, and other activities for Dutch front-enders. More to the point, some of its activities are pretty profitable, and it has spent far less money than it has made in the past eleven years.

    That’s why the Fronteers board decided to take the plunge and propose both to become a W3C member and to hire a representative. The need is clear, we have the money, and if we don’t do it, nobody else will.

    Members will vote on this proposal at the annual general meeting on October 19th, and I, for one, fervently hope they’ll vote in favor. If the members agree, Fronteers will apply for W3C membership and contract Rachel Andrew as soon as is feasible.

    The choice for Rachel as our representative is an obvious one. Not only is she instrumental in specifying and evangelizing CSS Grid, but she is also well-acquainted with the W3C’s processes, politics, and agenda, having served as an Invited Expert for many years. We couldn’t think of any better candidate to serve as web developers’ first representative in W3C.

    But we can’t do it alone

    But that’s not all we’re planning. We want the rest of the world to become involved as well.

    We see Rachel as the W3C voice of web developers around the world and not just as our private Fronteers representative. Though we are taking the lead for practical reasons (and because we have the funding), we see ourselves as the creators of a framework that web communities in other countries can join—and contribute to financially.

    There are two practical problems we cannot solve on our own. First, while new W3C non-profit members pay only 25% of the annual W3C fee for the first two years, beginning with the third year we’ll be on the hook for the full fee. Second, W3C membership gives us the right to appoint four representatives, but that is beyond our means.

    Again, it all boils down to money. While we could probably afford the full annual W3C fee, and our budget might conceivably be expanded and restructured enough to hire half of a second representative, that’s about the limit of what we can do without our treasurer being afflicted with a permanent sadface and other stress-induced symptoms.

    If we want to continue web developer representation in W3C beyond two years and one representative, we need outside help. We can’t do it on our own.

    Here’s how you can help

    Ask yourself: do you believe in the idea of having independent web developers’ voices represented in W3C? Do you think that it will make a difference, that it will make your work easier in the future? Do you feel this is something that has to be done?

    If so, please consider helping us.

    We would love to see other organizations of web professionals similar to Fronteers around the globe, contributing to a general fund to cover the cost of a collective W3C membership and compensating our four representatives for their time.

    Yes, that’s a lot of work. But we did it, so it’s possible. Besides, you likely won’t have to do the work all by yourself. If you like the idea, others will as well and will jump in to help. Collaborating for a common cause is something the web community is rather good at.

    Will this work? We have no idea. We do know, however, that there’s a limit to what Fronteers can do. While we’re happy to take the lead for two years, we cannot shoulder this burden permanently by ourselves.

    You can learn more about our plans at the Fronteers site, where you can also let us know how you can help. Rachel has also written about her work as a W3C Invited Expert over at Smashing Magazine.

  • Responsive Images

    I come here not to bury img, but to praise it.

    Well, mostly.

    Historically, I like img just fine. It’s refreshingly uncomplicated, on the surface: it fires off a request for the file in its src attribute, renders the contents of that file, and provides assistive technologies with an alternative narration. It does so quickly, efficiently, and seamlessly. For most of the web’s life, that’s all img has ever had to do—and thanks to years and years of browsers competing on rendering performance, it keeps getting better at it.

    But there’s a fine line between “reliable” and “stubborn,” and I’ve known img to come down on both sides of it.

    Though I admit to inadvertently hedging my bets a little by contributing to the jQuery Mobile Project—a framework originally dedicated to helping produce “mobile sites”—I’ve always come down squarely in the responsive web design (RWD) camp. For me, the appeal of RWD wasn’t in building a layout that adapted to any viewport—though I do still think that’s pretty cool. The real appeal was in finding a technique that could adapt to the unknown-unknowns. RWD felt—and still feels—like a logical and ongoing extension of the web’s strengths: resilience, flexibility, and unpredictability.

    That said, I would like to call attention to one thing that m-dot sites (dedicated mobile versions of sites, usually found at a URL beginning with the letter m followed by a dot) did have over responsively designed websites, back in the day: specially tailored assets.

    Tailoring Assets

    In a responsive layout, just setting a max-width: 100% in your CSS ensures that your images will always look right—but it also means using image sources that are at least as large as the largest size at which they’ll be displayed. If an image is meant to be displayed anywhere from 300 pixels wide to 2000 pixels wide, that same 2000-pixel-wide image is getting served up to users in all contexts. A user on a small, low-resolution display gets saddled with all of the bandwidth costs of massive, high-resolution images, but ends up with none of the benefits. A high-resolution image on a low-resolution display looks like any other low-resolution image; it just costs more to transfer and takes longer to appear.
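
    That baseline is only a couple of lines of CSS (with height: auto added to preserve the aspect ratio):

    img {
      max-width: 100%; /* never overflow the containing element */
      height: auto;    /* scale proportionally */
    }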

    Even beyond optimization, it wasn’t uncommon to show or hide entire blocks of content, depending on the current viewport size, during those early days of RWD. Though the practice became less common as we collectively got the hang of working responsively, img came with unique concerns when serving disparate content across breakpoints: our markup was likely to be parsed long before our CSS, so an img would have no way of knowing whether it would be displayed at the current viewport size. Even an img (or its container) set to display: none would trigger a request, by design. More bandwidth wasted, with no user-facing benefit.

    Our earliest attempts

    I am fortunate enough to have played a tiny part in the history of RWD, having worked alongside Filament Group and Ethan Marcotte on the Boston Globe website back in 2011.

    It was, by any measure, a project with weight. The Globe website redesign gave us an opportunity to prove that responsive web design was not only a viable approach to development, but that it could scale beyond the “it might be fine for a personal blog” trope—it could work for a massive news organization’s website. It’s hard to imagine that idea has ever needed proving, looking back on it now, but this was a time when standalone m-dot sites were widely considered a best practice.

    While working on the Globe, we tried developing a means of delivering larger images to devices with larger screens, beginning with the philosophy that the technique should err on the side of mobile: start with a mobile-sized and -formatted image, then swap that with a larger version depending on the user’s screen size. This way, if anything should break down, we’re still erring on the side of caution. A smaller—but still perfectly representative—image.

    The key to this was getting the screen’s width in JavaScript, in the head of the document, and relaying that information to the server in time to defer requests for images farther down the page. At the time, that JavaScript would be executed prior to any requests in the body being made; we used that script to set a cookie about the user’s viewport size, which would be carried along with those img requests on the same page load. A bit of server-side scripting would read the cookie and determine which asset to send in response.
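
    In spirit, the head-of-document half of that technique looked something like this reconstruction (the cookie name is illustrative, not the Globe’s actual code):

    <script>
      // Runs in the head, before (at the time) any img requests were made.
      document.cookie = 'screenWidth=' + screen.width + '; path=/';
    </script>

    On each image request, the server could then read that cookie and respond with an appropriately sized asset.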

    It worked well, but it was squarely in the realm of “clever hack”—that parsing behavior wasn’t explicitly defined in any specifications. And in the end, as even the cleverest hacks are wont to do, it broke.

    Believe it or not, that was good news.

    Prefetching—or “speculative preparsing”—is a huge part of what makes browsers feel fast: before we can even see the page, the browser starts requesting assets so they’re closer to “ready” by the time the page appears. Around the time the Globe’s site launched, several major browsers made changes to the way they handled prefetching. Part of those changes meant that an image source might be requested before we had a chance to apply any of our custom logic.

    Now, when browsers compete on performance, users win—those improvements to speculative preparsing were great news for performance, improving load times by as much as 20 percent. But there was a disconnect here—the fastest request is the one that never gets made. Good ol’ reliable img was single-mindedly requesting the contents of its src faster than ever, but often the contents of those requests were inefficient from the outset, no matter how quickly the browser managed to request, parse, and render them—the assets were bigger than they’d ever need to be. The harm was being done over the wire.

    So we set out to find a new hack. What followed was a sordid tale of noscript tags and dynamically injected base tags, of document.write and eval, of rendering all of our page’s markup in a head element to break preparsing altogether.

    For some of you, the preceding lines will require no explanation, and for that you have my sincerest condolences. For everyone else: know that it was the stuff of scary developer campfire stories (or, I guess, scary GIF-of-a-campfire stories). Messy, hard-to-maintain hacks all the way down, relying entirely on undocumented, unreliable browser quirks.

    Worse than those means, though, were the ends: none of it really worked. We were always left with compromises we’d be foisting on a whole swath of users—wasted requests for some, blurry images for others. It was a problem we simply couldn’t solve with sufficiently clever JavaScript; even if we had been able to, it would’ve meant working around browser-level optimizations rather than taking advantage of them. We were trying to subvert browsers’ improvements, rather than work with them. Nothing felt like the way forward.

    We began hashing out ideas for a native solution: if HTML5 offered us a way to solve this, what would that way look like?

    A native solution

    What began in a shared text file eventually evolved into one of the first and largest of the W3C’s Community Groups—places where developers could build consensus and offer feedback on evolving specifications. Under the banner of the “Responsive Images Community Group,” we—well, at the risk of ruining the dramatic narrative, we argued on mailing lists.

    One such email, from Bruce Lawson, proposed a markup pattern for delivering context-appropriate images that fell in line with the existing rich-media elements in HTML5—like the video tag—even borrowing the media attribute. He called it picture; image was already taken as an ancient alias of img, after all.

    What made this proposal special was the way it used our reliable old friend img. Rather than a standalone element, picture came to exist as a wrapper—and a decision engine—for an inner img element:

    <picture>
      <source …>
      <img src="source.jpg" alt="…">
    </picture>

    That img inside picture would give us an incredibly powerful fallback pattern—it wouldn’t be the sort of standard where we have to wait for browser support to catch up before we could make use of it. Browsers that didn’t understand picture and its source elements would ignore it and still render the inner img. Browsers that did understand picture could use criteria attached to source elements to tell the inner img which source file to request.
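
    Filled in with illustrative filenames and breakpoints, that decision engine might look like this; the browser uses the first source whose media condition matches, and older browsers simply render the img:

    <picture>
      <source media="(min-width: 64em)" srcset="photo-large.jpg">
      <source media="(min-width: 37.5em)" srcset="photo-medium.jpg">
      <img src="photo-small.jpg" alt="A description of the photo">
    </picture>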

    Most important of all, though, it meant we didn’t have to recreate all of the features of img on a brand-new element: because picture didn’t render anything in and of itself, we’d still be leaning on the performance and accessibility features of that img.

    This made a lot of sense to us, so we took it to the Web Hypertext Application Technology Working Group (WHATWG), one of the two groups responsible for the ongoing development of HTML.

    If you’ve been in the industry for a few years, this part of the story may sound a little familiar. Some of you may have caught whispers of a fight between the WHATWG’s srcset and the picture element put forth by a scrappy band of web-standards rebels and their handsome, charismatic, and endlessly humble Chair. Some of you read the various calls to arms, or donated when we raised funds to hire Yoav Weiss to work full-time on native implementations. Some of you have RICG T-shirts, which—I don’t mind saying—were rad.

    A lot of dust needed to settle, and when it finally did, we found ourselves with more than just one new element; edge cases begat use cases, and we discovered that picture alone wouldn’t be enough to suit all of the image needs of our increasingly complex responsive layouts. We got an entire suite of enhancements to the img element as well: native options for dealing with high-resolution displays, with the size of an image in a layout, with alternate image formats—things we had never been able to do natively, prior to that point.
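
    For instance, the srcset and sizes attributes let the browser choose an appropriately sized file on its own (filenames and breakpoints here are illustrative):

    <img src="photo-800.jpg"
         srcset="photo-800.jpg 800w, photo-1600.jpg 1600w"
         sizes="(min-width: 48em) 50vw, 100vw"
         alt="A description of the photo">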

  • Breaking the Deadlock Between User Experience and Developer Experience

    In early 2013, less than 14% of all web traffic came from mobile devices; today, that number has grown to 53%. In other parts of the world the difference is even more staggering: in African countries, more than 64% of web traffic is from mobile devices; in India, nearly 78% of traffic is mobile. This is a big deal, because all 248 million new internet users in 2017 lived outside the United States.

    And while internet connections are getting faster, there are still dozens of countries that access the web at speeds of less than 2 Mbps. Even in developed nations, people on mobile devices see spotty coverage, flaky wifi connections, and coverage interruptions (like train tunnels or country roads).

    This means we can no longer talk about user experience (UX) without including performance as a first-class requirement. A Google study found that 53% of mobile users abandon a page if it takes longer than three seconds to load—and none of us are willing to lose half our traffic, right?

    User experience and performance are already aligned—in theory

    User experience designers and researchers lay a solid foundation for building modern web apps. By thinking about who the user is, what they’re trying to accomplish, and what environments they might be in when using our app, we already spot several performance necessities: a commuter, for example, will be accessing the app from their phone or a low-speed public wifi connection with spotty coverage.

    For that type of user, we know to focus on a fast load time—remember, three seconds or more and we’ll lose half our visitors—in addition to an experience that works well, even on unstable connections. And since downloading huge files will also take a long time for this user, reducing the amount of code we ship becomes necessary as well.

    UX and performance have issues in practice

    My sister loves dogs. Once, as a kid, she attack-hugged our dog and loved it so hard that it panicked and bit her.

    The web community’s relationship with UX is not unlike my sister’s with dogs: we’re trying so hard to love our users that we’re making them miserable.

    Our efforts to measure and improve UX are packed with tragically ironic attempts to love our users: we try to find ways to improve our app experiences by bloating them with analytics, split testing, behavioral analysis, and Net Promoter Score popovers. We stack plugins on top of third-party libraries on top of frameworks in the name of making websites “better”—whether it’s something misguided, like adding a carousel to appease some executive’s burning desire to get everything “above the fold,” or something truly intended to help people, like a support chat overlay. Often the net result is a slower page load, a frustrating experience, and/or (usually “and”) a ton of extra code and assets transferred to the browser.

    The message we appear to be sending is, “We care so much about your experience as a user that we’re willing to grind it to a halt so we can ask you about it, and track how you use the things we build!”

    Making it worse by trying to make it better

    We’re not adding this bloat because we’re intentionally trying to ruin the experience for our users; we’re adding it because it’s made up of tools that solve hard development problems so we don’t have to reinvent the wheel.

    When we add these tools, we’re still trying to improve the experience, but we’ve now shifted our focus to a different user: developers. There’s a large ecosystem of products and tools aimed toward making developers’ lives easier, and it’s common to roll up these developer-facing tools under the term developer experience, or DX.

    Stacking tools upon tools may solve our problems, but it’s creating a Jenga tower of problems for our users. This paradox—that the steps we take to make our work easier are inadvertently making the experience worse for our users—leads to what Nicole Sullivan calls a “deadlock between developer experience [and] user experience.”

    A tweet from Nicole Sullivan (@stubbornella) reading “More seriously, we need to take a step back and break the deadlock between developer experience and user experience. Why are they at odds with each other? (Perf: file size transferred and memory footprint.) Ok, so how do we make that not a thing?”
    This tweet by Nicole Sullivan inspired this article.

    Developer experience goes beyond the tech stack

    Let’s talk about cooking experience (CX). When I’m at home, I enjoy cooking. (Stick with me; I have a point.) I have a cast-iron skillet, a gas range, and a prep area that I have set up just the way I like it. And if you’re fortunate enough to find yourself at my table for a weekend brunch, you’re in for one of the most delicious breakfast sandwiches of your life.

    However, when I’m traveling, I hate cooking. The cookware in Airbnbs is always cheap IKEA pans and dull knives and cooktops with uneven heat, and I don’t know where anything is. Whenever I try to cook in these environments, the food comes out edible, but it’s certainly not great.

    It might be tempting to say that if I need my own kitchen to produce an edible meal, I’m just not a great cook. But really, the high-quality tools and well-designed environment in my kitchen at home create a better CX, which in turn leads to my spending more time focused on the food and less time struggling with my tools.

    In the low-quality kitchens, the bad CX means I’m unable to focus on cooking, because I’m spending too much time trying to manage the hot spots in the pan or searching the drawers and cabinets for a utensil.

    Good developer experience is having the freedom to forget

    Like in cooking, if our development tools are well-suited to the task at hand, we can do excellent work without worrying about the underlying details.

    When I wrote my first lines of HTML and CSS, I used plain old Notepad. No syntax highlighting, autocorrect, or any other assistance available. Just me, a reference book, and a game of Where’s Waldo? to find the tag I’d forgotten to close. The experience was slow, frustrating, and painful.

    Today, I use an editor that not only offers syntax highlighting but also auto-completes my variable names, formats my code, identifies potential problems, helps me debug my code as I type, and even lets me share my current editing session with a coworker to get help debugging a problem. An enormous number of incremental improvements now exist that let us forget about the tiny details, instead letting us focus on the task at hand. These tools aim to make the right thing the easy thing, leading developers to follow best practices by default because our tools are designed to do the right thing on our behalf.

    It’s hard to overstate the impact that modern development environments have had on my productivity.

    And that’s just my editor.

    UX and DX are at odds with each other

    There is no one-size-fits-all way to build an app, but most developer tools are built with a one-size-fits-all approach. To make this work, most tools are built to solve one thing in a general-purpose way, such as date management or cryptography. This, of course, necessitates stacking multiple tools together to achieve our goals. From a DX standpoint, this is amazing: we can almost always find an open source solution to problems that aren’t ultra-specific to the project we’re working on.

    However, stacking a half-dozen tools to improve our DX harms the UX of our apps. Add a few kilobytes for this tool, a few more for that tool, and before we know it we’re shipping mountains of code. In today’s front-end landscape, it’s not uncommon to see apps shipping multiple megabytes of JavaScript—just open the Network tab of your browser’s developer tools, and softly weep as you notice your favorite sites dump buckets of JavaScript into your browser.

    A screenshot of Chrome's developer tools open for Forbes.com, showing a total of 10 MiB of JavaScript downloaded over 762 requests while the page has been idle for 30 minutes.
    I left forbes.com open for thirty minutes. It sent 3,273 requests and loaded 10 MB of JavaScript.
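    To make the stacking concrete with a hypothetical example (the library choice and sizes here are illustrative, not taken from any audit): a single convenience import can ship far more code than the feature it powers.

        // Hypothetical: importing a general-purpose date library to format
        // one date pulls in the whole library (moment is on the order of
        // 70 KB min+gzip once its bundled locales come along for the ride).
        import moment from 'moment';
        const today = moment().format('YYYY-MM-DD');

        // The built-in Date API does the same job for zero extra bytes
        // (note: toISOString() reports UTC rather than local time).
        const todayBuiltIn = new Date().toISOString().slice(0, 10);

    Multiply that pattern across a half-dozen dependencies, and the megabytes in the screenshot above stop looking surprising.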

    In addition to making pages slower to download, scripts put strain on our users’ devices. For someone on a low-powered phone (for example, a cheap smartphone or an older iPhone), the download time is only the first barrier to viewing the app; after downloading, the device has to parse all that JavaScript. As an example, 1 MB of JavaScript takes roughly six seconds to parse on a Samsung Galaxy Note II.

    On a 3G connection, adding 1 MB of JavaScript can mean adding ten or more seconds to your app’s download-and-parse time. That’s bad UX.
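    For a rough sense of where that figure comes from (assuming a typical 3G throughput of around 750 kbps): 1 MB is 8 megabits, so the download alone takes 8 ÷ 0.75 ≈ 11 seconds, before the device spends several more seconds parsing the result.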

    Patching the holes in our UX comes at a price

    Of course, we can solve some of these problems. We can manually optimize our apps by loading only the pieces we actually use. We can swap in lighter-weight alternatives to heavy libraries to reduce our overall bundle size. We can add performance budgets, tests, and other checks to alert us if the codebase starts getting too large.
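    As one sketch of what such a check can look like (assuming a webpack build; the thresholds are illustrative), the bundler’s built-in performance hints can fail the build whenever output grows past a budget:

        // webpack.config.js (partial): a minimal performance budget
        module.exports = {
          // ...entry, output, loaders...
          performance: {
            hints: 'error',            // fail the build instead of just warning
            maxAssetSize: 250000,      // max size per emitted asset, in bytes
            maxEntrypointSize: 250000, // max size per entry point, in bytes
          },
        };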

    But now we’re adding audits, writing bespoke code to manage the foundation of our apps, and moving into the uncharted, unsupported territory of wiring unrelated tooling together—which means we can’t easily find help online. And once we step outside the known use cases for a given abstraction, we’re on our own.

    Once we find ourselves building and managing bespoke solutions to our problems, many of the DX benefits we were previously enjoying are lost.

    Good UX often necessitates bad DX

    There are a number of frameworks that exist to help developers get up and running with almost no overhead. We’re able to start building an app without first needing to learn all the boilerplate and configuration work that goes into setting up the development environment. This is a popular approach to front-end development—often referred to as “zero-config” to signify how easy it is to get up and running—because it removes the need to start from scratch. Instead of spending our time setting up the foundational code that doesn’t really vary between projects, we can start working on features immediately.

    This is true, in the beginning, until our app steps outside the defined use cases, which it likely will. And then we’re plunged into a world of configuration tuning, code transpilers, browser polyfills, and development servers—even for seasoned developers, this can be extremely overwhelming.

    Each of these tools on its own is relatively straightforward, but trying to learn how to configure a half-dozen new tools just so you can start working is a very real source of fatigue and frustration. As an example, here’s how it feels to start a JavaScript project from scratch in 2018:

    • install Node and npm;
    • use npm to install Yarn;
    • use Yarn to install React, Redux, Babel (and 1–5 Babel plugins and presets), Jest, ESLint, webpack, and PostCSS (plus plugins);
    • write configuration files for Babel, Jest, ESLint, webpack, and PostCSS;
    • write several dozen lines of boilerplate code to set up Redux;
    • and finally start doing things that are actually related to the project’s requirements.

    This can add up to entire days spent setting up boilerplate code that is nearly identical between projects. Starting with a zero-config option gets us up and running much faster, but it also immediately throws us into the deep end if we ever need to do something that isn’t a standard use case.
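    To make that concrete, here is a small slice of the near-identical boilerplate the list above implies: a minimal sketch of a Babel-plus-webpack setup, not a recommended configuration.

        // babel.config.js: transpile modern JavaScript and JSX
        module.exports = {
          presets: ['@babel/preset-env', '@babel/preset-react'],
        };

        // webpack.config.js: bundle the app, running source through Babel
        module.exports = {
          entry: './src/index.js',
          output: { filename: 'bundle.js', path: __dirname + '/dist' },
          module: {
            rules: [
              { test: /\.jsx?$/, exclude: /node_modules/, use: 'babel-loader' },
            ],
          },
        };

    And that is before Jest, ESLint, PostCSS, or the Redux boilerplate even enters the picture.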

    And while the open source developers who maintain these abstractions do their best to meet the needs of everyone, if we start looking at improving the UX of our individual apps, there’s a high likelihood that we’ll find ourselves off the beaten path, buried up to our elbows in Byzantine configuration files, cursing the day we chose web development as a career.

    Someone always pays the cost

    On the surface, it might look like this is just the job: web developers are paid to deliver a good UX, so we should just suck it up and suffer through the hard parts of development. Unfortunately, this doesn’t pan out in practice.

    Developers are stretched thin, and most companies can’t afford to hire a specialist in accessibility, performance, and every other area that might affect UX. Even a seasoned developer with a deep understanding of her stack would likely struggle to run a full UX audit on every piece of an average web app. There are too many things to do and never enough time to do it all. That’s a recipe for trouble, and it results in things falling through the cracks.

    Under time pressure, this gets worse. Developers cut corners by shipping code that’s buggy with // FIXME oh god I'm so sorry attached. They de-prioritize UX concerns—for example, making sure screen reader users can, you know, read things—as something “to revisit later.” They make decisions in the name of hitting deadlines and budgets that, ultimately, force our users to pay the cost of our DX.

    Developers do the best they can with the time and tools available, but when there’s a trade-off to be made between UX and DX, the cost all too often rolls downhill to the users, more because of a lack of time and resources than because of negligence.

    How do we break the deadlock between DX and UX?

    While it’s true that someone always pays the cost, there are ways to approach both UX and DX that keep the costs low or—in best-case scenarios—allow developers to pay the cost once, and reap the DX benefits indefinitely without any trade-offs in the resulting UX.

    Understand the cost of an outstanding user experience

    In any given project, we should use the ideal UX as our starting point. This ideal UX should be built from user research, lo-fi testing, and an iterative design process so we can be sure it’s actually what our users want.

    Once we know what the ideal UX is, we should start mapping UX considerations to technical tasks. This is the process of breaking down abstract concepts like “feels fast” into concrete metrics: how can we measure that a UX goal has been met?

    By converting our UX goals into measurable outcomes, we can start to get an idea of the impact, both from a UX and a DX perspective.
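    As a small sketch of what “measurable” can mean in practice (browser-only, and the budget number is illustrative rather than a standard), the Performance API can report paint timings that approximate a “feels fast” goal:

        // Compare first contentful paint against a concrete budget.
        const BUDGET_MS = 1500; // illustrative target for this sketch

        new PerformanceObserver((list) => {
          for (const entry of list.getEntries()) {
            if (entry.name === 'first-contentful-paint') {
              const withinBudget = entry.startTime <= BUDGET_MS;
              console.log(
                `FCP: ${Math.round(entry.startTime)} ms ` +
                (withinBudget ? '(within budget)' : '(over budget)')
              );
            }
          }
        }).observe({ type: 'paint', buffered: true });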

    A 2D plot with the Y axis labeled “user value”, and the X axis labeled “effort by organization”. The plot is divided into quadrants, with areas reading in clockwise order “maybe”, “yes!!!”, “maybe”, and “no”.
    A prioritization matrix to determine whether the benefits outweigh the cost of a particular task. Source: Nielsen Norman Group

    From a planning perspective, we can get an idea of which tasks will have the largest impact on UX, and which will require the highest level of effort from the developers. This helps us understand the costs and the relative trade-offs: if something is high-effort and low-impact, maybe it’s OK to let the users pay that cost. But if something will have a high impact on UX, it’s probably not a good idea to skip it in favor of good DX.

    Consider the cost when choosing solutions

    Once we’re able to understand the relative cost and trade-offs of a given task, we can start to analyze it in detail. We already know how hard the problem is to solve, so we can start looking at the how of solving it. In general terms, there are three major approaches to solving a problem:

    • Invent your own solution from scratch.
    • Research what the smartest people in the community are doing, and apply their findings to a custom solution.
    • Leverage the collective efforts of the open source community by using a ready-made solution.

    Each category comes with trade-offs, and knowing whether the costs outweigh the benefits for any given problem requires working through the requirements of the project. Without a clear map of what’s being built—and what it will cost to build it—any decisions made about tools are educated guesses at best.

    When to invent your own solution

    Early in my stint as a front-end architect at IBM, I led a project to roll out a GraphQL layer for front-end teams to rapidly build apps in our microservice-based architecture. We started with open source tools, but at the time nothing existed to solve the particular challenges we were facing. We ended up building GrAMPS, which we open sourced in late 2017, to scratch our particular itch.

    In this situation, building something custom was our lowest-cost option: we knew that GraphQL would solve a critical problem for us, but no tools existed for running GraphQL in a microservice architecture. The cost of moving away from microservices was prohibitively high, and the cost of keeping things the way they were wasn’t manageable in the long term. Spending the time to create the tooling we needed paid dividends through increased productivity and improved DX for our teams.

    The caveat in this story, though, is that IBM is a rare type of company that has deep pockets and a huge team. Letting a team of developers work full-time to create low-level tooling—tools required just to start working on the actual goal—is rarely feasible.

    And while the DX improved for teams that worked with the improved data layer we implemented, the DX for our team as we built the tools was pretty rough.

    Sometimes the extra effort and risk are worth it in the long term, but as Eric Lee says, every line of code you write is a liability, not an asset. Before choosing to roll a custom solution, give serious thought to whether you have the resources to manage that liability.

    When to apply research and lessons from experts in the field

    A step further up the tooling ladder, we’re able to leverage and implement the research of industry experts. We’re not inventing solutions anymore; we’re implementing the solutions designed by the foremost experts in a given field.

    With a little research, we have access to industry best practices for accessibility thanks to experts like Léonie Watson and Marcy Sutton; for web standards via Jeffrey Zeldman and Estelle Weyl; for performance via Tim Kadlec and Addy Osmani.

    By leveraging the collective knowledge of the web’s leading experts, we get to not only learn what the current best practices are but also become experts ourselves by implementing those best practices.

    But the web moves fast, and for every solution we have time to research and implement, a dozen more will see major improvements. Keeping up with best practices becomes a thankless game of whack-a-mole, and even the very best developers can’t keep up with the entire industry’s advancements. This means that while we implement the latest techniques in one area of our app, other areas will go stale, and technical debt will start to pile up.

    While learning all of these new best practices feels really great, the DX of implementing those solutions can be pretty rough—in many cases making the cost higher than a given team can afford.

    Continued learning is an absolutely necessary part of being a web developer—we should always be working to learn and improve—but it doesn’t scale if it’s our only approach to providing a great UX. To paraphrase Jem Young, we have to look at the trade-offs, and we should make the decision that improves the team’s DX. Our goal is to make the team more productive, and we need to know where to draw the line between understanding the down-and-dirty details of each piece of our app and shipping a high-quality experience to our users in a reasonable amount of time.

    To put it another way: keeping up with industry best practices is an excellent tool for weighing the trade-offs between building in-house solutions or using an existing tool, but we need to make peace with the fact that there’s simply no way we can keep up with everything happening in the industry.

    When to use off-the-shelf solutions

    While it’s overwhelming to try to keep up with the rapidly changing front-end landscape, the ever-evolving ecosystem of open source tools is also an incredible source of prepaid DX.

    There are dozens of incredibly smart, incredibly passionate people working to solve problems on the web, and many of those solutions are open source. This gives developers like you and me unprecedented access to prepaid solutions: the community has already paid the cost, so you and I can deliver an amazing UX without giving up our DX.

    This class of tooling was designed with both UX and DX in mind. As best practices evolve, each project has hundreds of contributors working together to ensure that these tools are always using the best possible approach. And each has generated an ecosystem of tutorials, examples, articles, and discussion to make the DX even better.

    By taking advantage of the collective expertise of the web community, we’re able to sidestep all the heartache and frustration of figuring these things out; the open source community has prepaid the cost on our behalf. We can enjoy a dramatically improved DX, confident that many of the hardest parts of creating good UX are taken care of already.

    The trade-off—because there is always at least one—is that we need to accept and work within the assumptions and constraints of these frameworks to get the great DX. Because as soon as we step outside the happy path, we’re on our own again. Before adopting any solution—whether it’s open source, SaaS, or bespoke—it’s important to have a thorough understanding of what we’re trying to accomplish and to compare that understanding to the goals and limitations of a proposed tool. Otherwise we’re running a significant risk: that today’s DX improvements will become tomorrow’s technical debt.

    If we’re willing to accept that trade-off, we find ourselves in a great position: we get to confidently ship apps, knowing that UX is a first-class consideration at every level of our stack, and we get to work in an environment that’s optimized to give our teams an incredible DX.

    Deadlock is a (solvable) design problem

    It’s tempting to frame UX and DX as opposing forces in a zero-sum game: for one to get better, the other needs to get worse. And in many apps, that certainly appears to be the case.

    DX at the expense of UX is a design problem. If software is designed to make developers’ lives easier without considering the user, it’s no wonder that problems arise later on. If the user’s needs aren’t considered at the core of every decision, we see problems creep in: instead of recognizing that users will abandon our sites on mobile if they take longer than three seconds to load, our projects end up bloated and take twice that long to load on 4G—and even longer on 3G. We send hundreds of kilobytes of bloat, because optimizing images or removing unused code is tedious. Simply put: we get lazy, and our users suffer for it.

    Similarly, if a team ignores its tools and focuses only on delivering great UX, the developers will suffer. Arduous quality assurance checklists full of manual processes can ensure that the UX of our projects is top-notch, but it’s a slog that creates a terrible, mind-numbing DX for the teams writing the code. In an industry full of developers who love to innovate and create, cumbersome checklists tend to kill employee engagement, which is ultimately bad for the users, the developers, and the whole company.

    But if we take a moment at the outset of our projects to consider both sides, we’re able to spot trade-offs and make intelligent design decisions before problems emerge. We can treat both UX and DX as first-class concerns and avoid putting them at odds with each other—or, at least, we can minimize the trade-offs when conflicts happen. We can provide an excellent experience for our users while also creating a robust suite of tools and frameworks that make development enjoyable and maintainable for the entire lifespan of the project.

    Whether we do that by choosing existing tools to take work off our plates, by spending an appropriate amount of time properly planning custom solutions, or some combination thereof, we can make a conscious effort to make smart design decisions, so we can keep users and developers happy.
