
News from the world of web design and SEO

Presenting syndicated news from several of the leading sites in the field of web design and SEO (search engine optimization).

A List Apart: The Full Feed
Articles for people who make web sites.
  • Designing for Conversions

    What makes creative successful? Creative work often lives in the land of feeling—we can say we like something, point to how happy the client is, or talk about how delighted users will be, but can’t objectively measure feelings. Measuring the success of creative work doesn’t have to stop with feeling. In fact, we can assign it numbers, do math with it, and track improvement to show clients objectively how well our creative is working for them.

    David Ogilvy once said, “If it doesn’t sell, it isn’t creative.” While success may not be a tangible metric for us, it is for our clients. They have hard numbers to meet, and as designers, we owe it to them to think about how our work can meet those goals. We can track sales, sure, but websites are ripe with other opportunities for measuring improvements. Designing for conversions will not only make you a more effective designer or copywriter, it will make you much more valuable to your clients, and that’s something we should all seek out.

    Wait—what’s a conversion?

    Before designing for conversions, let’s establish a baseline for what, exactly, we’re talking about. A conversion is an action taken by the user that accomplishes a business goal. If your site sells things, a conversion would be a sale. If you collect user information to achieve your business goals, like lead aggregation, it would be a form submission. Conversions can also be things like newsletter sign-ups or even hits on a page containing important information that you need users to read. You need some tangible action to measure the success of your site—that’s your conversion.

    Through analytics, you know how many people are coming to your site. You can use this to measure what percentage of users are converting. This number is your conversion rate, and it’s the single greatest metric for measuring the success of a creative change. In your analytics, you can set up goals and conversion funnels to track this for you (more on conversion funnels shortly). It doesn’t matter how slick that new form looks or how clever that headline is—if the conversion rate drops, it’s not a success. In fact, once you start measuring success by conversion rate, you’ll be surprised to see how even the cleverest designs applied in the wrong places can fail to achieve your goals.

    Conversions aren’t always a one-step process. Many of us have multi-step forms or long check-out processes where it can be very useful to track how far a user gets. It’s possible to set up multiple goals along the way so your analytics can give you this data. This is called a conversion funnel. Ideally, you’ll coordinate with the rest of your organization to get data beyond the website as well. For instance, changing button copy may lead to increased form submissions but a drop in conversions from lead to sale afterward. In this case, the button copy update probably confused users rather than selling them on the product. A good conversion funnel will safeguard against false positives like that.
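    As a sketch of how funnel steps can be tracked, assuming Google Analytics (gtag.js) is installed; the form IDs and step event names here are hypothetical placeholders for your own funnel:

    ```js
    // Record each step of a multi-step form as an event, so drop-off
    // between steps shows up in the funnel report.
    document.querySelector('#step-1-form').addEventListener('submit', () => {
      gtag('event', 'funnel_step_1_complete');
    });

    document.querySelector('#step-2-form').addEventListener('submit', () => {
      gtag('event', 'funnel_step_2_complete');
    });

    // The final conversion: here, a lead-generation form submission.
    document.querySelector('#lead-form').addEventListener('submit', () => {
      gtag('event', 'generate_lead');
    });
    ```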

    It’s also important to track the bounce rate, which is the percentage of users that hit a page and leave without converting or navigating to other pages. A higher bounce rate is an indication that there’s a mismatch between the user’s expectations when landing on your site and what they find once landing there. Bounce rate is really a part of the conversion funnel, and reducing bounce rate can be just as important as improving conversion rate.

    Great. So how do we do that?

    When I was first getting started in conversion-driven design, it honestly felt a little weird. It feels shady to focus obsessively on getting the user to complete an action. But this focus is in no way about tricking the user into doing something they don’t want to do—that’s a bad business model. As Gerry McGovern has commented, if business goals don’t align with customer goals, your business has no future. So if we’re not tricking users, what are we doing?

    Users come to your site with a problem, and they’re looking for a solution. The goal is to find users whose problems will be solved by choosing your product. With that in mind, improving the conversion rate doesn’t mean tricking users into doing something—it means showing the right users how to solve their problem. That means making two things clear: that your product will solve the user’s problem, and what the user must do to proceed.

    The first of these two points is the value proposition. This is how the user determines whether your product can solve his or her problem. It can be a simple description of the benefits, customer testimonials, or just a statement about what the product will do for the user. A page is not limited to one value proposition—it’s good to have several. (Hint: the page’s headline should almost always be a value proposition!) The user should be able to determine quickly why your product will be helpful in solving their problem. Once the value of your product has been made clear, you need to direct the user to convert with a call to action.

    A call to action tells the user what they must do to solve their problem—which, in your case, means to convert. Most buttons and links should be calls to action, but a bit of copy directly following a value proposition is a good place too. Users should never have to look around to find out what the next step is—it should be easy to spot and clear in its intention. Also, ease of access is a big success factor here. My team’s testing found that replacing a Request Information button (that pointed to a form page) with an actual form on every page significantly boosted the conversion rate. If you’re also trying to get information from a user, consider a big form at the top of the page so users can’t miss it. When they scroll down the page and are ready to convert, they remember the form and have no question as to what they have to do.

    So improving conversion rate (and, to some degree, decreasing bounce rate) is largely about adding clarity around the value proposition and call to action. There are other factors as well, like decreasing friction in the conversion process and improving performance, but these two things are where the magic happens, and conversion problems are usually problems with one of them.

    So, value propositions…how do I do those?

    The number one thing to remember when crafting a value proposition is that you’re not selling a product—you’re selling a solution. Value propositions begin with the user’s problem and focus on that. Users don’t care about the history of your company, how many awards you’ve won, or what clever puns you’ve come up with—they care about whether your product will solve their problem. If they don’t get the impression that it can do that, they will leave and go to a competitor.

    In my work with landing pages for career schools, we initially included pictures of people in graduation gowns and caps. We assumed that the most exciting part of going back to school was graduating. Data showed us that we were wrong. Our testing showed that photos of people doing the jobs they would be training for performed much better. In short, our assumption was that showing the product (the school) was more important than showing the benefit (a new career). The problem users were trying to solve wasn’t a diploma—it was a career, and focusing on the user showed a significant improvement in conversion rate.

    We had some clients that insisted on using their branding on the landing pages, including one school that wanted to use an eagle as their hero image because their main website had eagles everywhere. This absolutely bombed in conversions. No matter how strong or consistent your branding is, it will not outperform talking about users and their problems.

    Websites that get paid for clicks have mastered writing headlines this way. Clickbait headlines get a groan from copywriters—especially since they often use their powers for evil and not good—but there are some important lessons we can learn from them. Take this headline, for instance:

    Get an Associate’s degree in nursing

    Just like in the example above with the college graduates, we’re selling the product—not the benefit. This doesn’t necessarily show that we understand the user’s problem, and it does nothing to get them excited about our program. Compare that headline to this one:

    Is your job stuck in a rut? Get trained for a new career in nursing in only 18 months!

    In this case, we lead with the user’s problem. That immediately gets users’ attention. We then skip to a benefit: a quick turnaround. No time is wasted talking about the product—we save that for the body copy. The headline focuses entirely on the user.

    In your sign-up or check-out process, always lead with the information the user is most interested in. In our case, letting the user first select their school campus and area of study showed a significant improvement over leading with contact information. Similarly, put the less-exciting content last. In our testing, users were least excited about sharing their telephone number. Moving that field to be the last one in the form decreased form abandonment and improved conversions.

    As designers, be cognizant of what your copywriters are doing. If the headline is the primary value proposition (as it should be), make sure the headline is the focal point of your design. Ensure the messaging behind your design is in line with the messaging in the content. If there’s a disagreement in what the user’s problem is or how your product will solve that problem, the conversion rate will suffer.

    Once the value proposition has been clearly defined and stated, it’s time to focus on the call to action.

    What about the call to action?

    For conversion-driven sites, a good call to action is the most important component. If a user is ready to convert and has insufficient direction on how to do so, you lose a sale at 90 percent completion. It needs to be abundantly clear to the user how to proceed, and that’s where the call to action steps in.

    When crafting a call to action, don’t be shy. Buttons should be large, forms should be hard to miss, and language should be imperative. A call to action should be one of the first things the user notices on the page, even if he or she won’t be thinking about it again until after doing some research on the page. Having the next step right in front of the user vastly increases the chance of conversion, so users need to know that it’s there waiting.

    That said, a call to action should never get in the way of a value proposition. I see this all the time: a modal window shows as soon as I get to a site, asking me to subscribe to their mailing list before I have an inkling of the value the site can give me. I dismiss these without looking, and that call to action is completely missed. Make it clear how to convert, and make it easy, but don’t ask for a conversion before the user is ready. For situations like the one above, a better strategy might be asking me to subscribe as I exit the site; marketing to visitors who are leaving has been shown to be effective.

    In my former team’s tests, there were some design choices that could improve calls to action. For instance, picking a bright color that stood out from the rest of the site for the submit button did show an improvement in conversions, and reducing clutter around the call to action improved conversion rates by 232%. But most of the gains here were in either layout or copy; don’t get so caught up in minor design changes that you ignore more significant changes like these.

    Ease of access is another huge factor to consider. As mentioned above, when my team was getting started, we had a Request Information link in the main navigation and a button somewhere on the page that would lead the user to the form. The single biggest positive change we saw involved putting a form at the top of every page. For longer forms, we would break this form up into two or three steps, but having that first step in sight was a huge improvement, even if one click doesn’t seem like a lot of effort.

    Another important element is headings. Form headings should ask the user to do something. It’s one thing to label a form “Request Information”; it’s another to ask them to “Request Information Now.” Simply adding action words, like “now” or “today,” can change a description into an imperative action and improve conversion rates.

    With submit buttons, always take the opportunity to communicate value. The worst thing you can put on a submit button is the word “Submit.” We found that switching this button copy out with “Request Information” showed a significant improvement. Think about the implied direction of the interaction. “Submit” implies the user is giving something to us; “Request Information” implies we’re giving something to the user. The user is already apprehensive about handing over their information—communicate to them that they’re getting something out of the deal.

    Changing phrasing to be more personal to the user can also be very effective. One study showed that writing button copy in the first person—for instance, “Create My Account” versus “Create Your Account”—boosted click-through rates by 90%.

    Users today are fearful that their information will be used for nefarious purposes. Make it a point to reassure them that their data is safe. Our testing showed that the best way to do this is to add a link to the privacy policy (“Your information is secure!”) with a little lock icon right next to the submit button. Users will often skip right over a small text link, so that lock icon is essential—so essential, in fact, that it may be more important than the privacy policy itself. I’m somewhat ashamed to admit this, but I once forgot to create a page for the privacy policy linked to from a landing page, so that little lock icon linked out to a 404. I expected a small boost in conversions when I finally uploaded the privacy policy, but nope—nobody noticed. Reassurance is a powerful thing.

    Measure, measure, measure

    One of the worst things you can do is push out a creative change, assume it’s great, and move on to the next task. A/B testing is ideal and will allow you to test a creative change directly against the old creative, eliminating other variables like time, media coverage, and anything else you might not be thinking of. Creative changes should be applied methodically and scientifically—just because two or three changes together show an improvement in conversion rate doesn’t mean that one of them wouldn’t perform better alone.
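    A minimal bucketing sketch, not a full testing platform: assign each visitor to a variant once, persist the assignment, and apply exactly one creative change so the comparison stays clean (the element ID and copy are hypothetical):

    ```js
    // Assign a visitor to a variant once and keep it stable across visits.
    function getVariant() {
      let variant = localStorage.getItem('ab_variant');
      if (!variant) {
        variant = Math.random() < 0.5 ? 'control' : 'treatment';
        localStorage.setItem('ab_variant', variant);
      }
      return variant;
    }

    if (getVariant() === 'treatment') {
      // One change at a time, so any movement in conversion rate
      // can be attributed to this change alone.
      document.querySelector('#cta-button').textContent = 'Request Information';
    }
    ```

    Report the assigned variant alongside conversions in your analytics so the two rates can be compared directly.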

    Measuring tangible things like conversion rate not only helps your client or business, but can also give new purpose to your designs and creative decisions. It’s a lot easier to push for your creative decisions when you have hard data to back up why they’re the best choice for the client or project. Having this data on hand will give you more authority in dealing with clients or marketing folks, which is good for your creative and your career. If my time in the design world has taught me anything, it’s that, in the realm of creativity, certainty can be hard to come by. So, perhaps most importantly, objective measures of success give you, and your client, the reassurance that you’re doing the right thing.

  • Paint the Picture, Not the Frame: How Browsers Provide Everything Users Need

    Kip Williams, professor of psychological sciences at Purdue University, conducted a fascinating experiment called “cyberball.” In his experiment, a test subject and two other participants played a computer game of catch. At a predetermined time, the test subject was excluded from the game, forcing them to only observe as the clock ran down.

    From the cyberball game, three outlined figures playing catch. Player 1 is mid-throw to Player 3.

    Being excluded produced increases in self-reported levels of anger and sadness, as well as lowered levels of the four fundamental needs (belonging, self-esteem, control, and meaningful existence). The digital version of the experiment produced results that matched those of the original physical one, meaning that these feelings occurred regardless of context.

    After the game was concluded, the test subject was told that the other participants were robots, not other human participants. Interestingly, the reveal of automated competitors did not lessen the negative feelings reported. In fact, it increased feelings of anger, while also decreasing participants’ sense of willpower and/or self-regulation.

    In other words: people who feel they are rejected by a digital system will feel hurt and have their sense of autonomy reduced, even when they believe there isn’t another human directly responsible.

    So, what does this have to do with browsers?

    Every adjustment to the appearance and behavior of the features browsers let you manipulate is a roll of the dice, gambling on the delight of some at the expense of alienating others.

    When using a browser to navigate the web, there’s a lot of sameness, until there isn’t. Most of the time we’re hopping from page to page and site to site, clicking links, pressing buttons, watching videos, filling out forms, writing messages, etc. But every once in a while we stumble across something new and novel that makes us pause to figure out what’s going on.

    Every website and web app is its own self-contained experience, with its own ideas of how things should look and behave. Some are closer to others, but each one requires learning how to operate the interface to a certain degree.

    Some browsers can also have parts of their functionality and appearance altered, meaning that as with websites, there can be unexpected discrepancies. We’ll unpack some of the nuance behind some of these features, and more importantly, why most of them are better off left alone.

    Scroll-to-top

    All the major desktop browsers allow you to hit the Home key on the keyboard to jump to the top of the page. Some scrollbar implementations allow you to click on the top of the scrollbar area to do the same. Some browsers allow you to type Command+Up (macOS) / Ctrl+Up (Windows), as well. People who use assistive technology like screen readers can use things like banner landmarks to navigate the same way (provided they are correctly declared in the site’s HTML).
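    Declaring that landmark takes no special effort; a minimal sketch, where a top-level <header> (not nested inside <main> or <article>) is exposed to assistive technology as the “banner” region:

    ```html
    <body>
      <header>
        <!-- Exposed as the "banner" landmark: site name, logo, navigation -->
        <a href="/">Site name</a>
        <nav>…</nav>
      </header>
      <main>
        <!-- Page content -->
      </main>
    </body>
    ```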

    However, not every device has an easily discoverable way to invoke this functionality: many laptops don’t have a Home key on their keyboard. The tap-the-clock-to-jump-to-the-top functionality on iOS is difficult to discover, and can be surprising and frustrating if accidentally activated. You need specialized browser extensions to recreate screen reader landmark navigation techniques.

    One commonly implemented UI solution for longer pages is the scroll-to-top button. It’s often fixed to the bottom-right corner of the screen. Activating this control will take the user to the top of the page, regardless of how far down they’ve scrolled.

    If your site features a large amount of content per page, it may be worth investigating this UI pattern. Try looking at analytics and/or conducting user tests to see where and how often this feature is used. The caveat: if it’s used too often, it might be worth taking a long, hard look at your information architecture and content strategy.

    Three things I like about the scroll-to-top pattern are:

    • Its functionality is pretty obvious (especially if properly labeled).
    • Provided it is designed well, it can provide a decent-sized touch target in a thumb-friendly area. For motor control considerations, its touch target can be superior to narrow scroll or status bars, which can make for frustratingly small targets to hit.
    • It does not alter or remove existing scroll behavior, augmenting it instead. If somebody is used to one way of scrolling to the top, you’re not overriding it or interrupting it.

    If you’re implementing this sort of functionality, I have four requests to help make the experience work for everyone (I find the Smooth Scroll library to be a helpful starting place; a sketch follows the list):

    • Honor user requests for reduced motion. The dramatic scrolling effect of whipping from the bottom of the page to the top may be a vestibular trigger, a situation where the system that controls your body’s sense of physical position and orientation in the world is disrupted, causing things like headaches, nausea, vertigo, migraines, and hearing loss.
    • Ensure keyboard focus is moved to the top of the document, mirroring what occurs visually. Applying this practice will improve all users’ experiences. Otherwise, hitting Tab after scrolling to the top would send the user down to the first interactive element that follows where the focus had been before they activated the scroll button.
    • Ensure the button does not make other content unusable by obscuring it. Be sure to account for when the browser is in a zoomed-in state, not just in its default state.
    • Be mindful of other fixed-position elements. I’ve seen my fair share of websites that also have a chatbot or floating action button competing to live in the same space.
    A red chat icon overlaps with a corner of the scroll to top icon, obscuring a portion of the arrow.
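
    A minimal sketch covering the first two requests; the markup hooks are assumptions for illustration:

    ```js
    const button = document.querySelector('#scroll-to-top');
    const target = document.querySelector('main h1'); // top of the document

    button.addEventListener('click', () => {
      // Honor the user's reduced-motion preference: jump instead of gliding.
      const reduceMotion =
        window.matchMedia('(prefers-reduced-motion: reduce)').matches;
      window.scrollTo({ top: 0, behavior: reduceMotion ? 'auto' : 'smooth' });

      // Mirror the visual change for keyboard users: make the heading
      // programmatically focusable, then move focus so Tab continues
      // from the top of the page.
      target.setAttribute('tabindex', '-1');
      target.focus();
    });
    ```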

    Scrollbars

    If you’re old enough to remember, it was once considered fashionable to style your website scrollbars. Internet Explorer allowed this customization via a series of vendor-specific properties. At best, they looked great! If the designer and developer were both skilled and detail-oriented, you’d get something that looked like a natural extension of the rest of the website.
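    For historical flavor, those Internet Explorer properties looked roughly like this; the ::-webkit-scrollbar pseudo-elements are the closest modern equivalent (the color values are invented for illustration):

    ```css
    /* The old IE vendor properties (non-standard, long defunct) */
    body {
      scrollbar-face-color: #3d5a80;
      scrollbar-track-color: #e0fbfc;
      scrollbar-arrow-color: #ffffff;
    }

    /* The rough modern equivalent for WebKit-based browsers */
    ::-webkit-scrollbar { width: 12px; }
    ::-webkit-scrollbar-thumb { background: #3d5a80; }
    ::-webkit-scrollbar-track { background: #e0fbfc; }
    ```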

    However, the stakes for a quality design were pretty high: scrollbars are part of an application’s interface, not a website’s. In inclusive design, it’s part of what we call external consistency. External consistency is the idea that an object’s functionality is informed and reinforced by similar implementations elsewhere. It’s why you can flip a wall switch in most houses and be guaranteed the lights come on instead of flushing the toilet.

    While scrollbars have some minor visual differences between operating systems (and operating system versions), they’re consistent externally in function. Scrollbars are also consistent internally, in that every window and program on the OS that requires scrolling has the same scrollbar treatment.

    If you customize your website’s scrollbar colors, for less technologically literate people, yet another aspect of the interface has changed without warning or instruction on how to change it back. If the user is already confused about how things on the screen work, it’s one less familiar thing for them to cling to as stable and reliable.

    You might be rolling your eyes reading this, but I’d ask you to check out this incredible article by Jennifer Morrow instead. In it, she describes conducting a guerilla user test at a mall, only to have the session completely derailed when she discovers someone who has never used a computer before.

    What she discovers is as important as it is shocking. The gist of it is that some people (even those who have used a computer before) don’t understand the nuance of the various “layers” you navigate through to operate a computer: the hardware, the OS, the browser installed on the OS, the website the browser is displaying, the website’s modals and disclosure statements, etc. To them, the experience is flat.

    We should not expect these users to juggle this kind of cognitive overhead. These kinds of abstractions are crafted to be analogous to real-world objects, specifically so people can get what they want from a digital system without having to be programmers. Adding unnecessary complexity weakens these metaphors and gives users one less reference point to rely on.

    Remember the cyberball experiment. When a user is already in a distressed emotional state, our poorly-designed custom scrollbar might be the death-by-a-thousand-paper-cuts moment where they give up on trying to get what they want and reject the system entirely.

    While Morrow’s article was written in 2011, it’s just as relevant now as it was then. More and more people are using the internet globally, and more and more services integral to daily life are getting digitized. It’s up to us as responsible designers and developers to make sure that everyone, regardless of device, circumstance, or ability, feels welcome.

    In addition to unnecessarily abandoning external consistency, there is the issue of custom scrollbar styling potentially not having sufficient color contrast. Colors that are too light can create a situation where a person experiencing low-vision conditions won’t be able to perceive, and therefore operate, a website’s scrolling mechanism.

    This article won’t even begin to unpack the issues involved with custom implementations of scrollbars, where instead of theming the OS’s native scrollbars with CSS, one instead replaces them with a JavaScript solution. Trust me when I say I have yet to see one implemented in a way that could successfully and reliably recreate all features and functionality across all devices, OSes, browsers, and browsing modes.

    In my opinion? Don’t alter the default appearance of an OS’s scrollbars. Use that time to work on something else instead, say, checking for and fixing color contrast problems.

    Scrolling

    The main concern about altering scrolling behavior is one of consent: it’s taking an externally consistent, system-wide behavior and suddenly altering it without permission. The term scrolljacking has been coined to describe this practice. It is not to be confused with scrollytelling, a more considerate treatment of scrolling behavior that honors the OS’s scrolling settings.

    Altering the scrolling behavior on your website or web app can fly in the face of someone’s specific, expressed preferences. For some people, it’s simply an annoyance. For people with motor control concerns, it could make moving through a site difficult. In some extreme cases, the unannounced discrepancy between the amount of scrolling and the distance traveled can also be vestibular triggers. Another consideration is if your modified scrolling behavior accidentally locks out people who don’t use mice, touch, or trackpads to scroll.

    All in all, I think Robin Rendle said it best:

    Scrolljacking, as I shall now refer to it both sarcastically and honestly, is a failure of the web designer’s first objective; it attacks a standardised pattern and greedily assumes control over the user’s input.

    Highlighting

    Another OS feature we’re permitted to style in the browser is highlighted text. Much like scrollbars, this is an interface element that is shared by all apps on the OS, not just the browser.

    Breaking the external consistency of the OS’s highlighting color has a lot of the same concerns as styled scrollbars, namely altering the expected behavior of something that functions reliably everywhere else. It’s potentially disorienting and alienating, and may deny someone’s expressed preferences.

    Some people highlight text as they read. If your custom highlight style has a low contrast ratio between the highlighted text color and the highlighted text’s background color, the person reading your website or web app may be unable to perceive the text they’re highlighting. The effect will cause the text to seemingly disappear as they try to read.
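    In CSS this styling happens through the ::selection pseudo-element; if you use it at all, keep the two values far apart in lightness (the colors below are illustrative):

    ```css
    /* A high-contrast pairing: dark background, light text. A tone-on-tone
       pairing here can make selected text effectively invisible. */
    ::selection {
      background: #1a1a2e;
      color: #ffffff;
    }
    ```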

    Other people just may not care for your aesthetic sensibilities. Both macOS and Windows allow you to specify a custom highlight color. In a scenario where someone has deliberately set a preference other than the system default, a styled highlight color may override their stated specifications.

    For me, the potential risks far outweigh the vanity of a bespoke highlight style—better to just leave it be.

    Text resizing

    Lots of people change text size to suit their needs. And that’s a good thing. We want people to be able to read our content and act upon it, regardless of whatever circumstances they may be experiencing.

    For the problem of too-small text, some designers turn to text resizing widgets, a custom UI pattern that lets a person cycle through a number of preset CSS font-size values. Commonly found in places with heavy text content, text resizing widgets are often paired with complex, multicolumn designs. News sites are a common example.

    Before I dive into my concerns with text resizing widgets, I want to ask: if you find that your site needs a specialized widget to manage your text size, why not just take the simpler route and increase your base text size?

    Like many accessibility concerns, a request for a larger font size isn’t necessarily indicative of a permanent disability condition. It’s often circumstantial, such as a situation where you’re showing a website on your office’s crappy projector.

    Browsers allow users to change their preferred default font size, resizing text across websites accordingly. Browsers excel at handling this setting when you write CSS that takes advantage of unitless line-height values and relative font-size units.
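    A minimal sketch of that kind of CSS, where nothing overrides the user’s preferred default size and everything scales from it:

    ```css
    body {
      font-size: 1rem;   /* 1rem tracks the user's browser setting
                            (16px unless they've changed it) */
      line-height: 1.5;  /* unitless: recomputed from each element's
                            own font size, so spacing scales too */
    }

    h1 {
      font-size: 2rem;   /* proportions hold at any base size */
    }
    ```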

    Some designers may feel that granting this liberty to users somehow detracts from their intended branding. Good designers understand that there’s more to branding than just how something looks. It’s about implementing the initial design in the browser, then working with the browser’s capabilities to best serve the person using it. Even if things like the font size are adjusted, a strong brand will still shine through with the ease of your user flows, quality of your typography and palette, strength of your copywriting, etc.

    Unfortunately, custom browser text resizing widgets lack a universal approach. If you rely on browser text settings, it just works—consistently, with the same controls, gestures, and keyboard shortcuts, for every page on every website, even in less-than-ideal conditions. You don’t have to write and maintain extra code, test for regressions, or write copy instructing the user on where to find your site’s text resizing widget and how to use it.

    Behavioral consistency is incredibly important. Browser text resizing is applied to all text on the page proportionately every time the setting is changed. These settings are also retained for the next time you visit. Not every custom text resizing widget does this, nor will it resize all content to the degree stipulated by the Web Content Accessibility Guidelines.

    High-contrast themes

    When I say high-contrast themes, I’m not talking about things like a dark mode. I’m talking about a response to people reporting that they need to change your website or web app’s colors to be more visually accessible to them.

    Much like text resizing controls, themes that are designed to provide higher contrast color values are perplexing: if you’re taking the time to make one, why not just fix the insufficient contrast values in your regular CSS? Effectively managing themes in CSS is a complicated, resource-intensive affair, even under ideal situations.

    Most site-provided high-contrast themes are static in that the designer or developer made decisions about which color values to use, which can be a problem. Too much contrast has been known to be a trigger for things like migraines, as well as potentially making it difficult to focus for users with some forms of attention-deficit hyperactivity disorder (ADHD).

    The contrast conundrum leads us to a difficult thing to come to terms with when it comes to accessibility: what works for one person may actually inhibit another. Because of this, it’s important to make things open and interoperable. Leave ultimate control up to the end user so they may decide how to best interact with content.

    If you are going to follow through on providing this kind of feature, some advice: model it after the Windows High Contrast mode. It’s a specialized Windows feature that allows a person to force a high-contrast color palette onto all aspects of the OS’s UI, including anything the browser displays. It offers four themes out of the box but also allows a user to suit their individual needs by specifying their own colors.

    Your high contrast mode feature should do the same. Offer a range of themes with different palettes, and let the user pick colors that work best for them—it will guarantee that if your offerings fail, people still have the ability to self-select.

    Moving focus

    Keyboard focus is how people who rely on input such as keyboards, switch controls, voice inputs, eye tracking, and other forms of assistive technology navigate and operate digital interfaces. While you can do things like use the autofocus attribute to move keyboard focus to the first input on a page after it loads, it is not recommended.

    For people experiencing low- and no-vision conditions, it is equivalent to being abruptly and instantaneously moved to a new location. It’s a confusing and disorienting experience—there’s a reason why there’s a trope in sci-fi movies of people vomiting after being teleported for the first time.

    For people with motor control concerns, moving focus without their permission means they may be transported to a place where they didn’t intend to go. Digging themselves out of this location becomes annoying at best and effort-intensive at worst. Websites without heading elements or document landmarks to serve as navigational aids can worsen this effect.

    This is all about consent. Moving focus is fine so long as a person deliberately initiates an action that requires it (shifting focus to an opened modal, for example). I don’t come to your house and force you to click on things, so don’t move my keyboard focus unless I specifically ask you to.

    Let the browser handle keyboard focus. Provided you use semantic markup, browsers do this well.
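
    For the modal case mentioned above, a minimal sketch using the native <dialog> element, which moves focus into the dialog when it opens and, in current browsers, restores it on close (the IDs are hypothetical):

    ```js
    const dialog = document.querySelector('#signup-dialog');

    document.querySelector('#open-signup').addEventListener('click', () => {
      // Focus moves because the user asked for it, not because we
      // decided on their behalf.
      dialog.showModal();
    });
    ```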

    The clipboard and browser history

    The clipboard is sacred space. Don’t prevent people from copying things to it, and don’t append extra content to what they copy. The same goes for browser history and back and forward buttons. Don’t mess around with time travel, and just let the browser do its job.

    Wrapping up

    In the game part of cyberball, the fun comes from being able to participate with others, passing the ball back and forth. With the web, fun comes from being able to navigate through it. In both situations, fun stops when people get locked out, forced to watch passively from the sidelines.

    Fortunately, the web doesn’t have to be one long cyberball experiment. While altering the powerful, assistive technology-friendly features of browsers can enhance the experience for some users, it carries a great risk of alienating others if changes are made with ignorance about exactly how much will be affected.

    Remember that this is all in the service of what ultimately matters: creating robust experiences that allow people to successfully use your website or web app regardless of their ability or circumstance. Sometimes the best strategy is to let things be.

  • UX in the Age of Personalization

    If you listened to episode 180 of The Big Web Show, you heard two key themes: 1) personalization is now woven into much of the fabric of our digital technology, and 2) designers need to be much more involved in its creation and deployment. In my previous article we took a broad look at the first topic: the practice of harvesting user data to personalize web content, including the rewards (this website gets me!) and risks (creepy!). In this piece, we will take a more detailed look at the UX practitioner’s emerging role in personalization design: from influencing technology selection, to data modeling, to page-level implementation. And it’s high time we did.

    A call to arms

    Just as UX people took up the torch around content strategy years ago, there is a watershed moment quickly approaching for personalization strategy. Simply put, the technology in this space is far outpacing the design practice. For example, while “personalized” emails have been around forever (“Dear COOLIN, …”), it’s now estimated that some 45% of organizations [PDF] have attempted to personalize their homepage. If that scares you, it should: the same report indicated that fewer than a third think it’s actually working.

    A bar chart showing the most commonly personalized experiences (in order of highest ranking to lowest): Email content at 71%, Home page at 45%, Landing pages at 37%, Interior pages at 28%, Product detail pages at 27%, Blog at 20%, Navigation at 18%, Search at 17%, Pricing at 14%, and App screens at 13%.
    While good old “mail merge” personalization has been around forever, more organizations are now personalizing their website content. Source: Researchscape International survey of 300 marketing professionals from five countries, conducted February 22 to March 28, 2018.

    As Jeff MacIntyre points out, “personalization failures are typically design failures.” Indeed, many personalization programs are still driven primarily out of marketing and IT departments, a holdover from the legacy of the inbound, “creepy” targeted ad. Fixing that model will require the same paradigm shift we’ve used to tackle other challenges in our field: intentionally moving design “upstream,” in this case to technology selection, data collection, and page-level implementation.

    That’s where you come in. In fact, if you’re anything like me, you’ve been doing this, quietly, already. Here are just a few examples of UX-specific tasks I’ve completed on recent design projects that had personalization aspects:

    • aligning personalization to the core content strategy;
    • working with the marketing team to understand goals and objectives;
    • identifying user segments (personas) that may benefit from personalized content;
    • drafting personalization use cases;
    • assisting the technical team with product selection;
    • helping to define the user data model, including first- and third-party sources;
    • wireframing personalized components in the information architecture;
    • taking inventory of existing content to repurpose for personalization;
    • writing or editing new personalized copy;
    • working with the design team to create personalized images;
    • developing a personalization editorial calendar and governance model;
    • helping to set up and monitor results from a personalization pilot;
    • partnering with the analytics team to make iterative improvements;
    • being a voice for the personalization program’s ethical standards;
    • and monitoring customer feedback to make sure people aren’t freaking the f* out.

    Sound familiar? Many of these are simply variants on the same, user-centered tactics you’ve relied on for years. The difference now is that personalization creates a “third dimension” of complexity relative to audience and content. We’ll define that complexity further in two parts: technical design and information design. (We should note again that the focus of this article is personalizing web content, although many of the same principles also apply to email and native applications.)

    Part 1: Personalization technical design

    Influencing technology decisions

    When clients or internal stakeholders come to you with a desire to “do personalization,” the first thing to ask is what that means. As you’ve likely noticed, the technology landscape has now matured to the point where you can “personalize” a digital experience based on just about anything, from basic geolocation to complex machine learning algorithms. What’s more, such features are increasingly baked into your own CMS or readily available from third-party plugins (see the list below). So defining what personalization is—and isn’t—is a critical first step.

    To accomplish this, I suggest asking two questions: 1) what data can you ethically collect on your users? and 2) which tactics best complement this data? Some capabilities may already exist in your current systems; some you may need to build into your future technology roadmap. The following is by no means an exhaustive list, but it highlights a few of the popular tactics out there today, along with tools that support them:

    • Geolocation: Personalizing based on the physical location of the user, via a geolocation-enabled device or a web browser IP address (which can triangulate your position based on nearby wifi devices).
      Examples: If I’m in Washington, DC, show me promotions for DC. If I’m in Paris, show me promotions for Paris, in French.
      Sample Tools: MaxMind, HTML5 API

    • Quizzes and Profile Info: A simple, cost-effective way to gather first-party user data by asking basic questions to help assign someone to a segment. Often done as an overlay “intercept” when the user arrives, which can then be modified based on a cookied profile. Generally must be exceptionally brief to be effective.
      Examples: Are you interested in our service for home use or business use? Are you in the market to buy or sell a house?

    • Campaign Source: One of the most popular methods of personalization, it directs a user to a customized landing page based on incoming campaign data. Can be used for everything from passing a unique discount code to personalizing content on the entire site.
      Examples: Customize the landing page based on incoming email campaigns, social media campaigns, and paid search campaigns.

    • Clicks or Pages Viewed: A slightly more advanced approach that personalizes based on behavior; common in ecommerce.
      Examples: Products you previously viewed; suggested content you’ve recently been looking at.
      Sample Tools: Dynamic Yield, Optimizely

    • SIC and NAICS Codes: Standard Industrial Classification (SIC) and North American Industry Classification System (NAICS) codes classify industries using a universal four-digit code, e.g., Manufacturing 2000–3999. Helpful for determining who is visiting you from a business location, based on incoming IP address.
      Examples: Show me a different message if I work in the fashion industry vs. hog farming.
      Sample Tools: Marketo, Oracle (BlueKai), Demandbase

    • Geofencing: Contextual personalization within a “virtual perimeter.” Establishes a fixed geographical boundary based on your device location, typically through RFID or GPS. Your device can then take an action when you enter or leave the location.
      Examples: Show me my boarding pass when I’m at the airport. Remind me about unused gift cards when I enter the store.
      Sample Tools: Simpli.fi, Thinknear, Google Geofencing API

    • Behavioral Profiling: Add a user to a segment based on similar users who fall into that segment. Often combined with machine learning to identify new segments that humans wouldn’t be able to predict.
      Examples: Sitecore pattern cards, e.g., impulse purchaser, buys in bulk, bargain hunter, expedites shipping.

    • Machine Learning: Identify patterns across large sets of data (often across channels) to better predict what a user will want. In theory, improves over time as algorithms “learn” from thousands of interactions. (Obvious downside: your site will need to support thousands of interactions.)
      Sample Tools: Azure Machine Learning Studio, BloomReach (Hippo), Sitecore (xConnect, Cortex), Adobe Sensei.
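
    As a concrete sketch of the first tactic, using the browser’s built-in geolocation API (the prompt only fires with the user’s consent, and the promotion helpers are hypothetical):

    ```js
    navigator.geolocation.getCurrentPosition(
      (position) => {
        const { latitude, longitude } = position.coords;
        showLocalPromotions(latitude, longitude); // hypothetical helper
      },
      () => {
        showDefaultPromotions(); // fall back if the user declines
      }
    );
    ```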

    As you can see, the best tactic(s) can vary dramatically based on your audience and how they interact with you. For example, if you’re a high-volume, B2C ecommerce site, you may have enough click-stream data to support useful personalized product recommendations. Conversely, if you’re a B2B business with a qualified lead model and fewer unique visitors, you may be better served by third-party data to help you tailor your message based on industry type (NAICS code) or geography. To help illustrate this idea, let’s do a quick mapping of tactics relative to visitor volume and session time:

    A quadrant chart with Number of Visitors for the Y-Axis and Session Time for the X-Axis. In the top left quadrant (titled Advanced Segmentation) lie Geo-Fencing and Clicks or Pages Viewed. Directly between the top left and top right quadrant lies Behavioral Profiling. In the top right quadrant (titled Big Data 1-to-1) lies Machine Learning. In the bottom left quadrant (titled Basic Segmentation) lie Campaign Source, SIC/NAICS Codes, and Geo-Location. And finally, in the bottom right quadrant (titled Basic Self Selection) lies Quizzes and Profile Info.
    To find your personalization “sweet spot,” consider your audience in terms of volume (number of visits) and average attention span (time on site).

    The good news here is that you needn’t have a massive data platform in place; you can begin to build audience profiles simply by asking users to self-identify via quizzes or profile info. But in either scenario, your goal is the same: help guide the technology decision toward a personalization approach that provides actual value to your audience, not “because we can.”
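    A minimal self-identification sketch: one question, stored locally, read back on later visits (the form markup and segment names are hypothetical):

    ```js
    document.querySelector('#audience-quiz').addEventListener('submit', (event) => {
      event.preventDefault();
      // e.g., a radio group named "audience" with values "home" / "business"
      const segment = event.target.elements.audience.value;
      localStorage.setItem('audience_segment', segment);
    });

    // Later, anywhere on the site:
    const segment = localStorage.getItem('audience_segment') || 'default';
    ```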

    Part 2: Personalization information design

    Personalization deliverables

    Once you have a sense of the technical possibilities, it’s time to determine how the personalized experience will look. Let’s pretend we’re designing for a venture several of you inquired about in my previous article: Reindeer Hugs International. As the name implies, this is a nonprofit that provides hugs to reindeer. RHI recently set new business goals and wants to personalize the website to help achieve them.

    The very reputable-looking logo of Reindeer Hugs International. It seems legit.
    Seems reputable.

    To address this goal, we propose four UX-specific deliverables:

    1. segments worksheet;
    2. campaigns worksheet;
    3. personalization wireframes;
    4. and personalization copy deck.

    Following the technical model we discussed earlier, the first thing we do is define our audience based on existing site interaction patterns. We discover that RHI doesn’t get a ton of organic traffic, but they do have a reasonably active set of authenticated users (existing members) as well as some paid social media campaigns. Working with the marketing team, we propose personalizing the site for three high-potential segments, as follows:

    Segments worksheet

    • Current Members
      How to Identify: Logged in or made a guest contribution (track via cookie)
      Personalization Goal: Improve engagement with current members by 10%
      Messaging Strategy: You’re a hugging rock star, but you can hug it out even more.

    • Non-member Males
      How to Identify: Inbound Facebook and Instagram campaigns
      Personalization Goal: Improve conversion with non-member males age 25–34 by 5%
      Messaging Strategy: Make reindeer hugging manly again.

    • Non-member Parents
      How to Identify: Inbound Facebook and Instagram campaigns
      Personalization Goal: Improve conversion with non-member parents age 31–49 by 5%
      Messaging Strategy: Reindeer hugging is great for the kids.

    Next, let’s determine the specific value we could add for these segments when they come to the site. To do this, we’ll revisit a model that we looked at previously for the four personalization content types. This will help us organize the collective content or “campaign” we show each segment based on a specific personalization goal:

    The four contrasting tasks at hand: Alert, Make Easier, Cross-Sell, and Enrich
    A Personalization Content Model showing four flavors of personalized content.

    For example, current members who are logged in might benefit from a “Make Easier” campaign of links to members-only content. Conversely, each of our three segments could benefit from a personalized “Cross-Sell” campaign to help generate awareness. Let’s capture our ideas like this:

    Campaigns worksheet

    • Alert (all segments): Geolocation Banner. Hugs needed in your area (displays to any user with location data).
    • Make Easier (Current Members): Links for members who are logged in, such as to profile information, a member directory, and a reindeer friends catalog.
    • Make Easier (Non-members): Non-Member CTA. In the non-member experience, the member links are replaced by a call to action.
    • Cross-Sell (all segments): Capital Campaign. Generate awareness by audience (minimum three distinct messages).
    • Enrich (Current Members): Current Member Blog. Invest in creating original, hug-provoking content to further our brand.
    • Enrich (Non-member Males Age 25–34 and Non-member Parents Age 28–39): Thought Leadership. Demonstrate that we are the definitive source for reindeer hugs.

    Personalization wireframes

    Now let’s decide where on the site we want to run these personalized campaigns. This isn’t too dissimilar from the work you already do around templates and components, with the addition that we can now have personalized zones. You can think of these as blocks where the CMS (or third-party plugin) will be running a series of calculations to determine the user segment in real-time (or based on a previously cached profile). To get the most coverage, these are typically dropped in at the template level. Here are examples for our home page template and interior page template:

    Two separate wireframes with corresponding colored boxes showing which portions of the page relate to each type of personalization.
    Showing component-level “zoning” on homepage and landing page templates. The colors correspond to the personalization content type.

    Everything in white is the non-personalized, or “static,” content, which never changes, regardless of who you are. The personalized zones themselves (color-coded based on our content model) will also have an underlying default or canonical content set that appears if the system doesn’t get a personalized match. (Note: this is also the version of the content that is typically indexed by search engines.)
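    A minimal sketch of how such a zone might resolve client-side, with the canonical content as the fallback (in practice the CMS or plugin does this; the segment names and campaign copy are hypothetical):

    ```js
    const campaigns = {
      'current-member':  { headline: 'Take Your Hugs to the Next Level' },
      'non-member-male': { headline: 'Real Men Hug Reindeer' },
    };
    // Default (canonical) content: shown when no segment matches,
    // and typically what search engines index.
    const defaultContent = { headline: 'Hug a Reindeer Today' };

    const segment = localStorage.getItem('audience_segment');
    const content = campaigns[segment] || defaultContent;
    document.querySelector('#cross-sell-zone h2').textContent = content.headline;
    ```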

    As you can see, an important rule of thumb is to personalize around the main content, not the entire page. There are a variety of reasons for this, including the risk of getting the audience wrong, effects on search indexing, and what’s known as the infinite content problem, i.e., can you realistically create content for every single audience on every single component? (Hint: no.)

    OK, we’re getting close! Finally, let’s look at what specifically we want the system to show in these slots. Based on our campaigns worksheet, we know how many permutations of content we need. We sit down with the creative team to design our targeted messages, including the copy, images, and calls to action. Here’s what the capital campaign (the blue zone) might look like for our three audiences:

    Personalization copy deck

    Reindeer Hugs International: Capital Campaign (Cross-Sell)
    Message A: Current Member
      Headline: Take Your Hugs to the Next Level
      Copy: You’re a hugging expert. But did you know you could hug two reindeers at once?
      Primary CTA: Sign up for our Two-for-One Hugs
      Secondary CTA: Learn More
      Asset: A young woman hugging a very handsome reindeer (source: Current-Member.jpg; full-size render: 900x450; thumbnail render: 300x200)

    Message B: Real Men Hug
      Headline: Real Men Hug Reindeer
      Copy: Are you a real man?
      Primary CTA: Prove It
      Secondary CTA: [None]
      Asset: A bearded man hugging another handsome reindeer (source: Man-Hug.jpg; full-size render: 900x450; thumbnail render: 300x200)

    Message C: Parents with Young Kids
      Headline: Looking for a fun activity to do with the kids?
      Copy: Reindeer hugs are 100% kid-friendly and 200% environmentally-friendly.
      Primary CTA: Shop Our Family Plan
      Secondary CTA: Learn More
      Asset: A young child happily hugging a cute, unthreatening reindeer (source: Parents-Kids.jpg; full-size render: 900x450; thumbnail render: 300x200)

    That’s a pretty good start. We would want to follow a similar approach to detail our other three content campaigns, including alerts (e.g., hugs needed in your area), make easier (e.g., member shortcuts), and enrichment content (e.g., blog articles on latest reindeer fashions). When all the campaigns are up and running, we might expect the homepage to look something like this when seen by two different audiences, simultaneously, in real-time, in different browser sessions:

    Two more detailed wireframes that show what the home page might look like. On the left, one block has member links and info and another section has a members-only blog post. On the right, one block has a CTA on benefits that members get and a more general blog post.
    Wireframes illustrating the anticipated homepage delivery to two distinct audiences: Current Member (left) and Non-Member Male 25–34 (right). If the system did not get an audience match, a default or non-personalized set of content would be shown.

    Part 3: Advanced personalization techniques

    Digital Experience Platforms

    Of course, all of that work was fairly manual. If you are lucky enough to be working with an advanced DMP (Data Management Platform) or integrated DXP (Digital Experience Platform), then you have even more possibilities at your disposal. For example, machine learning and behavior profiling can help you discover segments over time that you might never have dreamed of (the study we referenced earlier showed that 26% of marketing programs have tried some form of algorithmic one-to-one approach; 68% still use rules-based targeting to segments). This can be enhanced via parametric scoring, where acting on multiple data inputs can help you create blends of audience types (in our example, a thirty-three-year-old dad might get 60 percent Parent and 40 percent Real Man … or whatever). Likewise, on the content side, content scoring can help you deliver more nuanced content. (For example, we might tag an article with 20 percent Reindeer Advocacy and 80 percent Hug Best Practices.) Platforms like Sitecore can even illustrate these metrics, like in this example of a pattern card:

    Examples of a hexagonally shaped behavior diagram with the following personality traits at each corner clockwise from the top left: research, impulse purchase, returns merchandise, expedites shipping, bargain hunting, and buys in bulk.
    The diagram at left shows how a particular user scores (some combination of research and returns merchandise). This most closely correlates to the “Neurotic Shopper” card, so we might show this user content on our free-returns policy. Source: The Berndt Group.
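
    To make the blending idea concrete, a toy sketch of parametric scoring, with the weights borrowed from the hypothetical examples above:

    ```js
    const userProfile = { parent: 0.6, realMan: 0.4 };

    const articles = [
      { title: 'Family Hug Outings',   affinity: { parent: 0.9, realMan: 0.1 } },
      { title: 'Extreme Hug Workouts', affinity: { parent: 0.2, realMan: 0.8 } },
    ];

    // Weighted match between the user's blend and each item's affinities.
    function score(profile, affinity) {
      return Object.keys(profile).reduce(
        (sum, key) => sum + profile[key] * (affinity[key] || 0), 0);
    }

    const ranked = [...articles].sort(
      (a, b) => score(userProfile, b.affinity) - score(userProfile, a.affinity));
    // ranked[0] is 'Family Hug Outings' (0.58) over 'Extreme Hug Workouts' (0.44)
    ```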

    Cult of the complex

    While all of that is super cool, even the most tech-savvy among us will benefit from starting out “simple,” lest you fall prey to the cult of the complex. The manual process of identifying your target audience and use cases, for example, is foundational to building an extensible personalization program, regardless of your tech stack. At a minimum, this approach will help you get buy-in from your team and organization vs. just telling everyone the site will be personalized in a “black box” somewhere. And even with the best-in-class products, I have yet to find seamless “one-click” personalization, where the system somehow magically does everything from finding audiences to pumping out content, all in real time. We’ll get there one day, perhaps.

    But, in the meantime, it’s up to you.

  • Conversations with Robots: Voice, Smart Agents & the Case for Structured Content

    In late 2016, Gartner predicted that 30 percent of web browsing sessions would be done without a screen by 2020. Earlier the same year, Comscore had predicted that half of all searches would be voice searches by 2020. Though there’s recent evidence to suggest that the 2020 picture may be more complicated than these broad-strokes projections imply, we’re already seeing the impact that voice search, artificial intelligence, and smart software agents like Alexa and Google Assistant are making on the way information is found and consumed on the web.

    In addition to the indexing function that traditional search engines perform, smart agents and AI-powered search algorithms are now bringing into the mainstream two additional modes of accessing information: aggregation and inference. As a result, design efforts that focus on creating visually effective pages are no longer sufficient to ensure the integrity or accuracy of content published on the web. Rather, by focusing on providing access to information in a structured, systematic way that is legible to both humans and machines, content publishers can ensure that their content is both accessible and accurate in these new contexts, whether or not they’re producing chatbots or tapping into AI directly. In this article, we’ll look at the forms and impact of structured content, and we’ll close with a set of resources that can help you get started with a structured content approach to information design.

    The role of structured content

    In their recent book, Designing Connected Content, Carrie Hane and Mike Atherton define structured content as content that is “planned, developed, and connected outside an interface so that it’s ready for any interface.” A structured content design approach frames content resources—like articles, recipes, product descriptions, how-tos, profiles, etc.—not as pages to be found and read, but as packages composed of small chunks of content data that all relate to one another in meaningful ways.

    In a structured content design process, the relationships between content chunks are explicitly defined and described. This makes both the content chunks and the relationships between them legible to algorithms. Algorithms can then interpret a content package as the “page” I’m looking for—or remix and adapt that same content to give me a list of instructions, the number of stars on a review, the amount of time left until an office closes, and any number of other concise answers to specific questions.

    Structured content is already a mainstay of many types of information on the web. Recipe listings, for instance, have been based on structured content for years. When I search, for example, “bouillabaisse recipe” on Google, I’m provided with a standard list of links to recipes, as well as an overview of recipe steps, an image, and a set of tags describing one example recipe:

    Google search results page for a bouillabaisse recipe including an image, numbered directions, and tags.
    A “featured snippet” for allrecipes.com on the Google results page.
    Google Structured Data Testing tool showing the markup for a bouillabaisse recipe website on the left half of the screen and the structured data attributes and values for structured content on the right half of the screen.
    The same allrecipes.com page viewed in Google’s Structured Data Testing Tool. The pane on the right shows the machine-readable values.

    This “featured snippet” view is possible because the content publisher, allrecipes.com, has broken this recipe into the smallest meaningful chunks appropriate for this subject matter and audience, and then expressed information about those chunks and the relationships between them in a machine-readable way. In this example, allrecipes.com has used both semantic HTML and linked data to make this content not merely a page, but also legible, accessible data that can be accurately interpreted, adapted, and remixed by algorithms and smart agents. Let’s look at each of these elements in turn to see how they work together across indexing, aggregation, and inference contexts.
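
    While every publisher's markup differs, the linked data behind a snippet like this is often expressed as schema.org JSON-LD embedded in the page. Here is a simplified, hypothetical sketch (the values are invented, not allrecipes.com's actual markup):

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Recipe",
      "name": "Bouillabaisse",
      "image": "https://example.com/bouillabaisse.jpg",
      "recipeIngredient": ["fish stock", "saffron", "fennel", "white fish"],
      "recipeInstructions": [
        { "@type": "HowToStep", "text": "Make the saffron broth." },
        { "@type": "HowToStep", "text": "Add the fish and simmer." }
      ]
    }
    </script>

    Each property is a machine-readable chunk: an agent can read out the steps, count them, or surface the image without parsing the visual page at all.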

    Software agent search and semantic HTML

    Semantic HTML is markup that communicates information about the meaningful relationships between document elements, as opposed to simply describing how they should look on screen. Semantic elements such as heading tags and list tags, for instance, indicate that the text they enclose is a heading (<h1>) for the set of list items (<li>) in the ordered list (<ol>) that follows.

    A combined HTML code editor and preview window showing markup and results for heading, ordered list, and list item HTML tags.
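
    Reduced to code, the example above amounts to markup like this (a minimal reconstruction of the figure):

    <h1>How to Make Bouillabaisse</h1>
    <ol>
      <li>Make the saffron broth.</li>
      <li>Add the fish and simmer.</li>
    </ol>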

    HTML structured in this way is both presentational and semantic because people know what headings and lists look like and mean, and algorithms can recognize them as elements with defined, interpretable relationships.

    HTML markup that focuses only on the presentational aspects of a “page” may look perfectly fine to a human reader but be completely illegible to an algorithm. Take, for example, the City of Boston website, redesigned a few years ago in collaboration with top-tier design and development partners. If I want to find information about how to pay a parking ticket, a link from the home page takes me directly to the “How to Pay a Parking Ticket” screen (scrolled to show detail):

    The City of Boston website's “How to Pay a Parking Ticket” page, showing a tabbed view of ways to pay and instructions for the first of those ways, paying online.

    As a human reading this page, I easily understand what my options are for paying: I can pay online, in person, by mail, or over the phone. If I ask Google Assistant how to pay a parking ticket in Boston, however, things get a bit confusing:

    Google Assistant app on iPhone with the results of a “how do I pay a parking ticket in Boston” query, showing results only weakly related to the intended content.

    None of the links provided in the Google Assistant results take me directly to the “How to Pay a Parking Ticket” page, nor do the descriptions clearly let me know I’m on the right track. (I didn’t ask about requesting a hearing.) This is because the content on the City of Boston parking ticket page is styled to communicate content relationships visually to human readers but is not structured semantically in a way that also communicates those relationships to inquisitive algorithms.

    The City of Seattle’s “Pay My Ticket” page, though it lacks the polished visual style of Boston’s site, also communicates parking ticket payment options clearly to human visitors:

    The City of Seattle website‘s “Pay My Ticket” page, showing four methods to pay a parking ticket in a simple, all-text layout.

    The equivalent Google Assistant search, however, offers a much more helpful result than we see with Boston. In this case, the Google Assistant result links directly to the “Pay My Ticket” page and also lists several ways I can pay my ticket: online, by mail, and in person.

    Google Assistant app on iPhone with the results of a “how do I pay a parking ticket in Seattle” query, showing nearly the same results as on the desktop web page referenced above.

    Despite the visual simplicity of the City of Seattle parking ticket page, it more effectively ensures the integrity of its content across contexts because it’s composed of structured content that is marked up semantically. “Pay My Ticket” is a level-one heading (<h1>), and each of the options below it is a level-two heading (<h2>), which indicates that it is subordinate to the level-one element.

    The City of Seattle website’s “Pay My Ticket” page, with the HTML heading elements outlined and labeled for illustration.

    These elements, when designed well, communicate information hierarchy and relationships visually to readers, and semantically to algorithms. This structure allows Google Assistant to reasonably surmise that the text in these <h2> headings represents payment options under the <h1> heading “Pay My Ticket.”

    While this use of semantic HTML offers distinct advantages over the “page display” styling we saw on the City of Boston’s site, the Seattle page also shows a weakness that is typical of manual approaches to semantic HTML. You’ll notice that, in the Google Assistant results, the “Pay by Phone” option we saw on the web page was not listed. If we look at the markup of this page, we can see that while the three options found by Google Assistant are wrapped in both <strong> and <h2> tags, “Pay by Phone” is only marked up with an <h2>. This irregularity in semantic structure may be what’s causing Google Assistant to omit this option from its results.

    The City of Seattle website’s 'Pay My Ticket' page, with two HTML heading elements outlined and labeled for illustration, and an open inspector panel, where we can see that the headings look the same to viewers but are marked up differently in the code.
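
    In simplified form, the inconsistency looks something like this (the option text is paraphrased from the page, and the exact markup is an assumption based on the inspector view):

    <!-- The three options Google Assistant surfaced: -->
    <h2><strong>Pay Online</strong></h2>
    <h2><strong>Pay by Mail</strong></h2>
    <h2><strong>Pay in Person</strong></h2>

    <!-- The option it omitted, marked up without the inner <strong>: -->
    <h2>Pay by Phone</h2>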

    Although each of these elements would look the same to a sighted human creating this page, the machine interpreting it reads a difference. While WYSIWYG text entry fields can theoretically support semantic HTML, in practice they all too often fall prey to the idiosyncrasies of even the most well-intentioned content authors. By making meaningful content structure a core element of a site’s content management system, organizations can create semantically correct HTML for every element, every time. This is also the foundation that makes it possible to capitalize on the rich relationship descriptions afforded by linked data.

    Linked data and content aggregation

    In addition to finding and excerpting information, such as recipe steps or parking ticket payment options, search and software agent algorithms also now aggregate content from multiple sources by using linked data.

    In its most basic form, linked data is “a set of best practices for connecting structured data on the web.” Linked data extends the basic capabilities of semantic HTML by describing not only what kind of thing a page element is (“Pay My Ticket” is an <h1>), but also the real-world concept that thing represents: this <h1> represents a “pay action,” which inherits the structural characteristics of “trade actions” (the exchange of goods and services for money) and “actions” (activities carried out by an agent upon an object). Linked data creates a richer, more nuanced description of the relationship between page elements, and it provides the structural and conceptual information that algorithms need to meaningfully bring data together from disparate sources.
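
    In schema.org terms, for example, that “pay action” concept can be expressed as JSON-LD. The following is a hypothetical sketch (the property values are invented) using the schema.org PayAction type, which inherits from TradeAction and Action as described above:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "PayAction",
      "agent": { "@type": "Person", "name": "Jane Citizen" },
      "recipient": { "@type": "GovernmentOrganization", "name": "City of Seattle" },
      "object": { "@type": "Invoice", "description": "Parking ticket" }
    }
    </script>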

    Say, for example, that I want to gather more information about two recommendations I’ve been given for orthopedic surgeons. A search for a first recommendation, Scott Ruhlman, MD, brings up a set of links as well as a Knowledge Graph info box containing a photo, location, hours, phone number, and reviews from the web.

    Google search results page for Scott Ruhlman, MD, showing a list of standard links and an info box with an image, a map, ratings, an address, and reviews information.

    If we run Dr. Ruhlman’s Swedish Hospital profile page through Google’s Structured Data Testing Tool, we can see that content about him is structured as small, discrete elements, each of which is marked up with descriptive types and attributes that communicate both the meaning of those attributes’ values and the way they fit together as a whole—all in a machine-readable format.

    Google Structured Data Testing tool, showing the markup for Dr. Ruhlman's profile page on the left half of the screen, and the structured data attributes and values for the structured content on that page on the right half of the screen.

    In this example, Dr. Ruhlman’s profile is marked up with microdata based on the schema.org vocabulary. Schema.org is a collaborative effort backed by Google, Yahoo, Bing, and Yandex that aims to create a common language for digital resources on the web. This structured content foundation provides the semantic base on which additional content relationships can be built. The Knowledge Graph info box, for instance, includes Google reviews, which are not part of Dr. Ruhlman’s profile, but which have been aggregated into this overview. The overview also includes an interactive map, made possible because Dr. Ruhlman’s office location is machine-readable.
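
    Stripped to its essentials, microdata of this kind looks roughly like the following. This is a simplified, hypothetical sketch, not swedish.org's actual markup; the itemtype and itemprop attributes shown are standard schema.org vocabulary:

    <div itemscope itemtype="https://schema.org/Physician">
      <h1 itemprop="name">Scott Ruhlman, MD</h1>
      <span itemprop="medicalSpecialty">Orthopedic Surgery</span>
      <div itemprop="address" itemscope itemtype="https://schema.org/PostalAddress">
        <span itemprop="addressLocality">Seattle</span>,
        <span itemprop="addressRegion">WA</span>
      </div>
      <span itemprop="telephone">+1-206-555-0100</span> <!-- invented number -->
    </div>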

    Google search results info box for Dr. Ruhlman, showing a photo; a map; ratings; an address; reviews; buttons to ask a question, leave a review, and add a photo; and a list of other people searched for.

    The search for a second recommendation, Stacey Donion, MD, provides a very different experience. Like the City of Boston site above, Dr. Donion’s profile on the Kaiser Permanente website is perfectly intelligible to a sighted human reader. But because its markup is entirely presentational, its content is virtually invisible to software agents.

    Google search results page for Dr. Donion, showing a list of standard links for Dr. Donion, and a 'Did you mean: Dr Stacy Donlon MD' link at the top. There is a Google info box, as with the previous search results page example. But in this case the box does not display information about the doctor we searched for, Dr. Donion, but rather for 'Kaiser Permanente Orthopedics: Morris Joseph MD.'

    In this example, we can see that Google is able to find plenty of links to Dr. Donion in its standard index results, but it isn’t able to “understand” the information about those sources well enough to present an aggregated result. In this case, the Knowledge Graph knows Dr. Donion is a Kaiser Permanente physician, but it pulls in the wrong location and the wrong physician’s name in its attempt to build a Knowledge Graph display.

    You’ll also notice that while Dr. Stacey Donion is an exact match in all of the listed search results—which are numerous enough to fill the first results page—we’re shown a “did you mean” link for a different doctor. Stacy Donlon, MD, is a neurologist who practices at MultiCare Neuroscience Center, which is not affiliated with Kaiser Permanente. MultiCare does, however, provide semantic and linked data-rich profiles for its physicians.

    Voice queries and content inference

    The increasing prevalence of voice as a mode of access to information makes providing structured, machine-intelligible content all the more important. Voice and smart software agents are not just freeing users from their keyboards; they’re changing user behavior. According to LSA Insider, there are several important differences between voice queries and typed queries. Voice queries tend to be:

    • longer;
    • more likely to ask who, what, and where;
    • more conversational;
    • and more specific.

    In order to tailor results to these more specifically formulated queries, software agents have begun inferring intent and then using the linked data at their disposal to assemble a targeted, concise response. If I ask Google Assistant what time Dr. Ruhlman’s office closes, for instance, it responds, “Dr. Ruhlman’s office closes at 5 p.m.,” and displays this result:

    Google Assistant app on iPhone with the results of a “what time does dr. ruhlman office close” query. The results displayed include a card with “8:30AM–5:00PM” and the label, “Dr. Ruhlman Scott MD, Tuesday hours,” as well as links to call the office, search on Google, get directions, and visit a website. Additionally, there are four buttons labeled with the words “directions,” “phone number,” and “address,” and a thumbs-up emoji.

    These results are not only aggregated from disparate sources, but are interpreted and remixed to provide a customized response to my specific question. Getting directions, placing a phone call, and accessing Dr. Ruhlman’s profile page on swedish.org are all at the tips of my fingers.

    When I ask Google Assistant what time Dr. Donion’s office closes, the result is not only less helpful but actually points me in the wrong direction. Instead of a targeted selection of focused actions to follow up on my query, I’m presented with the hours of operation and contact information for MultiCare Neuroscience Center.

    Google Assistant app on iPhone with the results of a “what time does Doc Dr Stacey donion office close” query. The results displayed include a card with “8AM–5PM” and the label “MultiCare Neuroscience Center, Monday hours,” as well as links to call the office, search on Google, get directions, or visit a website.

    MultiCare Neuroscience Center, you’ll recall, is where Dr. Donlon—the neurologist Google thinks I may be looking for, not the orthopedic surgeon I’m actually looking for—practices. Dr. Donlon’s profile page, much like Dr. Ruhlman’s, is semantically structured and marked up with linked data.

    To be fair, subsequent trials of this search did produce the generic (and partially incorrect) practice location for Dr. Donion (“Kaiser Permanente Orthopedics: Morris Joseph MD”). It is possible that through repeated exposure to the search term “Dr. Stacey Donion,” Google Assistant fine-tuned the responses it provided. The initial result, however, suggests that smart agents may be at least partially susceptible to the same availability heuristic that affects humans, wherein the information that is easiest to recall often seems the most correct.

    There’s not enough evidence in this small sample to support a broad claim that algorithms have “cognitive” bias, but even when we allow for potentially confounding variables, we can see the compounding problems we risk by ignoring structured content. “Donlon,” for example, may well be a more common name than “Donion” and may be easily mistyped on a QWERTY keyboard. Regardless, the Kaiser Permanente result we’re given above for Dr. Donion is for the wrong physician. Furthermore, in the Google Assistant voice search, the interaction format doesn’t verify whether we meant Dr. Donlon; it just provides us with her facility’s contact information. In these cases, providing clear, machine-readable content can only work to our advantage.

    The business case for structured content design

    In 2012, content strategist Karen McGrane wrote that “you don’t get to decide which platform or device your customers use to access your content: they do.”

    This statement was intended to help designers, strategists, and businesses prepare for the imminent rise of mobile. It continues to ring true for the era of linked data. With the growing prevalence of smart assistants and voice-based queries, an organization’s website is less and less likely to be a potential visitor’s first encounter with rich content. In many cases—such as finding location information, hours, phone numbers, and ratings—this pre-visit engagement may be a user’s only interaction with an information source.

    These kinds of quick interactions, however, are only one small piece of a much larger issue: linked data is increasingly key to maintaining the integrity of content online. The organizations I’ve used as examples, like the hospitals, government agencies, and colleges I’ve consulted with for years, don’t measure the success of their communications efforts in page views or ad clicks. Success for them means connecting patients, constituents, and community members with services and accurate information about the organization, wherever that information might be found. This communication-based definition of success readily applies to virtually any type of organization working to further its business goals on the web.

    The model of building pages and then expecting users to discover and parse those pages to answer questions, though time-tested in the pre-voice era, is quickly becoming insufficient for effective communication. It precludes organizations from participating in emergent patterns of information seeking and discovery. And—as we saw in the case of searching for information about physicians—it may lead software agents to make inferences based on insufficient or erroneous information, potentially routing customers to competitors who communicate more effectively.

    By communicating clearly in a digital context that now includes aggregation and inference, organizations are more effectively able to speak to their users where users actually are, be it on a website, a search engine results page, or a voice-controlled digital assistant. They are also able to maintain greater control over the accuracy of their messages by ensuring that the correct content can be found and communicated across contexts.

    Getting started: who and how

    Design practices that build bridges between user needs and technology requirements to meet business goals are crucial to making this vision a reality. Information architects, content strategists, developers, and experience designers all have a role to play in designing and delivering effective structured content solutions.

    Practitioners from across the design community have shared a wealth of resources in recent years on creating content systems that work for humans and algorithms alike. To learn more about implementing a structured content approach for your organization, these books and articles are a great place to start:

  • Taming Data with JavaScript

    I love data. I also love JavaScript. Yet, data and client-side JavaScript are often considered mutually exclusive. The industry typically sees data processing and aggregation as a back-end function, while JavaScript is just for displaying the pre-aggregated data. Bandwidth and processing time are seen as huge bottlenecks for dealing with data on the client side. And, for the most part, I agree. But there are situations where processing data in the browser makes perfect sense. In those use cases, how can we be successful?

    Think about the data

    Working with data in JavaScript requires both complete data and an understanding of the tools available without having to make unnecessary server calls. It helps to draw a distinction between transactional data and summarized data.

    Transactional data consists of raw, record-level detail that, by itself, is nearly impossible to analyze. On the other side of the spectrum you have your summarized data. This is the data that can be presented in a meaningful and thoughtful manner. We’ll call this our composed data. Most important to developers are the data structures that reside between our transactional details and our fully composed data. This is our “sweet spot.” These datasets are aggregated but contain more than what we need for the final presentation. They are multidimensional in that they have two or more different dimensions (and multiple measures) that provide flexibility for how the data can be presented. These datasets allow your end users to shape the data and extract information for further analysis. They are small and performant, but offer enough detail to allow for insights that you, as the author, may not have anticipated.

    Getting your data into perfect form so you can avoid any and all manipulation in the front end doesn’t need to be the goal. Instead, get the data reduced to a multidimensional dataset. Define several key dimensions (e.g., people, products, places, and time) and measures (e.g., sum, count, average, minimum, and maximum) that your client would be interested in. Finally, present the data on the page with form elements that can slice the data in a way that allows for deeper analysis.

    Creating datasets is a delicate balance. You’ll want to have enough data to make your analytics meaningful without putting too much stress on the client machine. This means coming up with clear, concise requirements. Depending on how wide your dataset is, you might need to include a lot of different dimensions and metrics. A few things to keep in mind:

    • Is the variety of content an edge case or something that will be used frequently? Go with the 80/20 rule: 80% of users generally need 20% of what’s available.
    • Is each dimension finite? Dimensions should always have a predetermined set of values. For example, an ever-increasing product inventory might be too overwhelming, whereas product categories might work nicely.
    • When possible, aggregate the data—dates especially. If you can get away with aggregating by years, do it. If you need to go down to quarters or months, you can, but avoid anything deeper.
    • Less is more. A dimension that has fewer values is better for performance. For instance, take a dataset with 200 rows. If you add another dimension that has four possible values, the most it will grow is 200 * 4 = 800 rows. If you add a dimension that has 50 values, it’ll grow 200 * 50 = 10,000 rows. This will be compounded with each dimension you add.
    • In multidimensional datasets, avoid summarizing measures that need to be recalculated every time the dataset changes. For instance, if you plan to show averages, include the total and the count, and calculate averages dynamically. This way, if you summarize the data further, you can recalculate averages using the summarized values (see the sketch after this list).
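
    Here is a minimal sketch of that last point, with invented numbers: carry the total and the count through every summarization step, and derive the average only at display time:

    // Keep total and count in the aggregate so averages survive
    // further summarization (the numbers are invented for illustration).
    const rows = [
      { state: "Alabama", total: 1386, count: 3 },
      { state: "Alabama", total: 989,  count: 2 },
    ];

    // Collapse the rows by summing the parts...
    const summary = rows.reduce(
      (acc, r) => ({ total: acc.total + r.total, count: acc.count + r.count }),
      { total: 0, count: 0 }
    );

    // ...then calculate the average dynamically instead of storing it.
    const average = summary.total / summary.count; // 2375 / 5 = 475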

    Make sure you understand the data you’re working with before attempting any of the above. You could make some wrong assumptions that lead to misinformed decisions. Data quality is always a top priority. This applies to the data you are both querying and manufacturing.

    Never take a dataset and make assumptions about a dimension or a measure. Don’t be afraid to ask for data dictionaries or other documentation about the data to help you understand what you are looking at. Data analysis is not something that you guess. There could be business rules applied, or data could be filtered out beforehand. If you don’t have this information in front of you, you can easily end up composing datasets and visualizations that are meaningless or—even worse—completely misleading.

    The following code example will help explain this further. Full code for this example can be found on GitHub.

    Our use case

    For our example we will use BuzzFeed’s dataset from “Where U.S. Refugees Come From—and Go—in Charts.” We’ll build a small app that shows us the number of refugees arriving in a selected state for a selected year. Specifically, we will show one of the following depending on the user’s request:

    • total arrivals for a state in a given year;
    • total arrivals for all years for a given state;
    • and total arrivals for all states in a given year.

    The UI for selecting your state and year would be a simple form:

    Our UI for our data input

    The code will:

    1. Send a request for the data.
    2. Convert the results to JSON.
    3. Process the data. (Note: to ensure that this step does not execute until after the complete dataset is retrieved, we use the then method and do all of our data processing within that block.)
    4. Log any errors to the console.
    5. Display results back to the user.

    We do not want to pass excessively large datasets over the wire to browsers for two main reasons: bandwidth and CPU considerations. Instead, we’ll aggregate the data on the server with Node.js.

    Source data:

    [{"year":2005,"origin":"Afghanistan","dest_state":"Alabama","dest_city":"Mobile","arrivals":0},
    {"year":2006,"origin":"Afghanistan","dest_state":"Alabama","dest_city":"Mobile","arrivals":0},
    ... ]

    Multidimensional Data:

    [{"year": 2005, "state": "Alabama","total": 1386}, 
     {"year": 2005, "state": "Alaska", "total": 989}, 
    ... ]
    Transactional Details show several items with Year, Origin, Destination, City, and Arrivals. This is filtered through semi-aggregate data: By Year, By State, and Total. In the final column, we see a table with the fully composed data resulting from running the Transactional Details through the semi-aggregate data.

    How to get your data structure into place

    AJAX and the Fetch API

    There are a number of ways with JavaScript to retrieve data from an external source. Historically you would use an XHR request. XHR is widely supported but is also fairly complex and requires several different methods. There are also libraries like Axios or jQuery’s AJAX API. These can be helpful to reduce complexity and provide cross-browser support. These might be an option if you are already using these libraries, but we want to opt for native solutions whenever possible. Lastly, there is the more recent Fetch API. This is less widely supported, but it is straightforward and chainable. And if you are using a transpiler (e.g., Babel), it will convert your code to a more widely supported equivalent.

    For our use case, we’ll use the Fetch API to pull the data into our application:

    window.fetchData = window.fetchData || {};
      fetch('./data/aggregate.json')
      .then(response => {
          // when the fetch executes we will convert the response
          // to json format and pass it to .then()
          return response.json();
      }).then(jsonData => {
          // take the resulting dataset and assign to a global object
          window.fetchData.jsonData = jsonData;
      }).catch(err => {
          console.log("Fetch process failed", err);
      });

    This code is a snippet from main.js in the GitHub repo.

    The fetch() method sends a request for the data, and we convert the results to JSON. To ensure that the next statement doesn’t execute until after the complete dataset is retrieved, we use the then() method and do all our data processing within that block. Lastly, we console.log() any errors.

    Our goal here is to identify the key dimensions we need for reporting—year and state—before we aggregate the number of arrivals for those dimensions, removing country of origin and destination city. You can refer to the Node.js script /preprocess/index.js from the GitHub repo for more details on how we accomplished this. It generates the aggregate.json file loaded by fetch() above.
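
    The gist of that preprocessing step might look something like the sketch below. This is an illustration, not the repo's actual script, and the source filename is hypothetical; the field names, however, match the source data shown earlier:

    // Node.js sketch: collapse raw transactional rows into one
    // total per year and state (illustrative, not /preprocess/index.js).
    const fs = require("fs");

    const raw = JSON.parse(fs.readFileSync("./data/source.json", "utf8"));

    // Accumulate arrivals keyed by year and destination state.
    const totals = raw.reduce((acc, row) => {
      const key = `${row.year}|${row.dest_state}`;
      acc[key] = (acc[key] || 0) + row.arrivals;
      return acc;
    }, {});

    // Reshape the map into the multidimensional array shown above.
    const aggregate = Object.entries(totals).map(([key, total]) => {
      const [year, state] = key.split("|");
      return { year: Number(year), state, total };
    });

    fs.writeFileSync("./data/aggregate.json", JSON.stringify(aggregate));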

    Multidimensional data

    The goal of multidimensional formatting is flexibility: data detailed enough that the user doesn’t need to send a query back to the server every time they want to answer a different question, but summarized so that your application isn’t churning through the entire dataset with every new slice of data. You need to anticipate the questions and provide data that formulates the answers. Clients want to be able to do some analysis without feeling constrained or completely overwhelmed.

    As with most APIs, we’ll be working with JSON data. JSON is a standard that is used by most APIs to send data to applications as objects consisting of name and value pairs. Before we get back to our use case, let’s look at a sample multidimensional dataset:

    const ds = [{
      "year": 2005,
      "state": "Alabama",
      "total": 1386,
      "priorYear": 1201
    }, {
      "year": 2005,
      "state": "Alaska",
      "total": 811,
      "priorYear": 1541
    }, {
      "year": 2006,
      "state": "Alabama",
      "total": 989,
      "priorYear": 1386
    }];

    With your dataset properly aggregated, we can use JavaScript to further analyze it. Let’s take a look at some of JavaScript’s native array methods for composing data.

    How to work effectively with your data via JavaScript

    Array.filter()

    The filter() method of the Array prototype (Array.prototype.filter()) takes a function that tests every item in the array, returning another array containing only the values that passed the test. It allows you to create meaningful subsets of the data based on select dropdown or text filters. Provided you included meaningful, discrete dimensions for your multidimensional dataset, your user will be able to gain insight by viewing individual slices of data.

    ds.filter(d => d.state === "Alabama");
    
    // Result
    [{
      state: "Alabama",
      total: 1386,
      year: 2005,
      priorYear: 1201
    },{
      state: "Alabama",
      total: 989,
      year: 2006,
      priorYear: 1386
    }]

    Array.map()

    The map() method of the Array prototype (Array.prototype.map()) takes a function and runs every array item through it, returning a new array with an equal number of elements. Mapping data gives you the ability to create related datasets. One use case for this is to map ambiguous data to more meaningful, descriptive data. Another is to take metrics and perform calculations on them to allow for more in-depth analysis.

    Use case #1—map data to more meaningful data:

    ds.map(d => d.state === "Alaska" ? "Continental US" : "Contiguous US");
    
    // Result
    [
      "Contiguous US", 
      "Continental US", 
      "Contiguous US"
    ]

    Use case #2—map data to calculated results:

    ds.map(d => Math.round(((d.priorYear - d.total) / d.total) * 100));
    
    // Result
    [-13, 56, 40]

    Array.reduce()

    The reduce() method of the Array prototype (Array.prototype.reduce()) takes a function and runs every array item through it, returning an aggregated result. It’s most commonly used to do math, like to add or multiply every number in an array, although it can also be used to concatenate strings or do many other things. I have always found this one tricky; it’s best learned through example.

    When presenting data, you want to make sure it is summarized in a way that gives insight to your users. Even though you have done some general-level summarizing of the data server-side, this is where you allow for further aggregation based on the specific needs of the consumer. For our app we want to add up the total for every entry and show the aggregated result. We’ll do this by using reduce() to iterate through every record and add the current value to the accumulator. The final result will be the sum of all values (total) for the array.

    ds.reduce((accumulator, currentValue) =>
      accumulator + currentValue.total, 0);
    
    // Result
    3364

    Applying these functions to our use case

    Once we have our data, we will assign an event to the “Get the Data” button that will present the appropriate subset of our data. Remember that we have several hundred items in our JSON data. The code for binding data via our button is in our main.js:

     document.getElementById("submitBtn").onclick =
      function(e){
          e.preventDefault();
          let state = document.getElementById("stateInput").value || "All"
          let year = document.getElementById("yearInput").value || "All"
          let subset = window.fetchData.filterData(year, state);
          if (subset.length == 0  )
            subset.push({'state': 'N/A', 'year': 'N/A', 'total': 'N/A'})
          document.getElementById("output").innerHTML =
          `<table class="table">
            <thead>
              <tr>
                <th scope="col">State</th>
                <th scope="col">Year</th>
                <th scope="col">Arrivals</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td>${subset[0].state}</td>
                <td>${subset[0].year}</td>
                <td>${subset[0].total}</td>
              </tr>
            </tbody>
          </table>`
      }
    The final output once our code is applied

    If you leave either the state or year blank, that field will default to “All.” The following code is available in /js/main.js. You’ll want to look at the filterData() function, which is where we keep the lion’s share of the functionality for aggregation and filtering.

    // with our data returned from our fetch call, we are going to 
    // filter the data on the values entered in the text boxes
    fetchData.filterData = function(yr, state) {
      // if "All" is entered for the year, we will filter on state 
      // and reduce the years to get a total of all years
      if (yr === "All") {
        let total = this.jsonData
          // return all the data where state
          // is equal to the input box
          .filter(dState => dState.state === state)
          // aggregate the totals for every row that has
          // the matched value
          .reduce((accumulator, currentValue) => {
            return accumulator + currentValue.total;
          }, 0);
    
        return [{'year': 'All', 'state': state, 'total': total}];
      }
    
      ...
    
      // if a specific year and state are supplied, simply
      // return the filtered subset for year and state based 
      // on the supplied values by chaining the two function
      // calls together 
      let subset = this.jsonData.filter(dYr => dYr.year === yr)
        .filter(dSt => dSt.state === state);
    
      return subset; 
    };
    
    // code that displays the data in the HTML table follows this. See main.js.

    When a state or a year is blank, it will default to “All” and we will filter down our dataset to that particular dimension, and summarize the metric for all rows in that dimension. When both a year and a state are entered, we simply filter on the values.

    We now have a working example where we:

    • Start with a raw, transactional dataset;
    • Create a semi-aggregated, multidimensional dataset;
    • And dynamically build a fully composed result.

    Note that once the data is pulled down by the client, we can manipulate the data in a number of different ways without having to make subsequent calls to the server. This is especially useful because if the user loses connectivity, they do not lose the ability to manipulate the data. This is useful if you are creating a progressive web app (PWA) that needs to be available offline. (If you are not sure if your web app should be a PWA, this article can help.)

    Once you get a firm handle on these three methods, you can create just about any analysis that you want on a dataset. Map a dimension in your dataset to a broader category and summarize using reduce. Combined with a library like D3, you can map this data into charts and graphs to allow a fully customizable data visualization.
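
    For instance, chaining map() and reduce() over the sample ds from earlier groups the totals into a broader category (the region labels here are invented for illustration):

    // Map each state to a broader category, then total per category.
    const byRegion = ds
      .map(d => ({
        region: d.state === "Alaska" ? "Noncontiguous" : "Contiguous",
        total: d.total
      }))
      .reduce((acc, d) => {
        acc[d.region] = (acc[d.region] || 0) + d.total;
        return acc;
      }, {});

    // Result: { Contiguous: 2375, Noncontiguous: 989 }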

    Conclusion

    This article should give you a better sense of what is possible in JavaScript when working with data. As I mentioned, client-side JavaScript is in no way a substitute for translating and transforming data on the server, where the heavy lifting should be done. But by the same token, it also shouldn’t be completely ruled out when datasets are treated properly.

Search Engine Watch
Keep updated with major stories about search engine marketing and search engines as published by Search Engine Watch.
ClickZ News
Breaking news, information, and analysis.
PCWorld
  • Best true wireless earbuds: Free yourself from the tyranny of cords
    Truly wireless earbuds let you ditch all cables in our post-headphone-jack world but, as with anything else, their quality varies. Our top picks offer great audio without sacrificing battery life or comfort.
  • Best e-readers for digital-book lovers

    Folks used to think that e-readers would relegate traditional paper books to the scrapyard of the past and destroy the publishing industry as we knew it. But, in the time since the first Kindle e-reader was unveiled in 2007, the dire declarations of what effect the devices might have on our reading habits and on publishers have given way to widespread acceptance from industry wonks and bookworms alike, for one simple reason: E-readers are pretty great.

    Lightweight, easily readable in direct sunlight or, on models equipped with a built-in backlight, in the dead of night, an e-reader is an excellent choice for browsing periodicals, documents, comic books, and of course, books. Most are capable of storing thousands of books—and with power-efficient E Ink displays, word aficionados can typically read for weeks at a time before their device’s battery needs topping off. These are all great features but, as they’re all features that most e-readers share, choosing which device to buy can be daunting. Don’t worry, we’re here to help you find the device that suits your needs. We’ve assembled reviews of the most popular e-readers on the market today—as well as some you might not have heard of that deserve your attention.


  • Best smart sprinkler controller
    Whether you’re motivated by water conservation, saving money, a drive to render every aspect of your home smart, or all the above, a smart irrigation controller will scratch that itch.
  • Windows 7's support may effectively end in July if you don't apply this patch

    Users still running Windows 7 already face a deadline of Jan. 14, 2020, when support for the OS expires. But if you routinely block updates, break that habit briefly to allow an upcoming March patch. Otherwise, Windows 7 updates will effectively stop in July.

    Here’s what’s going on: Microsoft delivers updates signed with either the SHA-1 or SHA-2 hash algorithm for security’s sake. But the company recently decided to phase out support for SHA-1 entirely, starting with a security update due to begin delivery on March 12. That update will mark Windows’ shift to the more secure SHA-2 hash algorithm. In July, Microsoft will begin delivering Windows 7 security updates using SHA-2 exclusively. The upshot: if your Windows 7 PC hasn’t installed the March 12 update enabling SHA-2 support by July 16, your Windows updates will effectively end.


  • Get TotalAV Essential AntiVirus for $19.99 (80% off)

    The term “computer virus” calls to mind imagery of pathogenic creepy-crawlies bringing down a device’s operating system, their flagella wriggling as they multiply into hordes that infiltrate its chips and wires. And while it’s true that our computers can be infected with literal biological bacteria like staphylococci, per Science Illustrated, the threat of malicious code and programs intent on corrupting data and files looms far larger: according to a recent study from the University of Maryland’s Clark School of Engineering, attacks on computers with internet access are virtually ceaseless, with an incident occurring every 39 seconds on average, affecting a third of Americans every year.


CNN.com - RSS Channel - App Tech Section
CNN.com delivers up-to-the-minute news and information on the latest top stories, weather, entertainment, politics and more.

If you decide that the "how to make a website" guide could be useful to other people as well, please vote for the site:

+add to любими.ком Come to .: BGtop.net :., the top ranking of Bulgarian sites, and vote for this site!!!

If you wish to leave a comment on the article, you need to register.