News from the world of web design and SEO

Here is syndicated news from several of the leading sites in the field of web design and SEO (search engine optimization).

A List Apart: The Full Feed
Articles for people who make web sites.
  • Working with External User Researchers: Part II

    In the first installment of the Working with External User Researchers series, we explored the reasons why you might hire a user researcher on contract and helpful things to consider in choosing one. This time, we talk about getting the actual work done.

    You’ve hired a user researcher for your project. Congrats! On paper, this person (or team of people) has everything you need and more. You might think the hardest part of your project is complete and that you can be more hands off at this point. But the real work hasn’t started yet. Hiring the researcher is just the beginning of your journey.

    Let’s recap what we mean by an external user researcher. Hiring a contract external user researcher means that a person or team is brought on for the duration of a contract to conduct research.

    This situation is most commonly found in:

    • organizations without researchers on staff;
    • organizations whose research staff is maxed out;
    • and organizations that need special expertise.

    In other words, external user researchers exist to help you gain insight from your users when hiring one full-time is not an option. Check out Part I to learn more about how to find external user researchers, the types of projects that will get you the most value for your money, writing a request for proposal, and finally, negotiating payment.

    Working together

    Remember why you hired an external researcher

    No project or work relationship is perfect. Before we delve into more specific guidelines on how to work well together, remember the reasons why you decided to hire an external researcher (and this specific one) for your project. Keeping them in mind as you work together will help you keep your priorities straight.

    External researchers are great for bringing in a fresh, objective perspective

    You could ask your full-time designer who also has research skills to wear the research hat. This isn’t uncommon. But a designer won’t have the same depth and breadth of expertise as a dedicated researcher. In addition, they will probably end up researching their own design work, which will make it very difficult for them to remain unbiased.

    Product managers sometimes like to be proactive and conduct some form of guerrilla user research themselves, but this is an even riskier idea. They usually aren’t trained on how to ask non-leading questions, for example, so they tend to only hear feedback that validates their ideas.

    It isn’t a secret—but it’s well worth remembering—that research participants tend to be more comfortable sharing critical feedback with someone who doesn’t work for the product that is being tested.

    The real work begins

    In our experience the most important work starts once a researcher is hired. Here are some key considerations in setting them and your own project team up for success.

    Be smart about the initial brain dump

    Do share background materials that provide important context and prevent redundant work from being done. It’s likely that some insight is already known on a topic that will be researched, so it’s important to share this knowledge with your researcher so they can focus on new areas of inquiry. Provide things such as report templates to ensure that the researcher presents their learnings in a way that’s consistent with your organization’s unique culture. While you’re at it, consider showing them where to find documentation or tutorials about your product, or specific industry jargon.

    Make sure people know who they are

    Conduct a project kick-off meeting with the external researcher and your internal stakeholders. Influence is often partially a factor of trust and relationships, and for this reason it’s sometimes easy for internal stakeholders to question or brush aside projects conducted by research consultants, especially if they disagree with research insights and recommendations. (Who is this person I don’t know trying to tell me what is best for my product?)

    Conduct a kick-off meeting with the broader team

    A great way to prevent this potential pushback is to conduct a project kick-off meeting with the external researcher and important internal stakeholders or consumers of the research. Such a meeting might include activities such as:

    • Team introductions.
    • A discussion about the research questions, including an exercise for prioritizing the questions. Especially with contracted-out projects, it’s common for project teams to be tempted to add more questions—question creep—which is why it’s important to have clear priorities from the start.
    • A summary of what’s out of scope for the research. This is another important task in setting firm boundaries around project priorities from the start so the project is completed on time and within budget.
    • A summary of any incoming hypotheses the project team might have—in other words, what they think the answers to the research questions are. Once the study is complete, this can be an especially impactful way to remind stakeholders how their initial thinking changed in response to the findings.
    • A review of the project phases and timeline, and any threats that could get in the way of the project being completed on time.
    • A review of prior research and what’s already known, if available. This is important for both the external researcher and the most important internal consumers of the research, as it’s often the case that the broader project team might not be aware of prior research and why certain questions already answered aren’t being addressed in the project at hand.

    Use a buddy system

    Appoint an internal resource who can answer questions that will no doubt arise during the project. This might include questions on how to use an internal lab, questions about whom to invite to a critical meeting, or clarifying questions regarding project priorities. This is also another opportunity to build trust and rapport between your project team and external researcher.

    Conducting the research

    While an external researcher or agency can help plan and conduct a study for you, don’t expect them to be experts on your product and company culture. It’s like hiring an architect to build your house or a designer to furnish a room: you need to provide guidance early and often, or the end result may not be what you expected. Here are some things to consider to make the engagement more effective.

    Be available

    A good research contractor will ask lots of questions to make sure they’re understanding important details, such as your priorities and research questions, and to collect feedback on the study plan and research report. While it can sometimes feel more efficient to handle most of these types of questions over email, email can often result in misinterpretations. Sometimes it’s faster to speak to questions that require lots of detail and context rather than type a response. Consider establishing weekly remote or in-person status checks to discuss open questions and action items.

    Be present

    If moderated sessions are part of the research, plan on observing as many of these as possible. While you should expect the research agency to provide you with a final report, you should not expect them to know which insights are most impactful to your project. They don’t have the background from internal meetings, prior decisions, and discussions about future product directions that an internal stakeholder has. Many of the most insightful findings come from conversations that happen immediately after a session with a research participant. The research moderator and client contact can share their perspectives on what the participant just did and said during their session.

    Be proactive

    Before the researcher drafts their final report, set up a meeting between them and your internal stakeholders to brainstorm over the main research findings. This will help the researcher identify more insights and opportunities that reflect internal priorities and limitations. It also helps stakeholders build trust in the research findings.

    In other words, it’s a waste of everyone’s time if a final report is delivered and basic questions arise from stakeholders that could have been addressed by involving them earlier. This is also a good opportunity to get feedback from stakeholders’ stakeholders, who may have a different (but just as important) influence on the project’s success.

    Be reasonable

    Don’t treat an external contractor like a PowerPoint jockey. Changing fonts and colors to your liking is fine, but only to a point. Your researcher should provide you with a polished report free from errors and in a professional format, but minute changes are not a constructive use of time and money. Focus more on decisions and recommendations than the aesthetics of the deliverables. You can prevent this kind of situation by providing any templates you want used in your initial brain dump, so the findings don’t have to be replicated in the “right” format for presenting.

    When it’s all said and done

    Just because the project has been completed and all the agreed deliverables have been received doesn’t mean you should close the door on any additional learning opportunities for both the client and researcher. At the end of the project, identify what worked, and find ways to increase buy-in for their recommendations.

    Tell them what happened

    Try to identify a check-in point in the future (such as two weeks or two months out) to let the researcher know what happened because of the research: what decisions were made, what problems were fixed, or other design changes. While you shouldn’t expect your researcher to be perpetually available, if you encounter problems with buy-in, they might be able to provide a quick recommendation.

    Maintain a relationship

    While it’s typical for vendors to treat their clients to dinner or drinks, don’t be afraid to invite your external researcher to your own happy hour or event with your staff. The success of your next project may rely on getting the right researcher, and you’ll want them to be excited to make themselves available to help you when you need them again.

  • Going Offline

    A note from the editors: We’re excited to share Chapter 1 of Going Offline by Jeremy Keith, available this month from A Book Apart.

    Businesses are built on the web. Without the web, Twitter couldn’t exist. Facebook couldn’t exist. And not just businesses—Wikipedia couldn’t exist. Your favorite blog couldn’t exist without the web. The web doesn’t favor any one kind of use. It’s been deliberately designed to accommodate many and varied activities.

    Just as many wonderful things are built upon the web, the web itself is built upon the internet. Though we often use the terms web and internet interchangeably, the World Wide Web is just one application that uses the internet as its plumbing. Email, for instance, is another.

    Like the web, the internet was designed to allow all kinds of services to be built on top of it. The internet is a network of networks, all of them agreeing to use the same protocols to shuttle packets of data around. Those packets are transmitted down fiber-optic cables across the ocean floor, bounced around with Wi-Fi or radio signals, or beamed from satellites in freakin’ space.

    As long as these networks are working, the web is working. But sometimes networks go bad. Mobile networks have a tendency to get flaky once you’re on a train or in other situations where you’re, y’know, mobile. Wi-Fi networks work fine until you try to use one in a hotel room (their natural enemy).

    When the network fails, the web fails. That’s just the way it is, and there’s nothing we can do about it. Until now.

    Weaving the Web

    For as long as I can remember, the World Wide Web has had an inferiority complex. Back in the ’90s, it was outshone by CD-ROMs (ask your parents). They had video, audio, and a richness that the web couldn’t match. But they lacked links—you couldn’t link from something in one CD-ROM to something in another CD-ROM. They faded away. The web grew.

    Later, the web technologies of HTML, CSS, and JavaScript were found wanting when compared to the whiz-bang beauty of Flash. Again, Flash movies were much richer than regular web pages. But they were also black boxes. The Flash format seemed superior to the open standards of the web, and yet the very openness of those standards made the web an unstoppable force. Flash—under the control of just one company—faded away. The web grew.

    These days it’s native apps that make the web look like an underachiever. Like Flash, they’re under the control of individual companies instead of being a shared resource like the web. Like Flash, they demonstrate all sorts of capabilities that the web lacks, such as access to device APIs and, crucially, the ability to work even when there’s no network connection.

    The history of the web starts to sound like an endless retelling of the fable of the tortoise and the hare. CD-ROMs, Flash, and native apps outshine the web in the short term, but the web always seems to win the day somehow.

    Each of those technologies proved very useful for the expansion of web standards. In a way, Flash was like the R&D department for HTML, CSS, and JavaScript. Smooth animations, embedded video, and other great features first saw the light of day in Flash. Having shown their usefulness, they later appeared in web standards. The same thing is happening with native apps. Access to device features like the camera and the accelerometer is beginning to show up in web browsers. Most exciting of all, we’re finally getting the ability for a website to continue working even when the network isn’t available.

    Service Workers

    The technology that makes this bewitching offline sorcery possible is a browser feature called service workers. You might have heard of them. You might have heard that they’re something to do with JavaScript, and technically they are…but conceptually they’re very different from other kinds of scripts.

    Usually when you’re writing some JavaScript that’s going to run in a web browser, it’s all related to the document currently being displayed in the browser window. You might want to listen out for events triggered by the user interacting with the document (clicks, swipes, hovers, etc.). You might want to update the contents of the document: add some markup here, remove some text there, manipulate some values somewhere else. The sky’s the limit. And it’s all made possible thanks to the Document Object Model (DOM), a representation of what the browser is rendering. Through the combination of the DOM and JavaScript—DOM scripting, if you will—you can conjure up all sorts of wonderful magic.

    Well, a service worker can’t do any of that. It’s still a script, and it’s still written in the same language—JavaScript—but it has no access to the DOM. Without any DOM scripting capabilities, this kind of script might seem useless at first glance. But there’s an advantage to having a script that never needs to interact with the current document. Adding, editing, and deleting parts of the DOM can be hard work for the browser. If you’re not careful, things can get very sluggish very quickly. But if there’s a whole class of script that isn’t allowed access to the DOM, then the browser can happily run that script in parallel to its regular rendering activities, safe in the knowledge that it’s an entirely separate process.

    The first kind of script to come with this constraint was called a web worker. In a web worker, you could write some JavaScript to do number-crunching calculations without slowing down whatever else was being displayed in the browser window. Spin up a web worker to generate larger and larger prime numbers, for instance, and it will merrily do so in the background.
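
    A minimal sketch of that idea, assuming a worker script named primes.js (the file name and messaging scheme are illustrative, not from the book):

    // main script: hand the number-crunching to a worker so the page stays responsive
    const worker = new Worker("primes.js");
    worker.addEventListener("message", (event) => {
      console.log("Found a prime:", event.data);
    });

    // primes.js: keep finding ever-larger primes in the background
    let n = 1;
    while (true) {
      n += 1;
      let isPrime = true;
      for (let i = 2; i * i <= n; i += 1) {
        if (n % i === 0) { isPrime = false; break; }
      }
      if (isPrime) self.postMessage(n);
    }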

    A service worker is like a web worker with extra powers. It still can’t access the DOM, but it does have access to the fundamental inner workings of the browser.

    Browsers and servers

    Let’s take a step back and think about how the World Wide Web works. It’s a beautiful ballet of client and server. The client is usually a web browser—or, to use the parlance of web standards, a user agent: a piece of software that acts on behalf of the user.

    The user wants to accomplish a task or find some information. The URL is the key technology that will empower the user in their quest. They will either type a URL into their web browser or follow a link to get there. This is the point at which the web browser—or client—makes a request to a web server. Before the request can reach the server, it must traverse the internet of undersea cables, radio towers, and even the occasional satellite (Fig 1.1).

    Diagram of the request/response cycle between a user and a server
    Fig 1.1: Browsers send URL requests to servers, and servers respond by sending files.

    Imagine if you could leave instructions for the web browser that would be executed before the request is even sent. That’s exactly what service workers allow you to do (Fig 1.2).

    Diagram of the request/response cycle between a user and a server with a service worker being the first thing the request hits
    Fig 1.2: Service workers let you tell the web browser to do something before it sends the request for a URL.

    Usually when we write JavaScript, the code is executed after it’s been downloaded from a server. With service workers, we can write a script that’s executed by the browser before anything else happens. We can tell the browser, “If the user asks you to retrieve a URL for this particular website, run this corresponding bit of JavaScript first.” That explains why service workers don’t have access to the Document Object Model; when the service worker is run, there’s no document yet.
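
    In practice, you point the browser at that script by registering it from your ordinary page JavaScript. Here is a minimal sketch, assuming the script lives at /serviceworker.js (an illustrative path):

    // Register a service worker for this site, if the browser supports it
    if ("serviceWorker" in navigator) {
      navigator.serviceWorker.register("/serviceworker.js")
        .then((registration) => {
          console.log("Service worker registered with scope:", registration.scope);
        })
        .catch((error) => {
          console.error("Service worker registration failed:", error);
        });
    }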

    Getting your head around service workers

    A service worker is like a cookie. Cookies are downloaded from a web server and installed in a browser. You can go to your browser’s preferences and see all the cookies that have been installed by sites you’ve visited. Cookies are very small and very simple little text files. A website can set a cookie, read a cookie, and update a cookie. A service worker script is much more powerful. It contains a set of instructions that the browser will consult before making any requests to the site that originally installed the service worker.

    A service worker is like a virus. When you visit a website, a service worker is surreptitiously installed in the background. Afterwards, whenever you make a request to that website, your request will be intercepted by the service worker first. Your computer or phone becomes the home for service workers lurking in wait, ready to perform man-in-the-middle attacks. Don’t panic. A service worker can only handle requests for the site that originally installed that service worker. When you write a service worker, you can only use it to perform man-in-the-middle attacks on your own website.

    A service worker is like a toolbox. By itself, a service worker can’t do much. But it allows you to access some very powerful browser features, like the Fetch API, the Cache API, and even notifications. API stands for Application Programming Interface, which sounds very fancy but really just means a tool that you can program however you want. You can write a set of instructions in your service worker to take advantage of these tools. Most of your instructions will be written as “when this happens, reach for this tool.” If, for instance, the network connection fails, you can instruct the service worker to retrieve a backup file using the Cache API.

    A service worker is like a duck-billed platypus. The platypus not only lactates, but also lays eggs. It’s the only mammal capable of making its own custard. A service worker can also…Actually, hang on, a service worker is nothing like a duck-billed platypus! Sorry about that. But a service worker is somewhat like a cookie, and somewhat like a virus, and somewhat like a toolbox.
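
    To make the toolbox analogy a little more concrete, here is a hedged sketch of the kind of instruction a service worker script might contain: when the network fails, reach for the Cache API. (The fallback file /offline.html is illustrative and assumed to have been cached earlier.)

    // Inside the service worker: intercept every request for this site
    addEventListener("fetch", (fetchEvent) => {
      fetchEvent.respondWith(
        // Try the network first...
        fetch(fetchEvent.request)
          // ...and if the network fails, reach into the cache for a backup
          .catch(() => caches.match(fetchEvent.request)
            .then((cachedResponse) => cachedResponse || caches.match("/offline.html"))
          )
      );
    });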

    Safety First

    Service workers are powerful. Once a service worker has been installed on your machine, it lies in wait, like a patient spider waiting to feel the vibrations of a particular thread.

    Imagine if a malicious ne’er-do-well wanted to wreak havoc by impersonating a website in order to install a service worker. They could write instructions in the service worker to prevent the website ever appearing in that browser again. Or they could write instructions to swap out the content displayed under that site’s domain. That’s why it’s so important to make sure that a service worker really belongs to the site it claims to come from. As the specification for service workers puts it, they “create the opportunity for a bad actor to turn a bad day into a bad eternity.”1

    To prevent this calamity, service workers require you to adhere to two policies:

    • Same origin.
    • HTTPS only.

    The same-origin policy means that a website at example.com can only install a service worker script that lives at example.com. That means you can’t put your service worker script on a different domain. You can use a separate domain for hosting your images and other assets, but not your service worker script. That domain wouldn’t match the domain of the site installing the service worker.

    The HTTPS-only policy means that https://example.com can install a service worker, but http://example.com can’t. A site running under HTTPS (the S stands for Secure) instead of HTTP is much harder to spoof. Without HTTPS, the communication between a browser and a server could be intercepted and altered. If you’re sitting in a coffee shop with an open Wi-Fi network, there’s no guarantee that anything you’re reading in the browser from http://newswebsite.com hasn’t been tampered with. But if you’re reading something from https://newswebsite.com, you can be pretty sure you’re getting what you asked for.

    Securing your site

    Enabling HTTPS on your site opens up a whole series of secure-only browser features—like the JavaScript APIs for geolocation, payments, notifications, and service workers. Even if you never plan to add a service worker to your site, it’s still a good idea to switch to HTTPS. A secure connection makes it trickier for snoopers to see who’s visiting which websites. Your website might not contain particularly sensitive information, but when someone visits your site, that’s between you and your visitor. Enabling HTTPS won’t stop unethical surveillance by the NSA, but it makes the surveillance slightly more difficult.

    There’s one exception. You can use a service worker on a site being served from localhost, a web server on your own computer, not part of the web. That means you can play around with service workers without having to deploy your code to a live site every time you want to test something.

    If you’re using a Mac, you can spin up a local server from the command line. Let’s say your website is in a folder called mysite. Drag that folder to the Terminal app, or open up the Terminal app and navigate to that folder using the cd command to change directory. Then type:

    python -m SimpleHTTPServer 8000

    This starts a web server from the mysite folder, served over port 8000. Now you can visit localhost:8000 in a web browser on the same computer, which means you can add a service worker to the website you’ve got inside the mysite folder: http://localhost:8000.
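
    If your machine has Python 3 rather than Python 2 (where the SimpleHTTPServer module doesn’t exist), the equivalent command is:

    python3 -m http.server 8000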

    But if you then put the site live at, say, http://mysite.com, the service worker won’t run. You’ll need to serve the site from https://mysite.com instead. To do that, you need a secure certificate for your server.

    There was a time when certificates cost money and were difficult to install. Now, thanks to a service called Certbot, certificates are free. But I’m not going to lie: it still feels a bit intimidating to install the certificate. There’s something about logging on to a server and typing commands that makes me simultaneously feel like a l33t hacker, and also like I’m going to break everything. Fortunately, the process of using Certbot is relatively jargon-free (Fig 1.3).

    Screenshot of certbot.eff.org
    Fig 1.3: The website of EFF’s Certbot.

    On the Certbot website, you choose which kind of web server and operating system your site is running on. From there you’ll be guided step-by-step through the commands you need to type in the command line of your web server’s computer, which means you’ll need to have SSH access to that machine. If you’re on shared hosting, that might not be possible. In that case, check to see if your hosting provider offers secure certificates. If not, please pester them to do so, or switch to a hosting provider that can serve your site over HTTPS.

    Another option is to stay with your current hosting provider, but use a service like Cloudflare to act as a “front” for your website. These services can serve your website’s files from data centers around the world, making sure that the physical distance between your site’s visitors and your site’s files is nice and short. And while they’re at it, these services can make sure all of those files are served over HTTPS.

    Once you’re set up with HTTPS, you’re ready to write a service worker script. It’s time to open up your favorite text editor. You’re about to turbocharge your website!

    Footnotes

  • Planning for Everything

    A note from the editors: We’re pleased to share an excerpt from Chapter 7 (“Reflecting”) of Planning for Everything: The Design of Paths and Goals by Peter Morville, available now from Semantic Studios.

    Once upon a time, there was a happy family. Every night at dinner, mom, dad, and two girls who still believed in Santa played a game. The rules are simple. Tell three stories about your day, two true, one false, and see who can detect the fib. Today I saw a lady walk a rabbit on a leash. Today I found a tooth in the kitchen. Today I forgot my underwear. The family ate, laughed, and learned together, and lied happily ever after.

    There’s truth in the tale. It’s mostly not false. We did play this game, for years, and it was fun. We loved to stun and bewilder each other, yet the big surprise was insight. In reflecting on my day, I was often amazed by oddities already lost. If not for the intentional search for anomaly, I’d have erased these standard deviations from memory. The misfits we find, we rarely recall.

    We observe a tiny bit of reality. We understand and remember even less. Unlike most machines, our memory is selective and purposeful. Goals and beliefs define what we notice and store.  To mental maps we add places we predict we’ll need to visit later. It’s not about the past. The intent of memory is to plan.

    In reflecting we look back to go forward. We search the past for truths and insights to shift the future. I’m not speaking of nostalgia, though we are all borne back ceaselessly and want what we think we had. My aim is redirection. In reflecting on inconvenient truths, I hope to change not only paths but goals.

    A path showing Framing ('The Here and Now'), Imagining, Narrowing, Deciding, Executing, and Reflecting ('The Goal')
    Figure 7-1. Reflection changes direction.

    We all have times for reflection. Alone in the shower or on a walk, we retrace the steps of a day. Together at lunch for work or over family dinner, we share memories and missteps. Some of us reflect more rigorously than others. Given time, it shows.

    People who as a matter of habit extract underlying principles or rules from new experiences are more successful learners than those who take their experiences at face value, failing to infer lessons that can be applied later in similar situations.1

    In Agile, the sprint retrospective offers a collaborative context for reflection. Every two to four weeks, at the end of a sprint, the team meets for an hour or so to look back. Focal questions include 1) what went well? 2) what went wrong? 3) how might we improve? In reflecting on the plan, execution, and results, the team explores surprises, conflicts, roadblocks, and lessons.

    In addition to conventional analysis, a retrospective creates an opportunity for double loop learning. To edit planned actions based on feedback is normal, but revising assumptions, goals, values, methods, or metrics may effect change more profound. A team able to expand the frame may hack their habits, beliefs, and environment to be better prepared to succeed and learn.

    A loop showing Beliefs leading to Actions leading to Results. Loop 1 leads back to Actions, Loop 2 leads back to Beliefs.
    Figure 7-2. Double loop learning.

    Retrospectives allow for constructive feedback to drive team learning and bonding, but that’s what makes them hard. We may lack courage to be honest, and often people can’t handle the truth. Our filters are as powerful as they are idiosyncratic, which means we’re all blind men touching a tortoise, or is it a tree or an elephant? It hurts to reconcile different perceptions of reality, so all too often we simply shut up and shut down.

    Search for Truth

    To seek truth together requires a culture of humility and respect. We are all deeply flawed and valuable. We must all speak and listen. Ideas we don’t implement may lead to those we do. Errors we find aren’t about fault, since our intent is a future fix. And counterfactuals merit no more confidence than predictions, as we never know what would have happened if.

    Reflection is more fruitful if we know our own minds, but that is harder than we think. An imperfect ability to predict actions of sentient beings is a product of evolution. It’s quick and dirty yet better than nothing in the context of survival in a jungle or a tribe. Intriguingly, cognitive psychology and neuroscience have shown we use the same theory of mind to study ourselves.

    Self-awareness is just this same mind reading ability, turned around and employed on our own mind, with all the fallibility, speculation, and lack of direct evidence that bedevils mind reading as a tool for guessing at the thought and behavior of others.2

    Empirical science tells us introspection and consciousness are unreliable bases for self-knowledge. We know this is true but ignore it all the time. I’ll do an hour of homework a day, not leave it to the end of vacation. If we adopt a dog, I’ll walk it. If I buy a house, I’ll be happy. I’ll only have one drink. We are more than we think, as Walt Whitman wrote in Song of Myself.

    Do I contradict myself?
    Very well then I contradict myself
    (I am large, I contain multitudes.)

    Our best laid plans go awry because complexity exists within as well as without. Our chaotic, intertwingled bodyminds are ecosystems inside ecosystems. No wonder it’s hard to predict. Still, it’s wise to seek self truth, or at least that’s what I think.

    Upon reflection, my mirror neurons tell me I’m a shy introvert who loves reading, hiking, and planning. I avoid conflict when possible but do not lack courage. Once I set a goal, I may focus and filter relentlessly. I embrace habit and eschew novelty. If I fail, I tend to pivot rather than persist. Who I am is changing. I believe it’s speeding up. None of these traits is bad or good, as all things are double-edged. But mindful self awareness holds value. The more I notice the truth, the better my plans become.

    Years ago, I planned a family vacation on St. Thomas. I kept it simple: a place near a beach where we could snorkel. It was a wonderful, relaxing escape. But over time a different message made it past my filters. Our girls had been bored. I dismissed it at first. I’d planned a shared experience I recalled fondly. It hurt to hear otherwise. But at last I did listen and learn. They longed not for escape but adventure. Thus our trip to Belize. I found planning and executing stressful due to risk, but I have no regrets. We shared a joyful adventure we’ll never forget.

    Way back when we were juggling toddlers, we accidentally threw out the mail. Bills went unpaid, notices came, we swore we’d do better, then lost mail again. One day I got home from work to find an indoor mailbox system made with paint cans. My wife Susan built it in a day. We’ve used it to sort and save mail for 15 years. It’s an epic life hack I’d never have done. My ability to focus means I filter things out. I ignore problems and miss fixes. I’m not sure I’ll change. Perhaps it merits a prayer.

    God grant me the serenity
    to accept the things I cannot change,
    courage to change the things I can,
    and wisdom to know the difference.

    We also seek wisdom in others. This explains our fascination with the statistics of regret. End of life wishes often include:

    I wish I’d taken more risks, touched more lives, stood up to bullies, been a better spouse or parent or child. I should have followed my dreams, worked and worried less, listened more. If only I’d taken better care of myself, chosen meaningful work, had the courage to express my feelings, stayed in touch. I wish I’d let myself be happy.

    While they do yield wisdom, last wishes are hard to hear. We are skeptics for good reason. Memory prepares for the future, and that too is the aim of regret. It’s unwise to trust the clarity of rose-colored glasses. The memory of pain and anxiety fades in time, but our desire for integrity grows. When time is short, regret is a way to rectify. I’ve learned my lesson. I’m passing it on to you. I’m a better person now. Don’t make my mistakes. It’s easy to say “I wish I’d stood up to bullies,” but hard to do at the time. There’s wisdom in last wishes but bias and self justification too. Confabulation means we edit memories with no intention to deceive. The truth is elusive. Reflection is hard.

    Footnotes

    • 1. Make It Stick by Peter Brown et al. (2014), p. 133.
    • 2. Why You Don’t Know Your Own Mind by Alex Rosenberg (2016).
  • Meeting Design

    A note from the editors: We’re pleased to share an excerpt from Chapter 2 (“The Design Constraint of All Meetings”) of Meeting Design: For Managers, Makers, and Everyone by Kevin Hoffman, available now from Two Waves.

    Jane is a “do it right, or I’ll do it myself” kind of person. She leads marketing, customer service, and information technology teams for a small airline that operates between islands of the Caribbean. Her work relies heavily on “reservation management system” (RMS) software, which is due for an upgrade. She convenes a Monday morning meeting to discuss an upgrade with the leadership from each of her three teams. The goal of this meeting is to identify key points for a proposal to upgrade the outdated software.

    Jane begins by reviewing the new software’s advantages. She then goes around the room, engaging each team’s representatives in an open discussion. They capture how this software should alleviate current pain points; someone from marketing takes notes on a laptop, as is their tradition. The meeting lasts nearly three hours, which is a lot longer than expected, because they frequently loop back to earlier topics as people forget what was said. It concludes with a single follow-up action item: the director of each department will provide her with two lists for the upgrade proposal. First, a list of cost savings, and second, a list of timesaving outcomes. Each list is due back to Jane by the end of the week.

    The first team’s list is done early but not organized clearly. The second list provides far too much detail to absorb quickly, so Jane puts their work aside to summarize later. By the end of the following Monday, there’s no list from the third team—it turns out they thought she meant the following Friday. Out of frustration, Jane calls another meeting to address the problems with the work she received, which range from “not quite right” to “not done at all.” Based on this pace, her upgrade proposal is going to be finished two weeks later than planned.

    What went wrong? The plan seemed perfectly clear to Jane, but each team remembered their marching orders differently, if they remembered them at all. Jane could have a meeting experience that helps her team form more accurate memories. But for that meeting to happen, she needs to understand where those memories are formed in her team and how to form them more clearly.

    Better Meetings Make Better Memories

    If people are the one ingredient that all meetings have in common, there is one design constraint they all bring: their capacity to remember the discussion. That capacity lives in the human brain.

    The brain shapes everything believed to be true about the world. On the one hand, it is a powerful computer that can be trained to memorize thousands of numbers in random sequences.1 But brains are also easily deceived, swayed by illusions and pre-existing biases. Those things show up in meetings as your instincts. Instincts vary greatly based on differences in the amount and type of previous experience. The paradox of ability and deceive-ability creates a weird mix of unpredictable behavior in meetings. It’s no wonder that they feel awkward.

    What is known about how memory works in the brain is constantly evolving. To cover that in even a little detail is beyond the scope of this book, so this chapter is not meant to be an exhaustive look at human memory. However, there are a few interesting theories that will help you be more strategic about how you use meetings to support forming actionable memories.

    Your Memory in Meetings

    The brain’s job in meetings is to accept inputs (things we see, hear, and touch) and store them as memories, and then to apply those absorbed ideas in discussion (things we say and make). See Figure 2.1.

    A drawing of a brain with appendages representing the five senses
    FIGURE 2.1 The human brain has a diverse set of inputs that contribute to your memories.

    Neuroscience has identified four theoretical stages of memory, which include sensory, working, intermediate, and long-term. Understanding working memory and intermediate memory is relevant to meetings, because these stages represent the most potential to turn thought into action.

    Working Memory

    You may be familiar with the term short-term memory. Depending on the research you read, the term working memory has replaced short-term memory in the vocabulary of neuro- and cognitive science. I’ll use the term working memory here. Designing meeting experiences to support the working memory of attendees will improve meetings.

    Working memory collects around 30 seconds of the things you’ve recently heard and seen. Its storage capacity is limited, and that capacity varies among individuals. This means that not everyone in a meeting has the same capacity to store things in their working memory. You might assume that because you remember an idea mentioned within the last few minutes of a meeting, everyone else probably will as well. That is not necessarily the case.

    You can accommodate variations in people’s ability to use working memory by establishing a reasonable pace of information. The pace of information is directly connected to how well aligned attendees’ working memories become. To make sure that everyone is on the same page, you should set a pace that is deliberate, consistent, and slower than your normal pace of thought.

    Sometimes, concepts are presented more quickly than people can remember them, simply because the presenter is already familiar with the details. Breaking information into evenly sized, consumable chunks is what separates a great presenter from an average (or bad) one. In a meeting, slower, more broken-up pacing allows a group of people to engage in constructive and critical thinking more effectively. It gets the same ideas in everyone’s head. (For a more detailed dive into the pace of content in meetings, see Chapter 3, “Build Agendas Out of Ideas, People, and Time.”)

    Theoretical models that explain working memory are complex, as seen in Figure 2.2.2 This model presumes two distinct processes taking place in your brain to make meaning out of what you see, what you hear, and how much you can keep in your mind. Assuming that your brain creates working memories from what you see and what you hear in different ways, combining listening and seeing in meetings becomes more essential to getting value out of that time.

    A chart showing a model of working memory
    FIGURE 2.2 Alan Baddeley and Graham Hitch’s Model of Working Memory provides context for the interplay between what we see and hear in meetings.

    In a meeting, absorbing something seen and absorbing something heard require different parts of the brain. Those two parts can work together to improve retention (the quantity and accuracy of information in our brain) or compete to reduce retention. Nowhere is this better illustrated than in the research of Richard E. Mayer, who has found that “people learn better from words and pictures than from words alone, but not all graphics are created equal(ly).”3 When what you hear and what you see compete, it creates cognitive dissonance. Listening to someone speaking while reading the same words on a screen actually decreases the ability to commit something to memory. People who are subjected to presentation slides filled with speaking points face this challenge. But listening to someone while looking at a complementary photograph or drawing increases the likelihood of committing something to working memory.

    Intermediate-Term Memory

    Your memory should transform ideas absorbed in meetings into taking an action of some kind afterward. Triggering intermediate-term memories is the secret to making that happen. Intermediate-term memories last between two and three hours, and are characterized by processes taking place in the brain called biochemical translation and transcription. Translation can be considered as a process by which the brain makes new meaning. Transcription is where that meaning is replicated (see Figures 2.3a and 2.3b). In both processes, the cells in your brain are creating new proteins using existing ones: making some “new stuff” from “existing stuff.”4

    Two illustrations, showing a woman describing a hat to a man, and then a man showing an actual hat to a few people
    FIGURE 2.3 Biochemical translation (a) and transcription (b), loosely in the form of understanding a hat.

    Here’s an example: instead of having someone take notes on a laptop, imagine if Jane sketched a diagram that helped her make sense out of the discussion, using what was stored in her working memory. The creation of that diagram is an act of translation, and theoretically Jane should be able to recall the primary details of that diagram easily for two to three hours, because it’s moving into her intermediate memory.

    If Jane made copies of that diagram, and the diagram was so compelling that those copies ended up on everyone’s wall around the office, that would be transcription. Transcription is the (theoretical) process that leads us into longer-term stages of memory. Transcription connects understanding something within a meeting to acting on it later, well after the meeting has ended.

    Most of the time, simple meetings last from 10 minutes to an hour, while workshops and working sessions can last anywhere from 90 minutes to a few days. Consider the duration of various stages of memory against different meeting lengths (see Figure 2.4). A well-designed meeting experience moves the right information from working to intermediate memory. Ideas generated and decisions made should materialize into actions that take place outside the meeting. Any session without breaks that lasts longer than 90 minutes makes it fuzzier, and therefore more difficult, for your memories to turn thought into action.

    A chart showing how the different types of memory work over a 90-minute meeting
    FIGURE 2.4 The time duration of common meetings against the varying durations for different stages of memory. Sessions longer than 90 minutes can impede memories from doing their job.

    Jane’s meeting with her three teams lasted nearly three hours. That length of time spent on a single task or topic taxes people’s ability to form intermediate (actionable) memories. Action items become muddled, which leads to liberal interpretations of what each team is supposed to accomplish.

    But just getting agreement about a shared task in the first place is a difficult design challenge. All stages of memory are happening simultaneously, with multiple translation and transcription processes being applied to different sounds and sights. A fertile meeting environment that accommodates multiple modes of input allows memories to form amidst the cognitive chaos.

    Brain Input Modes

    During a meeting, each attendee’s brain is either in a state of input or output. Choosing to assemble in a group carries an implicit assumption: information needs to be moved out of one place, or one brain, into another (or several others).

    Some meetings, like presentations, move information in one direction. The goal is for a presenting party to move information from their brain to the brains in the audience. When you are presenting an idea, your brain is in output mode. You use words and visuals to give form to ideas in the hopes that they will become memories in your audience. Your audience’s brains are receiving information; if the presentation is well designed and well executed, their ears and their eyes will do a decent job of absorbing that information accurately.

    In a live presentation, the output/input processes are happening synchronously. This is not like reading a written report or an email message, where the author (presenting party) has output information in absence of an audience, and the audience is absorbing information in absence of the author’s presence; that is moving information asynchronously.

    Footnotes

    • 1. Joshua Foer, Moonwalking with Einstein (New York: Penguin Books, 2011).
    • 2. A. D. Baddeley and G. Hitch, “Working Memory,” in The Psychology of Learning and Motivation: Advances in Research and Theory, ed. G. H. Bower (New York: Academic Press, 1974), 8:47–89.
    • 3. Richard E. Mayer, “Principles for Multimedia Learning with Richard E. Mayer,” Harvard Initiative for Learning & Teaching (blog), July 8, 2014, http://hilt.harvard.edu/blog/principles-multimedia-learning-richard-e-mayer
    • 4. M. A. Sutton and T. J. Carew, “Behavioral, Cellular, and Molecular Analysis of Memory in Aplysia I: Intermediate-Term Memory,” Integrative and Comparative Biology 42, no. 4 (2002): 725–735.
  • Designing for Research

    If you’ve spent enough time developing for the web, this piece of feedback has landed in your inbox since time immemorial:

    “This photo looks blurry. Can we replace it with a better version?”

    Every time this feedback reaches me, I’m inclined to question it: “What about the photo looks bad to you, and can you tell me why?”

    That’s a somewhat unfair question to counter with. The complaint is rooted in a subjective perception of image quality, which in turn is influenced by many factors. Some are technical, such as the export quality of the image or the compression method (often lossy, as is the case with JPEG-encoded photos). Others are more intuitive or perceptual, such as the content of the image and how compression artifacts mingle within it. Perhaps even performance plays a role we’re not entirely aware of.

    Fielding this kind of feedback for many years eventually led me to design and develop an image quality survey, which was my first go at building a research project on the web. I started with twenty-five photos shot by a professional photographer. With them, I generated a large pool of images at various quality levels and sizes. Images were served randomly from this pool to users, who were asked to rate what they thought of their quality.

    Results from the first round were interesting, but not entirely clear: users seemed to have a tendency to overestimate the actual quality of images, and poor performance appeared to have a negative impact on perceptions of image quality, but this couldn’t be stated conclusively. A number of UX and technical issues made it necessary to implement important improvements and conduct a second round of research. In lieu of spinning my wheels trying to extract conclusions from the first round results, I decided it would be best to improve the survey as much as possible, and conduct another round of research to get better data. This article chronicles how I first built the survey, and then how I subsequently listened to user feedback to improve it.

    Defining the research

    Of the subjects within web performance, image optimization is especially vast. There’s a wide array of formats, encodings, and optimization tools, all of which are designed to make images small enough for web use while maintaining reasonable visual quality. Striking the balance between speed and quality is really what image optimization is all about.

    This balance between performance and visual quality prompted me to consider how people perceive image quality. Lossy image quality, in particular. Eventually, this train of thought led to a series of questions spurring the design and development of an image quality perception survey. The idea of the survey is that users provide subjective assessments of quality. This is done by asking participants to rate images without an objective reference for what’s “perfect.” This is, after all, how people view images in situ.

    A word on surveys

    Any time we want to quantify user behavior, it’s inevitable that a survey is at least considered, if not ultimately chosen to gather data from a group of people. After all, surveys are perfect when your goal is to get something measurable. However, the survey is a seductively dangerous tool, as Erika Hall cautions. They’re easy to make and conduct, and are routinely abused in their dissemination. They’re not great tools for assessing past behavior. They’re just as bad (if not worse) at predicting future behavior. For example, the 1–10 scale often employed by customer satisfaction surveys doesn’t really say much of anything about how satisfied customers actually are or how likely they’ll be to buy a product in the future.

    The unfortunate reality, however, is that in lieu of my lording over hundreds of participants in person, the survey is the only truly practical tool I have to measure how people perceive image quality, as well as whether (and potentially how) performance metrics correlate to those perceptions. When I designed the survey, I kept to the following guidelines:

    • Don’t ask participants about anything other than what their perceptions are in the moment. By the time a participant has moved on, their recollection of what they just did rapidly diminishes as time elapses.
    • Don’t assume participants know everything you do. Guide them with relevant copy that succinctly describes what you expect of them.
    • Don’t ask participants to provide assessments with coarse inputs. Use an input type that permits them to finely assess image quality on a scale congruent with the lossy image quality encoding range.

    All we can do going forward is acknowledge we’re interpreting the data we gather under the assumption that participants are being truthful and understand the task given to them. Even if the perception metrics are discarded from the data, there are still some objective performance metrics gathered that could tell a compelling story. From here, it’s a matter of defining the questions that will drive the research.

    Asking the right questions

    In research, you’re seeking answers to questions. In the case of this particular effort, I wanted answers to these questions:

    • How accurate are people’s perceptions of lossy image quality in relation to actual quality?
    • Do people perceive the quality of JPEG images differently than WebP images?
    • Does performance play a role in all of this?

    These are important questions. To me, however, answering the last question was the primary goal. But the road to answers was (and continues to be) a complex journey of design and development choices. Let’s start out by covering some of the tech used to gather information from survey participants.

    Sniffing out device and browser characteristics

    When measuring how people perceive image quality, devices must be considered. After all, any given device’s screen will be more or less capable than others. Thankfully, HTML features such as srcset and picture are highly appropriate for delivering the best image for any given screen. This is vital because one’s perception of image quality can be adversely affected if an image is ill-fit for a device’s screen. Conversely, performance can be negatively impacted if an exceedingly high-quality (and therefore behemoth) image is sent to a device with a small screen. When sniffing out potential relationships between performance and perceived quality, these are factors that deserve consideration.

    With regard to browser characteristics and conditions, JavaScript gives us plenty of tools for identifying important aspects of a user’s device. For instance, the currentSrc property reveals which image is being shown from an array of responsive images. In the absence of currentSrc, I can somewhat safely assume support for srcset or picture is lacking, and fall back to the img tag’s src value:

    const surveyImage = document.querySelector(".survey-image");
    let loadedImage = surveyImage.currentSrc || surveyImage.src;

    Where screen capability is concerned, devicePixelRatio tells us the pixel density of a given device’s screen. In the absence of devicePixelRatio, you may safely assume a fallback value of 1:

    let dpr = window.devicePixelRatio || 1;

    devicePixelRatio enjoys excellent browser support. Those few browsers that don’t support it (i.e., IE 10 and under) are highly unlikely to be used on high density displays.

    The stalwart getBoundingClientRect method retrieves the rendered width of an img element, while the HTMLImageElement interface’s complete property determines whether an image has finished loading. The latter of these two is important, because it may be preferable to discard individual results in situations where images haven’t loaded.
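
    A brief sketch of how those two values might be read for the image specimen, reusing the surveyImage element queried earlier (the variable names here are illustrative):

    // Rendered width of the image as it is actually laid out on this screen
    const renderedWidth = surveyImage.getBoundingClientRect().width;

    // true once the image has finished loading; ratings of unloaded images may be discarded
    const imageLoaded = surveyImage.complete;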

    In cases where JavaScript isn’t available, we can’t collect any of this data. When we collect ratings from users who have JavaScript turned off (or are otherwise unable to run JavaScript), I have to accept there will be gaps in the data. The basic information we’re still able to collect does provide some value.

    Sniffing for WebP support

    As you’ll recall, one of the initial questions asked was how users perceived the quality of WebP images. The HTTP Accept request header advertises WebP support in browsers like Chrome. In such cases, the Accept header might look something like this:

    Accept: image/webp,image/apng,image/*,*/*;q=0.8

    As you can see, the WebP content type of image/webp is one of the advertised content types in the header content. In server-side code, you can check Accept for the image/webp substring. Here’s how that might look in Express back-end code:

    // Guard against a missing Accept header, then check for the image/webp substring
    const WebP = (req.get("Accept") || "").indexOf("image/webp") !== -1;

    In this example, I’m recording the browser’s WebP support status to a JavaScript constant I can use later to modify image delivery. I could use the picture element with multiple sources and let the browser figure out which one to use based on the source element’s type attribute value, but the server-side check has clear advantages. First, it’s less markup. Second, the survey shouldn’t always choose a WebP source simply because the browser is capable of using it. For any given survey specimen, the app should randomly decide between a WebP or JPEG image. Not all participants using Chrome should rate only WebP images, but rather a random smattering of both formats.
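
    For instance, the server-side logic might flip a coin for WebP-capable browsers along these lines (a sketch only; the file path is illustrative, not the survey’s actual code):

    // WebP-capable browsers get either format at random; everyone else gets JPEG
    const imageFormat = WebP && Math.random() < 0.5 ? "webp" : "jpg";
    const imageUrl = `/survey-images/specimen-1024w.${imageFormat}`;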

    Recording performance API data

    You’ll recall that one of the earlier questions I set out to answer was if performance impacts the perception of image quality. At this stage of the web platform’s development, there are several APIs that aid in the search for an answer:

    • Navigation Timing API (Level 2): This API tracks performance metrics for page loads. More than that, it gives insight into specific page loading phases, such as redirect, request and response time, DOM processing, and more.
    • Navigation Timing API (Level 1): Similar to Level 2, but with key differences. The timings exposed by Level 1 of the API lack the accuracy of those in Level 2. Furthermore, Level 1 metrics are expressed in Unix time. In the survey, data is only collected from Level 1 of the API if Level 2 is unsupported (see the sketch just after this list). It’s far from ideal (and also technically obsolete), but it does help fill in small gaps.
    • Resource Timing API: Similar to Navigation Timing, but Resource Timing gathers metrics on the various loading phases of page resources rather than the page itself. Of all the APIs used in the survey, Resource Timing is used most, as it helps gather metrics on the loading of the image specimen the user rates.
    • Server Timing: In select browsers, this API is brought into the Navigation Timing Level 2 interface when a page request replies with a Server-Timing response header. This header is open-ended and can be populated with timings related to back-end processing phases. This was added to round two of the survey to quantify back-end processing time in general.
    • Paint Timing API: Currently only in Chrome, this API reports two paint metrics: first paint and first contentful paint. Because a significant slice of users on the web use Chrome, we may be able to observe relationships between perceived image quality and paint metrics.
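
    To illustrate the Level 2/Level 1 fallback mentioned above, the feature detection might look roughly like this (a sketch; exactly which fields end up stored is a detail of the survey’s back end):

    let navigationTiming = null;

    if ("performance" in window) {
      // Prefer Navigation Timing Level 2 where it's available
      const navEntries = "getEntriesByType" in performance ? performance.getEntriesByType("navigation") : [];

      if (navEntries.length > 0) {
        navigationTiming = navEntries[0];
      } else if ("timing" in performance) {
        // Fall back to the obsolete Level 1 interface, whose values are Unix timestamps
        navigationTiming = performance.timing;
      }
    }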

    Using these APIs, we can record performance metrics for most participants. Here’s a simplified example of how the survey uses the Resource Timing API to gather performance metrics for the loaded image specimen:

    // Get information about the loaded image
    const surveyImageElement = document.querySelector(".survey-image");
    const fullImageUrl = surveyImageElement.currentSrc || surveyImageElement.src;
    const imageUrlParts = fullImageUrl.split("/");
    const imageFilename = imageUrlParts[imageUrlParts.length - 1];
    
    // Check for performance API methods
    if ("performance" in window && "getEntriesByType" in performance) {
      // Get entries from the Resource Timing API
      let resources = performance.getEntriesByType("resource");
    
      // Ensure resources were returned
      if (typeof resources === "object" && resources.length > 0) {
        resources.forEach((resource) => {
          // Check if the resource is for the loaded image
          if (resource.name.indexOf(imageFilename) !== -1) {
            // Access the resource timings for the loaded image here
          }
        });
      }
    }

    If the Resource Timing API is available, and the getEntriesByType method returns results, an object with timings is returned, looking something like this:

    {
      connectEnd: 1156.5999999947962,
      connectStart: 1156.5999999947962,
      decodedBodySize: 11110,
      domainLookupEnd: 1156.5999999947962,
      domainLookupStart: 1156.5999999947962,
      duration: 638.1000000037602,
      encodedBodySize: 11110,
      entryType: "resource",
      fetchStart: 1156.5999999947962,
      initiatorType: "img",
      name: "https://imagesurvey.site/img-round-2/1-1024w-c2700e1f2c4f5e48f2f57d665b1323ae20806f62f39c1448490a76b1a662ce4a.webp",
      nextHopProtocol: "h2",
      redirectEnd: 0,
      redirectStart: 0,
      requestStart: 1171.6000000014901,
      responseEnd: 1794.6999999985565,
      responseStart: 1737.0999999984633,
      secureConnectionStart: 0,
      startTime: 1156.5999999947962,
      transferSize: 11227,
      workerStart: 0
    }

    I grab these metrics as participants rate images, and store them in a database. Down the road when I want to write queries and analyze the data I have, I can refer to the Processing Model for the Resource and Navigation Timing APIs. With SQL and data at my fingertips, I can measure the distinct phases outlined by the model and see if correlations exist.
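
    Whether that math happens in SQL or in JavaScript, the derivations are simple subtractions over the fields shown above. For example (a sketch using the resource entry from the earlier loop):

    // A few loading phases derived from a Resource Timing entry
    const dnsTime = resource.domainLookupEnd - resource.domainLookupStart;
    const connectTime = resource.connectEnd - resource.connectStart;
    const timeToFirstByte = resource.responseStart - resource.requestStart;
    const downloadTime = resource.responseEnd - resource.responseStart;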

    Having discussed the technical underpinnings of how data can be collected from survey participants, let’s shift the focus to the survey’s design and user flows.

    Designing the survey

    Though surveys tend to have straightforward designs and user flows relative to other sites, we must remain cognizant of the user’s path and the impediments a user could face.

    The entry point

    When participants arrive at the home page, we want to be direct in our communication with them. The home page intro copy greets participants, gives them a succinct explanation of what to expect, and presents two navigation choices:

    One button with the text “I want to participate!” and another button with the text “What data do you gather?”

    From here, participants either start the survey or read a privacy policy. If the user decides to take the survey, they’ll reach a page that politely asks for their professional occupation and any eyesight conditions they’re willing to disclose. Both fields can be left blank, as some may not be comfortable sharing this kind of information. Beyond this point, the survey begins in earnest.

    The survey primer

    Before the user begins rating images, they’re redirected to a primer page. This page describes what’s expected of participants, and explains how to rate images. While the survey is promoted on design and development outlets where readers regularly work with imagery on the web, a primer is still useful in getting everyone on the same page. The first paragraph of the page stresses that users are rating image quality, not image content. This is important. Absent any context, participants may indeed rate images for their content, which is not what we’re asking for. After this clarification, the concept of lossy image quality is demonstrated with the following diagram:

    A divided photo with one half demonstrating low image quality and the other demonstrating high quality.

    Lastly, the function of the rating input is explained. This could likely be inferred by most, but the explanatory copy helps remove any remaining ambiguity. Assuming your user knows everything you do is not necessarily wise. What seems obvious to one is not always so to another.

    The image specimen page

    This page is the main event and is where participants assess the quality of images shown to them. It contains two areas of focus: the image specimen and the input used to rate the image’s quality.

    Let’s talk a bit out of order and discuss the input first. I mulled over a few options when it came to which input type to use. I considered a select input with coarsely predefined choices, an input with a type of number, and a few others. What seemed to make the most sense to me, however, was a slider input with a type of range.

    A rating slider with “worst” at the far left, and “best” at the far right. The slider track is a gradient from red on the left to green on the right.

    A slider input is more intuitive than a text input, or a select element populated with various choices. Because we’re asking for a subjective assessment about something with such a large range of interpretation, a slider allows participants more granularity in their assessments and lends further accuracy to the data collected.

    Now let’s talk about the image specimen and how it’s selected by the back-end code. I decided early on in the survey’s development that I wanted images that weren’t prominent in existing stock photo collections. I also wanted uncompressed sources so I wouldn’t be presenting participants with recompressed image specimens. To achieve this, I procured images from a local photographer. The twenty-five images I settled on were minimally processed raw images from the photographer’s camera. The result was a cohesive set of images that felt visually related to each other.

    To properly gauge perception across the entire spectrum of quality settings, I needed to generate each image from the aforementioned sources at ninety-six different quality settings ranging from 5 to 100. To account for the varying widths and pixel densities of screens in the wild, each image also needed to be generated at four different widths for each quality setting: 1536, 1280, 1024, and 768 pixels, to be exact. Just the job srcset was made for!

    To top it all off, images also needed to be encoded in both JPEG and WebP formats. As a result, the survey draws randomly from 768 images per specimen across the entire quality range, while also delivering the best image for the participant’s screen. This means that across the twenty-five image specimens participants evaluate, the survey draws from a pool of 19,200 images total.
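
    As a rough sketch, a Node.js script using an image processing library such as sharp could generate that matrix; the actual tooling may differ, and the file names here are made up:

    const sharp = require("sharp");

    const widths = [1536, 1280, 1024, 768];

    async function generateVariants(sourcePath, specimenName) {
      // 96 quality settings × 4 widths × 2 formats = 768 variants per specimen
      for (let quality = 5; quality <= 100; quality++) {
        for (const width of widths) {
          await sharp(sourcePath)
            .resize(width)
            .jpeg({ quality })
            .toFile(`${specimenName}-${width}w-q${quality}.jpg`);

          await sharp(sourcePath)
            .resize(width)
            .webp({ quality })
            .toFile(`${specimenName}-${width}w-q${quality}.webp`);
        }
      }
    }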

    With the conception and design of the survey covered, let’s segue into how the survey was improved by incorporating user feedback into the second round.

    Listening to feedback

    When I launched round one of the survey, feedback came flooding in from designers, developers, accessibility advocates, and even researchers. While my intentions were good, I inevitably missed some important aspects, which made it necessary to conduct a second round. Iteration and refinement are critical to improving the usefulness of a design, and this survey was no exception. When we improve designs with user feedback, we take a project from average to something more memorable. Getting to that point means taking feedback in stride and addressing distinct, actionable items. In the case of the survey, incorporating feedback not only yielded a better user experience, it improved the integrity of the data collected.

    Building a better slider input

    Though the first round of the survey was serviceable, I ran into issues with the slider input. In round one of the survey, that input looked like this:

    A slider with evenly spaced labels from left to right reading, respectively, “Awful”, “Bad”, “OK”, “Good”, “Great”. Below it is a disabled button with the text “Please Rate the Image…”.

    There were two recurring complaints regarding this specific implementation. The first was that participants felt they had to align their rating to one of the labels beneath the slider track. This was undesirable for the simple fact that the slider was chosen specifically to encourage participants to provide nuanced assessments.

    The second complaint was that the submit button was disabled until the user interacted with the slider. This design choice was intended to prevent participants from simply clicking the submit button on every page without rating images. Unfortunately, this implementation was unintentionally hostile to the user and needed improvement, because it blocked users from proceeding without a clear and obvious explanation as to why.

    Fixing the problem with the labels meant redesigning the slider shown above. I removed the labels altogether to eliminate the temptation to align ratings with them. Additionally, I changed the slider’s background property to a gradient pattern, which further implied the granularity of the input.

    The submit button issue was a matter of how users were prompted. In round one, the submit button was visible, yet its disabled state wasn’t obvious enough to some. After consulting with a colleague, I found a solution for round two: instead of being visible from the start, the submit button is initially hidden, with guide copy shown in its place:

    The revised slider followed by the text “Once you rate the image, you may submit.”

    Once the user interacts with the slider and rates the image, a change event attached to the input fires, which hides the guide copy and replaces it with the submit button:

    The revised slider now followed by a button reading “Submit rating”.

    This solution is less ambiguous, and it funnels participants down the desired path. If someone with JavaScript disabled visits, the guide copy is never shown, and the submit button is immediately usable. This isn’t ideal, but it doesn’t shut out participants without JavaScript.
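
    A minimal sketch of that swap (the class names are hypothetical):

    const slider = document.querySelector(".rating-slider");
    const guideCopy = document.querySelector(".guide-copy");
    const submitButton = document.querySelector(".submit-rating");

    // In the markup, the guide copy is hidden and the button is usable by default,
    // so participants without JavaScript are never locked out. With JavaScript
    // running, flip the two on page load.
    guideCopy.hidden = false;
    submitButton.hidden = true;

    // Once the participant rates the image, swap the guide copy for the submit button
    slider.addEventListener("change", () => {
      guideCopy.hidden = true;
      submitButton.hidden = false;
    }, { once: true });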

    Addressing scrolling woes

    The survey page works especially well in portrait orientation. Participants can see all (or most) of the image without needing to scroll. In browser windows or mobile devices in landscape orientation, however, the survey image can be larger than the viewport:

    Screen shot of the survey with an image clipped at the bottom by the viewport and rating slider.

    Working with such limited vertical real estate is tricky, especially in this case where the slider needs to be fixed to the bottom of the screen (which addressed an earlier bit of user feedback from round one testing). After discussing the issue with colleagues, I decided that animated indicators in the corners of the page could signal to users that there’s more of the image to see.

    The survey with the clipped image, but now there is a downward-pointing arrow with the word “Scroll”.

    When the user hits the bottom of the page, the scroll indicators disappear. Because animations may be jarring for certain users, a prefers-reduced-motion media query is used to turn off this (and every other) animation if the user has a stated preference for reduced motion. In the event JavaScript is disabled, the scroll indicators are always hidden in portrait orientation, where they’re less likely to be useful, and always visible in landscape, where they’re potentially needed most.
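
    A rough sketch of that indicator logic follows; the selector and animation class are hypothetical, and the reduced-motion preference is checked here from JavaScript via matchMedia rather than in the survey’s CSS:

    const indicators = document.querySelectorAll(".scroll-indicator");

    // Skip the attention-grabbing animation for users who prefer reduced motion
    if (!window.matchMedia("(prefers-reduced-motion: reduce)").matches) {
      indicators.forEach((indicator) => indicator.classList.add("animated"));
    }

    // Hide the indicators once the participant reaches the bottom of the page
    function updateIndicators() {
      const atBottom = window.innerHeight + window.scrollY >= document.documentElement.scrollHeight;
      indicators.forEach((indicator) => {
        indicator.hidden = atBottom;
      });
    }

    window.addEventListener("scroll", updateIndicators);
    updateIndicators();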

    Avoiding overscaling of image specimens

    One issue that a coworker brought to my attention was how the survey image seemed to expand boundlessly with the viewport. On mobile devices this isn’t such a problem, but on large screens, and even on modestly sized high-density displays, images can be scaled excessively. Because the responsive img tag’s srcset attribute specifies a maximum image resolution of 1536w, an image can begin to overscale at layout widths as “small” as 768 pixels on devices with a device pixel ratio of 2.

    The survey with an image expanding to fill the window.

    Some overscaling is inevitable and acceptable. However, when it’s excessive, compression artifacts in an image can become more pronounced. To address this, the survey image’s max-width is set to 1536px for standard displays as of round two. For devices with a device pixel ratio of 2 or higher, the survey image’s max-width is set to half that at 768px:

    The survey with an image comfortably fitting in the window.

    This minor (yet important) fix ensures that images aren’t scaled beyond a reasonable maximum. With a reasonably sized image asset in the viewport, participants will assess images close to or at a given image asset’s natural dimensions, particularly on large screens.

    User feedback is valuable. These and other UX feedback items I incorporated improved both the function of the survey and the integrity of the collected data. All it took was sitting down with users and listening to them.

    Wrapping up

    As round two of the survey gets under way, I’m hoping the data gathered reveals something exciting about the relationship between performance and how people perceive image quality. If you want to be a part of the effort, please take the survey. When round two concludes, keep an eye out here for a summary of the results!

    Thank you to those who gave their valuable time and feedback to make this article as good as it could possibly be: Aaron Gustafson, Jeffrey Zeldman, Brandon Gregory, Rachel Andrew, Bruce Hyslop, Adrian Roselli, Meg Dickey-Kurdziolek, and Nick Tucker.

    Additional thanks to those who helped improve the image quality survey: Mandy Tensen, Darleen Denno, Charlotte Dann, Tim Dunklee, and Thad Roe.

