
News from the world of web design and SEO

Here is syndicated news from several of the leading sites in the field of web design and SEO (search engine optimization).

A List Apart: The Full Feed
Articles for people who make web sites.
  • Team Conflict: Four Ways to Deflate the Discord that’s Killing Your Team

    It was supposed to be a simple web project. Our client needed a site that would allow users to create, deploy and review survey results. Aside from some APIs that weren’t done, I wasn’t very worried about the project. I was surprised that my product manager was spending so much time at the client’s office.

    Then, she explained the problem. It seemed that the leaders of product, UX and engineering didn’t speak to each other and, as a result, she had to walk from office to office getting information and decisions.

    When two people have a bad interaction, they can work it out or let the conflict grow, spreading it to other team members and their leaders.

    The conflicts probably started small. One bad interaction, then another, then people don’t like each other, then teams don’t work together well. The small scrape becomes a festering wound that slows things down, limits creativity and lowers morale.

    Somehow as a kid working my way through school I discovered I had a knack for getting around individuals or groups that were fighting with each other. I simply figured out who I needed to help me accomplish a task, and I learned how to convince, cajole or charm them into doing it. I went on to teach these skills to my teams.

    That sufficed for a while. But as I became a department head and an adviser to my clients, I realized it’s not enough to make it work. I needed to learn how to make it better. I needed to find a way to stop the infighting I’ve seen plague organizations my entire career. I needed to put aside my tendency to make the quick fix and have hard conversations.

    It’s messy, awkward and hard for team leaders to resolve conflict but the results are absolutely worth it. You don’t need a big training program, a touchy-feely retreat or an expensive consultant. Team members or team leads don’t have to like each other. What they have to do is find common ground, a measure of respect for one another, and a willingness to work together to benefit the project.

    Here are four ways to approach the problem.

    Start talking

    No matter how it looks at first, it’s always a people problem.
    Gerald M. Weinberg, The Secrets of Consulting: A Guide to Giving and Getting Advice Successfully

    Resist the urge to wait for the perfect time to address team conflict. There’s no such thing. There will always be another deadline, another rollout, another challenge to be met.

    In our office, a UX designer and product manager were having trouble getting along. Rather than take responsibility, they each blamed our “process” and said we needed to clarify roles and procedures. In other words, they each wanted to be deemed “in charge” of the project. Certainly I could have taken that bullet and begun a full-on assessment of our processes and structure. By taking the blame for a bad company framework, I could have dodged some difficult conversations.  But I knew our process wasn’t the problem.

    First, I coached the product manager to be vulnerable, not an easy thing for him to do. I asked him to share his concerns and his desire to have a more productive relationship with the UX designer. The PM’s willingness to be uncomfortable and open about his concerns lifted the tension. Once he acknowledged the elephant in the room—namely that the UX designer was not happy working with him—the designer became more willing to risk being honest. Eventually, they were able to find a solution to their disagreements on the project, largely because they were willing to give each other a measure of respect.

    The worst thing I’ve seen is when leaders move people from team to team hoping that they will magically find a group of people that work well together, and work well with them. Sometimes the relocated team members have no idea that their behavior or performance isn’t acceptable. Instead of solving the problem, this just spreads the dissatisfaction.

    Instead, be clear right from the beginning that you want teams that will be open about challenges, feel safe discussing conflicts, and be accountable for solving them.

    Have a clear purpose

    Although many aspects of our collective endeavor are open for discussion, choice of mountain is not among them.
    J. Richard Hackman, Leading Teams: Setting the Stage for Great Performances

    I was working on an enterprise CMS re-design and re-platform. Our weekly review and estimation sessions were some of the most painful meetings of my career. There was no trust or shared purpose—even estimating a simple task was a big fight.

    When purpose and priorities are murky you are likely to find conflict. When the team doesn’t know what mountain they are trying to climb, they tend to focus on the parts of the project that are most relevant to them. With each team member jealously guarding his or her little ledge, it’s almost impossible to have cooperation.

    This assault on productivity is likely because the project objective is non-existent, or muddled nonsense, or so broad the team doesn’t see how it can have an impact. Or, maybe the objective is a moving target, constantly shifting.

    Size can be a factor.  I’ve seen enterprise teams with clear missions and startups with such world-changing objectives they can’t figure out how to ship something that costs less than a million dollars.

    When I’m meeting with prospects or new clients I look at three areas to see if they are having this problem:

    • What language do they use to describe each other? Disconnected teams say “UX thinks,” “The dev team” or “product wants.” Unified teams say “we.”
    • How easy or hard is task estimation? Disconnected teams fight about the level of difficulty. Unified teams talk about tradeoffs and argue about what’s best for the product or customers.
    • Can they easily and consistently describe their purpose? Disconnected teams don’t have a crisp and consistent answer. Unified teams nod their heads when one of their members shares a concise answer.

    If a team is disconnected, it’s likely because you haven’t given them a common goal. A single email or a fancy deck isn’t enough. Make your objectives simple and repeat them so much that the team groans every time you start.

    Plan conversations

    Words do not sit in our brains in isolation. Each word is surrounded by its own connotations, memories and associations
    Simon Lancaster, Winning Minds: Secrets From the Language of Leadership

    Years ago I was frustrated to tears by a manager who, I felt, took from me the product I spent two years building. I knew I needed to talk with him but I struggled to find a productive way to tell him why I was upset.  (Telling someone he is being a jackass is not productive.)

    A good friend in HR helped me script the conversation. It had three parts:

    • I really work well when…
    • This situation is bothering me because…
    • What I’d like to see happen is…

    Leaders have an important role to play in resolving issues. When a leader decides that their person is right and another person is wrong, it turns a team problem into an organization problem. Instead, we should provide perspective and context, and show how actions could be misunderstood.

    Leaders also need to quickly, clearly and strongly call out bad behavior. When I found out one of my people had raised their voice at a colleague, I made it clear that wasn’t acceptable and shouldn’t happen again. He admitted that he lost his cool, apologized, and then we started working on resolving the situation.

    Require accountability

    Being responsible sometimes means pissing people off.
    General Colin Powell, former U.S. Secretary of State

    If you have a problem and you go to Holly Paul, an inspiring HR leader, you can expect that she will listen. You can also expect that she’ll work with you on a plan to resolve it. Most importantly you can expect she will make sure you are doing what you said you’d do when you said you would do it.

    Before I met Holly I would listen to problems then try to go solve them. Now I work with the person and tell them that I will be checking back with them, often putting the reminder in my calendar during the conversation so I don’t forget.

    Since I started focusing on fixing conflict, I’ve seen great changes on my team. Many of them have started, for the first time, dealing with people issues directly, fixing them and forging much stronger relationships. Our team is stronger and having a greater influence on the organization.

    It’s messy, awkward and hard. I’ve been working on this for a long time and I still make mistakes. I still don’t always want to push when I meet resistance. This will never be easy, but it will be worth it and it’s your responsibility as a leader. For however long these people are with you, you need to make them better as individuals and a unit.

    You don’t need a big training program, a touchy-feely retreat or an expensive consultant. You just need to start doing the work every day. The rest will come.

  • Color Accessibility Workflows

    A note from the editors: We’re pleased to share an excerpt from Chapter 3 of Geri Coady's new book, Color Accessibility Workflows, available now from A Book Apart.

    The Web Content Accessibility Guidelines (WCAG) 2.0 contain recommendations from the World Wide Web Consortium (W3C) for making the web more accessible to users with disabilities, including color blindness and other vision deficiencies.

    There are three levels of conformance defined in WCAG 2.0, from lowest to highest: A, AA, and AAA. For text and images of text, AA is the minimum level that must be met.

    AA compliance requires text and images of text to have a minimum color contrast ratio of 4.5:1. In other words, the lighter color in a pair must have four-and-a-half times as much luminance (an indicator of how bright a color will appear) as the darker color. This contrast ratio is calculated to include people with moderately low vision who don’t need to rely on contrast-enhancing assistive technology, as well as people with color deficiencies. It’s meant to compensate for the loss in contrast sensitivity often experienced by users with 20/40 vision, which is half of normal 20/20 vision.

    Level AAA compliance requires a contrast ratio of 7:1, which provides compensation for users with 20/80 vision, or a quarter of normal 20/20 vision. People who have a degree of vision loss more than 20/80 generally require assistive technologies with contrast enhancement and magnification capabilities.

    Text that acts as pure decoration, nonessential text that appears in part of a photograph, and images of company logos do not strictly need to adhere to these rules. Nonessential or decorative text is, by definition, not essential to understanding a page’s content. Logos and wordmarks may contain textual elements that are essential to broadcasting the company’s visual identity, but not to conveying important information. If necessary, the logo may be described by using an alt attribute for the benefit of a person using screen-reader software. To learn more, check out accessibility specialist Julie Grundy’s blog post on Simply Accessible, where she goes into the best practices around describing alt attributes.

    Text size plays a big role in determining how much contrast is required. Gray text with an RGB value of (150,150,150) on a pure white background passes the AA level of compliance, as long as it’s used in headlines above 18 points. Gray text with an RGB value of (110,110,110) passes the AA level at any text size, and will be AAA compliant if used as a headline above 18 points (Fig 3.1). A font displayed at 14 points may have a different level of legibility compared to another font at 14 points due to the wide diversity of type styles, so keep this in mind, especially when using very thin weights.

    Fig 3.1: Text size also plays a role when calculating compliance ratios.
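
    The math behind these numbers is published in WCAG 2.0: each color’s relative luminance is a weighted sum of its linearized sRGB channels, and the contrast ratio is (L1 + 0.05) / (L2 + 0.05), where L1 is the lighter color. If you want to sanity-check a pair yourself, here is a minimal sketch in TypeScript—the constants come from the WCAG definition, but the function names are mine, not from the book. (For reference, WCAG counts text at or above 18 points, or 14 points bold, as “large,” with lower required ratios: 3:1 for AA and 4.5:1 for AAA.)

    // Linearize one sRGB channel (0–255), per the WCAG 2.0 definition.
    function channel(c: number): number {
      const s = c / 255;
      return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
    }

    // Relative luminance of an RGB color.
    function luminance(r: number, g: number, b: number): number {
      return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
    }

    // Contrast ratio between two colors: lighter luminance over darker.
    function contrastRatio(a: [number, number, number], b: [number, number, number]): number {
      const [hi, lo] = [luminance(...a), luminance(...b)].sort((x, y) => y - x);
      return (hi + 0.05) / (lo + 0.05);
    }

    // Gray (110,110,110) on white comes out around 5.1:1—AA at any size,
    // AAA only for large headline text, as described above.
    console.log(contrastRatio([110, 110, 110], [255, 255, 255]).toFixed(2));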

    Personally, I recommend that all body text be AAA compliant, with larger headlines and less important copy meeting AA compliance as a bare minimum. Keep in mind that these ratios refer to solid-colored text over solid-colored backgrounds, where a single color value can be measured. Overlaying text on a gradient, pattern, or photograph may require a higher contrast value or alternative placement, such as over a solid-colored strip, to provide sufficient legibility.

    These compliance ratios are often what folks mean when they claim that achieving accessible design by “ticking off boxes” can only come at the cost of stifled creativity or restricted color choices. But that simply isn’t true. Experimentation with a color-contrast checker proves that many compliance ratios are quite reasonable and easy to achieve—especially if you are aware of the rules from the beginning. It would be much more frustrating to try to shift poor color choices into something compliant later in the design process, after branding colors have already been chosen. If you fight your battles up front, you’ll find you won’t feel restricted at all.

    If all this talk of numbers seems confusing, I promise there’ll be no real math involved on your side. You can easily find out if your color pairs pass the test by using a color-contrast checker.

    Contrast checkers


    One of my favorite tools is Lea Verou’s Contrast Ratio (Fig 3.2). It gives you the option of entering a color code for a background and a color code for text, and it calculates the ratio for you.

    Fig 3.2: Lea Verou’s Contrast Ratio checker.

    Contrast Ratio supports color names, hex color codes, RGBA values, HSLA values, and even combinations of each. Supporting RGBA and HSLA values means that Verou’s tool supports transparent colors, a handy feature. You can easily share the results of a check by copying and pasting the URL. Additionally, you can modify colors by changing the values in the URL string instead of using the page’s input fields.

    Another great tool that has the benefit of simultaneously showing whether a color combination passes both AA and AAA compliance levels is Jonathan Snook’s Colour Contrast Check (Fig 3.3).

    Fig 3.3: Jonathan Snook’s Colour Contrast Check.

    At the time of writing, Colour Contrast Check doesn’t support HSL alpha values, but it does display the calculated brightness difference and color difference values, which might interest you if you want a little more information.

    Color pickers


    If you need help choosing accessible colors from the start, try Color Safe. This web-based tool helps designers experiment with and choose color combinations that are immediately contrast-compliant. Enter a background color as a starting point; then choose a standard font family, font size, font weight, and target WCAG compliance level. Color Safe will return a comprehensive list of suggestions that can be used as accessible text colors (Fig 3.4).

    Fig 3.4: Color Safe searches for compliant text colors based on an existing background color.

    Adjustment tools

    When faced with color choices that fail the minimum contrast ratios, consider using something like Tanaguru Contrast Finder to help find appropriate alternatives (Fig 3.5). This incredibly useful tool takes a foreground and background color pair and then presents a range of compliant options comparable to the original colors. It’s important to note that this tool works best when the colors are already close to being compliant but just need a little push—color pairs with drastically low contrast ratios may not return any suggestions at all (Fig 3.6).

    Fig 3.5: This color pair is not AA compliant.

    Fig 3.6: A selection of Tanaguru’s suggested AA-compliant alternatives.

    There’s more where that came from!

    Check out the rest of Color Accessibility Workflows at A Book Apart.

  • The Mindfulness of a Manual Performance Audit

    As product owners or developers, we probably have a good handle on which core assets we need to make a website work. But rarely is that the whole picture. How well do we know every last thing that loads on our sites?

    An occasional web performance audit, done by hand, does make us aware of every last thing. What’s so great about that? Well, for starters, the process increases our mindfulness of what we are actually asking of our users. Furthermore, a bit of spreadsheet wizardry lets us shape our findings in a way that has more meaning for stakeholders. It allows us to speak to our web performance in terms of purpose, like so:

    Graphic of a pie chart showing performance audit results

    Want to be able to make something like that? Follow along.

    Wait, don’t we have computers for this sort of thing?

    A manual audit may seem like pointless drudgery. Why do this by hand? Can’t we automate this somehow?

    That’s the whole point. We want to achieve mindfulness—not automate everything away. When we take the time to consider each and every thing that loads on a page, we get a truer picture of our work.

    It takes a human mind to look at every asset on a page and assign it a purpose. This in turn allows us to shape our data in such a way that it means something to people who don’t know what acronyms like CSS or WOFF mean. Besides, who doesn’t like a nice pie chart?

    Here’s the process, step by step:

    1. Get your performance data in a malleable format.
    2. Extract the information necessary.
    3. Go item by item, assigning each asset request a purpose.
    4. Calculate totals, and modify data into easily understood units.
    5. Make fancy pie charts.

    The audit may take half an hour to an hour the first time you do it this way, but with practice you’ll be able to do it in a few minutes. Let’s go!

    Gathering your performance data

    To get started, figure out what URL you want to evaluate. Look at your analytics and try to determine which page type is your most popular. Don’t just default to your home page. For instance, if you have a news site, articles are probably your most popular page type. If you’re analyzing a single-page app, determine what the most commonly accessed view is.

    You need to get your network activity at that URL into a CSV/spreadsheet format. In my experience, the easiest way to do this is to use WebPagetest, whose premise is simple: give it a URL, and it will do an assessment that tries to measure perceived performance.

    Head over to WebPagetest and pop your URL in the big field on the homepage. However, before running the test, open the Advanced Settings panel. Make sure you’re only running one test, and set Repeat View to First View Only. This will ensure that you don’t have duplicate requests in your data. Now, let the test run—hit the big “Start Test” button.

    WebPagetest screenshot

    Once you have a results page, click the link in the top right corner that says “Raw object data”.

    WebPagetest screenshot showing raw object data

    A CSV file will download with your network requests set out in a spreadsheet that you can manipulate.

    Navigating & scrubbing the data

    Now, open the CSV file in your favorite spreadsheet editor: Excel, Numbers, or (my personal favorite) Google Sheets. The rest of this article will be written with Google Sheets in mind, though a similar result is certainly possible with other spreadsheet programs.

    At first it will probably seem like this file contains an unwieldy amount of information, but we’re only interested in a small amount of this data. These are the three columns we care about:

    • Host (column F)
    • URL (column G)
    • Object Size (column N)

    The other columns you can just ignore, hide, or delete. Or even better: select those three columns, copy them, and paste them into a new spreadsheet.

    Auditing each asset request

    With your pared-down spreadsheet, insert a new first column and label it “Purpose”. You can also include a Description/Comment column, if you wish.

    Screenshot of an audit spreadsheet

    Next, go down each row, line by line, and assign each asset request a purpose. I suggest something like the following:

    • Content (e.g., the core HTML document, images, media—the stuff users care about)
    • Function (e.g., functional JavaScript files that you have authored, CSS, webfonts)
    • Analytics (e.g., Google Analytics, New Relic, etc.)
    • Ads (e.g., Google DFP, any ad networks, etc.)

    Your Purpose names can be whatever you want. What matters is that your labels for each purpose are consistent—capitalization and all. They need to group neatly in order to generate the fancy charts later. (Pro tip: use data validation on this column to ensure consistency in your spreadsheet.)

    So how do you determine the purpose? Typically, the biggest clue is the “Host” column. You will, very quickly, start to recognize which hosts provide what. Your root URL will be where your document comes from, but you will also find:

    • CDN URLs like cloudfront.net, or cloudflare.com. Sometimes these have images (which are typically content); sometimes they host CSS or JavaScript files (functionality).
    • Analytics URLs like googletagservices.com, googletagmanager.com, google-analytics.com, or js-agent.newrelic.com.
    • Ad URLs like doubleclick.net or googlesyndication.com.

    If you’re ever unsure of a URL, either try it out yourself in your browser, or literally google the URL. (Hint: if you don’t recognize the URL right away, it’s most likely ad-related.)

    Mindfulness

    Just doing the steps above will likely be eye-opening for you. Stopping to consider each asset on a page, and why it’s there, will help you be mindful of every single thing the page loads.

    You may be in for some surprises the first time you do this. A few unexpected items might turn up. A script might be loaded more than once. That social widget might be a huge page weight. Requests coming from ads might be more numerous than you thought. That’s why I suggested a Description/Comment column—you can make notes there like “WTF?” and “Can we remove this?”

    Augmenting your data

    Before you can generate fancy pie charts, you’ll need to do a little more spreadsheet wrangling. Forewarned is forearmed—extreme spreadsheet nerdery lies ahead.

    First, you need to translate the request sizes to kilobytes (KB), because they are initially supplied in bytes, and no human speaks in terms of bytes. Next to the column “Object Size,” insert another column called “Object Size (KB).” Then enter a formula in the first cell, something like this:

    =E2/1000
    Audit spreadsheet detail

    Translation: you’re simply dividing the amount in the cell from the previous column (E2, in this case) by 1000. You can highlight this new cell, then drag the corner down the entire column to do the same for each row.
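
    (A shortcut of my own, not from the article: instead of dragging the formula down, Google Sheets’ ARRAYFORMULA can fill the whole column in one go—though it will print zeros beside any empty rows:)

    =ARRAYFORMULA(E2:E/1000)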

    Animated GIF of a spreadsheet process

    Totaling requests

    Now, to figure out how many HTTP requests are related to each Purpose, you need to do a special kind of count. Insert two more columns, one labeled “Purpose Labels” and the second “Purpose Reqs.” Under Purpose Labels, in the first row, enter this formula:

    =SORT(UNIQUE(B2:B),1,TRUE)
    Spreadsheet detail

    This assumes that your purpose assessment is column B. If it’s not, swap out the “B” in this example for your column name. This formula will go down column B and output a result if it’s unique. You only need to enter this in the first cell of the column. This is one reason why having consistency in the Purpose column is important.

    Now, under the second column you made (Purpose Reqs) in the first cell, enter this formula:

    =ARRAYFORMULA(COUNTIF(B2:B,G2:G))
    Spreadsheet detail

    This formula will also go down column B, and do a count if it matches with something in column G (assuming column G is your Purpose Labels column). This is the easiest way to total how many HTTP requests fall into each purpose.

    Totaling download size by purpose

    Finally, you can now also total the data (in KB) for each purpose. Insert one more column and call it Purpose Download Size. In the first cell, insert the following formula:

    =SUM(FILTER($F$2:F,$B$2:B=G2))

    This will total the data size in column F if its purpose in column B matches G2 (i.e., your first Purpose Label from the section above). In contrast to the last two formulas, you’ll need to copy this formula and modify it for each row, making the last part (“G2”) match the row it’s on. In this case, the next one would end in “G3”.
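
    If you’d rather not edit each row by hand, a SUMIF with anchored ranges—my shortcut, not the author’s—does the same match-and-total and can simply be dragged down the column, since only the “G2” reference is relative:

    =SUMIF($B$2:$B,G2,$F$2:$F)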

    Animated GIF showing a spreadsheet process

    Make with the fancy charts

    With your assets grouped by purpose, data translated to KB, number of requests counted, and download size totaled, it will be pretty easy to generate some charts.

    The HTTP request chart

    To make an HTTP request chart, select the columns Purpose Labels and Purpose Reqs (columns G and H in my example), and then go to Insert > Chart. Scroll down the list of possible charts, and choose a pie chart. Be sure to check the box marked “Use column G as labels.”

    Screenshot showing Google Sheets’ Chart Editor

    Under the “Customization” tab, edit the Title to say “HTTP Requests”; under “Slice,” be sure “Value” is selected (the default is “Percentage”). We do this because the number of requests is what you want to convey here.

    Screenshot showing Google Sheets’ Chart Editor

    Go ahead—tweak the colors to your liking. And ditch Arial while you’re at it.

    Download-size chart

    The download-size-by-purpose pie chart is very similar. Select the columns Purpose Labels and Purpose Download Size (columns G and I in my example); then go to Insert > Chart. Scroll down the list of possible charts and choose a pie chart. Be sure to check the box marked “Use column G as labels.”

    Under the “Customization” tab, edit the Title to say “Download Size”; under “Slice,” be sure “Value” is selected as well. We do this so we can indicate the total KB for each purpose.

    Or, you can grab a ready-made template. If you want to see a completed assessment, check out the one I did on an A List Apart article. I’ve also made a blank file with a lot of the trickier spreadsheet stuff already done. Feel free to go to File > Make a Copy so you can play around with it. You just need to get your page data from WebPagetest and paste in the three columns. After that, you can start your line-by-line assessment.

    Telling the good from the bad

    If you show your data to a stakeholder, they may be surprised by how much page weight goes to things like ads or analytics. On the other hand, they might respond by asking what we should be aiming for. That question is a little harder to answer.

    Some benchmarks get bandied about—1 MB or less, a WebPagetest Speed Index of 1,000, a Google PageSpeed score of over 90, and so on. But those are very arbitrary parameters and, depending on your project, unattainable ideals.

    My suggestion? Do an assessment like this on your competitors. If you can come back to your stakeholders and show how two or three competitors stack up, and show them what you’re doing, that will go much further in championing performance.

    Remember that performance is never “done”—it can only improve. What might help your organization is doing assessments like this over time and presenting page performance as an ongoing series of bar charts. With a little effort (and luck), you should be able to demonstrate that the things your organization cares about are continually improving. If not, it will present a much more compelling case for why things need to change for the better.

    So you have some pretty charts. Now what?

    Your charts’ usefulness will vary according to the precise business needs and politics of your organization.

    For instance, let’s say you’re a developer, and a project manager asks you to add yet another ad-metrics script to your site. After completing an assessment like the one above, you might be able to come back and say, “Ads already constitute 40 percent of our page weight. Do you really want to pile on more?”

    Because you’ve ascribed purpose to your asset requests, you’ll be able to offer data like that. I once worked with a project manager who started pushing back on such requests because I was able to give them easy-to-understand data of this sort. I’m not saying it will always turn out this way, but you need to give decision makers information they can grasp.

    Remember, too, that you are in charge of the Purpose column. You can make up any purpose you want. Interested in the impact that movie files have on your site relative to everything else? Make one of your purposes “Movies.” Want to call out framework files versus files you personally author? Go for it!

    I hope that this article has made you want to consider, and reconsider, each and every thing you download on a given page. Each and every request. And, in the process of doing this, I hope you are equipped to call out by purpose every item you ask your users to download. That will allow you to talk with your stakeholders in a way that they understand, and will help you make the case for better performance choices.


  • Web Maintainability Industry Survey: How Do We Maintain?

    A note from the editors: As a community, we can learn so much from discovering what other developers are doing around the world. We encourage everyone to participate in this very brief survey created by Jens Oliver Meiert. Jens will share the results—and an updated guide to web maintainability based on the findings—in a few weeks.

    How often do we consider the maintenance and general maintainability of our websites and apps? What steps do we actively take to make and keep them maintainable? What stands in the way when we maintain our and other people’s projects?

    Many of us, as web developers, know very well how to code something. But whether we know just as well how to maintain what we—and others—have written, that is not so clear. Our bosses and clients may not always think about maintenance down the road, either.

    As an O’Reilly author and former Googler, I’ve been studying the topic of maintainability since 2008—and we have yet to gather our industry’s views on the subject. To help us all get a better picture of how we maintain and how we can maintain more effectively, I set up a brief, unassuming survey (announcement) and kindly ask for your assistance.

    The survey aims to collect specific practices and resources—in other words, your views on current practices (both useful and harmful) and everything you find helpful:

    • What helps maintenance?
    • What prevents maintenance?
    • What resources do developers turn to for improving maintainability?

    The outcome of the survey and an updated guide to web maintainability will be published in a few weeks on my website, meiert.com (and noted on my Twitter profile). Thank you for your support.

  • Fait Accompli: Agentive Tech Is Here

    A note from the editors: We’re pleased to share an excerpt from Chapter 2 of Chris Noessel's new book, Designing Agentive Technology: AI That Works for People, available now from Rosenfeld Media. For a limited time, ALA readers can get 20% off Chris's book by using the code 'ALADAT' on the Rosenfeld Media site.

    Similar to intelligence, agency can be thought of as a spectrum. Some things are more agentive than others. Is a hammer agentive? No. I mean if you want to be indulgently philosophical, you could propose that the metal head is acting on the nail per request by the rich gestural command the user provides to the handle. But the fact that it’s always available to the user’s hand during the task means it’s a tool—that is, part of the user’s attention and ongoing effort.

    Less philosophically, is an internet search an example of an agent? Certainly the user states a need, and the software rummages through its internal model of the internet to retrieve likely matches. But this direct, practically instantaneous cause-and-effect makes it much like the hammer. Still a tool.

    But as you saw before, when Google lets you save that search, such that it sits out there, letting you pay attention to other things, and lets you know when new results come in, now you’re talking about something that is much more clearly acting on behalf of its user in a way that is distinct from a tool. It handles tasks so that you can use your limited attention on something else. So this part of “acting on your behalf”—that it does its thing while out of sight and out of mind—is foundational to the notion of what an agent is, why it’s new, and why it’s valuable. It can help you track something you would find tedious, like a particular moment in time, or a special kind of activity on the internet, or security events on a computer network.

    To do any of that, an agent must monitor some stream of data. It could be something as simple as the date and time, or a temperature reading from a thermometer, or it could be something unbelievably complicated, like watching for changes in the contents of the internet. It could be data that is continuous, like wind speed, or irregular, like incoming photos. As it watches this data stream, it looks for triggers and then runs over some rules and exceptions to determine if and how it should act. Most agents work indefinitely, although they could be set to run for a particular length of time or when any other condition is met. Some agents like a spam filter will just keep doing their job quietly in the background. Others will keep going until they need your attention, and some will need to tell you right away. Nearly all will let you monitor them and the data stream, so you can check up on how they’re doing and see if you need to adjust your instructions.

    So those are the basics. Agentive technology watches a datastream for triggers and then responds with narrow artificial intelligence to help its user accomplish some goal. In a phrase, it’s a persistent, background assistant.
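
    To make that loop concrete, here is a minimal sketch in TypeScript of the watch-a-stream, check-triggers, act-or-notify cycle described above. It illustrates the pattern only; every name in it is hypothetical, not code from the book.

    // One observation from whatever stream the agent monitors.
    type Reading = { value: number; at: Date };

    interface Agent {
      trigger(r: Reading): boolean;   // do the rules say this reading demands action?
      act(r: Reading): void;          // do the work on the user's behalf
      notify?(r: Reading): void;      // optionally escalate to the user
    }

    // Example: a fare-watching agent with a user-set price ceiling.
    const fareWatcher: Agent = {
      trigger: (r) => r.value <= 300,
      act: (r) => console.log(`Booking ticket at $${r.value}`),
      notify: (r) => console.log(`Fare alert: $${r.value} at ${r.at.toISOString()}`),
    };

    // The persistent background loop: most agents run indefinitely.
    async function run(agent: Agent, stream: AsyncIterable<Reading>): Promise<void> {
      for await (const reading of stream) {
        if (agent.trigger(reading)) {
          agent.act(reading);
          agent.notify?.(reading);
        }
      }
    }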

    If those are the basics, there are a few advanced features that a sophisticated agent might have. It might infer what you want without your having to tell it explicitly. It might adapt machine learning methods to refine its predictive models. It might gently fade away in smart ways such that the user gains competence. You’ll learn about these in Part II, “Doing,” of this book, but for now it’s enough to know that agents can be much smarter than the basic definition we’ve established here.

    How Different Are Agents?

    Since most of the design and development process has been built around building good tools, it’s instructive to compare and contrast them to good agents—because they are different in significant ways.

    Table 2.1: Comparing Mental Models

    Tool-based model: A good tool lets you do a task well.
    Agent-based model: A good agent does a task for you per your preferences.

    Tool-based model: A hammer might be the canonical model.
    Agent-based model: A valet might be the canonical model.

    Tool-based model: Design must focus on having strong affordances and real-time feedback.
    Agent-based model: Design must focus on easy setup and informative touchpoints.

    Tool-based model: When it’s working, it’s ready-to-hand, part of the body almost unconsciously doing its thing.
    Agent-based model: When the agent is working, it’s out of sight. When a user must engage its touchpoints, they require conscious attention and consideration.

    Tool-based model: The goal of the designer is often to get the user into flow (in the Mihaly Csikszentmihalyi sense) while performing a task.
    Agent-based model: The goal of the designer is to ensure that the touchpoints are clear and actionable, to help the user keep the agent on track.

    Drawing a Boundary Around Agentive Technology

    To make a concept clear, you need to assert a definition, give examples, and then describe its boundaries. Some things will not be worth considering because they are obviously in; some things will not be worth considering because they are obviously out; but the interesting stuff is at the boundary, where it’s not quite clear. What is on the edge of the concept, but specifically isn’t the thing? Reviewing these areas should help you get clear about what I mean by agentive technology and what lies beyond the scope of my consideration.

    It’s Not Assistive Technology

    Artificial narrow intelligences that help you perform a task are best described as assistants, or assistive technology. We need to think as clearly about assistive tech as we do agentive tech, but we already have a solid foundation for designing assistive tech. We have been working on those foundations for the last seven decades or so, and recent work with heads-up displays and conversational UI is making headway toward best practices for assistants. It’s worth noting that the design of agentive systems will often entail designing assistive aspects, but they are not the same thing.

    It seems subtle at first, but consider the difference between two ways to get an international airline ticket to a favorite destination. Assistive technology would work to make all your options and the trade-offs between them apparent, helping you avoid spending too much money or winding up with a miserable, five-layover flight, as you make your selection. An agent would vigilantly watch all airline offers for the right ticket and pipe up when it had found one already within your preferences. If it was very confident and you had authorized it, it might even have made the purchase for you.

    It’s Not Conversational Agents

    “Agent” has been used traditionally in services to mean “someone who helps you.” Think of a customer service agent. The help they give you is, 99 percent of the time, synchronous. They help you in real time, in person, or on the phone, doing their best to minimize your wait. In my mind, this is much more akin to an assistant. But even that’s troubling since “assistant” has also been used to mean “that person who helps me at my job” both synchronously—as in “please take dictation”—and agentively—as in “hold all my calls until further notice.”

    These blurry usages are made even blurrier because human agents and assistants can act in both agentive and assistive ways. But since I have to pick, given the base meanings of the words, I think an assistant should assist you with a task, and an agent takes agency and does things for you. So “agent” and “agentive” are the right terms for what I’m talking about.

    Complicating that rightness is a recent trend in interaction design: conversational user interfaces, or chatbots. These are distinguished by having users work in a command line interface inside a chat framework, interacting with software that is pretty good at understanding and responding to natural language. Canonical examples feature users purchasing airline tickets (yes, like a travel agent) or movie tickets.

    Because these mimic the conversations one might have with a customer service agent, they have been called conversational agents. I think they would be better described as conversational assistants, but nobody asked me, and now it’s too late. That ship has sailed. So, when I speak of agents, I am not talking about conversational agents. Agentive technology might engage its user through a conversational UI, but they are not the same thing.

    It’s Not Robots

    No. But holy processor do we love them. From Metropolis’ Maria to BB-8 and even GLaDOS, we just can’t stop talking and thinking about them in our narratives.

    A main reason I think this is the case is because they’re easy to think about. We have lots of mental equipment for dealing with humans, and robots can be thought of as a metal-and-plastic human. So between the abstraction that is an agent, and the concrete thing that is a robot, it’s easy to conflate the two. But we shouldn’t.

    Another reason is that robots promise—as do agents—“ethics-free” slave labor (please note the irony marks, and see Chapter 12 “Utopia, Dystopia, and Cat Videos” for plenty of ethical questions). In this line of thinking, agents work for us, like slaves, but we don’t have to concern ourselves about their subservience or even subjugation the same way we must consider a human, because the agents and robots are programmed to be of service. There is no suffering sentience there, no longing to be free. For example, if you told your Nest Thermostat to pursue its dreams, it should rightly reply that its dream is to keep you comfortable year round. Programming it for anything else might frustrate the user, and if it is a general artificial intelligence, be cruel to the agent.

    Of course, robots will have software running them, which, if they are to be useful, will at times be agentive. But while we expect a robot’s agent to stay in place, coupled to its body the way we are to ours, that’s not necessarily the case with an agent. For example, my health agent may reside on my phone for the most part, but tap into my bathroom scale when I step on it, parley with the menu when I’m at a restaurant, pop onto the cross-trainer at the gym, and jump to the doctor’s augmented reality system when I’m in her office. So while a robot may house agentive technology, and an agent may sometimes occupy a given robot, these two elements are not tightly coupled.

    It’s Not Service by Software

    I actually think this is a very useful way to think about agentive tech: service delivered by smart software. If you have studied service design, then you have a good grounding in the user-centered issues around agentive design. Users often grant agency to services to act on their behalf. For example, I grant the mail service agency to deliver letters on my behalf and agree to receive letters from others. I grant my representative in government agency to legislate on my behalf. I grant the human stock portfolio manager agency to do right by my retirement. I grant the anesthesiologist agency to keep me knocked out while keeping me alive, even though I may never meet her.

    But where a service delivers its value through humans working directly with the user or delivering the value “backstage,” out of sight, an agent’s backstage is its programming and the coordination with other agents. In practice, sophisticated agents may entail human processes, but on balance, if it’s mostly software, it’s an agent rather than a service. And where a service designer can presume the basic common senses and capabilities of any human in its design, those things need to be handled much differently when we’re counting on software to deliver the same thing.

    It’s Not Automation

    If you are a distinguished, long-time student of human-computer interaction, you will note similar themes from the study of automation and what I’m describing. But where automation has as its goal the removal of the human from the system, agentive technology is explicitly in service to a human. An agent might have some automated components, but the intentions of the two fields of study are distinct.

    Hey Wait—Isn’t Every Technology an Agent?

    Hello, philosopher. You’ve been waiting to ask this question, haven’t you? A light switch, you might argue, acts as an agent, monitoring a data stream that is the position of the knife switch. And when that switch changes, it turns the light on or off, accordingly.

    Similarly, a key on a keyboard watches its momentary switch and when it is depressed, helpfully sends a signal to a small processor on the keyboard to translate the press to an ASCII code that gets delivered to the software that accumulates these codes to do something with them. And it does it all on your behalf. So are keys agents? Are all state-based machines? Is it turtles all the way down?

    Yes, if you want to be philosophical about it, that argument could be made. But I’m not sure how useful it is. A useful definition of agentive technology is less of a discrete and testable aspect of a given technology, and more of a productive way for product managers, designers, users, and citizens to think about this technology. For example, I can design a light switch when I think of it as a product, subject to industrial design decisions. But I can design a better light switch when I think of it as a problem that can be solved either manually with a switch or agentively with a motion detector or a camera with sophisticated image processing behind it. And that’s where the real power of the concept comes from. Because as we continue to evolve this skin of technology that increasingly covers both our biology and the world, we don’t want it to add to people’s burdens. We want to alleviate them and empower people to get done what needs to be done, even if we don’t want to do it. And for that, we need agents.

Search Engine Watch
Keep updated with major stories about search engine marketing and search engines as published by Search Engine Watch.

ClickZ News
Breaking news, information, and analysis.

PCWorld

CNN.com - RSS Channel - Tech
CNN.com delivers up-to-the-minute news and information on the latest top stories, weather, entertainment, politics and more.
