
News from the world of web design and SEO

Here is syndicated news from several of the leading sites in the field of web design and SEO (search engine optimization).

A List Apart: The Full Feed
Articles for people who make web sites.
  • Integrating Animation into a Design System

    Keeping animation choreography cohesive from the outset of a project can be challenging, especially for small companies. Without a dedicated motion specialist on the team, it can be difficult to prioritize guidelines and patterns early in the design process. What’s more likely to happen is that animations will be added as the product develops.

    Unsurprisingly, the ad-hoc approach can lead to inconsistencies, duplications, and rework in the long run. But it also provides space for creative explorations and discoveries of what works and what doesn’t. As useful as it is to be able to establish system foundations early, it is also ok to let the patterns emerge organically as your team experiments and finds their own voice in motion.

    Once there are enough animations, you might start thinking about how to ensure some consistency, and how to reuse existing patterns rather than recreate them from scratch every time. How do you transition a few odd animations to a cohesive system? I find it helpful to start by thinking about the purpose of animations and the feel they’re designed to evoke.

    Start with purpose and feel

    Purpose

    Like any other element in a design system, animations must have a purpose. To integrate animation, start by looking through your interface and noting how and why you use animations in your particular product and brand.

    For example, at FutureLearn we noticed that we primarily use animation in three ways—to indicate a state change, to add emphasis, or to reveal extra information:

    • A state change shows that an object has changed state due to user interaction. For example, a state can change on hover or on click. Animation here is used to soften the transition between states.
    • Emphasis animations are used to draw attention to specific information or an action, for example a nudge to encourage users to progress to the next step in the course.
    • Reveal animations are used to hide and reveal extra information, such as a menu being hidden to the side, a drop down, or a popover.

    There are no “standard” categories for the purposes of animations. Some products use a lot of standalone animations, such as animated tutorials. Some use screen transitions, others don’t. Some make personality and brand part of every animation, others group them into their own category, like in the Salesforce Lightning Design System.

    Animation types in Salesforce Lightning Design System are categorized in a different way to FutureLearn.

    The categories are specific to your interface and brand, and to how you use animation. They shouldn’t be prescriptive. Their main value is to articulate why your team should use animation, in your specific project.

    Feel

    As well as having a purpose in helping the user understand how the product works, animation also helps to express brand personality. So another aspect to consider is how animation should feel. In “Designing Interface Animation,” Val Head explains how adjectives describing brand qualities can be used for defining motion. For example, a quick soft bouncy motion can be perceived as lively and energetic, whereas steady ease-in-outs feel certain and decisive.

    Brand qualities translated to motion

    Brand feel | Animation feel | Effect examples
    Lively and energetic | Quick and soft | Soft bounce, anticipation, soft overshoot
    Playful and friendly | Elastic or springy | Squash and stretch, bouncy easing, wiggle
    Decisive and certain | Balanced and stable | Ease-in, ease-out, ease-in-out
    Calm and soft | Small, soft movements or no movement at all | Opacity, color, or blur changes; scale changes

    As you look through the animation examples in your interface, list how the animation should feel, and note particularly effective examples. For example, take a look at the two animations below. While they’re both animating the entrance and exit of a popover, the animations feel different. The Marvel example on the left feels brisk through the use of bouncy easing, whereas the small movement combined with opacity and blur changes in the FutureLearn example on the right make it feel calm and subtle.

    Popover animation on Marvel (left) and FutureLearn (right).

    There’s probably no right or wrong way to animate a popover; it all depends on your brand and how you choose to communicate through motion. In your interface you might begin to notice animations that have the same purpose but entirely different feels. Take note of the ones that feel right for your brand, so that you can align the other animations to them later on.

    Audit existing animations

    Once you have a rough idea of the role animation plays in your interface and how it should feel, the next step is to standardize existing animations. Like an interface inventory, you can conduct an inventory focused specifically on animations. Start by collecting all the existing animations. They can be captured with QuickTime or another screen recording application. At the same time, keep a record of them in a Google Doc, Keynote, or an Excel file—whatever suits you.

    Based on the purpose you defined earlier, enter categories, and then add the animations to the categories as you go. As you go through the audit, you might adjust those categories or add new ones, but it helps not to have to start with a blank page.

    Example of initial categories for collecting animations in a Google Doc.

    For each animation add:

    • Effect: The effect might be difficult to describe at first (Should it be “grow” or “scale,” “wiggle” or “jiggle”?). Don’t worry about finding the right words yet; just describe what you see. You can refine it later.
    • Example: This could be a screenshot of the animated element with a link to a video clip, or an embedded gif.
    • Timing and easing: Write down the values for each example, such as 2 seconds ease.
    • Properties: Write down the exact values that change, such as color or size.
    • Feel: Finally, add the feel of the animation—is it calm or energetic, sophisticated and balanced, or surprising and playful?

    After the inventory of animations at FutureLearn, we ended up with a document with about 22 animations, grouped into four categories. Here’s the state change category.

    The “State Change” page from FutureLearn’s animation audit, conducted in a Google Doc.

    Define patterns of usage

    Once you’ve collected all the animations, you can define patterns of usage, based on the purpose and feel. For example, you might notice that your emphasis animations typically feel energetic and playful, and that your state change transitions are more subtle and calm.

    If these are the tones you want to strike throughout the system, try aligning all the animations to them. To do that, take the examples that work well (i.e., achieve the purpose effectively and have the right feel) and try out their properties with other animations from the same category. You’ll end up with a handful of patterns.

    Animation patterns on FutureLearn, grouped by purpose and feel

    Purpose | Animation effects | Feel
    Interactive state change | Color, 2s ease; opacity, in 0.3s / out 1.1s ease; scale, 0.4s ease | Calm, soft
    Emphasis | Energetic pulse, 0.3s ease-in; subtle pulse; wiggle, 0.5s ease-in-out | Energetic, playful
    Info reveal | Slide down, 0.4s swing; slide up, 0.7s ease; fadeInUp, 0.3s ease; rotate, 0.3s ease | Certain, decisive, balanced

    Develop vocabulary to describe effects

    Animation effects can be hard to capture in words. As Rachel Nabors noted in “Communicating Animations,” sometimes people would start with “friendly onomatopoeias: swoosh, zoom, plonk, boom,” which can be used as a starting point to construct shared animation vocabularies.

    Some effects are common and can be named after the classic animation principles (squash and stretch, anticipation, follow through, slow in and out) or can even be borrowed from Keynote (fade in, flip, slide down, etc.); others will be specific to your product.

    Vocabulary of animations in Salesforce Lightning Design System.
    Movement types in IBM Design Language.

    There might also be animation effects unique to your brand that would require a distinctive name. For example, TED’s “ripple” animation in the play button is named after the ripple effect of their intro videos.

    The ripple effect in the intro video on TED (left) mirrored in the play button interaction (right).

    Specify building blocks

    For designers and developers, it is useful to specify the precise building blocks they can mix and match to create new animations. Once you have the patterns and effects, you can extract precise values—timing, easing, and properties—and turn them into palettes. The animation palettes are similar to color swatches or a typographic scale.

    Timing

    Timing is crucial in animation. Getting the timing right is not so much about perfect technical consistency as making sure that the timing feels consistent. Two elements animated with the same speed can feel completely different if they are different sizes or travel different distances.

    The idea of “dynamic duration” in Material Design focuses on how fast something needs to move versus how long it should take to get there:

    Rather than using a single duration for all animations, adjust each duration to accommodate the distance travelled, an element’s velocity, and surface changes.

    Sarah Drasner, the author of SVG Animations, suggested that we should deal with timing in animation like we deal with headings in typography. Instead of having a single value, you’d start with a “base” and provide several incremental steps. So instead of h1, h2 and h3, you’d have t1, t2, t3.

    Depending on the scale of the project, the timing palette might be simple, or it might be more elaborate. Most of the animations on FutureLearn use a base timing of 0.4s. If that timing doesn’t feel right, the object is most likely traveling a shorter distance (in which case use “Shorter time”) or a longer distance (in which case use “Longer time”):

    • Shorter time: 0.3s: Shorter travel distance
    • Base: 0.4s: Base timing
    • Longer time: 0.6s: Longer distance traveled
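
    To make this concrete, here is a minimal sketch of such a timing palette expressed in code. The token values are FutureLearn’s from the list above, but the helper function and its distance thresholds are hypothetical, invented purely for illustration:

    # Timing tokens from the palette above, in seconds.
    TIMING = {
        "shorter": 0.3,  # shorter travel distance
        "base": 0.4,     # base timing
        "longer": 0.6,   # longer distance traveled
    }

    def duration_for(distance_px: float) -> float:
        # Pick a duration from how far the element travels.
        # The 100px/400px cutoffs are invented for illustration.
        if distance_px < 100:
            return TIMING["shorter"]
        if distance_px > 400:
            return TIMING["longer"]
        return TIMING["base"]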

    The duration guidelines for the Carbon Design System apply a similar idea, relating duration to the “magnitude of change”:

    Duration guidelines in Carbon Design System.

    Easing

    Different easing values can give an animation a distinctive feel. It’s important to specify when to use each value. The easing palettes in the Marvel Styleguide provide a handy guide for when to use each value, e.g. “Springy feel. Good for drawing focus.”

    Easing palette in the Marvel Styleguide.

    An easing palette can also be more generic and be written as a set of guidelines, as is done in the Salesforce Lightning Design System.

    For FutureLearn, we kept it even simpler and limited it to two types of easing: ease-out “for things that move” (such as scale changes and slide up/down) and linear “for things that don’t move” (such as color or opacity changes).

    Properties

    In addition to timing and easing values, it is useful to specify the properties that typically change in your animations, such as:

    • Opacity
    • Color
    • Scale
    • Distance
    • Rotation
    • Blur
    • Elevation

    Again, you can specify those properties as palettes with a base number, and the incremental steps to support various use cases. For example, when specifying scaling animations at FutureLearn, we noticed that the smaller an object is, the more it needs to scale in proportion to its size, for the change to be visible. A palette for scaling objects reflects that:

    • Small: ×0.025 (large objects, e.g., an image thumbnail)
    • Base: ×0.1 (medium objects, e.g., a button)
    • Large: ×0.25 (small objects, e.g., an icon)

    Although there’s no perfect precision to how these properties are set up, they provide a starting point for the team and help us reduce inconsistencies in our motion language.

    Agree on the guiding principles

    If you have guiding principles, it’s easier to point to them when something doesn’t fit. Some of the principles may be specific to how your team approaches animation. For example:

    Guiding principles for motion in Salesforce Lightning Design System are kept short and simple.

    If your team is not yet confident with animation, it may be worth including some of the more general principles, such as “reserve it for the most important moments of the interaction” and “don’t let it get in the way of completing a task.”

    The guiding principles section can also include rationale for using animation in your product, and the general feel of your animation and how it connects with your brand. For example, IBM Design Language uses the physical movement of machines to extract the qualities they want to convey through animations, such as precision and accuracy.

    From the powerful strike of a printing arm to the smooth slide of a typewriter carriage, each machine movement serves a purpose and every function responded to a need.
    IBM Design Language

    In IBM’s Design Language, the rhythmic oscillation of tape reels in motion is used metaphorically to support the user’s waiting experience.

    Guiding principles can also include spatial metaphors, which can provide a helpful mental model to people trying to create animations. Google’s Material Design is a great example of how thinking of the interface as a physical “material” can provide a common reference for designers and developers when thinking about motion in their applications.

    In Material Design, “Material can push other material out of the way.”

    To sum up

    When integrating animation in design systems, try viewing it in relation to three things: guiding principles, patterns of usage, and building blocks. Guiding principles provide general direction, patterns of usage specify when and how to apply the effects, and building blocks aid the creation of new animations. Even if your animations were initially created without a plan, bringing them together in a cohesive, documented system can help you update and build on what you have in an intentional and brand-supporting way.

    Further reading:
    Creating Usability with Motion: The UX in Motion Manifesto
    Web Animation Past, Present, and Future
    Designing Interface Animation
    Animation in Responsive Design


  • Practical User Research: Creating a Culture of Learning in Large Organizations

    Enterprise companies are realizing that understanding customer needs and motivations is critical in today’s marketplace. Building and sustaining new user research programs to collect these insights can be a major struggle, however. Digital teams often feel thwarted by large organizations that are slow to change and have many competing priorities for financial investments.

    As a design consultant at Cantina, I’ve seen companies at wildly different stages of maturity related to how customer research impacts their digital work. Sometimes executives struggle to understand the value without quantifiable numbers. Other times engineering teams see customer research and usability testing as a threat to delivery dates.

    While you can’t always tackle these issues directly, the great thing about large organizations is that they’re brimming with people, tools, and work practices forming an overall culture. By understanding and utilizing each of these organizational resources, digital teams can create an environment focused on learning from customers.

    I did some work recently for a client I’ll call WorkTech, who had this same struggle aligning their digital projects with the needs of their customers. WorkTech was attempting to redesign their entire ecommerce experience with a lean budget and team. In a roughly six month engagement, two of us from Cantina were tasked with getting the project back on track with a user-centered design approach. We had to work fast and start bringing customer insights to bear while moving the project forward. Employing a pragmatic approach that looked at people, tools, and work practices with a fresh set of eyes helped us create an environment of user research that better aligned the redesign with the needs of WorkTech’s customers.

    Get comfortable talking to People in different roles

    Effective user research programs start and end with people. Recognizing relationships and the motivations held by everyone interacting with a product or service encourages goodwill and can unearth key connections and other, less tangible benefits. To create and sustain a culture of learning in your company, find a group of users to interview—get creative, if you have to—and enlist the support of teammates and stakeholders.

    Begin by taking stock of anyone connected to your product. You won’t always find a true set of end users internally, but everyone can help raise awareness of the value of user research—and they can help your team sustain forward progress. Ask yourself the following questions to find allies and research resources:

    • What departments use your product indirectly, but have connections to people in the user roles you’re targeting?
    • Is there a project sponsor who can help sell the value of research and connect you to additional staff that can assist in some capacity?
    • Are there employees in other departments whose individual goals align with getting better feedback from users?
    • Are there departments within the organization (sales, customer service) who can offer connections to customers wanting to provide candid feedback?

    Our WorkTech project didn’t have a formal research budget for recruiting users (or any other research task). What we did have going for us was a group of internal users who gave our team immediate access to an initial pool of research participants. The primary platform we were hired to help redesign was used by two groups: WorkTech employees and the customers they interacted with. Over time, our internal users were able to connect us with their external counterparts, amplifying the number of people offering feedback significantly.

    Maximize the value of every interview

    While interviewing external customers, we kept an eye on the long term success of our research program and concluded each session by asking participants:

    • If they’d be willing to join another session in the future (most were willing)
    • If they could share names of anyone else in their network (internal or external) they thought would have feedback to offer

    During each conversation, we also identified distinct areas of expertise for each user. This allowed us to better target future usability testing sessions on specific pieces of functionality.

    Using this approach, our pool of potential participants grew exponentially and we gained insight into the shared motivations of different user personas. Taking stock of such different groups of people using the platform also revealed opportunities that helped us prioritize different aspects of the overall redesign effort.

    Find helpful Tools that are already available or free

    Tools can’t create an effective user research program on their own, but they are hugely helpful during the actual execution of research. While some organizations have an appetite for purchasing dedicated “user research” platforms able to handle recruitment, scheduling, and session recordings, many others already have tools in place that bring value to the organization in different areas. If your budget is tiny (or nonexistent), you may be able to repurpose or extend the software and applications your company already uses in a way that can support talking to customers.

    Consider the following:

    • Are there current tools available in the organization (perhaps in other groups) that could be adapted for research purposes?
    • Are users already familiar with any tools or workflows you can utilize during research activities?
    • If new tools for automating specific tasks are out of budget, can your team build up repeatable templates and processes manually?

    We discovered early on at WorkTech that our internal user base had very similar toolsets because of company-wide technology purchases. Virtually all employees already had Cisco Webex installed and were familiar with remote conferencing and sharing their screen.

    WorkTech offices and customers were spread across the continental United States, so it made sense for our sessions to be remote, moderated conversations via phone and teleconference. Using Webex allowed the internal users to focus on the actual interview, avoiding the friction they might have felt attempting to use new technology.

    Leveraging pre-existing tools also meant we could expand our capabilities without incurring significant new costs. (The only other tool we introduced was a free InVision account, which allowed us to create simple prototypes of new UI concepts, conduct weekly usability tests, and document and share our findings quickly and easily.)

    Document and define templates as you go

    Many digital research tools are simply well-defined starting points—templates for the various types of communication and idea capture needed. If purchasing access to these automated tools is out of the question, using a little elbow grease can be equally effective over time.

    At WorkTech, maintaining good documentation trails minimized the time spent creating new materials for each round of research and testing. For repeatable tasks like creating scripts and writing recruitment emails, we simply saved and organized each document as we created it. This allowed us to build a library of reusable templates over time. Even though it was a bit of a manual effort, the payoff increased with every additional round of usability testing.

    Utilizing available tools eliminates another significant hurdle to getting started—time delays. In large organizations with tight purchase protocols, using repurposed and free tools can enable teams to get moving quickly. Filling in the remaining gaps with good documentation and repeatable templates covers a wide range of scenarios and doesn’t let finances act as a blocker when collecting insights from users. 

    Take a fresh look at your company’s Work Practices

    A culture of learning won’t be sustainable over the long term if the lessons learned from user research aren’t informing what is being built. Bringing research insights to bear on your product or site is where everything pays off, ensuring digital teams can focus on what delivers the highest value to customers.

    Being successful here requires a keen understanding of the common processes your organization uses to get things accomplished. By being aware of an organization’s current work practices (not some utopian version), digital teams can align what they’ve learned with practices that help ensure solutions get shipped.

    Dig into the work practices in your organization and identify ways to meet people where they are:

    • Are there centrally-located workspaces that can be used for posting insights and keeping research outcomes in front of the team?
    • Are there any regularly-scheduled meetings with diverse teams that would be an opportunity to present research findings?
    • Are there established sprint cycles or product management reviews you can align with research efforts?

    The WorkTech team collaborating with us on the project already had weekly meetings on the calendar, with an open agenda for high priority items. Knowing it would be important to get buy-in from this group, we set up our research and usability sessions each week on Tuesdays. This allowed us to establish a cadence where every Tuesday we tested prototypes, and every Wednesday we presented findings at the WorkTech team meeting. As new questions or design concepts to validate came up, the team was able to document them, pause any further debates, and move on to other topics of discussion. Everyone knew testing was a weekly occurrence, and within a few weeks even the most skeptical developer started asking us to get customer feedback on specific features they were struggling with.

    Schedule regular customer sessions even before you are “ready”

    Committing to a cadence of regular weekly sessions also allowed us to separate scheduling from test prep tasks. We didn’t wait to schedule sessions only when we desperately needed feedback. Because the time was already set aside each Tuesday, we simply had to develop questions and tests for the highest priority topic at that point in time. If something wasn’t quite ready, the next set of sessions was only a week away.

    Using these principles, we conducted 40+ sessions over the course of 12 weeks, gathering valuable insights from the two primary user groups. We were able to gather quantifiable data pointing to one design pattern over another, which minimized design debates and instilled confidence in the research program and the design. In addition to building relationships with users across the spectrum, the sessions helped us uncover several key interface issues that we were then able to design around.

    Even more valuable than the interface issues were the general use cases that came to light, where the website experience was only one component in a larger ecosystem of internal processes at customers’ organizations. These insights proved valuable for our redesign project, but also provided a deeper understanding of WorkTech’s customer base, helping to prove the value of research efforts to key stakeholders.

    Knowing the schedules and team norms in your organization is critical for creating a user research program whose insights get integrated into the design and development process. The insights of a single set of sessions are important, but creating a culture surrounding user research is more valuable to the long term success of your product or site, as is a mindset of ongoing empathy toward users.

    To help grow and sustain a culture of research, though, teams have to be able to prove the value in financial terms. Paul Boag said it elegantly in the third Smashing Magazine book: “Because cost is (often a) primary reason for not testing, money has to be part of your justification for testing.”

    The long term success of your program will likely be tied to how well you can prove its ROI in business terms, even though the methods described here minimize the need to ask for money. In other words, translate the value of user research to terms any business person can understand. Find ways to quantify the work hours currently lost to feature debates and building un-validated features, and you’ll uncover business costs that can be eliminated.

    User research doesn’t have to be a big dollar, corporate initiative. By paying attention to the people, tools, and work practices within an organization, your team can demonstrate the value of user research on the ground, which will open doors to additional resources in the future.

  • Team Conflict: Four Ways to Deflate the Discord that’s Killing Your Team

    It was supposed to be a simple web project. Our client needed a site that would allow users to create and deploy surveys and review the results. Aside from some APIs that weren’t done, I wasn’t very worried about the project. I was surprised that my product manager was spending so much time at the client’s office.

    Then, she explained the problem. It seemed that the leaders of product, UX and engineering didn’t speak to each other and, as a result, she had to walk from office to office getting information and decisions.

    When two people have a bad interaction, they can work it out or let the conflict grow, spreading it to other team members and their leaders.

    The conflicts probably started small. One bad interaction, then another, then people don’t like each other, then teams don’t work together well. The small scrape becomes a festering wound that slows things down, limits creativity and lowers morale.

    Somehow as a kid working my way through school I discovered I had a knack for getting around individuals or groups that were fighting with each other. I simply figured out who I needed to help me accomplish a task, and I learned how to convince, cajole or charm them into doing it. I went on to teach these skills to my teams.

    That sufficed for a while. But as I became a department head and an adviser to my clients, I realized it’s not enough to make it work. I needed to learn how to make it better. I needed to find a way to stop the infighting I’ve seen plague organizations my entire career. I needed to put aside my tendency to make the quick fix and have hard conversations.

    It’s messy, awkward and hard for team leaders to resolve conflict but the results are absolutely worth it. You don’t need a big training program, a touchy-feely retreat or an expensive consultant. Team members or team leads don’t have to like each other. What they have to do is find common ground, a measure of respect for one another, and a willingness to work together to benefit the project.

    Here are four ways to approach the problem.

    Start talking

    No matter how it looks at first, it’s always a people problem.
    Gerald M. Weinberg, The Secrets of Consulting: A Guide to Giving and Getting Advice Successfully

    Resist the urge to wait for the perfect time to address team conflict. There’s no such thing. There will always be another deadline, another rollout, another challenge to be met.

    In our office, a UX designer and product manager were having trouble getting along. Rather than take responsibility, they each blamed our “process” and said we needed to clarify roles and procedures. In other words, they each wanted to be deemed “in charge” of the project. Certainly I could have taken that bullet and begun a full-on assessment of our processes and structure. By taking the blame for a bad company framework, I could have dodged some difficult conversations.  But I knew our process wasn’t the problem.

    First, I coached the product manager to be vulnerable, not an easy thing for him to do. I asked him to share his concerns and his desire to have a more productive relationship with the UX designer. The PM’s willingness to be uncomfortable and open about his concerns lifted the tension. Once he acknowledged the elephant in the room—namely that the UX designer was not happy working with him—the designer became more willing to risk being honest. Eventually, they were able to find a solution to their disagreements on the project, largely because they were willing to give each other a measure of respect.

    The worst thing I’ve seen is when leaders move people from team to team hoping that they will magically find a group of people that work well together, and work well with them. Sometimes the relocated team members have no idea that their behavior or performance isn’t acceptable. Instead of solving the problem, this just spreads the dissatisfaction.

    Instead, be clear right from the beginning that you want teams that will be open about challenges, feel safe discussing conflicts, and be accountable for solving them.

    Have a clear purpose

    Although many aspects of our collective endeavor are open for discussion, choice of mountain is not among them.
    J. Richard Hackman, Leading Teams: Setting the Stage for Great Performances

    I was working on an enterprise CMS re-design and re-platform. Our weekly review and estimation sessions were some of the most painful meetings of my career. There was no trust or shared purpose—even estimating a simple task was a big fight.

    When purpose and priorities are murky you are likely to find conflict. When the team doesn’t know what mountain they are trying to climb, they tend to focus on the parts of the project that are most relevant to them. With each team member jealously guarding his or her little ledge, it’s almost impossible to have cooperation.

    This assault on productivity is likely because the project objective is non-existent, or muddled nonsense, or so broad the team doesn’t see how it can have an impact. Or, maybe the objective is a moving target, constantly shifting.

    Size can be a factor.  I’ve seen enterprise teams with clear missions and startups with such world-changing objectives they can’t figure out how to ship something that costs less than a million dollars.

    When I’m meeting with prospects or new clients I look at three areas to see if they are having this problem:

    • What language do they use to describe each other? Disconnected teams say “UX thinks,” “The dev team” or “product wants.” Unified teams say “we.”
    • How easy or hard is task estimation? Disconnected teams fight about the level of difficulty. Unified teams talk about tradeoffs and argue about what’s best for the product or customers.
    • Can they easily and consistently describe their purpose? Disconnected teams don’t have a crisp and consistent answer. Unified teams nod their heads when one of their members shares a concise answer.

    If a team is disconnected, it’s likely because you haven’t given them a common goal. A single email or a fancy deck isn’t enough. Make your objectives simple and repeat them so much that the team groans every time you start.

    Plan conversations

    Words do not sit in our brains in isolation. Each word is surrounded by its own connotations, memories and associations.
    Simon Lancaster, Winning Minds: Secrets From the Language of Leadership

    Years ago I was frustrated to tears by a manager who, I felt, took from me the product I spent two years building. I knew I needed to talk with him but I struggled to find a productive way to tell him why I was upset.  (Telling someone he is being a jackass is not productive.)

    A good friend in HR helped me script the conversation. It had three parts:

    • I really work well when…
    • This situation is bothering me because…
    • What I’d like to see happen is…

    Leaders have an important role to play in resolving issues. When a leader decides that their person is right and the other person is wrong, it turns a team problem into an organization problem. Instead, we should provide perspective and context, and show how actions could be misunderstood.

    Leaders also need to quickly, clearly, and strongly call out bad behavior. When I found out one of my people had raised their voice at a colleague, I made it clear that wasn’t acceptable and shouldn’t happen again. He admitted that he had lost his cool, apologized, and then we started working on resolving the situation.

    Require accountability

    Being responsible sometimes means pissing people off.
    General Colin Powell, former U.S. Secretary of State

    If you have a problem and you go to Holly Paul, an inspiring HR leader, you can expect that she will listen. You can also expect that she’ll work with you on a plan to resolve it. Most importantly you can expect she will make sure you are doing what you said you’d do when you said you would do it.

    Before I met Holly I would listen to problems then try to go solve them. Now I work with the person and tell them that I will be checking back with them, often putting the reminder in my calendar during the conversation so I don’t forget.

    Since I started focusing on fixing conflict, I’ve seen great changes on my team. Many of them have started dealing with people directly for the first time, fixing their issues and forging much stronger relationships. Our team is stronger and has a greater influence on the organization.

    It’s messy, awkward and hard. I’ve been working on this for a long time and I still make mistakes. I still don’t always want to push when I meet resistance. This will never be easy, but it will be worth it and it’s your responsibility as a leader. For however long these people are with you, you need to make them better as individuals and a unit.

    You don’t need a big training program, a touchy-feely retreat, or an expensive consultant. You just need to start doing the work every day. The rest will come.

  • Color Accessibility Workflows

    A note from the editors: We’re pleased to share an excerpt from Chapter 3 of Geri Coady's new book, Color Accessibility Workflows, available now from A Book Apart.

    The Web Content Accessibility Guidelines (WCAG) 2.0 contain recommendations from the World Wide Web Consortium (W3C) for making the web more accessible to users with disabilities, including color blindness and other vision deficiencies.

    There are three levels of conformance defined in WCAG 2.0, from lowest to highest: A, AA, and AAA. For text and images of text, AA is the minimum level that must be met.

    AA compliance requires text and images of text to have a minimum color contrast ratio of 4.5:1. In other words, the lighter color in a pair must have four-and-a-half times as much luminance (an indicator of how bright a color will appear) as the darker color. This contrast ratio is calculated to include people with moderately low vision who don’t need to rely on contrast-enhancing assistive technology, as well as people with color deficiencies. It’s meant to compensate for the loss in contrast sensitivity often experienced by users with 20/40 vision, which is half of normal 20/20 vision.

    Level AAA compliance requires a contrast ratio of 7:1, which provides compensation for users with 20/80 vision, or a quarter of normal 20/20 vision. People who have a degree of vision loss more than 20/80 generally require assistive technologies with contrast enhancement and magnification capabilities.

    Text that acts as pure decoration, nonessential text that appears in part of a photograph, and images of company logos do not strictly need to adhere to these rules. Nonessential or decorative text is, by definition, not essential to understanding a page’s content. Logos and wordmarks may contain textual elements that are essential to broadcasting the company’s visual identity, but not to conveying important information. If necessary, the logo may be described by using an alt attribute for the benefit of a person using screen-reader software. To learn more, check out accessibility specialist Julie Grundy’s blog post on Simply Accessible, where she goes into the best practices around describing alt attributes.

    Text size plays a big role in determining how much contrast is required. Gray text with an RGB value of (150,150,150) on a pure white background passes the AA level of compliance, as long as it’s used in headlines above 18 points. Gray text with an RGB value of (110,110,110) passes the AA level at any text size, and will be AAA compliant if used as a headline above 18 points (Fig 3.1). A font displayed at 14 points may have a different level of legibility compared to another font at 14 points due to the wide diversity of type styles, so keep this in mind, especially when using very thin weights.

    Fig 3.1: Text size also plays a role when calculating compliance ratios.

    Personally, I recommend that all body text be AAA compliant, with larger headlines and less important copy meeting AA compliance as a bare minimum. Keep in mind that these ratios refer to solid-colored text over solid-colored backgrounds, where a single color value can be measured. Overlaying text on a gradient, pattern, or photograph may require a higher contrast value or alternative placement, such as over a solid-colored strip, to provide sufficient legibility.

    These compliance ratios are often what folks mean when they claim that achieving accessible design by “ticking off boxes” can only come at the cost of stifled creativity or restricted color choices. But that simply isn’t true. Experimentation with a color-contrast checker proves that many compliance ratios are quite reasonable and easy to achieve—especially if you are aware of the rules from the beginning. It would be much more frustrating to try to shift poor color choices into something compliant later in the design process, after branding colors have already been chosen. If you fight your battles up front, you’ll find you won’t feel restricted at all.

    If all this talk of numbers seems confusing, I promise there’ll be no real math involved on your side. You can easily find out if your color pairs pass the test by using a color-contrast checker.
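
    If you’d rather script the check than paste colors into a web tool, the formula itself is published in WCAG 2.0. Here is a small sketch of it in Python (the function names are my own):

    # A sketch of the WCAG 2.0 contrast-ratio math.
    def linearize(channel: int) -> float:
        # Convert one sRGB channel (0-255) to its linear value.
        c = channel / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    def luminance(rgb: tuple) -> float:
        # Relative luminance of an (R, G, B) color.
        r, g, b = (linearize(v) for v in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def contrast_ratio(color_a: tuple, color_b: tuple) -> float:
        # Contrast ratio between two colors, always at least 1:1.
        lighter, darker = sorted((luminance(color_a), luminance(color_b)), reverse=True)
        return (lighter + 0.05) / (darker + 0.05)

    # The (110,110,110) gray from Fig 3.1 on pure white: about 5.1:1, which
    # passes AA (4.5:1) for body text but falls short of AAA (7:1).
    print(round(contrast_ratio((110, 110, 110), (255, 255, 255)), 2))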

    Contrast checkers


    One of my favorite tools is Lea Verou’s Contrast Ratio (Fig 3.2). It gives you the option of entering a color code for a background and a color code for text, and it calculates the ratio for you.

    Fig 3.2: Lea Verou’s Contrast Ratio checker.

    Contrast Ratio supports color names, hex color codes, RGBA values, HSLA values, and even combinations of each. Supporting RGBA and HSLA values means that Verou’s tool supports transparent colors, a handy feature. You can easily share the results of a check by copying and pasting the URL. Additionally, you can modify colors by changing the values in the URL string instead of using the page’s input fields.

    Another great tool that has the benefit of simultaneously showing whether a color combination passes both AA and AAA compliance levels is Jonathan Snook’s Colour Contrast Check (Fig 3.3).

    Fig 3.3: Jonathan Snook’s Colour Contrast Check.

    At the time of writing, Colour Contrast Check doesn’t support HSL alpha values, but it does display the calculated brightness difference and color difference values, which might interest you if you want a little more information.

    Color pickers


    If you need help choosing accessible colors from the start, try Color Safe. This web-based tool helps designers experiment with and choose color combinations that are immediately contrast-compliant. Enter a background color as a starting point; then choose a standard font family, font size, font weight, and target WCAG compliance level. Color Safe will return a comprehensive list of suggestions that can be used as accessible text colors (Fig 3.4).

    Fig 3.4: Color Safe searches for compliant text colors based on an existing background color.

    Adjustment tools

    When faced with color choices that fail the minimum contrast ratios, consider using something like Tanaguru Contrast Finder to help find appropriate alternatives (Fig 3.5). This incredibly useful tool takes a foreground and background color pair and then presents a range of compliant options comparable to the original colors. It’s important to note that this tool works best when the colors are already close to being compliant but just need a little push—color pairs with drastically low contrast ratios may not return any suggestions at all (Fig 3.6).

    Fig 3.5: This color pair is not AA compliant.

    Fig 3.6: A selection of Tanaguru’s suggested AA-compliant alternatives.

    There’s more where that came from!

    Check out the rest of Color Accessibility Workflows at A Book Apart.

  • The Mindfulness of a Manual Performance Audit

    As product owners or developers, we probably have a good handle on which core assets we need to make a website work. But rarely is that the whole picture. How well do we know every last thing that loads on our sites?

    An occasional web performance audit, done by hand, does make us aware of every last thing. What’s so great about that? Well, for starters, the process increases our mindfulness of what we are actually asking of our users. Furthermore, a bit of spreadsheet wizardry lets us shape our findings in a way that has more meaning for stakeholders. It allows us to speak to our web performance in terms of purpose, like so:

    Graphic of a pie chart showing performance audit results

    Want to be able to make something like that? Follow along.

    Wait, don’t we have computers for this sort of thing?

    A manual audit may seem like pointless drudgery. Why do this by hand? Can’t we automate this somehow?

    That’s the whole point. We want to achieve mindfulness—not automate everything away. When we take the time to consider each and every thing that loads on a page, we get a truer picture of our work.

    It takes a human mind to look at every asset on a page and assign it a purpose. This in turn allows us to shape our data in such a way that it means something to people who don’t know what acronyms like CSS or WOFF mean. Besides, who doesn’t like a nice pie chart?

    Here’s the process, step by step:

    1. Get your performance data in a malleable format.
    2. Extract the information necessary.
    3. Go item by item, assigning each asset request a purpose.
    4. Calculate totals, and modify data into easily understood units.
    5. Make fancy pie charts.

    The audit may take half an hour to an hour the first time you do it this way, but with practice you’ll be able to do it in a few minutes. Let’s go!

    Gathering your performance data

    To get started, figure out what URL you want to evaluate. Look at your analytics and try to determine which page type is your most popular. Don’t just default to your home page. For instance, if you have a news site, articles are probably your most popular page type. If you’re analyzing a single-page app, determine what the most commonly accessed view is.

    You need to get your network activity at that URL into a CSV/spreadsheet format. In my experience, the easiest way to do this is to use WebPagetest, whose premise is simple: give it a URL, and it will do an assessment that tries to measure perceived performance.

    Head over to WebPagetest and pop your URL in the big field on the homepage. However, before running the test, open the Advanced Settings panel. Make sure you’re only running one test, and set Repeat View to First View Only. This will ensure that you don’t have duplicate requests in your data. Now, let the test run—hit the big “Start Test” button.

    WebPagetest screenshot

    Once you have a results page, click the link in the top right corner that says “Raw object data”.

    WebPagetest screenshot showing raw object data

    A CSV file will download with your network requests set out in a spreadsheet that you can manipulate.

    Navigating & scrubbing the data

    Now, open the CSV file in your favorite spreadsheet editor: Excel, Numbers, or (my personal favorite) Google Sheets. The rest of this article will be written with Google Sheets in mind, though a similar result is certainly possible with other spreadsheet programs.

    At first it will probably seem like this file contains an unwieldy amount of information, but we’re only interested in a small amount of this data. These are the three columns we care about:

    • Host (column F)
    • URL (column G)
    • Object Size (column N)

    The other columns you can just ignore, hide, or delete. Or even better: select those three columns, copy them, and paste them into a new spreadsheet.
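
    If you’d rather script this scrubbing step, here is a minimal sketch using pandas. It assumes the exported CSV carries the header names used above (“Host”, “URL”, “Object Size”), and the filenames are hypothetical:

    # Keep only the three columns we care about and add an empty
    # Purpose column to be filled in by hand, row by row.
    import pandas as pd

    raw = pd.read_csv("webpagetest_raw_objects.csv")  # hypothetical filename
    audit = raw[["Host", "URL", "Object Size"]].copy()
    audit.insert(0, "Purpose", "")
    audit.to_csv("audit.csv", index=False)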

    Auditing each asset request

    With your pared-down spreadsheet, insert a new first column and label it “Purpose”. You can also include a Description/Comment column, if you wish.

    Screenshot of an audit spreadsheet

    Next, go down each row, line by line, and assign each asset request a purpose. I suggest something like the following:

    • Content (e.g., the core HTML document, images, media—the stuff users care about)
    • Function (e.g., functional JavaScript files that you have authored, CSS, webfonts)
    • Analytics (e.g., Google Analytics, New Relic, etc.)
    • Ads (e.g., Google DFP, any ad networks, etc.)

    Your Purpose names can be whatever you want. What matters is that your labels for each purpose are consistent—capitalization and all. They need to group neatly in order to generate the fancy charts later. (Pro tip: use data validation on this column to ensure consistency in your spreadsheet.)

    So how do you determine the purpose? Typically, the biggest clue is the “Host” column. You will, very quickly, start to recognize which hosts provide what. Your root URL will be where your document comes from, but you will also find:

    • CDN URLs like cloudfront.net, or cloudflare.com. Sometimes these have images (which are typically content); sometimes they host CSS or JavaScript files (functionality).
    • Analytics URLs like googletagservices.com, googletagmanager.com, google-analytics.com, or js-agent.newrelic.com.
    • Ad URLs like doubleclick.net or googlesyndication.com.

    If you’re ever unsure of a URL, either try it out yourself in your browser, or literally google the URL. (Hint: if you don’t recognize the URL right away, it’s most likely ad-related.)
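
    If you do these audits regularly, you could pre-fill the obvious rows from the Host column before the hand review. The sketch below is only a starting point: the host fragments are just the examples listed above, and every row still deserves a human look:

    # A first-pass guess at Purpose based on the Host value. Blank
    # results are deliberate: unknown hosts are left for the manual audit.
    PURPOSE_HINTS = {
        "google-analytics.com": "Analytics",
        "googletagmanager.com": "Analytics",
        "googletagservices.com": "Analytics",
        "js-agent.newrelic.com": "Analytics",
        "doubleclick.net": "Ads",
        "googlesyndication.com": "Ads",
    }

    def guess_purpose(host: str) -> str:
        for fragment, purpose in PURPOSE_HINTS.items():
            if fragment in host:
                return purpose
        return ""  # unknown: leave it for the human auditor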

    Mindfulness

    Just doing the steps above will likely be eye-opening for you. Stopping to consider each asset on a page, and why it’s there, will help you be mindful of every single thing the page loads.

    You may be in for some surprises the first time you do this. A few unexpected items might turn up. A script might be loaded more than once. That social widget might be a huge page weight. Requests coming from ads might be more numerous than you thought. That’s why I suggested a Description/Comment column—you can make notes there like “WTF?” and “Can we remove this?”

    Augmenting your data

    Before you can generate fancy pie charts, you’ll need to do a little more spreadsheet wrangling. Forewarned is forearmed—extreme spreadsheet nerdery lies ahead.

    First, you need to translate the request sizes to kilobytes (KB), because they are initially supplied in bytes, and no human speaks in terms of bytes. Next to the column “Object Size,” insert another column called “Object Size (KB).” Then enter a formula in the first cell, something like this:

    =E2/1000
    Audit spreadsheet detail

    Translation: you’re simply dividing the amount in the cell from the previous column (E2, in this case) by 1000. You can highlight this new cell, then drag the corner down the entire column to do the same for each row.

    Animated GIF of a spreadsheet process

    Totaling requests

    Now, to figure out how many HTTP requests are related to each Purpose, you need to do a special kind of count. Insert two more columns, one labeled “Purpose Labels” and the second “Purpose Reqs.” Under Purpose Labels, in the first row, enter this formula:

    =SORT(UNIQUE(B2:B),1,TRUE)
    Spreadsheet detail

    This assumes that your purpose assessment is in column B. If it’s not, swap out the “B” in this example for your column letter. This formula will go down column B and output each unique value once. You only need to enter it in the first cell of the column. This is one reason why consistency in the Purpose column is important.

    Now, under the second column you made (Purpose Reqs) in the first cell, enter this formula:

    =ARRAYFORMULA(COUNTIF(B2:B,G2:G))
    Spreadsheet detail

    This formula will also go down column B and count how many rows match each label in column G (assuming column G is your Purpose Labels column). This is the easiest way to total how many HTTP requests fall into each purpose.

    Totaling download size by purpose

    Finally, you can now also total the data (in KB) for each purpose. Insert one more column and call it Purpose Download Size. In the first cell, insert the following formula:

    =SUM(FILTER($F$2:F,$B$2:B=G2))

    This will total the data size in column F if its purpose in column B matches G2 (i.e., your first Purpose Label from the section above). In contrast to the last two formulas, you’ll need to copy this formula and modify it for each row, making the last part (“G2”) match the row it’s on. In this case, the next one would end in “G3”.

    Animated GIF showing a spreadsheet process
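
    The same three totals can be computed outside the spreadsheet, too. Here is a sketch with pandas, assuming the hand-audited file from the earlier sketch with its filled-in Purpose column:

    # Per-purpose request counts and download sizes, mirroring the
    # SORT/UNIQUE, COUNTIF, and SUM/FILTER formulas above.
    import pandas as pd

    audit = pd.read_csv("audit.csv")  # hypothetical filename from earlier
    audit["Object Size (KB)"] = audit["Object Size"] / 1000  # bytes to KB

    summary = audit.groupby("Purpose").agg(
        requests=("URL", "count"),
        download_kb=("Object Size (KB)", "sum"),
    )
    print(summary)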

    Make with the fancy charts

    With your assets grouped by purpose, data translated to KB, number of requests counted, and download size totaled, it will be pretty easy to generate some charts.

    The HTTP request chart

    To make an HTTP request chart, select the columns Purpose Label and Purpose Reqs (columns G and H in my example), and then go to Insert > Chart. Scroll down the list of possible charts, and choose a pie chart. Be sure to check the box marked “Use column G as labels.”

    Screenshot showing Google Sheets’ Chart Editor

    Under the “Customization” tab, edit the Title to say “HTTP Requests”; under “Slice,” be sure “Value” is selected (the default is “Percentage”). We do this because the number of requests is what you want to convey here.

    Screenshot showing Google Sheets’ Chart Editor

    Go ahead—tweak the colors to your liking. And ditch Arial while you’re at it.

    Download-size chart

    The download-size-by-purpose pie chart is very similar. Select the columns Purpose Label and Purpose Download Size (columns G & I in my example); then go to Insert > Chart. Scroll down the list of possible charts and choose a pie chart. Be sure to check the box marked “Use column G as labels”.

    Under the “Customization” tab, edit the Title to say “Download Size”; under “Slice,” be sure “Value” is selected as well. We do this so we can indicate the total KB for each purpose.
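
    If you built the totals with pandas instead, the equivalent charts are a few lines of matplotlib. This sketch re-reads the hypothetical audit.csv so it runs on its own:

    # Two pie charts from the per-purpose summary: request counts and
    # download size, matching the spreadsheet charts described above.
    import matplotlib.pyplot as plt
    import pandas as pd

    audit = pd.read_csv("audit.csv")
    audit["Object Size (KB)"] = audit["Object Size"] / 1000
    summary = audit.groupby("Purpose").agg(
        requests=("URL", "count"),
        download_kb=("Object Size (KB)", "sum"),
    )

    fig, (requests_ax, size_ax) = plt.subplots(1, 2)
    requests_ax.pie(summary["requests"], labels=summary.index)
    requests_ax.set_title("HTTP Requests")
    size_ax.pie(summary["download_kb"], labels=summary.index)
    size_ax.set_title("Download Size")
    plt.show()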

    Or, you can grab a ready-made template. If you want to see a completed assessment, check out the one I did on an A List Apart article. I’ve also made a blank file with a lot of the trickier spreadsheet stuff already done. Feel free to go to File > Make a Copy so you can play around with it. You just need to get your page data from WebPagetest and paste in the three columns. After that, you can start your line-by-line assessment.

    Telling the good from the bad

    If you show your data to a stakeholder, they may be surprised by how much page weight goes to things like ads or analytics. On the other hand, they might respond by asking what we should be aiming for. That question is a little harder to answer.

    Some benchmarks get bandied about—1 MB or less, a WebPagetest Score of 1000, a Google PageSpeed score of over 90, and so on. But those are very arbitrary parameters and, depending on your project, unattainable ideals.

    My suggestion? Do an assessment like this on your competitors. If you can come back to your stakeholders and show how two or three competitors stack up, and show them what you’re doing, that will go much further in championing performance.

    Remember that performance is never “done”—it can only improve. What might help your organization is doing assessments like this over time and presenting page performance as an ongoing series of bar charts. With a little effort (and luck), you should be able to demonstrate that the things your organization cares about are continually improving. If not, it will present a much more compelling case for why things need to change for the better.

    So you have some pretty charts. Now what?

    Your charts’ usefulness will vary according to the precise business needs and politics of your organization.

    For instance, let’s say you’re a developer, and a project manager asks you to add yet another ad-metrics script to your site. After completing an assessment like the one above, you might be able to come back and say, “Ads already constitute 40 percent of our page weight. Do you really want to pile on more?”

    Because you’ve ascribed purpose to your asset requests, you’ll be able to offer data like that. I once worked with a project manager who started pushing back on such requests because I was able to give them easy-to-understand data of this sort. I’m not saying it will always turn out this way, but you need to give decision makers information they can grasp.

    Remember, too, that you are in charge of the Purpose column. You can make up any purpose you want. Interested in the impact that movie files have on your site relative to everything else? Make one of your purposes “Movies.” Want to call out framework files versus files you personally author? Go for it!

    I hope that this article has made you want to consider, and reconsider, each and every thing you download on a given page. Each and every request. And, in the process of doing this, I hope you are equipped to call out by purpose every item you ask your users to download. That will allow you to talk with your stakeholders in a way that they understand, and will help you make the case for better performance choices.

    Further reading:

Search Engine Watch
Keep updated with major stories about search engine marketing and search engines as published by Search Engine Watch.
ClickZ News
Breaking news, information, and analysis.
PCWorld
CNN.com - RSS Channel - App Tech Section
CNN.com delivers up-to-the-minute news and information on the latest top stories, weather, entertainment, politics and more.
  • It's been five years since NASA's Curiosity Mars rover landed on the red planet on August 6, 2012. The rover survived the much-publicized "seven minutes of terror" and safely landed near Mount Sharp. The rover accomplished its main goal in less than a year, collecting a rock sample that shows ancient Mars could have supported living microbes.
  • Japan is building the fastest supercomputer in the world
    Japan is building the world's fastest supercomputer, which it hopes will make the country the new global hub for artificial intelligence research.
  • Ingrem, a Chinese company, has created the "husband pod," an arcade booth intended to stave off boredom for men who accompany their partners to the mall.
 
