Website Best Practices - Our Criteria

=Criteria for good library websites:=

**Jamie:** http://libraryconnect.elsevier.com/lcp/0502/lcp0502.pdf

How to Design Library Websites to Maximize Usability

**Adele:** []

[]

[] (Jakob Nielsen's website)

Megan:
This paper presents a user-centred design and evaluation methodology for ensuring the usability of IR interfaces. The methodology is based on sequentially performing a competitive analysis, user task analysis, heuristic evaluation, formative evaluation, and a summative comparative evaluation. The test tasks were:
 * Task 1: Find information on the topic of computer-aided design
 * Task 2: Find information about e-commerce
 * Task 3: Find information on concurrent engineering in construction
 * Task 4: Find information about applications of fibre optics
 * Task 5: Find works of Lawrence R. Rabiner
 * Task 6: Find work produced by the researchers in the Chemical Engineering department at UMIST
 * Task 7: Find articles citing work by M. Smith published in the journal //Addictive Behaviors//
 * Ahmed, S., McKnight, C., & Oppenheim, C. (2006). A user-centred design and evaluation of IR interfaces. //Journal of Librarianship & Information Science//, //38//(3), 157-172.

The following performance variables were measured during our usability testing with the Web of Science interface:
 * Time taken: the total time taken to complete each task
 * Search terms used: the number of different search terms used for each task
 * Success score: successful completion of each task (1 = success, 0 = fail)
 * Error rates: number of different errors made for each task
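The four performance variables above can be computed mechanically from per-task session logs. A minimal sketch in Python, assuming one record per task with illustrative field names (the study does not specify its data format):

```python
# Sketch: aggregating the four performance variables from per-task logs.
# The record fields (task, seconds, terms, success, errors) are illustrative.

def summarize(logs):
    """Return mean time, mean search terms, success rate, and mean errors."""
    n = len(logs)
    return {
        "mean_time_s": sum(r["seconds"] for r in logs) / n,
        "mean_search_terms": sum(r["terms"] for r in logs) / n,
        "success_rate": sum(r["success"] for r in logs) / n,  # 1 = success, 0 = fail
        "mean_errors": sum(r["errors"] for r in logs) / n,
    }

# Example session (invented numbers, not data from the study):
logs = [
    {"task": 1, "seconds": 120, "terms": 3, "success": 1, "errors": 0},
    {"task": 2, "seconds": 300, "terms": 5, "success": 0, "errors": 2},
]
print(summarize(logs))
```

Keeping the raw per-task records (rather than only the averages) also allows per-task comparison across participants, which is how studies like this identify which individual tasks cause trouble.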


 * Dougan, K., & Fulton, C. (2009). Side by side: What a comparative usability study told us about a web site redesign. //Journal of Web Librarianship//, //3//(3), 217-237.

A think-out-loud, task-based usability test was developed to be completed on both the old and new sites, along with a post-test survey in which participants shared demographic information and provided subjective feedback about their experience with both sites.

Although some studies claimed that only five users needed to be tested to reveal the majority of a site’s problems (Nielsen 2000), fifteen users were tested... Tests were scheduled for one-hour intervals, although the pre-test indicated that most tests would likely take 45 minutes or less. Tests were held in Dougan’s office because it provided the necessary software and privacy to conduct the tests. Fulton read the introduction, which was scripted so each participant would receive the same instructions. The test protocol was directed by Dougan, who reiterated instructions as necessary and read the tasks to the participants. Fulton recorded observations.

The usability test consisted of two eleven-task sets, one for the old site and one for the new site. The questions for each site were very similar and tested both factual information (staff phone numbers, locations of materials in the library) and conceptual knowledge (“Does the Library own a particular item?” or “Find a journal article about a Beethoven symphony”). The conceptual tasks served to test two questions: did participants know which kind of tool to use to complete the task/answer the question, and could they find the tool(s) on both the old site and the new site? We hoped that comparing the two sites would show if the new site improved user performance.

To allow for participants having some familiarity with the old site, whether each participant began with the old or new site was alternated. This also addressed the fact that by completing one set of tasks, users would learn about the tools and terminology of the library in general. By having half the users complete tasks on the new site first and then progressing to the old site, this learning curve should have been minimized in the results. For a complete list of tasks/questions, see Table 1.

Participants were repeatedly reminded that the design of the sites was being tested, not their ability. Rather than set time restrictions on each question, users were permitted to take as long as they liked; if they felt they could not locate the answer, they could give up on a question and be given the answer. The tests were recorded using TechSmith’s Camtasia software (http://www.techsmith.com/camtasia.asp) and a USB microphone. This allowed us to document participants’ clicks and mouse movement, which can be as revealing as actual clicks, and their “think-out-loud” commentary.

TABLE 1 Task List

Factual tasks (answerable directly on the Music and Performing Arts Library Web site; no familiarity with library resources and tools was needed):
 * Task 2—Locate a list of the Special Collections available in the Music and Performing Arts Library.
 * Task 4—Find a link on the Web site to the Inter Library Loan form.
 * Task 8—Does the Music and Performing Arts Library offer online chat reference service?
 * Task 9—What is the name and e-mail address of the head of the Music and Performing Arts Library? (On new site: a Music and Performing Arts Library Graduate Assistant)
 * Task 11—On what floor of the Library are the listening carrels? (On new site: periodicals)

Conceptual tasks (participants needed familiarity with certain library resources and tools to complete these):
 * Task 1—Locate a class guide for the Vocal Literature class. (On new site: String Literature class)
 * Task 3—Find a journal article about a Beethoven symphony.
 * Task 5—Find a link to a site that will help you create citations for your research papers.
 * Task 6—Does the Library own a score (printed music) of selections from the musical Wicked? (On new site: Rent)
 * Task 7—Find an online German dictionary.
 * Task 10—Does Professor Erik Lund have any audio tracks on e-reserve this semester? (On new site: Professor Zack Browning)


 * Ebenezer, C. (2003). Usability evaluation of an NHS library website. //Health Information & Libraries Journal//, //20//(3), 134-142.

A small sample of six NHS library sites was selected for the benchmarking/content evaluation. The libraries were chosen deliberately for the range of approaches to navigation and design they represented, and the range of their online content.

Nine participants in all were recruited for three separate focus groups run at lunch times; each group had three members. All were given about 15 min to ‘play’ with the site before each of the sessions. Each group was facilitated by the author and lasted about 45 min.

Seven testers were recruited for the observation test. At the start of each test the participants were given a script and list of tasks. The 15 tasks, some of which had a number of different components, were designed to address anticipated usability problems. The usability metrics derived were:
 * percentage of tasks completed
 * number of false starts for each task
 * longest time taken for each task
 * number of prompts required per task per user
 * user satisfaction ratings

Volunteers for the card-sorting test were recruited via a Trust-wide e-mail. Sets of paper slips were created, one slip for each item on each of the menus; menu category headings were also included among the slips. Subjects were asked to sort the slips into categories, using either one of the menu headings as a label for the category, or devising their own heading if they preferred. The cluster analysis software USort/EZSort, as described by Dong and co-workers, was used to record and analyse the results.

Respondents were asked to complete a detailed label intuitiveness/category membership questionnaire. This provided screen shots illustrating the main menu and sub-menus; respondents were asked what they would expect to be included in each main category, and what sort of information they thought each of the links would indicate.
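The card-sort analysis that tools like USort/EZSort perform starts from an item co-occurrence matrix: for every pair of menu items, how many participants placed them in the same group. A minimal sketch of that first step, assuming each participant's sort is recorded as a list of groups of item labels (the labels below are invented, not from the study):

```python
from collections import defaultdict
from itertools import combinations

# Sketch: building the pairwise co-occurrence counts that card-sort
# cluster analysis works from. Item labels here are illustrative.

def cooccurrence(sorts):
    """Count, for each pair of items, how many participants grouped them together."""
    counts = defaultdict(int)
    for groups in sorts:          # one participant's complete sort
        for group in groups:      # one pile of slips
            for a, b in combinations(sorted(group), 2):
                counts[(a, b)] += 1
    return dict(counts)

# Two hypothetical participants sorting four menu items:
sorts = [
    [["Opening hours", "Contact us"], ["Databases", "E-journals"]],
    [["Opening hours", "Contact us", "Databases"], ["E-journals"]],
]
print(cooccurrence(sorts))
```

High counts indicate items users expect to find together; cluster analysis then groups items by treating these counts as similarities, which is how the software suggests candidate menu categories.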


 * Nielsen, Jakob. Ten Usability Heuristics. (22 July 2003).

//While this is not a peer-reviewed study, Nielsen is seen as an expert in the field and an authority on heuristics. Much of his work is available on his website and he was cited in the sample paper. There are several other articles by this author that may help us.//

I originally developed the heuristics for [|heuristic evaluation] in collaboration with Rolf Molich in 1990 [Molich and Nielsen 1990; Nielsen and Molich 1990]. I since refined the heuristics based on a factor analysis of 249 usability problems [Nielsen 1994a] to derive a set of heuristics with maximum explanatory power, resulting in this revised set of heuristics [Nielsen 1994b].
 * **Visibility of system status**: The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
 * **Match between system and the real world**: The system should speak the users' language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
 * **User control and freedom**: Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
 * **Consistency and standards**: Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
 * **Error prevention**: Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
 * **Recognition rather than recall**: Minimize the user's memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
 * **Flexibility and efficiency of use**: Accelerators -- unseen by the novice user -- may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
 * **Aesthetic and minimalist design**: Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
 * **Help users recognize, diagnose, and recover from errors**: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
 * **Help and documentation**: Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large.

//This checklist was cited in the sample paper and seems very helpful. I'm not sure why the source was considered more scholarly than some of the ones we've looked at, but it was allowed. Maybe something we could build off of? //
 * North Texas Regional Library Partners. (2009). Website scorecard. Retrieved from []


 * Travis, T., & Norlin, E. (2002). Testing the competition: Usability of commercial information sites compared with academic library web sites. //College & Research Libraries//, //63//(5), 433-448.

To increase the validity of the study that is the subject of this article, nine students were selected. Nielsen's mathematical model of usability problems shows that up to nine users will demonstrate 90 percent of a Web site's usability problems.
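The model behind that claim (and behind the "five users" figure cited by Dougan and Fulton above) predicts the proportion of a site's usability problems found by n testers as 1 - (1 - L)^n, where L is the probability that a single user uncovers a given problem. A short sketch, assuming the average L of 0.31 that Nielsen and Landauer reported; the article itself does not state which value it used:

```python
# Sketch of Nielsen's model of usability problems found by n testers:
#   proportion found = 1 - (1 - L)**n
# L = 0.31 is the average per-user detection rate Nielsen and Landauer
# published; it is an assumption here, not a figure from this article.

def proportion_found(n, L=0.31):
    return 1 - (1 - L) ** n

for n in (1, 5, 9, 15):
    print(f"{n:2d} users -> {proportion_found(n):.1%} of problems found")
```

With L = 0.31 the curve flattens quickly (five users find roughly 85 percent, nine users well over 90 percent), which is the diminishing-returns argument usually given for small usability test panels.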

All of the sites were bookmarked, and students were observed using all sites. Students were asked between three and five questions for each site. Question groups were specifically designed for individual sites. The types of questions were both simple task-oriented questions ("How much does it cost to subscribe to Questia [September 2002] for one year?") and complex questions that required students to find information for a hypothetical research assignment ("Find a journal or magazine article about eating disorders and males"). The more difficult questions required students to formulate their own search terms to find the answer and were designed to mimic typical topics students have when they begin research in a true library setting. The questions required students to use links from the home page as well as links from what was determined to be a "gateway page" (main page for links to electronic resources). Students were asked to complete the questions only to the point of finding resources. An observer recorded the paths they took while a second individual asked the questions. After completing the tasks for a site, students were asked to fill out a Likert-type scale, which measured their attitudes toward a site.


 * Palmquist, R.A. (2001, Spring). An overview of usability for the study of users' web-based information retrieval behavior. //Journal of Education for Library and Information Science//, //42//(2), 123-136.

Discusses various methods of assessing usability.