IMLS desktop project

“Creating digital library (DL) design guidelines on accessibility, usability and utility for blind and visually impaired (BVI) users,” Iris Xie (PI) and Rakesh Babu (Co-PI). IMLS Leadership grant (Research category), 2016-2021, $495,600.00.

Funded by the Institute of Museum and Library Services (IMLS)

Aim

The project goal is to create digital library (DL) design guidelines on accessibility and usability (DLAUG) by incorporating the perspectives of key stakeholders. Most importantly, the DLAUG addresses the help-seeking situations that blind and visually impaired (BVI) users experience in DL interactions.

Research questions for Study 1

RQ1: What are the types of coping tactics that BVI users apply during their interactions with digital libraries?

RQ2: How do different types of coping tactics correspond to the top five types of help-seeking situations?

Research questions for Study 2

RQ1. In which help-seeking situations do users show significantly different perceptions of DL design guidelines’ relevance, clarity and usefulness than scholars/experts and developers? What are the associated reasons for the differences?

RQ2. In which help-seeking situations do developers show significantly different perceptions of DL design guidelines’ relevance, clarity and usefulness than scholars/experts and users? What are the associated reasons for the differences?

Research questions for Study 3

RQ1. To what extent do the existing digital libraries comply with the DLAUG concerning different types of help-seeking situations encountered by BVI users?

H0: There is no significant difference in DLAUG compliance scores among types of situations.

RQ2. What are the types of design violations found in the existing digital libraries against the suggested guidelines concerning different types of help-seeking situations encountered by BVI users? 

RQ3. What are the types of challenges that DL developers encounter when complying with guidelines? 

Methodology

The proposed project consists of four stages.

Stage 1 Build a foundation for DL design guidelines

A comprehensive literature search covering the last 20 years, together with document analysis, was conducted to identify the help-seeking situations that BVI users encounter in their Internet interactions, since very little research has been conducted in the DL environment.

Two types of analyses were conducted to identify the current status and problems with the existing guidelines as they relate to accessibility and usability: 1) analysis of existing accessibility and usability guidelines and 2) analysis of associated research on the topic.

Stage 2 Develop draft of DL guidelines

Sampling

Sixty-four BVI participants were recruited through our partner BVI organizations, in particular the National Federation of the Blind (NFB). Participants received the informed consent form prior to joining the study. Among the 64 participants, 32 participated in the study onsite while the other half completed the study offsite via diaries. Table 1 presents the demographic data of the participants. The study sample represents participants with diverse backgrounds and IR skills.

For the onsite group, 32 participants joined the study in person and performed three search tasks using DLs. Among them, 24 participants were met at the NFB convention, where they were invited to complete the study in hotel rooms; the other eight came to our usability lab to complete their tasks. The same protocols and procedures were followed throughout the onsite study. For the offsite group, 32 participants completed the tasks on their own time while keeping diaries. Two separate groups were recruited because the study attempted to involve participants from across the United States: eight of the onsite participants were from the Midwest, most of the onsite participants at the NFB convention were from the east coast, and the diary participants were from all over the US. The diary method also allowed the researchers to use different data collection methods and let participants use different screen reader software, enabling the researchers to identify issues beyond a controlled lab environment.

Data collection

Multiple data collection methods, including pre-questionnaires, think-aloud protocols, transaction logs, and diaries, were employed in the study. In the pre-questionnaires, participants provided their demographic information, Internet experience, levels of subject knowledge, search skills/knowledge, system knowledge, and assistive technology use. Five DLs (Artstor Digital Library, Digital Public Library of America [DPLA], HathiTrust Digital Library, Library of Congress [LOC] Digital Collections, and the LuEsther T. Mertz Library) were chosen for the study. Every participant worked on three tasks in LOC Digital Collections. For the remaining four DLs, sixteen participants (eight from the onsite group and eight from the diary group) were randomly assigned to each DL and instructed to complete three search tasks in it. To capture diverse types of interactions, each participant performed three tasks: an orientation task, a specific information search task, and an exploratory search task. The same tasks were assigned to both the onsite and offsite groups.

For the onsite group, participants were asked to think aloud during their interactions with LOC Digital Collections and one other assigned DL. Laptops with the JAWS screen reader and Morae software were used for this study. JAWS is the most popular screen reader in the BVI community, and Morae is usability and accessibility testing software that captures participant verbalization, screenshots, and transaction logs. Think-aloud protocols and transaction logs recorded participants’ behaviors and the thoughts behind them, including the help-seeking situations they encountered and the coping tactics they applied during the search process.

For the offsite group, we employed the diary method, which does not require participants to think aloud in order to record the thoughts behind their coping tactics. Participants received instructions via email with a diary file. The diary file consisted of an instruction sheet for the diary, examples of diaries, links to LOC Digital Collections and another assigned DL, as well as explanations of the tasks to be completed.

Data analysis

This study’s unit of analysis is each coping tactic and the associated top five help-seeking situations that drove participants to apply these tactics. First, each transcript and diary was manually analyzed, and text related to coping tactics and associated help-seeking situations was highlighted based on the definitions presented in the Introduction. Each situation was marked at the point where the participant verbally communicated or recorded a need for help or expressed confusion in response to problems with a DL and its corresponding features. For each coping tactic, participants’ actions, goals, and corresponding situations were analyzed. Second, each coping tactic and associated help-seeking situation was examined and given a tentative label. Third, coping tactics and help-seeking situations were compared and conceptualized; similar coping tactics and help-seeking situations were assigned the same labels, respectively. Examples of types of tactics and help-seeking situations are presented in the Results, in which coping tactics are bolded and italicized while help-seeking situations are underlined.

Qualitative data collected from think-aloud protocols and transaction logs from the onsite group and diaries from the offsite group were examined for each of the research questions. The open coding technique, the process of breaking down, examining, comparing, conceptualizing, and categorizing unstructured textual transcripts, was utilized. In the coding schemes, situations and coping tactics are organized alphabetically, and definitions of each situation and coping tactic are provided. To avoid repetition, examples of situations and coping tactics are reported in the Results. Five coders participated in the coding process, with two independent coders analyzing coping tactics and situations. According to Holsti’s (1969) formula, the inter-coder reliabilities for situations and coping tactics are 0.92 and 0.923, respectively. Any disagreement between the two coders was discussed with the research team until agreement was reached. Descriptive data analysis was also performed to determine the frequency of the types of situations encountered by BVI participants and the tactics they applied in their interactions with the selected DLs.
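
Holsti’s coefficient used for the inter-coder reliability check above is simply twice the number of agreements divided by the total number of coding decisions made by both coders. A minimal sketch (the tactic labels below are illustrative, not the study’s data):

```python
def holsti_reliability(coder1, coder2):
    """Holsti's (1969) coefficient: 2M / (N1 + N2), where M is the number
    of coding decisions the two coders agree on."""
    m = sum(1 for a, b in zip(coder1, coder2) if a == b)
    return 2 * m / (len(coder1) + len(coder2))

# Hypothetical tactic codes assigned by two independent coders
coder1 = ["SFK", "NBP", "ERF", "SFK", "GCC"]
coder2 = ["SFK", "NBP", "ERF", "CHP", "GCC"]
print(holsti_reliability(coder1, coder2))  # 0.8
```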

Stage 3 Refine DL guidelines

Sampling

Users, scholars/experts and DL developers were the three key types of stakeholders in this study. In total, 150 participants were recruited, 50 from each group. BVI users were mainly recruited with the help of national BVI organizations. The authors conducted literature searches in academic databases to collect contact information for scholars with publications related to DLs and/or the accessibility and usability of systems. Accessibility and usability experts were recruited via related listservs. To recruit DL developers, an invitation flyer was sent to academic libraries across the USA. Participants with various demographic characteristics were recruited to represent these three groups.

Data collection

An in-depth survey was administered to the 150 participants representing the three types of stakeholders to obtain their assessments. The DLAUG provided associated guidelines for 37 situations that BVI users encounter when interacting with DLs. Using a seven-point Likert scale, participants rated the guidelines associated with each situation on perceived clarity, relevance and usefulness; definitions of these terms were provided in the general instructions, as specified in the Introduction. Participants were also instructed to provide reasons for their ratings.

Data analysis

First, one-way ANOVA was applied to analyze the numerical ratings to reveal the similarities and differences among the three groups of stakeholders in their assessment of the guidelines. Second, when any statistically significant difference was observed, a post hoc test using Tukey’s honestly significant difference (HSD) method was conducted to compare all possible pairs of group means to identify specific situations where a user or developer group rated the relevance, clarity or usefulness of the related guidelines significantly differently from the others. Moreover, to better understand the reasons behind the different ratings from stakeholders, the authors performed a qualitative analysis of participant comments regarding the relevance, clarity and usefulness of guidelines pertaining to the above quantitative analysis.
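
As a sketch of the first step, a one-way ANOVA can be run with scipy’s `f_oneway`; the seven-point ratings below are simulated for illustration, not the study’s data:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Simulated 7-point Likert ratings of one guideline set (n = 25 per group,
# matching the study's F(2, 72) degrees of freedom)
users = rng.integers(5, 8, size=25)       # hypothetical: users rate higher
scholars = rng.integers(3, 8, size=25)
developers = rng.integers(3, 8, size=25)

f_stat, p_value = f_oneway(users, scholars, developers)
print(f"F(2, 72) = {f_stat:.3f}, p = {p_value:.4f}")
# If p < .05, all pairs of group means are then compared with a Tukey HSD
# post hoc test (e.g., statsmodels' pairwise_tukeyhsd).
```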

Stage 4 Test and finalize the guidelines

Sampling

Multiple methods were applied to recruit DL developers. Flyers were emailed to partner organizations such as the Digital Library Federation and the American Library Association. In addition, listservs for DL and accessibility/usability scholars, experts, and developers were used to contact potential participants. Flyers were also emailed to DL developers working in about 200 universities selected based on the Carnegie Classification of Institutions of Higher Education. A snowball sampling strategy was applied as well: DL developers who had agreed to participate were encouraged to recommend other DL developers who they thought would be interested in the research project. In total, 31 DL developers were involved. Table 1 shows the demographic information of the 31 DL developers.

Data collection

A survey and online focus groups were employed to collect data. Based on previous user studies (citations), the research team identified 27 unique help-seeking situations that BVI users encounter in their interactions with DLs, and associated guidelines responding to each situation were proposed. Four major DLs representing diverse, prestigious and large-scale DLs in the US were selected for assessment: Digital Public Library of America (DPLA), HathiTrust Digital Library (HathiTrust), Library of Congress (LOC), and National Science Digital Library (NSDL). Each DL was assessed by six participants, except HathiTrust, which was assessed by seven. An assessment questionnaire organized by the different types of help-seeking situations was created. Each participant was asked to go through the guidelines for each help-seeking situation and assess, on a 7-point Likert scale, the degree to which one specified DL complied with the guidelines; participants were also encouraged to offer specific examples of violations of the guidelines as well as good techniques and features used by the assessed DL.

Upon completion of the assessment questionnaire, each participant was invited to take part in a two-week online focus group on Canvas, a web-based learning management system. In total, there were four focus groups, three with eight participants and one with seven. Before each focus group started, the researchers created a course on Canvas and posted all the discussion questions in the “Discussion” section, and instructions were sent to participants showing them step by step how to create a Canvas account and register for the course. All participants were asked to use their participant IDs (e.g., P1, P2, and P3) as their usernames in the online focus group to protect their identity and privacy; in this way, their institutional identities were also protected. During the two-week period of each focus group, each participant could post messages and reply to others’ at a convenient time. This study focuses only on the first discussion question, in which participants talked about the ways they deal with accessibility and usability issues in creating DL collections, the guidelines they followed in practice, and the challenges they encountered when following the guidelines.

Data analysis

The quantitative analysis focuses on testing the difference in existing digital libraries’ compliance with the DLAUG among different types of situations. First, a Shapiro-Wilk test was performed to check the normality of the sample. As the sample was not normally distributed, the Kruskal-Wallis test was used to compare differences in the DLAUG compliance ratings for each situation from the assessment questionnaires. The mean ranks from the Kruskal-Wallis test were used to identify the guidelines with which existing DLs complied least, for further qualitative analysis. The open coding method was used to analyze the textual data from the assessment questionnaires and focus group discussions, specifically by breaking down, examining, comparing, conceptualizing, and categorizing unstructured textual transcripts. The qualitative analysis focuses on three aspects: 1) violations of the DLAUG, 2) guidelines used by DL developers in practice, and 3) challenges DL developers encounter when following guidelines. Specifically, violations of the guidelines were identified from the textual feedback in the assessment questionnaires, while the guidelines used by DL developers in practice and the challenges they encounter were analyzed based on the online focus group discussions.
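
The quantitative pipeline described above (a normality check followed by a non-parametric comparison) can be sketched with scipy; the compliance ratings below are simulated, not the study’s data:

```python
import numpy as np
from scipy.stats import shapiro, kruskal

rng = np.random.default_rng(1)
# Simulated 7-point compliance ratings for three hypothetical situation types
ratings_by_situation = [rng.integers(1, 8, size=25).astype(float) for _ in range(3)]

# Step 1: Shapiro-Wilk normality check for each situation's ratings
for i, ratings in enumerate(ratings_by_situation, start=1):
    w, p = shapiro(ratings)
    print(f"Situation {i}: W = {w:.3f}, p = {p:.4f}")

# Step 2: Likert ratings are rarely normal, so compare situations with the
# non-parametric Kruskal-Wallis test instead of a parametric ANOVA
h_stat, p_value = kruskal(*ratings_by_situation)
print(f"H(2) = {h_stat:.3f}, p = {p_value:.4f}")
```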

Findings

This project generates three final products: 1) DLAUG organized by types of help-seeking situations associated with accessibility, usability and utility based on the WCAG structure; 2) reports on the current status of how DLs satisfy BVI users’ help needs and support DL interactions; 3) methodology that can be applied to other underserved users to develop similar guidelines.

Findings for Study 1

BVI participants applied 20 types of coping tactics in their interactions with DLs. Based on the frequency data, the coping tactics fall into three groups. The first group contains the highest-frequency coping tactics, each applied more than 20 times. The six coping tactics in this group are: Narrowing down, broadening up, or paralleling search (NBP); Exploring relevant features (ERF); Searching for keywords (SFK); Exploring DL page structure (EPS); Gleaning contextual cues (GCC); and Exploring an accessible alternative (EAA). NBP was applied 89 times, ranking as the most frequently applied tactic by BVI participants. Interestingly, the second most frequently applied tactic, ERF, was applied 45 times, just over half the frequency of NBP.

The second group includes coping tactics that were applied between 10 and 20 times. A total of eight coping tactics fall into this group: Refreshing or re-starting a page (RRP); Inspecting content of a retrieved item (ICI); Seeking human help (SHH); Checking help pages (CHP); Skipping over inaccessible or incomprehensible information (SII); Employing non-DL navigation features (ENF); Checking system feedback (CSF); and Scanning through result list (SRL). The number of applications of tactics in this group ranged from 10 to 16.

The last group includes the tactics least frequently applied by BVI participants to cope with their difficulties, each used fewer than 10 times. A total of six coping tactics are in this group: Employing alternative non-visual interaction approach (EAN); Delving into subcategories (DSC); Checking format of search results (CFR); Decoding the location of a DL page (DLP); Disregarding redundant information (DRI); and Checking current location (CCL).
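
The three frequency bands above can be expressed as a simple bucketing rule. In the sketch below, the counts for NBP (89) and ERF (45) are from the study; the remaining counts are hypothetical placeholders:

```python
# Tactic frequency counts: NBP (89) and ERF (45) are reported in the study;
# the other counts are illustrative only
counts = {"NBP": 89, "ERF": 45, "SFK": 30, "RRP": 16, "ICI": 10, "DSC": 6}

def band(n):
    if n > 20:
        return "high"      # first group: more than 20 applications
    if n >= 10:
        return "medium"    # second group: 10 to 20 applications
    return "low"           # third group: fewer than 10 applications

groups = {}
for tactic, n in counts.items():
    groups.setdefault(band(n), []).append(tactic)
print(groups)
```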

The top five situations based on the frequency data are: Difficulty evaluating information (71), Difficulty accessing information (69), Difficulty constructing queries or refining searches (64), Difficulty with help (57), and Confusion about multiple programs and DL structures (42). Eleven types of tactics were used in three or more situations: SFK, SHH, NBP, ERF, GCC, EAA, RRP, ICI, CHP, SII, and SRL. Among these 11 types of tactics, SFK and SHH were employed in all top five types of situations. NBP, ERF, and CHP were employed in four situations. GCC, EAA, RRP, ICI, and SRL were each utilized in three different types of situations. In addition, the bold lines in Figure 4 represent the high-frequency tactics that are applied more than 20 times.  

Findings for Study 2

There were significant differences among the three groups in their ratings of the relevance of the guidelines for Difficulty locating a specific word/phrase (DLSW) (users: M = 6.64, scholars/experts: M = 5.36, developers: M = 5.60, F(2, 72) = 5.649, p < .05). According to the post hoc test results, users (M = 6.64, SD = 0.86) perceived the guidelines for DLSW as significantly more relevant than did developers (M = 5.60, SD = 1.50, p = .033) and scholars/experts (M = 5.36, SD = 1.78, p = .006). There were also significant differences among the three groups in their ratings of the clarity of the guidelines for Difficulty assessing Alt Text for an image (DAAT) (users: M = 6.08, scholars/experts: M = 4.84, developers: M = 4.88, F(2, 72) = 7.585, p < .05) and Difficulty understanding results structure/layout (DURS) (users: M = 6.48, scholars/experts: M = 6.20, developers: M = 6.20, F(2, 72) = 4.439, p < .05). According to the post hoc test results, users (M = 6.08, SD = 1.04) perceived the guidelines for DAAT as significantly clearer than did developers (M = 4.88, SD = 1.17, p = .004) and scholars/experts (M = 4.84, SD = 1.57, p = .003). Users (M = 6.84, SD = 0.47) perceived the guidelines for DURS as significantly clearer than did developers (M = 6.20, SD = 0.91, p = .032) and scholars/experts (M = 6.20, SD = 1.12, p = .032). Finally, there were significant differences among the three groups in their ratings of the usefulness of the guidelines for DAAT (users: M = 6.68, scholars/experts: M = 5.60, developers: M = 5.80, F(2, 72) = 6.796, p < .05). According to the post hoc test results, users (M = 6.68, SD = 0.80) perceived the guidelines for DAAT as significantly more useful than did developers (M = 5.80, SD = 1.26, p = .017) and scholars/experts (M = 5.60, SD = 1.19, p = .003).

There were significant differences among the three groups in their ratings of the clarity of the guidelines for Difficulty locating a navigational aid (DLNA) (users: M = 6.44, scholars/experts: M = 6.56, developers: M = 5.56, F(2, 72) = 6.205, p < .05) and Confusion about digital library structure (CDLS) (users: M = 6.68, scholars/experts: M = 6.28, developers: M = 5.40, F(2, 72) = 6.986, p < .05). According to the post hoc test results, developers (M = 5.56, SD = 1.19) perceived the guidelines for DLNA as significantly less clear than did users (M = 6.44, SD = 1.23, p = .016) and scholars/experts (M = 6.56, SD = 0.82, p = .005). Developers (M = 5.40, SD = 1.63) perceived the guidelines for CDLS as significantly less clear than did users (M = 6.68, SD = 0.85, p = .001) and scholars/experts (M = 6.28, SD = 1.10, p = .037). There were also significant differences among the three groups in their ratings of the usefulness of the guidelines for DLNA (users: M = 6.68, scholars/experts: M = 6.44, developers: M = 5.56, F(2, 72) = 6.973, p < .05). According to the post hoc test results, developers (M = 5.56, SD = 1.29) perceived the guidelines for DLNA as significantly less useful than did users (M = 6.68, SD = 0.75, p = .002) and scholars/experts (M = 6.44, SD = 1.23, p = .018).

Findings for Study 3

When comparing DLAUG compliance scores for different types of situations, a Shapiro-Wilk test was first applied to check the normality of the compliance ratings for each type. The results show that all but two types were not normally distributed. Therefore, a non-parametric test was performed to test for differences among the compliance categories of the DLAUG; the Kruskal-Wallis test was used because it compares multiple groups non-parametrically. The result shows a significant difference (H(26) = 126.575, p < .001) in existing digital libraries’ compliance with the DLAUG among different types of situations. Table 5 presents the mean ranks of all types of situations. The situations with mean ranks under 400 have the lowest compliance and were further analyzed qualitatively.

This study identified five main categories and 26 subcategories of violations of the DLAUG. The five main categories are complex information presentation, inadequate features, inadequate help information, descriptive information issues, and compatibility issues.

According to the focus group discussions, DL developer participants followed legal stipulations and various types of guidelines at different levels in their practice of creating DLs, including national, regional, institutional, and other guidelines. Figure 8 presents the distribution of guideline compliance by DL developers.

DL developers are likely to encounter a variety of challenges when creating and maintaining DLs. The focus group discussions show that the challenges the DL developer participants encountered relate to resources, DL developers, (content) awareness, migration, administration, institutional requirements, vendor restrictions, accessibility tools, and guidelines.

Design

Design implications from Study 1

The findings of this study generate valuable implications for the design of DLs to support BVI users. For the coping tactics that BVI users adopted from non-DL environments into the DL context, three types of design suggestions are proposed. First, DL design needs to support BVI users and directly resolve these situations so that BVI users will not need to apply these alternative coping tactics. For example, NBP was used as an alternative approach to deal with Difficulty evaluating information because this tactic was most familiar to BVI users. As DLs do not offer snippets of search results, BVI participants had to apply NBP as an alternative approach to reduce the search results and look for clues themselves. Previous research in non-DL environments suggests including search previews of each document, overviews/summaries of search results, and grouping search results into clusters (Al-Thani & Stockman, 2018; Aqle, Al-Thani & Jaoua, 2020; Sahib et al., 2012). These recommendations can be brought to the DL environment to assist BVI users in effectively evaluating search results.

Second, DL design needs to offer more options that facilitate the use of DLs by BVI users. GCC is an indirect approach for BVI users to access an otherwise inaccessible item. In addition to trying to make every item accessible, DLs need to provide more contextual cues for BVI users, such as section overviews, related resources, and description elements for each item.

Third, DL design needs to create or enhance DL features to incorporate some of the coping tactics into system design. Seeking human help is one of the most frequently applied tactics in non-DL environments because there is little help support from IR systems. In the DL context, human help can become one option for online help in the form of contacting librarians synchronously or asynchronously. In addition, the design of help mechanisms needs to take into consideration how BVI users seek help from human helpers. Offering a multimodal representation of awareness information, as recommended by Al-Thani and Stockman (2018), such as an audio alert for a “new chat message,” can encourage BVI users to actively interact with other users as well as librarians in the DL environment.

For the unique coping tactics generated from BVI users’ interactions with DLs, the authors focus on specific design implications that support these tactics by reducing the help-seeking situations in the following areas:

  1. Improving the design of DLs to effectively guide users in exploring DL structures. Design suggestions to support EPS: (a) page content organized into topical sections; (b) accessible section headings with meaningful section titles; (c) meaningful labels for links, form fields, and other active elements; (d) logically grouped active elements and features.
  2. Offering accessible alternatives to inaccessible content. Design suggestions to support EAA: (a) transcripts for scanned image items; (b) descriptive audio for video items (text or audio description of pictures/broadsides/graphics).
  3. Helping BVI users adapt to DLs by creating better DL help mechanisms that provide guidance on effective DL access and use. Design suggestions to support CHP: (a) compulsory DL user orientation for first-time visitors; (b) prominently located links to Help, Search Tips, and Screen Reader Tips sections on DL pages; (c) context-sensitive help tips at known accessibility pain points (e.g., visual items, browse categories).
  4. Facilitating BVI users’ ability to bypass inaccessible or incomprehensible content. Design suggestions to support SII: (a) a skip-over-image feature; (b) shortcuts to relevant content beyond auxiliary or ambiguous information; (c) meaningfully labeled unique links; (d) accessible section headers with meaningful titles.
  5. Making the DL design support advanced screen reader functions. Design suggestions to support EAN: (a) compatibility with all screen reader software; (b) support for keyboard shortcuts for common operations.
  6. Assisting BVI users in effectively exploring subcategories. Design suggestions to support DSC: (a) logically categorized topics and collections; (b) meaningfully labeled subcategories clarifying the relationship with the associated topic/collection; (c) supplementary text for subcategory links describing topics or collections; (d) a prominently located Return to Home button on individual subcategory pages.

Design implications from Study 2

Based on the survey results from the three groups of stakeholders, this study enhanced the DLAUG. For relevance, developers requested useful guidelines related to recommended features; scholars asked for comprehensive guidelines; and users expected guidelines targeted at the problems they encounter when using DLs. Therefore, the enhancement of the DLAUG considered how to cover all critical help-seeking situations. For clarity, the guidelines were modified by replacing technical terms with more generally understandable terms, because users and scholars recommended avoiding technical language; in this way, all stakeholders, especially users and developers, could read and understand the guidelines clearly. Where technical terms persisted, a glossary was provided. Also, developers suggested a more organized structure, so an introduction to the guidelines was added. Improving the guidelines’ usefulness was mainly achieved by adding more how-to examples with suggested steps for implementation. Providing a prioritized recommendation list addressed developers’ requests for a list of guidelines with proposed priorities. In addition, to address scholars’ concerns, a numbering system was applied to the entire set of guidelines to show the connections between components.

Publications

Study 1

Xie, I., Babu, R., Lee, T. H., Wang, S., & Lee, H. S. (2021). Coping tactics of blind and visually impaired users: Responding to help-seeking situations in the digital library environment. Information Processing and Management, 58(5), 102612. 

Study 2

Xie, I., Babu, R., Wang, S., Lee, H. S., & Lee, T. H. (2022). Assessment of digital library design guidelines to support blind and visually impaired users: A study of key stakeholders’ perspectives. The Electronic Library, (ahead-of-print).

Xie, I., Babu, R., Lee, H. S., Wang, S., & Lee, T. H. (2021). Orientation tactics and associated factors in the digital library environment: Comparison between blind and sighted users. Journal of the Association for Information Science and Technology, 72(8), 995–1010.

Conference

Xie, I., Babu, R., Lee, T., Wang, S., & Lee, H. (2020). Tactics applied by blind and sighted users: Initial exploration of a digital library. In Proceedings of the Joint Conference on Digital Libraries 2020. Wuhan, China: ACM.

Xie, I. (2017). Keynote speaker. Support Accessibility, Usability, and Utility: An investigation of blind and visually impaired users’ interaction with digital Libraries. Presented at iConference Workshop 2017: Vulnerable Communities in the Digital Age: Advancing Research and Exploring Collaborations, Wuhan, China.

 

Team

Principal investigators

  1. Iris Xie, PI, Professor, UWM-SOIS, hiris@uwm.edu
  2. Rakesh Babu, Co-PI, PhD, Assistant Professor, UWM-SOIS, babu@uwm.edu

Research assistants

  1. Tae Hee Lee, PhD student, UWM-SOIS
  2. Shengang Wang, PhD student, UWM-SOIS
  3. Hyun Seung Lee, PhD student, UWM-SOIS
  4. Melissa Davey Castillo, PhD student, UWM-SOIS
  5. Sukjin You, PhD student, UWM-SOIS
  6. Sukwon Lee, PhD student, UWM-SOIS

Consultants for IMLS project

  1. Jim Allan, Accessibility Coordinator, Webmaster, Texas School for the Blind and Visually Impaired; Chair, W3C User Agent Accessibility Guidelines Working Group, Web Accessibility Initiative, jimallan@tsbvi.edu
  2. Krystyna Matusiak, PhD, Assistant Professor, University of Denver; Digital librarian for 10 years; Chair, ASIST Special Interest Group for Visualization, Images, and Sound; ASIST Standards Committee member, matusiak@du.edu

Advisory board members for IMLS project

  1. Mary Alexander, National Program Director for Learning Ally, mAlexander@LearningAlly.org
  2. Daniel Cohen, Executive Director, Digital Public Library of America, dan@dp.la
  3. Susan Fraser, Director, The New York Botanical Garden Mertz Library, sfraser@nybg.org
  4. Mike Furlough, Executive Director, HathiTrust, furlough@hathitrust.org
  5. Geri Bunke Ingram, Community Manager for OCLC’s CONTENTdm community; develops and designs the CONTENTdm user experience, ingramg@oclc.org
  6. Bethany Nowviskie, Director, Digital Library Federation at CLIR, bnowviskie@clir.org
  7. Serena Rosenhan, Director, User Experience Design, ProQuest, Serena.Rosehan@proquest.com
  8. Carrie Russell, Program Director, Public Access to Information, American Library Association (ALA), crussell@alawash.org
  9. James Shulman, President, Artstor, Schulman@ARTstor.org
  10. Dan Wenzel, Executive Director, BLIND, Inc., dwenzel@blindinc.org
  11. Kristen Witucki, Community Coordinator, Blindness & Visual Impairment, Learning Ally, kwitucki@learningally.org
  12. Marcia Zeng, Professor, Kent State University, Prior member of IFLA’s Digital Library Guidelines Working Group and the ASIST standards committee, mzeng@kent.edu

Partners for IMLS project

  1. American Council of the Blind, Eric Bridges, Executive Director, ebridges@acb.org
  2. American Library Association, Alan S. Inouye, Director, Office for Information Technology Policy, ainouye@alawash.org
  3. Association for the Blind & Visually Impaired, John McElheron, Social Worker, jmclhoron@avimichigan.org
  4. Blind Service Association, Debbie Grossman, Executive Director, dgrossman@blindserviceassociation.org
  5. Digital Public Library of America, Daniel Cohen, Executive Director, dan@dp.la
  6. Learning Ally, Mary Alexander, National Program Director, malexander@learningally.org; Kristen Witucki, Community Coordinator, kwitucki@learningally.org
  7. Milwaukee Art Museum, Beret Balestrieri Kohn, Audio Visual Librarian, beret.balestrierikohn@mam.org
  8. Milwaukee Public Library and Wisconsin Talking Book & Braille Library, Paula Kiely, Director, pkiely@milwaukee.gov
  9. Milwaukee Public Museum, Hillary Olson, Vice President, Audience and Community Engagement, olsonh@mpm.edu
  10. National Federation of the Blind-WI Chapter, John Fritz, President, johnfritz66@gmail.com
  11. University of Wisconsin-Milwaukee (UWM) Libraries, Michael Doylen, Associate Vice Provost & Director of the Libraries, doylenm@uwm.edu
  12. Vision Forward Association, Terri Davis, Associate Director, tdavis@vision-forward.org
  13. Wisconsin Library Services (WiLS)- Recollection Wisconsin program, Emily Pfotenhauer, Recollection Wisconsin Program Manager, emily@wils.org

Supporting organization

United States Library of Congress, Office of the Law Library, Rebecca Shaffer, rshaffer@loc.gov