BEN Collaborative Generalized Cataloging/Submission Tool Pilot Test

A pilot test of the BEN Collaborative Generalized Cataloging/Submission Tool was conducted at the American Society for Microbiology (ASM) Conference on Undergraduate Education. The test was conducted on Saturday, May 20, 2006, during the exhibit hours. The test time was included in the program book, and faculty testers were recruited from Microbelibrary.org workshops held earlier in the day.

The goals of the pilot test were to:

  1. Identify what terminology on the screen needs to be changed or defined
  2. Identify whether users understand the vocabularies and are able to apply them to their submissions
  3. Identify if the order of the information presented is logical for the user
  4. Obtain general feedback from users on the ease of use of the tool and their comfort using it
  5. Confirm users' willingness to use the tool to submit their materials to Microbelibrary (ML), or identify impediments to use

The version of the tool available for the pilot allowed authors to submit either a visual or a curriculum resource. The pilot version of the tool is not a full implementation of the specification that was previously circulated and reviewed by BEN Collaborators and NSDL Core Infrastructure. The pilot tool's capabilities are:

  1. Input screens for authors to enter the descriptive information necessary for submission of resources to the Microbelibrary Visual and Curriculum Resources Collections (including the special fields for each collection)
  2. Screens for administrators to review metadata records

Jean Kayira, Coordinator for Microbelibrary Educational Resources, created a page with links to sample visual and curriculum resources. The computers were set up with desktop shortcuts to the cataloging tool and the sample resources page.

Jean Kayira and Linda Akli staffed the pilot test. Shai Sachs of Isovera was on call; although he checked in, no technical difficulties were encountered with the setup or with the tool during the testing.

Each tester was asked to submit at least one visual and one curriculum resource, using their own name as the primary author. The testers were encouraged to submit resources from the sample page, although some submitted alternative resources.

Comments and questions that the testers made while using the tool were collected by Jean and Linda. The testers were asked the following questions when they completed their testing.

Q1. Did you attend one of the Microbelibrary workshops? If not, were you familiar with Microbelibrary prior to participating in this test?

Q2. What terminology on the screens was unclear?

Q3. Are you familiar with the terms in the drop down menus?

Q4. Did you have any difficulty selecting the terms in the drop down menus? If yes, which ones?

Q5. Did you have difficulty with the descriptive information requested? If yes, which ones?

Q6. How would you rate the ease of use of the tool? Simple, Moderately Difficult, Difficult

Q7. Would you like to submit resources to Microbelibrary? If not, why?

Q8. Is there anything else you’d like to share with us about submitting resources to Microbelibrary?

Twelve of the 18 testers (67%) had attended one of the Microbelibrary.org workshops or had prior knowledge of the Microbelibrary. Seven testers had difficulty selecting terms from the drop down menus, in part because of the length of some of the lists and the difficulties encountered when attempting to add terms to a list: the screen refresh, and the distance between the option to add a new term and the original question, were confusing. Six users had some difficulty providing the descriptive information requested, for varying reasons. Table 1 summarizes the answers.

Table 1: Answer Summary

      Yes   No   Simple   Moderately Difficult   Difficult   Total
Q1     12    6        -                      -           -      18
Q2      3   15        -                      -           -      18
Q3     18    0        -                      -           -      18
Q4      7   11        -                      -           -      18
Q5      6   12        -                      -           -      18
Q6      -    -       18                      0           0      18
Q7     18    0        -                      -           -      18

Overall, the testers were very positive about the tool. The specific comments are in Appendix A.


Appendix A. User Comments

Question   Number of Testers*   Description
Q2         3                    The Finish button appears too early, before all the information has been filled in; it is on the first screen and should appear only on the last screen, after all information has been supplied.
Q4         1                    Core skills drop down menu choices straddle areas.
Q4         2                    Option to add keywords not clear; too far from the question.
Q4         6                    Selecting "other" and adding authors, keywords, or micro-organisms is confusing; the screen refresh puts the user back at the top of the screen.
Q4         1                    Prefer cutting and pasting to scrolling through the drop down menu.
Q4         1                    Intended audience: "All" should be a choice.
Q4         1                    Want to add to the core skills description.
Q4         1                    Learning time: was told in an earlier session that it varies and needs to be accompanied by an explanation.
Q4         3                    Found the drop down menus cumbersome to use.
Q4         1                    Clarify for authors which vocabulary/terms are controlled by ASM and which can be added to by users.
Q5         1                    The abstract field asked for author names, which are already provided in the author fields, rather than an abstract.
Q5         1                    Too many descriptions requested; they seem to overlap.
Q5         1                    Relevance (Instructional Value) description: the last paragraph should come first.
Q5         2                    Links to existing relevant ML resources: won't include them.
Q5         1                    Legend description not clear.
Q5         1                    Provide a list of safety issues to select from, with the option to type in additional ones.
Q5         1                    New faculty: unclear about pedagogy and pedagogical terms.
Q5         1                    Didn't understand the difference between student and instructor versions for curriculum resources.
Q5         1                    There should be a link back to the rubric in the appendix for answer keys.
Q8         3                    Black letters on grey boxes are difficult to read.
Q8         3                    Provide something to print/review to know ahead of time what information is needed.
Q8         3                    Provide an option to delete a record or not submit it.
Q8         2                    Provide a way to save and come back later to complete a submission.

* This is the number of testers who made the same or a similar comment.