LIBS 674 – Blog Post #4, Assessment and Evaluation

Prior to the event, it would be beneficial to gather a small group of stakeholders to discuss expectations and agree on what the event should look like. By establishing a collective idea of what a successful event looks like, our team would know what to judge the actual results against (Hoag, 2019).

Upon completion of the Health Care events at my library, assessment and evaluation of the programming and the initiative as a whole would begin. The first step would be a meeting of library staff and the volunteers who assisted with the events, where feedback could be gathered and discussed. In the meeting, I would ask each individual to write down three aspects of the initiative that went well and three that could be improved upon. One person would record the collective responses, and then, as a group, we could go over the issues and ideas in detail. Ideally, this meeting would take 30 to 45 minutes. Since the participants are all staff or volunteers, they would serve as a focus group for those involved with the healthcare initiative (Ainsworth, 2020). This format would provide interview-style feedback through open discussion, as well as a tangible list of comments.

Next, patrons who attended the events would be surveyed for feedback. Surveys would be sent out via email or text message two days after the last event, using contact information gathered at the events themselves. The survey would be built in Google Forms or SurveyMonkey and would include questions in a variety of formats: some rated on a scale from Strongly Disagree to Strongly Agree, others using check boxes, and some with short-answer sections. Offering a concluding short-answer section allows participants to respond in their own words, which can be more helpful than selecting from a list of answer options (Graves et al., 2018). For example, after asking participants, “Is there anything you can think of that would make the program better?” I would provide a short-answer section for them to write out their responses (Hoag, 2019).

Tailoring the assessment format and question format to the content of the question and the intended audience allows each question to draw out the most pressing information. For stakeholders and staff members, I believe it is more important to offer open-ended questions so that more discussion among teammates can occur. Conversely, offering attendees a variety of question formats, but primarily rating and selection questions, lets us target the specific information we need from the assessment. Creating specific questions with specific feedback in mind is what makes surveys effective (Graves et al., 2018).

References

Ainsworth, Q. (2020, April 2). Data collection methods. JotForm. https://www.jotform.com/data-collection-methods

Graves, S., LeMire, S., Mastel, K., & Farrell, S. (2018, August 13). Demonstrating library value through outreach goals and assessment. EDUCAUSE Review. https://er.educause.edu/articles/2018/8/demonstrating-library-value-through-outreach-goals-and-assessment

Hoag, E. (2019, July 11). Tips for reflecting on and evaluating your library programs. Demco Ideas & Inspiration. https://ideas.demco.com/blog/library-program-evaluation-tips/
