Assessing Libraries’ Contribution to Student Learning
Alaska Library Association Conference, February 2006
Abstract: Academic libraries exist to further learning, yet it’s difficult to know how well they accomplish that mission. This workshop will explore the contradictory politics of assessment, provide some examples of naturalistic assessment tools, and give participants an opportunity to draft a library assessment plan focused on student learning.
First, let me lay out the problems I have with the assessment movement. I agree with an op-ed piece in Inside Higher Ed in which Edward Palm described the charismatic mood of assessment conferences as “a sort of New Age revival.” It’s telling that he likens it to a religious experience. One of the reasons the assessment movement has the appearance of a cult is that it demands a full immersion into its belief system that goes well beyond the practical or the rational. It has become a creed, and too often requires a profession of faith.
My college is accredited by the North Central Association. When we were preparing for an accreditation team in 2003, the materials they provided us had little to say about what students should learn, or how they should learn, but focused on how far along the institution had come in “developing a culture of assessment.” They included a rubric by which we could judge our progress. Those who attained the highest level of enlightenment were those who could produce lots of evidence that they gather lots of evidence and that students and faculty believe that gathering lots of evidence is tremendously important. It all but required signing a blood oath, and it had little to do with what students actually learn. As Palm has said, "I’m still not sure about what a 'culture of assessment' is. As nearly as I can determine, once a given institution has arrived at a state of profound insecurity and perpetual self-scrutiny, it has created a 'culture of assessment.'" In the past year or so they’ve backed off on the cultural indoctrination bit, partly because too many institutions were failing the test of faith and needed return visits.
Furthermore, this cult has a strong political element: according to Palm, “The much-touted shift in focus from teaching to student learning at the heart of the assessment movement is grounded in the presupposition that professors have been serving their own ends and not meeting the needs of students.” And where exactly is all of this coming from? From a genuine interest in improving student learning? Sometimes, yes, as it is found in the Boyer Commission Report or in publications of the American Association of Colleges and Universities, but quite often it stems from a call from outside the academy for “accountability,” for making higher education “pay off” in a currency that is primarily linked to economic productivity. The U.S. Department of Education, partly driven by concerns that tuition costs are rising too steeply and partly by a desire to increase the numbers of American students trained in science and engineering, has created a Commission on the Future of Higher Education. The Secretary of Education, Margaret Spellings, said at a press conference kicking off the Commission, "It is time to examine how we can get the most out of our national investment in higher education. We have a responsibility to make sure our higher education system continues to meet our nation's needs for an educated and competitive workforce in the 21st century." So in a sense, this skepticism about whether students are learning is rooted in concern that they are learning the wrong things. The assumption is that critical thinkers who have an interest in social justice and, perhaps, a major in history are of less value to employers than narrowly trained scientists. “What we are really seeing,” Palm says, “is the re-emergence of the anti-intellectualism endemic to American culture and a corresponding redefinition of higher education in terms of immediately marketable preparation for specific jobs or careers.”
The Commission, by the way, believes that colleges need to pull up their socks because the No Child Left Behind Act has so improved K-12 education that colleges are not ready for the new demands they will face. And they may well recommend that colleges and universities be required to adopt national standardized testing in order to qualify for federal funding.
At the same time, a very different system of accounting still holds a lot of influence in higher ed. The factors that contribute to a high ranking in US News or in the National Research Council’s department rankings have little to do with all the evidence that is compulsively collected for accreditation visits and even less to do with student learning. They have to do with status and a different sort of “productivity.” A company called Academic Analytics will rank departments at research universities by journal publications, books published with university presses, journal citations, and grant dollars per faculty member - all factors that help clutter up libraries with useless publications and exhaust public funding for research but have no direct effect on student learning.
So, given all the contradictory and politically noxious elements embedded in the assessment movement, why should we bother? Well, if you boil down what accreditation agencies are saying to its essentials, it’s not inconsistent with our own goals. They say we need to find ways to figure out what students are learning and where they’re running into difficulties, and use that information to inform what we do next. I would argue that all good teachers have always done this, so by complying with that basic concept we aren’t doing anything that goes against our principles. We’re simply describing to these meddlesome agencies what, at our best, we already do. The trick is to shift the mindset from a defensive one (“I have to gather evidence so I can prove I’m doing my job”) to a creative, inquisitive one: to develop a culture of curiosity rather than a culture of evidence. And then, to develop processes for conducting meaningful assessment that don’t cut too deeply into the time it takes to teach well and don’t shift our focus from meaningful learning to learning that is easy to measure.
For libraries in particular, this can be a truly fruitful opportunity to make clearer to ourselves and our communities that our mission is not about how many books we have on our shelves but about how those books contribute to learning. If we don’t know the answer to that, what does it matter how many there are, or even how often they’re checked out? We can do better. And the ACRL Standards for Libraries in Higher Education approved in 2004 urge us to do better. The previous standards were focused on inputs and outputs. Now outcomes have been added to the mix, and that lets us ask much more interesting questions.
The effects of doing assessment of learning in libraries can be small and practical. For example, when we first collected student papers to examine as part of our assessment plan, even before we developed a good rubric for reading those papers we noticed students had a lot of trouble citing electronic sources. As a result we spent a few hours putting together a page of citation models, and it quickly became one of the most visited pages on our library’s website. You may wonder why it took us so long to do something so obvious, but what had been in the category of “we should do this when we get around to it” moved up on the to-do list, and that was driven by seeing firsthand what students needed.
A more ambitious outcome of developing measures for student learning in libraries is that we can uncover things about our students’ research behaviors that are worth sharing with faculty in the disciplines. What we learn from students in focus groups and interviews, for example, about their research processes and how they choose sources can help faculty as they design assignments and mentor novice researchers.
And finally, if we learn interesting things about our own students and their learning, chances are the results will interest librarians generally. The Scholarship of Teaching and Learning movement has encouraged scholars in all fields to apply their research methodologies to their classrooms. We, too, have plenty of opportunity to use assessment as an opportunity to contribute to the field. Our library assessment plans can nicely align with our own research interests. Consider it multitasking.
Though it can be a chore, institutional accreditation can also be a moment to sum up and share with the community (as well as the external reviewers) the contributions your library makes to student learning in ways that may not occur to those outside the library. Though the library and information resources standard (Standard Five) of the Northwest Commission stresses traditional inputs and outputs relating to holdings, equipment, facilities, and personnel, librarians should also contribute to addressing Standard Two, on the educational program and its effectiveness, in which there is this statement: “Faculty, in partnership with library and information resources personnel, ensure that the use of library and information resources is integrated into the learning process.” (2.A.8)
How do you interpret that directive on your campus? What would you like to see happen in the best of all possible worlds?
What I’d like to do next is to walk you through one approach to developing an assessment plan that is focused on student learning. This process is based on something that happened on our campus. In 1998, with an accreditation visit in the wings and some stern words about assessment in our previous report, all academic departments were asked to develop assessment plans using the template I’ve distributed. Though the planners had departments like history and physics in mind, we are technically an academic department so were included in this process unlike some support units such as IT, which were never asked to focus so clearly on student learning. What a great opportunity!
The first task was to decide as a department what we felt students needed to know, to be able to do, and to believe when it came to our library and its role in lifelong learning. And we couldn’t afford to include the whole of the Information Literacy Competency Standards. We were asked to choose four outcomes, no more. And the discussion we had the day we started to draft our plan was one of the most meaningful and exciting ones I can remember us ever having. Because what we were trying to define were big questions: What is the purpose of an academic library? What do we want students to learn while using our libraries? What does “information literacy” really mean? Why does it matter?
So before we talk about how to find out whether students are learning, let’s take some time to think about what it is we want our students to learn. What’s really important? They should be things that encompass big ideas, not specific skills or bits of knowledge. And they should be relevant to life after college.
I’m also going to suggest something radical. Don’t use the Information Literacy Competency Standards for Higher Education as your scaffold for outcomes, even though nearly all assessment plans that I’ve seen do so. In fact, many librarians seem to have committed them to memory the way police know the criminal code: how do you ensure compliance with standard one point two point six? I think that’s problematic for two reasons.
First, the standards break down what students should know as if it’s a singular process. You define an “information need” or a topic and you proceed to create a “product” (a research paper or presentation) while complying with ethics (e.g. without plagiarizing). It’s a process very much modeled on one kind of information use: completing a school assignment. I would argue that, after college, much of what we hope students have learned by using libraries is not geared to completing tasks, but rather prepares them for a wider understanding of the world and how they can participate in creating and responding to knowledge in a variety of forms. That rarely involves defining an information need in order to create a product.
Second, the standards reduce information literacy to a set of small, constituent parts, each of which can be separately tested. They fail to ask how those parts are put together, and that to me is a critical failure.
So just for today, try for “big picture” outcomes, not the day-to-day library skills we need to teach students in order to use our libraries. And don’t limit yourself to outcomes that derive directly from your part in their instruction. In other words, don’t limit yourself to evaluating your instruction program, but assess what students have learned without regard to how they learned it. If they are having trouble learning something important, then you may be able to help. If they know how to do it, it doesn’t really matter whether they learned it from you in your classroom or by some other means. Remember, you’re not assessing yourselves or your programs, but student learning in general.
As an example, one of the outcomes we chose for our assessment plan was “Students will understand how knowledge is organized and will be able to use that understanding to pursue information independently.” That is big enough to cover essentially what’s in the standards. But we also included “Students will develop an understanding of how knowledge is produced and disseminated and will recognize that they play a role in knowledge production” because that identification of the self with the conversations that go on among the books on the library shelves seemed important to us, more important than merely learning how to locate those books or use them for a specific purpose.
So, think about what really matters and try for a few “big picture” outcomes that you want your students to carry with them beyond graduation. Let’s try to do this in twenty minutes.
[break into groups and come up with four outcomes; share]
Now, given these big issues, how can we figure out how our students are doing? What measures would give us the greatest insight? And remember insight is the goal, not numbers that justify or prove or defend. We’re looking for data that helps us see from a student’s perspective what works and what doesn’t. Your next task is to develop a set of measures that could give you insight into whether your students are learning the things that are stated in your outcomes. And here are the rules:
Final task: Now you have the rudiments of an assessment plan focused on student learning. Spend five minutes coming up with one research project that such assessment might enable.