Integrating RSS Feeds of New Books into the Campus Course Management System
Edward M. Corrado and Heather L. Moulaison
We both work as librarians at The College of New Jersey. TCNJ is a public liberal arts institution with roughly 6,000 undergrads and 900 graduate students that is supported by a medium-sized academic library. We are not a research institution, but the library does aim to support the courses and programs that TCNJ offers. Last year, we moved into an impressive new library building (see next page), and we are more eager than ever to bring the campus community in to use our resources.
Like many institutions that receive monies from the government, TCNJ has watched its budget shrink in recent years. We’re lucky that the library has remained a top priority for the college and that book budgets and faculty librarian lines have remained intact. Little money exists, however, for funding new library projects or making purchases to enhance the library’s services to the campus community. I, Edward, the systems librarian with a yen to solve difficult technology-related problems and a penchant for collaborating, first tackled the idea of working within these budgetary limits to come up with a new and useful library service. In the beginning of summer 2005, I started formulating the idea, and by August of that year, I got Heather on board. Heather, who has done a lot of teaching, including teaching traditional courses that were Web-assisted, is a subject-area liaison working with faculty, as well as a cataloging librarian who likes technology. There also were a lot of other folks, from librarians to IT staff to student workers, who contributed along the way.
The Idea to Integrate
My idea was to find a way to integrate RSS feeds of new subject-specific books into the course management system (CMS) on campus. We all know that RSS is a great way to manage constantly updating information. There are two typical ways to view RSS: 1) Use an RSS aggregator (see sidebar on page 64), and 2) Have the feeds show up automatically on a Web page. The latter is the basis for my idea. The course instructor can set up an automatic RSS feed that appears when a student opens the course page in the CMS. Feeds can still be downloaded to an aggregator (at right), but this requires a user to open the aggregator, and our students already have too much to do (Byrne 2005). Embedding the information in the user’s work flow has been the focus of some previous articles, and this seemed like a good way to have a captive audience of sorts (Dempsey 2005).
The goal of this particular idea was to get relevant lists of recent and available monographic acquisitions to display prominently within the CMS page for any given class on any given subject. The headlines would be tailored to that course, and would anticipate user need. TCNJ has its own homegrown course management system called the Simple Online Courseware System (SOCS). Other CMSs such as Blackboard already allow professors to display RSS feeds inside the CMS page. The IT folks working on SOCS were receptive to the idea of improving our system by adding a similar RSS feature. It was up to the library to create the RSS feed.
When I first considered creating the RSS feeds for our CMS, I asked myself a few questions. The first question was, “If we build it, will they come?” No matter how cool or useful you think something is, it is not worth implementing if no one wants it. TCNJ Library uses the Voyager integrated library system from Endeavor. I had recently installed Michael Doran’s New Book List, which allows patrons to search for new materials, and it was well received, so I had reason to believe that patrons were interested in learning about new library acquisitions. With that in mind, I had informal discussions with key stakeholders including other librarians, classroom faculty, students, and people from the IT department. They all seemed interested.
After determining that the feeds would be used if created, I went out to see if we could build something like this without any dedicated funding. While some other Voyager sites have since implemented their own versions of library acquisition RSS feeds, at the time I couldn’t find anyone who had done anything similar. Not being able to find any programming code we could borrow, I started to investigate how we could build the feed with minimal resources and time invested. I was confident that I could get the appropriate data out of Voyager, but converting it to RSS was another issue altogether. I discussed our plans with the college Web developer and he offered to “loan” me a student programmer who had worked on RSS feeds for other campus projects. Once I decided I could do this with the student’s help and that it wouldn’t cost anything (besides a limited amount of staff time), I determined this project was well worth the effort.
Our Methodology for Feeding the Data
I create the RSS feeds of new library acquisitions in a three-stage process:
1. Get the appropriate data out of the library catalog.
2. Convert the data into an RSS feed.
3. Display the RSS feed in the CMS.

Since Voyager is built on a relational database (Oracle) and I am familiar with SQL*Plus, I was confident that getting the required data out of Voyager would be a minor detail. What was not a minor detail, however, was determining what data should be used. First, I had to decide what should be advertised in the feeds as a “new” acquisition. After consulting with technical services, I found that it can take as long as 3 days after being cataloged for an item to get processed and be shelved. I only want books to appear in the RSS feed if they are recently acquired and fully processed. Based on our library’s work flows, I decided that a book with an item record created between 3 and 60 days prior should be considered new for this project.
Next, I had to determine how to separate the various feeds by subject. I considered a number of options including using acquisition funds or Library of Congress Subject Headings. In the end, I decided to use call numbers because it appeared to be the best way to automatically group items.
I also had to select which fields to display. It made the most sense to display the title, title statement (the full 245 MARC field), location, and call number. Additionally, I had to extract the acquisition date to order the items in the RSS feeds and the ISBN to create a hyperlink directly to the bibliographic record in Voyager. Once I figured out what data was needed, I wrote a SQL*Plus query that generates plain text files of new acquisitions for each subject area. These files would then be converted to RSS feeds during stage two.
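The stage-one decisions described above, a date window that skips items still in processing and call-number ranges that sort items into subject feeds, can be sketched in a few lines. This is purely an illustration: the ranges, feed names, and thresholds below are example assumptions, not TCNJ's actual configuration.

```python
from datetime import date

# Illustrative mapping of LC call-number ranges to subject feeds.
# These ranges and feed names are examples, not TCNJ's actual setup.
FEEDS = {
    "music": ("M", "MT9999"),          # LC class M: music and books on music
    "sociology": ("HM", "HX9999"),
    "general-literature": ("PN", "PN9999"),
}

def is_new(item_created, today, min_days=3, max_days=60):
    """A book counts as 'new' if its item record was created between
    3 and 60 days ago: old enough to be processed and shelved, recent
    enough to still be a fresh acquisition."""
    age = (today - item_created).days
    return min_days <= age <= max_days

def feed_for(call_number):
    """Assign a call number to a subject feed by simple lexical range
    comparison (a rough stand-in for true LC call-number sorting)."""
    for feed, (low, high) in FEEDS.items():
        if low <= call_number <= high:
            return feed
    return None
```

In the real system this logic lives in the SQL*Plus query (the date window as a `WHERE` clause on the item-record creation date, the call-number ranges as per-feed query parameters), so the extract files land already grouped by subject.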
In stage two, the plain text files are processed, with each field formatted accordingly and valid RSS files being output. There were two major design considerations at this stage. First, I needed to pick what tools to use to convert the plain text files to RSS. We chose the Perl programming language because it has rich text-processing features and because the student programmer and I were familiar with it. Also, some programming code was freely available as a Perl module called XML::RSS (authored by Jonathan Eisenzopf and Rael Dornfest), so we didn’t have to program everything from scratch. The second consideration was which RSS version to use for our feeds. (See Various Versions of RSS on page 25 of the February 2006 Computers in Libraries.) I decided to use RSS 2.0 because it was being used in other projects across campus. I felt that using the same version would make it easier to integrate our feeds into other campus systems.
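The actual conversion script was written in Perl using XML::RSS, as noted above. Purely for illustration, here is a Python sketch of the same stage-two output: an RSS 2.0 channel whose item headlines combine the title statement, location, and call number, with an ISBN-based link into the catalog. The catalog URL pattern is a hypothetical placeholder, not Voyager's real link syntax.

```python
import xml.etree.ElementTree as ET

def build_feed(feed_title, catalog_url, items):
    """Build a minimal RSS 2.0 document from extracted catalog data.
    Each item dict carries the fields pulled out in stage one."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    # RSS 2.0 requires title, link, and description on the channel
    ET.SubElement(channel, "title").text = feed_title
    ET.SubElement(channel, "link").text = catalog_url
    ET.SubElement(channel, "description").text = "New library acquisitions"
    for it in items:
        item = ET.SubElement(channel, "item")
        # Headline: title statement plus location and call number
        ET.SubElement(item, "title").text = (
            f"{it['title']} ({it['location']} {it['call_no']})"
        )
        # ISBN-based hyperlink straight into the bibliographic record
        # (placeholder URL pattern, not Voyager's actual syntax)
        ET.SubElement(item, "link").text = (
            f"{catalog_url}/search?isbn={it['isbn']}"
        )
    return ET.tostring(rss, encoding="unicode")
```

Whatever language is used, the essential point is the same: a generic RSS library handles the XML bookkeeping, so the script only has to map extract-file fields onto channel and item elements.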
In about a week, the student programmer and I had a working script that could convert the plain text files into RSS feeds. Using cron, a program that executes scheduled commands, I set the scripts to automatically run multiple times a day to create updated RSS feeds of new acquisitions. After the initial setup, the only changes have been when adding new feeds or adjusting call number ranges in existing feeds.
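The scheduling itself can be as simple as a pair of crontab entries. The times, paths, and script names below are hypothetical placeholders, not the authors' actual setup:

```shell
# Regenerate the catalog extract and rebuild the RSS feeds
# three times a day (illustrative paths and schedule)
0 6,12,18 * * *   /usr/local/voyager/bin/extract_new_books.sh
15 6,12,18 * * *  /usr/local/voyager/bin/build_rss_feeds.pl
```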
Once we had the RSS feeds being automatically created, the third stage was to integrate them into the CMS. This was the first, and so far only, time that we actually hit a roadblock. Well, it wasn’t really a roadblock, more like a detour, since it wasn’t something the library had control over. It ended up taking longer than expected to get RSS display capability added to SOCS. Once the programmers working on SOCS saw we had the feeds ready to go, and once Heather and I inspired the library dean to lean on them a little, IT added the RSS functionality pretty quickly. Teaching faculty were then able to display these feeds in SOCS (see below).
Benefits: Service, Visibility
I, Heather, had just been hired when Edward first talked to me about the RSS project idea. I gave it a little thought, both from the point of view of a teacher and a librarian. By integrating the local RSS feed of new books into a forum for potentially interested and captive users, the library was serving (e-)patrons while answering education’s call for more library visibility in the CMS (Cohen 2002).
When we set up feeds and get instructors to use them in their CMS pages, we have a continuous advertising venue for the library’s print holdings. These materials are hand-selected by departmental faculty or subject-area librarians, or they are acquired through standing orders with reputable academic publishers. The library invests a considerable amount of money and man-hours in selecting, acquiring, and processing these books. We want to let the users know that we’ve got a great collection, and that our books are just what they need to finish assignments, explore research topics, or delve further into new scholarship on a particular subject.
Within an increasingly digital campus, the RSS feeds promote the library’s continued relevance in the online information environment. Although an OPAC is not as simple to navigate as a search engine, practice makes perfect. The results make it worthwhile for our users to work through the OPAC learning curve. For the uninitiated, hyperlinked headlines in the CMS’s RSS feed take users directly into the library catalog’s short record display for the item. From here, users can pursue the More Like This option if they want to continue their library catalog search painlessly. They may find themselves embarking on a meaningful self-guided OPAC discovery tour without realizing it. These users may not be browsing the library’s print holdings on a regular basis otherwise, so we’re happy to welcome them electronically into the fold of regular browsers and users.
Edward and I are convinced that once the campus community sees what the library has to offer, folks will be impressed with our collection, intrigued by our acquisitions, and will want to use our print resources. Faculty will see instantly when the materials they ordered have arrived. (Otherwise, they may not know we’ve gotten in their book for weeks or even months.) The titles themselves are worth something too. Since I do a lot of cataloging in my job, I know that there are some newly published academic works with very provocative titles. Erotic utopia: the decadent imagination in Russia’s fin-de-siècle by Olga Matich was recently included in the General Literature feed. When titles like this appear in RSS headlines, they will pique the interest of students using the CMS and serve as additional motivation to click through.
By viewing books that the library has just received and by linking through to the library catalog, students can verify ahead of time that a title is not checked out. As a mid-size college library, our collection is respectable but not enormous. For some research works, students end up using interlibrary loan services and waiting an unknown period of time for their resources to arrive. With the RSS feeds, by contrast, the headlines link only to works we own. Besides being there physically, these books represent the most current trends in the field. This way, the RSS feeds create a virtual New Book area that is visible electronically every time the CMS page is opened.
Since the feeds are streamlined to display only acquisitions in the class’s subject area, they target a very specific user group with a particular set of materials exactly at the point of need. The feed is embedded in the class members’ work flow, taking the stress out of preliminary research. Over the course of the semester, the evolving nature of the collection is there, in black and white, for them to monitor.
One final benefit to creating RSS feeds to include in SOCS is that it spotlights and encourages local skills and talents without burdening any library department or area. On-campus faculty, staff, and students collaborated on various stages of the design, creation, implementation, and marketing of the project. Once put into place, there is nothing extra for technical services or subject-area selectors to add to their work flows since the feeds are automatically generated. As behind-the-scenes librarians, Edward and I got to get up from our computers and work collaboratively with members of the IT department, public services librarians, student programmers, and college professors. We learned from them, but also got to connect and show what we were able to do.
Bricks, Not Clicks
The idea has been realized, but we both understand this is still a work in progress. Because of the delay in getting RSS feeds into SOCS, fall 2006 will be the first semester of the regular academic year that the feeds are available. Thus, we haven’t been able to gather many usage statistics. However, anecdotal evidence has been encouraging. Subject librarians have been very enthusiastic. In fact, they were so interested in the RSS feeds that we worked with the college Web developer to display them in subject and class guides (at right). Teaching faculty who have already used the feeds in SOCS have provided positive feedback.
Teaching faculty in the Department of Music have let us know that they like the RSS feeds because they show students books that are physically in the library. These faculty members believe that when students actually go to the library they learn more and produce better scholarship than if they only use electronic resources remotely. One of our librarians termed this concept “bricks, not clicks.”
The library community has also shown great interest. Librarians have contacted us for information about this project and we’ve made our code available. We are proud that this project was awarded the 2006 New Jersey Library Association–College and University Section’s Technology Innovation Award.
As this project gains momentum, we see new issues arising. One subject librarian asked Heather about customized feeds for a very specific aspect of anthropology. Although it doesn’t fit nicely into call number ranges, there may be a way to generate a special feed based on carefully chosen subject headings. Cross-disciplinary topics are even more problematic, and we’re still working on a way to create a feed for areas such as global studies or French studies. Edward has had requests for feeds based on item type (such as music recordings, scores, and DVDs). Because many of these nonbook items do not have ISBNs, we’ll have to change how we hyperlink to the bibliographic record. Luckily, in the near future, updates to the ILS will allow us to hyperlink to records using bibliographic ID.
A major challenge for any project like this is getting the word out. The main method we have used is to simply speak up. We’ve discussed these feeds during faculty SOCS training sessions. We also take every opportunity to tell teaching faculty about the RSS feeds when we see them in the library or elsewhere on campus. We’ve sent e-mail to the faculty discussion list about this service and have publicized it in the news section of the library Web site. Newer faculty might be more apt to try new things, so we contacted the person in charge of the new faculty orientation about making a brief presentation. We got a decided boost when the Office of Public Affairs wrote a story about how we collaborated with a student on this project for a campus newsletter. An unexpected benefit of consulting with IT staff members about this project and encouraging them to include RSS feeds in SOCS is that they now feel a sense of ownership; they have become strong advocates for the library RSS feeds and are helping us get the word out.
Now that the original idea has been put into action, we have been able to integrate the physical library into the campus CMS—a domain previously dominated by electronic resources or Web searches. By raising awareness of the library’s ever-evolving print collections, we hope to increase the circulation of new print resources. When users come in for the books, we hope they will stay to take advantage of the other services we offer. By helping to get users familiar with the holdings and by enticing them to come to the library, this project has helped to reinforce the library’s place as the intellectual hub of the college, one of the very reasons the new building was erected!
From our point of view, this project has been a “win-win-win” situation for TCNJ. Users win when they can easily find library print resources. Librarians and administrators win when users take advantage of our services and resources, justifying library-related expenses. Lastly, the college wins when we can foster an environment of creative collaboration in a way that supports its mission by better serving faculty and students.
For more information, including examples of code, please see our project Web site at http://www.tcnj.edu/~library/rss.