Magazines > Computers in Libraries > February 2009

Vol. 29 No. 2 — February 2009
FEATURE
Virtual Journal Room: MSU Libraries Table of Contents Service
by Michael Sutherland and Jason Clark

Montana State University (MSU) Libraries (where the authors serve as web services librarian and digital initiatives librarian, respectively) is perhaps unique because it has no current periodicals room due to lack of space. As a result, finding current journals for research is difficult for our patrons, who do not always know how or where to locate them. This became apparent when MSU Libraries participated in the LibQUAL library assessment study in the spring of 2008.

The survey examined patrons’ perceptions of library services and indicated that patrons had a difficult time accessing both print and electronic journals. Despite the libraries’ more than 44,000 subscriptions, including 9,924 print subscriptions, users reported that they did not have access to the print and/or electronic journal collections required for their work. They also felt the libraries needed to do a better job of delivering electronic resources to their homes or offices. Based on these perceptions, we appeared to have a problem communicating what we held and how patrons could access it.

Patrons can locate physical journals by searching the library catalog. The journals are integrated with other library materials in the stacks. The catalog does not, however, contain information regarding our electronic subscriptions, so we direct patrons to our JournaList, a Serials Solutions database accessible through the MSU Libraries’ website. JournaList is updated only once a month and can contain inaccuracies, so a patron might not find the most up-to-date subscription information. Additionally, browsing these journals is a complicated, multistep process: JournaList to database to volume to issue to content. Patrons may go through all of these steps only to discover that the current journal issue is not available.

The Inspiration

Coincidentally, in April 2008, we attended Computers in Libraries in Arlington, Va. During one afternoon session, Derik Badman, the digital services librarian at Temple University, gave a presentation titled Virtual Reading Rooms. It was a compelling talk that suggested a potential solution to our own situation: if the library could not have a physical current periodicals room, perhaps we could provide one virtually. Since April, we have investigated RSS aggregators and worked on creating a web-based current periodical table of contents (TOC) service using the Google AJAX Feed API.

The Initiative

As mentioned, there is no physical space for current periodicals at MSU Libraries. To answer this lack of a physical journal browsing environment, we developed a web-based journal TOC distribution system that enabled the delivery of journal articles to users’ desktops. The current beta version of our TOC service is linked from our journals page at www.lib.montana.edu/journals. An individual sample of the TOC service is available at www.lib.montana.edu/toc/lis. Our system uses the common XML feed formats Atom and RSS, the Google AJAX Feed API, and some JavaScript (or server-side PHP) working within the Ajax framework to deliver targeted journal content to our users as a series of XHTML pages. The goal was a browsable web environment that enabled article-level access to the multiple journal packages available from MSU Libraries.

(Note: The Google AJAX Feed API is available at http://code.google.com/apis/ajaxfeeds. The documentation details how to make calls to the API and what to expect from the returned results.)

As we moved toward our goal, we started small and identified some early adopter faculty groups to pilot the service. We decided to move forward with open feed formats early on because we wanted a flexible data source for our templating system of different journal content page views. We were seeing more and more RSS and Atom feeds of journal content provided by publishers, and it seemed like a “good enough” data source to remix the information contained—the journal title, the published date, article titles, and author information—into a user-friendly XHTML list. Our open templating system and use of available publisher and vendor feeds allow any library to mash up RSS feeds, and a portion of this article will detail how to “re-engineer” the Google AJAX Feed API JavaScript and XHTML markup to create a journal page view that lists article content. More on that in a moment.

(Note: There is an excellent resource that aggregates all the public feeds from the major publishers and vendors. The grant-funded project, supported by the JISC, is called ticTOCs and is the best place to start if you are looking to collect table of contents feeds for your patrons.)

The Google AJAX Feed API is really the engine that does all the work for our mashup. Thankfully, it’s relatively easy to understand and use. It was an important move to use the API because it gave us access to the highly optimized cached version of the Google index of web feeds. Access to the Google index enabled the speed and responsiveness you see in the TOC mashup. To get started, we make a series of requests to the API through JavaScript or a PHP script. Within our scripts, we make requests that include our source feed URLs; the number of feeds to return; the different article title, author, and journal title information; and the structured data format to return. (In our case, it’s usually JSON—JavaScript Object Notation.) Once all this information is returned, we use our scripts to parse the data and format it into an XHTML page view.
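The shape of that parsing step can be sketched as follows. The `result` object here is a hand-made stand-in for what the API returns (the field names `result.feed.title`, `result.feed.entries`, and the per-entry `title`, `link`, and `author` follow the API’s documented result format, but the data itself is invented), and `renderFeed` is our own illustrative helper, not part of the API:

```javascript
// A hand-made stand-in for an API result: same field names the
// Google AJAX Feed API returns, but invented example data.
var result = {
  feed: {
    title: "Library Hi Tech",
    link: "http://www.emeraldinsight.com/lht.htm",
    entries: [
      { title: "Article One", link: "http://example.org/1", author: "A. Author" },
      { title: "Article Two", link: "http://example.org/2", author: "" }
    ]
  }
};

// Turn the parsed result into a simple XHTML list, the way our
// scripts build a journal page view.
function renderFeed(result) {
  var html = "<h2>" + result.feed.title + "</h2>\n<ul>\n";
  for (var i = 0; i < result.feed.entries.length; i++) {
    var e = result.feed.entries[i];
    html += "<li><a href=\"" + e.link + "\">" + e.title + "</a>";
    if (e.author !== "") {
      html += " by " + e.author; // only show authors the feed provides
    }
    html += "</li>\n";
  }
  return html + "</ul>";
}

var page = renderFeed(result);
console.log(page);
```

In the live service the `result` object arrives from the API rather than being hand-built, but the loop-and-format step is the same.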

You can see in the source data file on the previous page that all the journal data is there—journal title, links, authors, article titles, and so on. Google has a simple wizard that enables a developer to plug and play with some values. If you are interested in getting up and running quickly, visit the Google Dynamic Feed Control Wizard (www.google.com/uds/solutions/wizards/dynamicfeed.html) and try it out. After you fill in some values about source feeds, you’ll get some JavaScript to plug into your website.

The code produced by the Google Dynamic Feed Control Wizard was our starting point. The next section will detail how we were able to customize the JavaScript and play with the structured data returned to create our MSU Libraries TOC service.

Sample Code and Explanation

To customize the MSU Libraries TOC service, we copied the gfdynamicfeedcontrol.js and CSS from Google to our server. The first problem we encountered was how to proxy all hyperlinks so that only MSU affiliates could access databases and licensed information while still providing article-level access to journal packages. This was accomplished by adding the following code to the function feedLoaded_, with the help of Ben Lisbakken, a Google support engineer on the Google AJAX Feed API forum:

var proxy = 'http://proxybz.lib.montana.edu/login?url=';
result.feed.link = proxy + result.feed.link;
for (var i = 0; i < result.feed.entries.length; i++) {
  result.feed.entries[i].link = proxy + result.feed.entries[i].link;
}

This function is the callback the Google AJAX Feed API invokes after a feed loads. Adding the proxy prefix inside it changes the feed object before it is displayed anywhere, so no matter where the feed appears, its hyperlinks will always carry our proxy prefix. This might become a problem in the future if we use RSS feeds from sources that we do not subscribe to, because the proxy server would return an error, but for the moment this is not an issue.

As the TOC service was demonstrated to colleagues, we discovered that off-campus users were unable to use the service because the feeds would not load. Additional feedback indicated that users did not like the scrolling marquee that featured an article and cycled through each feed. They also wanted citation information with each article instead of just the article title. This last request was a challenge, since the RSS feeds provide differing amounts of citation information depending on the provider (e.g., EBSCOhost; Gale, a part of Cengage Learning; etc.). For instance, some feeds include an abstract for each article, while others do not. To remove the marquee from the display, we deleted its code and commented out the <div> containers built in the function createSubContainers_ so they would not be created. Additionally, we altered the JavaScript that built the GFdynamicFeedControl and bound it to the page: we shortened the function name to make it easier to call from a script, and we limited the function to loading only one RSS feed at a time instead of loading all of the feeds.

<script type="text/javascript">
    function load(rss, feedTitle) {
      var feed = [{
        title: feedTitle,
        url: rss
      }];
      var options = {
        numResults: 15
      };
      new GFdynamicFeedControl(feed, "feedControl", options);
    }
    google.load("feeds", "1");
    google.setOnLoadCallback(load);
</script>

Using the JavaScript onload event, one feed would be immediately loaded and displayed on the page, while the others would be dynamically loaded when called upon by a user clicking on links on the left side of the page, such as the following:

<a href="javascript:load('http://www.emeraldinsight.com/info/RSSFeeds/InsightFeeds/lht_rss.xml', 'Library Hi Tech');">Library Hi Tech</a>

This method significantly decreased the loading time of the page while allowing for more feed entries to be displayed within the viewable area of the screen. Off-campus users could now access the feeds without difficulty. Also, to include standard citation information in the entries as users wanted, we added code to the createListEntries_ function, such as the following:

var entryAuthor = this.createDiv_('gf-author');
if (entries[i].author != '') {
  entryAuthor.innerHTML = 'by ' + entries[i].author;
}

If the author information is available in the feed, then it will be added in the display.

At the same time, we began working on a version that requires no client-side JavaScript: a PHP script that consumes the same JSON. This version does not require a Google AJAX Feed API key and uses the same highly optimized cached copies of feeds that Google uses for its Google Reader application. Because the work happens on the server in PHP, there is no need for <noscript> tags and no worrying about whether users have disabled JavaScript; if you are interested in viewing this code, it is available for download at www.lib.montana.edu/~michaels/code.
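The server-side idea can be sketched without the PHP itself. The v1.0 AJAX Feed API also exposed a plain REST endpoint that returns the cached feed as JSON; the sketch below (in JavaScript for consistency with the rest of this article) builds such a request URL. The endpoint path is the documented one, but `buildFeedUrl` is our own illustrative helper, not part of any library:

```javascript
// Build a REST request to the AJAX Feed API's feed-load endpoint.
// A server-side script (PHP in our case) fetches this URL, decodes
// the JSON, and renders the page before it reaches the browser.
function buildFeedUrl(feedUrl, numEntries) {
  return "http://ajax.googleapis.com/ajax/services/feed/load" +
         "?v=1.0&num=" + numEntries +
         "&q=" + encodeURIComponent(feedUrl);
}

var request = buildFeedUrl(
  "http://www.emeraldinsight.com/info/RSSFeeds/InsightFeeds/lht_rss.xml", 15);
console.log(request);
```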

Lessons Learned

Of course, any extended work in building a new library application leads to interesting lessons and generates more and more ideas about where to take things next. The MSU Libraries TOC project was no exception. First, some quick notes on the lessons learned.

Lesson 1: Not all feeds are created equal. We were frequently confounded by the variability in quality from feed to feed. Some feeds included author information and some didn’t. Some feeds dumped XHTML into their content and drove our parser crazy. The important step to take is to study your data sources and understand what the lowest common denominator of data available across each one will be. In our case, we were limited to <title>, <link>, and <description> elements because we knew all of our feeds would have those fields available for our scripts.
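That lowest-common-denominator approach can be sketched as a small normalizer. The `normalizeEntry` helper and the `messy` entry are invented for illustration; the point is keeping only the title, link, and description that every feed provides, and stripping any markup a publisher has dumped into the description:

```javascript
// Keep only the fields all of our feeds share, and strip embedded
// tags so stray XHTML in a description can't break the page view.
function normalizeEntry(entry) {
  return {
    title: entry.title || "",
    link: entry.link || "",
    // crude tag-stripping: remove anything that looks like <...>
    description: (entry.description || "").replace(/<[^>]*>/g, "").trim()
  };
}

var messy = {
  title: "An Article",
  link: "http://example.org/article",
  description: "<p>Summary with <em>markup</em>.</p>",
  author: "Sometimes Present" // extra fields are simply ignored
};
console.log(normalizeEntry(messy).description);
```

A regex this crude would mangle a description that legitimately contains a literal `<`, but for display purposes it was good enough for our feeds.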

Lesson 2: Stick to feeds that have persistence and permanence. We originally wanted to use some generated feeds from our databases, but we had to reconsider when we found out that some generated feeds expire over time.

Lesson 3: Study source code and don’t be afraid to ask questions in development forums. At one point, we were at a loss as to how to add proxy prefixes to the journal URLs. Some patience and posing some questions in the Google AJAX Feed API Google Group put us in touch with one of the support engineers for the API. In time, he taught us how to make the script work for our needs.

The Future of the TOC Project

As to the future directions for the TOC project, we are looking at ways we might aggregate the disparate source feeds into master feeds. We’d like to use this master feed to create gadgets that “broadcast” our journal lists (according to subject) into the campus learning management system and other web portals such as iGoogle. Another possibility would be letting patrons create their own customized journal page views and feeds through a simple web form that allows for a remix of the content we already have.
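The master-feed idea could be sketched like this. `mergeFeeds` is a hypothetical helper, not part of our service yet, and the feed objects are invented examples; it folds several parsed feeds into one list, tags each entry with its journal, and sorts newest-first:

```javascript
// Hypothetical sketch: merge several parsed feeds into one master
// list sorted newest-first (dates as ISO 8601 strings, so plain
// string comparison orders them correctly).
function mergeFeeds(feeds) {
  var master = [];
  feeds.forEach(function (feed) {
    feed.entries.forEach(function (entry) {
      master.push({
        journal: feed.title,
        title: entry.title,
        published: entry.published
      });
    });
  });
  master.sort(function (a, b) {
    return b.published.localeCompare(a.published); // newest first
  });
  return master;
}

var master = mergeFeeds([
  { title: "Library Hi Tech",
    entries: [{ title: "Older", published: "2008-11-01" }] },
  { title: "Journal of Documentation",
    entries: [{ title: "Newer", published: "2008-12-15" }] }
]);
console.log(master[0].journal + ": " + master[0].title);
```

A merged list like this could then be re-serialized as a subject-level RSS feed and pushed into gadgets.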

Another plan is to test the usability of our web template and tweak the design accordingly. We’d also like to see a greater integration with our OpenURL server, as this seems like the most logical gateway for providing smart article-level access to journal content. Finally, while we are looking at more outreach and marketing through our library liaison program to let departments know that we can build TOC journal pages based on their research interests, we want to have a service available to all of our patrons.

Conclusion

As stated by Google, “adding a Dynamic Feed Control to your page is a snap.” Users may simply drop in what the Dynamic Feed Control Wizard provides, copy code from the examples, or follow the API’s three-step setup instructions to insert the Dynamic Feed Control into a webpage. The ability to customize this API is an attractive feature. The developer documentation and Dynamic Feed Control Programming Guide make it easy for novice users to add advanced features, such as changing the number of results displayed from the default. Developers may also download and tweak gfdynamicfeedcontrol.js as we did; its thorough comments and readable structure give those with more advanced skills further customization options. The API is well-suited to the MSU Libraries TOC service, and Google does the heavy lifting, making this a speedy and responsive mashup. Consequently, we recommend Google’s AJAX Feed API for similar applications.

Further, professors in applied mathematics and political science who piloted the service, for example, shared it with other faculty in their departments. We now have requests for pages in mathematics, statistics, and mathematics teaching. According to LibQUAL, the perception among these groups was that we lacked the journals they required for their work. Thus, although it is difficult to measure the impact that the MSU Libraries TOC service is having thus far, we believe it is a successful solution to the library’s lack of a physical current periodical room.


Michael Sutherland (msutherland@montana.edu) is currently serving as the web services librarian at Montana State University (MSU) Libraries. He holds an Sp.L.I.S. and M.L.S. from Indiana University.

Jason Clark
(jaclark@montana.edu) is the digital initiatives librarian and the Digital Access and Web Services team leader at MSU Libraries. He holds an M.L.S. from the University of Wisconsin–Madison. He frequently posts code samples and his thoughts about library technology at www.jasonclark.info.
