
ONLINE SEARCHER: Information Discovery, Technology, Strategies

Pop the Hood for Website Changes
By Jeff Wisniewski
May/June 2016 Issue

Webmasters dedicate much of their time and energy to the user experience. That outward focus is exactly as it should be. However, into the life of every webmaster comes a time when the technologies that underlie what we do change so greatly that we need to momentarily stop obsessing over creating great customer experiences and get our hands dirty under the hood. This is such a time. As with all things under the hood, you should consult with your resident network/server/programming guru, who may or may not be you, before taking final, irrevocable actions. This column introduces areas where technological changes require your attention: the lengthening of IP addresses, the security aspects of the hypertext transfer protocol, and accelerated mobile pages.

LONGER IP ADDRESSES

Library webmasters wear many hats. In an ideal world, we’d be able to focus, laser-like, day in and day out, on our websites: the interface, the usability, the user experience. That’s not the reality for many of us, who have more than a passing relationship with library electronic resources, because the library website is an important gateway to them. The under-the-hood problem? The world is running out of IP addresses. The solution? Add newer, longer ones.

The IP version with which we’re all familiar is IPv4. It looks like this: 192.0.2.123. The newer, longer address is called IPv6. (What happened, you might ask, to IPv5? It did exist, but was never publicly deployed.)
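
If you want to see the difference side by side, the short Python sketch below uses the standard ipaddress module; the addresses in it come from ranges reserved for documentation, not from any real network. Note that an IPv6 address is written as hexadecimal groups separated by colons, with :: standing in for a run of zeros.

```python
import ipaddress

# A 32-bit IPv4 address, written as four decimal numbers (documentation range)
v4 = ipaddress.ip_address("192.0.2.123")
print(v4.version)      # 4

# A 128-bit IPv6 address, written as hexadecimal groups separated by colons (documentation range)
v6 = ipaddress.ip_address("2001:db8:85a3::8a2e:370:7334")
print(v6.version)      # 6
print(v6.exploded)     # 2001:0db8:85a3:0000:0000:8a2e:0370:7334 (the full form)
print(v6.compressed)   # 2001:db8:85a3::8a2e:370:7334 (:: abbreviates the run of zeros)
```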

The World IPv6 Launch occurred on June 6, 2012, so it’s not exactly new. However, you’ll be hearing more and more about it moving forward. You should care about this change because it can have a substantial impact on user access to the electronic resources your library provides.

Much of the documentation on the web surrounding the change to IPv6 is wonderfully optimistic. Google says, “You don’t need to prepare anything for IPv6; your applications and devices will work just as they did before” (google.com/intl/en/ipv6/faq.html). That’s true for consumers of web services, but most libraries are also providers of web services (content) that authenticate users based on their IP addresses. For folks in one of your branches, it’s smooth sailing. You’ve already given all of your vendors your library’s IP addresses, so users pass through without incident.

From offsite, it’s a bit more cumbersome, as it involves a proxy server. You’ve given the proxy server’s IP address to your vendors as well, and it’s most likely configured so that every link to your licensed econtent has a proxy prefix specific to your institution, something like my university’s EZproxy prefix: pitt.idm.oclc.org.

This, combined with the URL of the resource in question, takes registered users to a specified database, in this example, the advanced search page for ProQuest’s Dissertations database: pitt.idm.oclc.org/login?url=http://search.proquest.com/dissertations/advanced?
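
That proxy-prefixed link is simple enough to build programmatically. The few lines of Python below are a hypothetical helper, not anything shipped with EZproxy; they just show how the institutional prefix and the target database URL combine into the kind of link shown above (the Pitt prefix is reused from this example).

```python
from urllib.parse import quote

# Hypothetical helper: build an EZproxy-style link by prepending the institution's
# login prefix to a licensed resource's URL (the Pitt prefix from this column is reused here).
PROXY_PREFIX = "https://pitt.idm.oclc.org/login?url="

def proxied_link(resource_url):
    # Percent-encode anything that would confuse the proxy's own query string,
    # while leaving ordinary URL punctuation alone.
    return PROXY_PREFIX + quote(resource_url, safe=":/?=&")

print(proxied_link("http://search.proquest.com/dissertations/advanced"))
# -> https://pitt.idm.oclc.org/login?url=http://search.proquest.com/dissertations/advanced
```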

The logic behind this is, “If someone is trying to connect to this ProQuest database and they’re not on the library network, make them log in to the proxy server so that they connect to ProQuest from the proxy IP (which ProQuest already has). Voila, they’re in.” What’s the problem? Currently, the entire system is predicated on the use of IPv4 addresses, not IPv6.

In the past, your network has most likely added new IPs as it has expanded. In those cases, you’ve just contacted your vendors and asked them to add the new IPs. Why would IPv6 be any different? Well, as of this writing, very few vendors support IPv6, and we’re not just talking about the small operators either. If you don’t take action, IPv6 addresses popping up on your network can cause access problems for on-site users. That’s not a great user experience.

What to do? First, coordinate with your network provider. IPv6 is likely somewhere on its radar. Make sure you know the plan. Talk to vendors about what they anticipate doing about IPv6. Finally, talk to your proxy administrator—sooner rather than later. Directives can be put into place that can help IPv4 and IPv6 peacefully coexist with minimal impact on your users.
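
To picture what peaceful coexistence means in practice, here is a minimal sketch, in Python, of the kind of IP check that proxy and authentication layers perform. The function and the address ranges are illustrative only (the ranges come from blocks reserved for documentation); the point is that the allow list has to contain your IPv6 ranges alongside the IPv4 ranges your vendors already have, or IPv6 users will be treated as off-site.

```python
import ipaddress

# Example allow list mixing IPv4 and IPv6 ranges (documentation prefixes, not real networks)
LIBRARY_NETWORKS = [
    ipaddress.ip_network("192.0.2.0/24"),       # on-site IPv4 range already registered with vendors
    ipaddress.ip_network("2001:db8:100::/48"),  # on-site IPv6 range -- the piece that's often missing
]

def on_library_network(client_ip):
    """True means direct access; False means send the user to the proxy login."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in LIBRARY_NETWORKS)

print(on_library_network("192.0.2.57"))        # True  -- IPv4 user sails through
print(on_library_network("2001:db8:100::42"))  # True  -- only because the IPv6 range is listed
print(on_library_network("203.0.113.9"))       # False -- off-site, route through the proxy
```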

SECURING THE PROTOCOL

There’s no shortage of stories of identity theft, hijacking, and malicious scripts (not to mention surveillance by governments around the world) to make you think twice about what you’re typing into Google, or even about going online in the first place. How does this happen? The simple answer is that the most commonly used protocol on the web, http, is inherently insecure.

This comes as no surprise to even casual users of the web, let alone those of us who are actively building on it. There is a way to make the web safer (though not completely safe): use secure connections for all browsing, rather than just for high-value transactions such as banking, as is largely the case now. Further, this tool for making browsing safer has been around for ages, so why isn’t it widely adopted?

The tool in question is Hypertext Transfer Protocol Secure, or https for short, which verifies the identity of a website or web service for a connecting client and encrypts nearly all information sent between the website or service and the user. If we’ve had at our disposal for quite a long time a tool to help prevent the all-too-common pitfalls of being online, why hasn’t it been implemented more widely? And why haven’t libraries, defenders of privacy, been at the forefront? It’s complicated.
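
For the curious, the short Python sketch below uses the standard ssl and socket modules to show both halves of that sentence: the client verifies the server’s certificate against trusted authorities and the expected hostname, and everything sent afterward travels encrypted. The hostname is just a stand-in; point it at any https site.

```python
import socket
import ssl

HOST = "www.example.org"  # stand-in hostname; any https site works

# A default context verifies the server certificate against trusted CAs
# and checks that it was issued for the hostname we asked for.
context = ssl.create_default_context()

with socket.create_connection((HOST, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()
        print("Negotiated:", tls.version())   # e.g. TLSv1.2 or TLSv1.3
        print("Issued to:", dict(x[0] for x in cert["subject"]))
        print("Issued by:", dict(x[0] for x in cert["issuer"]))
        # Everything sent over `tls` from here on is encrypted in transit.
```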

In order to be an https site, you need a certificate, issued by a trusted certificate authority, that verifies your site’s identity to connecting browsers. Until recently, getting this certificate was an expensive and somewhat bureaucratic process, one that needed to be repeated at some regular interval. In addition, sites going to https needed to go all in. If you use https for your main webserver but pull images from a different server that isn’t https, you’d get those “mixed content” warning messages that no one likes to see.
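
Mixed content is easy to spot mechanically: the page arrives over https, but something inside it (an image, a script, a stylesheet) is still requested over plain http. The rough, hypothetical checker below just scans a page’s HTML for http:// references in src and href attributes; a browser’s own warning is far more thorough, but this conveys the idea.

```python
import re
import urllib.request

def mixed_content_urls(page_url):
    """Return plain-http URLs referenced in src/href attributes of an https page (rough check)."""
    html = urllib.request.urlopen(page_url).read().decode("utf-8", errors="replace")
    # Capture src= and href= values that start with http:// rather than https://
    return re.findall(r'(?:src|href)=["\'](http://[^"\']+)', html)

# Hypothetical usage: any hits here are the resources that would trigger
# a browser's "mixed content" warning on an https page.
for url in mixed_content_urls("https://www.example.org/"):
    print("insecure resource:", url)
```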

Given the threats we’re hearing about so frequently, is anyone doing anything about it? Is there anything you can do to help safeguard the privacy of your library’s users? Browser makers are stepping up to the plate, announcing plans to favor secure connections by doing such things as more prominently displaying insecure content warnings; Google has announced plans to begin favoring secure sites over insecure sites in search rankings. Ranking is only one of many influencing factors, but the change signals an intent to move the dial.



Jeff Wisniewski is web services librarian, University Library System, University of Pittsburgh.

 

Comments? Contact the editors at editors@onlinesearcher.net
