Computers in Libraries
Vol. 27 No. 4 — Apr. 2007
FEATURE
Super Standards and Substandard Standards
by Daniel Chudnov

Last month I wrote about the problems with managing instant access to diverse files and file formats, and mentioned a few standards being used to improve file “packaging.” It’s a huge problem, and one to keep your eye on, because it will become more important the older our data gets. Mixed up in all of this is the story of standards, and much like with dental floss and automated off-site backups, thinking about and using standards isn’t exactly a lot of fun. Sometimes you might want to kick yourself for not using standards, and other times you’ll sure be glad you did.

Standards and their applications are all around us, more than you might think. In libraries, we have standards for metadata, search protocols, invoicing and payments, and resource sharing, and that’s just the tip of the iceberg. These days we don’t often think about the library hand, ALA forms for interlibrary loan, or the ALA character set. Below the surface of these standards past and present, and a dozen other categories of standards applicable to libraries, are hundreds of other standards: the shape of an electrical plug and the current flowing through the wires, the amount of water used to flush a toilet and the shape and size of sewage pipe fittings, and building codes for safety and efficiency are all critical to constructing and maintaining our library facilities.

Look at any computer and you will notice hardware standards for connecting peripheral devices, the layout of the keyboard, and the aspect ratio of the screen. Operating systems use standards to regulate communication between system components, programs and the kernel, and devices on the network. Applications make use of all these and use their own standards for instant messaging, typefaces, and file names. If you really want to blow your mind, think what life would be like without standards for time, weights and measures, currency, and human language. But let’s not blow our minds just yet.

Lessons Learned from HL7

Let’s stay focused on the here, the now, and the libraries in computers to come. It’s an obvious question to ask: What standards would we need to use to integrate diverse items from an array of users and their machines into an instantaneous library? There are plenty to choose from, and there’s always the option of writing our own. How do we choose from among the most hyped new specs and the venerable stalwarts?

In the mid-1990s, I was involved with the development of a widely used healthcare data exchange standard called HL7 (Health Level Seven). My time working on HL7 was brief, but I learned a lot about how standards work and how they change over time. At my first HL7 meeting, I found that there was a group of participants interested in using SGML as a document and data standard for HL7, which until then had used its own unique syntax or “wire format” (the format of data sent over the wire). We agreed that there was enough interest in moving forward with this to meet again.
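To give a feel for the difference, here is roughly what an HL7 version 2 message looked like on the wire. (This fragment is purely illustrative; the application names and patient data are invented.) Each line is a segment, and every field’s meaning depends on its position between the pipes:

    MSH|^~\&|LABSYS|GENHOSP|EMR|GENHOSP|200703011200||ORU^R01|MSG0001|P|2.3
    PID|1||123456^^^GENHOSP||DOE^JANE||19700101|F

The format is compact, but nothing in it is self-describing, which is part of what made a markup language like SGML, and later XML, so attractive to our group.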

At the next HL7 meeting, my second, Tim Bray gave a keynote on the new work he was leading to define the XML standard. I’m sure that this was the first time that most of us in attendance had heard of XML, but those of us already gathered to talk about SGML knew that XML would be big. Within a few years, and after a lot of hard work by many capable people, HL7 had changed to make extensive use of XML.

I took several lessons from this experience. One was that at the very moment a group formed to look at using SGML, XML appeared and made our job easier. We knew SGML was difficult to understand, and this made it a hard sell. But we knew that SGML had great potential, and among the group assembled were people with wide-ranging experiences and skills who had reached a similar conclusion. The people behind XML knew SGML’s potential well and knew that for people like us to realize that potential, we needed a newer, better standard that offered the benefits of SGML but in a substantially easier package. They knew this because they had experience with SGML themselves, and they had seen groups like ours form in other industries, only to struggle to move forward. XML removed those barriers to acceptance and quickly became hugely successful in many industries in addition to healthcare.

Another lesson was that the people wanting to use SGML in HL7 knew what they wanted from SGML but were ready to abandon it when they saw that XML offered the same benefits for less work. As far as I know, nobody ever submitted a formal proposal to use SGML in HL7. The XML proposals went through many revisions and debates like any other proposal would, but since XML was quickly gaining adoption in other industries, the debates and revisions centered on making XML work for HL7, not on whether XML would work at all. If the people in the original SGML group had dogmatically stuck to their SGML guns, they might never have gotten anywhere. The last little lesson, then, is that when heavyweights in a bigger industry than yours line up behind a newer, better way to do something you want to do, it’s a good idea to take a hard look at whether their newer way works for you.

Three Questions for Evaluating Standards Today

Based on these experiences, and having worked with the output of other standards initiatives during my career, I’ve narrowed my process for evaluating standards down to three questions. These aren’t all of the questions you need to ask, and they won’t ever tell you the whole story. But if you’re not sure which way to go on picking one standard from another, or whether to use one at all, maybe these will help.

1. Is it simpler? Standards aren’t just buzzwords for glossy product brochures and resumes. Standards have to solve some problem to be successful, and the best standards I know solve problems in a way that makes things easier for all parties involved. It isn’t enough to say “electrical plugs should have three prongs, one of which is for grounding,” and then define a mechanism for choosing the shape, size, and relative position of the three prongs. A better standard defines all of this, giving manufacturers exact details about how to configure the prongs on their devices or the holes in plug receptacles.

It doesn’t necessarily matter if one standard is “more correct” or “more complete” than another. Among competing standards, the one that simplifies adoption and implementation the most is usually a good choice. In many ways, this might be the reason that SRU is replacing Z39.50, and why OpenSearch is already more widely implemented than either Z39.50 or SRU. Additionally, the easier a standard is to read and understand, and, frankly, the shorter it is, the more likely it is to be considered.
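You can see the difference in effort right in the requests themselves. Both of the queries below (example.org and the search terms are placeholders) can be typed into an ordinary browser, while a Z39.50 search first requires a binary protocol session and, usually, a dedicated client library:

    An SRU searchRetrieve request:
    http://example.org/sru?operation=searchRetrieve&version=1.1&query=dinosaur&maximumRecords=10

    An OpenSearch query, filled in from the template http://example.org/search?q={searchTerms}:
    http://example.org/search?q=dinosaur

OpenSearch asks even less of an implementer than SRU does, which goes a long way toward explaining its head start.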

2. Do people need to develop and adopt a new category of standards and products? Upon thinking up a new library service, it can be very tempting to want to write a standard for its implementation, hoping other librarians will want to use it. In many cases, doing so can be a generous offering to the community. In an area where a service didn’t even exist previously, you can offer a specification and a working example to demonstrate its use. But it’s rare to get everything right the first time, or even the second. One reaction to this dilemma is to start a new standards committee through an existing standards body, or even just on your own, and to develop and promulgate a standard through this new group. Often, though, this can happen too quickly, before the nature of the service and the issues it addresses are understood well, resulting in a standard built on the wrong assumptions, or one optimized for too general or too specific a context.

This is particularly tricky to judge. Just how long it takes for a new service to be widely understood or for its context to be properly defined is hard to know. On the other hand, if you already know standards well, and have a strong mental toolkit for evaluating new efforts and how they compare to older standards, you can always ask yourself: “Do we really need a new standard here?” Sometimes, an ideal solution can be to recontextualize the new service as a variation on older practices. This way, maybe you can find an already widely used standard to solve part or even all of the new problem.

One great example of this is the GData API for integrating Google data and services with other Web sites (http://code.google.com/apis/gdata). Instead of starting a whole new specification from scratch, Google engineers chose to build a new protocol around the Atom Publishing Protocol. Because of this, any developer who already knows Atom and wants to use GData has less to learn and less code to write. For the same reason, it can be easier to adopt an XML-based specification than to adopt a spec with its own wire format and syntax, and it can be easier to develop a network protocol based on HTTP than to develop a complete new protocol from scratch. As a software developer, it is reassuring to read a new specification and find that most of it is based on standards you already know. This indicates that the authors of the new standard have thought very carefully about how to minimize the additional scope and requirements for implementing the new service by piggybacking on older services.
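To make the piggybacking concrete, here is a minimal Atom entry of the kind that sits at the heart of a GData feed (the identifiers and text are made up). A developer who already parses Atom can parse this; only the GData extensions are new:

    <entry xmlns="http://www.w3.org/2005/Atom">
      <id>http://example.org/feeds/entries/1</id>
      <title>An example entry</title>
      <updated>2007-04-01T12:00:00Z</updated>
      <author><name>A. Librarian</name></author>
      <content type="text">Plain Atom, reused rather than reinvented.</content>
    </entry>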

3. Does it “just work”? Arthur C. Clarke wrote that “Any sufficiently advanced technology is indistinguishable from magic.” To me, since it has been integrated into the latest revisions of Firefox and Internet Explorer, OpenSearch seems magical. The Web and email still seem magical at times (when they’re not filled with flashing ads and spam). Sharing music over iTunes is magical, and so is the ease with which even somebody like me can quickly have a new podcast about library geeks featured in the iTunes Music Store. Behind these applications are great standards: HTTP, XML, RSS/Atom, email and MIME, and ZeroConf. We know these are great standards because they are so widely used we don’t even have to think about them anymore. Part of the reason each has succeeded so wildly is that feeling of invisibility—the Web page just loads, or the email just shows up, or you’re sitting in an airport lounge and find interesting music on a stranger’s laptop and start listening with just two clicks. It seems to just work, and that scores high marks for the standards that support these uses.
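Part of what makes OpenSearch feel so effortless is how little there is to it. A complete description document is short enough to quote in full (the catalog name and URL here are made up), and pointing a browser at one is all it takes for Firefox 2 or Internet Explorer 7 to add your search to its built-in search box:

    <OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
      <ShortName>Example Catalog</ShortName>
      <Description>Search the example.org library catalog</Description>
      <Url type="text/html" template="http://example.org/search?q={searchTerms}"/>
    </OpenSearchDescription>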

That’s not to say that these standards didn’t have their own stumbles out of the gate, but mistakes got fixed, their adoption took off, and now their applications flow like water. When I look at badly designed displays of MARC data in Web catalogs, or when I look at wordy OpenURL resolver user interfaces, I don’t get that magical feeling. That’s not to say that we can’t make improvements, but rather it’s an indicator that these standards and applications remain insufficiently advanced.

Standards Aren’t Magic

Standards alone don’t make the magic; a lot of hard work goes into achieving real usability. The best standards allow us as librarians and software developers to focus our attention on getting usability right, instead of fretting over standards compliance and losing time to developing software to support new protocols when old protocols would do fine. If we’re going to improve our libraries enough to make them seem like magic, we’d do well to look for opportunities to simplify library standards.

So the next time you’re evaluating tools and standards, just ask yourself: “Is it easier, and does it work with what I already know?” Ultimately, if the answer to the big question—“Can this help me make my library work like magic?”—isn’t yes, you’ve got more work to do.


Daniel Chudnov is a librarian working as an information technology specialist in the Office of Strategic Initiatives at the Library of Congress, and a frequent speaker, writer, and consultant in the area of software and service innovation in libraries. Previously, he worked on the DSpace project at MIT Libraries and the jake metadata service at the Yale Medical Library. His email address is daniel.chudnov@gmail.com, and his blog is at http://onebiglibrary.net.
