Searcher
Vol. 8, No. 3 • March 2000
• FEATURE •
Oh No! I Can’t Get on the Web:
Offline Strategies for Internet Content Presentations
by Steven J. Bell, Director
Paul J. Gutman Library, Philadelphia University

As part of the organizing team for a recent local library conference, I was asked to work with the speakers in arranging and coordinating their presentations. When I asked one of the speakers if she planned to make use of the Internet for a live demonstration of Web sites, she replied, “No, I don’t need to take that kind of risk.”

It seems ironic in this day and age that something as ubiquitous as network connectivity and Internet access, something most of us take for granted on a daily basis, is approached with such caution by those giving presentations and instructional programs. Apparently, most information professionals would rather invest the bulk of their savings in a new Internet stock offering than give a presentation that depended on a live Internet demonstration. Why, when a “live demo” can so improve a presentation, do even experienced speakers consider it such a risky proposition, one that technologically savvy information professionals will avoid at all costs?

Live connections to online resources allow instructors to create a more dynamic learning environment. Instead of simply explaining the mechanics of using a resource, instructors can let participants see actual searching and its results — both good and bad. Live demos also provide opportunities for patrons to pose “what if” search questions and get immediate responses. And live demos are so easy to do these days! Classrooms and conference rooms frequently come equipped with data connections and the computers and projectors needed to support live demonstrations. The portability of both computers and projectors makes it far simpler to “take a show on the road” to any room with a data connection. The use of presentation software is almost universal, as more and more speakers become comfortable with computers and projection systems and confident that the equipment will work. Those same speakers, though, still often hesitate to go “live.”

An aversion to live Internet demonstrations is hardly irrational. While Internet and network connectivity is certainly more reliable than ever, system failures are not a thing of the past. The networks at most campuses and companies can still break down at any of several points. The campus network can go down, the institution’s or company’s Internet Service Provider can experience a problem, the telecommunications company may have upgrades underway that cause network problems, the particular site or database system needed isn’t available, or the entire Internet just doesn’t work precisely when you need it the most. Even if everything works, who hasn’t struggled to keep an attention-span-challenged audience entertained while waiting a minute or more for a site to load? The likelihood that one or several of these failures will occur is directly proportional to the degree that you are absolutely dependent on having a reliable connection (Bell’s Law). In reality, most of us would rarely experience these kinds of failures. Still, who wants to risk being the victim of a failed presentation at any time? As a proponent of live demonstrations, I always plan to use them and encourage others to do so. The issue then becomes one of risk management.

Preparing for presentation disasters is nothing new for information professionals. As presenters migrated from overhead transparencies to computer-based presenting, the smart ones hung on to their transparencies just in case something went wrong. Owing to software version inconsistencies, hardware malfunctions, and other inexplicable but ever-present presentation gremlins, preparing a set of one’s presentation slides as transparencies made for good insurance. As demonstrations of Web sites have become more central to presentations, however, transparencies clearly do not work well as a backup technique. Given the sheer number of sites presenters can cover, it is impractical to create large numbers of transparencies. And the inability of transparencies, even good color ones, to capture the graphic detail of Web pages, or the essence of browsers and hypertext linking, makes them inadequate backups for Web site presentations.

How should you protect yourself against those rare contingencies when the network is down, slow, or just plain uncooperative at your site or elsewhere? The options for responding to these events are growing, and the software for creating and delivering what is referred to as “offline browsing” offers some interesting possibilities for training and instruction. Speakers should view the alternatives for creating offline browsing presentations as a spectrum in which choices range from simple, fast, and crude to time-consuming, elegant, and multi-functional. The offline option you select from this spectrum should depend on factors such as the time available to prepare an offline backup, the importance of the presentation, whether the presentation is a one-time event or a regularly scheduled program, the technology available at the presentation site, and whether the presenter will want to develop the backup into an offline tutorial. Whatever you choose, as an information professional, you can confidently prepare and conduct live Internet demonstrations that can hold up even if the network goes down.
 

Different Strokes
The phrase “offline browser” conjures up images of several different kinds of software and approaches to achieving a similar task. A quick Dialog search of several computer and library literature databases on the phrase (properly truncated, of course) returned many product reviews. While most of these focused on the most commonly recognized type of offline browser — software that “grabs” individual pages or entire Web sites — others discussed the offline browsing capabilities of Internet browsers themselves, and still others identified Web portals, such as PointCast, as offline browsers. All of these articles discussed some form of software that allows the user to download one or more Web sites, or one or more pages from those sites, for later offline use. That is the basic functionality of offline browsers. However, considerable diversity exists in how the different offline browsers collect and store Web pages, the features available to organize or search downloaded Web sites, and the customization available for reuse of captured pages in a presentation.

Two articles demonstrate this point. I highly recommend Karl Beiser’s article as an overview of offline browsing [“Getting ‘Grabby,’” Online, vol. 21, No. 2, March/April 1997, pp. 20-27]. Beiser identifies eight reasons why librarians would want to capture and view Web content offline. He also focuses on one specific offline browser, WebWhacker, reviewing its functionality and operation in more detail; the article provides an excellent description of what a full-featured offline browser can do. Beiser even demonstrates some accuracy as a futurist in predicting that the functionality of WebWhacker might appear in future Web browsers. Gary Price, in the article “Opera — The Little Browser That Can” [Online, vol. 22, No. 3, May/June 1998, pp. 34-36], reviews the lesser-known browser Opera. He notes that one of Opera’s assets is its special feature for caching images, text, and other page features. These capabilities, Price claimed, make Opera an excellent option for offline browsing. As Beiser predicted, current browser versions have begun offering features to facilitate their use as offline browsers. And then there’s my own piece advocating the importance of live demonstrations [“Using the ‘Live Demo’ in Online Instruction,” Online, vol. 14, No. 3, May 1990, pp. 38-42].

While at least two of Beiser’s eight reasons to use offline browsers related to instruction or content presentation (a nice term for what most speakers do with Internet pages), the application of offline browsers to presentations was not the focus of his article. Nor has it been the focus of any other article about offline browsers. This article emphasizes the offline browser as a presenter’s tool for two reasons. First, not all presentations are the same. Depending on the intended audience, the time allotted to prepare for and deliver the presentation, and the availability of presentation technology at the presentation site, a well-prepared presenter will want to carefully examine the options for balancing a live Internet demonstration with an offline backup. Second, new software packages have become available that serve training or presentation purposes but can also function as offline browsers and more.

An ideal offline browser would offer the following features, each of which appears in at least one of the products discussed in this article:

• capture of a single page, selected pages, or an entire Web site
• control over the depth of links captured, the total number of pages, and whether the capture stays within a single domain
• the option to include or exclude graphics and multimedia files
• hot links when the captured pages are viewed offline
• tools for organizing the captured pages, such as a hierarchical view or a table of contents
• synchronization of captured pages with their live counterparts
• portable output, small enough for a diskette and viewable in any browser
• the ability to annotate or edit captured pages into a self-guided tutorial

None of the software options discussed in this article offers all these features. That means those who need offline browsers must understand the operational capabilities of these utilities in order to select the appropriate one from the toolbox to match the requirements of a specific presentation. One type of offline browser may work best at downloading a few pages for a quick-and-dirty presentation, while a sophisticated presentation that will also serve as a tutorial may call for a more full-featured software package.
 

Simple Measures
Attendees at searcher conferences that took place around the time when the Web first became popular, circa 1995, may recall two simple, early methods used to create offline browsing backups. The reason for these backups was not the unreliability of the Web, but rather the inability to access the Internet at all. Conference facilities could not yet offer reliable, fast Internet connectivity. If a presenter wanted to show some Web sites, offline was the only way to go.

One of those methods was to download individual Web screens to a diskette or computer hard drive. Presenters using this method needed to open each Web screen individually as a separate file. While certainly cumbersome, this approach did allow the presenter to show the Web sites in a browser window. The process was slow and the links didn’t work, but if the audience used their imaginations, they could almost feel Net-borne. Presenters more attached to their presentation software discovered another method for creating a similar illusion. Using auxiliary software such as SnagIt and HyperSnap to capture entire screen images or portions of them, they could cut and paste Web images into presentation software slides. This more time-consuming process might make the transitions between screens smoother, but it could not replicate the look and feel of a real browser.

Of these two methods, the former has become dominant. Downloading Web pages for offline display has become easier because browsers can now handle the process, as can other software products designed for other functions. Any browser can capture a Web page using the “save as” function. While an acceptable method for handling a small number of screens, this approach soon becomes cumbersome. First, to view each screen, the speaker must open the corresponding files individually, making for an awkward presentation. Second, each downloaded screen creates an HTML file and an accompanying folder that contains the graphics needed to properly display the page. Should you neglect to transfer both the file and the accompanying folder to a diskette for later use during a presentation, the page will fail to display properly. Third, the links on the pages stay “cold” when saved as individual files. Finally, this method often cannot accommodate large numbers of pages, as the combined size of the files quickly exceeds the capacity of a standard 3.5-inch diskette.
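For the curious, here is a minimal sketch, in present-day Python rather than anything a browser of this era actually runs, of what the “save as” approach amounts to: fetch one page, store its HTML, and pull its images into an accompanying folder. The URL and the save_page name are illustrative only.

    import os
    import re
    import urllib.request
    from urllib.parse import urljoin

    def save_page(url, name="captured_page"):
        # Fetch the page itself and keep its HTML.
        html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
        folder = name + "_files"
        os.makedirs(folder, exist_ok=True)
        # Copy each <img src="..."> into the accompanying folder so the page
        # displays properly offline; lose the folder and the graphics vanish.
        for i, src in enumerate(re.findall(r'<img[^>]+src="([^"]+)"', html)):
            local = os.path.join(folder, "img%d" % i)
            try:
                urllib.request.urlretrieve(urljoin(url, src), local)
                html = html.replace(src, local)  # repoint the page at the local copy
            except OSError:
                pass  # a missing graphic should not abort the capture
        with open(name + ".html", "w", encoding="utf-8") as f:
            f.write(html)  # links stay "cold": no linked page was fetched

    save_page("http://www.philau.edu/library/")  # illustrative URL

Because nothing beyond the single page is fetched, the sketch makes plain why the saved links stay cold and why the file and its folder must travel together.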
 

Browser-Based Solutions
Microsoft’s Internet Explorer 5 (IE5) offers a good example of a browser’s ability to facilitate offline browsing. Though probably not intended for presenters, the offline browsing function incorporated into IE5 makes the downloading process more efficient than the “save as” method. The feature is also available in IE4, and the steps listed here should work there, but IE5 does simplify the procedure. Begin by adding the desired Web page to the Favorites list (IE’s terminology for bookmarking a site). Check the box to make the item available offline (Figure 1). I recommend creating a folder labeled “offline browsing” to store all captured pages (Figure 2). The download can be customized in two ways. First, the user can specify whether or not to include the links on the page being downloaded. That is, should the links on the captured page be active (“hot”) or non-working (“cold”)? Presenters who show a page without needing to follow its links can save disk space and download time by capturing just the single page with cold links. If all the links need to be active offline, choose the option to save the links (Figure 3). For example, I downloaded a page with 17 links on it and wanted those links hot; the download took several minutes. Be careful about the depth of levels selected: you may end up capturing not only all the links on the downloaded page but also all the pages to which those linked pages link. Second, during this process the user is also prompted about “synchronization.” Select this option when you want the downloaded page updated whenever the actual page changes. This can be a convenient feature when you plan to use pages for an extended period of time.
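The arithmetic behind that caution is simple: a two-level capture fetches the page plus every page it links to, and each additional level multiplies the count by the typical number of links per page. As a rough planning aid — a hypothetical helper in Python, not anything built into IE — a few lines of code can count a page’s links before you commit to a deep capture:

    import re
    import urllib.request

    def count_links(url):
        # Count distinct link targets on one page: an estimate of how many
        # additional pages a capture one level deeper would pull down.
        html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
        return len(set(re.findall(r'<a[^>]+href="([^"#]+)"', html)))

    n = count_links("http://www.philau.edu/library/")  # illustrative URL
    print("A two-level capture fetches this page plus up to %d linked pages." % n)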

The IE5 documentation refers to the capturing process as “using the Internet with your laptop.” The key word here is “laptop,” because this option doesn’t put captured Web pages on a diskette. If you attempt to export the contents of the offline browsing folder to a diskette, it will transfer only links to the pages, none of the actual content. Once the user captures the desired pages, they can be viewed only by going to the File menu of IE5 and selecting “work offline”; only when the browser is set to work in offline mode can you view the pages captured in the offline browsing folder. All the links will be hot, assuming the pages to which the links point were also captured. This will help a presenter survive if the network fails. The dynamics of a live demonstration, particularly the ability to answer “what if” questions, will certainly be lost, however.

Netscape offers a similar capability. If you strongly prefer Netscape as your browser, you can explore its offline browsing feature, which works much like the process described above. One notable difference: the offline browsing feature is part of Netcaster, not Netscape Navigator. This may add an extra step to the process.
 

PDF
Producers of other kinds of software have begun noticing the value of offline browsing. At least one popular software package has added this feature, though it is conspicuously unrelated to the software’s primary functionality. Adobe Acrobat 4, released in spring/summer 1999, offers an easy-to-use capture function of near-Web-grabber quality. Though a bit confusing at first, the process is easily mastered. The end product is a PDF version of a series of Web pages, or an entire Web site, that you can view entirely offline. This new feature is called “Web capture,” and it works only with Internet Explorer. To begin the process, click on the “open Web page” button. This brings up a dialog box in which to enter the desired URL (Figure 4). At this point you can choose to download a single page or an entire Web site. The choice depends on how many links will be demonstrated during the presentation. As with any grabber, the more levels you capture, the longer the download will take and the larger the resulting saved file will be.

Once you begin the process, Acrobat will go out on the Web and grab the appropriate pages and links. When it finishes, the software displays the captured page in the Acrobat window. Next to the image, in a left frame, a bookmark feature will appear (Figure 5). This will facilitate an offline demonstration. The bookmark lists all the captured links in index format, making it easy to jump to any captured link. You can also demonstrate a Web site by clicking on the actual links; if you have downloaded multiple levels, the links will work. As with any PDF file, in order to view the downloaded pages offline, you must have a working copy of the Adobe Reader.

In another nice feature, once you download the first Web page, you can download any additional linked page just by clicking on the link. Therefore, if you know exactly what your Web presentation will involve, you can easily download all the required pages just by working through the demonstration. Each newly captured page is added to the bookmark frame. For many users who don’t need a sophisticated Web grabber, Adobe’s Web capture may be entirely sufficient. To get a better idea of how the downloaded pages look in the Adobe Reader, along with the bookmark, point your browser to http://staff.philau.edu/bells/webpresent.html. There you will find a sample of an Adobe PDF file captured using the process described above.
 

Screen Capture
Screen capture software can also function as an offline browser, though that is not its main purpose. Such programs have circulated for quite a few years. They work well for snapping entire screens or screen regions for inclusion in PowerPoint slides, as graphics for Web screens, or as images integrated into documentation. Many Web sites that instruct users on how to conduct an online search or install software use screen capture software to show the actual dialog boxes and screens the users must learn.

SnagIt from TechSmith is one well-known screen capture package that has expanded its functionality. It demonstrates particular versatility in capturing screen regions: the user can identify a specific portion of a screen to capture or can set SnagIt for full-window capture. When viewed offline, the links on the captured screens are inactive. SnagIt makes no claims that it can serve as a full-featured offline browser. Because saved images must be retrieved individually, using SnagIt for offline browsing would differ little from the old “file—save as” method, apart from the added utility of region capture. Anyone still using this type of image or screen capture software for offline browsing needs to explore other options.
 

Dedicated to Grabbing
Now let’s look at the software dedicated to offline browsing: packages that are not browsers, are not designed for other functions, and whose sole or primary function is the capture of Web pages and/or sites for offline browsing. These packages, of which WebWhacker is probably the best known, have been around for two to three years. Other Web grabbers include NetAttaché Pro, WebZIP, WebSnake, and Web Buddy. All perform the basic download operation, but they offer supplementary features that make them somewhat preferable to IE5 or Acrobat.

Individuals who plan on doing extensive offline browsing or want to capture large sites will probably want to obtain a dedicated grabber. For one thing, grabbers do a superior job of organizing downloaded Web pages. For example, WebWhacker provides a useful hierarchical view of the captured site, making it easy to digest the organization of the Web site and the paths between the pages. Finding a suitable offline browser from among the Web grabbers is a manageable task. Software reviews provide insight into product features, and many producers offer Web sites that allow interested individuals to acquire a trial version of the software.

Those with only occasional need for a Web grabber might consider freeware. One piece of software gaining some popularity is SiteSnagger. Available as a free utility from ZDNet, it was first publicized in the February 24, 1998 issue of PC Magazine. [Information about SiteSnagger, and a link to download the software, appears at http://www.zdnet.com/pcmag/pctech/content/17/04/ut1704.001.html.] SiteSnagger can download as much or as little of any Web site as a user wants to a computer’s hard disk for offline browsing. SiteSnagger organizes the site information into a tree display of the pages and multimedia files snagged, and a table of contents page provides links to each of the downloaded pages.

Getting started with SiteSnagger is fairly easy. From the initial screen, open a new project. This displays a dialog box where you enter the name of the project; think of a “project” as the file where you will store the downloaded Web pages (Figure 6). Project files are generated automatically in a subdirectory called “projects” under the directory that holds the SiteSnagger software. In this example, I created a project called “library Web site.” This becomes the name of the folder within the projects subdirectory and will hold all of the HTML pages and multimedia files snagged.

Like most offline browsers, SiteSnagger allows the user to control the number of levels captured. If the “max levels” option (found under the Project menu) is set to one, SiteSnagger will download only the page specified and none of its links; this provides just cold links. If max levels is set to two, it will capture that page plus all the pages to which that page links (Figure 7). SiteSnagger can download up to 20 levels. Watch out! The author of SiteSnagger indicated that if PC Magazine’s Web site were downloaded at only three levels, over 1,000 files would be captured. Good offline browsers offer additional controls to help the user contain the size of captured sites. SiteSnagger offers control mechanisms to limit both the maximum number of pages captured and the capture of multimedia files. If the user set max levels at two but didn’t want more than 50 pages, SiteSnagger would stop capturing at 50, regardless of whether it had completely drilled down through all the level-two pages.

If you only want to capture links within a domain (e.g., you want your link to LEXIS-NEXIS Academic Universe, but not to the advertiser on the same page), SiteSnagger’s default setting will keep capture within a domain. Should you desire to capture links in other domains, use the “follow offsite links” option. Finally, SiteSnagger’s “get multimedia files” option lets you decide whether or not to capture a page’s associated multimedia files. Snagging the files will improve the appearance of captured pages, but it will dramatically increase the time required for the software to snag a site. If your only interest in a Web site is its textual content, save lots of download time by leaving this option unchecked. For example, when I downloaded my library’s Web site at two levels with no multimedia files, the downloaded file size was 11K. When I repeated the process with multimedia files, the file size increased to 68K. That’s not a large file, but it demonstrates how rapidly files grow when graphics are incorporated. Still, most of us will want them.
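To make the interplay of these controls concrete, here is a conceptual sketch in Python of a SiteSnagger-style capture — not the product’s own code, just an illustration of max levels, a page cap, the same-domain default, and what leaving “get multimedia files” unchecked amounts to (only the HTML is fetched). Like SiteSnagger, it finishes by writing a simple table-of-contents page. The snag function, its parameters, and the URL are all illustrative.

    import os
    import re
    import urllib.request
    from urllib.parse import urljoin, urlparse

    def snag(start_url, project="project", max_levels=2, max_pages=50):
        os.makedirs(project, exist_ok=True)
        domain = urlparse(start_url).netloc
        queue = [(start_url, 1)]                 # (page, level), start at level one
        seen = {start_url}
        saved = []
        while queue and len(saved) < max_pages:  # the page cap always wins
            url, level = queue.pop(0)
            try:
                html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
            except OSError:
                continue
            fname = "page%d.html" % len(saved)
            with open(os.path.join(project, fname), "w", encoding="utf-8") as f:
                f.write(html)                    # HTML only: no multimedia fetched
            saved.append((url, fname))
            if level >= max_levels:              # don't follow links any deeper
                continue
            for href in re.findall(r'<a[^>]+href="([^"#]+)"', html):
                link = urljoin(url, href)
                if urlparse(link).netloc != domain:
                    continue                     # default: stay within the domain
                if link not in seen:
                    seen.add(link)
                    queue.append((link, level + 1))
        # A table-of-contents page linking to every captured page.
        with open(os.path.join(project, "contents.html"), "w", encoding="utf-8") as f:
            for url, fname in saved:
                f.write('<p><a href="%s">%s</a></p>\n' % (fname, url))

    snag("http://www.philau.edu/library/", project="library_web_site")

Setting max_levels to one captures just the starting page with cold links, matching the behavior described above, while the max_pages cap stops the capture at 50 pages regardless of how many level-two pages remain.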

I found the actual viewing of captured sites the most confusing aspect of SiteSnagger. After the download, a project is opened and a dialog box shows the named file with a plus sign (+) to the left. You must click on the plus sign to get to the table of contents, and only by double-clicking on the table of contents link will you get to a page that shows all the downloaded pages (Figure 8). To bring up any page offline, just click on its link and the page will appear in your PC’s default browser window. I often capture Web pages on my desktop PC but want to show them on my notebook or another PC. I found I could not capture files directly to a floppy diskette; rather, I had to copy the downloads from my hard drive over to a diskette. That is not a difficult operation, but it adds one more step to the whole process. If properly limited, even large Web sites can easily fit on a standard 3.5-inch diskette. When retrieving from a floppy, remember that each download operation creates the project folder containing the pages, plus an SNG file. To get the captured Web pages to open properly, you must double-click on the SNG file. This procedure will seem awkward at first, but a few practice sessions will quickly help you learn the technique.
 

More Than Web Site Capture
If capturing Web pages or whole sites enables one to demonstrate the Web in a controlled offline environment, presentations get even better when you can turn the captured pages into an interactive teaching tool. Some presenters and trainers want to take offline browsing a step further by integrating the captured Web pages with additional search guidance and educational information. The software options discussed up to this point do not allow manipulation of the captured pages for these purposes. A different type of software, designed more for offline Web training than straight page capture, can accomplish both tasks. As you might expect, these other software packages are not as adept at capturing and organizing entire Web sites, nor do they allow features such as synchronization. These packages can help you to effectively develop self-paced, self-guided instructional tutorials for use as a presentation backup or as a stand-alone learning resource.

I first realized the need for this type of product during presentations when I found myself frequently waiting for Web pages to download while audiences drifted off. I wanted to retrieve and navigate pages rapidly. My staff and I also realized that a downloaded presentation could standardize our presentations so that we would all show the same basic content. I considered two pieces of software for developing the downloaded presentations: Lotus Screencam and Catch-The-Web. The former is better known, so I will cover it only briefly.

Screencam behaves like a video recorder that captures everything that appears on your computer monitor. The resulting product plays back on your monitor like a VCR tape. To create a Screencam presentation, a trainer would go through all the component parts of a presentation while saving them as a Screencam file. I found this a bit unsettling, because I felt I needed a well-developed script that would take me through the steps to capture the presentation exactly as I wanted to play it back. There is some pressure to get it all right the first time, which reduces the possibility of making spontaneous changes.

Screencam presentations comprise files called “segments.” A segment might consist of a LEXIS-NEXIS search or a demonstration of Northern Light. You can then join segments together to create a presentation. While you can delete or replace entire segments within a presentation, you can’t go back and edit just a single part of a segment; the entire segment has to be re-recorded. For example, if the LEXIS-NEXIS segment was completed prior to the addition of a new screen or feature to the system, you would have to redo the entire segment to incorporate that one new feature. Screencam is certainly easy to use: just start and stop the recording. The saved file plays back in Screencam. If you record a Web page, you will see the browser window as well as the main content. Unlike Web grabbers, Screencam does not display captured files in a browser; the files display only in the Screencam player.

Screencam can create excellent self-directed learning tools because the user can control the pace of the instruction. As with a video recording, the user can fast-forward to skip material or rewind to repeat it. However, Screencam works rather awkwardly for giving a canned, offline demonstration. It requires the instructor to continually work the start, stop, and rewind buttons, and I have found myself rewinding too quickly and then missing the point I wanted to show. You can, however, overlay the recorded online session with an audio track. This involves additional scripting, but Screencam can accommodate a vocal supplement that guides the presentation and explains what is happening on the screen. [To check out how Screencam works, go to a tutorial with a soundtrack at http://plato.acadiau.ca/sandbox/tutorial/solutions/powerpoint/homepage.html.]

Screencam has one obvious advantage: you can mount its recordings on Web sites and make them accessible to entire communities. However, only users with a version of Screencam installed can view the recordings; fortunately, interested users can download the free Screencam player helper application. Screencam recordings tend to be large files — at least several megabytes. Because the files will rarely fit onto standard 3.5-inch diskettes, Screencams are certainly less portable, though with Zip and CD-Recordable drives becoming commonplace, this may not matter much longer. Also unlike Web grabbers, the idea here isn’t to replicate entire pages or sites offline with links intact. The strength of Screencam lies in its ability to take the viewer on a guided tour of a Web site. This can be constraining if the viewer wishes to do some spontaneous self-exploration; page links are not simply cold — they don’t actually exist. Clearly, Screencam has its strengths and weaknesses. Though still well worth considering for presentations, it probably works better as a development tool for library tutorials.

Though not without flaws, Catch-The-Web (CTW) provides, in my view, the best balance among a Web grabber, an offline browser, and a tutorial development tool. CTW presentations can be saved as executable files and are small enough to fit onto standard 3.5-inch floppy diskettes without any file compression. You can view the files using any browser. Alternatively, you can save CTWs in HTML format for mounting on a Web site and easy Internet access. In either format, executable or HTML, the files deliver exactly the same presentation.

Unlike Web grabbers, CTW cannot capture an entire Web site; it can capture only one Web page at a time. Unlike Screencam, it lets you modify or replace any single screen or entire section within a CTW presentation. CTW has one odd limitation: it creates presentations only with Microsoft Internet Explorer. This pertains only to production, though; any browser will work for viewing the completed presentations. If viewed offline, the page links are cold. If viewed online, all page links are hot. This works to a presenter’s advantage. To move a presentation along, you can display all the Web pages instantly, without downloading delays. Yet if an audience member wants to see a unique search or asks a “what if” question, you can perform the search right away, as long as you have an online connection, since the links are hot. Even after doing several searches, the presenter can return to any point within the canned CTW presentation with a mouse click.

At its most basic, one can use the software to quickly create a CTW presentation consisting of a series of Web pages for an emergency backup. With more time and effort, a presenter can turn the presentation into a self-guided tutorial, complete with supplemental information notes and PowerPoint-like slides inserted throughout the presentation to provide further guidance.

Getting started with CTW is easy. You purchase and download it from the Web. Once installed, the software provides a desktop interface for building and compiling presentations. Creating a CTW is as easy as finding the right pages and clicking on them with the pushpin tool. When CTW is running, the pushpin sits in the Windows tool tray. Dragging the pushpin from the tool tray onto the desired page captures the screen: a gray box appears around the region of the target screen, and a click of the mouse completes the operation. A dialog box then appears in the browser window (Figure 9). Each captured screen can be given a unique title (a good idea). You can attach notes to the captured screen that will appear in the completed presentation as text explaining what that slide shows. Each captured Web page can be put into a “chapter,” or section of the presentation. For example, all the captured screens from a search of AltaVista could go into a chapter called “AltaVista.” If you forget to create or name a chapter, you can do it after finishing the presentation. A free utility called CTW Slider comes with CTW and allows the presenter to create and insert slides into a presentation. For example, you can insert a slide of your own explaining AltaVista before displaying all the captured AltaVista screens.

Once you have captured all the desired Web pages, you can compile them into a CTW presentation. The latest version of the software has a preview feature that shows what the presentation will look like when complete, allowing the creator to make changes before compiling. The latest version also improved the compiling process by adding a “wizard” that takes the creator through the process with a series of on-screen prompts. Mainly, the user has to decide whether to make the finished product an executable file or an HTML file. CTW automatically stores the completed file in an “out” folder. You do not have to create an extended presentation in one sitting; you can build it one chapter at a time. After each work session, simply save all Web pages captured to that point as a “work” file. That work file can be reopened and new captured pages added. Captured pages can also be dragged and dropped from any point within a presentation to a new position. Some of my CTWs have 50 or 60 individual screens in them, yet the resulting files remain a manageable size.

Viewing a completed CTW presentation is simple. If you have saved the CTW as an executable file, on a floppy diskette for example, you can load the diskette into the disk drive and then double-click the file name from within a utility such as Windows Explorer. The CTW will begin executing and, upon completion, will automatically launch the default browser (any one will work) and display the splash screen of the presentation. The CTW compiling wizard offers options for customizing the splash screen; e.g., a library could add its logo graphic (Figure 10).

This executable file approach does require a bit more effort than viewing a CTW saved as an HTML file. When it finishes compiling an HTML version, CTW stores all the HTML pages and graphics in a folder. Within that folder, there will be a single “index.htm” file. After you transfer the folder and its contents to a server, you only need a link to the index.htm file; a click on that link will take the user to the splash screen. If you prefer to stick with executable files, you can also load them onto the Web. While the files won’t execute directly over the Web, anyone can download the CTW file and view it on their own. [Two sample CTW files — a tutorial on finding and evaluating Web information and a tutorial on understanding library databases, both in HTML format — are available at http://www.philau.edu/library/tutorials.html. Readers who want to experiment with an executable version of a CTW can find one there, too.]

Navigating a CTW presentation is intuitive. Directional arrows at the top of the screen allow the viewer to move forward or backwards through the presentation one screen at a time. An index of all the screens in the presentation appears in the left frame. This allows a viewer to jump to any point within the presentation, a very desirable feature for a tutorial because it lets users complete sections as time allows and then return and resume the tutorial from where they left off. If a user prefers not to have the index taking up monitor real estate, another button (the up arrow) allows the user to eliminate the index frame altogether. [The demonstration CTWs also provide examples of how to use the CTW Slider utility to incorporate slides into a presentation.]

Lotus Screencam and CTW are not the only software packages that fall into this category. A shareware package called HyperCam works like Screencam: it can create a video file out of a series of visits to different Web sites that, when played back, simulates an actual online Web session. SnagIt, mentioned above, now offers video capture in its latest version. Costing much less than Screencam and just a bit more than HyperCam, SnagIt is easy to use and can record entire screens or specified screen regions. Like Screencam, it can record an audio track to add to the video. As with any video recording software, completed files will be large; I recorded just a few screens from my library Web site and created files that averaged about 2 megabytes. If you go this route and need portability, either capture to a laptop’s hard drive or plan to use Zip disks, SuperDisks, or CD-Recordable media. SnagIt offers one advantage in that it needs no special helper application to view the files; the completed recordings display through standard media players, such as the one packaged with Windows 98. Though the links are cold, offering no chance for search spontaneity, these recordings are worth considering as tutorials and as presentation backups that can be mounted on the Web.

Other purveyors of products or services for creating Web-based training modules exist. For example, two products, E-Show from Alive.com and WebPresenter from WebSentric, offer technologies that allow presenters to build sophisticated presentation modules directly on the Web. These products are aimed at users with little or no HTML skills who want to develop Web-based presentations. However, most of these services are considerably more costly, with functionality far afield from offline browsing. Obviously these options won’t work when the Web is down, but they do take the concept of Web-based training to a new level. Despite the variety of products on the market for Web-based presenting, I have yet to come across anything that works quite like CTW.
 
Web Sites for Products Mentioned
Adobe Acrobat
http://www.adobe.com/prodindex/acrobat/main.html
Catch-The-Web
http://www.catchtheweb.com
E-Show
http://www.yesler.com
HyperCam
http://www.hyperionics.com/www/hypercam.htm
Lotus Screencam
http://www.lotus.com/home.nsf/tabs/screencam
NetAttaché Pro
http://www.tympani.com/products/NAPro/NAPro.html
SiteSnagger
http://www.zdnet.com/pcmag/pctech/content/17/04/ut1704.001.html
SnagIt
http://www.techsmith.com/products/snagit/index.htm
WebPresenter
http://www.websentric.com
WebSnake
http://www.emailsoftware.com/web.snake.htm
WebZip
http://www.downloadshareware.com/misc/webzip.html
WebWhacker
http://www.bluesquirrel.com/products/whacker/whacker32.html
Web Buddy
http://www.dataviz.com/products/webbuddy/WB_Home.html

Conclusion
While presenters today may obsess about preparing for the eventuality of failed Internet connectivity, I cannot help but think that this is a temporary problem. It is hard to imagine a future in which Internet connectivity is anything less than completely ubiquitous, so intertwined with our routines that its reliability seems as assured as electricity and phone service do today. How many of us prepare for the possibility of having no electrical power when we give a presentation? I don’t recall seeing many presenters bringing portable generators to conferences. If Internet unreliability is just an intermediate pothole on the road to true connectivity, how do we best deal with the situation until that long-promised superhighway is truly dependable? For now, the state of the art in offline browsing is good. Depending on the situation, any presenter should be able to find an option that provides a solution.

Web browsers can serve as one mechanism to capture Web pages for offline browsing, though their functionality is somewhat limited. However, the price is certainly right if your need for backup Web pages is minimal. Since you already have the software on your desktop, learning to master it should not overwhelm you. For more sophisticated needs, e.g., downloading extensive sites and compiling the sites into a presentation using good organizing and retrieval features, you should probably invest in a dedicated Web grabber. Before making that move, however, you might want to give the freeware or shareware versions a test run. Those who want to go to the next level by combining their offline browsing tool with some form of tutorial development product should explore products like SnagIt, Screencam, and Catch-The-Web. These products don’t perform some of the functions that the mainstream Web grabbers do, but they will let you create some Web or library database tutorials that can serve as backups for library presentations, as well as providing more training resources for your constituency.

In researching this article I’ve discovered that no single software product seems to do it all. Some grab parts of or whole sites; some offer better recording and playback features; some allow you to edit the captured pages for better tutorial functionality. All the offline browsers and associated products have varying strengths and weaknesses. It would sure be nice if someone came up with a truly versatile product. The good news is that most of these software products are either free or reasonably priced ($50-$100). Conceivably, those in need of offline browsing tools can create a toolkit that includes several of these utilities.

Presenters who can anticipate and prepare for the myriad potential failures that cause presentation catastrophes are quite skilled. Many individuals are good at creating presentations that incorporate live demonstrations, but the real masters of the art are equally adept at developing offline alternatives that ensure “the show must go on.” I encourage all library and information professionals who make presentations to gain control over the risk factors involved in live demonstrations, so they can approach them with confidence rather than fear. I also encourage those reluctant to make presentations of any sort, owing to fears about technology failures, to become more confident. The technology, though sometimes difficult to deal with, is certainly manageable when one comes equipped with the right tools. Let’s all hope and work for the day when the Internet becomes totally reliable; that should diminish and even eliminate our need for offline browsers as backups for Internet content presentations. Until that happy day, I have a suggestion for those who want to raise the art of offline browsing to the highest level of achievement: how about a new journal called Offline?
 
 

Steven J. Bell is the director of the Paul Gutman Library at Philadelphia University. He writes and speaks frequently about online searching, technology topics, and academic librarianship. He received the IAC Award for Information Authorship in 1992 for his article, “Online Without the Line: Cellular Technology for Searching on the Go,” which appeared in the September 1991 issue of Online. Communications to the author should be addressed to Steven J. Bell, Director, Gutman Library, Philadelphia University, Schoolhouse Lane and Henry Avenue, Philadelphia, PA 19144; 215/951-2847; bells@philau.edu.

