For many years, catalogs have served as the gateway to library collections; our collections are inaccessible without them. This picture is rapidly changing with the current explosion of online resources. The new gateway is the World Wide Web. Should we attempt to accommodate the new resources in the old gateway? Libraries have struck various balances between what is accessed via their catalog versus their web site, but creating lists of online resources on the Web is always part of the solution. The author points out deficiencies of our catalogs that have led us to this point of creating a parallel "catalog" on our web sites. The author also offers some powerful reasons why focusing access on the Web makes sense now.
For the past 25 years, OPACs have been at the center of the library world. That era is over. Ask any patron how many times a week he uses an OPAC and how many times a week he uses a web search engine. The answer to that question should scare us.
Stuart Weibel, quoted in Chepesiuk (1999, 63)
Over the past ten years, selection and acquisition of electronic resources have gradually been brought into the mainstream of our collection development and technical services operations. The same has not happened with our catalogs. One of the primary functions of library web sites today is to provide lists of resources available in (or through) the library. Why do we do this, when we all have expensive integrated library systems, complete with web-based catalogs, whose function it is to provide access to our collections? Clearly, we create lists because we feel that those catalogs are not meeting our users' needs, but if we're ever going to improve on current practice, we need to look deeper at where we stand now.
In making these comments, I am envisioning academic libraries of all sizes. The future of the online catalog in public libraries has been actively debated in the library profession since Coffman's (1999) article "Building Earth's Largest Library" appeared last year. Our users are different, as are our collections and our missions. Some would argue that the two library communities are becoming more rather than less different with the digital revolution.
We have a public relations problem on our hands as well. Unfavorable comparisons between the Web and the library come up regularly in the news and popular press. A recent one was from U.S. District Judge Harry Hupp, ruling on the legality of deep linking between web sites: "There is no deception in what is happening. This is analogous to using a library's card index to get reference to particular items, albeit faster and more efficiently."1
Why Does the Library Catalog Look Like a Misfit on the Internet?
First, what does the library catalog do? Librarians started creating catalogs when collections got too big to remember where everything was. In other words, catalogs were designed to help people find things, usually in one physical location. So a defined collection was key. Librarians built their collections to meet the needs of their users. Not any old thing was worthy of being acquired, and not everything acquired was worthy of being included in the catalog. Thus, the catalog describes a specific set of objects of interest to the catalog's community. Each item's description was strictly controlled by cataloging standards and practices, a key component of which was a controlled vocabulary that imparted meaning, so as to improve that item's chances of being identified when needed. To be able to physically retrieve the item, we added a call number location.2
When we view our catalogs in this way, their disconnect from the online world is obvious. The Web currently lacks all of these elements, but the fact that they exist there in formative stages shows that the concepts underlying the catalog are critical ones for organizing and locating knowledge. Metadata adds description, metadata plus XML DTDs could add more meaning, and various initiatives are trying to pin down the idea of location.
At the same time, communities are emerging around web portals, chat rooms, and other telecommunications channels. The Web will eventually contain "catalogs" describing collections that are location independent and of interest to specific communities. Sometimes these communities of interest will coincide with communities of access (such as a campus), but often they will not. The bottom line is that the universe of objects on the Net is far larger, and those objects far more fluid, than anything libraries have had to deal with until now. And that is a critical difference.
The collocation function of the catalog serves to bring together works authored by a given person, with a given title, or on a given subject. With Internet resources, author and title are often ambiguous attributes of an item and sometimes not important at all. Internet searchers value collocation by subject, of course, and Yahoo's readily browsable classification system has made it the most popular site on the Web. But Yahoo only indexes what it chooses to index, by all measures a very small subset of what's available. It took four decades for the library community collectively to build OCLC WorldCat's 41 million records, during which time new information products were vast and growing but still contained by physical format, long-standing distribution systems, and so on. The Web is less than a decade old, and already AltaVista indexes more than 250 million pages. We all know the reasons why there is little comparability between the WorldCat and AltaVista databases, but those distinctions are lost on most of our users. By cautiously embracing metadata, which does not rely on a controlled vocabulary, librarians have acknowledged that substantial and radical changes in bibliographic control are inevitable. But how friendly are our catalogs to metadata records?
The Heart of the Problem: What's Not in the Catalog
The catalog is a collection of surrogates. Unfortunately, users do not want item surrogates; they want the actual information object, the real thing. As Younger (1997) pointed out, "Surrogates, which are cataloging records, furnish sufficient information so decisions can be made about relevance and usefulness without examining the document itself. Where resources can be more easily accessed and reviewed, the amount of information required in the surrogate may be less than is now recorded" (p. 472). Practically all of us have personally experienced the heady satisfaction of finding a document full-text online without having to find the book or journal and make a photocopy. That satisfaction is just as heady for users who have experienced the sneakernet method, particularly those who remember using card catalogs. Just think what it's like for undergraduates who never knew the old way. It must be like asking them to go to a special building if they want to see a television!
Long ago, the library community chose to cede bibliographic control of the journal literature to the indexing industry. The fruit of that decision is that we now purchase full-text journal aggregations from vendors, and, because they are no longer delivering just surrogates (i.e., indexes), our users are drawn to them like magnets. We are also seeing growth in the importance of valuable knowledge that falls outside the scope of both WorldCat and the indexes (and thus our catalogs): preprints, gray literature, documents connected to online courses, and so on.
The Other Big Problem: Our Standards Make the Catalog Inflexible
We must admit that the format best supported by the library catalog is the book. The bibliographic tools to control books are fully developed, MARC is an excellent way to exchange data about books, and our catalog displays are tailored to describing books. Archive, image, or audio formats are much less well supported and arguably less amenable to surrogacy. Vendors can sell us add-ons to better handle those formats, but more powerful and desirable software tools, as well as standards for information storage and display, are easily found outside the library world.
A MARC record describes a manifestation of a work that is unchanging; it does not attempt to describe the work itself. If the work changes (as serials do), then a host of problems arise, with the end result usually being that a separate record is created. Obviously, web documents change much faster than serials and don't announce themselves like a journal issue does when it arrives on your doorstep to be checked in. A person who works with relational databases would say, "Why did you create a separate record when that journal changed title? Why not just change the title associated with that entity?" As Kevil (2000) recently pointed out, because of MARC, no library catalog is a true relational database. Our catalogs violate the two major principles of a relational database: a "key" to uniquely define an item and the absence of redundant data. They do this in part because MARC is built around describing a physical manifestation of a work and not the work itself, but also because MARC is used for multiple purposes: there is a "confusion of realms between description, storage, transport and display" (Miller 2000). In a recent article on the problems with Web-based catalogs, Ortiz-Repiso and Moscoso (1999) wrote about MARC: "Its linear and rigid structure impedes the development of a structure of nodes and links inherent in the web environment" (p. 68).
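To make the relational point concrete, consider the following sketch. It is a minimal, hypothetical model (the table and column names are mine, not those of any actual integrated library system) showing how a serial title change becomes a single update to one attribute of a stable entity, rather than a wholesale re-description of the item.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One row per serial entity, identified by a stable surrogate key.
# Attributes such as the current title hang off that key rather than
# being baked into a monolithic, redundant record.
cur.execute("""
    CREATE TABLE serial (
        serial_id     INTEGER PRIMARY KEY,
        current_title TEXT NOT NULL,
        publisher     TEXT
    )
""")
# Holdings reference the entity, not a particular description of it.
cur.execute("""
    CREATE TABLE holding (
        holding_id INTEGER PRIMARY KEY,
        serial_id  INTEGER NOT NULL REFERENCES serial(serial_id),
        location   TEXT,
        url        TEXT
    )
""")

cur.execute("INSERT INTO serial VALUES (1, 'Journal of Example Studies', 'Example Press')")
cur.execute("INSERT INTO holding VALUES (1, 1, 'online', 'http://www.example.com/jes')")

# The journal changes its title: update one attribute of the existing
# entity. Nothing else is re-described, and no second record is created.
cur.execute("UPDATE serial SET current_title = ? WHERE serial_id = ?",
            ("Journal of Example and Sample Studies", 1))

print(cur.execute("""
    SELECT s.current_title, h.url
    FROM serial s JOIN holding h ON h.serial_id = s.serial_id
""").fetchall())
```

Successive-entry serials cataloging, by contrast, clones the entire description into a new record, with the old and new records tied together only by preceding/succeeding entry linking fields.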
The manifestation problem has another dimension. In a digital world, the particular manifestation of an item is of little interest (in part because a single manifestation may last only a short while before being supplanted by another), but the work itself and its relationships to related documents are of interest. How can we gain control of those types of documents in our catalogs if there is no central record that can serve as the root of a tree of relationships?3 As a result, our catalogs remain squarely in the category of automated card catalogs rather than electronic systems, according to the well-known progression (manual, automated, electronic) defined by Buckland (1992, 6).
Irrelevant Data and Awkward Links out of the Catalog
Much of the data we code and display in our catalogs are not relevant to electronic resources and, in fact, would seem strange to users accustomed to using the Internet. Our serial records are becoming irrelevant particularly quickly. Serial check-in data were useful primarily to let people know whether the latest issue had arrived or whether a given issue should be on the shelf. Most libraries don't check in electronic issues, of course, and why should they when a user can see whether the new issue is there with a couple of mouse clicks? Another problem with our serial records is the publisher information. With publishers aggregating their online content and putting it behind a search engine, more users know, and care to know, the publisher of a given journal. The fact that this information is not routinely updated (which is current cataloging practice4) leaves our catalogs full of just plain bad information.
Links coded in the MARC 856 field, roughly analogous to call numbers for electronic resources, have a host of problems. The 856 field is a MARC holdings tag, yet many of our library system vendors support it only as a link in the bibliographic record. One reason they do so is that there are URLs in the 856 field of OCLC bibliographic records. Why is that? Libraries get their electronic content from a range of sources and, on top of that, may run web traffic through a proxy server or other local gateway page. Just because the call number is in the bibliographic record does not mean it makes sense to have the URL there; a URL is not analogous to a call number because, unlike the call number, it possesses no inherent meaning.
Then there is the question of what the 856 should link to. If it links directly to the resource, then what happens when the link is bad? (You need to extract the URLs from your catalog and run some kind of link checker on them.) What happens when the resource is restricted and you want to communicate the username and password to your user community? The confusion continues if libraries (as most do) add the 856 link to the record for the paper version of an electronic item instead of creating a separate record. Because it is holdings information, and because the electronic version is a different manifestation than the paper, serial catalogers stress that putting a URL in an 856 field of a bibliographic record for the paper version is done simply to note the availability of the electronic version. This practice works for a while because the electronic resources we acquire have paper counterparts (especially those deemed worthy of being put into the catalog, a decision often made precisely because an electronic item has a paper equivalent). This confusion is a recipe for disaster as the paper and electronic versions diverge in content, as the paper is canceled, and so on.
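To give a sense of what that maintenance entails, here is a minimal link-checking sketch. It assumes the 856 URLs have already been exported from the catalog into a plain text file, one per line; the file name and the simple "anything but 200 is suspect" rule are my own illustrative choices, not a standard tool.

```python
import urllib.error
import urllib.request

def check_url(url, timeout=10):
    """Return (url, status): an HTTP status code or an error message."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return url, resp.status
    except urllib.error.HTTPError as err:            # server answered with an error code
        return url, err.code
    except (urllib.error.URLError, OSError) as err:  # DNS failure, timeout, etc.
        return url, str(err)

if __name__ == "__main__":
    # 856_urls.txt: URLs exported from the catalog's 856 fields (hypothetical file).
    with open("856_urls.txt") as fh:
        urls = [line.strip() for line in fh if line.strip()]
    for url, status in (check_url(u) for u in urls):
        if status != 200:                             # flag everything that is not a plain 200 OK
            print(f"CHECK: {url} -> {status}")
```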
Weak Subject Searching
Subject searching has always been an embarrassingly weak link in our catalogs. Online catalogs never successfully implemented real subject browsing, which would allow a user to browse holdings by moving up and down the hierarchical context of a subject term. Mapping between terms is another feature critical for navigating a controlled vocabulary that is absent in our catalogs. As a result, study after study over the past twenty-plus years has shown that keyword searching is more effective than subject searching (e.g., Blecic et al. 1999).
"Subject" searching to our users now means browsing a Yahoo-like classification or simply typing a keyword into a search engine. It used to be that this type of searching on the Internet was quite ineffective, presenting the user with an impossibly long and unorganized list of web pages that may or may not be related to his or her query. But this common wisdom no longer holds up. The length of the hit list is not as great a problem as we thought it was. Sullivan (1999) of Search Engine Watch reports that users want relevancy, not recall, so a few good links are fine as long as they appear at the top of the list. And, increasingly, users are getting context and useful results, even in response to very broad queries. Because most users do simple searches, and the same ones over and over, search engines have developed prepackaged results screens. Building on the history of others' experience is a powerful tool to iteratively improve search results (as opposed to our catalogs, where each search is the first search ever done as far as the catalog is concerned). For example, not long ago typing "movie" into a search engine would have resulted in a large and useless list of hits; now, if you type "movie" into some search engines, such as Lycos or Excite, you get a very useful, categorized page back. These pages are based on the Open Directory project.5 The success of these directory-based search engines validates what librarians have always said: you need human intelligence to create truly useful access to information resources. One perspective on this development is that, because our users are much more successful with "dumb" searches on the Internet than they are with the same search in our catalog, why would they ever bother to learn how to use the catalog? But what's even more interesting is that those useful Open Directory records are created by a loose network of thousands of human volunteers, not unlike the Internet itself.
The bottom line comes back to surrogacy. When you find the full document, the price of lower precision is nowhere near as great as when you are looking at a description of that document and must go to the stacks to evaluate it. The likelihood of your substituting that online document for another (probably better) one in the library is also much greater.
Poor Interoperability
Looking to the near future, how will our proprietary catalogs participate in the exchange of data with other systems, for instance metadata repositories? So far, libraries have relied on Z39.50 to do this. But even with Z39.50, many libraries are in the position of having to wait for their vendors to support the existing standard. And anyone who has done a cross-platform search using Z39.50 knows that, for whatever combination of software, policy definition, or other reasons, results from different systems in different libraries are so divergent that we have no confidence in the search results. Practically speaking, for Z39.50 to support various metadata formats, those formats must first be converted into USMARC. So not only will the records be dumbed down to fit into Z39.50, but their hierarchical content will also be lost. Is this the way archivists want to search and view their finding aids, which were so richly encoded in the SGML-based EAD DTD? Probably not. XML (like SGML) has none of these disadvantages and one enormous advantage: it is interoperable with other web-based systems.
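The point about lost hierarchy is easy to see in miniature. The sketch below uses an invented, loosely EAD-flavored structure (the element names are simplified stand-ins, not the real EAD DTD): flattening the nested components into a list of repeatable fields keeps the strings but discards which file belongs to which series, while the XML tree keeps that context available for searching and display.

```python
import xml.etree.ElementTree as ET

# A toy finding aid with nested components (invented element names).
finding_aid = ET.fromstring("""
<collection title="Smith Family Papers">
  <series title="Correspondence">
    <file title="Letters, 1901-1910"/>
    <file title="Letters, 1911-1920"/>
  </series>
  <series title="Photographs">
    <file title="Portraits"/>
  </series>
</collection>
""")

# Flattened, MARC-like view: a list of repeatable title/note strings.
# The words survive; the parent-child relationships do not.
flat_fields = [elem.get("title") for elem in finding_aid.iter()]
print(flat_fields)

# The XML tree retains the hierarchy, so a display can indent by depth.
def show(elem, depth=0):
    print("  " * depth + elem.get("title", ""))
    for child in elem:
        show(child, depth + 1)

show(finding_aid)
```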
We used to achieve interoperability between our catalogs and databases by loading databases locally into our integrated library systems. With the advent of the common web interface and improved bandwidth, most libraries now rely on vendors to store and serve the data. Integration with the catalog was usually lost in the process (except perhaps for a "library owns this item" note), but because the vendor's search interface was generally more powerful and easier to use, few librarians or users complained.
Why Do Web Lists Work?
The fact is that libraries are putting up lists of collection resources on their web sites. Some of the above discussion of the problems with catalogs is part of the explanation why. But not only is there a "push" away from catalogs, there is an even stronger corresponding "pull" toward web lists. Users want lists. Why is that? It helps to look at who our users are and what they are doing.
What Are Most of Our Undergraduates Doing?
Students are looking for some information on topic X (or something close enough) to complete an assignment. They are not looking for the complete range of the best and most appropriate information on topic X to complete an assignment. As librarians we will say, maybe to our chagrin, that users have always done that. But it could be getting worse. Some now argue that a "'good enough' information culture is emerging, fueled by the need for instant information gratification that arises from Internet searching" (Brindley 1999). For this type of information need, the Internet does not supplement a search of the library catalog; it replaces it. "The development of commercial search engines to provide access to electronic information has become another alternative to the catalog and traditional library methods of bibliographic organization" (Stevens 1998, 188). It is not easy to ask the catalog "show me everything you have online in child psychology." That should be easy, and that's what web lists strive to deliver.
Lists secured a foothold in our web sites through lists of databases. Just as users in the past went to the index tables in the reference room because they knew which paper index they wanted (or at least the broad subject area), so when the indexes were put online, users did not expect to have to search the online catalog to locate them. That is so obvious that libraries have been maintaining separate lists of databases ever since they first created web sites. But this principle is even truer for Internet resources. When users want information on the Internet on a topic, they use a directory site or search engine. If librarians want to present prepackaged, evaluated lists of sites, it is already an uphill battle to get users to click on those links. We don't stand much of a chance of getting users to search a library catalog where Internet resources are mixed up with paper resources.
In the academic library, our catalogs' "competition" is not Amazon.com but, rather, the likes of EbscoHost, ProQuest, Lexis-Nexis, and now the PDF version of journals. Because of their ease of use, academic libraries are witnessing a broad shift away from use of books toward use of journal literature. Majka (1999) recently reported in American Libraries that his library saw a 37 percent drop in circulation after the introduction of ProQuest Direct. Association of Research Libraries statistics support this overall decline in circulation.6 These valuable resources, in addition to being location independent, again have nothing to do with the library catalog.
What Are Our Graduate Students and Faculty Doing?
Many, if not most, graduate students and faculty have always been infrequent users of our catalogs except for known-item lookup. For these users, competition comes on two fronts: publisher initiatives and changes in scholarly communication. What's happening now with the scientific/technical/medical (STM) literature gives us a window on the future for all disciplines. Users of STM literature increasingly are able to move directly from bibliographic citation to the full text for a given article, and from within that article's citations to link to other articles.7 In the lucrative STM market, technical issues are being resolved, revenue models ironed out, and standards agreed upon.8 The buzz now is about preprint servers. Everyone has heard about what the theoretical physicists have done, and now their experience has been codified into standards through the Open Archives initiative.9 Other initiatives carry important institutional support, such as PubMed Central and BioOne.10 It may be true that researchers have much less elastic information needs than undergraduates, but their tools are moving out of the catalog's scope, and out of the library, in just the same way.
The Familiar Principle of Least Effort
The first online catalogs were a big time-saver over having to mentally identify a single access point (author, title, or subject), find the card in the catalog, note the call number, and then find the book. They made the first two steps much more efficient and, by the early 1990s, location independent. Online catalogs made another big step forward in usability when web interfaces replaced text-based interfaces.
Online journal indexes performed a similar service for the journal literature. The improved searching and ease of use of the online version was so obvious, especially in databases that included abstracts, that paper indexes rapidly disappeared. Because of the difficulty of using paper indexes in the first place, not to mention the addition of searchable abstracts, the benefits realized with the migration to online indexes were even greater than with the migration to the online catalog.
The journal index with full-text ASCII content was another big leap forward and drew more users away from the catalog. Then came page-image electronic journals, and our scholars were very happy indeed. Now, even the lower-end full text is improving (e.g., EbscoHost and SilverPlatter offering page images in their aggregate databases). Our users would prefer not to use the catalogs, or even a library web site, as a gateway, except when they have no alternative. From comparing my site's usage statistics with publisher-provided statistics, I see that only 15% to 50% of electronic journal usage comes via links on the library's web site. It's not very hard to remember the URL www.nature.com and even easier to bookmark it. Our users are doing just that.
Electronic books promise to bring book and journal use back into balance by making them equally ubiquitous and easy to use. We must learn from our experience with full-text journal content to predict the coming demand. The level of demand will not consist of some proportion of the existing demand for those books, with the balance still preferring paper. It has already been demonstrated that there will be whole new levels of demand brought about by the ease of use of the new format. Dennis Dillon (1999), head of collections and information resources at the University of Texas at Austin, commented recently in The New York Times: "Usually a book has a one-third chance of being checked out. So to have some title checked out 25 times in two months-that's shocking" (p. C1). Yes, it's more pleasant to curl up with a paper book, but the principle of least effort is a powerful force. In fact, the electronic medium suits the usage of books in an academic library very well: most usage is not cover-to-cover reading of a single volume (Amazon.com satisfies some of that demand anyway). Of course, libraries can load records for e-books in their catalogs. But if we get our books through a single vendor such as netLibrary, with seamless IP-based authentication and full-text searching, why would anybody search the catalog to find an e-book? Why not just search netLibrary directly to see what it has? When that book content is combined with journal content and put behind a search engine, as Ebsco is now doing,11 our catalogs are pushed even closer to irrelevancy.
Again, the problem comes back to surrogacy. Because of the Internet, user expectation is to search and reach an item, not a description of that item. Web sites reward this expectation: users download software and music, search up-to-the-minute news and stock information, and message their friends in real time. A gulf of time and space between bibliographic description and document is at best awkward for people to navigate and, in reality, just doesn't make sense anymore for most of our users.
What's Bad about Web Lists
There are several major disadvantages to web lists, even if they are dynamically generated, and I hope these disadvantages will cause the practice to serve as only an interim solution. The most glaring problem is the separation of electronic resources from the rest of the collection. This orphans the print collection just as the part of the collection left in the card catalog was orphaned when the online catalog came along. Another disadvantage is the loss of indexing power. However, with electronic resources such as databases, journals, and collections of web pages, powerful searching is less critical. Finally, from the user's perspective, there is the problem of multiple lists. Most sites have separate lists for each resource type, so that if the user wants to do a comprehensive search, he or she must do it in three or more places. Some libraries are addressing this problem by developing a one-page search interface to all those resources. What is searched can be rich data, even fielded, and could easily support records coded according to the Dublin Core.12
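As a sketch of what such a one-page search might sit on top of, consider a handful of fielded records using Dublin Core element names (title, subject, type, identifier). The records, URLs, and the search function below are invented for illustration; a production system would obviously need a real database and indexing behind it.

```python
# Fielded records using Dublin Core element names; the content is invented.
records = [
    {"title": "Child Development Abstracts", "type": "database",
     "subject": ["child psychology"], "identifier": "http://www.example.org/cda"},
    {"title": "Journal of Child Psychology", "type": "e-journal",
     "subject": ["child psychology", "psychiatry"],
     "identifier": "http://www.example.org/jcp"},
    {"title": "Developmental Psychology Web Resources", "type": "web site",
     "subject": ["developmental psychology"],
     "identifier": "http://www.example.org/devpsych"},
]

def search(recs, term, field=None):
    """One search box over every list: match in a single field or in all fields."""
    term = term.lower()
    hits = []
    for rec in recs:
        values = [rec.get(field, "")] if field else list(rec.values())
        text = " ".join(v if isinstance(v, str) else " ".join(v) for v in values)
        if term in text.lower():
            hits.append(rec)
    return hits

# "Show me everything you have online in child psychology" becomes one query
# across databases, e-journals, and web sites alike.
for rec in search(records, "child psychology", field="subject"):
    print(rec["type"], "-", rec["title"], "-", rec["identifier"])
```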
What's Good about Web Lists
Because our catalogs are not true relational databases, a database-driven web list is irresistibly appealing when the task at hand is to integrate disparate resources, and the disparate attributes associated with those resources, under a single user-friendly umbrella.
Web lists give you flexibility:
* to add additional access points to items (e.g., descriptors, resource type, vendor source);
* to distribute the responsibility for creating the databases that underlie the display to the appropriate library staff;
* to change the web display based on usability studies or other reasons.
Web lists give you control:
* over information not easily controlled in MARC, such as terms and conditions of access;
* over the URL (the 856 link in the catalog can link to a page on your web server that provides information on access restrictions, passwords, etc.);
* over the web display (an item can be marked "no display," for instance, if it is temporarily broken).
Web lists support access:
* Web presentations can help with the subject browsability problem because items can be categorized under broad subject terms.
* It's easy to create searchable subcollections, or to combine many collections under one index.
* No special search syntax, knowledge of Boolean logic, or ability to read a catalog's bibliographic display is required.
What's Going to Happen Next?
Many libraries are implementing My Library customization features in an attempt to overcome some of the deficiencies of web lists. My Library sites, in essence, let users create their own lists, as well as interact with their personal information in the integrated library system.
A near-term goal will be to get the data out of the catalog and into other databases for purposes of web display. Such a course does not jeopardize the future of our shared database, or even settle the question of the future of MARC, but it makes that data far more accessible. MARC and other metadata must be interchangeable. Some conversion tools already exist (e.g., TEI2MARC and MARC/SGML), and Stanford recently released MARC2XML conversion tools to the library community.13 Such standards and conversion tools always predate good presentation software, but XML is a good bet for a universally usable markup language that will soon work on all current-generation browsers. Other, more sophisticated display tools will be developed to suit the display of particular data types and will be used by specific communities of interest.
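To suggest the flavor of such conversion, here is a minimal crosswalk sketch mapping a few familiar MARC tags to Dublin Core elements and serializing the result as XML. The input format and output layout are deliberate simplifications of my own; real tools, such as the Stanford Medlane software mentioned above, handle indicators, subfields, and the full tag set.

```python
import xml.etree.ElementTree as ET

# A few familiar MARC tags mapped to Dublin Core elements (simplified;
# real crosswalks handle indicators, subfields, and many more tags).
MARC_TO_DC = {
    "100": "creator",     # main entry, personal name
    "245": "title",       # title statement
    "260": "publisher",   # publication information (simplified)
    "650": "subject",     # topical subject heading
    "856": "identifier",  # electronic location (URL)
}

def marc_to_dc_xml(fields):
    """fields: a list of (tag, value) pairs extracted from a MARC record."""
    record = ET.Element("record")
    for tag, value in fields:
        dc_name = MARC_TO_DC.get(tag)
        if dc_name:                      # skip tags we have no mapping for
            ET.SubElement(record, dc_name).text = value
    return ET.tostring(record, encoding="unicode")

# Invented sample data.
sample = [
    ("100", "Smith, Jane"),
    ("245", "An example electronic resource"),
    ("650", "Online library catalogs"),
    ("856", "http://www.example.org/resource"),
]
print(marc_to_dc_xml(sample))
```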
How do our current catalogs fit into this picture? Stevens (1998) writes: "Perhaps it is time to recognize that the library catalog, as we once knew it, may be on its way out" (p. 190). Because they are an indispensable part of our integrated library systems, they cannot exist outside of that context. That gives the library system vendors quite a bit of leverage. The recent announcement that Endeavor has been acquired by Elsevier Science14 indicates that the vendors seek to do the job of integration for us. They may well end up selling us a more complete package of content and software than we've ever seen before under the now-familiar umbrella of better service and less choice that we see with consolidation in the marketplace and the growth of library consortia. Perhaps the current, early-stage work toward an open source integrated library system will bear fruit.15 In any event, a sensible first step is to extract key data from the catalog to provide an integrated and flexible display of electronic and traditional materials on the Web.
As individual librarians in our own institutions, we have shown how flexible we can be in responding to the library user's need to access information. The constraints within which we work, however (standards, practices, software), are understandably slower to change. But the more we understand and acknowledge their weaknesses, the better we will be able to contribute to the ongoing dialogue in our profession that will help us evolve quickly and in the right direction.
Notes
1. Available at http://www.wired.com/news_drop/palmpilot/story/0,1325,35306,00.html.
2. A fuller analysis of the catalog along these lines was done by Debora Seys at the Internet Librarian conference, San Diego, California, 10 November 1999. Available at http://www.infotoday.com/i199/presentations/seys1.ppt.
3. This problem is not new to the cataloging community. Object-oriented cataloging has been discussed for more than a decade but was never implemented because of the advantages of MARC as a distribution format and the difficulties of changing such an entrenched system (Younger 1997, 476).
4. "With the exception of the final date of publication, significant changes appearing on later issues are recorded in notes, when considered desirable. Do not clutter the record with minor changes, particularly those that involve commercial publishers" (CONSER Cataloging Manual).
5. Available at http://dmoz.org/.
6. Between 1995 and 1998, circulation declined at most ARL libraries, and by more than 10 percent at thirty-one libraries (http://fisher.lib.Virginia.EDU/newarl/).
7. See, for instance, the CrossRef initiative (http://www.crossref.org) and HighWire Press's cited reference linking capabilities (http://highwire.stanford.edu/institutions/features.dtl).
8. Some of the more advanced efforts are Stanford University's HighWire Press (http://highwire.stanford.edu/) and Elsevier's ScienceDirect (http://www.sciencedirect.com/). The digital object identifier (DOI) standard will underlie cross-linking as well as rights management (http://www.doi.org/).
9. Open Archives initiative (http://www.openarchives.org/).
10. PubMed Central (http://www.pubmedcentral.nih.gov/) and BioOne (http://www.bioone.org/).
11. Press release, 29 March 2000, Ebsco Publishing.
12. Some examples are MIT's "Vera" (http://libraries.mit.edu/vera), the Virginia Military Institute's SourceFinder (http://www.vmi.edu/sourcefinder/), and the Arizona Health Sciences Library's MultiFind (http://www.ahsl.arizona.edu/multifind/).
13. Stanford University Lane Medical Library Medlane project (http://xmlmarc.stanford.edu/).
14. Press release, 7 April 2000, Elsevier Science.
15. Avanti (http://www.nslsilus.org/nschlumpf/avanti) and Open Source Digital Library System project (http://osdls.library.arizona.edu/).
References
Blecic, Deborah D., Josephine L. Dorsch, Melissa H. Koenig, and Nirmala S. Bangalore. 1999. A longitudinal study of the effects of OPAC screen changes on searching behavior and searcher success. College and Research Libraries 60:516.
Brindley, Lynne. 1999. Institutional framework for change. Paper presented at the Ingenta Institute Conference, September, London. Available at http://www.ingenta.com/hometfs_ingentainstitute.htm
Buckland, Michael. 1992. Redesigning library services: A manifesto. Chicago: American Library Association.
Chepesiuk, Ron. 1999. Organizing the Internet: The "core" of the challenge. American Libraries 30 (1): 60-63.
Coffman, Steve. 1999. Building Earth's largest library: Driving into the future. Searcher 7 (3): 34-47. Available at http://www.infotoday.com/searcher/mar99/coffman.htm
Dillon, Dennis. 1999. Racing to convert books to bytes. The New York Times, 9 December, C1.
Kevil, L. Hunter. 2000. Re: Do we still need online catalog vendors? Post to Web4Lib, 6 March.
Majka, David. 1999. Of portals, publishers and privatization. American Libraries 30 (9): 47.
Miller, David. 2000. Re: In "American libraries." Post to Autocat, 4 April.
Ortiz-Repiso, Virginia, and Purificacion Moscoso. 1999. Web-based OPACs: Between tradition and innovation. Information Technology and Libraries 18 (2): 68.
Stevens, Norman D. 1998. Looking back at looking ahead, or "The catalogs of the future revisited" with additional speculation. Information Technology and Libraries 17 (4): 188.
Sullivan, Danny. 1999. Web tool for all: Search engine watch. Paper presented at the Internet Librarian Conference, 10 November, San Diego, California. Available at http://calafia.com/presentations/
Younger, Jennifer A. 1997. Resources description in the digital age. Library Trends 472.
About the Author
Kristin Antelman is head of systems and networking at the Arizona Health Sciences Library, Tucson. She received an M.L.S. in 1988 and was a systems librarian at the University of Delaware Library for eight years. Readers can contact her at [email protected].
