REUSE+ Result Report for Multipart Records

REUSE+

Bernhard Eversberg

Presentation at
CC:DA Meeting, Washington, DC
27 June 1998



The good news in this report is that we can now see avenues of approach and alignment for our two bibliographic galaxies, which have up until now been living in uneasy separation. Avenues, in other words, toward nothing less than a grand unification.

In the REUSE project, after clearing away a number of obstacles in the entries and headings departments, as reported by Monika Münnich, there was one remaining roadblock, and that was, and continues to be, the difference in how we handle multiparts of all kinds and whether or how we do analysis. It is a major roadblock, and it has far-reaching ramifications. Which is why OCLC supported a sequel to REUSE, and which is why we are here to talk about it.

I had a dream the other day, a daydream (though I suppose to some it would seem more like a nightmare). In this dream, we wake up one morning in Germany to find that the Masters of the Bibliographic Universe have done a universal replacement in our utility databases, replacing every record with an AACR2/USMARC counterpart.

The interesting question is: after this intervention, what effects will we perceive, and how will it affect our routine?

With regard to multiparts, leaving everything else aside, the effects are likely to be the following:

We will perceive a general deficit in bibliographic control. Volumes are less well controlled than they used to be. We will miss a considerable amount of bibliographic detail about volumes and parts - detail we used to have in our shared records, including persons related to volumes, edition details, dates, physical descriptions, and series statements. As life goes on, we realize that this deficit has at least five major consequences in day-to-day work:

  1. The shared bibliographic record no longer contains bibliographic volume details, and that means at least some of these details have to be entered as holdings information on the local level, to distinguish the circulatable parts from each other. We had been used to receiving this data with the shared record and adding nothing more than the barcode to the appropriate part record. This means more data input on the local level for every library copying the shared record. More work on the local level, in every library.
  2. Pretty soon we will also discover that multiparts have been handled in unpredictable ways in our new records - for some there are analytic records, while others have been cataloged on the collection level only. When doing a search, there is no way of knowing how one particular publication may have been handled by the agency that did the original record. This means more uncertainty, and in turn more time spent on searching. This is because AACR practice tolerates decisions based on factors that lie outside the domain of the rules, namely classification and acquisitions. Not every library can accept all the decisions made by other libraries - which means more work. But generally, unpredictability in a catalog, in other words the lack of clear-cut principles, always means more work on the local level, for every library.
  3. In OPACs, for the same reasons, we will experience a loss of reliability and consistency in search results. Titles of volumes will sometimes be found, sometimes not (because of incomplete contents notes), or not with the same approach (some are in a 245, some in a 700 $t, some in a 505 - see the sketch after this list). That generates more work for reference departments, more local work once again, in every library.
  4. Interlibrary loan will become more difficult and be slowed down for multiparts: the shared records in the union database no longer reveal which parts, let alone which editions of volumes, one library actually owns. This generates more work for ILL departments and reference, in every library.
  5. And finally, when exchanging data between agencies above the library level, or between bibliographic utilities, the lack of precision in the description of parts must lead to more work for the catalogers who eventually use the records - more work in every library.
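To make consequence 3 more tangible, here is a minimal sketch (in Python) of what a volume-title search has to cope with under such unpredictable practice. The sample records are invented, and the MARC fields are flattened to plain strings for illustration only (a real 700 $t is a subfield of a structured field):

```python
# Invented sample records, each flattened to a dict of MARC tag -> list of strings.
# Three different agencies, three different homes for the same volume title.
records = [
    {"245": ["Collected essays. Vol. 3, On method"]},                                  # analytic record
    {"245": ["Collected essays"], "505": ["Vol. 1 -- Vol. 2 -- Vol. 3, On method"]},   # contents note
    {"245": ["Collected essays"], "700": ["Example, Jane. $t On method"]},             # name/title added entry
]

CANDIDATE_FIELDS = ("245", "505", "700")  # every place the volume title might hide

def find_volume(recs, volume_title):
    """Return the records that mention the volume title in any candidate field."""
    needle = volume_title.lower()
    return [
        rec for rec in recs
        if any(needle in value.lower() for tag in CANDIDATE_FIELDS for value in rec.get(tag, []))
    ]

print(len(find_volume(records, "On method")))  # 3 - but only if we knew to look in all three fields
```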
Plain and simple, what we observe after the intervention is more work in every individual library, whereas in the olden days, before the intervention, there was only some more work in one library (the one that did the original record), and less in all the others. Which is what shared cataloging is all about - we had just taken it one step further and drawn the line between bibliographic data and local or holdings or copy data at a different point, and we like to think a more logical point, to generate more effect from more complete shared data. Strictly speaking: every detail that is common to all copies of an edition is not copy or holdings data but bibliographic.

We can, however, and this is good news again from the REUSE project, dream a much nicer dream, and one that need not remain a dream. Rather than having our records replaced with existing USMARC records by intervention from above, we can convert our records into USMARC records without losing information. That is because all the options we need are provided for, both in USMARC and in AACR2. No new rules or format elements are required. The fathers and mothers of rules and format had the foresight to make provisions for everything we need. We only have to make use of record linking. And thanks to some changes we are now in the process of making in our shared databases, the linking technique can be kept very simple and yet apply in all situations where we see a need for it.

To show you what it may look like, I have prepared a few examples. I present these examples as bare outlines only, woodcuts rather than etchings, to make the essentials visible. Full examples are in the demo database set up for the project and accessible on our Web server in Braunschweig. You get to these examples in a matter of three or four mouse clicks if you call up this URL:

www.biblio.tu-bs.de/allegro/formate/reusep.htm

This is the page for the REUSE+ final report, and it links you on to the demo database. The PowerPoint slides are available from this address, too.

The examples are in three categories: (see the PowerPoint slides and demo database)

Sets - Multiparts - Containers.

Please keep in mind that I am not an AACR cataloger, so you are bound to find mistakes with indicators, or worse. Please look beyond these and look at the "big picture".
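Before turning to the set example itself, here is a bare sketch of the linking idea in code form. The records, control numbers, and field labels are my own simplified illustration (with "773w" standing in for a host-item link carrying the set's control number), not a prescription from the project:

```python
# A bare outline of the linking technique for a set: one collection-level record,
# one small subrecord per volume, tied together by the set's record control number.
# All identifiers and field labels below are invented and simplified for illustration.

set_record = {
    "001": "de-gbv-0001",                       # control number of the set record
    "245": "Collected essays / Jane Example",   # collection-level title
}

volume_subrecords = [
    {"001": "de-gbv-0001-1", "773w": "de-gbv-0001", "245": "Vol. 1, Early pieces", "260c": "1995"},
    {"001": "de-gbv-0001-2", "773w": "de-gbv-0001", "245": "Vol. 2, Late pieces",  "260c": "1997"},
]

def volumes_of(set_control_number, subrecords):
    """Collect the volume subrecords that point up to a given set record."""
    return [rec for rec in subrecords if rec["773w"] == set_control_number]

for vol in volumes_of(set_record["001"], volume_subrecords):
    print(vol["245"], vol["260c"])   # each volume keeps its own title, date, etc.
```

The point is only that the volume details live in small records of their own, and that one control number is enough to reassemble the whole set.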

Looking at the example for sets, you are probably tempted to say it is overstepping the mark to provide subrecords for every volume when there is effectively very little information on the volume level. Our time-tested experience points to the contrary, for five reasons:

  1. Only one library or agency has to put in the work; all the others can copy it.
  2. It is not much work (3 or 4 % of items), and much of it can be automated.
  3. Volume subrecords can be turned into holdings records for circulation, saving local work in every library (see the sketch after this list).
  4. We avoid having to draw a line between cases where we do and where we don't create subrecords, and thereby also avoid the need for discussions in cases of doubt - because no doubt ever arises.
  5. New volumes coming out at a later point, supplements, index volumes, etc. are very easy to add without any need to modify the main record: you just copy one of the existing subrecords and modify it.
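As a sketch of reason 3, assume the simplified subrecord layout from the earlier example; deriving a circulatable item record locally then amounts to little more than attaching a barcode. Field names, the barcode, and the location are invented for illustration:

```python
# Reason 3 in miniature: a copied volume subrecord already carries everything the
# local system needs to tell the parts apart; the library adds only its own data.
# Record layout, field names, barcode, and location are simplified illustrations.

def to_holdings(volume_subrecord, barcode, location="Main stacks"):
    """Derive a circulatable holdings/item record from a shared volume subrecord."""
    return {
        "bib_link": volume_subrecord["001"],   # still points at the shared subrecord
        "label":    volume_subrecord["245"],   # volume designation, copied as-is
        "barcode":  barcode,                   # the only genuinely local datum
        "location": location,
    }

item = to_holdings({"001": "de-gbv-0001-2", "245": "Vol. 2, Late pieces"}, barcode="30470012345")
print(item)
```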
Now, finally, I have to disappoint you if you are still waiting for the bad news - it is not coming. Even better news is coming instead, especially for the USMARC galaxy:

If you introduce links and subrecords (not exactly a small "if", but I repeat: it requires no new inventions in format or rules!), then you can finally reuse German data to their full extent to enhance your national (and subsequently local) bibliographic records, even for many American books, not just for German ones, for we did and we do create the subrecords for all the books we catalog. (In the Göttingen Pica database alone, we have 1.5 million subrecords.) And you end up with a sum total of less work on the local level, and more effect. This is not a miracle, it is nothing more than an application of common sense, and in our country it is reality. In some US local systems, bibliographic linking is also a reality today, and these could benefit more directly and sooner than others. It is certainly possible that one of the US local systems, or one of the vendors, will go ahead with a project to merge our subrecords into their database and link them to existing collection records.
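To indicate what such a merge project might look like in the simplest terms, here is a sketch in the same simplified record notation as above. Matching imported volume subrecords to existing collection records by a single ISBN is of course a naive assumption; a real project would need a more robust match key and manual review of the leftovers:

```python
# Sketch of the merge step: for each imported German volume subrecord, find the
# collection-level record already present locally (here naively matched by the
# ISBN of the set) and re-point the subrecord's link at that local record.
# All identifiers below are invented for illustration.

local_collections_by_isbn = {
    "3-12-345678-9": {"001": "local0000001", "245": "Collected essays"},
}

imported_subrecords = [
    {"001": "de-gbv-0001-1", "set_isbn": "3-12-345678-9", "245": "Vol. 1, Early pieces"},
    {"001": "de-gbv-0002-1", "set_isbn": "3-98-765432-1", "245": "Vol. 1, Letters"},
]

def merge(subrecords, collections_by_isbn):
    """Attach imported volume subrecords to existing local collection records."""
    linked, unmatched = [], []
    for rec in subrecords:
        parent = collections_by_isbn.get(rec["set_isbn"])
        if parent:
            rec["773w"] = parent["001"]   # link now points at the local collection record
            linked.append(rec)
        else:
            unmatched.append(rec)         # no local parent found: needs review or a new record
    return linked, unmatched

linked, unmatched = merge(imported_subrecords, local_collections_by_isbn)
print(len(linked), "linked,", len(unmatched), "unmatched")
```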

What you will find lacking in our records, and most of you will know it, is usable subject information. There is no commonly used classification in Germany, and our subject headings do not map easily onto LC subject headings; the concepts captured in the headings are very often incongruent. Some work is under way now to construct concordances with LC classification and subject headings. If this work bears fruit, more benefits can be reaped on both sides, because we could also make more complete use of MARC data than is possible now.

For this presentation, I chose to confront you with some sort of a vision - the unification of our separate bibliographic galaxies - rather than just the nuts-and-bolts technicalities we identified as possible small improvements within the constraints of the status quo, which would mean minimal changes in USMARC practice, but also no measurable step toward unification. (We describe these nuts and bolts, such as proposals for new indicators for the 505, in the final report. They really do not have enough substance for a presentation on an occasion like this.)

Since the suggestions I have presented conform with trends discussed last year at the Toronto Conference - I may remind you of the IFLA paper on Functional Requirements for Bibliographic Records - and since the recent metadata initiatives, as exemplified by the Dublin Core movement, point in the same direction, namely toward the implementation of some kind of bibliographic linking technique, there is reason to hope that we can finally get our bibliographic galaxies linked up.