Several years ago RAC faced a dilemma familiar to many in our profession: a daunting processing backlog that continued to grow, depriving scholars and staff of access to many of the records held in our collections. Our collections are great resources of knowledge, but only if those resources are available to our users!
To find a solution, we actively sought processing practices that reflect our values as an operating foundation, specifically the values of collaborating and sharing knowledge, disseminating information, promoting discovery in all its forms, and facilitating open and equitable access to all our archival holdings. Over the last year and a half, we shifted our strategy to processing by accession and implemented a standards-based approach. This approach has been a resounding success thus far, resulting in the processing and opening for research of over 4,500 cubic feet of records. This discussion will be the first in a series of posts about our processes and collaborations. I hope our experiences will be valuable to others. Continue reading →
This week I had the opportunity to attend the Born Digital Archiving eXchange hosted by Stanford University. It was a really great unconference that brought together digital archivists, curators, and others working to preserve and provide access to born-digital archives.
You might remember that earlier this year I wrote a post about Metadata Cleanup and Ford Foundation Grants that gave a very basic overview of how I went about reconciling thousands of subject terms against the Library of Congress. This reconciliation was essential in helping us gain control over data that we did not create, but that we identified as potentially very valuable to researchers. This post gives an in-depth and updated account of how I cobbled together a (mostly) automatic way to check large numbers of topical terms against the Library of Congress. It still requires some hands-on work, and quality checking is a must, but it cut what would otherwise have been hundreds of hours of work down to a small fraction of that time.
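To give a sense of the general shape of that workflow, here is a minimal sketch of checking terms against Library of Congress Subject Headings using the id.loc.gov suggest service. The file names, the exact-match rule, and the pause between requests are simplified stand-ins for the fuller process described in the post, not the script itself.

```python
"""Sketch: reconcile a CSV of topical terms against LCSH via id.loc.gov.
File names and matching logic are illustrative assumptions."""

import csv
import json
import time
import urllib.parse
import urllib.request

SUGGEST_URL = "https://id.loc.gov/authorities/subjects/suggest/?q={}"


def lookup_term(term):
    """Return (label, uri) if LC suggests an exact label match, else (None, None)."""
    url = SUGGEST_URL.format(urllib.parse.quote(term))
    with urllib.request.urlopen(url) as response:
        # The suggest service returns an OpenSearch-style array:
        # [query, [labels], [descriptions], [uris]]
        query, labels, _, uris = json.load(response)
    for label, uri in zip(labels, uris):
        if label.lower() == term.lower():
            return label, uri
    return None, None


def reconcile(in_path="terms.csv", out_path="reconciled.csv"):
    """Read one term per row and write the term, matched label, and URI for review."""
    with open(in_path, newline="") as infile, open(out_path, "w", newline="") as outfile:
        writer = csv.writer(outfile)
        writer.writerow(["term", "lcsh_label", "lcsh_uri"])
        for row in csv.reader(infile):
            term = row[0].strip()
            label, uri = lookup_term(term)
            writer.writerow([term, label or "", uri or ""])
            time.sleep(0.5)  # be polite to the id.loc.gov service


if __name__ == "__main__":
    reconcile()
```

Anything the script can't match exactly still gets written out with empty columns, which is where the hands-on quality checking comes in.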
We prepared a series of screencasts for a recent donor meeting. These screencasts give a really nice visual overview of how we use three different systems: Archivematica, ArchivesSpace, and DIMES, and how they connect to each other.
The first screencast reviews our Archivematica ingest process, and covers how we link to metadata in the Archivists’ Toolkit. We’ll be implementing this functionality using the ArchivesSpace API in the near future.
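As a rough illustration of what that future ArchivesSpace-based linking might look like, here is a sketch of the kind of API calls involved: authenticating for a session token, fetching an archival object, and attaching a digital object instance to it. The host, credentials, repository, and record identifiers are placeholders, and our eventual implementation may well differ.

```python
"""Sketch: link a digital object to an archival object via the ArchivesSpace API.
Host, credentials, and IDs are placeholders, not our production values."""

import json
import requests

ASPACE_URL = "http://localhost:8089"  # ArchivesSpace backend API (placeholder host)
USERNAME = "admin"
PASSWORD = "admin"


def get_session():
    """POST to the login endpoint and return the session token."""
    resp = requests.post(
        f"{ASPACE_URL}/users/{USERNAME}/login",
        params={"password": PASSWORD},
    )
    resp.raise_for_status()
    return resp.json()["session"]


def link_digital_object(session, repo_id, archival_object_id, digital_object_uri):
    """Fetch an archival object, append a digital object instance, and save it back."""
    headers = {"X-ArchivesSpace-Session": session}
    record_url = f"{ASPACE_URL}/repositories/{repo_id}/archival_objects/{archival_object_id}"

    record = requests.get(record_url, headers=headers).json()
    record.setdefault("instances", []).append({
        "instance_type": "digital_object",
        "digital_object": {"ref": digital_object_uri},
    })
    update = requests.post(record_url, headers=headers, data=json.dumps(record))
    update.raise_for_status()
    return update.json()


if __name__ == "__main__":
    token = get_session()
    link_digital_object(token, repo_id=2, archival_object_id=1234,
                        digital_object_uri="/repositories/2/digital_objects/567")
```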
As you should all know by now, we will be transitioning from ATReference to ArchivesSpace in a couple of months. It has been a lengthy project, but we're quickly approaching its final stages. I wanted to give everyone a quick rundown of the final timeline and the work we're doing to get there.
One of the sessions I really enjoyed at this year’s edUI conference (for a broad recap of the conference, see my earlier post) was Designing for Information Objects, presented by Duane Degler (Design for Context) and Neal Johnson (National Gallery of Art). Although the presentation took place on the afternoon of the last day of the conference, by which time my brain was already past its saturation point, it was immediately apparent to me that there were some pretty important ideas in the presentation that deserved some detailed attention. In part, I wanted to write this post as a way to revisit that session now that I’ve had some time to recover from the conference overload. Continue reading →