I’m excited to announce that the Rockefeller Archive Center’s new Digital Media Log is live!
This week I had the opportunity to attend the Born Digital Archiving eXchange hosted by Stanford University. It was a really great unconference that brought together digital archivists, curators, and others working to preserve and provide access to born-digital archives.
It’s been a busy couple of weeks for conferences! On Friday, Bonnie and I attended a Born-Digital Workflows CURATEcamp, held at the Brooklyn Historical Society. We gave a brief presentation on our workflows for arranging and describing born-digital materials, and also learned a lot from other attendees.
You might remember that earlier this year I wrote a post about Metadata Cleanup and Ford Foundation Grants that gave a very basic overview of how I went about reconciling thousands of subject terms against the Library of Congress. This reconciliation was essential in helping us gain control over data that we did not create, but that we identified as potentially extremely valuable to researchers. This post gives an in-depth and updated account of how I cobbled together a (mostly) automatic way to check large numbers of topical terms against the Library of Congress. It still requires some hands-on work, and quality checking is a must, but it reduced a job that would otherwise have taken hundreds of hours to a small fraction of that time.
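To give a flavor of the kind of lookup involved, here is a minimal sketch of checking a single term against LCSH using id.loc.gov’s known-label retrieval service, which answers an exact label match with a redirect carrying the authority URI in an `X-URI` header. The function names are my own, and this is an illustration of the general approach rather than the exact script I used:

```python
# Sketch: reconcile a topical term against LCSH via id.loc.gov's
# known-label lookup. An exact match returns a redirect whose headers
# include the matched authority URI (X-URI); a 404 means no exact match.
import urllib.error
import urllib.parse
import urllib.request

LABEL_ENDPOINT = "https://id.loc.gov/authorities/subjects/label/"

def label_url(term):
    """Build the known-label lookup URL for a subject term."""
    return LABEL_ENDPOINT + urllib.parse.quote(term.strip())

class _NoRedirect(urllib.request.HTTPRedirectHandler):
    """Stop urllib from following the redirect so we can read its headers."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

def reconcile(term):
    """Return the LCSH authority URI for an exact label match, or None."""
    opener = urllib.request.build_opener(_NoRedirect)
    try:
        resp = opener.open(label_url(term))
        return resp.headers.get("X-URI")
    except urllib.error.HTTPError as err:
        if err.code in (302, 303):
            # Exact match: the redirect headers name the authority record.
            return err.headers.get("X-URI")
        return None  # no exact match; flag the term for manual review
```

Looping a function like this over a spreadsheet export of subject terms, then eyeballing the misses, is the shape of the “mostly automatic, still needs quality checking” workflow described above.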
We prepared a series of screencasts for a recent donor meeting. These screencasts give a really nice visual overview of how we use three different systems (Archivematica, ArchivesSpace, and DIMES) and how they connect to each other.
The first screencast reviews our Archivematica ingest process, and covers how we link to metadata in the Archivists’ Toolkit. We’ll be implementing this functionality using the ArchivesSpace API in the near future.
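For the curious, linking via the ArchivesSpace API might look something like the sketch below: authenticate for a session token, then attach a digital object as an instance on an archival object. The host, credentials, and record URIs are placeholders, and the helper names are my own; only the endpoint shapes follow the published ArchivesSpace REST API. This is a hypothetical illustration, not our implementation:

```python
# Hypothetical sketch of linking a digital object to an archival object
# through the ArchivesSpace backend API. Host, user, and ids are invented.
import json
import urllib.parse
import urllib.request

HOST = "http://localhost:8089"  # backend API port in a default install

def login(user, password):
    """Return a session token from POST /users/:user/login."""
    data = urllib.parse.urlencode({"password": password}).encode()
    with urllib.request.urlopen(f"{HOST}/users/{user}/login", data) as resp:
        return json.load(resp)["session"]

def instance_stub(digital_object_uri):
    """Build the instance record that points at a digital object."""
    return {"instance_type": "digital_object",
            "digital_object": {"ref": digital_object_uri}}

def link_digital_object(session, archival_object_uri, digital_object_uri):
    """Fetch the archival object, append the instance, and post it back."""
    headers = {"X-ArchivesSpace-Session": session}
    req = urllib.request.Request(HOST + archival_object_uri, headers=headers)
    record = json.load(urllib.request.urlopen(req))
    record.setdefault("instances", []).append(instance_stub(digital_object_uri))
    post = urllib.request.Request(
        HOST + archival_object_uri,
        data=json.dumps(record).encode(),
        headers={**headers, "Content-Type": "application/json"},
    )
    return json.load(urllib.request.urlopen(post))
```

The read-modify-write pattern (GET the full record, change it, POST it back) is how ArchivesSpace updates generally work, since the API replaces whole records rather than patching fields.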
This past Wednesday, we gathered all the archivists who do processing into the D-Lab and wowed them with a demo of the Forensic Toolkit (FTK) software.
I started with an overview of our processes for separating removable digital media from collections:
Then I gave an overview of the proposed workflow for working with that media:
If, like me, you missed out on yesterday’s webinar from OCLC Research titled “Achieving Thresholds for Discovery: Addressing Issues with EAD to Increase Discovery and Access,” you can now view a recording of the presentation. It’s worth your time to listen to Merrilee Proffitt (OCLC Research) talk about her recent article on EAD tag analysis in the Code4Lib Journal, and also to listen to Dan Santamaria (Princeton University) talk about the work of his institution’s Archival Description Working Group in improving their archival description as well as their discovery system for archival materials. A few months back, I wrote about a presentation that Dan and others from Princeton gave at SAA this past August, and found that this webinar nicely complemented that earlier presentation.
One of the sessions I really enjoyed at this year’s edUI conference (for a broad recap of the conference, see my earlier post) was Designing for Information Objects, presented by Duane Degler (Design for Context) and Neal Johnson (National Gallery of Art). Although the presentation took place on the afternoon of the last day of the conference, by which time my brain was already past its saturation point, it was immediately apparent to me that there were some pretty important ideas in the presentation that deserved some detailed attention. In part, I wanted to write this post as a way to revisit that session now that I’ve had some time to recover from the conference overload.