Over the past year, the RAC has been taking steps toward preserving the digital media found within our collections. We have established new policies and no longer separate digital media from their parent collections at accessioning simply because of their format. In the near future, we plan to institute a new workflow in which processing archivists inventory, image, and virus-check these materials during processing, recording their progress in the Digital Media Log. We are currently at the documentation stage of this project, working to develop and make available imaging workflows that encourage a comprehensive understanding of the transfer process.
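To make the workflow concrete, here is a minimal sketch of what recording a step in a log like our Digital Media Log might look like as an append-only CSV. The column set and function name are my own assumptions for illustration, not the RAC's actual log schema.

```python
# Hypothetical sketch: append one row to a CSV-based digital media log
# each time a workflow step (inventoried, imaged, virus-checked) is
# completed. Column names are illustrative assumptions.
import csv
from datetime import date

LOG_FIELDS = ["media_id", "collection", "step", "date", "archivist"]

def log_step(log_path, media_id, collection, step, archivist):
    """Record that one workflow step was completed for a piece of media."""
    with open(log_path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if f.tell() == 0:  # empty file: write the header row first
            writer.writeheader()
        writer.writerow({
            "media_id": media_id,
            "collection": collection,
            "step": step,
            "date": date.today().isoformat(),
            "archivist": archivist,
        })
```

A log structured this way stays human-readable in a spreadsheet while remaining easy to query programmatically as the backlog is worked through.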
In May, Marissa Vassari and I presented a poster at the 2017 Annual Meeting of The American Institute for Conservation of Historic and Artistic Works (AIC) in Chicago. Check back soon for a posting about our poster experience.
In this post, I want to discuss why I flew out to the AIC annual conference early to attend a workshop on digital file structure entitled “Examining the Composition and Structure of Digital Collection Objects: Strategies and Guidance for Ongoing Management and Preservation.” Although it may have stood out as an oddity against a schedule full of photograph conservation, collections care, and environmental monitoring lectures, the workshop helped inform how I am thinking about my larger role in preservation at the RAC.
At the recent Society of American Archivists annual conference, I was fortunate enough to present as part of a panel discussing the application of digital forensics in an archival setting. I touched on the work I’ve been doing with the D-Recs committee and on developing the forensics workflows that I’ve discussed previously. My co-presenters, Cal Lee, Don Mennerich, and Christie Peterson, discussed different aspects related to digital forensics in archives, from learning forensics techniques to an overview of current research in the field. I highly recommend checking out the audio for the session, which is available on our shared drive.
As a result of my presentation, I was asked to do an interview with Trevor Owens for The Signal, the Library of Congress blog on digital preservation. The interview went live last week and touches on some points I made during my presentation as well as current and future D-Team projects. I hope you enjoy it!
One of the first sessions I attended at this year’s SAA annual meeting was “Getting Things Done with Born-Digital Collections,” and it stuck with me as a great entry-level review of how to deal with born-digital materials in a variety of institutional environments. It also introduced tools to help archivists jump into their work, while providing advice for those looking to implement or expand born-digital programs. Many of the following tools and concepts may seem familiar from the work we do here at the RAC.
The panel featured five presenters: Gloria Gonzalez, Jason Evans Groth, Ashley Howdeshell, Dan Noonan, and Lauren Sorensen. While each covered slightly different experiences, there was one universal takeaway: preserving digital collections needs to be an institutional endeavor, and in many cases that endeavor is a constant work-in-progress, from tools to processes.
We prepared a series of screencasts for a recent donor meeting. These screencasts give a nice visual overview of how we use three different systems (Archivematica, ArchivesSpace, and DIMES) and how they connect to each other.
The first screencast reviews our Archivematica ingest process and covers how we link to metadata in the Archivists’ Toolkit. We’ll be implementing this functionality using the ArchivesSpace API in the near future.
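As a rough illustration of what working against the ArchivesSpace API involves, here is a minimal sketch: authenticate to the backend to obtain a session token, then build a minimal digital object record to post. The host URL and identifiers are placeholders, not our actual configuration, and this is a sketch of the general approach rather than our eventual implementation.

```python
# Hedged sketch of ArchivesSpace backend API usage; host, port, and
# identifiers below are illustrative assumptions.
import json
import urllib.request

ASPACE_HOST = "http://localhost:8089"  # default backend port (assumption)

def login(username, password):
    """POST to the login endpoint and return a session token,
    which subsequent requests pass in an X-ArchivesSpace-Session header."""
    url = f"{ASPACE_HOST}/users/{username}/login?password={password}"
    with urllib.request.urlopen(url, data=b"") as resp:
        return json.load(resp)["session"]

def digital_object_payload(title, identifier, file_uri):
    """Build a minimal digital object record suitable for POSTing
    to /repositories/:repo_id/digital_objects."""
    return {
        "jsonmodel_type": "digital_object",
        "title": title,
        "digital_object_id": identifier,
        "file_versions": [{"file_uri": file_uri}],
    }
```

Linking the resulting digital object to archival description is then a matter of adding an instance pointing at its URI on the relevant archival object record.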
In November 2013, the first phase of the Legacy Digital Media Survey began with an examination of collection information for legacy collections at the Rockefeller Archive Center.
The aim of the Legacy Digital Media Survey is to gain intellectual and physical control over the digital media in our collections. The survey came about because of a backlog of unprocessed digital media, of unknown extent, accumulated over forty years of collection building at the RAC. We are taking initial steps to manage this backlog of born-digital content through the identification, separation, and accessibility of these at-risk items.
Beginning in 2011, the Rockefeller Archive Center accepted the records of the Ford Foundation, which included records pertaining to Unpublished Reports and Grants from the Foundation’s inception to the present. Along with the materials, the Ford Foundation provided us with two spreadsheets of metadata describing the Unpublished Reports and Grants files. The Grants spreadsheet alone includes 54,644 rows and 34 columns of information, ranging from subject terms to restriction information. Much of this data was “dirty,” however: many subject terms did not match LCSH vocabulary, dates were not formatted for import into Archivists’ Toolkit, and there were many other issues. Despite this, the metadata opened new avenues of access and description, and wrangling and refining it for import into Archivists’ Toolkit and DIMES would help researchers from all over the world discover the exact items they are looking for. In November 2013, members of the Digital Projects team met with representatives from Processing and Reference with the express goal of transforming this metadata into a machine-readable format so that the RAC may provide it in a searchable form online.
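The date cleanup described above can be sketched in a few lines. This is an illustrative example only: the input formats and the target format are my assumptions, not the actual Ford Foundation spreadsheet schema or the exact format Archivists’ Toolkit requires.

```python
# Hedged sketch of normalizing inconsistently formatted spreadsheet
# dates to a single structured form (ISO 8601 here, as an assumption).
from datetime import datetime

# Formats observed in a hypothetical dirty column, tried in order.
INPUT_FORMATS = ["%m/%d/%Y", "%B %d, %Y", "%Y-%m-%d"]

def normalize_date(raw):
    """Return the date as YYYY-MM-DD, or None when the value cannot
    be parsed and needs manual review (e.g. 'ca. 1967')."""
    raw = raw.strip()
    for fmt in INPUT_FORMATS:
        try:
            return datetime.strptime(raw, fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    return None
```

Running a function like this over a 54,644-row column quickly separates the values a script can fix from the ones that genuinely need an archivist’s judgment.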
About a year ago, I started reviewing ways to secure our digital assets against potential catastrophic losses due to disaster (natural or otherwise), technical error, hardware failure, and system attacks. I wanted a solution that offered geographically dispersed server space to minimize the risk of loss from disaster, regular fixity checks to help uncover potential hardware issues on those servers, and separate administration of those servers from our own systems to help mitigate technical error and system attacks.
After a review of distributed digital preservation services (as of spring 2013), we selected MetaArchive. MetaArchive is a digital preservation network created and hosted by memory organizations such as libraries and archives. It uses LOCKSS software, which was developed by Stanford University and is used by about 12 preservation networks worldwide. In a LOCKSS-based system, materials are ingested and stored on servers hosted by network members in geographically disparate locations, and the fixity of the materials is checked at regular intervals. This system helps prevent data loss during natural disasters or other emergencies, or due to malicious or negligent actors. No single administrator has access to all copies of the data or can tamper with it without detection. A step-by-step review of how it works can be found here. Current MetaArchive member institutions span 13 states and 4 countries, and include many universities, the Library of Congress, and a few smaller libraries.
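The fixity checking that LOCKSS automates across the network boils down to a simple idea, sketched below: record a checksum when material is ingested, then periodically recompute it and compare. This is an illustration of the concept, not MetaArchive's actual implementation.

```python
# Minimal sketch of a fixity check: recompute a file's SHA-256
# checksum and compare it to the checksum of record.
import hashlib

def sha256_of(path):
    """Stream the file in 1 MiB chunks so large disk images
    never need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def fixity_ok(path, recorded_checksum):
    """True when the stored copy still matches its checksum of record;
    False signals corruption, tampering, or hardware trouble."""
    return sha256_of(path) == recorded_checksum
```

In a distributed network the same comparison runs across every member's copy, which is what lets the system detect a bad copy on one server and repair it from the others.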
Last week I spoke at the NYART event, Preserving and Archiving Electronically Generated Materials, which was sponsored by the Leon Levy Foundation. My slides are attached below. Slides from other presenters will be made available on the event website, and you can find the event schedule here.