1968: The Ford Foundation Gets a Computer

Today’s post comes from Rachel Wimpee, Historian and Project Director in our Research and Education division. Rachel uncovered this story while working with the Ford Foundation archives held at the RAC, and asked if it might be worth posting here. I only had to skim the text quickly to see its relevance for this blog.

A couple of broad themes jumped out at me as I read this piece. The first is the durability of modes of speaking and thinking about technology, which seem to persist despite (or perhaps because of) rapid technological change. Artificial intelligence and machine learning, both currently hot tech trends, figure heavily in this story from 1965. You’ll also notice efficiency invoked as the ultimate justification for technology, even in a situation where increasing a profit margin didn’t apply. The second theme is the socially constructed nature of technology, which this story illustrates especially well. As Rachel’s piece reveals, technology is the result of consensus and compromise, of negotiations mediated by money, practicality, and personality. Not only that, but technology and its underlying processes are often so intertwined as to be indistinguishable, and each is often blamed for things the other produces.

In many ways, this is a cautionary tale about what happens when we start with the shiny new thing rather than the needs of users (something that Evgeny Morozov and others have called “solutionism”). It’s not all bad, though. Rachel writes about the training plan the Ford Foundation implemented when staff began to use an IBM 360/30 mainframe for data processing in the late 1960s, as well as a regular process of evaluation and change implementation that lasted well into the 1970s. This reminded me of the importance of ongoing cycles of training and evaluation. New technologies usually require humans to learn new things, so a plan for both teaching and evaluating the effectiveness of that teaching should be part of any responsible technology change process. The D-Team is thinking a lot about training these days, particularly in the context of Project Electron, which will embed technologies into our workflows in a holistic way. Even though the project won’t be complete until the end of the year, we’re already scheduling training to amplify our colleagues’ existing skills and expertise so they can feel confident working with digital records.


Project Electron Update: Introducing Aurora 1.0

We are very pleased to announce the initial release of Aurora, an application that receives transfers of digital records, checks them for viruses, and validates their structure and contents. It provides a read-only interface for representatives of donor organizations to track transfers, so that they can follow their records as they move through the archival lifecycle. It also includes functionality for RAC staff to add or update organization accounts and the users associated with them, appraise incoming transfers, and initiate the accessioning process. Aurora is built on community-driven standards and specifications, and we have released it as open source software. This is a major milestone for Project Electron, and we are excited to share it with the world. Many thanks to our partners at Marist College IT and to the Ford Foundation for their generous support of the project.

Aurora homescreen

We will continue to improve Aurora as we test and integrate it with a chain of other archival management and digital preservation tools.
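The post doesn’t describe Aurora’s internals, but as a rough illustration of the kind of receive-and-validate step described above, here is a minimal Python sketch. It assumes, hypothetically, that transfers arrive packaged as BagIt bags and that ClamAV’s `clamscan` command is available; the paths and function names are illustrative, not Aurora’s actual API.

```python
import subprocess

import bagit  # Library of Congress bagit-python package


def check_transfer(transfer_path):
    """Virus-scan a transfer, then validate its structure as a BagIt bag.

    A rough sketch only; Aurora's own checks may differ.
    """
    # Virus check with ClamAV's command-line scanner. A non-zero exit code
    # means infected files were found (1) or the scan itself failed (2).
    scan = subprocess.run(["clamscan", "--recursive", "--infected", transfer_path])
    if scan.returncode != 0:
        return False, "virus scan failed or found infected files"

    # Structural validation: checks the bag declaration, manifests,
    # and the fixity values recorded for every payload file.
    try:
        bag = bagit.Bag(transfer_path)
        bag.validate()
    except bagit.BagError as err:
        return False, f"invalid bag: {err}"

    return True, "transfer passed virus check and bag validation"


if __name__ == "__main__":
    valid, message = check_transfer("/path/to/incoming/transfer")
    print(message)
```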

Read more about Project Electron here.


Digital Processing Project Update

We’re now a few months into our Digital Processing Project, which I wrote about back in April for the blog of SAA’s Electronic Records Section. By the end of this project, RAC processing archivists will have the tools, workflows, and competencies needed to process digital materials, which will allow us to preserve and provide access to unique born-digital content stored on obsolete and decaying media.


From AT to ArchivesSpace Part 2: Migrations and Error Reporting

Migration Testing, Data Quality Checks, and Troubleshooting Errors – 295 hours in 8 months

After finishing the initial data cleanup, we were ready to start testing our migration; the only way to identify major issues was to do a dry run. To set up for our initial testing, I took a MySQL dump of our AT database, loaded it into an empty AT instance, and then installed the AT Migration Plugin. Installing the plugin is as simple as placing scriptAT.zip in your base AT plugins folder, either on a server or on your local machine.
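For anyone repeating this setup, the steps above could be scripted along these lines. This is only a sketch, assuming MySQL’s command-line tools are on your path; the database names, user, and plugin directory below are placeholders, not our actual configuration.

```python
"""Sketch of the migration-test setup described above."""
import shutil
import subprocess
from pathlib import Path

DB_USER = "at_user"          # placeholder MySQL user
PROD_DB = "at_production"    # placeholder name of the live AT database
TEST_DB = "at_test"          # empty database behind the throwaway AT instance
DUMP_FILE = Path("at_dump.sql")
PLUGIN_ZIP = Path("scriptAT.zip")
AT_PLUGINS_DIR = Path("/path/to/archivists_toolkit/plugins")  # base AT plugins folder

# 1. Take a MySQL dump of the production AT database (prompts for a password).
with DUMP_FILE.open("w") as out:
    subprocess.run(["mysqldump", "-u", DB_USER, "-p", PROD_DB],
                   stdout=out, check=True)

# 2. Load the dump into the empty database backing the test AT instance.
with DUMP_FILE.open("r") as dump:
    subprocess.run(["mysql", "-u", DB_USER, "-p", TEST_DB],
                   stdin=dump, check=True)

# 3. Install the AT Migration Plugin by placing scriptAT.zip in the plugins folder.
shutil.copy(PLUGIN_ZIP, AT_PLUGINS_DIR / PLUGIN_ZIP.name)
```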

Our first migration test did not go smoothly.