In the weeks following the Code4Lib Conference (held this year in Ann Arbor, Michigan) I have been reflecting on some themes from the conference that are relevant to our work here at the RAC. I've also been thinking about what it means to be part of a professional community over the long haul, and how my own relationship with Code4Lib has evolved over the years.
We've described in earlier posts changes we made to onsite digitization processes – including implementing ideas from the Toyota Production System and establishing file naming conventions – as a result of the service design research that Hannah conducted. However, it was also clear from the research that another major pain point in our digitization processes was handling the products of outsourced digitization work.
Typically, when we sent a group of materials to a vendor for digitization, we'd get the digitized files back at the end of the project on an external hard drive. We'd load these files onto our local storage, do some manual renaming and restructuring of files, and then push them through a semi-automated pipeline which assigns structured rights statements, submits files for ingest, and finally makes access versions available online via DIMES. Not only does this process require a lot of manual intervention, it's also incredibly tedious and error-prone, puts a lot of strain on our local storage, and ultimately means there's a long delay between the time when materials are digitized and when they are made available online.
Fortunately, we had an idea about how to improve this process through additional automation and a more scalable architecture.
This post in our series on reimagining digitization processes is all about digital surrogate file naming and organization. The topic warrants its own post because many other processes hinge on a file management approach that is standardized and actionable, allowing both humans and machines to do things with those files.
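To make "actionable for machines" concrete, here is a minimal sketch of what parsing a standardized surrogate filename might look like. The convention shown (`<collection>_<box>_<folder>_<sequence>.tif`) is a hypothetical example for illustration, not the RAC's actual naming scheme.

```python
import re

# Hypothetical convention: FA123_012_0034_001.tif encodes collection, box,
# folder, and image sequence. A real scheme would be defined by local policy.
FILENAME_PATTERN = re.compile(
    r"^(?P<collection>[A-Z]+\d+)_(?P<box>\d{3})_(?P<folder>\d{4})_(?P<sequence>\d{3})\.tiff?$"
)

def parse_surrogate_name(filename: str) -> dict:
    """Return the components encoded in a surrogate filename, or raise ValueError."""
    match = FILENAME_PATTERN.match(filename)
    if match is None:
        raise ValueError(f"{filename} does not follow the naming convention")
    return match.groupdict()
```

Because every component is recoverable from the filename alone, a script can validate vendor deliverables, sort files into the right directories, or link an image back to its finding aid entry without a human reading each name.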
The service design-based research and analysis Hannah described in an earlier post revealed two fundamental challenges to rethinking our digitization processes: nobody understood the entire process from end to end, and the process was not observable. It was clear that until we were able to address these fundamental problems, we would essentially be throwing darts while blindfolded and dizzy. We'd have no idea where to intervene or whether changes we'd implemented had resulted in things getting better.
As a result, we chose to start in a very specific way, by making the units of work that are part of this process (digitization transactions) smaller, and de-siloing the entire process so that archivists could process a transaction from start to finish. While this might seem a somewhat arbitrary place to start, these changes are based on ideas developed in the Toyota Production System (TPS), a predecessor of the Lean Manufacturing methodologies in wide use today.
In the next few weeks, we'll be sharing a series of posts focused on the Rockefeller Archive Center's archival records digitization services. In 2021-2022, our institution experienced a substantial increase in researcher digitization requests during reading room closures and diminished travel due to COVID-19, and we know we are not alone in experiencing shifting research patterns and request volume since the pandemic. To share how we are reevaluating our digitization processes to better meet the demands of remote research support, this post kicks off the series with a look at how we used service design methodologies to holistically analyze our digitization workflows, systems, equipment, documentation, supporting policies, and the points of communication between them, with the goal of identifying friction points and generating ideas for improvement.