Following our initial release of Aurora, we’ve continued to improve the application through usability testing with RAC staff members. This process has been essential in identifying usability issues and in shaping the fixes that address them. In this post, I want to share our approach to usability testing, a summary of our findings and fixes, and our next steps.
We are very pleased to announce the initial release of Aurora, an application to receive, virus check, and validate the structure and contents of digital records transfers. It provides a read-only interface for representatives of donor organizations to track transfers, so that they can follow their records as they move through the archival lifecycle. It also includes functionality for RAC staff to add or update organization accounts and users associated with them, appraise incoming transfers, and initiate the accessioning process. Aurora is built on community-driven standards and specifications, and we have released it as open source software. This is a major milestone for Project Electron, and we are excited to share it with the world. Many thanks to our partners at Marist College IT and to the Ford Foundation for their generous support of the project.
We will continue to improve Aurora as we test and integrate it with a chain of other archival management and digital preservation tools.
Read more about Project Electron here.
In her most recent blog post, Hannah wrote about our approach to Project Electron’s proposed systems integration architecture. One of our goals with Project Electron is to support the flow of data about digital materials between our systems and to get valuable information to researchers in new ways. Supporting data in motion is integral to Project Electron’s success, and while Hannah and Hillel have been hammering away at creating a comprehensive overview of the microservices architecture, I’ve been working with the entire Archive Center to develop a draft data model for discovery and display of born digital and digitized materials. If, as we’ve been thinking, Project Electron is about creating infrastructure to support data, a data model will in turn act as a blueprint for that infrastructure. Data models are tools we can use to communicate and define how we want data to move between systems, and we think understanding how our data will move throughout our systems to our researchers is vital to the success of the entire project. Continue reading
The underlying architecture that enables the movement of data between systems is a key aspect of Project Electron. In our project values, we talk about components as modular and generalizable, independently deployable and flexible enough to accommodate integrations with changing systems. The project value to “support data in motion” recognizes the strength of duplicate and distributed data, and articulates Project Electron’s approach to systems as points at which humans interact with or manage that data. All of this is to say that our strategic decisions about system architecture, particularly with regard to systems integration, are essential to the project’s success and sustainability. In this post, I’ll share some of our current thinking around the various systems integration models and our considerations in choosing an approach that will enable integrations of archival applications. Continue reading
In the last Project Electron update, I discussed the benefits of user interfaces as communication tools during development. This month I want to share more about the archival functions that those user interfaces enable in the application, which has been the focus of our recent development work. Specifically, I will share how the application enables appraisal and accessioning functions, as well as managing structured rights statements.
As development of the Project Electron transfer application has continued over the past month, one important aspect of the work has been the creation of user interfaces based on the wireframes we designed during the planning process. In this month’s update, I will discuss how both wireframes and the resulting user interfaces (UIs) of the application are important communication tools, both internally for the development team and externally with user groups including Rockefeller Archive Center staff and donors. Continue reading
As our work on the transfer application portion of Project Electron nears its completion, I’ve started to think more seriously about modeling the data we are bringing into our systems. We’ve actually been prepping for this stage of the project for months, going all the way back to the Data Model Bibliography I put together in February 2017. Now that the D-Team is in the thick of data modeling, we thought it was time to bring the rest of the Archive Center on board as well. I’m just a single archivist, and even though I’ve done a lot of reading about data models, I’m no expert on the entirety of our collection or its materials. We knew that once we drafted an initial data model, we’d need more eyes on it to make sure we weren’t forgetting an important component of our collections. Continue reading
It has been a busy month for Project Electron as we near the end of the first phase of development, which focused on building an application to enable the secure transfer and validation of digital records and their metadata according to the Rockefeller Archive Center BagIt specification. In addition to the transfer application development, much of our work this month has been about planning for the next phase of the project. In this post I will share a few activities and strategies we undertook to prepare for development and to keep users at the center of the design process. Continue reading
This month has been all about developing the Project Electron transfer application. The work is based on our defined specifications and the development decisions we made last month with our Marist College partners at the hackathon. We are really excited about testing transfers in the coming month.
In this post I am going to briefly discuss Gherkin, which, in addition to being a delightful little cucumber, is a language used to define software requirements in order to document and test a system’s behavior as part of Behavior Driven Development (BDD). We have been using Gherkin to write Quality Assurance (QA) tests for the functions of our Project Electron transfer application. The language is human-readable, so it can enable communication between teams working in different domains across a project.
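To give a sense of what this looks like, here is a hypothetical feature file in the Gherkin style. The feature and step names below are illustrative only, not drawn from Aurora’s actual test suite:

```gherkin
Feature: Transfer validation
  As an archivist, I want invalid transfers to be rejected
  so that only well-formed bags enter the archival workflow.

  Scenario: A transfer fails virus checking
    Given a donor organization has uploaded a transfer
    When the transfer contains a file flagged by the virus scanner
    Then the transfer is rejected
    And the donor can see an error message for that transfer
```

Because each `Given`/`When`/`Then` step is plain English backed by a step definition in code, archivists and developers can review the same file and agree on what the application should do before any tests run.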
We kicked off this past month with a hackathon, hosted by our Marist College partners, to plan and start developing the part of Project Electron that enables the transfer of digital records from donor/depositor organizations to the RAC over a secure network connection. We worked with the Marist College team, including Marist students, to diagram the transfer structure and dependencies, building from the transfer specifications that we released in June and discussed in our last blog update. These specify the metadata and structural requirements for transfer and provide a bag profile to validate bags from donors. Additionally, we created wireframes and started building out the user interfaces (UIs) to view and track transfer information, view error messages, and manage user and organizational accounts. Continue reading
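As a rough illustration of one part of what bag validation involves, here is a minimal, standalone Python sketch that checks a bag’s payload files against its `manifest-sha256.txt`. This is not Aurora’s code, and the function names are our own; in practice a library such as the Library of Congress’s bagit-python handles this, along with checks for required tag files and bag-info metadata.

```python
import hashlib
from pathlib import Path

def checksum(path, algorithm="sha256", chunk_size=65536):
    """Compute a file's hex digest, reading in chunks to handle large files."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def validate_bag(bag_dir):
    """Check every entry in manifest-sha256.txt against the bag's payload.

    Returns a list of error strings; an empty list means every
    manifested file is present and its checksum matches.
    """
    bag_dir = Path(bag_dir)
    manifest = bag_dir / "manifest-sha256.txt"
    if not manifest.exists():
        return [f"missing {manifest.name}"]
    errors = []
    for line in manifest.read_text().splitlines():
        if not line.strip():
            continue
        # Each manifest line is "<hex digest> <relative path>"
        expected, relpath = line.split(maxsplit=1)
        target = bag_dir / relpath
        if not target.exists():
            errors.append(f"{relpath}: missing from payload")
        elif checksum(target) != expected:
            errors.append(f"{relpath}: checksum mismatch")
    return errors
```

A real validator would also confirm that every payload file appears in a manifest (to catch unexpected additions), which is one of the guarantees a BagIt profile lets a receiving archive enforce.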