Hillel, Ima, and Patrick all attended this year's annual Code4Lib conference in Princeton, NJ. As always, the conference was well-programmed, with many thoughtful presenters and presentations. Each of the authors came away with different insights about their work, current trends in the field, and new and thoughtful ways to engage with their community of practice. Their reflections below offer a brief insight into the experience.

Hillel

As you might expect, almost all of the talks at this year's Code4Lib contended with AI. A wide range of perspectives was present, including folks who are deeply opposed to AI as well as those who are looking to operationalize it in search, transcription, and entity resolution. Interestingly, although AI felt omnipresent, it didn't show up directly in any talk titles or abstracts, and folks chose to express their opinions about AI implicitly or in passing. As I observed after last year's Code4Lib conference, it's a topic that folks clearly have strong feelings about but are also unwilling to make binding public proclamations about.

Another trend I noticed was the large number of talks about system migrations or, more broadly, managing technical change. Although many of those talks were about things like accessibility, project management, or internal collaboration, the migration process provided a motivating event and narrative backdrop for the presentation. For me this underlined just how much of this community's work is shaped by the system choices available to us, and how much of it is about iterating on those tools and solutions to pay down technical debt and meet user needs.

I’ve written a lot about maintenance and innovation, which are often conceptualized as opposite and competing impulses. However, as many folks who have written and thought extensively about these topics have noted, maintenance and innovation are deeply connected, and influence each other in direct and indirect ways. This interdependence was reflected in both the opening and closing keynotes. Nikko Stevens’ closing keynote asked us to imagine the world we want to see, and to build technologies for that world which does not yet exist. Ruth Kitchin Tillman’s opening keynote posited that the way to build these futures is through small interventions. Demonstrating to people that “this isn’t always how it has to be” through “scaffolding technologies” (a concept that will stick with me for a while), attentiveness, and shared problem solving can be incredibly transformative for folks who use systems operationally and have often learned to adapt to those systems’ shortcomings. This reminded me a lot of the way the Digital Strategies Team tries to use technology as an empowering force, which starts by recognizing that, by default, it’s employed in exactly the opposite way in many of our lives.

Patrick

We’ve been talking a lot in the Digital Strategies Team about our values (what informs them, how we apply them to our work, and where we could stand to learn more), and that was at the forefront of my mind going into this conference. I think that’s why I was thinking about ethical applications of technology and how important believing in your work is for everyday happiness. First, I found Alex Guo’s hilarious lightning talk about their experiences in big tech extremely helpful in reminding me that how we do our work, and believing in what we’re doing, are integral to our everyday lives.

I was also linked to Princeton University Library’s framework for assessing technology values during the conference. I really liked this document because it provides concrete actions one can take when evaluating technologies. It is by no means exhaustive, but it’s a great starting point for anyone thinking about value-driven systems work.

Additionally, Nikko Stevens’ closing keynote, “Resisting Minimum Viable Futures,” was a stark reminder of how important it is to be intentional with our technology decisions, especially in our current global climate. Their talk posited that it’s rarely technology alone that is the problem; it is who owns the technology and how they use it. For instance, cloud computing is a valuable and useful tool, but AWS often uses its power and tools for oppression. Their core argument is that “understanding the history and underlying logics of technical practices can give us discernment into ways we can use technical tools for abolitionist ends.” We not only have an obligation to intentionality, but we also have an obligation to imagine restorative uses of oppressive technologies. It felt both freeing and like a heavy responsibility at the same time.

Finally, on a smaller note, I really enjoyed two breakout sessions I went to: Working on a small tech team, and Documentation. The two connected rather nicely, in talking about how important it is to set up your processes and workflows and document them so someone can come in and pick up the work easily. I wrote about it last year, but this got me thinking again of Bess Sadler’s “Compassion of DevOps” talk, and how our work is really about prioritizing empathy towards the humans who work with our systems. We choose tools and workflows that make lives easier, and there’s not a ton of difference in goals between a team of four at the RAC and a team of four at Princeton University Library with twenty other developers, sysadmins, and DevOps people.

Ima

As Hillel mentioned, there was a lot of discussion around the use of AI in many of the presentations. Many, such as Eric Hellman’s talk “Making an accessible Winnie-the-Pooh,” focused on implementing generative AI to facilitate accessibility. The accessibility argument is the most compelling one I have heard in favor of using AI, and the presentations took a measured tone regarding what AI tools can do well and where they are still lacking. A common refrain at this conference was that “bad alt-text is better than no alt-text.”

What surprised me about the conference was that some of the most top-of-mind ethical considerations of using these tools were not a significant aspect of the presentations I saw. Environmental concerns related to digital archives are something I am interested in learning more about, especially in balance with other priorities such as providing wider access to collections. There are no easy or simple answers to this conflict, but I would be interested to see how institutions are addressing ethical issues around the use of generative AI.

Something I was pleased and surprised to see at such a technically focused library and archives conference was how people-focused the talks were, both in discussing how we do our work and in the prevalence of positionality statements at the top of presentations. Most of the presentations I saw established the speakers’ identities, professional and personal, in some way, giving context on their backgrounds and how they approach topics such as project management, changing work routines for deeper focus, and managing staff changes.

The presentation that stood out to me the most for interweaving the presenter’s identity as the lens through which they talked about their work was the closing keynote given by Nikko Stevens. They drew on their background and experiences in discussing how the containers information comes in contribute to the way that information is perceived. Focusing on the structure of data models, they demonstrated that how pieces of information are or are not related to each other in a database can define the boundaries of how that information can be shared and the connections that can be made. When we know something about how a speaker relates to the topic they present, it helps others see not just the connections being made but also where connections may be missing.