In our mission statement, the D-Team talks about “providing technical leadership and expertise,” which is accurate but not really the most conversational way of describing what we do. Lately I’ve been saying the D-Team’s role here at the RAC is “helping our staff and researchers have a healthy relationship with technology.” I thought it was time to dig into what that means a little. In thinking about this, I’ve realized that this conceptualization of our work has been strongly informed by some recent reading as well as conversations I’ve had with colleagues.

I just finished Ursula Franklin’s book The Real World of Technology - easily the most provocative thing I’ve read in a couple of years - which combines several lectures she gave in 1989 with more recent writings. The older lectures hold up incredibly well, in large part because she focuses on outlining the fundamental characteristics of technology and the ways in which those characteristics shape the relationship between technology and humans.

Although the book is packed with prescient insights, the thing that resonates most strongly with me is her framing of technologies as either holistic or prescriptive: in other words, they either encourage comprehensive human understanding of processes, or they lead to increasing specialization and compartmentalization of knowledge and labor. As you might suspect, her contention is that technologies have become increasingly prescriptive, seeking to confine human labor to ever smaller and more specialized spheres. This is a damning counter-narrative to the idea of technology as a liberating force, and even to the idea of technology as a neutral tool that can be bent to whatever ends are desired.

The other idea from this book that really struck me was the discussion of designing for failure. In Franklin’s view, most technological systems are built with an eye towards maximizing gain. This means that failures and disasters are often unaccounted for and, when they inevitably occur, they amplify rather than diminish human devastation. Instead, she argues, we should be building systems that seek to minimize disaster. While it’s possible to see this in terms of the classic optimist/pessimist divide, I think what she’s pointing to, once more, is the relationship that technology has to human life. When things break, when the unexpected happens, when natural disasters strike, will the systems we’ve built help us, or will they make our lives more difficult, dangerous and deadly?

Franklin’s framing effectively moves the boundaries of the conversation away from superficial techno-solutionism, which focuses on the ways technology can save or change the world, and instead forces us to ask harder (and more realistic) questions about how technology can be employed without actively damaging human labor, intelligence and life.

I’ve been thinking about these ideas in connection with some conversations that came out of the International Image Interoperability Framework (IIIF) conference, which touched on the interaction between technology and society. IIIF is a loose international consortium of people who came together to build a common framework and platform to, as the name suggests, encourage interoperable sharing of images across cultural heritage organizations. Unsurprisingly, this effort has run into a whole lot of social and cultural issues around sharing, intellectual property and trust, to name just a few. There were lively discussions about which of these problems are appropriate to solve with technology, and which need to be handled by other means. There aren’t clear lines, and in many cases the solution likely lies in both worlds at once, the social and the technical.
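To make “interoperable” a little more concrete, here’s a minimal sketch of the kind of request the IIIF Image API standardizes. The server URL and image identifier below are placeholders rather than real endpoints, and the exact response fields can vary by API version.

```python
# A rough sketch of IIIF-style interoperability: any compliant image server
# answers the same kinds of requests, so a client doesn't need to know
# anything institution-specific. The base URL and identifier are hypothetical.
import json
from urllib.request import urlopen

IIIF_BASE = "https://images.example.org/iiif"  # placeholder IIIF Image API endpoint
IDENTIFIER = "photo-123"                       # placeholder image identifier

# Every compliant server describes an image the same way via info.json,
# including its dimensions and the sizes and formats it can deliver.
with urlopen(f"{IIIF_BASE}/{IDENTIFIER}/info.json") as response:
    info = json.load(response)
print(info.get("width"), info.get("height"))

# Image requests follow a shared URL grammar:
#   {base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
# so the same client code can pull a derivative from any participating institution.
derivative_url = f"{IIIF_BASE}/{IDENTIFIER}/full/max/0/default.jpg"
print(derivative_url)
```

The point isn’t the code itself, but that the hard parts - who shares what, under which terms, and with whose trust - don’t live in that URL grammar at all.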

How can we architect systems that support the “better angels” of human nature (sharing of cultural knowledge, respect for difference, inclusivity) while accounting for bad actors? How do we automate processes and protect the labor and livelihoods of information workers at the same time? Can these divergent sets of goals even be reconciled? I think the answer, as Franklin would suggest, is that it’s impossible to separate technology from humans, and that a change in technology not only precipitates changes in society, it is a change in society (and vice versa).

“Technology, like democracy, includes ideas and practices; it includes myths and various models of reality. And like democracy, technology changes the social and individual relationships between us. It has forced us to examine and redefine our notions of power and accountability.”

I began this post by talking about the D-Team’s work as helping to establish a healthy relationship between technology and RAC staff and researchers. In order to do that, I think we have to have a realistic view of technology. We can’t be scared of it, but we also can’t expect it to magically solve our problems or obviate the need for human labor. A realistic view means understanding that technology won’t work perfectly all the time, but also that when things start to go wrong we have some idea of how to fix them, or at least know where the power button is so we can turn it off and stop things from getting any worse.

How do we help to build these kinds of relationships? I think we do that first and foremost by modeling what it looks like to engage with technology in a constructive and productive way. We can work one-on-one or in small peer groups to help develop skills and build confidence. And we can think carefully about the systems we implement and, perhaps more importantly, how we implement them. What values do they support? What do they do well, and what are they not capable of? What are they doing for and to humans and human relationships?