
The API at the center of the museum

Extract from "Outline map of New York Harbor & vicinity : showing main tidal flow, sewer outlets, shellfish beds & analysis points", New York Bay Pollution Commission, 1905. From the New York Public Library.

Beneath our cities lie vast, labyrinthine sewer systems. These have been key infrastructures, allowing our cities to grow larger and denser while staying healthy. Yet, save for a passing interest in Urban Exploration (UrbEx), we barely think of them as 'beautifully designed systems'. In their time, the original sewer systems were critical long-term projects that greatly improved cities and the societies they supported.

In some ways, what the Labs has been working on over the past few years is a similar infrastructure and engineering project, one we hope will be transformative and enabling for our institution as a whole. As SFMOMA's recent post, which included an interview with Labs' Head of Engineering, Aaron Cope, makes clear, our API, and the collection site built upon it, are carriers for a new type of institutional philosophy.

Underneath all our new shiny digital experiences – the Pen, the Immersion Room, and the rest – as well as the refreshed 'services layer' of ticketing, Pen checkouts, and object label management, lies our API. There's no readymade headline or Webby award awaiting a beautifully designed API – and probably there shouldn't be. These things should just work and deliver the benefits to their hosts that they promised.

So why would a museum burden itself with making an API to underpin all its interactive experiences – not just online but in-gallery too?

It's about sustainability. Sustainability of content, sustainability of the experiences themselves, and also, importantly, sustainability of 'process': a new process whereby ideas can be tested and prototyped as 'actual things' written in code. In short, as Larry Wall said, it's about making "easy things easy and hard things possible".

The overhead it creates in the short term is more than made up for in future savings. Where it might seem prudent to take shortcuts – a separate database here, a black-box content library there – the fallout would be static future experiences that couldn't be expanded upon or, critically, rebuilt and redesigned by internal staff.

Back at my former museum, then-Powerhouse web manager Luke Dearnley wrote an important paper in 2011 on the reasons to make your API central to your museum. There the API was used internally to do everything relating to the collection online, but it had only minor impact on the exhibition floor. Now, at Cooper Hewitt, the API and exhibition galleries are tightly intertwined. As a result there's a definite 'API tax' imposed on our exhibition media partners – Local Projects and Tellart especially – but we believe it is worth it.

So here’s a very high level view of ‘the stack’ drawn by Labs’ Media Technologist, Katie.


At the bottom of the pyramid are the two 'sources of truth'. The first is the collection management system, into which is fed curatorial knowledge, provenance research, object labels and interpretation, the public locations of objects in the galleries, and all the digitised media associated with objects, donors and people connected to the collection. The second, newer fundamental element is visitor data, stored securely in Tessitura, which operates as the museum's ticketing system and, where the API needs it, as an identity provider to allow for personalisation.

The next layer up is the API which operates as a transport between the web and both the collection and Tessitura. It also enables a set of other functions – data cleanup and programmatic enhancement.

Most regular readers have already seen the API – apart from TMS, the Collection Management System, it is the oldest piece of the pyramid. It went live shortly after the first iteration of the new collections website in 2012. But since then it has been growing with new methods added regularly. It now contains not only methods for collection access but also user authentication and account structures, and anonymised event logs. The latter of these opens up all manner of data visualization opportunities for artists and researchers down the track.

In the web layer sits the public website, and also, for internal museum users, a set of small web applications. These are built upon the API to assist with object label generation, metadata enhancement, and reporting; there's even an aptly-named 'holodeck' for simulating all manner of Pen behaviours in the galleries.

Above this are the two public-facing gallery layers: the applications and interfaces designed and built on top of the API by Local Projects, along with the Pen's ecosystem of hardware registration devices designed by Tellart; and then the Pen itself, which operates as a simple user interface in its own right.

What is exciting is that all the API functionality that has been exposed to Local Projects and Tellart to build our visitor experience can also progressively be opened up to others to build upon.

Late last year students in the Interaction Design class at NYU’s ITP program spent their semester building a range of weird and wonderful applications, games and websites on top of the basic API. That same class (and the interested public in general) will have access to far more powerful functionality and features once Cooper Hewitt opens in December.

The API is here for you to use.
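As a taste, a request against the collection API can be built like this. This is a sketch: the endpoint follows the museum's Flickr-style REST pattern, but the method name and token below are illustrative placeholders, so check the API documentation for the real method list.

```python
from urllib.parse import urlencode

API_ENDPOINT = "https://api.collection.cooperhewitt.org/rest/"

def build_request(method, access_token, **kwargs):
    # All API calls are GET requests against one endpoint, with the
    # method name and an access token passed as query parameters.
    params = {"method": method, "access_token": access_token}
    params.update(kwargs)
    return API_ENDPOINT + "?" + urlencode(params)

# Hypothetical call: fetch details for one object by its id.
url = build_request("cooperhewitt.objects.getInfo",
                    "YOUR_TOKEN", object_id="18704235")
```

Fetching that URL (with a real token and method) returns JSON you can build on, exactly as Local Projects and Tellart do in the galleries.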

Rethinking Search on the Collections Site

One of my longer-term projects since joining the museum has been rethinking how the search feature functions on the collections website. As we get closer to re-opening the museum with a suite of new technologies, our work in collaboration with Local Projects has prompted us to take a close look at the moving pieces that comprise the backend of our collections site and API. Search, naturally, forms a large piece of that. Last week, after a few weeks of research and experimentation, I pushed the first iteration live. In this post, I’ll share some of the thoughts and challenges that informed our changes.

First, a glossary of terms for readers who (like me, a month ago) have little-to-no experience with the inner-workings of a search engine:

  • Platform: The software that actually does the searching. The general process is that we feed data to the platform (see “index”), and then we ask it for results matching a certain set of parameters (see “query”). Everything else is handled by the platform itself. Part of what I’ll get into below involves our migration from one platform, Apache Solr, to another, Elasticsearch.
  • Index: An index is the database that the search platform performs searches against. The search index is a lot like the primary database (it could probably fill that role if it had to), but it adds extra functionality to facilitate quick and accurate retrieval of search results.
  • Query: The rules to follow in selecting things that are appropriate to provide as search results. For users, the query could be something like “red concert poster,” but we have to translate that into something that the search provider will understand before results can be retrieved. Search providers give us a lot of different ways we can query things (ranges of a number, geographic distance or word matching to name a few), and a challenge for us as interface designers is to decide how transparent we want to make that translation. Queries also allow us to define how results should be sorted and how to facet results.
  • Faceting/Aggregation: A way of grouping results based on traits they possess. For example, faceting on "location" when you search our collection for "cat" reveals that 80 of our cat-related things are from the USA, 16 are from France, and so on.
  • Analysis (Tokenization/Stemming etc): A process that helps a computer work with sentences. Tokenization, for example, would split a search for “white porcelain vase” into the individual tokens: “white,” “porcelain” and “vase,” and then perform a search for any number of those tokens. Another example is stemming, which would allow the platform to understand that if a user searches for “running,” then items containing other words like “run” or “runner” are also valid search results. Analysis also gives us the opportunity to define custom rules that might include “marathon” and “track” as valid results in a search for “running.”
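To make the analysis step concrete, here is a toy Python sketch of tokenization and stemming. This is not what Elasticsearch or Solr actually run (they use analyzers built on algorithms like Porter and Snowball); it is just an illustration of the idea.

```python
import re

def tokenize(text):
    # Lowercase and split on anything that isn't a letter,
    # roughly what a standard analyzer does.
    return [t for t in re.split(r"[^a-z]+", text.lower()) if t]

def stem(token):
    # Toy stemmer: strip a few common English suffixes so that
    # "running", "runner" and "run" all reduce to the same stem.
    for suffix in ("ning", "ing", "ner", "er", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def analyze(text):
    return [stem(t) for t in tokenize(text)]

print(analyze("white porcelain vases"))  # ['white', 'porcelain', 'vase']
print(analyze("running"))                # ['run']
```

Because both the indexed documents and the incoming query pass through the same analysis, a search for "running" can match a record containing "runner".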

The State of Search

Our old search functionality showed symptoms of under-performance in a few ways. For example, basic searches — phrases like "red concert poster" — turned up no results despite the presence of such objects in our collection, and searching for people would not return the person themselves, only their objects. These symptoms led me to identify what I considered the two big flaws in our search implementation.

On the backend, we were only indexing objects. This meant that if you searched for "Ray Eames," you would see all of the objects we have associated with her, but to get to her individual person page, you would have to first click on an object and then click on her name. Considering that we have a lot of non-objects1, it makes sense to index them all and include them, where relevant, in the results. My first objective, then, was to find a way to facilitate the indexing and querying of different types of things.

On the frontend, we previously gave users two different ways to search our collection. The default method, accessible through the header of every page, performed a full text search on our Solr index and returned results sorted by image complexity. Users could also choose the “fancy search” option, which allows for searches on one or more of the individual fields we index, like “medium,” “title,” or “decade.” We all agreed here that “fancy search” was confusing, and all of its extra functionality — faceting, searching across many fields — shouldn’t be seen as “advanced” features. My second objective in rethinking how search works, then, was to unify “fancy” and “regular” search into just “search.”

Objective 1: Update the Backend

Our search provider, Solr, requires that a schema be present for every type of thing being indexed. The schema (an XML file) tells Solr what kind of value to expect for a certain field and what sort of analysis to perform on the field. This means I’d have to write a schema file — anticipating how I’d like to form all the indexed data — for each new type of thing we want to search on.

One of the features of Elasticsearch is that it is "schemaless," meaning I can throw whatever kind of data I want at the index and it figures out how to treat it. This doesn't mean Elasticsearch is always correct in its guesses — for example, it started treating our accession numbers as dates, which made them impossible to search on — so it also gives you the ability to define mappings, which have the same effect as Solr's schema. But if I want to add "people" to the index, or add a new "location" field to an object, using Elasticsearch means I don't have to fiddle with any schemas. This trait alone made Elasticsearch worth the switch (see Larry Wall's first great virtue of programmers, laziness: "the quality that makes you go to great effort to reduce overall energy expenditure"), because it's important to us that we have the ability to make quick changes to any part of our website.
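A minimal example of such a mapping is below. The field names are illustrative rather than the museum's actual schema, and the "string"/"not_analyzed" types belong to the Elasticsearch 1.x era this was written in (newer versions use "keyword" instead).

```python
# Hypothetical mapping for an "object" document type. Declaring
# accession_number explicitly stops Elasticsearch from guessing "date"
# for values like "1931-48-1" and making them unsearchable as text.
object_mapping = {
    "object": {
        "properties": {
            "accession_number": {"type": "string", "index": "not_analyzed"},
            "title": {"type": "string"},
            "year_acquired": {"type": "integer"},
        }
    }
}
```

Any field left out of the mapping still gets indexed; Elasticsearch simply falls back to guessing its type, which is exactly the "schemaless" convenience described above.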

Before building anything in to our web framework, I spent a few days getting familiar with Elasticsearch on my own computer. I wrote a python script that loops through all of the CSVs from our public collections repository and indexed them in a local Elasticsearch server. From there, I started writing queries just to see what was possible. I was quickly able to come up with a lot of the functionality we already have on our site (full-text search, date range search) and get started with some complex queries as well (“most common medium in objects between 1990-2000,” for example, which is “paper”). This code is up on Github, so you can get started with your own Cooper Hewitt search engine at home!
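The indexing loop can be sketched roughly like this. The column names and index layout here are made up for illustration; the real script walked every CSV in the public collections repository and fed the documents to a local Elasticsearch server.

```python
import csv
import io

def csv_to_actions(csv_text, index="collection", doc_type="object"):
    # Turn rows from a collection CSV into Elasticsearch bulk actions,
    # one action per row, keyed by the row's id.
    actions = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        actions.append({
            "_index": index,
            "_type": doc_type,
            "_id": row["id"],
            "_source": row,
        })
    return actions

# In practice these actions would be handed to
# elasticsearch.helpers.bulk(es, actions) against a running server.
sample = "id,title,medium\n1,Zig Zag chair,wood\n2,Pleated dress,paper"
actions = csv_to_actions(sample)
```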

Once I felt that I had a handle on how to index and query Elasticsearch, I got started building it into our site. I created a modified version of our Solr indexing script (in PHP) that copied objects, people, roles and media from MySQL and added them to Elasticsearch. Then I got started on the endpoint, which would take search parameters from a user and generate the appropriate query. The code for this would change a great deal as I worked on the frontend and occasionally refactored and abstracted pieces of functionality, but all the pieces of the pipeline were complete and I could begin rethinking the frontend.
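In outline, the endpoint's job — turning user-facing parameters into an Elasticsearch query body — looks something like this. This is a Python sketch of logic that actually lives in PHP, the field names are illustrative, and "_all" was the catch-all full-text field in Elasticsearch 1.x.

```python
def build_query(params):
    # Collect one clause per recognised parameter; the real endpoint
    # supports many more fields than these two.
    must = []
    if params.get("q"):
        # Full-text search across every indexed field.
        must.append({"match": {"_all": params["q"]}})
    if params.get("year_acquired_gt"):
        # Range clauses power searches like year_acquired=gt1990.
        must.append({"range": {"year_acquired": {"gt": params["year_acquired_gt"]}}})
    if not must:
        must.append({"match_all": {}})
    return {"query": {"bool": {"must": must}}}

body = build_query({"q": "red concert poster", "year_acquired_gt": 1990})
```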

Objective 2: Update the Frontend

Updating the frontend involved a few changes. Since we were now indexing multiple categories of things, there was still a case for keeping a per-category search view that gave users access to each field we have indexed. To accommodate these views, I added a tab bar across the top of the search forms, which defaults to the full-collection search. This also eliminates confusion as to what “fancy search” did as the search categories are now clearly labeled.

Showing the tabbed view for search options

The next challenge was how to display sorting. Previously, the drop-down menu containing sort options was hidden in a "filter these results" collapsible menu. I wanted to lay out all of the sorting options for the user to see at a glance and easily switch between sorting modes. Instead of placing them across the top in a container that would push the search results further down the page, I moved them to a sidebar which would also house search result facets (more on that soon). While it does cut into our ability to display the pictures as big as we'd like, it's the only way we can avoid hiding information from the user. Placing these options in a collapsible menu creates two problems: if the menu is collapsed by default, we're basically ensuring that nobody will ever use them; if the menu is expanded by default, then the actual results are no longer the most important thing on the page (which, on a search results page, they clearly are). The sidebar gives us room to lay out a lot of options in an unobtrusive but easily-accessible way2.

Switching between sort mode and sort order.

The final challenge on the frontend was how to handle faceting. Faceting is a great way for users who know what they're looking for to narrow down their options, and a great way for users who don't to be exposed to the various buckets we're able to place objects into.

Previously on our frontend, faceting was only available on fancy search. We displayed a few of the faceted fields across the top of the results page, and if you wanted further control, users could select individual fields to facet on using a drop-down menu at the bottom of the fancy search form. When they used this, though, the results page displayed only the facets, not the objects. In my updates, I’ve turned faceting on for all searches. They appear alongside the search results in the sidebar.
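Under the hood, sidebar facets come back from Elasticsearch as aggregation buckets that just need reshaping for display. Roughly (the response shape follows the terms aggregation; the field name and the counts, borrowed from the "cat" example above, are illustrative):

```python
def facet_counts(response, field="location"):
    # Flatten one terms-aggregation into a {value: count} dict
    # ready for the sidebar template.
    buckets = response["aggregations"][field]["buckets"]
    return {b["key"]: b["doc_count"] for b in buckets}

# Shape of a made-up response for a "cat" search faceted on location:
response = {
    "aggregations": {
        "location": {"buckets": [
            {"key": "USA", "doc_count": 80},
            {"key": "France", "doc_count": 16},
        ]}
    }
}
counts = facet_counts(response)
```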

Relocating facets from across the top of the page to the sidebar

Doing it Live

We initially rolled these changes out about 10 days ago, though they were hidden from users who didn’t know the URL. This was to prove to ourselves that we could run Elasticsearch and Solr alongside each other without the whole site blowing up. We’re still using Solr for a bit more than just the search (for example, to show which people have worked with a given person), so until we migrate completely to Elasticsearch, we need to have both running in parallel.

A few days later, I flipped the switch to make Elasticsearch the default search provider and passed the link around internally to get some feedback from the rest of the museum. The feedback I got was important not just for working out the initial bugs and kinks, but also (and especially for myself as a relative newbie to the museum world) to help me get the language right and consider all the different expectations users might have when searching our collection. This resulted in some tweaks to the layout and copy, and some added functionality, but mostly it will inform my bigger-picture design decisions going forward.

A Few Numbers…

Improving performance wasn’t a primary objective in our changes to search, but we got some speed boosts nonetheless.

Query                                                  Before (Solr)                  After (Elasticsearch)
query=cat, facets on                                   162 results in 1240-1350ms     167 results in 450-500ms
year_acquired=gt1990, facets on                        13,850 results in 1430-1560ms  14,369 results in 870-880ms
department_id=35347493&period_id=35417101, facets on   1,094 results in 1530-1580ms   1,150 results in 960-990ms

There are also cases where queries that turned up nothing before now produce relevant results, like "red concert poster" (0 -> 11 results), "German drawings" (0 -> 101 results) and "checkered Girard samples" (0 -> 10 results).

Next Steps

Getting the improved search in front of users is the top priority now – that means you! We’re very interested in hearing about any issues, suggestions or general feedback that you might have — leave them in the comments or tweet us @cooperhewittlab.

I'm also excited about integrating more search features — things like type-ahead search and related-search suggestions — into the site in the future. Additionally, figuring out how to let users make super-specific queries (like the aforementioned "most common medium in objects between 1990-2000") is a challenge that will require a lot of experimentation and testing, but it's definitely an ability we want to put in the hands of our users.

New Search is live on our site right now – go check it out!

1 We've been struggling to find a word for the things that are "first-class" in our collection (objects, people, countries, media, etc.) that makes sense to both museum-folk and laypeople. We can't use "objects" because that already refers to a thing that might go on display in the museum. We've also tried "items," "types" and "isas" (as in, "what is this? it is a person"). But nothing seems to fit the bill.

2 We’re not in complete agreement here at the labs over the use of a sidebar to solve this design problem, but we’re going to leave it on for a while and see how it fares with time. Feedback is requested!

A Kiwi spends three weeks in the Cooper-Hewitt Labs

the savage’s romance,
accreted where we need the space for commerce–
the center of the wholesale fur trade,
starred with tepees of ermine and peopled with foxes,
the long guard-hairs waving two inches beyond the body of the pelt;
the ground dotted with deer-skins–white with white spots,
“as satin needlework in a single color may carry a varied pattern,”
and wilting eagle’s-down compacted by the wind;
and picardels of beaver-skin; white ones alert with snow.
It is a far cry from the “queen full of jewels”
and the beau with the muff,
from the gilt coach shaped like a perfume-bottle,
to the conjunction of the Monongahela and the Allegheny,
and the scholastic philosophy of the wilderness.
It is not the dime-novel exterior,
Niagara Falls, the calico horses and the war-canoe;
it is not that “if the fur is not finer than such as one sees others wear,
one would rather be without it”–
that estimated in raw meat and berries, we could feed the universe;
it is not the atmosphere of ingenuity,
the otter, the beaver, the puma skins
without shooting-irons or dogs;
it is not the plunder,
but “accessibility to experience.”

Marianne Moore, ‘New York’.

It has been said of New Zealanders that we are a poetry-loving nation, which is one of the reasons I’ve chosen a poem to start this blogpost on just a few of the experiences I’ve had during my time in the Digital & Emerging Media department (aka Labs) here at Smithsonian’s Cooper-Hewitt, National Design Museum in New York City.

(The tenses change throughout as a reflection of how this #longread was assembled; they have been left intact to preserve the moments in which they were written.)

I’m here on a three-week scholarship in memory of the late Paul Reynolds, a man who loved libraries, museums, art, archives and digital access to them. Like Bill Moggridge, the former director of the Cooper-Hewitt, Paul passed away of cancer before his time. Paul would have been so interested by what this museum is doing.

The award is administered by New Zealand’s library and information association, LIANZA, and I’ve also been generously supported by my workplace, the First World War Centenary Programme Office within the Ministry for Culture & Heritage, to take it up.

In particular, I’m here because I wanted to study a museum in the midst of transforming itself into an environment for active engagement with collection-based stories, knowledge, and information – or ‘experiential learning’ – and the innovative use of networked media in this context. It has been a rare privilege to be here while the Cooper-Hewitt are going through this change.


New York is no longer “starred with tepees of ermine and peopled with foxes” – it’s more kale salads and small dogs. Nonetheless, you can get a sense of some of the experiences I’ve had since being here on my #threesixfive project for this year.

The rules for this project are pretty simple. Each day, I take a photograph using my cell phone and Instagram and connect it with one from the past in the online collections of a library, archive or museum. Connections can be visual, geographical, conceptual, or tangentially semantic.

In New Zealand, I draw on historical images from the pictorial collections of the Alexander Turnbull Library, largely because they make their online items so easy to share and re-use. Here, I’m borrowing (with permission) material from the New York Public Library.

I sometimes refer to this as my ‘this is water’ project, in reference to David Foster Wallace’s commencement address to the graduates of Kenyon College in 2005. As Wallace describes ‘learning how to think’ in his post-modern way:

It means being conscious and aware enough to choose what you pay attention to and to choose how you construct meaning from experience.

I choose to pay attention to the present as well as the past presents within it. I think this is also a reasonably accurate description of the work the team behind the Cooper-Hewitt Labs, and those they work with in the wider museum, are doing as well.



I’ve had an eclectic curriculum while I’ve been here. If my learning journey were a mythic story, it would go a bit like this:

Act One:

– The ordinary world: I go about my daily life working for the government in New Zealand, but know that I am lacking in-depth knowledge of how to move from ‘publishing content’ to ‘designing experiences’ for learning.
– Call to adventure: I get an email from LIANZA telling me that I have won an award to gain this knowledge. (A major earthquake also strikes the city).
– Meeting with the mentor: Seb Chan begins preparing me from afar to face the unknown. Emails and instructions arrive. I find an apartment. I book tickets for planes and theatre shows.
– Crossing the threshold: I cross from the ordinary world into the special world. Seb invites me to Central Park (near the Cooper-Hewitt museum) with his family. I get instructions for catching the subway and learn where to get palatable coffee. I obtain a subway ticket – my talisman.

Act Two:

– Approach to the Inmost Cave: I re-enter the subway and enter the Cooper-Hewitt where the object of my quest (knowledge) exists. There are security guards and curators and educators. I meet the members of the Cooper-Hewitt labs team. There is a mash-up picture on the wall of a cat with a unicorn horn. Another shows a cat being . . . Wait, what’s happening in that image?

Things happen . . . I get a virus and lose my voice . . . and then here I am three weeks later preparing to return home to the ordinary world, bottling some of the elixir by way of this blog post.

I draw on the idea of mythic storytelling not to be clever (well, maybe a little bit), but also to introduce some of the values and influences shaping the Cooper-Hewitt’s approach to their museum redevelopment.

Seb has written great posts on the two experimental theatre pieces Then She Fell by Third Rails Projects and Sleep No More by Punchdrunk over on Fresh and New. Among other things, these hint at the Cooper-Hewitt’s choice to knowingly break the rules and tell stories in a non-linear way. I won’t cover the same ground here.

Another key inspiration is the Museum of Old and New Art (MONA) in Tasmania.

The idea of the talisman (in Then She Fell a set of keys; in Sleep No More a white mask; in MONA the ‘o’) is an important one and seems to inform the Cooper-Hewitt’s approach to visitor technology. Devices that are accessible to all, the visitor’s ability to unlock stories through interaction, and the availability of all the information about collection items being online after you visit are also relevant.

In addition to the ‘memorability’ of the event, a few other thoughts spring to mind on elements of Then She Fell and Sleep No More. Both relate to a conversation I had with Jake Barton of Local Projects on the relationship of audience to successful experience design. I’ll talk more about Local Projects later in this post.

Meanwhile, in both Sleep No More and Then She Fell, all you are given as you are guided to cross the threshold into the story-world are the rules for engagement and a talisman. Beyond this the ‘set’ (which incorporates the atmosphere and fabric of the site it is layered over) feels simultaneously theatrical (magical) and life-like (real).

I mention this because of the observation Jake made on creating digital applications that are wondrous enough to work for everyone because they tap into real-world human experiences. Obviously you wouldn’t take an eight year old to Sleep No More, so content choices are important. But the fundamental interaction works for everyone. This is also the case with the Cleveland Art Museum line and shape interactive.

In Then She Fell, these interactions are also personalised and, while guided, audience members make choices that drive the outcome of the scene. Taking dictation for the Mad Hatter using a fountain pen, for example, an actor improvised and remarked “my, you do have nice handwriting. I can see why you come highly recommended”. In another scene plucked from the database of scenes, a doctor asked me a series of questions as he progressively concocted a blend of tea for me, which I then drank. Other scenes were arrestingly intimate.

Another striking aspect of these environments is the radical trust the theatre company invests in its audience to be human and responsible. Of course none of the objects or archival files and letters in Sleep No More or Then She Fell are real, nor are the books copies of last resort.

But the fact that you can touch them and leaf through them and hold them in your hand, or that you can use them to figure things out is a really potent part of the experience.



Quote from Carl Malamud above Aaron Straup Cope’s desk.

My time in New York hasn’t all been theatre visits and blog publishing. With the Carnegie mansion that houses the Cooper-Hewitt closed for renovation and expansion of the public gallery space, I’ve also been spending time with staff immersed in the process of design and making.

When the building re-opens next year, the museum will be an environment that, as Jake Barton of Local Projects put it, “makes design something people can participate in” – not just look at or learn ‘about’ through didactic label text or the end-product of someone else’s creativity.

Local Projects are the media partners for the Cooper-Hewitt refurbishment, with architects Diller Scofidio + Renfro. Their philosophy is encapsulated in a quote from Confucius that Jake frequently references in his public talks:

I hear and I forget.
I see and I remember.
I do and I understand.

You can see how complementary this thinking is with the immersive theatre environments of Sleep No More and Then She Fell.

By way of illustration, the Cooper-Hewitt wants to encourage a more diverse range of visitors to learn about design by letting them collect and interact with networked collection objects and interpretive content in the galleries.

New Zealanders might think of the ‘lifelines’ table at the National Library of New Zealand designed by Clicksuite, which is driven off the Digital New Zealand API; and Americans might recall the recent collection wall at the Cleveland Art Museum, also designed by Local Projects.

But the Cooper-Hewitt is neither a library nor an art museum. It’s a design museum – “the only museum in the United States dedicated just to historic and contemporary design”.

Consequently, applications Local Projects develops with the museum also seek to incorporate design process layers where visitors can make connections and learn more about objects on display and also be designers.

The challenge, as Jake articulated it when we met, is ‘how you transmit knowledge within experiential learning (the elixir)?’. How do you make information seep in in a deeper way so that visitors or audience members do, in fact, learn?

The gradual reveal of the story in Then She Fell, with spaces also for solitary reflection and contemplation, is significant I think. I suspect I’m not the only one who googled the relationship of Alice Liddell to Lewis Carroll in the days after the performance.


If Then She Fell and Sleep No More were like slipping into a forgotten analogue world of the collection stores, Massive Attack v Adam Curtis was like all of the digitization projects of the past decade come back to haunt you.

Curtis describes this 'total experience' as "a gilm – a new way of integrating a gig with a film that has a powerful overall narrative and emotional individual stories". It's not too far a cry from the word we use in New Zealand to describe the collecting sector of galleries, libraries, archives and museums and their potential for digital convergence: a glam.

Imagine Jane Fonda jazzercising it up on a dozen or so massive screens on three walls of a venue, collaged with Adam Curtis' commentary on how the 80s instituted a new regime of bodily management and control, and Massive Attack with Elizabeth Fraser and Horace Andy covering 80s tunes that you can't help moving along with. This is the first time I've experienced kinaesthetic-visual juxtaposition as a storytelling technique.

It is really hard to find yourself dancing to the aftermath of Chernobyl. It is also very memorable.

As Curtis describes the collaboration between United Visual Artists, Punchdrunk’s Felix Barrett and stage designer Es Devlin: “What links us is not just cutting stuff up – but an interest in trying to change the way people see power and politics in the modern world.”

“I see the people who created our Internet as a gift to the world” – Carl Malamud

“A fake, but enchanting world which we all live in today – but which has also become a new kind of prison that prevents us moving forward into the future” – Adam Curtis

How do we transform our institutions into way-finding devices for the cultural landscapes of the present and past presents, not prisons?



Marco Fusinato, ‘Mass Black Implosion (Shaar, Iannis Xenakis)’ 2012. (Courtesy the artist and Anna Schwartz Gallery)

“To make these drawings, Fusinato chose a single note as a focal point and then painstakingly connected it to every other note on the page” – MOMA interpretation label

Like many museums around the world, the Cooper-Hewitt, as a Smithsonian institution, has been seeking to broaden access to its collections online and deepen relationships with its audiences.

Much of the recent work of the museum that I’ve observed has focused on establishing two-way connections and associations between each of the many hundreds of objects that will be physically on display in the galleries and at least ten related ‘virtual’ objects and pieces of related media.

In total, these thousands of digital objects will be available through the Cooper Hewitt’s collections API, which will also be a foundation for interactive experiences and other applications where people can manipulate and do things with content to learn more about design and the stories embedded in the museum’s collections.
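To make the idea concrete, here is a minimal sketch of how a client application might call a REST-style collections API like this one. The `method=` calling convention follows the Cooper-Hewitt collections API; the access token and object id are placeholders, not real values.

```python
# Sketch: building a request against a flat, method-based collections API.
from urllib.parse import urlencode

API_ENDPOINT = "https://api.collection.cooperhewitt.org/rest/"

def build_request(method, **params):
    """Return the full request URL for an API method call."""
    query = {"method": method, **params}
    # Sort parameters so the generated URL is deterministic.
    return API_ENDPOINT + "?" + urlencode(sorted(query.items()))

url = build_request(
    "cooperhewitt.objects.getInfo",
    access_token="YOUR_TOKEN",  # placeholder token
    object_id="18704235",       # placeholder object id
)
print(url)
```

An interactive in-gallery application and a public website can both sit on top of the same handful of calls like this one, which is the point of putting the API underneath everything.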

But there’s a snag.

The vast majority of information and story potential, the knowledge and the ability to see meaningful and significant connections, isn’t in the database. It’s in the heads of the collection experts: the curators. Extracting this narrative and getting it into useful digital form is a huge undertaking.

Progress is being made though. I happily sat in on a checkpoint meeting for curators to make sure that objects they were tagging with a vocabulary (co-designed with the museum’s educators, who bring “verbs to the curator’s nouns”) would not be orphaned. If a curator tags an object with a term no other curator uses, the object is stranded and the connection is lost.
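That orphan check is something a machine can help with. A small sketch, using hypothetical object ids and tags, of how you might flag vocabulary terms that only one object carries:

```python
from collections import defaultdict

def find_orphaned_tags(tagged_objects):
    """Given {object_id: set_of_tags}, return tags used by only one object.

    An object whose every tag is orphaned has no pathway to any
    other object in the collection.
    """
    usage = defaultdict(set)
    for object_id, tags in tagged_objects.items():
        for tag in tags:
            usage[tag].add(object_id)
    return {tag for tag, objects in usage.items() if len(objects) == 1}

# Hypothetical tagging data: "zigzag" and "pleated" each link a pair of
# objects, but "damask" is used once and will never connect to anything.
tags = {
    "wallpaper-z":   {"zigzag"},
    "zigzag-chair":  {"zigzag"},
    "miyake-dress":  {"pleated"},
    "pleated-paper": {"pleated"},
    "textile-1":     {"damask"},
}
print(find_orphaned_tags(tags))  # → {'damask'}
```

A report like this could feed exactly the kind of checkpoint meeting described above: a list of stranded terms for curators to either adopt or rename.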

Thus, as one curator put out a call for colleagues to dive into their part of a collection, a wallpaper with a Z pattern found its match in a Zig Zag chair. Pleated paper found its match in an Issey Miyake dress. This is a laborious and time-consuming process, coordinated by Head of Cross-Platform Publishing Pam Horn.

But it means that the collection is starting to come alive in that ‘1 + 1 = 4’ way that is so magical. Through a balance of curatorial and automated processes, these connections and pathways through the collection will (all going to plan) continue to multiply over the months to come.

Visitors will also be able to find their own way through the knowledge the museum holds, and access all of the data online – much as every museum is also trying to connect the pre- and post-visit experience.

“Now find your own way home” – Massive Attack v Adam Curtis



Aaron exposes the power structure that is the donor walls of New York City – Pratt Institute, 14th Street.

On Tuesday nights I’ve been accompanying Seb and Aaron to teach a graduate class at Pratt Institute called Museums and the Network (subtitled ‘Caravaggio in the age of Dan Flavin lights’). The syllabus states: “Museums have been deeply impacted by the changes in the digital landscape.

At the same time they are buffeted by the demographic transformations of their constituent communities and changes in education. The collapsing barriers to collection, publishing and distribution afforded by the internet have further eroded the museum’s role as cultural conduit.”

It’s a wonderful learning environment, full of serious play and playful seriousness; theoretical ideas and practical examples. Just like the real Cooper-Hewitt Labs.

The students’ ultimate project will be to create an exhibit – perhaps out of the donor walls of New York’s museums, the subject of one of the class’s first assignments. Donor walls loom large and prominently in the cultural institutions here; so much of the work of the sector is funded through endowments and private donations.

Like the Cooper-Hewitt, the students have started by digitising the donor walls and turning all their data into a structured open form so that they (and others) can start to tell stories out of it and present it through a web interface. They are gradually building up to staging an exhibition, “that exists at the intersection of the physical and the internet, from concept through development”.

The readings from this class have become my Instapaper companions as I commute for 40 minutes up the island of Manhattan each morning, and home again. I’ve also started to imagine a museum exhibit of my time in New York. Or perhaps it’s a conceptual art piece or a marketing intervention.


Whatever it is, you enter a space that looks like an exhibition mid-install. It’s probably painted off-white. There are pieces of paper on the wall with numbers, plinths on which objects could stand, sheets of blank paper in cabinet drawers, empty glass cases and maybe even 3D replicas of framed paintings that are also off-white.

A docent (in New Zealand we call them visitor hosts) guides you to an “information desk” where you can collect a mobile guide or brochures in exchange for your own personal cell phone, which you must check in. You are told that you can read whatever you like on the guide, but you must not erase the content you find there or create new content.

You are told how to use the phone to interact with the numbers on the walls.

Exploring the various applications on the phone you begin to uncover the story of the visitor who came before you. You read their text messages, look at their Instagram feed, explore their Twitter profile and open their photographs (which show photographs of objects, followed by labels with prominent numbers matching the ones on the walls). Maybe there’s a projector installed.

As you stand next to your friend in front of the same object (perhaps it’s the 3D-printed white replica of a framed painting with no texture to indicate the pictorial content) you realize that you have different content on your phones. They are seeing a Van Gogh at #7, you a Rembrandt. You talk about what you (can’t) see. Perhaps there are also some colorless 3D-printed replicas of sculptural pieces or other collection items you can hold.

Other audience members text #7 to the number they have been given, and are sent back a record for an object. This is a project Micah has been playing with, using Twilio on top of the collections API.
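The heart of such an SMS lookup is simple enough to sketch. In the real project the reply would come from the collections API and travel back through Twilio’s messaging webhook; here the “catalogue” is a hypothetical dict and everything else is plain Python.

```python
import re

# Hypothetical mini-catalogue standing in for a collections API call.
CATALOGUE = {
    7: "Rembrandt van Rijn, etching (object record #7)",
    12: "Issey Miyake, pleated dress (object record #12)",
}

def reply_for_sms(body):
    """Build the text-message reply for an incoming '#<number>' message."""
    match = re.search(r"#?(\d+)", body.strip())
    if not match:
        return "Text an object number, e.g. #7"
    record = CATALOGUE.get(int(match.group(1)))
    return record or "No object found for that number."

print(reply_for_sms("#7"))
```

Wired up to a webhook, the same function answers every label number on the wall; the gallery’s wall text becomes a query language.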

As you hand back the device, you are given the address of a museum where you can see the collection and a URL for it online. Perhaps you exit through a gift shop where you can buy printed postcards of what you didn’t see.

Enough speculation. This is not the collaborative project I came here to consider. Nor am I entirely serious (well, maybe a little bit serious).


(Record of trip to New Museum)


There are many comments I could make about the differences and similarities between what I’ve experienced in my short time in New York City and what is familiar to me back home.

At the risk of generalizing, I could talk about the constraints that the grant-based funding model here seems to place on the ability to play a long game with digital infrastructure or to embed sustainable museological practice into the fabric of the institution.

I could talk about how the Cooper-Hewitt seems to run on a skeleton staff of just 73 people, which is small, even by New Zealand standards, for a national institution. How museums I have worked with in New Zealand use visitor and market research and audience segmentation as a foundation for decision-making about programming opportunities, which seems less evident here.

I could mention how far ahead collection documentation and interpretation strategies seem in New Zealand museums with equivalent missions, such as Te Papa, where relating exhibition label text, narratives, external collections and content assets such as videos to online collections is now everyday practice.

I could talk about how ‘coffee plunger’ is a dirty word for French press, how people walk on the wrong side of the sidewalk, and how the light switches go up not down to power on the light. But these are just surface differences for the same basic human motivations.

What I want to highlight, however, isn’t any of these things. Nor is it a comparison. It’s the willingness I’ve seen of staff at the Cooper-Hewitt to start working together across disciplinary boundaries and departments (education/curatorial/digital media) to continue Bill Moggridge’s vision for an ‘active visitor’ to the museum.

This kind of cultural change takes time. (And time already moves more slowly in museums than in the real world.) It’s messy and confusing and identity-challenging. It’s hard to achieve when short-term priorities and established modes of operating keep jostling for the attention of the same staff who need to be its agents.

Yet everyone I have met in my short time here has been so friendly and willing to share information with me. Echoing the sentiment of many that I have talked to at the Cooper-Hewitt, I am also hugely grateful to Seb for his encouraging mentorship and guidance, and Aaron for challenging me to think harder.

As Larry Wall puts it in ‘Perl, the first postmodern computer language’, “these are the people who inhabit the intersections of the Venn diagrams”. The accessibility of experience made possible by the Smithsonian’s Cooper-Hewitt, National Design Museum in New York City will be so much richer for their efforts.

I hope it continues to grow and flourish for many years to come.