At the backend of our museum’s new interactive experiences lies our API, which is responsible for providing the frontend with all the data necessary to flesh out the experience. From everyday information like an object’s title to more novel features such as tags, videos and people relationships, the API gathers and organizes everything that you see on our digital tables before it gets displayed.
In order to meet the needs of the experiences designed for us by Local Projects on our interactive tables, we added a lot of new data to the API. Some of it was already sitting in our systems and just had to be found; other parts we had to generate anew.
Either way, this marks a huge step towards a more complete and meaningful representation of our collection on the internet.
Today, we’re happy to announce that all of this newly-gathered data is live on our website and is also publicly available over the API (head to the API methods documentation to see more about that if you’re interested in playing with it programmatically).
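If you want to play with it in code, here's a minimal sketch of what a call might look like in Python using the requests library. Treat the endpoint URL, method name and object ID below as illustrative of the pattern rather than gospel; the API methods documentation has the real method names, arguments and access-token details.

```python
import requests

# Assumed REST endpoint; confirm against the API methods documentation.
API_ENDPOINT = "https://api.collection.cooperhewitt.org/rest/"

def call_api(method, access_token, **kwargs):
    """Call a single API method and return the decoded JSON response."""
    params = {"method": method, "access_token": access_token}
    params.update(kwargs)
    rsp = requests.get(API_ENDPOINT, params=params)
    rsp.raise_for_status()
    return rsp.json()

# The method name and object ID below are only examples; see the
# documentation for the full method list and required arguments.
info = call_api("cooperhewitt.objects.getInfo",
                access_token="YOUR_ACCESS_TOKEN",
                object_id="18382391")
```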
People
For the Hewitt Sisters Collect exhibition, Local Projects designed a front-end experience for the multitouch tables that highlights the early donors to the museum’s collection and how they were connected to each other. Our in-house “TMS liaison”, Sara Rubinow, worked to gather and structure this information before adding it to TMS, our collection management system, as “constituent associations”. From there I extracted the structured data to add to our website.
We created the following new views on the web frontend to house this data:
- All relationships
- Instances of a type of relationship (e.g. “parent-child”)
- People related to a single person (e.g. Sarah Cooper Hewitt)
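For the curious, here's a rough Python sketch of how those “constituent association” records can be grouped to drive views like the ones above. The field names and sample records are invented for illustration and aren't the actual TMS or API schema.

```python
from collections import defaultdict

# Invented field names and sample records, for illustration only.
associations = [
    {"person_a": "Sarah Cooper Hewitt", "person_b": "Eleanor Garnier Hewitt",
     "relationship": "sibling"},
    {"person_a": "Sarah Cooper Hewitt", "person_b": "An Early Donor",
     "relationship": "colleague"},
]

by_relationship = defaultdict(list)   # instances of a type of relationship
by_person = defaultdict(list)         # people related to a single person

for assoc in associations:
    by_relationship[assoc["relationship"]].append(assoc)
    by_person[assoc["person_a"]].append(assoc)
    by_person[assoc["person_b"]].append(assoc)

print(by_person["Sarah Cooper Hewitt"])   # everyone related to Sarah
```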
We also added a few new biography-related fields: portraits or photographs of Hewitt Sisters people, and two new biographies, one of 75 words and the other of 50 characters. These changes are viewable on applicable people pages (e.g. Eleanor Garnier Hewitt) and the search results page.
The overall effect of this is to make more use of this ‘people-related’ data and to encourage its further expansion over time. We can already imagine a future where other interfaces examine and reveal the network of relationships behind the people in our collection.
Object Locations and Things On Display
Some of the more difficult tasks in updating our backend came from the fact that objects are no longer static: they now move on and off display. As far as the website was concerned, it was a luxury of our three years of renovation that objects weren’t moving around a whole lot, because it meant we didn’t have to prioritize writing code to handle their movement.
But now that we are open, we need to better distinguish objects in storage from those on display. More importantly, if an object is on display, we also need to say which exhibition it is part of and which room it is in.
Object locations have a lot of moving parts in TMS, and I won’t get into the specifics here. In brief, object movements from location to location are stored chronologically in a database; each “movement” is its own row that records where the object moved and why it moved there. By querying this history appropriately, we can say which objects have ever been in the galleries (as at all museums, a large portion of our objects have never been part of an exhibition) and which objects are there right now.
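As a rough illustration (and not the actual TMS schema), the difference between “ever on display” and “on display right now” boils down to looking at the full movement history versus only each object’s most recent movement:

```python
# Simplified movement rows: which object moved, where it went, and when.
movements = [
    {"object_id": 101, "location": "Storage",   "moved_at": "2011-06-01"},
    {"object_id": 101, "location": "Gallery 2", "moved_at": "2014-12-12"},
    {"object_id": 202, "location": "Gallery 1", "moved_at": "2010-02-03"},
    {"object_id": 202, "location": "Storage",   "moved_at": "2011-05-20"},
]

GALLERIES = {"Gallery 1", "Gallery 2"}

# Ever in the galleries: any movement into a gallery counts.
ever_on_display = {m["object_id"] for m in movements if m["location"] in GALLERIES}

# Currently in the galleries: only the most recent movement per object counts.
latest = {}
for m in sorted(movements, key=lambda m: m["moved_at"]):
    latest[m["object_id"]] = m["location"]
currently_on_display = {oid for oid, loc in latest.items() if loc in GALLERIES}

print(ever_on_display)        # {101, 202}
print(currently_on_display)   # {101}
```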
We created the following views to house this information:
- Objects on display
- People whose objects are on display
- All locations
- Objects that have ever been in a location
- Objects currently in a location
Exhibitions
The additions we’ve made to exhibitions are:
- Separate current exhibitions from past ones
- Add “wall text” to the website
- Add associated videos to exhibitions
- Group objects by which section of an exhibition they were in (a quick sketch follows this list)
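The grouping itself is straightforward; here's a small Python sketch where the section names and field names are made up for illustration:

```python
from collections import defaultdict

# Invented records: each object knows which exhibition section it sat in.
exhibition_objects = [
    {"object_id": 1, "section": "Section A"},
    {"object_id": 2, "section": "Section A"},
    {"object_id": 3, "section": "Section B"},
]

by_section = defaultdict(list)
for obj in exhibition_objects:
    by_section[obj["section"]].append(obj["object_id"])

for section, object_ids in by_section.items():
    print(section, object_ids)
```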
There is still some work to be done with exhibitions. This includes figuring out a way to handle object rotations (the process of swapping out some objects mid-exhibition) and outgoing loans (the process of lending objects to other institutions for their exhibitions). We expect that objects on loan should say where they are and which external exhibition they are part of, creating a valuable public ‘trail’ of where an object has traveled over its life.
Tags
Over the summer, we began an ongoing effort to ‘tag’ all the objects that would appear on the multitouch tables. This includes everything on display, plus about 3,000 objects related to those. The express purpose of tags was to provide a simple, curated browsing experience on the interactive tables, loosely based around the themes of ‘user’ and ‘motif’. Importantly, these are not unstructured: Sara Rubinow did a great job normalizing them where possible, but there haven’t yet been enough exhibitions to release a public thesaurus of tags.
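To give a sense of what that normalization involves, here's a tiny sketch; the actual rules live alongside the tags themselves, so the mappings below are invented examples:

```python
# Invented examples of collapsing variant spellings onto a preferred tag.
CANONICAL = {
    "birds": "bird",
    "floral": "flowers",
}

def normalize_tag(raw):
    tag = raw.strip().lower()
    return CANONICAL.get(tag, tag)

tags = ["Bird", "birds ", "floral", "geometric"]
print(sorted({normalize_tag(t) for t in tags}))
# ['bird', 'flowers', 'geometric']
```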
We also added tags to the physical object labels to help visitors draw their own connections between our objects as they scan the exhibitions.
On the website, we’ve added tags in a few places:
- All tags
- Objects for a tag
- Tags for an object
We also added an object’s tags to its search record to improve the relevance of search results.
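Conceptually, that just means an object’s tags ride along in the document handed to the search engine at indexing time. The helper and field names below are hypothetical rather than the site’s actual search setup:

```python
# Hypothetical sketch: include an object's tags in its indexed search document.
def build_search_document(obj, tags):
    return {
        "id": obj["id"],
        "title": obj["title"],
        "tags": tags,   # e.g. ["bird", "flowers"]
    }

doc = build_search_document({"id": 101, "title": "Drawing"}, ["bird"])
# index(doc)  # hand the document off to whatever search engine backs the site
```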
That’s it for now – happy exploring! I’ll follow up with more new features once we’re able to make the associated data public.
Until then, our complete list of API methods is available here.