Category Archives: Exhibitions

New Relationships: A Summer at Cooper Hewitt

This summer I was the Peter A. Krueger Cross-Platform Publishing intern. When I describe my responsibilities, many people want to know, “What does ‘Cross-Platform’ mean?”

At Cooper Hewitt, Cross-Platform Publishing sits at the nexus of the Digital and Emerging Media, Communications, Curatorial, Education, and Exhibitions departments. During my internship I have helped to research, develop, and manage all forms of content for print and electronic publications. As a part of the Cross-Platform Publications team, I have had the opportunity to participate in decision-making that affects the design and content of museum channels, printed books, and digital tables.

One of my favorite projects this summer was collaborating with the Product Design and Decorative Arts department to develop their plan for the digital tables in the museum’s upcoming exhibition, Jewelry of Ideas: Gifts from the Susan Grant Lewin Collection. Developing an application for one of the museum’s digital tables begins with thinking about what stories are not being told in the exhibition didactics. When Cooper Hewitt launched its digital tables in 2014, the team built an application that shows relationships between the founding donors of the Cooper Hewitt collection; it has been in use in the exhibition Hewitt Sisters Collect. I was asked to consider how we might modify that application for the constituents in the Jewelry of Ideas exhibition.

I began looking at research about the designers in the exhibition to uncover meaningful relationships and connections between them. Working with the curator, we decided which relationships we believed were the most important to highlight. From there, with the help of our registrar, I created relationship hierarchies in TMS. For the Susan Grant Lewin exhibition we decided that the most important relationships to feature on the digital table were those created by the schools that various designers attended or taught at. With this information we hope visitors can see how various styles and techniques arose from certain schools and how these designers’ works influenced one another.

To build the foundation of the interactive content in TMS, I recorded the connections between designers based on school, mentorship, and history of collaboration. Currently, new code is being written to modify the donor application. Once it is completed, the collection site records and the digital table will reveal these relationships to users. The table interface is designed around a “river” of objects and designer images that flow across the digital table. When a designer is selected, a short biography will appear. Underneath the biography, related designers are listed who either attended the same school or worked together in some way. We hope that this interactive digital experience will help visitors visualize the interconnected nature of the collection in a new way.

Large-print labels are live!

Launching alongside the long-awaited Jazz Age exhibition, the exciting new large-print label feature on our collection site is a key part of Cooper Hewitt’s ongoing accessibility initiative.

The original goal for the large-print labels project was to create a physical manifestation of our exhibition label content that could be distributed to museum visitors by our Visitor Experiences (VE) team upon request. Graphic designer Ayham Ghraowi, one of our 2015 Peter A. Krueger summer interns, worked closely with Micah Walter and Pamela Horn, director of Cross-Platform Publications, to design and develop a prototype application, the Label Book Generator. This prototype, built with Python using the Flask framework and originally hosted on Heroku, produced printable label booklets that met the museum’s standards for accessibility. It was the first big step toward providing an accessible complementary experience of exhibition content for our visitors with low vision.

This fall, using the CSS stylesheets that Ayham developed for his prototype, Digital & Emerging Media experimented with a number of possible ways to deliver this large-print content to VE in production. Ultimately, we decided that rather than establishing a dedicated application for large-print label booklets, integrating the large-print labels into our collection site would be the best solution. This would not only allow us to rely on existing database connections and application code to generate the large-print label documents, but it would also corral all of our exhibition content under one domain, reducing any complications or barriers to discovery for VE and visitors alike. And by providing the label chats for each object, which are otherwise not included in any of our digital content, the large-print pages serve to supplement the main exhibition pages for our website visitors as well, adding a deeper layer of engagement to both the web and in-gallery experiences.

As of today, when a user visits any exhibition page on our collection site or main website, they’ll see a new option in the sidebar, inviting them to view and print the exhibition labels. The large-print page for each exhibition includes A-panel text alongside all available object images and label chats. If an exhibition is organized into thematic sections, this is reflected in the ordering of the large-print labels, and B-panel text is included alongside the section headers.

To generate these pages, I created a new exhibitions_large_print PHP library, which leverages the existing exhibitions, objects, and exhibitions_objects libraries to assemble the necessary information. Because we want to be able to print the large-print pages as one document for in-gallery use, our large-print pages cannot be paginated, unlike our main exhibition pages. This presents no issues for smaller exhibitions, like the ongoing Scraps. But very large exhibitions — like Jazz Age, for example, with over 400 objects — require way too much memory to be processed in production.

To get around this issue, I decided to assemble the large-print label data for certain “oversized” exhibitions in advance and store it in a series of JSON files on the server. A developer can manually run a PHP script to build the JSON files and write them to a data directory, each identified by exhibition ID. The ID for each oversized exhibition is added to a config variable, which tells our application to load from JSON rather than query the database.

https://gist.github.com/rnackman/8ed20dbfdc2962077c0b87975ca80e85
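The gist above shows how the pieces fit together. As a rough sketch of the loading logic only (the function and config names below are illustrative stand-ins, not our production code), the flow looks something like this:

```php
<?php
// Illustrative sketch of the oversized-exhibition check described above.
// Function and config names are stand-ins, not the production code.

function get_large_print_labels($exhibition_id) {

    // Exhibition IDs flagged as "oversized" in the application config.
    $oversized = $GLOBALS['cfg']['large_print_oversized_exhibitions'];

    if (in_array($exhibition_id, $oversized)) {

        // Load the pre-generated JSON written in advance by the build script.
        $path = "data/large-print/{$exhibition_id}.json";
        return json_decode(file_get_contents($path), true);
    }

    // Otherwise, assemble label data on the fly via the existing
    // exhibitions, objects, and exhibitions_objects libraries.
    return exhibitions_large_print_assemble($exhibition_id);
}
```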

For greater flexibility based on individual needs, our large-print pages include clear and easy-to-locate UI tools for printing and adjusting font size. A user can select one of six default font sizes ranging from 18pt to 28pt, triggering some simple JS to reset the body font size accordingly. Internally, we can use the pt query string parameter to enable large-print links to open the page with a specific default font size selected. For example, navigating to the large-print label page from an exhibition page using the Large Print sidebar icon opens the page at 24pt font.

https://gist.github.com/rnackman/84b10e7c4c862d128f58140f8adae75b
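On the server side, honoring the pt parameter can be as simple as validating it against the allowed sizes before the page renders. The sketch below is hypothetical (it assumes the six sizes step by 2pt from 18 to 28); the in-page resizing itself is handled by the JavaScript in the gist above:

```php
<?php
// Hypothetical sketch: pick the starting font size for a large-print page.
// We assume the six default sizes step by 2pt; the real list may differ.

$allowed_sizes = array(18, 20, 22, 24, 26, 28);
$default_size  = 18;

$pt = isset($_GET['pt']) ? intval($_GET['pt']) : $default_size;

if (!in_array($pt, $allowed_sizes)) {
    $pt = $default_size;
}

// Emit the starting size; the in-page JS controls take over from here.
echo "<body style=\"font-size: {$pt}pt\">";
```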

Visitor Experiences has prepared a number of printed large-print label booklets for our Jazz Age exhibition, available upon request at the front desk. Visitors may also print this document at home and bring it along with them, and any individual can access this responsive page on their desktop or mobile device.

We’ll be keeping an ear out for suggested improvements to this feature, and we’re excited to see how our visitors are engaging with these large-print labels on the web and in the galleries!

In addition to launching the large-print label pages, we’ve added an accessibility link to our header and footer navigation across all our sites, where visitors can learn more about the growing list of access services that Cooper Hewitt offers alongside large-print label booklets.

Process Lab: Citizen Designer Digital Interactive, Design Case Study

Fig. 1. Process Lab: Citizen Designer exhibition and signage, on view at Cooper Hewitt

Background

The Process Lab is a hands-on educational space where visitors are invited to get involved in the design process. Process Lab: Citizen Designer complemented the exhibition By the People: Designing a Better America, exploring the poverty, income inequality, stagnating wages, rising housing costs, limited public transport, and diminishing social mobility facing America today.

In Process Lab: Citizen Designer participants moved through a series of prompts and completed a worksheet [fig. 2]. Selecting a value they care about, a question that matters, and design tactics they could use to make a difference, participants used these constraints to create a sketch of a potential solution.

Design Brief

Cooper Hewitt’s Education Department asked Digital & Emerging Media (D&EM) to build an interactive experience that would encourage visitors to learn from each other by allowing them to share and compare their participation in the exhibition Process Lab: Citizen Designer.

I served as project manager and user-experience/user-interaction designer, working closely with D&EM’s developer, Rachel Nackman, on the project. Interface Studio Architects (ISA) collaborated on concept and provided environmental graphics.

Fig. 2. Completed worksheet with question, value and tactic selections, along with a solution sketch

Process: Ideation

Project collaborators—D&EM, the Education Department, and ISA—came together for the initial steps of ideation. Since the exhibition concept and design were already well established, it was clear how participants would engage with the activity. Through the process of using cards and prompts to complete a worksheet, they would generate several pieces of information: a value, a question, one to two tactic selections, and a solution sketch. The group decided that these elements would provide the content for the “sharing and comparing” specification in the project brief.

Of the participant-generated information, the solution sketch stood out as the only non-discrete element. We determined that given the available time and budget, a simple analog solution would be ideal. This became a series of wall-mounted display bins in which participants could deposit their completed worksheets. This left value, question, and tactic information to work with for the content of the digital interactive.

From the beginning, the Education Department mentioned a “broadcast wall.” Through conversation, we unpacked this term and found a core value statement within it. Phrased as a question, we could now ask:

“How might we empower participants to think about themselves within a community so that they can be inspired to design for social change?”

Framing this question allowed us to outline project objectives, knowing the solution should:

  • Help form a virtual community of exhibition participants.
  • Allow individual participants to see themselves in relation to that community.
  • Encourage participants to apply learnings from the exhibition to other communities.

Challenges

As the project team clarified project objectives, we also identified a number of challenges that the design solution would need to navigate:

  • Adding Value, Not Complexity. The conceptual content of Process Lab: Citizen Designer was complex. The design activity had a number of steps and choices. The brief asked that D&EM add features to the experience, but the project team also needed to mitigate a potentially heavy cognitive load on participants.
  • Predetermined Technologies. An implicit part of the brief required that D&EM incorporate the Pen into user interactions. Since the Pen’s NFC-reading technology is embedded throughout Cooper Hewitt, the digital interactive needed to utilize this functionality.
  • Spatial Constraints. Data and power drops, architectural features, and HVAC components created limitations for positioning the interactive in the room.
  • Time Constraints. D&EM had two months to conceptualize and implement a solution in time for the opening of the exhibition.
  • Adapting to an Existing Design. D&EM entered the exhibition design process at its final stages. The solution for the digital interactive had to work with the established participant-flow, environmental graphics, copy, furniture, and spatial arrangement conceived by ISA and the Education Department.
  • Budget. Given that the exhibition design was nearly complete, there was virtually no budget for equipment purchases or external resourcing.

Process: Defining a Design Direction

From the design brief, challenges, objectives, and requirements established so far, we could now begin to propose solutions. Data visualization surfaced as a potential way to fulfill the sharing, comparing and broadcasting requirements of the project. A visualization could also accommodate the requirement of allowing individual participants to compare themselves to the virtual exhibition community by displaying individual data in relation to the aggregate.

ISA and I sketched ideas for the data visualization [figs. 3 and 4], exploring a variety of structures. As the project team shared and reviewed the sketches, discussion revealed some important requirements for the data organization:

  • The question, value and tactic information should be hierarchically nested.
  • The hierarchy should be arranged so that question was the parent of value, and value was the parent of tactics.
Fig. 3. My early data visualization sketches

Fig. 3. My early data visualization sketches

Fig. 4. ISA’s data visualization sketch

Fig. 4. ISA’s data visualization sketch

With this information in hand, Rachel proceeded with the construction of the database that would feed the visualization. The project team identified an available 55-inch monitor to display the data visualization in the gallery; oriented vertically, it could fit into the room. As I sketched ideas for data visualizations I worked within the given size and aspect ratio. Soon it became clear that the number of possible combinations within the given data structure made it impossible to accommodate the full aggregate view in the visualization. To illustrate the improbability of showing all the data, I created a leaderboard with mock values for the thousands of permutations that result from combining 12 value, 12 question and 36 tactic selections [fig. 5, left]. Not only was the volume of information overwhelming on the leaderboard, but Rachel and I agreed that the format made no interpretive meaning of the data. If the solution was to serve the project goal to “empower participants to think about themselves within a community so that they can be inspired to design for social change,” it needed to have a clear message. This insight led to a series of steps towards narrativizing the data with text [fig. 5].

Concurrently, the data visualization component was taking shape as an enclosure chart, also known as a circle packing representation. This format could accommodate both hierarchical information (nesting of the circles) and values for each component (size of the circles). With the full project team on board with the design direction, Rachel began development on the data visualization using the D3.js library.

Fig. 5. Series of mocks moving from a leaderboard format to a narrativized presentation of the data with an enclosure chart

Process: Refining and Implementing a Solution

Through parallel work and constant communication, Rachel and I progressed through a number of decisions around visual presentation and database design. We agreed that to enhance legibility we should eliminate tactics from the visualization and present them separately. I created a mock that applied Cooper Hewitt’s brand to Rachel’s initial implementation of the enclosure chart. I proposed copy that wrapped the data in understandable language, and compared the latest participant to the virtual community of participants. I opted for percentage values to reinforce the relationship of individual response to aggregate. The design was black and white overall, with hot pink to highlight the relationship between the text and the data visualization; a later iteration used pink to indicate all participant data points. I inverted the background in the lower quarter of the screen to separate tactic information from the data visualization so that it was apparent this data was not feeding into the enclosure chart, and I utilized tactic icons provided by ISA to visually connect the digital interactive to the worksheet design [fig. 2].

Next, I printed a paper prototype at scale to check legibility and ADA compliance. This let us analyze the design in a new context and invited feedback from officemates. As Rachel implemented the design in code, we worked with Education to hone the messaging through copy changes and graphic refinements.

Fig. 6. A paper prototype made to scale invited people outside the project team to respond to the design, and helped check for legibility

The next steps towards project realization involved integrating the data visualization into the gallery experience and the web experience on collection.cooperhewitt.org, the collection website. The Pen bridges these two user-flows by allowing museum visitors to collect information in the galleries. The Pen is associated with a unique visit ID for each new session. NFC tags in the galleries are loaded with data by curatorial and exhibitions staff so that visitors can use the Pen to save information to its onboard memory. When they finish their visit, the Pen data is uploaded by museum staff to a central database that feeds into unique URLs established for each visit on the collection site.

The Process Lab: Citizen Designer digital interactive project needed to work with the established system of Pens, NFC tags, and collection site, but also accommodate a new type of data. Rachel connected the question/value/tactic database to the Cooper Hewitt API and collections site. A reader-board at a freestanding station would allow participants to upload Pen data to the database [fig. 7]. The remaining parts of the participant-flow to engineer were the presentation of real-time data on the visualization screen, and the leap from the completed worksheet to digitized data on the Pen.

Rachel found that her code could ping the API frequently to look for new database information to display on the monitor—this would allow for near real-time responsiveness of the screen to reader-board Pen data uploads. Rachel and I decided on the choreography of the screen display together: a quick succession of entries would result in a queue. A full queue would cycle through entries. New entries would be added to the back of the queue. An empty queue would hold on the last entry. This configuration meant that participants might not see their data immediately if the queue was full when they added their entry. We agreed to offload the challenge of designing visual feedback about the queue length and succession to a subsequent iteration in service of meeting the launch deadline. The queue length has not proven problematic so far, and most participants see their data on screen right away.

Fig. 7. Monitor displaying the data visualization website; to the left is the reader-board station
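For the curious, here is a schematic model of the queue choreography we agreed on. The real logic runs client-side on the display website; this sketch just encodes the rules described above:

```php
<?php
// Schematic model of the display queue: new entries join the back,
// a full queue cycles through entries, and an empty queue holds on
// the last entry shown. The production logic lives in the display
// website's client-side code; this is an illustration only.

class DisplayQueue {

    private $queue = array();
    private $last_shown = null;

    public function add_entry($entry) {
        // A quick succession of reader-board uploads builds a queue.
        array_push($this->queue, $entry);
    }

    public function next_to_display() {
        if (count($this->queue) > 0) {
            // Cycle: display the front of the queue, then drop it.
            $this->last_shown = array_shift($this->queue);
        }
        // If the queue is empty, hold on the last entry.
        return $this->last_shown;
    }
}
```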

As Rachel and I brought the reader board, data visualization database, and website together, ISA worked on the graphic that would connect the worksheet experience to the digital interactive. The project team agreed that NFC tags placed under a wall graphic would serve as the interface for participants to record their worksheet answers digitally [fig. 8].

Fig. 8. ISA-designed “input graphic” where participants record their worksheet selections; NFC tags beneath the circles write question, value and tactic data to the onboard memory of the Pen

Process: Installation, Observation & Iteration

Rachel and I had the display website ready just in time for exhibition installation. Exhibitions staff and the project team negotiated the placement of all the elements in the gallery. Because of obstacles in the room, as well as data and power drop locations, the input wall graphic [fig. 8] had to be positioned apart from the reader-board and display screen. This was unfortunate given the interconnection of these steps. Also non-ideal was the fact that ISA’s numeric way-finding system omitted the step of uploading Pen data at the reader-board and viewing the data on-screen [fig. 1]. After installation we had concerns that there would be low engagement with the digital interactive because of its disconnect from the rest of the experience.

As soon as the exhibition was open to the public we could see database activity. Engagement metrics looked good with 9,560 instances of use in the first ten days. The quality of those interactions, however, was poor. Only 5.8% satisfied the data requirements written into the code. The code was looking for at least one question, one value, and one tactic in order to process the information and display it on-screen. Any partial entries were discounted.

Fig. 9. A snippet of database entries from the first few days of the exhibition showing a high number of missing question, value and tactic entries

Conclusion

The project team met the steep challenges of limited time and budget—we designed and built a completely new way to use the Pen technology. High engagement with the digital interactive showed that what we created was inviting, and fit into the participatory context of the exhibition. Database activity, however, showed points of friction for participants. Most had trouble selecting a question, value and tactic on the input graphic, and most did not successfully upload their Pen data at the reader-board. Stringent database requirements compounded the difficulty.

Based on these observations, it is clear that the design of the digital interactive could be optimized. We also learned that some of the challenges facing the project could have been mitigated by closer involvement of D&EM in the larger exhibition design effort. Our next objective is to stabilize the digital interactive at an acceptable level of usability. We will continue observing participant behavior in order to inform our next iterations toward a minimum viable product. Once we meet the usability requirement, our next goal will be to hand off the interactive to gallery staff for continued maintenance over the duration of the exhibition.

As an experience, the Process Lab: Citizen Designer digital interactive has a ways to go, but we are excited by the project’s role in expanding how visitors use the Pen. This is the first time that we’ve configured Pen interactivity to allow visitors to input information and see that input visualized in near real-time. There’s significant potential to reuse the infrastructure of this project in a different exhibition context, adapting the input graphic and data output design to a new educational concept, and the database to new content.

Traveling our technology to the U.K.

Visitors to the London Design Biennale use our “clone” of the Wallpaper Immersion Room.

Recently, we launched a major initiative at the inaugural London Design Biennale at Somerset House. The installation was up from September 7th through the 27th and now that it has closed and the dust has settled, I thought I’d try and explain the details behind all the technology that went into making this project come alive.

Quite a while back, an invitation was extended to Cooper Hewitt to represent the United States in the London Design Biennale, an exhibition featuring 37 countries from around the world. Our initial idea was to spin up a clone of our very popular “Wallpaper Immersion Room” and hand out Cooper Hewitt Pens.

The idea of traveling our technology outside the walls of the Carnegie Mansion has been of great interest to the museum ever since we reopened our doors in 2014. The process of figuring out how to make our technology portable, and have it make sense in different environments and contexts was definitely a challenge we were up for, and this event seemed like the perfect candidate to put that idea through its paces.

So we started out gathering up the basic requirements and working through all that would be needed to make it all come together, including some very generous support from the Secretary of the Smithsonian and the Smithsonian National Board, Bloomberg Philanthropies, and Amita and Purnendu Chatterjee.

The short version is, this was a huge undertaking. But it all worked in the end, and visitors at the first-ever London Design Biennale were able to use Cooper Hewitt Pens to explore 101 wallpapers from our collection, create their own designs, and save them. Plus, visitors could collect and save installations from other Biennale participants.

Thanks to a whole bunch of people, there's an Immersion Room in London @london_design_biennale #ldb16

A photo posted by Micah Walter (@micahwalter)

Thanks to a whole lot of people, there are @cooperhewitt pens in London @london_design_biennale #ldb16

A photo posted by Micah Walter (@micahwalter)

The long version is as follows.

An Immersion Room in England

First and foremost, we wanted to bring the Immersion Room over as our installation for the London Design Biennale. So, let’s break down what makes the Immersion Room what it is.

The original Immersion Room, designed by Cooper Hewitt and Local Projects, made its debut when the museum reopened in December 2014, following a major renovation. It is essentially an interactive experience where visitors can manipulate a digital interactive touch-table to browse our collection of wallpapers and view them at scale, in real time, via twin projectors mounted to the ceiling. Additionally, visitors can switch into design mode and create their own wallpapers, adjusting the scale, orientation, and positioning of a repeating pattern on the wall. This latter feature is arguably what makes the experience what it is. Visitors from all walks of life love spending time drawing in the Immersion Room, typically resulting in a selfie or two like the ones you see in the images below.

#cooperhewitt #londondesignbiennale

A photo posted by Sibel Yalcin (@sibellyalcin)

#londondesignbiennale #immersionroom #cooperhewitt #doodling

A photo posted by Helen (@helen3210)

What I’ve just described is essentially the minimum viable product for this entire effort. One interactive table, two ceiling-mounted projectors, a couple of computers, and a couple of walls to project on.

From bar napkin to fabrication–we've managed to clone the Immersion Room!

A photo posted by Micah Walter (@micahwalter)

The Immersion Room uses two separate computers, each running an application written in OpenFrameworks. There is the “projector app,” which manages what is displayed to the two projectors, and there is the “table app,” which manages what visitors see and interact with on the 55” Ideum table. The two apps communicate with each other over a local network, with the table app essentially instructing the projector app on what it should be displaying in real time.

Here is a basic diagram of how that all fits together.

Twin projector and computer setup for Wallpaper Immersion Room

Each application loads in content on startup. This is provided to the application by a giant JSON file that is managed by our Collections API and meant to be updated each night through a cron job. When the applications start up, they look at the JSON file and pull down any new or changed assets they might need.

At Cooper Hewitt, this means that our curators are able to update content whenever they want using our collections management system, The Museum System (TMS). Updates they make in TMS get reflected on the digital table following a data-deploy and reboot of the table and projector applications. This is essentially the workflow at Cooper Hewitt. Curators fill in object data in TMS, and through a series of tubes, that data eventually finds its way to the interactive tables and our collections website and API. For this project in London, we’d do essentially the same process, with a few caveats.
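Sketched out, the nightly data-deploy is nothing more than pulling current content from the API and atomically swapping in the new JSON file that the apps read on startup. The method name, file paths, and environment variables below are illustrative stand-ins, not our production configuration:

```php
<?php
// Illustrative sketch of the nightly content deploy, run from cron.
// Method name, file paths, and env vars are stand-ins.

$api_url = 'https://api.collection.cooperhewitt.org/rest/';

$params = http_build_query(array(
    'method'        => 'cooperhewitt.exhibitions.getObjects', // illustrative
    'access_token'  => getenv('CH_ACCESS_TOKEN'),
    'exhibition_id' => getenv('EXHIBITION_ID'),
));

$json = file_get_contents($api_url . '?' . $params);

// Write to a temp file first so the apps never read a half-written file.
file_put_contents('/data/immersion-room/content.json.tmp', $json);
rename('/data/immersion-room/content.json.tmp', '/data/immersion-room/content.json');
```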

Make it do all the things

We started asking ourselves a number of questions. It was a mix of feature creep and a strong desire to put some of the technology we’ve built through its paces–to determine if it’s possible to recontextualize much of what we’ve created at Cooper Hewitt and have it work outside the museum walls.

Questions like:

  • What if we want to allow visitors to save the wallpapers and the designs they create?
  • What if we wanted to hand out a Cooper Hewitt Pen to each visitor?
  • What if we want to let people use the Pen to save their creations, wallpapers, and ALL the other installations around the Somerset House?!

All of a sudden, the project becomes a bit more complicated, but still, a great way to figure out how we would translate a ton of the technology we’ve built at Cooper Hewitt into something useful for the rest of the world. We had loads of other ideas, features, and add-ons, but at some point you have to decide what falls in and out of scope.

Unpacking 700 Cooper Hewitt Pens we shipped to the U.K., batteries not included!

So this is what we decided to do.

  • We would devise a way to construct the physical build out of a second Immersion Room. This would essentially be a “set” with walls and a truss system for suspending two rather heavy projectors. It would have a floor, and would be slightly off the ground so we could conceal wiring and create a place for the 55” touch table to rest.
  • We’d pre-fabricate the entire rig in New York and ship it to London to be assembled onsite.
  • We’d enable the Immersion Room to allow visitors to save from a selection of 101 wallpapers from our permanent collection. These would be curated for the Utopia theme of the London Design Biennale.
  • We’d enable the design feature of the Immersion Room and allow visitors to save their designs.
  • We’d hand out Cooper Hewitt Pens to each visitor who wanted one, along with a printed receipt containing a URL and a unique code.
  • We’d post coded NFC tags all throughout Somerset House to allow visitors to use their Pens to collect information about each participating country, including our own.
  • We’d build a bespoke website where visitors would go following their visit to see all the things they’ve collected or created.

These are all of the things we decided to do from a technology standpoint. Here is how we did it.

pen-www

The first step to making this all work was to extract the relevant code from our production collections website and API. We named this “pen-www” and intended that this codebase serve as a mini framework for developing a collecting system and website. In essence it’s simply a web application (written in PHP) and a REST API (also PHP). It really needed to be “just the code” required to make all the above work. So here is another list, explaining what all those requirements are.

  • It needs to somehow generate a simple collections website that is capable of storing relevant info about all the things one could potentially collect. This was very similar to our current codebase at Cooper Hewitt, but we added the idea of “organizations” so that you could have multiple participants contributing info, and not just Cooper Hewitt.
  • It needs all the API methods that make the Pen work. There are actually just a handful that do all the hard work. I’ll get to those in a bit.
  • It needs to handle image uploads and processing of those images (saved designs from the Immersion Room table).
  • It needs to create “visits” which are the pages a visitor lands on when entering their unique code.
  • It needs a series of scripts to help us import data and set things up.
  • We would also need some new code to allow us to generate paper receipts with unique codes printed on them. At Cooper Hewitt this is all done via our Tessitura ticket printing system, so since we wouldn’t have that at Somerset House, we’d need to devise a new way of dealing with registering and pairing pens, and printing out some kind of receipt.

So, pen-www would become this sort of boilerplate framework for the Pen. The idea being, we’d distill the giant codebase we’ve developed at Cooper Hewitt down to the most essential parts, and then make it specific to what we wanted to do for London. This is an important point. We aren’t attempting to build an actual framework. What we are trying to do is to boil out the necessary code as a starting point, and then use that code as the basis for a new project altogether.

From our point of view, this exercise allows us to understand how everything works, and gets us close enough to the core code so that we can think of repeating this a third or a fourth time—or more.

The API at the center of everything

We built the Cooper Hewitt API with the intention of making it flexible enough to be easily expanded upon or altered. It tries to adhere to the REST API pattern as much as it can, but it’s probably better described as “REST-ish.” What’s nice about this approach is that we’ve been able to build lots and lots of internal interfaces using this same pattern and code base. This means that when we want to do something as bespoke as building an entire replica of our seemingly complex Pen/Visit system, and deploy it in another country, we have some ground to stand on.

In fact, just about all of the systems we have built use the API in some way. So, in theory, spinning up a new API for the London project should just mean pointing things like the Immersion Room interactive table at a new API endpoint. Since the methods are the same, and the responses use the same pattern, it should all just work!

So let’s unpack the API methods required to make the Pen and Immersion Room come to life. These are all internal/private API methods, so you can’t take them for a spin, and I can’t share the actual code with you that lies beneath, but I think you’ll get the idea.

Pens – there’s a whole class of API methods that deal with the Pen itself. Here are the relevant ones:

  • pens.checkoutPen – This marks a Pen as having been checked out for an associated visit
  • pens.getCurrentCheckout – This gets the currently checked out Pen for a specific visit
  • pens.getCurrentVisit – This does the opposite of getCurrentCheckout, and returns a visit for a specific Pen.
  • pens.returnPen – This marks the Pen as having been returned.

Visits – There is another class of API methods that deal with the idea of “visits.” A visit is meant to represent one individual’s visit to the museum or exhibition, or some other physical location. Each visit has an ID and a corresponding unique code (the thing we print on a visitor’s paper receipt).

  • visits.getActivity – Returns all the activity associated with a visit
  • visits.getInfo – Returns detailed info about a specific visit
  • visits.processPenActivity – This is a major API method that takes any activity recorded by the Pen and processes it before storing the info in the appropriate location in the database. This one gets called frequently; it’s the method that fires when you tap your Pen on a reader board at one of our digital tables. The reader board downloads all the info on the Pen, and calls this API method to deal with whatever came across.
  • visits.registerVisit – This marks a visit as having been registered. It’s what generates your unique code for the visit.

Believe it or not, that is basically it. It’s just a handful of actions that need to be performed to make this whole thing work. With these methods in place, we can:

  • Pair pens with newly created visits so we can hand Pens out to visitors.
  • Process data collected by the Pen, either from NFC stickers it has read, or via our Interactive Table.
  • Do a final read of the Pen and return the Pen to the pool of possible pens.
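To make that concrete, here is roughly what a reader-board tap amounts to in terms of those methods. The api_call() helper and the response fields are hypothetical stand-ins; the method names are the real ones listed above:

```php
<?php
// Sketch of a reader-board tap expressed with the API methods above.
// api_call() and the response fields are hypothetical stand-ins.

function handle_pen_tap($pen_id, $pen_activity) {

    // Find the visit currently paired with this Pen.
    $rsp = api_call('pens.getCurrentVisit', array(
        'pen_id' => $pen_id,
    ));

    // Hand everything the Pen recorded (NFC tags read, table saves)
    // to the API, which files it under the right visit.
    api_call('visits.processPenActivity', array(
        'visit_id' => $rsp['visit_id'],
        'activity' => $pen_activity,
    ));
}

// At the end of a visit, the desk does a final read and frees the Pen:
// api_call('pens.returnPen', array('pen_id' => $pen_id));
```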

So, now that we have an API and all the relevant methods, we can start building the website and massaging the API code to do things in the slightly different ways that make this whole thing live up to its bespokiness.

On the website end of things we will follow the KISS principle or “Keep it simple, stupid.” The site will be devoid of fancy image display features, extended relationship mapping and tagging, and all the goodies we’ve spent years developing for the Cooper Hewitt Collections website. It won’t have search, or fancy search, or search by color, or search by anything. It won’t have a shoebox or even a random button (ok, maybe I’ll add that later today). For all intents and purposes, the website will simply be a place to enter your unique code, and see all your stuff.

https://londonbiennale.cooperhewitt.org

The website and its API will live at https://londonbiennale.cooperhewitt.org. It consists of just two web front ends running Apache and sitting behind an NGINX load balancer, and one MySQL instance via Amazon’s RDS service. It’s very, very similar to just about all of our other systems and services except that it doesn’t have fancy extras like Logstash logging, or an Elasticsearch index. I did take the time to install server monitoring and alerting, just so we can sleep at night, but really, it’s pretty bare bones.

At first glance there isn’t much there to look at. You can browse the different participants and you can create a Cooper Hewitt account or sign in using our Single Sign On service, but other than that, there is really just one thing to do–enter your code and see all your stuff.

Participants

All your content are belong to us

In order for this project to really work, we’d need to have content. Not only our own Cooper Hewitt content, but content from all the participants representing the 36 other countries from around the world.

So here is the breakdown:

  • Each participant or organization will have a page, like this one for Australia https://londonbiennale.cooperhewitt.org/participants/australia/
  • Each participant will have one “object.” In the case of all 37 participants, this object will represent their “booth” like this one from Australia – https://londonbiennale.cooperhewitt.org/objects/37643049/
  • Each “booth” will contain an image and the catalog text provided by the London Design Biennale team. If there is time, we will consider adding additional information from each participant (we haven’t done this as of yet).
  • Cooper Hewitt’s record will have some more stuff. In addition to the object representing Cooper Hewitt’s booth, we will also have 101 wallcoverings from our permanent collection.
  • You can collect all of these via the Immersion Room table and your Pen. Here is our page – https://londonbiennale.cooperhewitt.org/participants/usa/ – and there are also two physical wallpapers that are part of our installation, which you can of course collect as well.

All told, that means 140 objects in this little microsite/sitelet. You can actually browse them all at once if you are so inclined here – https://londonbiennale.cooperhewitt.org/objects/

"Booth" pages

“Booth” pages

Visit Pages

So what does a visitor get when they go to the webpage and type in their unique code? Well, the answer to that question is “it depends.” For objects that we imported from our permanent collection (the 101 wallpapers), you get a nice photo of the wallpaper and a chatty description of it written by our curator, Greg Herringshaw, having to do with “Utopia” — the theme of this year’s London Design Biennale. You also get a link back to the collection page on the Cooper Hewitt website. For the 37 booths, you get a photo and the catalog info for each participant, and if you created and saved your own design in the Wallpaper Immersion Room, you get a copy of the PNG version of your design, which you can, of course, download and do with what you like. (Hint: they make cool wall posters.)

Additionally, you get timestamps related to your visit. This way, just like on the Cooper Hewitt website, you get to retain a record of your visit–the date and time of each collected object and a way to recall your visit anytime in the future.

Visit page example

Slow Progress

All of this code replication, extraction, and re-configuring took quite a long time. The team spent long hours, nights, and weekends trying to sort it all out. In theory this should all just work, but like any project, there are unique aspects to the thing you are currently trying to accomplish, which means that, no matter what, you’re gonna be writing some new code.

Ok, so let’s check in with what we’ve got so far.

  • A physical manifestation of the Wallpaper Immersion Room and all its hardware, computers, wires, etc.
  • A website and API to power all the fun stuff.
  • A bunch of content from our own permanent collection and the catalog info from the London Design Biennale team.
  • Visit pages

We still need the following:

  • A way to issue Pens to visitors as they arrive.
  • A way to print a unique code on some kind of receipt, which we give to the visitors as well.
  • A way to check in Pens as visitors return them.
  • The means to get the table pointing at the right API endpoint so it can save things and call processPenActivity as well.

To accomplish the first three items on the list, we enlisted the help of Rev Dan Catt.

That time @revdancatt assembled 700 pens for @london_design_biennale #ldb16

A photo posted by Micah Walter (@micahwalter)

Dan is planning to write another extensive blog post about his role in all of this, but in a nutshell, he took our Pen registration code and built his own little mini-registration station and ticket printer. It’s pictured below and performs all of the functions above (1 through 3). It uses a small Adafruit thermal printer to print the receipts and unique codes, and it is simple enough to use with a small web based UI to give the operator some basic feedback. Other than that, you tap a pen and it does the rest.

Dan’s Raspberry Pi powered Pen registration and ticket printing station.

Tickets printing for the first time

For the last item on the list, I had to re-compile the code Local Projects delivered to us. In the code I had to find the references to the Cooper Hewitt API endpoints and adjust them to point at the London API endpoint. Once I did this and recompiled the OpenFrameworks project, we were in business. For a while, I had it all set up for development and testing on my laptop using Parallels and Visual Studio. Eventually I compiled a final version and we installed it on the actual Immersion Room table.

Working on the OpenFrameworks code on Parallels on my MacBook Pro

Cracking open the Local Projects code was a little scary. I’m not really an OpenFrameworks programmer, or at least I haven’t been since grad school, and the Local Projects code base is pretty vast. We’ve had this code compiled and running on all the interactive tables at Cooper Hewitt since December of 2014, and this was the first time anyone I know of had attempted to recompile it from source, not to mention make changes to it beforehand.

That said, it all worked just fine. I had to find an old copy of Visual Studio 2012, but other than that, and tracking down a few dependencies, it wasn’t a very big deal. Now we had a copy of the Immersion Room table application set up to talk to the London API endpoint. As I mentioned before, all the API methods are named the same, and set up the same way, so the data began to flow back and forth pretty quickly.

Content Management

I mentioned above that we had to import 100 wallpapers from our collection as well as the data for all 37 booths. To accomplish all of this, we wrote a bunch of Python and PHP scripts.

We needed to do the following with regard to content:

  • Create a record for each of the 37 participants
  • Import the catalog info as an object for each of the 37 participants
  • Import the 101 wallcoverings from the Cooper Hewitt collection. We just used, you guessed it, our own API to do this.
  • Massage the JSON files that live on the Projector and Table applications so they have the correct 101 wallpapers and all their metadata.
  • Display the emoji flag for each country, because emoji.

In the end, this was just a matter of building the necessary scripts and running them a number of times until we had what we wanted. As a sort of side note, we decided to use London Integers for this project instead of Brooklyn Integers, which we normally use at Cooper Hewitt, but that’s probably a topic for a future post.
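As an example of the shape of those scripts, here is a condensed sketch of the wallcovering import. The API method shown, cooperhewitt.objects.getInfo, is part of our public Collections API; the helper function and input file are stand-ins:

```php
<?php
// Condensed sketch of the wallcovering import: for each object ID on the
// curated list, fetch its record from the Cooper Hewitt Collections API
// and write it to the London database. Helper names are stand-ins.

$api_url = 'https://api.collection.cooperhewitt.org/rest/';

$object_ids = file('wallcoverings.txt', FILE_IGNORE_NEW_LINES);

foreach ($object_ids as $object_id) {

    $params = http_build_query(array(
        'method'       => 'cooperhewitt.objects.getInfo',
        'access_token' => getenv('CH_ACCESS_TOKEN'),
        'object_id'    => $object_id,
    ));

    $rsp = json_decode(file_get_contents($api_url . '?' . $params), true);

    // insert_object() stands in for the code that writes the record
    // (title, description, images) to the pen-www database.
    insert_object($rsp['object']);
}
```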

Shipping code, literally

At some point we would need to put all the hardware and construction pieces into crates and ship them across the pond. At the time, our thinking was to get the code running on the digital table and projector computers as close to production ready as we could. We installed all the final builds of the code on the two computers, packed them up with the 55” interactive table, and shipped them over to London, along with six other crates full of the “set” and all its hardware and parts. It was, in a nutshell, impressive.

As the freight went to London, we continued working on the website code back home—making the site look the way we wanted it to look and behave the way we wanted it to behave. As I mentioned before, it’s pretty feature free, but it still required some spit and polish in the form of some of Rachel’s Sassy-CSS. Eventually we all settled on the aesthetics of the site, added a lockup that reflected both the Cooper Hewitt and London Design Biennale brands (both happen to be by Pentagram) and called it a day. We continued testing the table application and Dan continued working on the Pen registration app and receipt printer so it would be ready when we landed.

Building the set with the team at Somerset House.

We landed, started to build the set, and many, many things started to go wrong. I think all of the things that went wrong are probably the topic of yet another blog post, but let’s just say for now: if you ever decide to travel a whole bunch of A/V equipment and computers to another country, get everything working with the local power standard and don’t try to transform anything.

2500 batteries #ldb16

A photo posted by Micah Walter (@micahwalter)

Eventually, through a lot of long days and sleepless nights, and with the help of many, many kind-hearted people, we managed to get all the systems up and running, and everything began to work.


We flipped the switch and the whole thing came to life and visitors started to walk up to our booth, curious and excited to see what it would do. We started handing out Pens and I started watching the data flow through.

By the close of the show, visitors had used the Pen to collect over 27,000 objects. Eventually, I’ll do a deeper data analysis, but for now, the feeling is really great. We created a portable version of the Pen and all of its underlying systems. We traveled a giant kit of A/V tech and parts overseas, and now people in a country other than the United States can experience what Cooper Hewitt is all about: a dynamic, interactive deep dive into design.

Design your own Utopia at the London Design Biennale

-m

Exhibition Channels on Cooperhewitt.org

There’s a new organizational function on cooperhewitt.org that we’re calling “channels.” Channels are a filtering system for WordPress posts that allow us to group content in a blog-style format around themes. Our first iteration of this feature groups posts into exhibition-themed channels. Subsequent iterations can expand the implementation of channels to broader themed groupings that will help break cooperhewitt.org content out of the current menu organization. In our long-term web strategy this is an important progression to making the site more user-focused and less dictated by internal departmental organization.

The idea is that channels will promote browsing across different types of content on the site because any type of WordPress post—publication, event, Object of the Day, press, or video—can be added to a channel. Posts can also live in multiple channels at once. In this way, the channel configuration moves us toward our goal of creating pathways through cooperhewitt.org content that focus on user needs; as we develop a clearer picture of our web visitors, we can start implementing channels that cater to specific sets of users with content tailored to their interests and requirements. Leaning more heavily on posts and channels than pages in WordPress also leads us into shifting our focus from website = a static archive to website = an ever-changing flow of information, which will help keep our web content fresher and more engaged with concurrent museum programs and events.

The Fragile Beasts exhibition channel page. Additional posts in the channel load as snippets below the main exhibition post (pictured here). The sidebar is populated with metadata entered into custom fields in the CMS.

In WordPress terms, channels are a type of taxonomy added through the CustomPress plugin. We enabled the channel taxonomy for all post types so that in the CMS our staff can flag posts to belong to whichever channels they wish. For the current exhibition channel system to work we also created a new type of post specifically for exhibitions. When an exhibition post is added to a channel, the channel code recognizes that this should be the featured post, which means its “featured image” (designated in the WordPress CMS) becomes the header image for the whole channel and the post is pinned to the top of the page. The exhibition post content is configured to appear in its entirety on the channel page, while all other posts in the channel display as snippets, cascading in reverse chronological order.
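In WP_Query terms, rendering a channel page boils down to a taxonomy query plus a little pinning logic. The sketch below is simplified, and the taxonomy slug is illustrative:

```php
<?php
// Simplified sketch of a channel page query. "channel" is the custom
// taxonomy added via CustomPress; the slug here is illustrative.

$channel = get_query_var('term'); // e.g. "fragile-beasts"

$channel_posts = new WP_Query(array(
    'post_type' => 'any',          // channels accept every post type
    'tax_query' => array(array(
        'taxonomy' => 'channel',
        'field'    => 'slug',
        'terms'    => $channel,
    )),
    'orderby'   => 'date',
    'order'     => 'DESC',         // snippets cascade reverse-chronologically
));

// The template then pulls any exhibition post out of the loop, pins it
// to the top, and uses its featured image as the channel header.
```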

Through CustomPress we also created several custom fields for exhibition posts, which populate the sidebar with pertinent metadata and links. The new custom fields on exhibition posts are: Exhibition Title, Collection Site Exhibition URL, Exhibition Start Date, and Exhibition End Date. The sidebar accommodates important “at-a-glance” information provided by the custom field input: for example, if the date range falls in the present, the sidebar displays a link to online ticketing. Tags show up as well to act as short descriptors of the exhibition and channel content. The collection site URL builds a bridge to our other web presence at collection.cooperhewitt.org, where users can find extended curatorial information about the exhibition.
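The sidebar logic follows directly from those fields. This is a hedged sketch; the actual meta keys are whatever CustomPress registered, so the field names and URLs below are illustrative:

```php
<?php
// Sketch of the sidebar driven by the exhibition custom fields.
// Meta keys and URLs are illustrative, not the production values.

$start = get_post_meta($post->ID, 'exhibition_start_date', true);
$end   = get_post_meta($post->ID, 'exhibition_end_date', true);
$url   = get_post_meta($post->ID, 'collection_site_exhibition_url', true);

$now = time();

// If the date range falls in the present, show the ticketing link.
if ($now >= strtotime($start) && $now <= strtotime($end)) {
    echo '<a href="/tickets/">Buy tickets</a>';
}

// Bridge to the extended curatorial content on the collection site.
if ($url) {
    echo '<a href="' . esc_url($url) . '">View on collection.cooperhewitt.org</a>';
}
```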

The sidebar on the Fragile Beasts exhibition channel page displays quick reference information and links.

On a channel page, clicking on a snippet (below the leading exhibition post) directs users to a post page where they can read extended content. On the post page we added an element in the sidebar called “Related Channels.” This link provides navigation back to the channel from which users flowed. It can also be a jumping-off point to a new channel. Since posts can live in multiple channels at once this feature promotes the lateral cross-content navigation we’re looking to foster.

The sidebar on post pages provides “Related Channel” navigation, which can be a hub to jump into several editorial streams.

Our plan over the coming weeks is to on-board CMS users to the requirements of the new channel system. As we launch new channels we will help keep information flowing by maintaining a publishing schedule and identifying content that can fit into channel themes. Our upcoming exhibition Scraps: Fashion, Textiles and Creative Reuse will be our first major test of the channels system. The Scraps channel will include a wealth of extra-exhibition content, which we’re looking forward to showcasing with this new system.

My mock-up for the exhibition channel structure and design. Some of the features on the mock were knocked off the to-do list in service of getting an MVP on the live site. Additional feature roll-out will be ongoing.

A Very Happy & Open Birthday for the Pen


Today marks the first birthday of our beloved Pen. It’s been an amazing year, filled with many iterations, updates, and above all, visits! Today is a celebration of the Pen, but also of all of our amazing partners whose continued support has helped to make the Pen a reality. So I’d like to start with a special thank-you first and foremost to Bloomberg Philanthropies for their generous support of our vision from the start, and to all of our team partners at Sistel Networks, GE, Undercurrent, Local Projects, and Tellart.

Updates

Over the course of the past year, we’ve been hard at work making the Pen experience at Cooper Hewitt the best it can be. Right after we launched the Pen, we immediately realized there was quite a bit of work to do behind the scenes so that our Visitor Experience staff could better deal with deploying the Pen, and so that our visitors would have the best experience possible.

Here are some highlights:

Redesigning post-purchase touchpoints – We quickly realized that our ticket purchase flow needed to be better. This article goes over how we tried to make improvements so that visitors would have a more streamlined experience at the Visitor Experience desk and afterwards.

Exporting your visits – The idea of “downloading” your data seemed like an obvious necessity. It’s always nice to be able to “get all your stuff.” Aaron built a download tool that archives all the things you collected or created and packages it in a nice browser friendly format. (Affectionately known as parallel-visit)

Improving Back-of-House Interactions – We spent a lot of time behind the visitor services desk trying to understand where the pain points were. This is an ongoing effort, which we have iterated on numerous times over the year, but this post recounts the first major change we made, and it made all the difference.

Collecting all the things – We realized pretty quickly that visitors might want to extend their experience after they’ve visited, or, more simply, save things on our website. So we added the idea of a “shoebox” so that visitors to our website could save objects, just as if they had a Pen and were in our galleries.

Label Writer – In order to deploy and rotate new exhibitions and objects, Sam built an Android-based application that allows our exhibition staff to easily program our NFC based wall labels. This tool means any staff member can walk around with an Android device and reprogram any wall label using our API. Cool!

Improving visitor information with paper – Onboarding new visitors is a critical component. We’ve since iterated on this design, but the basic concept is still there–hand out postcards with visual information about how to use the Pen. It works.

Visual consistency – This has more to do with our collections website, but it applies to the Pen as well, in that it helps maintain a consistent look and feel for our visitors after their visit. This was a major overhaul of the collections website that we think makes things much easier to understand and helps provide a more cohesive experience across all our digital and physical platforms.

Iterating the Post-Visit Experience – Another major improvement to our post-visit end of things. We changed the basic ticket design so that visitors would be more likely to find their way to their stuff, and we redesigned what it looks like when they get there.

Press and hold to save your visit – This is another experimental deployment where we are trying to find out if a new component of our visitor experience is helpful or confusing.

On Exhibitions and Iterations – Sam summarizes the rollout of a major exhibition and the changes we’ve had to make in order to cope with a complex exhibition.

Curating Exhibition Video for Digital Platforms – Lisa makes her Labs debut with this excellent article on how we are changing our video production workflow and what that means when someone collects an object in our galleries that contains video content.

The Big Numbers

Back in August we published some initial numbers. Here are the high-level updates.

Here are some of the numbers we reported in August 2015, all covering March 10 to August 10:

  • Total number of times the Pen has been distributed – 62,015
  • Total objects collected – 1,394,030
  • Total visitor-made designs saved – 54,029
  • Mean zero collection rate – 26.7%
  • Mean time on campus – 99.56 minutes
  • Post-visit website retrieval rate – 33.8%

And here are the latest numbers, covering March 10, 2015 through March 9, 2016:

  • Total number of times the Pen has been distributed – 154,812
  • Total objects collected – 3,972,359
  • Total visitor-made designs saved – 122,655
  • Mean zero collection rate – 23.8%
  • Mean time on campus – 110.63 minutes
  • Post-visit website retrieval rate (measured Feb 25 to March 9, 2016) – 28.02%

That last number is interesting. A few weeks ago we added some new code to our backend system to better track this data point. Previously we had relied on Google Analytics to tell us what percentage of visitors accessed their post-visit website, but we found this to be pretty inaccurate: it didn’t account for multiple accesses to the same visit by multiple users (think social sharing of a visit), so the number was typically higher than what we thought reflected reality.

So, we are now tracking a visit page’s “first access” in code and storing that value as a timestamp. This means we now have a very accurate picture of our post visit website retrieval rate and we are also able to easily tell how much time there is between the beginning of a visit and the first access of the visit website–currently at about 1 day and 10 hours on average.
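
For the curious, here’s a minimal sketch of what this kind of first-access bookkeeping might look like. The record structure and field names are made up for illustration; they are not our actual schema.

```python
from datetime import datetime, timezone

# Hypothetical visit records; the field names are illustrative, not
# our actual schema. first_access stays None until the post-visit
# page is opened for the first time.
visits = [
    {"started": datetime(2016, 3, 1, 11, 5, tzinfo=timezone.utc),
     "first_access": datetime(2016, 3, 2, 20, 40, tzinfo=timezone.utc)},
    {"started": datetime(2016, 3, 1, 14, 30, tzinfo=timezone.utc),
     "first_access": None},
    {"started": datetime(2016, 3, 2, 10, 0, tzinfo=timezone.utc),
     "first_access": None},
]

def record_first_access(visit, now=None):
    """Stamp a visit the first time its post-visit page is opened."""
    if visit["first_access"] is None:
        visit["first_access"] = now or datetime.now(timezone.utc)

# Simulate a visitor opening the second visit's page for the first time.
record_first_access(visits[1], datetime(2016, 3, 3, 9, 0, tzinfo=timezone.utc))

retrieved = [v for v in visits if v["first_access"] is not None]
retrieval_rate = len(retrieved) / len(visits)
mean_delay_hours = sum(
    (v["first_access"] - v["started"]).total_seconds() for v in retrieved
) / len(retrieved) / 3600

print(f"retrieval rate: {retrieval_rate:.1%}")             # 66.7% here
print(f"mean time to first access: {mean_delay_hours:.1f} hours")
```

The key design choice is that the timestamp is written only once, so repeat visits to a page and social shares don’t inflate the rate.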

The Pen generates a massive amount of data, so we decided to publish some of the higher-level statistics on a public webpage, which you can always check in on at https://collection.cooperhewitt.org/stats. This page reports daily and includes a few basic stats as well as a list of the most popular objects of all time. Yes, it’s the staircase models. They’ve been the frontrunners since we launched.

Those staircase models!

As you can see, we are just about to hit 4 million objects collected. This is pretty significant: it means our visitors have used the Pen to collect, on average, about 26 objects per visit.

But it’s hard to get a real sense of what’s going on from the high-level numbers alone, so let’s track some things over time. Below is a chart showing objects collected per day over the last year.

Objects collected by day since March 10, 2015

On the right you can easily see a big jump. This corresponds with the opening of the exhibition Beauty–Cooper Hewitt Design Triennial. It’s partly due to increased visitation following the opening, but what’s really going on here is heavy use of object bundling. If you follow this blog, you’ll have recently read Sam’s post about the need to bundle many objects on one tag, which means that when a visitor taps their Pen on a tag, they very often collect multiple objects. Beauty makes heavy use of this feature, bundling a dozen or so objects per tag in many cases, resulting in a dramatic increase in collected objects per day.
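
To make the mechanics concrete, here’s a minimal sketch of how bundling might be modeled. The tag and object IDs below are invented for illustration and don’t correspond to real records.

```python
# Hypothetical mapping of NFC tag IDs to collection object IDs; both
# the tag IDs and object IDs here are made up. In Beauty, many tags
# bundle a dozen or so objects, so one Pen tap collects several records.
TAG_BUNDLES = {
    "tag:0001": ["object:1001"],                    # a one-to-one label
    "tag:0002": ["object:2001", "object:2002",      # a bundled label
                 "object:2003", "object:2004"],
}

def collect(visit_objects, tag_id):
    """Add every object bundled on a tag to a visitor's running list."""
    visit_objects.extend(TAG_BUNDLES.get(tag_id, []))
    return visit_objects

my_visit = []
collect(my_visit, "tag:0002")
print(len(my_visit))  # 4 objects from a single tap
```

The upshot: checkouts can grow modestly while objects collected per day grows dramatically.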

Pen checkouts per day since March 10, 2015

We can easily see that this is, in fact, what is happening if we look at our daily Pen checkouts. Here we see a reasonable increase in checkouts following the launch of Beauty, but nothing nearly as dramatic as the jump in objects collected each day.

Immersion room creations by day since March 10, 2015

Above is a chart showing how many designs were created in the immersion room each day over the past year. This number, too, is directly connected to visitor volume, but it’s interesting to see how steady it holds across the period. The immersion room is one of our more popular interactive installations and has been on view since we launched, so its fairly even curve is no big surprise. Also, keep in mind that this represents only “things saved,” as we are not tracking the thousands of drawings that visitors make and walk away from.

We can slice and dice the Pen data all we want. I suppose we could take requests. But I have a better idea.

Open Data

Today we are opening up the Pen Data. This means a number of things, so listen closely.

  1. The data we are releasing is an anonymized and obfuscated version of some of the actual data.
  2. If you saved your visit to an account within thirty days of this post (and future data updates) we won’t include your data in this public release.
  3. This data is being licensed under Creative Commons – Attribution, Non-Commercial. This means a company can’t use this data for commercial purposes.
  4. The data we are releasing today is meant to be used in conjunction with our public domain collection metadata or our public API.

The data we are releasing is meant to facilitate a better understanding of Cooper Hewitt, its collection, and its interactive experiences. The idea is that designers, artists, researchers, and data analysts will have easy access to the data generated by the Pen and will be able to analyze it and create data visualizations so that we can better understand the impact our in-gallery technology has on visitors.

We believe there is a lot more going on in our galleries than we currently understand. Visitors are spending incredible amounts of time at our interactive tables, and have been using the Pen in ways we hadn’t originally thought of. For example, we know that some visitors (children especially) try to collect every single object on view. We call these our treasure hunters. We also know that a percentage of our visitors take a Pen and don’t use it to collect anything at all, though they tend to use the stylus end quite a bit. Through careful analysis of this kind of data, we believe we can begin to uncover behavior patterns and aspects of “collecting” we haven’t yet discovered.
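
As a starting point, here’s a rough sketch of the kind of analysis we have in mind. The file names, column layout, and the 200-object “treasure hunter” threshold are all assumptions for illustration; check the actual data release for the real format.

```python
import csv
from collections import Counter

# The file names and columns below are assumptions for illustration;
# consult the actual data release for the real layout.
with open("pen-checkouts.csv", newline="") as f:
    visit_ids = [row["visit_id"] for row in csv.DictReader(f)]

items_per_visit = Counter()
with open("pen-collected-items.csv", newline="") as f:
    for row in csv.DictReader(f):
        items_per_visit[row["visit_id"]] += 1

# Zero-collection visits: a Pen went out but nothing was collected.
zero_rate = sum(1 for v in visit_ids if items_per_visit[v] == 0) / len(visit_ids)

# "Treasure hunters": visits with an unusually large haul. The
# 200-object threshold is an arbitrary cutoff for this sketch.
hunters = [v for v in visit_ids if items_per_visit[v] >= 200]

print(f"zero collection rate: {zero_rate:.1%}")
print(f"possible treasure hunters: {len(hunters)}")
```

Swap in your own thresholds and definitions; that’s rather the point of releasing the data.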

If that sounds like you and you’re curious enough to take our data for a spin, please get in touch; we’d love to see what you create!

Curating Exhibition Video for Digital Platforms

First, let me begin this post with a hearty “hello”! This is my first Labs blog post, though I’ve been on board with the Digital and Emerging Media team since July 2015 as Media Technologist. Day-to-day I participate in much of the Labs activity that you’ve read about here: maintaining and improving our website; looking for ways to enhance visitor experience; and expanding the meaningful implementation of technology at Cooper Hewitt. In this post I will focus on the slice of my work that pertains to video content and exhibitions.

Detail: Brochure, Memphis (Condominiums): Portfolio, 1985

The topic of exhibition video is fresh in my mind since we are just off the installation of Beauty—Cooper Hewitt Design Triennial. This is a multi-floor exhibit that contains twenty-one videos hand-picked or commissioned by the exhibition curators. My part in the exhibition workflow is to format, brand, caption and quality-check videos, ushering them through a production flow that results in their display in the galleries and distribution online. Along with the rest of the Labs team, I also advise on the presentation and installation of videos and interactive experiences in exhibitions and on the web, and help steer the integration of Pen functionality with exhibition content. This post gathers some of my video-minded observations collected on the road to installing Beauty.

The Beauty curators and the Labs team came together when content for the show began to arrive—both loans of physical objects and digital file transfers. At this point my video workflow shifted into high gear, and I began to really see the landscape of digital content planned for the exhibit. Videos in Beauty fall into roughly two categories: those that are the primary highlighted object on display and those that supplement the display of another object. Sam Brenner recently posted about reformatting our web presentation of video content when it stands in as the primary collection object and has a medium that is “video, animation or other[wise] screen-based.” This change was a result of thinking through a flag we raised earlier for the curators around linking collection records to tags, i.e. “what visitors get when they collect works with the Pen.” As has been mentioned before on this blog, the relationship of collecting points (NFC tags) to collection objects does not need to be one-to-one; Beauty expanded our exploration of the tags-to-records relationship in a few interesting ways.

Collecting Neri Oxman

When visitors collect at the Neri Oxman tag they save a cluster of collections database records, including 12 glass vessels and a video.

In the Beauty exhibition, collecting points are presented uniformly: one tag in each object label. Additionally, tags positioned beside wall text panels allow visitors to save chunks of written exhibition content. The Triennial’s curatorial format, organized around designers (sometimes with multiple works on display), encouraged us to think carefully about the tag-collecting relationship. I was impressed to see the curators curating the Pen experience, including notes to me along with each video, like “the works in the show are jewelry pieces; the video will supplement,” “video is primary object; digital prints supplement,” and “video clips sequenced together for display but each video is separately collectible.” They were really thinking about the user flow of the Pen and the post-visit experience, extending their role in organizing and presenting information to all aspects of the museum experience.

Another first in the Beauty exhibition is the video content created specifically for interactive tables. With the curators’ encouragement, the designers featured in the exhibition considered the tables as a unique environment to present bonus content. For example, Olivier van Herpt provided a video of his 3D printer at work on the ceramic vessels on display in the exhibition. It was interesting to see the possibilities that the tables and post-visit outlets opened up—for one thing, the quality standards can be more relaxed for videos shown outside the monitors in the galleries. Also notable is the fact that the Beauty curators selected behind-the-scenes-type videos for tables and post-visit, suggesting that these outlets make room for content that might not typically make it onto gallery walls.

Still from Olivier van Herpt's "3D Printed Ceramic Process"

The video “3D Printed Ceramic Process” by Olivier van Herpt is an example of behind-the-scenes video content that was made for tables and website display only.

The practical fallout on my end was that these supplemental videos added to an already video-heavy exhibition, putting increased pressure on the video workflow. In turn, this revealed a major lack of optimization. The diagram of my video workflow shows, for example, several repeated instances of formatting, captioning, and exporting. Multiplied by twenty-one, each of these redundant procedures takes up significant time. Applying branding is probably the biggest time-hog in the workflow: all of it is done manually, and locking in the maker, credit line, and video title with curators and designers is a substantial task.

It’s funny: the amount of video content in Cooper Hewitt exhibitions is increasing, and it’s receiving increased attention from curators, but the supplementary videos in the galleries are not treated as first-order exhibition objects, so they don’t go through as rigorous a documentation process as other works in the show. Because of this, video-specific information required for my workflow remained in flux until the very last minute. Even the video content itself continued to shift as designers pushed past my deadlines to request more time for changes and additions.

In truth, the deadlines for in-gallery video content are much stricter than those for table- and post-visit-only content, because gallery videos require hardware installation. The environment of the tables and website affords continual change, but deadlines act as benchmarks to keep those interfaces stocked with new content that stays in sync with objects in the physical exhibition.
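
For what it’s worth, a lot of that repetition could probably be scripted. Below is a hypothetical sketch, not our actual pipeline, of batching the format-and-brand step with ffmpeg; it assumes ffmpeg is installed, and the paths, watermark file, and output settings are all placeholders.

```python
import pathlib
import subprocess

# Batch-convert source videos to a house format and overlay a branding
# bug. Everything here (directories, watermark, encoding settings) is
# an assumption for illustration.
SOURCE_DIR = pathlib.Path("videos/source")
OUT_DIR = pathlib.Path("videos/branded")
WATERMARK = "branding/bug.png"  # hypothetical branding overlay

OUT_DIR.mkdir(parents=True, exist_ok=True)

for src in sorted(SOURCE_DIR.glob("*.mov")):
    dst = OUT_DIR / (src.stem + ".mp4")
    subprocess.run([
        "ffmpeg", "-y",
        "-i", str(src),
        "-i", WATERMARK,
        # Scale to 1080p and pin the branding bug to the lower right.
        "-filter_complex",
        "[0:v]scale=1920:1080[v];[v][1:v]overlay=W-w-40:H-h-40",
        "-c:v", "libx264", "-crf", "20", "-c:a", "aac",
        str(dst),
    ], check=True)
    print(f"branded {src.name} -> {dst.name}")
```

Captioning and title cards could be batched the same way, which would leave the curatorial sign-off as the main hand-done step.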

Exhibition Video Workflow

The workflow that videos follow to get to gallery screens, interactive tables and the collections website.

I keep a spreadsheet to collect video information and maintain order over my exhibition video workflow. These are the column headings.

All the steps and data points that need to be checked off in my exhibition video workflow.

By the exhibition opening, I had all video information confirmed, and all branding and formatting completed. The running spreadsheet I keep as a video to-do list was filled with check-marks and green (good-to-go) highlighting. I had created media records in TMS and connected them to exhibition videos uploaded to YouTube; this allows a script to pull in the embed code so videos appear within the YouTube player on the collections site. I also linked the media records to other database entries so that they would show up on the collections site in relation to other objects and people. For example, since I linked the “Afreaks Process” video record to all of the records for the beaded Afreak objects, the video appears on each object page, like the one for The Haas Brothers and Haas Sisters’ “Evelyn”. Related videos like this one (that are not the primary object) are configured to appear at the bottom of an object page with the language, “We have 1 video that features Sculpture, Evelyn, from the Afreaks series, 2015.” Since the video has its own record in the database, there is also a corresponding “video page” for the same clip that presents the video at the top with related objects in a grid view below. I also connected object records to database entries for people, ensuring that visitors who click on a name find videos among the list of associated objects.
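
To give a flavor of that plumbing, here’s a minimal sketch of how a stored YouTube ID might be turned into player markup. The record structure, field names, and video ID are hypothetical, not our actual TMS setup or script.

```python
# A minimal sketch of turning a stored YouTube ID into embed markup.
# The record layout and field names are made up for illustration; the
# real script reads media records created in TMS.
def youtube_embed(media_record, width=640, height=360):
    """Build an iframe for the YouTube player from a media record."""
    video_id = media_record["youtube_id"]
    return (
        f'<iframe width="{width}" height="{height}" '
        f'src="https://www.youtube.com/embed/{video_id}" '
        f'frameborder="0" allowfullscreen></iframe>'
    )

record = {"youtube_id": "abc123xyz00", "title": "Afreaks Process"}  # made-up ID
print(youtube_embed(record))
```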

Screenshot of Haas Brothers record webpage

The webpage for the Haas Brothers record includes a video among the related objects.

It is highly gratifying to seed videos into this web of database connections. The content is so rich and so interesting that it really enhances the texture of the collections site and of exhibitions. Cooper Hewitt curators demonstrated their appreciation of video by honoring video works as primary objects on display. They also used video in an illustrative way to enhance the presentation of highlighted works. Beauty opened the doors to curating video works on interactive tables and grouping videos into the clusters of data linked to collecting points (a.k.a. tags). I’m pleased with the highly integrated, considered take on video content in this latest exhibition, and I hope we can push the integration even further as curators become increasingly invested in adapting their practice to the extended exhibition platforms we have in place: tables, tags, and the web.