
Traveling our technology to the U.K.

Visitors to the London Design Biennale use our “clone” of the Wallpaper Immersion Room.

Recently, we launched a major initiative at the inaugural London Design Biennale at Somerset House. The installation was up from September 7th through the 27th, and now that it has closed and the dust has settled, I thought I'd try to explain the details behind all the technology that went into making this project come alive.

Quite a while back, an invitation was extended to Cooper Hewitt to represent the United States in the London Design Biennale, an exhibition featuring 37 countries from around the world. Our initial idea was to spin up a clone of our very popular “Wallpaper Immersion Room” and hand out Cooper Hewitt Pens.

The idea of traveling our technology outside the walls of the Carnegie Mansion has been of great interest to the museum ever since we reopened our doors in 2014. The process of figuring out how to make our technology portable, and have it make sense in different environments and contexts was definitely a challenge we were up for, and this event seemed like the perfect candidate to put that idea through its paces.

So we started out gathering up the basic requirements and working through all that would be needed to make it all come together, including some very generous support from the Secretary of the Smithsonian and the Smithsonian National Board, Bloomberg Philanthropies, and Amita and Purnendu Chatterjee.

The short version is, this was a huge undertaking. But it all worked in the end, and visitors at the first-ever London Design Biennale were able to use Cooper Hewitt Pens to explore 101 wallpapers from our collection, create their own designs and save them. Plus, visitors could collect and save installations from other Biennale participants.

https://www.instagram.com/p/BKGZ9u9gcHI

https://www.instagram.com/p/BKGZu4yA6dq

The long version is as follows.

An Immersion Room in England

First and foremost, we wanted to bring the Immersion Room over as our installation for the London Design Biennale. So, let’s break down what makes the Immersion Room what it is.

The original Immersion Room, designed by Cooper Hewitt and Local Projects, made its debut when the museum reopened in December 2014, following a major renovation. It is essentially an interactive experience where visitors can manipulate a digital interactive touch-table to browse our collection of wallpapers and view them at scale, in real time, via twin projectors mounted to the ceiling. Additionally, visitors can switch into design mode and create their own wallpapers, adjusting the scale, orientation, and positioning of a repeating pattern on the wall. This latter feature is arguably what makes the experience what it is. Visitors from all walks of life love spending time drawing in the Immersion Room, typically resulting in a selfie or two like the ones you see in the images below.

https://www.instagram.com/p/BKTZ7F0g_eA

https://www.instagram.com/p/BKRMU-QD11m

What I’ve just described is essentially the minimum viable product for this entire effort. One interactive table, two ceiling-mounted projectors, a couple of computers, and a couple of walls to project on.

https://www.instagram.com/p/BIlDulOgrBu

The Immersion Room uses two separate computers, each running an application written in OpenFrameworks. There is the “projector app,” which manages what is displayed to the two projectors, and there is the “table app,” which manages what visitors see and interact with on the 55” Ideum table. The two apps communicate with each other over a local network, with the table app essentially instructing the projector app on what to display in real time.

Here is a basic diagram of how that all fits together.


Twin projector and computer setup for Wallpaper Immersion Room
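To make that relationship concrete, here is a rough sketch of what the table-to-projector messaging might look like. The real applications are C++ OpenFrameworks apps and Local Projects' actual wire protocol is their own; the JSON-over-UDP shape, the address, and the field names below are all assumptions for illustration only.

```python
import json
import socket

# Assumed local-network address and port for the projector app
PROJECTOR_ADDR = ("192.168.1.20", 9000)

def send_display_state(sock, wallpaper_id, scale, rotation, offset_x, offset_y):
    """Tell the projector app what to render, in real time."""
    message = {
        "wallpaper_id": wallpaper_id,   # which pattern to tile on the walls
        "scale": scale,                 # visitor-controlled repeat size
        "rotation": rotation,           # degrees
        "offset": [offset_x, offset_y],
    }
    sock.sendto(json.dumps(message).encode("utf-8"), PROJECTOR_ADDR)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_display_state(sock, wallpaper_id="12345", scale=1.5, rotation=45, offset_x=0, offset_y=120)
```

The projector app just renders whatever it last received, which is what lets the wall respond instantly as a visitor drags and rotates a pattern on the table.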

Each application loads in content on startup. This content is provided to the application by a giant JSON file that is managed by our Collections API and meant to be updated each night through a cron job. When the applications start up, they look at the JSON file and pull down any new or changed assets they might need.

At Cooper Hewitt, this means that our curators are able to update content whenever they want using our collections management system, The Museum System (TMS). Updates they make in TMS get reflected on the digital table following a data-deploy and reboot of the table and projector applications. This is essentially the workflow at Cooper Hewitt. Curators fill in object data in TMS, and through a series of tubes, that data eventually finds its way to the interactive tables and our collections website and API. For this project in London, we’d do essentially the same process, with a few caveats.
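As a sketch of that startup behavior, assuming a manifest with per-asset URLs and checksums (the real manifest's field names may well differ), the sync logic boils down to something like this:

```python
import hashlib
import json
import os
import urllib.request

CACHE_DIR = "cache"

def local_md5(path):
    """Checksum a cached file so we can tell if it changed upstream."""
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

def sync_assets(manifest_path):
    """Download any new or changed assets named in the nightly manifest."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    with open(manifest_path) as f:
        manifest = json.load(f)
    for asset in manifest["assets"]:  # assumed manifest structure
        dest = os.path.join(CACHE_DIR, os.path.basename(asset["url"]))
        if os.path.exists(dest) and local_md5(dest) == asset["md5"]:
            continue  # we already have the current version
        urllib.request.urlretrieve(asset["url"], dest)

sync_assets("wallpapers.json")
```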

Make it do all the things

We started asking ourselves a number of questions, driven by a mix of feature creep and a strong desire to put some of the technology we’ve built through its paces–to determine if it’s possible to recontextualize much of what we’ve created at Cooper Hewitt and have it work outside the museum walls.

Questions like:

  • What if we want to allow visitors to save the wallpapers and the designs they create?
  • What if we wanted to hand out a Cooper Hewitt Pen to each visitor?
  • What if we want to let people use the Pen to save their creations, wallpapers, and ALL the other installations around Somerset House?!

All of a sudden, the project becomes a bit more complicated, but still, a great way to figure out how we would translate a ton of the technology we’ve built at Cooper Hewitt into something useful for the rest of the world. We had loads of other ideas, features, and add-ons, but at some point you have to decide what falls in and out of scope.


Unpacking 700 Cooper Hewitt Pens we shipped to the U.K., batteries not included!

So this is what we decided to do.

  • We would devise a way to construct the physical build out of a second Immersion Room. This would essentially be a “set” with walls and a truss system for suspending two rather heavy projectors. It would have a floor, and would be slightly off the ground so we could conceal wiring and create a place for the 55” touch table to rest.
  • We’d pre-fabricate the entire rig in New York and ship it to London to be assembled onsite.
  • We’d enable the Immersion Room to allow visitors to save from a selection of 101 wallpapers from our permanent collection. These would be curated for the Utopia theme of the London Design Biennale.
  • We’d enable the design feature of the Immersion Room and allow visitors to save their designs.
  • We’d hand out Cooper Hewitt Pens to each visitor who wanted one, along with a printed receipt containing a URL and a unique code.
  • We’d post coded NFC tags all throughout Somerset House to allow visitors to use their Pens to collect information about each participating country, including our own.
  • We’d build a bespoke website where visitors would go following their visit to see all the things they’ve collected or created.

These are all of the things we decided to do from a technology standpoint. Here is how we did it.

pen-www

The first step to making this all work was to extract the relevant code from our production collections website and API. We named this “pen-www” and intended that this codebase serve as a mini framework for developing a collecting system and website. In essence it’s simply a web application (written in PHP) and a REST API (also PHP). It really needed to be “just the code” required to make all the above work. So here is another list, explaining what all those requirements are.

  • It needs to somehow generate a simple collections website that is capable of storing relevant info about all the things one could potentially collect. This was very similar to our current codebase at Cooper Hewitt, but we added the idea of “organizations” so that you could have multiple participants contributing info, and not just Cooper Hewitt.
  • It needs all the API methods that make the Pen work. There are actually just a handful that do all the hard work. I’ll get to those in a bit.
  • It needs to handle image uploads and processing of those images (saved designs from the Immersion Room table).
  • It needs to create “visits” which are the pages a visitor lands on when entering their unique code.
  • It needs a series of scripts to help us import data and set things up.
  • We would also need some new code to allow us to generate paper receipts with unique codes printed on them. At Cooper Hewitt this is all done via our Tessitura ticket printing system, so since we wouldn’t have that at Somerset House, we’d need to devise a new way of registering and pairing Pens and printing out some kind of receipt (the code-generation idea is sketched just below).
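The core of that last item is just minting short, unique, human-typeable codes. Here is a minimal sketch of the idea; the production pen-www code is PHP against MySQL, and the alphabet and code length here are my assumptions.

```python
import secrets
import string

# Leave out easily-confused characters like 0/o and 1/l so the codes are
# friendly to type from a paper receipt.
ALPHABET = "".join(c for c in string.ascii_lowercase + string.digits
                   if c not in "0o1l")

def new_visit_code(existing_codes, length=8):
    """Mint a code that isn't already in use."""
    while True:
        code = "".join(secrets.choice(ALPHABET) for _ in range(length))
        if code not in existing_codes:  # in production a UNIQUE index enforces this
            return code

print(new_visit_code(set()))  # e.g. "k3v9x2qa"
```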

So, pen-www would become this sort of boilerplate framework for the Pen. The idea being, we’d distill the giant codebase we’ve developed at Cooper Hewitt down to the most essential parts, and then make it specific to what we wanted to do for London. This is an important point. We aren’t attempting to build an actual framework. What we are trying to do is to boil out the necessary code as a starting point, and then use that code as the basis for a new project altogether.

From our point of view, this exercise allows us to understand how everything works, and gets us close enough to the core code so that we can think of repeating this a third or a fourth time—or more.

The API at the center of everything

We built the Cooper Hewitt API with the intention of making it flexible enough to be easily expanded upon or altered. It tries to adhere to the REST API pattern as much as it can, but it’s probably better described as “REST-ish.” What’s nice about this approach is that we’ve been able to build lots and lots of internal interfaces using this same pattern and code base. This means that when we want to do something as bespoke as building an entire replica of our seemingly complex Pen/Visit system, and deploy it in another country, we have some ground to stand on.

In fact, just about all of the systems we have built use the API in some way. So, in theory, spinning up a new API for the London project should just mean pointing things like the Immersion Room interactive table at a new API endpoint. Since the methods are the same, and the responses use the same pattern, it should all just work!

So let’s unpack the API methods required to make the Pen and Immersion Room come to life. These are all internal/private API methods, so you can’t take them for a spin, and I can’t share the actual code with you that lies beneath, but I think you’ll get the idea.

Pens – there’s a whole class of API methods that deal with the Pen itself. Here are the relevant ones:

  • pens.checkoutPen – This marks a Pen as having been checked out for an associated visit.
  • pens.getCurrentCheckout – This gets the currently checked-out Pen for a specific visit.
  • pens.getCurrentVisit – This does the opposite of getCurrentCheckout, returning the visit for a specific Pen.
  • pens.returnPen – This marks the Pen as having been returned.

Visits – There is another class of API methods that deal with the idea of “visits.” A visit is meant to represent one individual’s visit to the museum or exhibition, or some other physical location. Each visit has an ID and a corresponding unique code (the thing we print on a visitor’s paper receipt).

  • visits.getActivity – Returns all the activity associated with a visit
  • visits.getInfo – Returns detailed info about a specific visit
  • visits.processPenActivity – This is a major API method that takes any activity recorded by the Pen and processes it before storing the info in the appropriate location in the database. It gets called frequently and is the method invoked when you tap your Pen on a reader board at one of our digital tables. The reader board downloads all the info on the Pen and calls this API method to deal with whatever came across (a rough sketch of its shape follows below).
  • visits.registerVisit – This marks a visit as having been registered. It’s what generates your unique code for the visit.
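Of those, visits.processPenActivity does the heavy lifting. The real method is private PHP, so the sketch below is only its shape as inferred from the description above; the db helper calls are stand-ins, not real functions.

```python
def process_pen_activity(db, pen_id, activity):
    """Store everything a Pen recorded against its current visit."""
    visit_id = db.get_current_visit(pen_id)  # cf. pens.getCurrentVisit
    for item in activity:
        if item["type"] == "tag":
            # A wall label's NFC tag can bundle several objects; save each one
            for object_id in db.objects_for_tag(item["tag_id"]):
                db.save_collected_object(visit_id, object_id, item["timestamp"])
        elif item["type"] == "drawing":
            # A design saved at the Immersion Room table arrives as an image
            db.save_created_design(visit_id, item["image_path"], item["timestamp"])
```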

Believe it or not, that is basically it. It’s just a handful of actions that need to be performed to make this whole thing work. With these methods in place, we can:

  • Pair pens with newly created visits so we can hand Pens out to visitors.
  • Process data collected by the Pen, either from NFC stickers it has read, or via our Interactive Table.
  • Do a final read of the Pen and return the Pen to the pool of possible pens.

So, now that we have an API and all the relevant methods, we can start building the website and massaging the API code to do things in the slightly different ways that make this whole thing live up to its bespokiness.

On the website end of things we will follow the KISS principle or “Keep it simple, stupid.” The site will be devoid of fancy image display features, extended relationship mapping and tagging, and all the goodies we’ve spent years developing for the Cooper Hewitt Collections website. It won’t have search, or fancy search, or search by color, or search by anything. It won’t have a shoebox or even a random button (ok, maybe I’ll add that later today). For all intents and purposes, the website will simply be a place to enter your unique code, and see all your stuff.


https://londonbiennale.cooperhewitt.org

The website and its API will live at https://londonbiennale.cooperhewitt.org. It consists of just two web front ends running Apache and sitting behind an NGINX load balancer, and one MySQL instance via Amazon’s RDS service. It’s very, very similar to just about all of our other systems and services except that it doesn’t have fancy extras like Logstash logging, or an Elasticsearch index. I did take the time to install server monitoring and alerting, just so we can sleep at night, but really, it’s pretty bare bones.

At first glance there isn’t much there to look at. You can browse the different participants and you can create a Cooper Hewitt account or sign in using our Single Sign On service, but other than that, there is really just one thing to do–enter your code and see all your stuff.


Participants

All your content are belong to us

In order for this project to really work, we’d need to have content. Not only our own Cooper Hewitt content, but content from all the participants representing the 36 other countries from around the world.

So here is the breakdown:

  • Each participant or organization will have a page, like this one for Australia https://londonbiennale.cooperhewitt.org/participants/australia/
  • Each participant will have one “object.” In the case of all 37 participants, this object will represent their “booth” like this one from Australia – https://londonbiennale.cooperhewitt.org/objects/37643049/
  • Each “booth” will contain an image and the catalog text provided by the London Design Biennale team. If there is time, we will consider adding additional information from each participant (we haven’t done this as of yet).
  • Cooper Hewitt’s record will have some more stuff. In addition to the object representing Cooper Hewitt’s booth, we will also have 101 wallcoverings from our permanent collection.
  • You can collect all of these via the Immersion Room table and your Pen. Here is our page – https://londonbiennale.cooperhewitt.org/participants/usa/. There are also two physical wallpapers that are part of our installation, which you can of course collect as well.

All told, that means 140 objects in this little microsite/sitelet. You can actually browse them all at once if you are so inclined here – https://londonbiennale.cooperhewitt.org/objects/

"Booth" pages


Visit Pages

So what does a visitor get when they go to the webpage and type in their unique code? Well, the answer to that question is “it depends.” For objects that we imported from our permanent collection (the 101 wallpapers), you get a nice photo of the wallpaper and a chatty description written by our curator, Greg Herringshaw, tying it to “Utopia,” the theme of this year’s London Design Biennale. You also get a link back to the collection page on the Cooper Hewitt website. For the 37 booths, you get a photo and the catalog info for each participant, and if you created and saved your own design in the Wallpaper Immersion Room, you get a copy of the PNG version of your design, which you can, of course, download and do with what you like. (Hint: they make cool wall posters.)

Additionally, you get timestamps related to your visit. This way, just like on the Cooper Hewitt website, you get to retain a record of your visit–the date and time of each collected object and a way to recall your visit anytime in the future.


Visit page example

Slow Progress

All of this code replication, extraction, and re-configuring took quite a long time. The team spent long hours, nights, and weekends trying to sort it all out. In theory this should all just work, but like any project, there are unique aspects to the thing you are currently trying to accomplish, which means that, no matter what, you’re gonna be writing some new code.

Ok, so let’s check in with what we’ve got so far.

  • A physical manifestation of the Wallpaper Immersion Room and all its hardware, computers, wires, etc.
  • A website and API to power all the fun stuff.
  • A bunch of content from our own permanent collection and the catalog info from the London Design Biennale team.
  • Visit pages

We still need the following:

  • A way to issue Pens to visitors as they arrive.
  • A way to print a unique code on some kind of receipt, which we give to the visitors as well.
  • A way to check in Pens as visitors return them.
  • The means to point the table at the right API endpoint so it can save things and call processPenActivity as well.

To accomplish the first three items on the list, we enlisted the help of Rev Dan Catt.

https://www.instagram.com/p/BJ3WizdAYfc

Dan is planning to write another extensive blog post about his role in all of this, but in a nutshell, he took our Pen registration code and built his own little mini-registration station and ticket printer. It’s pictured below and performs the first three functions on the list above. It uses a small Adafruit thermal printer to print the receipts and unique codes, and it is simple enough to use, with a small web-based UI to give the operator some basic feedback. Other than that, you tap a Pen and it does the rest.
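The station's core loop probably reduces to something like the following sketch; the endpoint path, parameter names, response fields, and receipt format are my assumptions, not Dan's actual code (watch for his post for the real details).

```python
import requests

API = "https://londonbiennale.cooperhewitt.org/api/rest/"  # assumed endpoint path

def call(method, **params):
    """Minimal client for the London API's method-style REST interface."""
    params["method"] = method
    return requests.post(API, data=params).json()

def on_pen_tap(pen_id):
    """Register a visit, pair the Pen to it, and print the visitor's code."""
    visit = call("visits.registerVisit")  # generates the visit's unique code
    call("pens.checkoutPen", pen_id=pen_id, visit_id=visit["visit_id"])
    print_receipt(visit["code"])

def print_receipt(code):
    # Stand-in for the Adafruit thermal printer driver
    print("Retrieve your visit at https://londonbiennale.cooperhewitt.org")
    print("Your code: " + code)
```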


Dan’s Raspberry Pi powered Pen registration and ticket printing station.


Tickets printing for the first time

For the last item on the list, I had to re-compile the code Local Projects delivered to us. In the code I had to find the references to the Cooper Hewitt API endpoints and adjust them to point at the London API endpoint. Once I did this and recompiled the OpenFrameworks project, we were in business. For a while, I had it all set up for development and testing on my laptop using Parallels and Visual Studio. Eventually I compiled a final version and we installed it on the actual Immersion Room Table.


Working on the OpenFrameworks code on Parallels on my MacBook Pro

Cracking open the Local Projects code was a little scary. I’m not really an OpenFrameworks programmer, or at least I haven’t been since grad school, and the Local Projects code base is pretty vast. We’ve had this code compiled and running on all the interactive tables at Cooper Hewitt since December of 2014. This was the first time I (or anyone I know of) had attempted to recompile it from source, not to mention make changes to it beforehand.

That said, it all worked just fine. I had to find an old copy of Visual Studio 2012, but other than that, and tracking down a few dependencies, it wasn’t a very big deal. Now we had a copy of the Immersion Room table application set up to talk to the London API endpoint. As I mentioned before, all the API methods are named the same, and set up the same way, so the data began to flow back and forth pretty quickly.

Content Management

I mentioned above that we had to import 101 wallpapers from our collection as well as the data for all 37 booths. To accomplish all of this, we wrote a bunch of Python and PHP scripts.

We needed to do the following with regard to content:

  • Create a record for each of the 37 participants
  • Import the catalog info as an object for each of the 37 participants
  • Import the 101 wallcoverings from the Cooper Hewitt collection. We just used, you guessed it, our own API to do this.
  • Massage the JSON files that live on the Projector and Table applications so they have the correct 101 wallpapers and all their metadata.
  • Display the emoji flag for each country, because emoji.

In the end, this was just a matter of building the necessary scripts and running them a number of times until we had what we wanted. As a sort of side note, we decided to use London Integers for this project instead of Brooklyn Integers, which we normally use at Cooper Hewitt, but that’s probably a topic for a future post.
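As an example of one of those scripts, the wallpaper import could be as simple as the sketch below. The cooperhewitt.objects.getInfo method is part of our public collections API, but the create_london_object helper and the exact response fields used here are placeholders standing in for the real import code.

```python
import requests

CH_API = "https://api.collection.cooperhewitt.org/rest/"

def get_object(object_id, token):
    """Fetch one object's metadata from the Cooper Hewitt collections API."""
    r = requests.get(CH_API, params={
        "method": "cooperhewitt.objects.getInfo",
        "object_id": object_id,
        "access_token": token,
    })
    return r.json()["object"]

def create_london_object(**fields):
    # Stand-in for the insert into the London site's MySQL database
    print("would insert:", fields)

def import_wallpapers(object_ids, token):
    for object_id in object_ids:
        obj = get_object(object_id, token)
        create_london_object(
            title=obj["title"],
            medium=obj.get("medium"),
            image_url=obj["images"][0]["b"]["url"],  # assumed image-size key
        )
```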

Shipping code, literally

At some point we would need to put all the hardware and construction pieces into crates and ship them across the pond. At the time, our thinking was to get the code running on the digital table and projector computers as close to production ready as we could. We installed all the final builds of the code on the two computers, packed them up with the 55” interactive table, and shipped them over to London, along with six other crates full of the “set” and all its hardware and parts. It was, in a nutshell, impressive.

As the freight went to London, we continued working on the website code back home—making the site look the way we wanted it to look and behave the way we wanted it to behave. As I mentioned before, it’s pretty feature free, but it still required some spit and polish in the form of some of Rachel’s Sassy-CSS. Eventually we all settled on the aesthetics of the site, added a lockup that reflected both the Cooper Hewitt and London Design Biennale brands (both happen to be by Pentagram) and called it a day. We continued testing the table application and Dan continued working on the Pen registration app and receipt printer so it would be ready when we landed.


Building the set with the team at Somerset House.

We landed, started to build the set, and many, many things started to go wrong. I think all of the things that went wrong are probably the topic of yet another blog post, but let’s just say for now: if you ever decide to travel a whole bunch of A/V equipment and computers to another country, get everything working with the local power standard and don’t try to transform anything.

https://www.instagram.com/p/BJ2dROFAjLQ

Eventually, through a lot of long days and sleepless nights, and with the help of many, many kind-hearted people, we managed to get all the systems up and running, and everything began to work.


We flipped the switch and the whole thing came to life and visitors started to walk up to our booth, curious and excited to see what it would do. We started handing out Pens and I started watching the data flow through.

By the close of the show, visitors had used the Pen to collect over 27,000 objects. Eventually, I’ll do a deeper data analysis, but for now, the feeling is really great. We created a portable version of the Pen and all of its underlying systems. We traveled a giant kit of A/V tech and parts overseas, and now people in a country other than the United States can experience what Cooper Hewitt is all about: a dynamic, interactive deep dive into design.


Design your own Utopia at the London Design Biennale

-m

Object Phone: The continued evolution of a little chatbot

Object Phone is a project that started small, took less than a day to code, and consisted of about a page of code. Initially it was just an experiment–a way for me to explore a new interface to our API. Object Phone allowed users to call or text objects in our collection, and receive some kind of response. It was met with mild fanfare.

Next, I was curious about using Object Phone in our galleries. I looked towards developing some better audio content, and we decided to produce a short audio tour of the David Adjaye Selects exhibit. It was somewhat cumbersome to use but an interesting experiment and one of my first “in-gallery beta-tests.” Needless to say, I tried to be as clear as possible that this was an “experiment.”

Later I started thinking about the broader uses for a system like Object Phone. Could it replace an expensive audio guide? Could it be used as an accessibility device? I started to think of many possible uses for the platform, and started to rewrite the code to support multiple outputs. In a way, I was thinking about the code for Object Phone as a mini framework for building voice and text based interactions with our content.

All of this got put on the back burner for a while. Object Phone is, after all, my little side project–something I come back to when I need to center myself and let my brain think through a few problems. It’s very much a project I meditate on when I need to do that kind of thing.

About six months later I started playing with the code again. I realized it was pretty trivial to deliver images via MMS using Twilio’s API, and I had also started to notice that MMS worked pretty nicely on devices like an Apple Watch and looked pretty good in the notification screen on my iPhone. All of a sudden it was kind of fun again to receive texts from Object Phone. So, I set up a subscription service.
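The send itself is almost embarrassingly small with Twilio's Python client. This is only a sketch: the credentials and phone numbers are placeholders, and pick_random_object stands in for a pull from our collections database.

```python
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")  # placeholder credentials

def pick_random_object():
    # Stand-in for a random pull from the collections database
    return {
        "title": "A random wallcovering",
        "url": "https://collection.cooperhewitt.org/objects/123456/",  # placeholder id
        "image_url": "https://example.com/object.jpg",
    }

def send_daily_object(subscriber_number):
    """Text one subscriber their daily random object, image included."""
    obj = pick_random_object()
    client.messages.create(
        to=subscriber_number,
        from_="+12125550123",          # the Object Phone number (placeholder)
        body=obj["title"] + " " + obj["url"],
        media_url=[obj["image_url"]],  # the MMS part; renders nicely on a watch
    )
```

A cron job at noon looping over the subscribers table is all the "service" there needs to be.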

Inspired by a few chatty SMS-based apps out there like Poncho and The Edit, I built a simple subscription service that would send random objects and images to subscribers once a day at noon. Again, I set this up quickly, sent out a request for some people to try it out, and the realizations started rolling in.

Object Phone is getting some upgrades. Feature requests welcome. (A photo posted by Micah Walter, @micahwalter)

The main realization I had was that Object Phone had just become a chatbot. To be clear, Object Phone has technically always been a chatbot. You send it messages, and it replies with some response. But now that it sends you something periodically based on your preferences (currently just the preference that you want to continue receiving messages) it seems more like a real chatbot. More importantly, this experiment has started to make me “think” of Object Phone as a chatbot–something I should have likely realized from the start.

I also realized that Object Phone’s chattiness happens in multiple directions. It indeed chats with its subscribers. It can send you messages once a day, and it can reply to your requests for info about objects with ease. But, I also added a back end feature which follows this same line of thinking. If a user sends Object Phone a message that it doesn’t understand, Object Phone asks me for some assistance. Here is the flow:

  1. A user messages Object Phone something like “Tell me about spanking cat.”
  2. Object Phone isn’t smart enough yet to decipher the message.
  3. Object Phone replies “OK, I don’t really understand what you are saying but I’ll ask around and get back to you.”
  4. Object Phone then sends our Cooper Hewitt Slack channel a message.
  5. The Slack message contains the user’s phone number, their message, and a link to an admin page where the operator can reply directly to the user.

A Slack Channel where Object Phone can tell our staff when it needs a little assistance.


An Object Phone admin page where our staff can reply directly to users

All of a sudden Object Phone is a conduit between Cooper Hewitt staff and its visitors. It’s talking directly to visitors, but also relaying messages back and forth to more knowledgeable staff when it needs assistance.
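The escalation itself is one HTTP POST to a standard Slack incoming webhook. In this sketch the webhook URL and the admin-page path are placeholders:

```python
import requests

SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def escalate_to_slack(phone_number, message, reply_id):
    """Ask the staff channel for help with a message we couldn't parse."""
    text = (
        "Object Phone needs help! {} asked: \"{}\"\n"
        "Reply here: https://objectphone.cooperhewitt.org/admin/reply/{}"  # assumed admin path
    ).format(phone_number, message, reply_id)
    requests.post(SLACK_WEBHOOK, json={"text": text})
```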

What the cool kids are doing

Conversational user experiences are all the rage right now. Facebook has recently opened up its Messenger platform and API to developers, which means anyone can build a simple chatbot on Facebook and reach all their followers with ease. Many other messaging services have open APIs as well. WeChat, LINE, WhatsApp, and Slack are just a few examples.


Screenshot of the CNN chatbot for Facebook Messenger

It’s pretty clear that messaging apps are increasing in popularity, with users spending much of their days talking on platforms like Snapchat rather than thumbing through their Facebook feeds. Apple, too, has followed suit, announcing a much-upgraded Messages app in its latest update to iOS.

Chatbots have also become much more sophisticated, with huge advancements in Natural Language Processing and Natural Language Understanding. There is now a wealth of information and publicly available code and APIs out there, making it easier than ever to spin up a pretty intelligent chatbot with little overhead.

The Future of Object Phone

My next steps are to make Object Phone more intelligent. It should be able to learn about your tastes and preferences. If you only want to receive objects from our Textiles department, you should be able to say so. If you want to get your daily update at 5am, you should be able to just tell it that.

More importantly, you should be able to interact with more than just objects. Users should be able to find out general info about our museum. Are we open today? How do I get to Cooper Hewitt? Can I buy tickets right here, right now?

Lastly, I’d love to see Object Phone make its way onto the platform of your choice. I think this is a critical next step. SMS is great, and available to nearly anyone with a cell phone, but apps like FB Messenger, WhatsApp, and LINE have the ability to connect a service like Object Phone with a captive audience, all over the world.

I think institutions like museums have a great opportunity in the chatbot space. If anything it represents a new way to broaden our reach and connect with people on the platforms they are already using. What’s more interesting to me is that chatbots themselves represent a way to interact with people that is, by its very nature, bi-directional. It presents us with the challenge of conversation, and forces us to listen to our constituents in a very close and connected kind of a way. We should already be doing this.

If you’d like to participate in testing out Object Phone, please go to https://objectphone.cooperhewitt.org and sign up. You will receive an object every day at 12pm EST until you reply STOP.

Museums and the Web Conference Recap: Administrative Tools at Cooper Hewitt

The Labs team had a great time at Museums and the Web this year. We published two papers for the conference and presented them both to the audience of cultural heritage thinkers, makers, planners and administrators. Sam Brenner and I shared our paper, “Winning (and losing) hearts and minds of museum staff: Administrative interfaces at Cooper Hewitt,” which outlines the process of designing, developing and iterating two in-house built, staff-facing tools: Tagatron and the Pen Pairing Station. Both administrative tools are essential aides to staff managing new responsibilities associated with visitor-facing gallery technologies.

Here is the deck from our presentation:


Administrative interfaces at Cooper Hewitt

Introduction

  • Cooper Hewitt, Smithsonian Design Museum. New York, New York.
  • Our strategy around presenting design is to expose process—how things are made, how they are conceived, how they are designed.
  • This presentation will speak to our philosophy of openness around design process in sharing part of the back-story of how our current visitor-facing experience came together and how it’s maintained.


Visitor Interfaces

  • The visitor-facing technologies in the museum, introduced in 2014, invite new forms of engagement with the Cooper Hewitt collection. They encourage active participation, letting visitors play, design and collect through multi-touch table applications and the Pen.
  • Before we were able to re-design the visitor’s relationship to the museum we went through comprehensive changes at every level.


Comprehensive Re-design / Institutional Shift

  • We began a restoration of the mansion, stripping it down to its Carnegie steel girders.
  • To a similar degree we rethought the organizational infrastructure of Cooper Hewitt with a comprehensive re-design of operations, workflows and responsibilities.


New Responsibilities (for Everyone)

  • There were new jobs created to support the new visitor experience, including that of our Gallery Technology Manager, Mary Fe, whose job responsibilities include maintaining the Pens and troubleshooting touch tables and gallery interactives.
  • The re-design affects every staff member at Cooper Hewitt:
  • Registrars: aggressive timetable to enter data
  • Security: understand the mission and visitor experience, teach visitors how to use the Pen
  • Exhibitions: label programming, maintenance
  • Curators: tags, relations, chat formatting for length
  • Visitor services: pen pairing – whole new step in between “welcome” and ticket sale
  • Before we got to this stage there was the task of onboarding staff to new responsibilities, which fell largely to the Digital & Emerging Media department. With the allocation of new responsibilities also came the opportunity to create tools that could facilitate some of the work.


Defining the Need for Considered Interfaces

  • Why did we decide that new interfaces were necessary in certain parts of the workflow?
  • We started with observation, watching workflows as they emerged. We created tools to assist where necessary. The need for interfaces was in part logistical, in part technical and also in part human.
  • Candidates for interface development are parts of the new digital ecosystem where there is:
  • High volume of data
  • Large number of users
  • Complex tasks
  • Something that needs constraints or enforcement
  • Example: the job of assigning tags and related objects to everything we put on display for the reopening. The touch table interfaces utilize tag and related object information. This data does not live in TMS, so it is housed in a custom database.
  • The task of creating the data fell to the curators. Originally this was stored in Excel files. While the curators were happy using spreadsheets, we identified a few major issues with them. The biggest one was that every department had devised its own schema for storing the data, which would ultimately have to be reconciled.
  • This example fits all of the criteria above.


Case Study 1: Tagatron

  • Explicit purpose of the Tagatron tool: make the work quicker; make the metadata consistent; make the organization of the metadata consistent
  • Making this tool highlighted for the digital team the complex relationship between the work, the tool, and the people responsible for each—even though we believed the tool made things easier, the tool had its own set of ongoing technical and usability issues
  • We found that those issues bred a degree of distrust, or a lack of confidence, in the larger project. Some of these were due to bugs in the tool, but some of it was just that it was now known that this work would be “enforced” or taken more seriously, which made users uncomfortable.
  • Key idea: the interface takes on a symbolic value in representing “new responsibilities” and can bring about issues that it might not have been designed to address. It takes on a complex position between human needs and technical needs.


Tagatron (continued)

  • These graphs illustrate how prolific the task of tagging and relating objects is. It was important to build Tagatron because it is a crucial tool in the ongoing operation of the museum’s digital experience. More so than the spreadsheets ever could, it allows for scalability.
  • Since the re-opening the tool went through one major design and backend overhaul, and continues to see small iterations.


Case Study 2: Pen Pairing Stations

  • Context of Pen Pairing: Every visitor to the museum receives a Pen. At the museum’s front desk each Pen is paired with a unique admission ticket. Every ticket has a shortcode identifier that allows visitors to retrieve their Pen visit data online when they enter the code on their ticket.
  • Pen pairing is done at a very critical point in the visitor experience when the interaction needs to be quick and frictionless. Visitor Services Associates have to coordinate a number of simultaneous tasks.

Pen Pairing Station (continued)

  • This video depicts the Pen pairing process behind the front desk. It documents the first version of the Pen Pairing application, and shows the exposed Pen-reading circuit board before housing was built.
  • Pen pairing is one of the most demanding of the new responsibilities created by the “new experience”–it has to fit between welcoming a visitor, taking their money, answering any questions, and looking up their member ID.
  • Each use of the tool lasts only 5-10 seconds, but we’ve spent many hours and built many versions to figure out exactly what needs to happen in that time to accomplish all the tasks, including updating databases, handling failures, and serial communication.
  • Every one of these iterations gave us an opportunity to be connected to the staff using the tools, not only to make something that works better, but to be a part of the conversation.


Administrative Interfaces: What does success look like? How does it feel?

  • In informal interviews with Tagatron users we found trust to be a central theme of users’ response to the interface
  • Since Tagatron augments the curators’ use of TMS, they were less trusting of its database as a long-lasting data repository
  • Improving user feedback (like confirmation messages) helped build trust in the interface
  • Bill Moggridge, Designing Interactions: designing interaction is designing the relationship between people and things
  • We came to realize the responsibility of designing interfaces—validating and responding to users’ concerns; acknowledging the burden of new responsibilities
  • Administrative interfaces at the crux of the staff relationship to the new Cooper Hewitt experience
  • Anticipating issues in developing and maintaining administrative interfaces (when success feels like failure):
  • First, the human factor: being open to the feedback and creating an environment where the channels exist to communicate staff thoughts on the tool.
  • Second, the technical factor: being able to act on what you hear from staff and make the required changes to complete the feedback loop.
  • It is our responsibility as facilitators of technology in the museum to hear and act on concerns.


Questions to ask when starting an administrative application, to anticipate issues and accommodate feedback.

Question 1: To what degree should the (administrative) tool fit with pre-existing notions?

  • This question addresses the need to understand contextual use of the tool
  • Tagatron: curatorial culture around spreadsheets and TMS
  • Pen Pairing Station: this tool disrupted the expected ticket-selling workflow. We learned that the tool needed to make Pen pairing as unobtrusive as possible


Question 2: How much of the underlying technology should come through to the interface?

  • Infrastructure & interfaces are layers of an onion—the best mental model for a tool’s interface might not reflect the best technical model for its back end
  • Tagatron: the filtering tools were a reflection of how data was stored in the database, not how curators expected it
  • Pen Pairing Station: error messages from all parts of the application stack came through to the user unaltered, which was not helpful
  • Highlights the need for a technical solution that allows for flexibility in the middle, “translation layer” of an application


Question 3: What kinds of feedback does the tool provide?

  • Feedback is the voice of the interface, its personality: is it finicky or reliable? Annoying or supportive?
  • Tagatron: missing feedback created distrust
  • Pen Pairing: too much feedback caused confusion (error messages, validation handshake)
  • Our design and production methodology: working code always wins; learning through doing; build small, working prototypes and continually iterate.
  • A more anticipatory form of design (like design thinking) could have helped us find answers to this question sooner


Question 4: Is it an appropriate time for experimentation?

  • Tagatron’s v1 included relatively unknown-to-us technology like MongoDB and Node.js. We should have used more familiar technology or done small-scale tests before implementing a project of this scale–it severely hindered our ability to accommodate feedback
  • Other tools we built that involved experimental tech were only successful because their scale and userbase were far smaller (label writer)


The result of everything: bridges, lines of communication opened

  • Building administrative tools for staff created cross-departmental conversation—in taking on the role of building and maintaining Tagatron and the Pen Pairing Station, the Digital & Emerging Media team engaged users from departments across the museum and observed closely how the tools fit into staff members’ larger roles

A Very Happy & Open Birthday for the Pen


Today marks the first birthday of our beloved Pen. It’s been an amazing year, filled with many iterations, updates, and above all, visits! Today is a celebration of the Pen, but also of all of our amazing partners whose continued support has helped to make the Pen a reality. So I’d like to start with a special thank you first and foremost to Bloomberg Philanthropies for their generous support of our vision from the start, and to all of our team partners at Sistel Networks, GE, Undercurrent, Local Projects, and Tellart.

Updates

Over the course of the past year, we’ve been hard at work, making the Pen Experience at Cooper Hewitt the best it can be. Right after we launched the Pen, we immediately realized there was quite a bit of work to do behind the scenes so that our Visitor Experience staff could better deal with deploying the Pen, and so that our visitors have the best experience possible.

Here are some highlights:

Redesigning post-purchase touchpoints – We quickly realized that our ticket purchase flow needed to be better. This article goes over how we tried to make improvements so that visitors would have a more streamlined experience at the Visitor Experience desk and afterwards.

Exporting your visits – The idea of “downloading” your data seemed like an obvious necessity. It’s always nice to be able to “get all your stuff.” Aaron built a download tool that archives all the things you collected or created and packages it in a nice browser friendly format. (Affectionately known as parallel-visit)

Improving Back-of-House Interactions – We spent a lot of time behind the visitor services desk trying to understand where the pain points were. This is an ongoing effort, which we have iterated on numerous times over the year, but this post recounts the first major change we made, and it made all the difference.

Collecting all the things – We realized pretty quickly that visitors might want to extend their experience after they’ve visited, or more simply, save things on our website. So we added the idea of a “shoebox” so that visitors to our website could save objects, just as if they had a Pen and were in our galleries.

Label Writer – In order to deploy and rotate new exhibitions and objects, Sam built an Android-based application that allows our exhibition staff to easily program our NFC based wall labels. This tool means any staff member can walk around with an Android device and reprogram any wall label using our API. Cool!

Improving visitor information with paper – Onboarding new visitors is a critical component. We’ve since iterated on this design, but the basic concept is still there–hand out postcards with visual information about how to use the Pen. It works.

Visual consistency – This has more to do with our collection’s website, but it applies to the Pen as well, in that it helps maintain a consistent look and feel for our visitors during their post visit. This was a major overhaul of the collections website that we think makes things much easier to understand and helps provide a more cohesive experience across all our digital and physical platforms.

Iterating the Post-Visit Experience – Another major improvement to our post-visit end of things. We changed the basic ticket design so that visitors would be more likely to find their way to their stuff, and we redesigned what it looks like when they get there.

Press and hold to save your visit – This is another experimental deployment where we are trying to find out if a new component of our visitor experience is helpful or confusing.

On Exhibitions and Iterations – Sam summarizes the rollout of a major exhibition and the changes we’ve had to make in order to cope with a complex exhibition.

Curating Exhibition Video for Digital Platforms – Lisa makes her Labs debut with this excellent article on how we are changing our video production workflow and what that means when someone collects an object in our galleries that contains video content.

The Big Numbers

Back in August we published some initial numbers. Here are the high level updates.

Here are some of the numbers we reported in August 2015:

  • March 10 to August 10 total number of times the Pen has been distributed – 62,015
  • March 10 to August 10 total objects collected – 1,394,030
  • March 10 to August 10 total visitor-made designs saved – 54,029
  • March 10 to August 10 mean zero collection rate – 26.7%
  • March 10 to August 10 mean time on campus – 99.56 minutes
  • March 10 to August 10 post visit website retrieval rate – 33.8%

And here are the latest numbers from March 10, 2015 through March 9, 2016:

  • March 10, 2015 to March 9, 2016 total number of times the Pen has been distributed – 154,812
  • March 10, 2015 to March 9, 2016 total objects collected – 3,972,359
  • March 10, 2015 to March 9, 2016 total visitor-made designs saved – 122,655
  • March 10, 2015 to March 9, 2016 mean zero collection rate – 23.8%
  • March 10, 2015 to March 9, 2016 mean time on campus – 110.63 minutes
  • Feb 25, 2016 to March 9, 2016 post visit website retrieval rate – 28.02%

That last number is interesting. A few weeks ago we added some new code to our backend system to better track this data point. Previously we had relied on Google Analytics to tell us what percentage of visitors access their post-visit website, but we found this to be pretty inaccurate. It didn’t account for multiple accesses of the same visit by multiple users (think social sharing of a visit), and so the number was typically higher than what we thought reflected reality.

So, we are now tracking a visit page’s “first access” in code and storing that value as a timestamp. This means we now have a very accurate picture of our post visit website retrieval rate and we are also able to easily tell how much time there is between the beginning of a visit and the first access of the visit website–currently at about 1 day and 10 hours on average.
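The bookkeeping for that is tiny. Here is a sketch, assuming a visits table with a nullable first_accessed column and any DB-API style cursor (our real backend differs in the details):

```python
import time

def record_first_access(db, visit_id):
    """Stamp a visit the first time its post-visit page is loaded."""
    db.execute(
        "UPDATE visits SET first_accessed = %s "
        "WHERE id = %s AND first_accessed IS NULL",  # only the first hit wins
        (int(time.time()), visit_id),
    )
```

With that column in place, the retrieval rate and the visit-to-first-access delay both fall out of simple queries.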

The Pen generates a massive amount of data. So, we decided to publish some of the higher level statistics on a public webpage which you can always check in on at https://collection.cooperhewitt.org/stats. This page reports daily and includes a few basic stats including a list of the most popular objects of all time. Yes, it’s the staircase models. They’ve been the frontrunners since we launched.


Those staircase models!

As you can see, we are just about to hit the 4 million objects collected mark. This is pretty significant and it means that our visitors on average have used the Pen to collect 26 objects per visit.

But it’s hard to gain a real sense of what’s going on if you just look at the high-level numbers, so let’s track some things over time. Below is a chart that shows objects collected by day for the last year.


Objects collected by day since March 10, 2015

On the right you can easily see a big jump. This corresponds with the opening of the exhibition Beauty–Cooper Hewitt Design Triennial. It’s partly due to increased visitation following the opening, but what’s really going on here is heavy use of object bundling. If you follow this blog, you’ll have recently read the post by Sam where he talks about the need to bundle many objects on one tag. This means that when a visitor taps his or her Pen on a tag, they very often collect multiple objects. Beauty makes heavy use of this feature, bundling a dozen or so objects per tag in many cases, resulting in a dramatic increase in collected objects per day.


Pen checkouts per day since March 10, 2015

We can easily see that this is, in fact, what is happening if we look at our daily Pen checkouts. Here we see a reasonable increase in checkouts following the launch of Beauty, but it’s not nearly as dramatic as the increase in the number of objects being collected each day.


Immersion room creations by day since March 10, 2015

Above is a chart that shows how many designs were created in the Immersion Room each day over the past year. This, too, is directly connected to the number of visitors we have, but it’s interesting to see the volume over this period of time. The Immersion Room is one of our more popular interactive installations and it has been on view since we launched, so it’s not a big surprise that it has a pretty steady curve. Also, keep in mind that this is only representative of “things saved,” as we are not tracking the thousands of drawings that visitors make and walk away from.

We can slice and dice the Pen data all we want. I suppose we could take requests. But I have a better idea.

Open Data

Today we are opening up the Pen Data. This means a number of things, so listen closely.

  1. The data we are releasing is an anonymized and obfuscated version of some of the actual data.
  2. If you saved your visit to an account within thirty days of this post (and future data updates) we won’t include your data in this public release.
  3. This data is being licensed under Creative Commons – Attribution, Non-Commercial. This means a company can’t use this data for commercial purposes.
  4. The data we are releasing today is meant to be used in conjunction with our public domain collection metadata or our public API.

The data we are releasing is meant to facilitate a better understanding of Cooper Hewitt, its collection and interactive experiences. The idea here is that designers, artists, researchers and data analysts will have easy access to the data generated by the Pen and will be able to analyze it and create data visualizations so that we can better understand the impact our in-gallery technology has on visitors.

We believe there is a lot more going on in our galleries than we might currently understand. Visitors are spending incredible amounts of time at our interactive tables, and have been using the Pen in ways we hadn’t originally thought of. For example, we know that some visitors (children especially) try to collect every single object on view. We call these our treasure hunters. We also know that a percentage of our visitors take a Pen and don’t use it to collect anything at all, though they tend to use the stylus end quite a bit. Through careful analysis of this kind of data, we believe that we will be able to begin to uncover new behavior patterns and aspects of “collecting” we haven’t yet discovered.
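For example, finding the treasure hunters in the release might look something like the sketch below; the file name and column names are guesses about the export format, not a promise about it.

```python
import pandas as pd

# One row per collected object, per the (assumed) export format
df = pd.read_csv("pen-collected-items.csv")

per_visit = df.groupby("visit_id").size()
print("mean objects collected per visit:", round(per_visit.mean(), 1))

# "Treasure hunters": visits that collected an improbably large number of objects
print(per_visit[per_visit > 200].sort_values(ascending=False).head())
```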

If you fit this category and are curious enough to take our data for a spin, please get in touch, we’d love to see what you create!

On Exhibitions and Iterations

Since reopening in December 2014, we’ve found that an upcoming exhibition opening is a big driver of iteration. Preparing an exhibition involves the whole museum and is one of the most coordinated and planned-out things we do, and because of this, new exhibitions push us to improve in a number of ways.

First, new exhibitions can highlight existing gaps or inefficiencies in our systems. Our tagging tool, for example, always sees a round of bug fixes or new features before an exhibition because it coincides with a time when it will see heavy use. Second, exhibitions present us with new technical challenges. Objects in the Heatherwick exhibition, for example, were displayed in the galleries grouped into “projects,” which is also how we wanted users to collect them with their Pens and view them on the website. To accomplish this we had to figure out a way that TMS, our collections management software, could store both the individual objects (for internal purposes) and the grouped projects (which would hold all the public-facing images and text), and figure out how to see that through to the website in a way that made registrars, curators and ourselves comfortable. Finally, a new exhibition can present an opportunity for experimentation. David Adjaye Selects gave us the opportunity to scale up Object Phone, a telephone-based riff on the audio guide, which originally started as a small, rough prototype.

Last week was the opening of our triennial exhibition “Beauty,” which similarly presented us with a number of technical challenges and opportunities to experiment. In this post I’ll share some of those challenges and the work we did to approach them.

Collecting Exhibition Text


Triennial’s wall text, with the collect icon in the lower-right corner

Since the beginning of the pen project we’ve been saying that the Pens don’t just have to collect objects. Aaron and Seb wrote in their paper on the project that “nothing would prevent the museum from allowing visitors to ‘collect’ individual designers, entire exhibitions or even architectural elements from the building itself in the future.” To that end, we’ve experimented with collecting shop items and decided that with the triennial we would allow visitors to collect exhibition text as well.

Exhibition text (in museum argot, “A-Panel” is the main text at the beginning of an exhibition and “B-Panel” are any additional texts you might find along the way) makes total sense as something that a visitor should be able to remember for later. It explains and contextualizes an exhibition’s goals, contents and organization. We’ve had the text on our collections site since we reopened, but it took a few clicks to get to from a visitor’s post-visit website. Now, the text will be right there alongside all of a visitor’s objects.


The exhibition text on a post-visit website

The open-ended part of this is what visitors will expect when they collect an “exhibition.” We installed the collection points with no helper text, i.e. it doesn’t say “press here to collect this exhibition’s text.” We think it’s clear that the crosshairs refer to the text, but one of our original ideas was that we could have a way for the visitor to automatically collect every object in the exhibition and I wonder if that might be the implied function of the text tag. We will have to observe and adapt accordingly on that point.

Videos Instead of Images

When we first added videos to our collections site, we found that the fastest way to accomplish what we needed was to use TMS for relating videos to objects but use custom software for the formatting and uploading of the videos. We generate four versions of every video file — subtitled and not subtitled at two resolutions each — which we use in the galleries, on the tables and on the website. One of the weaknesses of this pipeline is that because the videos don’t live in the usual asset repository the way all of our images do, the link between TMS and the actual file’s location is made by nothing more than a “magic string” and a bit of guesswork. This makes it difficult to work with the video records in TMS: users get no preview and it can be difficult to know which video ID refers to which specific video. All of this is something we’ll be taking another look at in the near future, but there is one small chunk of this problem we approached in advance of the Triennial: how to make our website show the video in place of the primary image if it would be more appropriate to do so.
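
To make the fragility concrete, here is a minimal sketch of the kind of naming convention involved; the asset host and file naming scheme below are hypothetical, and that is exactly the weakness: nothing guarantees a guessed path actually exists.

```python
# A minimal sketch (not our production pipeline) of a "magic string"
# convention for locating video derivatives. The host and naming
# scheme are hypothetical.

VIDEO_ROOT = "https://assets.example.org/video"  # hypothetical host

def video_variants(video_id):
    """Guess the URLs of the four derivatives of a TMS video record:
    subtitled and not subtitled, at two resolutions each."""
    variants = {}
    for resolution in ("1080p", "540p"):
        for subtitled in (True, False):
            key = resolution + ("-subs" if subtitled else "")
            # Nothing here verifies that the file actually exists.
            variants[key] = "%s/%s-%s.mp4" % (VIDEO_ROOT, video_id, key)
    return variants

print(video_variants("68686"))
```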

Here’s an example. Daniel Brown’s On Growth and Form is an animation on display in the Triennial. Before, it would have looked like this — the primary image is a still rendering that has been added in TMS, and the video appears as related content further down the page.


What we did was add a rule: if the object is itself a video, animation or other screen-based media, and we have an associated video record linked to the object, remove the primary image and put the video there instead.

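In code terms, the rule is roughly the following. This is a minimal sketch, not our actual template code, and the record structure and field names are hypothetical.

```python
# A minimal sketch of the primary-media rule described above; the
# field names are hypothetical stand-ins for our object records.

SCREEN_BASED = ("video", "animation", "screen-based media")

def primary_media(obj, related_videos):
    """Use a linked video as the hero media when the object itself is
    screen-based; otherwise fall back to the primary image."""
    if obj["medium"] in SCREEN_BASED and related_videos:
        return {"type": "video", "src": related_videos[0]}
    return {"type": "image", "src": obj["primary_image"]}

obj = {"medium": "animation", "primary_image": "still-render.jpg"}
print(primary_media(obj, ["on-growth-and-form.mp4"]))
# {'type': 'video', 'src': 'on-growth-and-form.mp4'}
```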

Like all good iterations, this one opened up a bunch of next steps. First, we need to figure out how to add videos into our main digital asset pipeline so that the guesswork can be removed from picking primary videos and a curator or image specialist can mark one as “primary” the same way they would with an image. Next, it brought up an item that’s been on the back burner for a while, which is a better way to display alternate images of an object. Currently, they have their own page, which gets the job done, but it would be nice to present some alternate views on the main object page as well.

Just a Reflektor Sandbox

It’s fun!

We had a great opportunity to do some experimentation on our collections site due to the inclusion of Aaron Koblin and Vincent Morisset’s interactive video for Arcade Fire’s Just a Reflektor. The project’s source code is already available online and contains a “sandbox” environment, a tool that demonstrates some of the interactive visual effects created for the music video in a fun, open-ended environment. We were able to quickly adapt the sandbox’s source code to fit on our collections site so that visitors who collect the video with their Pen will be able to explore a more barebones version of the final interactive piece. You can check that out here.

Fully Loaded Labels

When we were working on the Pen prototypes, we tried six different NFC tags before getting to the one that met all of our requirements. We ended up with these NTAG203 tags whose combination of size and antenna design made them work well with our Pens and our wall labels. Their onboard memory of 144 bytes, combined with the system we devised for encoding collection data on them, meant that we could store a maximum of 11 objects on a tag. Of course we didn’t see that ever being a problem… until it was. The labels in the triennial exhibition are grouped by designer, not by object, and in some cases we have 35 objects from a designer on display that all need to be collected with one Pen press. There were two solutions: find tags with more memory (aka “throw more hardware at it”) or figure out a new way to encode the tags using fewer bytes and update the codebase to support both the new and old ways (aka “maintenance nightmare”). Fortunately for us, the NTAG216 series of tags have become more commonly available in the past year, which feature 888 bytes of memory, enough for around 70 objects on a tag. After a few rounds of end-to-end testing (writing the tag, collecting it with a pen and having it show up on the post-visit website), we rolled the new tags out to the galleries for the dozen or so “high capacity” labels.
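
The capacity arithmetic works out roughly as follows. The header and per-object byte counts in this sketch are assumptions inferred from the figures above (144 bytes for roughly 11 objects, 888 bytes for roughly 70), not our actual encoding spec.

```python
# Back-of-the-envelope tag capacity. The 8-byte header and 12 bytes
# per object are assumptions that fit the numbers in this post, not
# the real encoding.

def max_objects(tag_bytes, header=8, bytes_per_object=12):
    return (tag_bytes - header) // bytes_per_object

print(max_objects(144))  # NTAG203: 11 objects
print(max_objects(888))  # NTAG216: 73 objects, i.e. around 70
```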

The new tag (smaller, on the left) and the old tag (right)

The most interesting iteration that’s been made overall, I think, is how our exhibition workflow has changed over time to accommodate the Pen. With each new exhibition, we take what sneaked up on us the last time and try to anticipate it. As the most recent exhibition, Beauty’s timeline included more digitally focused milestones from the outset than any other exhibition yet. Not only did this allow us to anticipate the tag capacity issue many months in advance, it also gave us more time to double-check and fix small problems in the days before opening, and to try new, experimental approaches to the collections website and post-visit experience. We’re all excited to keep this momentum going as work ramps up on the next exhibitions!

 

Press and hold to save your visit


During the development of a major project it’s inevitable that certain features just won’t make it into production in time for launch. Sometimes things fall out of scope, or they get left off the priority list and put on the back burner. Hopefully these features are not critical to the project, but invariably they were good ideas, and in many cases should warrant a revisit sometime down the road, when the dust has settled.

At Cooper Hewitt, one such “feature” was affectionately known as the “transfer stations.” The basic idea was that throughout the museum there would be small kiosk-like stations where visitors could tap their Pens and “transfer” the data from their Pen to our database so that they could immediately see their collections from a mobile phone.


It was a nice idea, and we began to implement it, collecting specs and designing the steel stands that would support the kiosks. Eventually the parts showed up, but by that time we were pretty knee-deep in launching the Pen and the museum, so the transfer station idea got set aside.

Fast forward to today, nearly a year since the Pen has been in visitors’ hands, and we’ve been thinking about how we can better onboard our visitors and how we can remind them that there is something to do after they leave the museum. It’s a complex problem that we’ve tried to address in several ways.

  1. Visitors arriving at the museum typically don’t know anything about the Pen. At our Visitor Experience desk our staff are trained to quickly teach each visitor what they can do with the Pen while, in the background, processing their orders and “pairing” each Pen to a ticket. It’s a critical part of the process and one we’ve spent a good deal of time optimizing.
  2. While visitors are waiting in line, there is an opportunity to help people learn about the Pen. We have a looping video playing in that spot that tries to do this job visually, and additionally, we have small postcards available that explain things further.
  3. On the way out the door, visitors are reminded that they should hold on to their tickets. This is supposed to happen at the door, in the moment where they are returning their pen.

There are lots of other visual cues and verbal reminders happening while you walk through the galleries, but no matter what, we find ticket stubs left behind. We know from our data that lots of our visitors are checking out their websites after their visits, but maybe we can do better. Also, part of the whole concept behind the Pen is that you can look at your collection from your mobile phone right away, so we should make that happen more seamlessly.

Technically speaking, the transfer stations have been in play all along. When you walk up to one of our interactive tables and “dock” your Pen, we read all the data on your Pen (the things you’ve collected so far) and store it in our database. So, if you just keep walking up to tables and docking your Pen, you’d be able to visit your collection on your mobile phone, no problem. But this doesn’t really do a great job of reminding you, or even letting you know that it’s possible. The tables are about browsing the collection, and that’s pretty much what their UI describes.

Also, we’ve been using an early version of the transfer station behind the scenes to do a final dump of your Pen’s data after you’ve left, so that even if you collected objects and never made it to one of the tables, you’ll be okay.

All along though, a few of us have been a little skeptical of the function and design of the transfer stations. Will they just create confusion with the visitor? Are they even necessary? Should they have a responsive visual user interface? To get to the bottom of some of these questions, well, we need to birth something into the world and see how it goes.

To get started, we chose to deploy two transfer stations in two areas on the second floor. There was a good deal of work that needed to happen. The transfer station parts needed to be identified, assembled and configured. We’d need to set up their built-in Raspberry Pi computers to behave properly, and we’d need to work through their connection to power and network within the galleries. Enter Mary Fe! She is our Gallery Technologist, the person you might see performing maintenance on some part of the technology throughout the galleries the next time you visit Cooper Hewitt. Mary Fe is the person who shows up at 8am before the museum opens to make sure everything is working and looking good.

I asked Mary Fe to work on this project from start to finish, and she’s written up a little documentation on how things went. She says:

I was called in to *clone* the existing and fully working Register station. Each station consists of a Raspberry Pi mini computer, connected to our museum network over Ethernet, and an NFC reader board designed by Sistel Networks that is able to download data from a Pen. The Raspberry Pi is mounted in the base of the extremely heavy stands, and its corresponding NFC reader board is located at the top, behind the “plus” icon.


What we’d need

  • Raspberry Pi units programmed to save only (vs. save and check in a Pen)
  • Functioning data and power at the locations where we wish to deploy the stands
  • The stands
  • Easy-to-understand signage

The first part was pretty easy to accomplish. I began by cloning SD cards for use with the Raspberry Pi. These had to be configured to “save” Pen data, and not “check in” the Pen so that the visitor could continue their visit, saving as many times as they like. After cloning, we assigned new names and unique IPs to the Raspberry Pi stations.
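
To make the distinction concrete, here is a minimal sketch of the docking logic under those two modes; the Pen class and in-memory store are hypothetical stand-ins for the real NFC reader API and our database.

```python
# A minimal sketch of the one behavioral difference between station
# types: whether the Pen is checked back in after its data is saved.

from dataclasses import dataclass, field

@dataclass
class Pen:
    pen_id: str
    items: list = field(default_factory=list)  # collected object IDs

STATION_MODE = "save_only"  # transfer stations; registers use "save_and_checkin"

def handle_dock(pen, store):
    # Always persist whatever is on the Pen so the post-visit website
    # stays current.
    store.setdefault(pen.pen_id, []).extend(pen.items)
    if STATION_MODE == "save_and_checkin":
        pen.items.clear()  # wipe the Pen so it can go back in the pool

store = {}
handle_dock(Pen("A1B2", ["18704235", "18446531"]), store)
print(store)  # {'A1B2': ['18704235', '18446531']}
```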

I ran into a little trouble when I started testing the data ports. Long story short, I had to learn how to tone/probe data ports from their locations in the galleries to their corresponding positions in the network closet. After a good deal of troubleshooting with our IT specialists in DC, the ports came to life and we assigned their IP addresses.

Once the ports were figured out and the Raspberry Pis were set up and configured, everything began to work. Right away I noticed visitors starting to use the transfer stations.

What’s next

We have more transfer stations waiting in the wings. However, as I mentioned above, deploying these two transfer stations is sort of an experiment. We want to watch and see how visitors react to them, whether they cause more confusion, and how often they get used.

We’ve already been thinking of ways we might incorporate a screen to add a visual user interface to the stations. Perhaps a more guided experience would get visitors more involved with them. What kinds of problems will introducing a screen add to the device? Maybe we should think about e-ink, a touch screen, or a thermal printer? It’s hard to say at this point.

The next step is to collect some visitor feedback, look at the data, and start prototyping new ideas.

Mailchimp & Tessitura together at last


The short version is:

There is now an integration between Tessitura & Mailchimp. If you are a Tessitura Licensee, and have access to their BitBucket account, you can get it here.

So you can use this lovely Mailchimp Interface to create your emails…


… and connect it with all of the power of Tessitura, through this easy-to-use tool.


The long version is:

It’s the last day of the Tessitura Learning & Community Conference, and I’m all checked out of my hotel, sitting in the conference hall, thinking about all of the things I’ve learned this week, and all of the people I’ve met.

So many of the people I’ve talked to have been asking about the Tessitura-Mailchimp Integration we launched in partnership with Mailchimp and JCA, Inc. this past week, and so I thought I’d write up a blog post to try and explain what it is, how you get it/use it/make it better, and more importantly, why we did it in the first place.

A long while ago, Cooper Hewitt had an enormous email list. Some 60K emails on one massive list powered by an e-marketing service that was clearly heading out of business. This giant list wasn’t working. We weren’t getting the results we thought we should, and what’s more, we had no way of measuring our success. So, we switched to Mailchimp. It was a pretty obvious choice. Mailchimp offered the museum an incredibly quick setup time and a beautiful user interface, with super clean and easy-to-use templates. What’s more, Mailchimp placed a lot of emphasis on “list quality” and advised us to put out an appeal to our current bloated list to do an “opt-in” and create a whole new list made up of real people, with valid email addresses, who actually wanted to receive mail from us.

The list dropped down. Way down. After a few “last chance” appeals, our 60K subscribers were whittled down to about 2500. This was challenging territory for many departments in the museum which, like their counterparts at almost every non-profit, relied on the large numbers more for a sense of security than for their effectiveness.

But we pressed on, and noticed quickly that our open rates were dramatically high. Our click-through rates were excellent, and it was clear that people were actually reading our emails and acting on our calls to action. If you haven’t noticed by now, I’m trying to include as many marketing buzzwords in this post as possible. You know, due diligence and all.

This is a long way around to explain that we all started to fall in love with Mailchimp. Its ease of use and deep analytics and reporting tools were a huge win for the museum as a whole. Our list continues to grow, and our “satisfaction” rating remains pretty steadily on the high end. The staff seem to enjoy working in Mailchimp, especially following the recent redesign of the user interface.

One day along the way, the museum decided to implement Tessitura as our CRM (constituent relationship management) and ticketing system. It’s a super robust, enterprise-class system that is sort of the Swiss Army knife for non-profits, performing arts centers, and more recently, museums.

In the long-term strategic plan for Tessitura, it appeared as though we would have to ditch Mailchimp and move to one of the two providers that offer an integration with Tessitura. We looked at both of them, and while they both did the job at hand, neither offered the pleasant experience and incredible analytics tools that Mailchimp did. It would have been a tough sell to move our staff off something they clearly had grown to love and onto a system that would probably work just fine, but not make their hearts any warmer.

So, we talked with Mailchimp. Mailchimp has a wide variety of third party integrations, and we started to converse about what an integration with Tessitura would look like. We all got really excited at the possibilities, and so once a small amount of funding was secured, we partnered with JCA, Inc. to build us something.

Mailchimp was really excited about the idea, and being a forward thinking tech company, they pushed us to make the integration free, and open source. This is something we strongly believe in at Cooper Hewitt as well, so we worked with the staff at Tessitura, and figured out a way to share the code within the Tessitura Network, so as not to violate any non-disclosure agreements. Things were starting to take shape.

So what will it do, and how does it work?


We tried to limit the scope of the project to the bare essentials. We wanted to stay within our budget, and build a simple tool that does what it says on the tin. The hope here is that Tessitura licensees will try it out, see that it’s a good thing, and run with it, adding features and customizing it to suit their needs. Open source goodness.

At the moment, the project is a pretty simple .NET application that anyone can install on a Windows machine that can talk to Tessitura and Mailchimp. You fill out some initial config information, and then schedule a nightly synchronization job. This allows Tessitura licensees to export their primary lists on a nightly basis into Mailchimp.

You can also perform synchronizations on an ad-hoc basis, meaning, any Tessitura user can easily create a segmented list in Tessitura for a specific purpose, and sync that list to Mailchimp for immediate sending.

This is a really nice feature, because it actually creates or updates a segment in Mailchimp. Rather than create many bespoke email lists, you can then just use a single master list in Mailchimp, and use the exported segments so you are only sending to the addresses you are interested in.
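
The shipped integration is a .NET application, but the Mailchimp side of that workflow boils down to a couple of API calls. Here is a rough Python sketch of the master-list-plus-segments pattern; the API key, list ID and segment name are placeholders, and unlike the real tool, this sketch only creates segments rather than updating existing ones.

```python
# A rough sketch of the Mailchimp calls behind the pattern described
# above. Credentials and IDs are placeholders.

import hashlib
import requests

API_KEY = "0123456789abcdef-us1"      # placeholder
DC = API_KEY.rsplit("-", 1)[-1]       # Mailchimp datacenter suffix
BASE = "https://%s.api.mailchimp.com/3.0" % DC
AUTH = ("anystring", API_KEY)         # basic auth: any username + key
MASTER_LIST = "abc123def4"            # the single master list

def upsert_member(email):
    # Members are addressed by the MD5 of the lowercased email, so a
    # PUT is create-or-update.
    member = hashlib.md5(email.lower().encode("utf-8")).hexdigest()
    requests.put(
        "%s/lists/%s/members/%s" % (BASE, MASTER_LIST, member),
        auth=AUTH,
        json={"email_address": email, "status_if_new": "subscribed"},
    ).raise_for_status()

def sync_tessitura_list(name, emails):
    # Everyone lands on the master list; the exported Tessitura list
    # becomes a static segment you can send to directly.
    for email in emails:
        upsert_member(email)
    requests.post(
        "%s/lists/%s/segments" % (BASE, MASTER_LIST),
        auth=AUTH,
        json={"name": name, "static_segment": emails},
    ).raise_for_status()

sync_tessitura_list("spring-gala-invitees", ["visitor@example.org"])
```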

What it’s not

It’s important to understand that this is an open source tool and is provided “as is.” There is no support staff waiting to take your calls and answer your questions. This remains the responsibility of the Tessitura community.

As I mentioned, it’s a simple tool, and at the moment, it basically does the two functions I’ve outlined above. There is no syncing of analytics data back into Tessitura, for example. We really love the analytics tools built right into Mailchimp, and so for most this may not be a deal breaker. These are the kinds of features we hope will get added by the community down the road.

What it is, again.

It’s a super exciting thing for us all to think about! The Tessitura community really needs to take more control over the entire ecosystem of third-party applications and extensions. Without a vested interest in building our own tools, open sourcing the work we are all doing, and joining in the conversation with regard to direction and strategy, the community will always be waiting on the next update from the vendors who have chosen to build products on top of the system.

How do I get it?

First, you need to be a Tessitura Network licensee. Then, you just need access to the Tessitura Network code sharing site on BitBucket. You can get this by sending an email to web_dev@tessituranetwork.com. Once you are there, you can download the code, or the compiled binaries if you just want to try it out on your system. The repository has a README with all the relevant info on how to install it from scratch, build from source, and set things up in Mailchimp.

How do I contribute?

If you are a Tessitura Network licensee, and you’ve gotten this far, read the README to get the full picture on how to fork the code and contribute. For the time being, feel free to log issues, and send feature requests, and I will do my best to follow up on them and help get them resolved, but eventually, we hope that someone within the community will pick up the torch and help us to continue to develop what we think is a really valuable integration and option for the broader Tessitura community.

Reminder: First, you need to be a Tessitura Network licensee. Then, you just need access to the Tessitura Network code sharing site on BitBucket. You can get this by sending an email to web_dev@tessituranetwork.com.

Long live RSS


I just made a new Tumblr. It’s called “Recently Digitized Design.” It took me all of five minutes. I hope this blog post will take me all of ten.

But it’s actually kinda cool, and here’s why. Cooper Hewitt is in the midst of a mass digitization project through which we will have digitized our entire collection of over 215K objects by mid to late next year. Wow! 215K objects. That’s impressive, especially when you consider that probably 5000 of those are buttons!

What’s more, we now have a pretty decent “pipeline” up and running. This means that as objects are digitized and added to our collections management system, they automatically wind up on our collections website after working their way through a pretty hefty series of processing tasks.

Over on the West Coast, Aaron felt the need to make a little RSS feed for these “recently digitized” objects so we could all easily watch the new things come in. RSS, which stands for “Rich Site Summary,” has been around forever, and many have said that it is now a dead technology.

Lately I’ve been really interested in the idea of microservices. I guess I never really thought of it this way, but an RSS or ATOM feed is kind of a microservice. Here’s a highlight from “Building Microservices” by Sam Newman that explains this idea in more detail.

Another approach is to try to use HTTP as a way of propagating events. ATOM is a REST-compliant specification that defines semantics (among other things) for publishing feeds of resources. Many client libraries exist that allow us to create and consume these feeds. So our customer service could just publish an event to such a feed when our customer service changes. Our consumers just poll the feed, looking for changes.

Taking this a bit further, I’ve been reading this blog post, which explains how one might turn around and publish RSS feeds through an existing API. It’s an interesting concept, and I can see us making use of it for something just like Recently Digitized Design. It sort of brings us back to the question of how we publish our content on the web in general.

In the case of Recently Digitized Design the RSS feed is our little microservice that any client can poll. We then use IFTTT as the client, and Tumblr as the output where we are publishing the new data every day. 
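
If you wanted to roll your own client instead of leaning on IFTTT, the polling loop is about as simple as integration code gets. Here is a minimal sketch using the feedparser library; the feed URL is a placeholder.

```python
# A minimal sketch of a client polling the feed for changes, using
# the feedparser library. The feed URL is a placeholder.

import time
import feedparser

FEED_URL = "https://example.org/recently-digitized/rss"  # placeholder
seen = set()

while True:
    for entry in feedparser.parse(FEED_URL).entries:
        if entry.id not in seen:
            seen.add(entry.id)
            print("new object:", entry.title, entry.link)
    time.sleep(3600)  # consumers just poll the feed, looking for changes
```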

RSS certainly lives up to its nickname (Really Simple Syndication), offering a really simple way to serve up new data, and that to me makes it a useful thing for making quick and dirty prototypes like this one. It’s not a streaming API or a fancy push notification service, but it gets the job done, and if you log in to your Tumblr Dashboard, please feel free to follow it. You’ll be presented with 10-20 newly photographed objects from our collection each day.

UPDATE:

So this happened: https://twitter.com/recentlydigital

Happy Staff = Happy Visitors: Improving Back-of-House Interfaces

“You have to make the back of the fence that people won’t see look just as beautiful as the front, just like a great carpenter would make the back of a chest of drawers … Even though others won’t see it, you will know it’s there, and that will make you more proud of your design.”

—Steve Jobs

In my last post I talked about improvements to online ticketing based on observations made in the first weeks after launching the Pen.

Today’s post is about an important internal tool: the registration station whose job is to pair a new ticket with a new pen. Though visitors will never see this interface, it’s really important that it be simple, easy, clear, and fast. It is also critical that staff are able to understand the feedback from this app because if a pen is incorrectly paired with a ticket then the visitor’s data (collections and creations) will be lost.

Like a Steve-Jobs-approved iPod or a Van Cleef & Arpels ruby brooch, the “inside” of our system should be as carefully and thoughtfully designed as the outside.


Version 1 of the app was functional but cluttered, with too much text, and no clear point of focus for the eye.

Because the first version of the app was built to be procedurally functional, its visual design was given little consideration. However, the application as a whole was designed so that the user interface – running in a web browser – was completely separate from the underlying pen pairing functionality, which makes updating the front-end a relatively straightforward task.
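
That separation looks something like the sketch below: the pairing logic sits behind a small local HTTP service, and the tablet’s browser UI is just HTML and JavaScript calling it. This is a hypothetical Flask stand-in, not our actual code.

```python
# A hypothetical sketch of the UI/logic separation described above.
# The browser front-end can be redesigned freely as long as this
# little service keeps its contract.

from flask import Flask, jsonify, request

app = Flask(__name__)

def pair_pen_with_ticket(ticket_id):
    """Stand-in for the real NFC pairing routine."""
    return "ABCD"  # the shortcode staff confirm on screen

@app.route("/pair", methods=["POST"])
def pair():
    ticket_id = request.get_json()["ticket_id"]
    return jsonify({"status": "success",
                    "shortcode": pair_pen_with_ticket(ticket_id)})

if __name__ == "__main__":
    app.run(port=8080)  # the tablet's browser points at this service
```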

Also, we were getting a few complaints from visitors who returned home eager to see their visit diary, only to find that their custom URL contained no data. We suspected this could have been a result of the poor UI at ticketing.

With this in mind, I sat behind the desk to observe our staff in action with real customers. I did about three sessions, for about ten minutes each, sometimes during heavy visitor traffic and sometimes during light traffic. Here’s what I kept an eye on while observing:

  • How many actions are required per transaction? Is there any way to minimize the number of “clicks” (in this case, “taps”) required from staff?
  • Is the visual feedback clear enough to be understood with only partial attention? Or do typography, colors, and composition require an operator’s full attention to understand what’s going on?
  • What extraneous information can we minimize or omit?
  • What’s the critical information we should enlarge or emphasize?

After observing, I tried my hand at the app myself. This was actually more edifying than doing observations. Kathleen, our head of Visitor Services, had a batch of about 30 Pens to pair for a group, and I offered to help. I was very slow with the app, so I wasn’t really of much help, moving through my batch of pens at about half the speed of Kathleen’s staff.

Some readers may be thinking that since the desk staff had adjusted to a less-than-excellent visual design and were already moving pretty fast with it, this could be a reason not to improve it. As designers, we should always be helping and improving. Nobody should have to live with a crappy interface, even if they’ve adjusted to it! And there will be new staff, who will get to skip the adjustment process and start on the right foot with a better-designed tool.

My struggle to use the app was fuel for its redesign, which you can see germinating in my drawings below.


After several rounds of paper sketches like these, the desk reps and I decided on this sequence as the starting point for version two of the app.

These were the last in a series of drawings that I worked through with the desk staff. So our first few “iterative prototypes” were created and improved upon in a matter of minutes, since they were simply scribbled on paper. We arrived at the above stopping point, which Sam turned into working code.

Here’s what’s new in version 2:

  • The most important information—the alphanumeric shortcode—is emphasized. The font is about 6 or 7 times bigger, with exaggerated spacing and lots of padding (white space) on all sides for increased legibility. Or as I like to call it, “glanceability.” This helps make sure that the front-of-house staff pair the correct Pen with the correct ticket.
  • Fewer words. For example, “Check Out Pen With This Shortcode” changed to “GO”, “Pen has been successfully checked out and written with shortcode ABCD” changed to “Success,” etc. This makes it easier for staff to know, quickly, that the process has worked and they can move on to the next ticket/pen/customer.

“I didn’t have time to write a short letter, so I wrote a long one instead.”

—Mark Twain

  • More accurate words. Our team uses a different vernacular from the people working at the desk. This is normal, since we don’t work together often, and like any neighboring tribes, we’ve developed subtly different words for different things. Since this app is used by desk staff, I wanted it to reflect their language, not ours. For example, “Pair” is what they call “check-out” and “Return” is what they call “check-in.”
  • Better visual hierarchy: The original app had many competing horizontal bands of content, with no clear visual cue as to which band needed the operator’s attention at any given time. We used white space, color (green/yellow/red for go/wait/stop), and re-arranging of elements (less-used features to the bottom, more-used features to the top) to better direct the eye and make it clear to the user what she ought to be looking at.
  • Simple animations to help the user understand when the app is “working” and they should just wait.

Still to come are added features (bulk pairing, maintenance mode) and any ideas the desk reps might develop after a couple of weeks of using the new version.

Imagine how difficult this process would have been if the museum had outsourced all of its design and programming work, or if it were all encased in a proprietary system.