Category Archives: Backends

Object Phone: The continued evolution of a little chatbot

Object Phone is a project that started small, took less than a day to code, and consisted of about a page of code. Initially it was just an experiment–a way for me to explore a new interface to our API. Object Phone allowed users to call or text objects in our collection, and receive some kind of response. It was met with mild fanfare.

Next, I was curious about using Object Phone in our galleries. I looked towards developing some better audio content, and we decided to produce a short audio tour of the David Adjaye Selects exhibit. It was somewhat cumbersome to use but an interesting experiment and one of my first “in-gallery beta-tests.” Needless to say, I tried to be as clear as possible that this was an “experiment.”

Later I started thinking about the broader uses for a system like Object Phone. Could it replace an expensive audio guide? Could it be used as an accessibility device? I started to think of many possible uses for the platform, and started to rewrite the code to support multiple outputs. In a way, I was thinking about the code for Object Phone as a mini framework for building voice and text based interactions with our content.

All of this got put on the back burner for a while. Object Phone is after all my little side project. Something I come back to when I need to center myself and let my brain think through a few problems. It’s very much a project I meditate on when I need to do that kind of thing.

About six months later I started playing with the code again. I realized it was pretty trivial to deliver images via MMS using Twilio’s API, and I had also started to notice that MMS worked pretty nicely on devices like an Apple Watch and looked pretty good in the notification screen on my iPhone. All of a sudden it was kind of fun again to receive texts from Object Phone. So, I set up a subscription service.
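For the curious, here is a rough sketch of what sending an image over MMS through Twilio’s REST API looks like. The credentials, phone numbers, and image URL below are all placeholders, and a real deployment would more likely use Twilio’s helper library than raw HTTP:

```python
import base64
import urllib.parse
import urllib.request

# Placeholder credentials -- a real deployment reads these from config.
TWILIO_SID = "ACXXXXXXXXXXXXXXXX"
TWILIO_TOKEN = "your_auth_token"
API_URL = (
    "https://api.twilio.com/2010-04-01/Accounts/%s/Messages.json" % TWILIO_SID
)

def build_mms(to_number, from_number, body, image_url):
    """Assemble the form parameters for Twilio's Messages endpoint.

    The MediaUrl parameter is what turns a plain SMS into an MMS
    carrying an image."""
    return {
        "To": to_number,
        "From": from_number,
        "Body": body,
        "MediaUrl": image_url,
    }

def send_mms(params):
    # Twilio authenticates with HTTP basic auth (account SID + token).
    data = urllib.parse.urlencode(params).encode("utf-8")
    request = urllib.request.Request(API_URL, data=data)
    token = base64.b64encode(
        ("%s:%s" % (TWILIO_SID, TWILIO_TOKEN)).encode("utf-8")
    ).decode("ascii")
    request.add_header("Authorization", "Basic " + token)
    return urllib.request.urlopen(request)

params = build_mms(
    "+15551234567",
    "+15557654321",
    "Sidewall, 1925. From the Cooper Hewitt collection.",
    "https://example.org/objects/sidewall.jpg",  # hypothetical image URL
)
```

The same payload, minus `MediaUrl`, sends an ordinary text message, which is why supporting both from one code path is so cheap.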

Inspired by a few chatty SMS-based apps out there, like Poncho and The Edit, I built a simple subscription service that would send random objects and images to subscribers once a day at noon. Again, I set this up quickly, sent out a request for some people to try it out, and the realizations started rolling in.

Object Phone is getting some upgrades. Feature requests welcome.

A photo posted by Micah Walter (@micahwalter)

The main realization I had was that Object Phone had just become a chatbot. To be clear, Object Phone has technically always been a chatbot. You send it messages, and it replies with some response. But now that it sends you something periodically based on your preferences (currently just the preference that you want to continue receiving messages) it seems more like a real chatbot. More importantly, this experiment has started to make me “think” of Object Phone as a chatbot–something I should have likely realized from the start.

I also realized that Object Phone’s chattiness happens in multiple directions. It indeed chats with its subscribers. It can send you messages once a day, and it can reply to your requests for info about objects with ease. But, I also added a back end feature which follows this same line of thinking. If a user sends Object Phone a message that it doesn’t understand, Object Phone asks me for some assistance. Here is the flow:

  1. A user messages Object Phone something like “Tell me about spanking cat.”
  2. Object Phone isn’t smart enough yet to decipher the message.
  3. Object Phone replies “OK, I don’t really understand what you are saying but I’ll ask around and get back to you.”
  4. Object Phone then sends our Cooper Hewitt Slack channel a message.
  5. The Slack message contains the user’s phone number, their message, and a link to an admin page where the operator can reply directly to the user.
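Under the hood, step 4 can be as simple as posting to a Slack incoming webhook. The sketch below is illustrative only; the webhook URL and admin-page URL are placeholders, not our actual endpoints:

```python
import json
import urllib.request

# Placeholder endpoints -- not our real webhook or admin URLs.
SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXXXXXX"
ADMIN_URL = "https://objectphone.example.org/admin/reply"

def escalation_message(phone_number, text):
    """Build the Slack payload for a message Object Phone couldn't parse.

    It includes the sender's number, what they said, and a link to an
    admin page where a human operator can reply directly."""
    return {
        "text": (
            'Object Phone needs a little assistance. %s said: "%s"\n'
            "Reply here: %s?to=%s" % (phone_number, text, ADMIN_URL, phone_number)
        )
    }

def post_to_slack(payload):
    # Slack incoming webhooks accept a JSON body via a plain HTTP POST.
    request = urllib.request.Request(
        SLACK_WEBHOOK,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(request)
```

Because the payload carries everything a staff member needs to respond, the bot itself never has to get any smarter for this feature to work.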

A Slack Channel where Object Phone can tell our staff when it needs a little assistance.


An Object Phone admin page where our staff can reply directly to users

All of a sudden, Object Phone is a conduit between Cooper Hewitt staff and its visitors. It’s talking directly to visitors, but also relaying messages back and forth to more knowledgeable staff when it needs assistance.

What the cool kids are doing

Conversational user experiences are all the rage right now. Facebook has recently opened up its Messenger platform and API to developers, which means anyone can build a simple chatbot on Facebook and reach all their followers with ease. Many other messaging services have open APIs as well: WeChat, LINE, WhatsApp and Slack are just a few examples.


Screenshot of the CNN chatbot for Facebook Messenger

It’s pretty clear that messaging apps are increasing in popularity, with users spending much of their day talking on platforms like Snapchat rather than thumbing through their Facebook feeds. Apple, too, has followed suit, announcing a much-upgraded Messages app in its latest update to iOS.

Chatbots have also become much more sophisticated, with huge advancements in Natural Language Processing and Natural Language Understanding. There is now a wealth of information and publicly available code and APIs out there, making it easier than ever to spin up a pretty intelligent chatbot with little overhead.

The Future of Object Phone

My next steps are to make Object Phone more intelligent. It should be able to learn about your tastes and preferences. If you only want to receive objects from our Textiles department, you should be able to say so. If you want to get your daily update at 5am, you should be able to just tell it that.

More importantly, you should be able to interact with more than just objects. Users should be able to find out general info about our museum. Are we open today? How do I get to Cooper Hewitt? Can I buy tickets right here, right now?

Lastly, I’d love to see Object Phone make its way onto the platform of your choice. I think this is a critical next step. SMS is great, and available to nearly anyone with a cell phone, but apps like FB Messenger, WhatsApp, and LINE have the ability to connect a service like Object Phone with a captive audience, all over the world.

I think institutions like museums have a great opportunity in the chatbot space. If anything, it represents a new way to broaden our reach and connect with people on the platforms they are already using. What’s more interesting to me is that chatbots themselves represent a way to interact with people that is, by its very nature, bi-directional. It presents us with the challenge of conversation, and forces us to listen to our constituents in a close and connected way. We should already be doing this.

If you’d like to participate in testing out Object Phone, please go to http://objectphone.cooperhewitt.org and sign up. You will receive an object every day at 12pm EST until you reply STOP.

Museums and the Web Conference Recap: Administrative Tools at Cooper Hewitt

The Labs team had a great time at Museums and the Web this year. We published two papers for the conference and presented them both to the audience of cultural heritage thinkers, makers, planners and administrators. Sam Brenner and I shared our paper, “Winning (and losing) hearts and minds of museum staff: Administrative interfaces at Cooper Hewitt,” which outlines the process of designing, developing and iterating two in-house built, staff-facing tools: Tagatron and the Pen Pairing Station. Both administrative tools are essential aides to staff managing new responsibilities associated with visitor-facing gallery technologies.

Here is the deck from our presentation:


Administrative interfaces at Cooper Hewitt

Introduction

  • Cooper Hewitt, Smithsonian Design Museum. New York, New York.
  • Our strategy around presenting design is to expose process—how things are made, how they are conceived, how they are designed.
  • This presentation will speak to our philosophy of openness around design process in sharing part of the back-story of how our current visitor-facing experience came together and how it’s maintained.


Visitor Interfaces

  • The visitor-facing technologies in the museum, introduced in 2014, invite new forms of engagement with the Cooper Hewitt collection. They encourage active participation, letting visitors play, design and collect through multi-touch table applications and the Pen.
  • Before we were able to re-design the visitor’s relationship to the museum we went through comprehensive changes at every level.


Comprehensive Re-design / Institutional Shift

  • We began a restoration of the mansion, stripping it down to its Carnegie steel girders.
  • To a similar degree we rethought the organizational infrastructure of Cooper Hewitt with a comprehensive re-design of operations, workflows and responsibilities.


New Responsibilities (for Everyone)

  • There were new jobs created to support the new visitor experience, including that of our Gallery Technology Manager, Mary Fe, whose responsibilities include maintaining the Pens and troubleshooting touch tables and gallery interactives.
  • The re-design affects every staff member at Cooper Hewitt:
  • Registrars: aggressive timetable to enter data
  • Security: understand the mission and visitor experience; teach visitors how to use the Pen
  • Exhibitions: label programming, maintenance
  • Curators: tags, relations, chat formatting for length
  • Visitor services: pen pairing – whole new step in between “welcome” and ticket sale
  • Before we got to this stage there was the task of onboarding staff to new responsibilities, which fell largely to the Digital & Emerging Media department. With the allocation of new responsibilities also came the opportunity to create tools that could facilitate some of the work.


Defining the Need for Considered Interfaces

  • Why did we decide that new interfaces were necessary in certain parts of the workflow?
  • We started with observation, watching workflows as they emerged. We created tools to assist where necessary. The need for interfaces was in part logistical, in part technical and also in part human.
  • Candidates for interface development are parts of the new digital ecosystem where there is:
  • High volume of data
  • Large number of users
  • Complex tasks
  • Something that needs constraints or enforcement
  • Example: the job of assigning tags and related objects to everything we put on display for the reopening. The touch table interfaces utilize tag and related object information. This data does not live in TMS, so it is housed in a custom database.
  • The task of creating the data fell to the curators. Originally this was stored in Excel files. While the curators were happy using spreadsheets, we identified a few major issues with them. The biggest was that every department had devised its own schema for storing the data, which would ultimately have to be reconciled.
  • This example fits all of the criteria above.


Case Study 1: Tagatron

  • Explicit purpose of the Tagatron tool: make the work quicker; make the metadata consistent; make the organization of the metadata consistent
  • Making this tool highlighted for the digital team the complex relationship between the work, the tool, and the people responsible for each—even though we believed the tool made things easier, it had its own set of ongoing technical and usability issues
  • We found that those issues bred a degree of distrust, or a lack of confidence, in the larger project. Some of this was due to bugs in the tool, but some of it was simply that the work was now known to be “enforced,” or taken more seriously, which made users uncomfortable.
  • Key idea: the interface takes on a symbolic value in representing “new responsibilities” and can bring about issues that it might not have been designed to address. It takes on a complex position between human needs and technical needs.


Tagatron (continued)

  • These graphs illustrate how prolific the task of tagging and relating objects is. It was important to build Tagatron because it is a crucial tool in the ongoing operation of the museum’s digital experience. More so than the spreadsheets ever could, it allows for scalability.
  • Since the re-opening the tool went through one major design and backend overhaul, and continues to see small iterations.


Case Study 2: Pen Pairing Stations

  • Context of Pen Pairing: Every visitor to the museum receives a Pen. At the museum’s front desk each Pen is paired with a unique admission ticket. Every ticket has a shortcode identifier that allows visitors to retrieve their Pen visit data online when they enter the code on their ticket.
  • Pen pairing is done at a very critical point in the visitor experience when the interaction needs to be quick and frictionless. Visitor Services Associates have to coordinate a number of simultaneous tasks.

Pen Pairing Station (continued)

  • This video depicts the Pen pairing process behind the front desk. It documents the first version of the Pen Pairing application, and shows the exposed Pen-reading circuit board before housing was built.
  • Pen pairing is one of the most demanding of the new responsibilities created by the “new experience”–it has to fit between welcoming a visitor, taking their money, answering any questions, and looking up their member ID.
  • Each use of the tool lasts only 5-10 seconds, but we’ve spent many hours and built many versions of this tool to figure out exactly what needs to happen in that time to accomplish all the tasks, including updating databases, handling failures, and managing serial communication.
  • Every one of these iterations gave us an opportunity to be connected to the staff using the tools, not only to make something that works better, but to be a part of the conversation.


Administrative Interfaces: What does success look like? How does it feel?

  • In informal interviews with Tagatron users we found trust to be a central theme of users’ response to the interface
  • Since Tagatron augments the curators’ use of TMS, they were less trusting of its database as a long-lasting data repository
  • Improving user feedback (like confirmation messages) helped build trust in the interface
  • Bill Moggridge, Designing Interaction: designing interaction is designing the relationship between people and things
  • We came to realize the responsibility of designing interfaces—validating and responding to users’ concerns; acknowledging the burden of new responsibilities
  • Administrative interfaces at the crux of the staff relationship to the new Cooper Hewitt experience
  • Anticipating issues in developing and maintaining administrative interfaces (when success feels like failure):
  • First, the human factor: being open to the feedback and creating an environment where the channels exist to communicate staff thoughts on the tool.
  • Second, the technical factor: being able to act on what you hear from staff and make the required changes to complete the feedback loop.
  • Our responsibility as facilitators of technology in the museum to hear and act on concerns.


Questions to ask when starting an administrative application, in order to anticipate issues and accommodate feedback.

Question 1: To what degree should the (administrative) tool fit with pre-existing notions?

  • This question addresses the need to understand contextual use of the tool
  • Tagatron: curatorial culture around spreadsheets and TMS
  • Pen Pairing Station: this tool disrupted the expected ticket-selling workflow. We learned that the tool needed to make Pen pairing as unobtrusive as possible


Question 2: How much of the underlying technology should come through to the interface?

  • Infrastructure & interfaces are layers of an onion—the best mental model for a tool’s interface might not reflect the best technical model for its back end
  • Tagatron: the filtering tools were a reflection of how data was stored in the database, not how curators expected it
  • Pen Pairing Station: error messages from all parts of the application stack came through to the user unaltered, which was not helpful to users
  • Highlights the need for a technical solution that allows for flexibility in the middle, “translation layer” of an application


Question 3: What kinds of feedback does the tool provide?

  • Feedback is the voice of the interface, its personality: is it finicky or reliable? Annoying or supportive?
  • Tagatron: missing feedback created distrust
  • Pen Pairing: too much feedback caused confusion (error messages, validation handshake)
  • Our design and production methodology: working code always wins; learning through doing; build small, working prototypes and continually iterate.
  • A more anticipatory form of design (like design thinking) could have helped us find answers to this question sooner


Question 4: Is it an appropriate time for experimentation?

  • Tagatron’s v1 included technology that was relatively unknown to us, like MongoDB and Node.js. We should have used more familiar technology or done small-scale tests before implementing a project of this scale–it severely hindered our ability to accommodate feedback
  • Other tools we built that involved experimental tech were only successful because their scale and userbase were far smaller (label writer)


The result of everything: bridges, lines of communication opened

  • Building administrative tools for staff created cross-departmental conversation—in taking on the role of building and maintaining Tagatron and the Pen Pairing Station, the Digital & Emerging Media team engaged users from departments across the museum and observed closely how the tools fit into staff members’ larger roles

A Very Happy & Open Birthday for the Pen


Today marks the first birthday of our beloved Pen. It’s been an amazing year, filled with many iterations, updates, and above all, visits! Today is a celebration of the Pen, but also of all of our amazing partners whose continued support has helped to make the Pen a reality. So I’d like to start with a special thank you first and foremost to Bloomberg Philanthropies for their generous support of our vision from the start, and to all of our team partners at Sistel Networks, GE, Undercurrent, Local Projects, and Tellart.

Updates

Over the course of the past year, we’ve been hard at work, making the Pen Experience at Cooper Hewitt the best it can be. Right after we launched the Pen, we immediately realized there was quite a bit of work to do behind the scenes so that our Visitor Experience staff could better deal with deploying the Pen, and so that our visitors have the best experience possible.

Here are some highlights:

Redesigning post-purchase touchpoints – We quickly realized that our ticket purchase flow needed to be better. This article goes over how we tried to make improvements so that visitors would have a more streamlined experience at the Visitor Experience desk and afterwards.

Exporting your visits – The idea of “downloading” your data seemed like an obvious necessity. It’s always nice to be able to “get all your stuff.” Aaron built a download tool that archives all the things you collected or created and packages it in a nice browser friendly format. (Affectionately known as parallel-visit)

Improving Back-of-House Interactions – We spent a lot of time behind the visitor services desk trying to understand where the pain points were. This is an ongoing effort, which we have iterated on numerous times over the year, but this post recounts the first major change we made, and it made all the difference.

Collecting all the things – We realized pretty quickly that visitors might want to extend their experience after they’ve visited, or more simply, save things on our website. So we added the idea of a “shoebox” so that visitors to our website could save objects, just as if they had a Pen and were in our galleries.

Label Writer – In order to deploy and rotate new exhibitions and objects, Sam built an Android-based application that allows our exhibition staff to easily program our NFC based wall labels. This tool means any staff member can walk around with an Android device and reprogram any wall label using our API. Cool!

Improving visitor information with paper – Onboarding new visitors is a critical component. We’ve since iterated on this design, but the basic concept is still there–hand out postcards with visual information about how to use the Pen. It works.

Visual consistency – This has more to do with our collection’s website, but it applies to the Pen as well, in that it helps maintain a consistent look and feel for our visitors during their post visit. This was a major overhaul of the collections website that we think makes things much easier to understand and helps provide a more cohesive experience across all our digital and physical platforms.

Iterating the Post-Visit Experience – Another major improvement to our post-visit end of things. We changed the basic ticket design so that visitors would be more likely to find their way to their stuff, and we redesigned what it looks like when they get there.

Press and hold to save your visit – This is another experimental deployment where we are trying to find out if a new component of our visitor experience is helpful or confusing.

On Exhibitions and Iterations – Sam summarizes the rollout of a major exhibition and the changes we’ve had to make in order to cope with a complex exhibition.

Curating Exhibition Video for Digital Platforms – Lisa makes her Labs debut with this excellent article on how we are changing our video production workflow and what that means when someone collects an object in our galleries that contains video content.

The Big Numbers

Back in August we published some initial numbers. Here are the high level updates.

Here are some of the numbers we reported in August 2015:

  • March 10 to August 10 total number of times the Pen has been distributed – 62,015
  • March 10 to August 10 total objects collected – 1,394,030
  • March 10 to August 10 total visitor-made designs saved – 54,029
  • March 10 to August 10 mean zero collection rate – 26.7%
  • March 10 to August 10 mean time on campus – 99.56 minutes
  • March 10 to August 10 post visit website retrieval rate – 33.8%

And here are the latest numbers from March 10, 2015 through March 9, 2016

  • March 10, 2015 to March 9, 2016 total number of times the Pen has been distributed – 154,812
  • March 10, 2015 to March 9, 2016 total objects collected – 3,972,359
  • March 10, 2015 to March 9, 2016 total visitor-made designs saved – 122,655
  • March 10, 2015 to March 9, 2016 mean zero collection rate – 23.8%
  • March 10, 2015 to March 9, 2016 mean time on campus – 110.63 minutes
  • Feb 25, 2016 to March 9, 2016 post visit website retrieval rate – 28.02%

That last number is interesting. A few weeks ago we added some new code to our backend system to better track this data point. Previously we had relied on Google Analytics to tell us what percentage of visitors access their post-visit website, but we found this to be pretty inaccurate. It didn’t account for multiple accesses to the same visit by different users (think social sharing of a visit), and so the number was typically higher than what we thought reflected reality.

So, we are now tracking a visit page’s “first access” in code and storing that value as a timestamp. This means we now have a very accurate picture of our post visit website retrieval rate and we are also able to easily tell how much time there is between the beginning of a visit and the first access of the visit website–currently at about 1 day and 10 hours on average.
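The “first access” logic itself is tiny. The sketch below uses an in-memory dict where our real system writes a timestamp to a database, but the idea is the same: stamp the first view of a visit page, and leave the stamp untouched on every later view.

```python
import time

# Maps a visit id to its metadata. In production this would be a
# database row, not an in-memory dict.
visits = {}

def start_visit(visit_id):
    """Called when a Pen is paired and the visit begins."""
    visits[visit_id] = {"started_at": time.time(), "first_access": None}

def record_first_access(visit_id):
    """Stamp the first time a visit page is viewed.

    Later views -- a social share opened by someone else, say --
    leave the original timestamp untouched, so the retrieval rate
    counts each visit at most once."""
    visit = visits[visit_id]
    if visit["first_access"] is None:
        visit["first_access"] = time.time()
    return visit["first_access"]

def time_to_first_access(visit_id):
    """Seconds between the start of a visit and its first page view."""
    visit = visits[visit_id]
    if visit["first_access"] is None:
        return None
    return visit["first_access"] - visit["started_at"]
```

Averaging `time_to_first_access` across all visits is what yields the “about 1 day and 10 hours” figure mentioned above.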

The Pen generates a massive amount of data. So, we decided to publish some of the higher level statistics on a public webpage which you can always check in on at https://collection.cooperhewitt.org/stats. This page reports daily and includes a few basic stats including a list of the most popular objects of all time. Yes, it’s the staircase models. They’ve been the frontrunners since we launched.

Those staircase models!

As you can see, we are just about to hit the 4 million objects collected mark. This is pretty significant and it means that our visitors on average have used the Pen to collect 26 objects per visit.

But it’s hard to gain a real sense of what’s going on if you just look at the high-level numbers, so let’s track some things over time. Below is a chart that shows objects collected by day for the last year.


Objects collected by day since March 10, 2015

On the right you can easily see a big jump. This corresponds with the opening of the exhibition Beauty–Cooper Hewitt Design Triennial. It’s partly due to increased visitation following the opening, but what’s really going on here is a heavy use of object bundling. If you follow this blog, you’ll have recently read the post by Sam where he talks about the need to bundle many objects on one tag. This means that when a visitor taps his or her pen on a tag, they very often collect multiple objects. Beauty makes heavy use of this feature, bundling a dozen or so objects per tag in many cases and resulting in a dramatic increase in collected objects per day.

Pen checkouts per day since March 10, 2015

We can easily see that this is, in fact, what is happening if we look at our daily Pen checkouts. Here we see a reasonable increase in checkouts following the launch of Beauty, but it’s not nearly as dramatic as the increase in the number of objects being collected each day.


Immersion room creations by day since March 10, 2015

Above is a chart that shows how many designs were created in the immersion room each day over the past year. This, too, is directly connected to the number of visitors we have, but it’s interesting to see its volume over this period of time. The immersion room is one of our more popular interactive installations and it has been on view since we launched, so it’s not a big surprise that it has a pretty steady curve to it. Also, keep in mind that this only represents “things saved,” as we are not tracking the thousands of drawings that visitors make and walk away from.

We can slice and dice the Pen data all we want. I suppose we could take requests. But I have a better idea.

Open Data

Today we are opening up the Pen Data. This means a number of things, so listen closely.

  1. The data we are releasing is an anonymized and obfuscated version of some of the actual data.
  2. If you saved your visit to an account within thirty days of this post (and future data updates) we won’t include your data in this public release.
  3. This data is being licensed under Creative Commons – Attribution, Non-Commercial. This means a company can’t use this data for commercial purposes.
  4. The data we are releasing today is meant to be used in conjunction with our public domain collection metadata or our public API.
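To give a sense of what “anonymized and obfuscated” can mean in practice, here is a sketch of the general technique: salted one-way hashing of identifiers, with account-linked fields dropped. The field names and salt below are illustrative, not our actual schema:

```python
import hashlib

# Kept private, so published ids cannot be reversed or re-derived.
SALT = "replace-with-a-secret-salt"

def obfuscate_id(raw_id):
    """One-way hash of an internal visit or Pen id.

    Rows in the released data can still be correlated with each other
    (the same input always hashes the same way), but not traced back
    to a visitor."""
    digest = hashlib.sha256((SALT + raw_id).encode("utf-8"))
    return digest.hexdigest()[:16]

def anonymize_row(row):
    """Prepare one record for public release."""
    out = dict(row)
    out["visit_id"] = obfuscate_id(row["visit_id"])
    out.pop("account_id", None)  # drop anything tied to a user account
    return out
```

Note that the object ids pass through untouched, which is what lets the released data join cleanly against the public collection metadata and API.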

The data we are releasing is meant to facilitate the development of an understanding of Cooper Hewitt, its collection and its interactive experiences. The idea here is that designers, artists, researchers and data analysts will have easy access to the data generated by the Pen and will be able to analyze it and create data visualizations so that we can better understand the impact our in-gallery technology has on visitors.

We believe there is a lot more going on in our galleries than we might currently understand. Visitors are spending incredible amounts of time at our interactive tables, and have been using the Pen in ways we hadn’t originally thought of. For example, we know that some visitors (children especially) try to collect every single object on view. We call these our treasure hunters. We also know that a percentage of our visitors take a Pen and don’t use it to collect anything at all, though they tend to use the stylus end quite a bit. Through careful analysis of this kind of data, we believe that we will be able to begin to uncover new behavior patterns and aspects of “collecting” we haven’t yet discovered.

If you fit this category and are curious enough to take our data for a spin, please get in touch, we’d love to see what you create!

On Exhibitions and Iterations

Since reopening in December 2014, we’ve found that an upcoming exhibition opening is a big driver of iteration. The work involved in preparing an exhibition involves the whole museum and is one of the most coordinated and planned-out things we do, and because of this, new exhibitions push us to improve in a number of ways.

First, new exhibitions can highlight existing gaps or inefficiencies in our systems. Our tagging tool, for example, always sees a round of bug fixes or new features before an exhibition because it coincides with a time when it will see heavy use. Second, exhibitions present us with new technical challenges. Objects in the Heatherwick exhibition, for example, were displayed in the galleries grouped into “projects,” which is also how we wanted users to collect them with their Pens and view them on the website. To accomplish this we had to figure out a way that TMS, our collections management software, could store both the individual objects (for internal purposes) and the grouped projects (which would hold all the public-facing images and text), and figure out how to see that through to the website in a way that made registrars, curators and ourselves comfortable. Finally, a new exhibition can present an opportunity for experimentation. David Adjaye Selects gave us the opportunity to scale up Object Phone, a telephone-based riff on the audio guide, which originally started as a small, rough prototype.

Last week was the opening of our triennial exhibition “Beauty,” which similarly presented us with a number of technical challenges and opportunities to experiment. In this post I’ll share some of those challenges and the work we did to approach them.

Collecting Exhibition Text

Triennial’s wall text, with the collect icon in the lower-right corner

Since the beginning of the pen project we’ve been saying that the Pens don’t just have to collect objects. Aaron and Seb wrote in their paper on the project that “nothing would prevent the museum from allowing visitors to ‘collect’ individual designers, entire exhibitions or even architectural elements from the building itself in the future.” To that end, we’ve experimented with collecting shop items and decided that with the triennial we would allow visitors to collect exhibition text as well.

Exhibition text (in museum argot, “A-Panel” is the main text at the beginning of an exhibition and “B-Panels” are any additional texts you might find along the way) makes total sense as something that a visitor should be able to remember for later. It explains and contextualizes an exhibition’s goals, contents and organization. We’ve had the text on our collections website since we reopened, but it took a few clicks to get to from a visitor’s post-visit website. Now, the text will be right there alongside all of a visitor’s objects.

The exhibition text on a post-visit website

The open-ended part of this is what visitors will expect when they collect an “exhibition.” We installed the collection points with no helper text, i.e. it doesn’t say “press here to collect this exhibition’s text.” We think it’s clear that the crosshairs refer to the text, but one of our original ideas was that we could have a way for the visitor to automatically collect every object in the exhibition and I wonder if that might be the implied function of the text tag. We will have to observe and adapt accordingly on that point.

Videos Instead of Images

When we first added videos to our collections site, we found that the fastest way to accomplish what we needed was to use TMS for relating videos to objects but use custom software for the formatting and uploading of the videos. We generate four versions of every video file — subtitled and not subtitled at two resolutions each — which we use in the galleries, on the tables and on the website. One of the weaknesses of this pipeline is that because the videos don’t live in the usual asset repository the way all of our images do, the link between TMS and the actual file’s location is made by nothing more than a “magic string” and a bit of guesswork. This makes it difficult to work with the video records in TMS: users get no preview and it can be difficult to know which video ID refers to which specific video. All of this is something we’ll be taking another look at in the near future, but there is one small chunk of this problem we approached in advance of the Triennial: how to make our website show the video in place of the primary image if it would be more appropriate to do so.
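To make the fragility concrete, here is a hypothetical sketch of what a “magic string” linkage looks like: the four file locations are derived from the TMS video ID plus a naming convention, rather than recorded in an asset repository. The URL pattern, base host and variant names below are invented for illustration, not our actual scheme.

```python
# The four versions generated for every video: subtitled and not
# subtitled, at two resolutions each. All names are illustrative.
VARIANTS = [
    ("1080p", "subtitled"),
    ("1080p", "clean"),
    ("720p", "subtitled"),
    ("720p", "clean"),
]

def video_urls(video_id, base="https://videos.example.org"):
    """Build the four expected file URLs for one TMS video record."""
    return {
        f"{res}-{sub}": f"{base}/{video_id}/{video_id}-{res}-{sub}.mp4"
        for res, sub in VARIANTS
    }
```

If a file was ever uploaded under a slightly different name, nothing in TMS notices and the link silently breaks, which is exactly the guesswork problem described above.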

Here’s an example. Daniel Brown’s On Growth and Form is an animation on display in the Triennial. Before, it would have looked like this — the primary image is a still rendering that has been added in TMS, and the video appears as related content further down the page.

[Screenshot: the On Growth and Form object page, with the still rendering as the primary image]

What we did is to say if the object is itself a video, animation or other screen-based media and we have an associated video record linked to the object, remove the primary image and put the video there instead. That looks like this:

[Screenshot: the same object page with the video shown in place of the primary image]
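In code, the rule amounts to a single conditional. A minimal sketch with assumed field names (not our actual template logic):

```python
# If the object is itself screen-based media and has a linked video
# record, show the video in place of the primary image; otherwise fall
# back to the image. Field names here are assumptions for illustration.
SCREEN_BASED_TYPES = {"video", "animation", "screen-based media"}

def primary_media(obj):
    """Return ('video', record) or ('image', record) for the object page."""
    if obj.get("type") in SCREEN_BASED_TYPES and obj.get("videos"):
        return ("video", obj["videos"][0])
    return ("image", obj.get("primary_image"))
```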

Like all good iterations, this one opened up a bunch of next steps. First, we need to figure out how to add videos into our main digital asset pipeline so that the guesswork can be removed from picking primary videos and a curator or image specialist can select it as “primary” the same way they would do with an image. Next, it brought up an item that’s been on the backburner for a while, which is a better way to display alternate images of an object. Currently, they have their own page, which gets the job done, but it would be nice to present some alternate views on the main object page as well.

Just a Reflektor Sandbox

It’s fun!

We had a great opportunity to do some experimentation on our collections site due to the inclusion of Aaron Koblin and Vincent Morisset’s interactive video for Arcade Fire’s Just a Reflektor. The project’s source code is already available online and contains a “sandbox” environment, a tool that demonstrates some of the interactive visual effects created for the music video in a fun, open-ended environment. We were able to quickly adapt the sandbox’s source code to fit on our collections site so that visitors who collect the video with their Pen will be able to explore a more barebones version of the final interactive piece. You can check that out here.

Fully Loaded Labels

When we were working on the Pen prototypes, we tried six different NFC tags before getting to the one that met all of our requirements. We ended up with these NTAG203 tags whose combination of size and antenna design made them work well with our Pens and our wall labels. Their onboard memory of 144 bytes, combined with the system we devised for encoding collection data on them, meant that we could store a maximum of 11 objects on a tag. Of course we didn’t see that ever being a problem… until it was. The labels in the triennial exhibition are grouped by designer, not by object, and in some cases we have 35 objects from a designer on display that all need to be collected with one Pen press. There were two solutions: find tags with more memory (aka “throw more hardware at it”) or figure out a new way to encode the tags using fewer bytes and update the codebase to support both the new and old ways (aka “maintenance nightmare”). Fortunately for us, the NTAG216 series of tags, which feature 888 bytes of memory (enough for around 70 objects on a tag), have become more commonly available in the past year. After a few rounds of end-to-end testing (writing the tag, collecting it with a Pen and having it show up on the post-visit website), we rolled the new tags out to the galleries for the dozen or so “high capacity” labels.
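The figures above imply a per-object cost of roughly 13 bytes, though the real encoding scheme has more to it than this back-of-the-envelope arithmetic:

```python
def max_objects(tag_bytes, bytes_per_object=13):
    """Rough tag capacity, assuming a fixed ~13-byte per-object cost
    (an estimate implied by the figures above, not the real scheme)."""
    return tag_bytes // bytes_per_object

# NTAG203 vs. NTAG216:
max_objects(144)  # 11
max_objects(888)  # 68, i.e. "around 70"
```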

The new tag (smaller, on the left) and the old tag (right)

The most interesting iteration that’s been made overall, I think, is how our exhibition workflow has changed over time to accommodate the Pen. With each new exhibition, we take what sneaked up on us the last time and try to anticipate it. As the most recent exhibition, Beauty’s timeline included more digitally-focused milestones from the outset than any other exhibition yet. Not only did this allow us to anticipate the tag capacity issue many months in advance, but it also gave us more time to double check and fix small problems in the days before opening and gave us more time to try new, experimental approaches to the collections website and post-visit experience. We’re all excited to keep this momentum going as work ramps up on the next exhibitions!

 

Press and hold to save your visit

During the development of a major project it’s inevitable that certain features just won’t make it into production in time for launch. Sometimes things fall out of scope, or they get left off the priority list and put on the back burner. Hopefully these features are not critical to the project, but invariably they were good ideas, and in many cases should warrant a revisit sometime down the road, when the dust has settled.

At Cooper Hewitt, one such “feature” was affectionately known as the “transfer stations.” The basic idea was that throughout the museum there would be small kiosk-like stations where visitors could tap their pens and “transfer” the data from their pen to our database, so that they could immediately see their collections from a mobile phone.

It was a nice idea, and we began to implement it, collecting specs and designing the steel stands that would support the kiosks. Eventually the parts showed up, but by that time, we were pretty knee deep in launching the Pen and the museum, so the transfer station idea got set aside.

Fast forward to today, nearly a year since the Pen has been in visitors’ hands, and we’ve been thinking about how we can better onboard our visitors, and how we can remind them that there is something to do after they leave the museum. It’s a complex problem that we’ve tried to address in several ways.

  1. Visitors arriving at the museum typically don’t know anything about the Pen. At our Visitor Experience desk, our staff are trained to quickly teach each visitor what they can do with the Pen, while in the background processing their orders and “pairing” each Pen to a ticket. It’s a critical part of the process and one we’ve spent a good deal of time optimizing.
  2. While visitors are waiting in line, there is an opportunity to help people learn about the Pen. We have a looping video playing in that spot that tries to do this job visually, and additionally, we have small postcards available that explain things further.
  3. On the way out the door, visitors are reminded that they should hold on to their tickets. This is supposed to happen at the door, in the moment where they are returning their pen.

There are lots of other visual cues and verbal reminders happening while you walk through the galleries, but no matter what, we find ticket stubs left behind. We know from our data that lots of our visitors are checking out their websites after their visits, but maybe we can do better. Also, part of the whole concept behind the Pen is that you “can” look at your collection from your mobile phone right away–we should make that happen more seamlessly.

Technically speaking, the transfer stations have been in play all along. When you walk up to one of our interactive tables and “dock” your pen, we read all the data on your pen ( the things you’ve collected so far ) and store them in our database. So, if you just keep walking up to tables and docking your pen, you’d be able to visit your collection on your mobile phone–no problem. But this doesn’t really do a great job of reminding you, or even letting you know that it’s possible. The tables are about browsing the collection on the tables, and that’s pretty much what their UI describes.

Also, we’ve been using an early version of the transfer station behind the scenes to do a final dump of your pen after you’ve left. This is so that in case you collected objects and didn’t go to one of the tables, you’ll be okay.

All along though, a few of us have been a little skeptical of the function and design of the transfer stations. Will they just create confusion with the visitor? Are they even necessary? Should they have a responsive visual user interface? To get to the bottom of some of these questions, well, we need to birth something into the world and see how it goes.

To get started, we chose to deploy two transfer stations in two areas on the second floor. There was a good deal of work that needed to happen. The transfer station parts needed to be identified, assembled and configured. We’d need to set up their built in Raspberry Pi computers to behave properly, and we’d need to work through their connection to power and network within the galleries. Enter Mary Fe!  She is our Gallery Technologist, the person you might see performing maintenance on some part of the technology throughout the galleries the next time you visit Cooper Hewitt. Mary Fe is the person who shows up at 8am before the museum opens to make sure everything is working and looking good.

I asked Mary Fe to work on this project from start to finish, and she’s written up a little documentation on how things went. She says:

I was called in to *clone* the existing and fully working Register station. Each station consists of a Raspberry Pi mini computer, connected to our museum network over ethernet, and an NFC reader board designed by Sistel Networks that is able to download data from a Pen. The Raspberry Pi is mounted in the base of the extremely heavy stands you see in the photo below, and its corresponding NFC reader board is located at the top, behind the “plus” icon.

[Photo: a transfer station stand]

What we’d need

  • Raspberry Pi units programmed to save only ( vs. save and check in a pen )
  • Functioning data and power at the locations where we wish to deploy the stands
  • The stands
  • Easy to understand signage

The first part was pretty easy to accomplish. I began by cloning SD cards for use with the Raspberry Pi. These had to be configured to “save” Pen data, and not “check in” the Pen so that the visitor could continue their visit, saving as many times as they like. After cloning, we assigned new names and unique IPs to the Raspberry Pi stations.
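That “save only” vs. “save and check in” distinction is the one behavioral difference between a transfer station and a return station. A sketch of the idea (function and mode names are illustrative, not the actual station code):

```python
# Transfer stations persist a Pen's collected items but leave the Pen
# checked out, so the visitor can keep collecting and save again later.
# Return stations save and check the Pen in. Mode names are assumptions.
def handle_dock(pen_items, station_mode, store):
    """Save the Pen's items; check the Pen in only at a return station."""
    store.extend(pen_items)
    checked_in = station_mode == "return"
    return checked_in
```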

I ran into a little trouble when I started testing the data ports. Long story short, I had to learn how to tone/probe data ports from their locations in the galleries to their corresponding positions in the network closet. After a good deal of troubleshooting with our IT specialists in DC, the ports came to life and we assigned their IP addresses.

Once the ports were figured out and the Raspberry Pis were set up and configured, everything began to work. Right away I noticed visitors starting to use the transfer stations.

What’s next

We have more transfer stations waiting in the wings. However, as I mentioned above, deploying these two transfer stations is sort of an experiment. We want to watch and see how visitors react to them, if they cause more confusion, and how often they are getting used.

We’ve already been thinking of ways we might incorporate a screen to add a visual user interface to the stations. Perhaps a more guided experience would get visitors more involved with them. What kinds of problems will introducing a screen add to the device? Maybe we should think about e-ink, a touch screen, or a thermal printer? It’s hard to say at this point.

The next step is to collect some visitor feedback, look at the data, and start prototyping new ideas.

Mailchimp & Tessitura together at last


The short version is:

There is now an integration between Tessitura & Mailchimp. If you are a Tessitura Licensee, and have access to their BitBucket account, you can get it here.

So you can use this lovely Mailchimp Interface to create your emails…

[Screenshot: the Mailchimp email editor]

… and connect it with all of the power of Tessitura, through this easy to use tool.

[Screenshot: the integration tool]

The long version is:

It’s the last day of the Tessitura Learning & Community Conference, and I’m all checked out of my hotel, sitting in the conference hall, thinking about all of the things I’ve learned this week, and all of the people I’ve met.

So many of the people I’ve talked to have been asking about the Tessitura-Mailchimp Integration we launched in partnership with Mailchimp and JCA, Inc. this past week, and so I thought I’d write up a blog post to try and explain what it is, how you get it/use it/make it better, and more importantly, why we did it in the first place.

A long while ago, Cooper Hewitt had an enormous email list. Some 60K emails on one massive list powered by an e-marketing service that was clearly heading out of business. This giant list wasn’t working. We weren’t getting the results we thought we should, and what’s more, we had no way of measuring our success. So, we switched to Mailchimp. It was a pretty obvious choice. Mailchimp offered the museum an incredibly quick set up time, a beautiful user interface, and super clean, easy to use templates. What’s more, Mailchimp placed a lot of emphasis on “list quality” and advised us to put out an appeal to our current bloated list to do an “opt-in” and create a whole new list made up of real people, with valid email addresses, who actually wanted to receive mail from us.

The list dropped down. Way down. After a few “last chance” appeals, our 60K subscribers were whittled down to about 2500. This was challenging territory for many departments in the museum who, like those at almost every non-profit, had relied on the large numbers more for a sense of security than for their effectiveness.

But we pressed on, and noticed quickly that our open rates were dramatically high. Our click through rates were excellent, and it was clear that people were actually reading our emails, and acting on our calls to action. If you haven’t noticed by now, I’m trying to include as many marketing buzzwords in this post as possible. You know, due diligence and all.

This is a long way around to explain that we all started to fall in love with Mailchimp. Its ease of use and deep analytics and reporting tools were a huge win for the museum as a whole. Our list continues to grow and our “satisfaction” rating remains pretty steady on the high end. The staff seem to enjoy working in Mailchimp, especially following the recent redesign of the user interface.

One day along the way, the museum decided to implement Tessitura as our CRM ( constituent relationship management ) and ticketing system. It’s a super robust, enterprise class system that is sort of the swiss army knife for non-profits, performing arts centers, and more recently, museums.

In the long term strategic plan for Tessitura, it appeared as though we would have to ditch Mailchimp and move to one of the two providers that offer an integration with Tessitura. We looked at both of them, and while they both did the job at hand, neither of them offered the pleasant experience and incredible analytics tools that Mailchimp did. It would have been a tough sell to our staff to move them off something they clearly all had grown to love and on to a system that would probably work just fine, but not make their hearts any warmer.

So, we talked with Mailchimp. Mailchimp has a wide variety of third party integrations, and we started to converse about what an integration with Tessitura would look like. We all got really excited at the possibilities, and so once a small amount of funding was secured, we partnered with JCA, Inc. to build us something.

Mailchimp was really excited about the idea, and being a forward thinking tech company, they pushed us to make the integration free, and open source. This is something we strongly believe in at Cooper Hewitt as well, so we worked with the staff at Tessitura, and figured out a way to share the code within the Tessitura Network, so as not to violate any non-disclosure agreements. Things were starting to take shape.

So what will it do, and how does it work?


We tried to limit the scope of the project to the bare essentials. We wanted to stay within our budget, and build a simple tool that does what it says on the tin. The hope here is that Tessitura licensees will try it out, see that it’s a good thing, and run with it, adding features and customizing it to suit their needs. Open source goodness.

At the moment, the project is a pretty simple .NET application that anyone can install on a Windows machine that can talk to Tessitura and Mailchimp. You fill out some initial config information, and then schedule a nightly synchronization job. This allows Tessitura licensees to export their primary lists on a nightly basis into Mailchimp.

You can also perform synchronizations on an ad-hoc basis, meaning, any Tessitura user can easily create a segmented list in Tessitura for a specific purpose, and sync that list to Mailchimp for immediate sending.

This is a really nice feature, because it actually creates or updates a segment in Mailchimp. Rather than create many bespoke email lists, you can then just use a single master list in Mailchimp, and use the exported segments so you are only sending to the addresses you are interested in.
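For a sense of what that sync involves, here is a rough Python sketch of the request it would make against Mailchimp’s v3 API, which supports exactly this kind of “static segment” on a master list. The real integration is a .NET application; the data center, list ID and addresses below are placeholders.

```python
import json

def build_segment_request(dc, list_id, name, emails):
    """Build the URL and JSON body for creating a static segment that
    holds the addresses exported from a Tessitura list."""
    url = f"https://{dc}.api.mailchimp.com/3.0/lists/{list_id}/segments"
    body = json.dumps({"name": name, "static_segment": emails})
    return url, body

# Placeholder data center, list ID, segment name and address:
url, body = build_segment_request(
    "us1", "abc123", "Members 2015", ["visitor@example.com"])
```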

What it’s not

It’s important to understand that this is an open source tool and is provided “as is.” There is no support staff waiting to take your calls and answer your questions. This remains the responsibility of the Tessitura community.

As I mentioned, it’s a simple tool, and at the moment, it basically does the two functions I’ve outlined above. There is no syncing of analytics data back into Tessitura, for example. We really love the analytics tools built right into Mailchimp, and so for most this may not be a deal breaker. These are the kinds of features we hope will get added by the community down the road.

What it is, again.

It’s a super exciting thing for us to all think about! The Tessitura community really needs to take more control over the entire eco-system of third-party applications and extensions. Without a vested interest in building our own tools, open sourcing the work we are all doing, and joining in the conversation with regards to direction and strategy, the community will always be waiting on the next update from those vendors who have chosen to build products from the system.

How do I get it?

First, you need to be a Tessitura Network licensee. Then, you just need access to the Tessitura Network code sharing site on BitBucket. You can get this by sending an email to web_dev@tessituranetwork.com. Once you are there, you can go here and download the code, or the binaries to try it out on your system. The repository has a README with all the relevant info on how to install it from scratch, build from the source, and set things up in Mailchimp. If you don’t have this capability you can also download the compiled binaries and just try it out.

How do I contribute?

If you are a Tessitura Network licensee, and you’ve gotten this far, read the README to get the full picture on how to fork the code and contribute. For the time being, feel free to log issues and send feature requests, and I will do my best to follow up on them and help get them resolved. Eventually, though, we hope that someone within the community will pick up the torch and help us continue to develop what we think is a really valuable integration and option for the broader Tessitura community.

Reminder: First, you need to be a Tessitura Network licensee. Then, you just need access to the Tessitura Network code sharing site on BitBucket. You can get this by sending an email to web_dev@tessituranetwork.com.

Long live RSS


I just made a new Tumblr. It’s called “Recently Digitized Design.” It took me all of five minutes. I hope this blog post will take me all of ten.

But it’s actually kinda cool, and here’s why. Cooper Hewitt is in the midst of a mass digitization project where we will have digitized our entire collection of over 215K objects by mid to late next year. Wow! 215K objects. That’s impressive, especially when you consider that probably 5000 of those are buttons!

What’s more is that we now have a pretty decent “pipeline” up and running. This means that as objects are being digitized and added to our collections management system, they are automatically winding up on our collections website after winding their way through a pretty hefty series of processing tasks.

Over on the West Coast, Aaron felt the need to make a little RSS feed for these “recently digitized” objects so we could all easily watch the new things come in. RSS, which stands for “Rich Site Summary”, has been around forever, and many have said that it is now a dead technology.

Lately I’ve been really interested in the idea of microservices. I guess I never really thought of it this way, but an RSS or ATOM feed is kind of a microservice. Here’s a highlight from “Building Microservices” by Sam Newman that explains this idea in more detail.

Another approach is to try to use HTTP as a way of propagating events. ATOM is a REST-compliant specification that defines semantics ( among other things ) for publishing feeds of resources. Many client libraries exist that allow us to create and consume these feeds. So our customer service could just publish an event to such a feed when our customer service changes. Our consumers just poll the feed, looking for changes.

Taking this a bit further, I’ve been reading this blog post, which explains how one might turn around and publish RSS feeds through an existing API. It’s an interesting concept, and I can see us making use of it for something just like Recently Digitized Design. It sort of brings us back to the question of how we publish our content on the web in general.

In the case of Recently Digitized Design the RSS feed is our little microservice that any client can poll. We then use IFTTT as the client, and Tumblr as the output where we are publishing the new data every day. 

RSS certainly lives up to its nickname ( Really Simple Syndication ), offering a really simple way to serve up new data, and that to me makes it a useful thing for making quick and dirty prototypes like this one. It’s not a streaming API or a fancy push notification service, but it gets the job done, and if you log in to your Tumblr Dashboard, please feel free to follow it. You’ll be presented with 10-20 newly photographed objects from our collection each day.
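Under the hood, this whole arrangement is just a client polling a feed and remembering what it has seen. A sketch of that pattern, using a stand-in Atom feed (the real client here is IFTTT, and the real feed is the one Aaron built):

```python
import xml.etree.ElementTree as ET

# A stand-in Atom feed, in place of the real "recently digitized" feed.
FEED = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Recently Digitized Design</title>
  <entry><title>Button, ca. 1900</title><id>obj-1</id></entry>
  <entry><title>Poster, 1968</title><id>obj-2</id></entry>
</feed>"""

NS = {"atom": "http://www.w3.org/2005/Atom"}

def new_entries(feed_xml, seen_ids):
    """Return (title, id) pairs for entries the client hasn't seen yet."""
    root = ET.fromstring(feed_xml)
    return [
        (entry.find("atom:title", NS).text, entry.find("atom:id", NS).text)
        for entry in root.findall("atom:entry", NS)
        if entry.find("atom:id", NS).text not in seen_ids
    ]
```

Each time the client polls, it passes in the IDs it already published and only acts on whatever is new.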

UPDATE:

So this happened: http://twitter.com/recentlydigital

Happy Staff = Happy Visitors: Improving Back-of-House Interfaces

“You have to make the back of the fence that people won’t see look just as beautiful as the front, just like a great carpenter would make the back of a chest of drawers … Even though others won’t see it, you will know it’s there, and that will make you more proud of your design.”

—Steve Jobs

In my last post I talked about improvements to online ticketing based on observations made in the first weeks after launching the Pen.

Today’s post is about an important internal tool: the registration station whose job is to pair a new ticket with a new pen. Though visitors will never see this interface, it’s really important that it be simple, easy, clear, and fast. It is also critical that staff are able to understand the feedback from this app because if a pen is incorrectly paired with a ticket then the visitor’s data (collections and creations) will be lost.

Like a Steve-Jobs-approved iPod or a Van Cleef & Arpels ruby brooch, the “inside” of our system should be as carefully and thoughtfully designed as the outside.

Version 1 of the app was functional but cluttered, with too much text, and no clear point of focus for the eye.

Because the first version of the app was built to be procedurally functional, its visual design was given little consideration. However, the application as a whole was designed so that the user interface – running in a web browser – was completely separate from the underlying pen pairing functionality, which makes updating the front-end a relatively straightforward task.

Also, we were getting a few complaints from visitors who returned home eager to see their visit diary, and were disappointed to see that their custom URL contained no data. We suspected this could have been a result of the poor UI at ticketing.

With this in mind, I sat behind the desk to observe our staff in action with real customers. I did about three sessions, for about ten minutes each, sometimes during heavy visitor traffic and sometimes during light traffic. Here’s what I kept an eye on while observing:

  • How many actions are required per transaction? Is there any way to minimize the number of “clicks” (in this case, “taps”) required from staff?
  • Is the visual feedback clear enough to be understood with only partial attention? Or do typography, colors, and composition require an operator’s full attention to understand what’s going on?
  • What extraneous information can we minimize or omit?
  • What’s the critical information we should enlarge or emphasize?

After observing, I tried my hand at the app myself. This was actually more edifying than doing observations. Kathleen, our head of Visitor Services, had a batch of about 30 Pens to pair for a group, and I offered to help. I was very slow with the app, so I wasn’t really of much help, moving through my batch of pens at about half the speed of Kathleen’s staff.

Some readers may be thinking that since the desk staff had adjusted to a less-than-excellent visual design and were already moving pretty fast with it, this could be a reason not to improve it. As designers, we should always be helping and improving. Nobody should have to live with a crappy interface, even if they’ve adjusted to it! And, there will be new staff, and they will get to skip the adjustment process and start on the right foot with a better-designed tool.

My struggle to use the app was fuel for its redesign, which you can see germinating in my drawings below.

After several rounds of paper sketches like these, the desk reps and I decided on this sequence as the starting point for version two of the app.

These were the last in a series of drawings that I worked through with the desk staff. So our first few “iterative prototypes” were created and improved upon in a matter of minutes, since they were simply scribbled on paper. We arrived at the above stopping point, which Sam turned into working code.

Here’s what’s new in version 2:

  • The most important information—the alphanumeric shortcode—is emphasized. The font is about 6 or 7 times bigger, with exaggerated spacing and lots of padding (white space) on all sides for increased legibility. Or as I like to call it, “glanceability.” This helps make sure that the front-of-house staff pair the correct pen with the correct ticket.
  • Fewer words. For example, “Check Out Pen With This Shortcode” changed to “GO”, “Pen has been successfully checked out and written with shortcode ABCD” changed to “Success,” etc. This makes it easier for staff to know, quickly, that the process has worked and they can move on to the next ticket/pen/customer.

“I didn’t have time to write a short letter, so I wrote a long one instead.”
—Blaise Pascal (often attributed to Mark Twain)

  • More accurate words. Our team uses a different vernacular from the people working at the desk. This is normal, since we don’t work together often, and like any neighboring tribes, we’ve developed subtly different words for different things. Since this app is used by desk staff, I wanted it to reflect their language, not ours. For example, “Pair” is what they call “check-out” and “Return” is what they call “check-in.”
  • Better visual hierarchy: The original app had many competing horizontal bands of content, with no clear visual clue as to which band needed the operator’s attention at any given time. We used white space, color (green/yellow/red for go/wait/stop), and re-arranging of elements (less-used features to the bottom, more-used features to the top) to better direct the eye and make it clear to the user what she ought to be looking at.
  • Simple animations to help the user understand when the app is “working” and they should just wait.

Still to come are added features (bulk pairing, maintenance mode) and any ideas the desk reps might develop after a couple of weeks of using the new version.

Imagine how difficult this process would have been if the museum had outsourced all of its design and programming work, or if it were all encased in a proprietary system.

Understanding how the Pen interacts with the API

Detail of instructional postcard now available to museum visitors at entry to accompany The Pen.

The Pen has been up and running now for five weeks and the museum as a whole has been coming to terms with exactly what that means. Some things can be planned for, others can be hedged against, but inevitably there will be surprises – pleasant and unpleasant. We can report that our expectations of usage have been far exceeded, with extremely high take-up rates, over 400,000 ‘acts of collection’ (saving museum objects with the Pen), and a great post-visit log-in rate.

The Pen touches almost every operation of the museum – even though the museum was able to operate completely without it from our opening in December until March. At its most simple, object labels need NFC tags, which in turn need up-to-the-minute location information entered into our collection management system (TMS); the ticketing system needs a constant connection not only to its own servers but also to our API functions that create unique shortcodes for each visitor’s visit; and the Pens need regular cleaning and their monthly battery change. So everyone in the museum has been continuously improving and altering backend systems, improving workflows, and even the front-end UI on tablets that the ticket staff use to pair Pens with tickets.

It’s complex.
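One small piece of that machinery is the shortcode minted for each visit at ticketing. Our actual scheme isn’t described here; this is just a sketch of the idea, with an assumed length and an alphabet that drops easily confused characters:

```python
import secrets
import string

# Assumed 4-character codes drawn from an alphabet without look-alike
# characters (O/0, I/1/L); both choices are illustrative, not the
# museum's actual scheme.
ALPHABET = "".join(
    c for c in string.ascii_uppercase + string.digits if c not in "O0I1L"
)

def make_shortcode(length=4):
    """Mint a random, human-readable code identifying one visit."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```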

Katie drew up (another) useful diagram of the journey of a Pen through a visit and how it interacts with our API.

Single visit ‘lifecycle’ of The Pen. Illustration by Katie Shelly, 2015. [click to enlarge]

Even more details of the overall system design and development saga can be found in the (long) Museums and the Web 2015 paper by Chan & Cope.

The digital experience at Cooper Hewitt is supported by Bloomberg Philanthropies. The Pen is the result of a collaboration between Cooper Hewitt, SistelNetworks, GE, MakeSimply, Undercurrent, and an original concept by Local Projects with Diller Scofidio + Renfro.