Iterating the “Post-Visit Experience”

The final phase of a visitor’s experience at Cooper Hewitt, after they’ve left the museum, is what we call the “post-visit experience.” Introduced along with the Pen in March, it is a personalized website that displays a visitor’s interactions with the museum as a grid of images, including objects they collected from the galleries and wallpapers they created in the Immersion Room.

Our focus leading up to its launch was just to have it working, and as such, some of the details of a visitor’s experience with the application were overlooked. As a result, our theoretically simple interface became cluttered with extra buttons, calls to action and explanatory text. In this post, I’ll present the experience as it existed before and describe some of the steps we took in the past month to iterate on it.

The “Before” Experience


First, let’s walk through the experience as it existed up until this week. The post-visit experience begins when a visitor accesses their personal website, which they could do by going to a URL printed on their physical ticket. On the ticket above, that URL starts with the domain of our “URL shortener,” followed by /v/ to indicate a visit (the shortener also supports /o/ for objects and /p/ for people), followed by a four- or five-character alphanumeric code which we call the “shortcode.” If a visitor recognized this whole thing as a URL, they would get access to their visit. If they didn’t, they would hopefully go to our homepage and find the link that took them to the “visit shortcode page” seen below.

[Screenshot: the visit shortcode entry page]

From here, they would enter their shortcode and get their visit. A visit page contains a grid of images of everything the visitor collected and created during their visit to the museum, and it looked like this:

[Screenshot: a visit page, showing the grid of collected objects]

You will notice the unwieldy CTA. It’s big, it’s ugly and it gets in the way of what we’re all here to do, but this was our first opportunity to present the concept of “visit claiming” to the visitor. Visit claiming is the idea that your visit is initially anonymous, but you can create an account and claim it as your own. Let’s say the visitor engages the CTA and claims their visit. They are taken through a log in / sign up flow and return to their visit page, which has now been linked to their account.

After claiming a visit, the visitor has access to some new functionality. At the top of the page are the token share tools. Under every image now live privacy controls, in the form of a repeated paragraph. At the very bottom of the page are buttons to make everything public, export the visit and delete the visit.

[Screenshots: a claimed visit page, with share tools, per-image privacy controls and bulk action buttons]

What to Work On?

The goal for this work was to redesign the post-visit experience to put the visitor’s experience above all of our functional and technical requirements. At this point, we were all familiar with the many complex details along the way, so we met to discuss the end-to-end experience. Taking a step back and thinking in terms of expectations — both ours and the visitors’ — helped us rebuild the experience from the ground up. Feedback we had collected both anecdotally and through our online feedback form was helpful in this process. Once we had an idea of a visitor’s overall expectations of the post-visit experience, we were able to turn that into actionable tasks.

Step 1: Redesigning Visit Retrieval

[Flow chart: the visit retrieval process]

The first pain point we identified was the beginning of the experience: visit retrieval. Katie, our former Labs technologist, has written before about some of the ways we’ve tried to get visitors quickly up to speed on “how everything works” — the idea that you get a pen, you use the pen to collect objects, you go to a website and you get your objects. Her work focused on informational postcards and the introductory script used by the visitor experience staff. In the case of the visit retrieval flow chart above, this helped reduce the number of “no” answers to the two questions: “do I have my ticket?” and “do I recognize the URL on my ticket?”

That second question — “do I recognize the URL on my ticket?” — is not a question we would have expected visitors to even be asking. To us, the no-vowel, non-standard-TLD “URL-shortener”-style URL has an instantly recognizable purpose. Through visitor feedback, we learned that for some visitors it understandably looked more like an internal tracking number than the actual website we wanted them to visit.

For these visitors, the best-case scenario is that they would go to our main website, where we provide links, both in the header and on the homepage, to the “visit retrieval” page. Here it is again, for reference:

[Screenshot: the visit retrieval page]

Since we expected users to go straight to the URL on their ticket, this page was more of a backup and as such hadn’t received a lot of attention. As a consequence, there were a few things on the page that confused users. First, the confirm button’s CTA is “fetch,” which differs from the “retrieve” used in the header and the “access” used on the ticket. Second, the placeholder text in the input field is cut off. Third, the word “shortcode,” which we’ve always used internally to refer to a visitor’s visit ID, had no meaning in the visitor’s mind. We tried to explain it by saying that it means “the alphanumeric code after the final slash on your ticket,” which is a useless jumble of words.

Our approach to this was to eliminate the “do I recognize the URL?” question and its resulting outcomes (the dotted box in the flow chart above) and replace it with self-evident instructions. To that end, we redesigned both the visit retrieval page and the ticket itself. Here’s the new ticket:


We’ve provided a much more human-friendly URL and established the shortcode (now just called “code”) as a separate entity. Regardless of whether or not visitors were confused by the short URL, the language on the new ticket fits with our desire to use natural language wherever we can, to avoid having the digital experience feel unnecessarily technical.

The visit retrieval page (which is accessed via the URL on the ticket) also got an update. The code entry field got much bigger and we tucked a small FAQ below it. We also standardized on the word “retrieve” as the imperative.

[Screenshots: the redesigned visit retrieval page]
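The code entry itself needs only light normalization. Below is a minimal sketch of the kind of input handling described above — the function names and the redirect path are hypothetical, not our production code; only the four-or-five-character alphanumeric format comes from the ticket.

```javascript
// Minimal sketch of normalizing whatever a visitor types into the "code" field.
// Function names and the redirect path are hypothetical; only the 4-5 character
// alphanumeric format comes from the ticket design described above.
function normalizeVisitCode(rawInput) {
  // Trim, uppercase, and take the trailing run of 4-5 letters/digits, so that
  // pasting the full short URL (".../v/ABC12") still yields a usable code.
  var code = (rawInput || '').trim().toUpperCase();
  var match = code.match(/([A-Z0-9]{4,5})$/);
  return match ? match[1] : null;
}

// Usage: either redirect to the visit page or show an inline error.
function onRetrieveClicked(inputValue) {
  var code = normalizeVisitCode(inputValue);
  if (!code) {
    return { ok: false, message: 'Please enter the code printed on your ticket.' };
  }
  return { ok: true, url: '/visit/' + code }; // destination path is an assumption
}
```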

Step 2: Redesigning the Visit Page

The next pain point we identified was the visit page itself, and specifically how we used it to explain claiming and privacy. Here’s the page again for reference:

[Screenshot: the unclaimed visit page, with the claim CTA]

The problem concerning how we explained claiming is fairly straightforward. The visual design of the CTA is obtrusive, but it was our only opportunity to explain the benefits of claiming a visit. We sought to find a less obtrusive, more intuitive way to explain why claiming a visit is an option our visitors might want to take advantage of.

The problem concerning how we explained privacy is the more complicated of the two issues. It specifically regards the concept of the “anonymous visit.” Visits aren’t connected with visitors’ identities in any way, except that only the visitor knows the code. We do this because we need a way to uniquely identify each museum visit, and the shortcode keeps that unique ID at a reasonable length. We also want to allow visitors to have an anonymous post-visit experience, meaning they can see everything they did in the museum without having to sign up for an account. But we don’t expect everyone to remember their shortcode or hold on to their ticket forever, so we allow visitors to create an account on our website and “claim” their visits. A claimed visit is linked to a visitor’s email address, so now they can throw out their ticket and forget their shortcode. Over time, we also hope that visitors will claim multiple visits with their accounts so they get a complete history of their relationship with our museum.

The problem this presents is that we have to treat every visitor who has a code as if they are the owner of that visit. This manifests itself in a specific (but important) use case. If a visitor shares their visit on social media while it is unclaimed, then any person who accesses the visit will also have the option to sign up and claim it as their own.

Further compounding this issue is the fact that we automatically make claimed visits private. We do this because in claiming a visit, the visitor is effectively de-anonymizing it. Claimed visits are linked to real-world identities (in the form of a username), and for that reason we make it an opt-in choice to go public with that connection.

The goal of redesigning this page, then, was to allow the visitor to navigate the complex business logic without having to fully comprehend it. In talking this through we concluded that by consolidating the visit controls (which previously only appeared on the claimed visit page) and adding them (greyed out) to the unclaimed visit we could solve many of our problems. Why have a paragraph of explanatory text about why you should claim your visit when we could just show you the control panel that claimed visits have access to? A control panel presents the functionality plainly and concisely, without confusing language.

This also allowed us to establish a language of icons that we could reuse elsewhere to replace explanatory sentences. We also agreed to standardize on the word “claim” as the action that we wanted visitors to engage in, as it more effectively conveys the idea that other people have visits as well but we need to know that this one was yours.

Best of all, it allowed us to build off the work we’d done earlier this year which had the explicit purpose of organizing our code and visual hierarchy to better support future iterations.

Here’s what that ended up looking like.

[Screenshot: the redesigned visit page, with the consolidated (greyed-out) control panel]

Interacting with any of the controls invokes a modal dialog that prompts the user to claim their visit. If they’re not logged in, they are presented with a login / signup prompt. Otherwise they are asked to confirm their desire to claim the visit. Once claimed, the controls function as expected. Like the changes we made to the ticket design, this moves towards a more self-evident experience that requires less information processing time on the visitor’s part.

[Screenshot: the claim-visit modal dialog]
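The gating logic behind those controls is small. Here’s a runnable sketch of just the decision it makes — the object shapes are hypothetical, and in the real page the result is wired up to the modal dialogs described above.

```javascript
// Sketch of the claim-gating decision described above. The real code wires
// the returned value up to modal dialogs; here it simply names the next step.
function nextStepForControl(visit, session) {
  if (visit.claimed) {
    return 'perform-action';    // claimed visit: controls function as expected
  }
  if (!session.loggedIn) {
    return 'show-login-signup'; // anonymous user: offer the login / signup flow first
  }
  return 'confirm-claim';       // logged in, unclaimed visit: confirm the claim
}

// The three paths:
console.log(nextStepForControl({ claimed: false }, { loggedIn: false })); // "show-login-signup"
console.log(nextStepForControl({ claimed: false }, { loggedIn: true }));  // "confirm-claim"
console.log(nextStepForControl({ claimed: true },  { loggedIn: true }));  // "perform-action"
```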

Finally, some bonus gifs to show off the interaction details. The control panel has some rollover action:


We use modal dialogs to confirm privacy changing, deleting and claiming actions:


A Brief Bit About Code

Powering the redesign was a complete overhaul of the JavaScript behind these pages. Specifically, we reorganized it to remove inline code and decouple API logic from DOM logic. In lay terms, this means separating the code that says “when I click this thing…” from the code that says “…perform this action.” When those separate intentions are tightly coupled, the website is less flexible, and maintenance work or experimenting with alternate user flows requires more effort than necessary. When they are separate, reusing code is much more straightforward, which will allow us to tweak and test with ease going forward. Recent frameworks such as Angular or React, which we’ve only just started experimenting with, excel at this. For now, we opted for a slightly modified module pattern, which gives us just enough structure to keep things organized without having to learn a new framework.
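As a rough illustration of what that separation looks like — the endpoint, class names and data attributes below are hypothetical, not our production code — the module pattern keeps the “…perform this action” code in one module and the “when I click this thing…” code in another:

```javascript
// Hypothetical illustration of the module pattern described above:
// one module owns the API calls, another owns the DOM wiring.
var visitAPI = (function () {
  // "...perform this action": pure API logic, no knowledge of the page.
  function setItemPrivacy(itemId, isPublic, callback) {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/api/items/' + itemId + '/privacy'); // endpoint is an assumption
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.onload = function () { callback(null, JSON.parse(xhr.responseText)); };
    xhr.onerror = function () { callback(new Error('request failed')); };
    xhr.send(JSON.stringify({ is_public: isPublic }));
  }
  return { setItemPrivacy: setItemPrivacy };
})();

var visitPage = (function (api) {
  // "When I click this thing...": DOM logic that delegates to the API module.
  function init() {
    var buttons = document.querySelectorAll('.privacy-toggle');
    Array.prototype.forEach.call(buttons, function (button) {
      button.addEventListener('click', function () {
        var makePublic = button.getAttribute('data-public') !== 'true';
        api.setItemPrivacy(button.getAttribute('data-item-id'), makePublic, function (err) {
          if (!err) { button.setAttribute('data-public', String(makePublic)); }
        });
      });
    });
  }
  return { init: init };
})(visitAPI);

visitPage.init();
```

Swapping the button for a different control, or the request for a different endpoint, now only touches one of the two modules.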

What’s Next?

The changes have only been live for a few days now so it will take some time to build up enough numbers to see where to focus our future improvements on this part of the site. Specifically, we will be looking at the percentage of visitors who visit their website and the percentage of those visitors who create accounts, and hope to see the rate of change increasing for both of those numbers.

One part of this visitor flow where we hope to do structured A/B tests is with the “sign up” functionality. Right now, when a visitor enters their code and clicks the “Retrieve” button, they are taken immediately to their visit page. We want to test whether adding in a guided “visit claiming” flow, which would optionally hold the user’s hand through the account creation process before they’ve seen their objects, results in more account creations. We’ll wait and collect enough “A” data before rolling the “B” test out.

Of course, there are big questions we can start answering as well. How can we enhance the value of a visitor’s personal collection? Right now we have rudimentary note-taking functionality which is severely underutilized. What do we do with that? What about new features? We have all of our object metadata sitting right there waiting to be turned into personalized visualizations. (Speaking of that – we have public API methods for visit data!) Finally, how can we complete the cycle and turn the current “post-visit” into the next “pre-visit” experience?

With each iteration, we strive not only to apply what we’ve learned from visitors, colleagues and peers to our digital ecosystem, but also to improve the ease with which future iterations can be made. We are better able to answer questions both big and small with these iterations, which we hope over time will result in a stronger and more meaningful relationship between Cooper Hewitt and our visitors.

Random Button Television

AppleTV SDK Fun

Recently, Apple announced a number of updates to their product line, including a pretty major update to AppleTV, the small set top box that allows people to listen to their music collection, rent movies and TV shows, and stream audio from their phones to their televisions. The biggest update to AppleTV was definitely the fact that it now supports “Apps”, allowing iOS developers to design and build whatever they can imagine.

I put my hat in the ring and applied for Apple’s lottery, and about a week later a brand new AppleTV Software Development Kit showed up at the Labs.

To be honest, I haven’t had much interest in developing apps for a long time now. It’s problematic at best to go down the road of building something for iOS ( or any brand specific device ) in the context of a museum, and yes, it’s been a long while since I even glanced at Objective-C. But, the device is a curious object, and at the very least made me wonder what it might be like to introduce a way to open up access to our collections through the warm and inviting glow of a television screen. Imagine it for a moment, sitting there atop your dresser, or mounted to your living room wall, next to the fire place, in full HD. The television, no matter how you divide it up over the years, has a pretty permanently fixed position in our homes, and in our minds.

As a little side project, I decided to see what I could do with the AppleTV SDK and our Collections API. I decided ahead of time I wouldn’t spend too much time on this, and although I wound up spending at least one night reading up on NSDictionary and a few other oddball data-types in Objective-C, I was able to stick with my original plan and quickly built a little “Hello AppleTV” app that simply allows the “viewer” to flip through objects in our collection by pressing the “select” button on their remotes.

It uses one API method, our old favorite cooperhewitt.objects.getRandom, and yeah, that’s all it does. Keep pressing the select button and you continue to get objects. It’s quite fun!

[Screenshot: the AppleTV project open in Xcode]

So here’s how it works. As we say over here at the Cooper Hewitt Labs, “Working code always wins.”

  1. There is a ViewController. This is the thing that represents the screen on your AppleTV, and the thing you can apply all of the subsequent properties to.
  2. There is an ImageView. This is where the image is displayed.
  3. There are a couple Labels which simply allow you to display the title and object ID for each object.
  4. There is an asynchronous way of calling the API.

Here’s the code. There’s basically just the two files for the ViewController that define everything we’re talking about. Beyond that, there is some “wiring up” of the ImageView and Labels so the visuals know what code they are connected to, and there is really the one method “fetchRandom” that does the work of calling the API, parsing the response, and storing the things we are interested in.
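Stripped of the Objective-C scaffolding, fetchRandom boils down to a single HTTP request. Here’s a rough JavaScript equivalent for illustration — the REST-style endpoint, the access token parameter and the response fields are assumptions to check against the API documentation; only the method name cooperhewitt.objects.getRandom comes from the app itself.

```javascript
// Rough JavaScript equivalent of the app's fetchRandom, for illustration only.
// Endpoint shape and response fields are assumptions; the method name is the
// one the AppleTV app calls.
function fetchRandom(accessToken) {
  var url = 'https://api.collection.cooperhewitt.org/rest/' +
            '?method=cooperhewitt.objects.getRandom' +
            '&access_token=' + encodeURIComponent(accessToken);
  return fetch(url)
    .then(function (response) { return response.json(); })
    .then(function (data) {
      var object = data.object || {};
      // The app keeps the title, the object ID and an image to display.
      return { id: object.id, title: object.title, images: object.images };
    });
}

// Press "select", get another random object.
fetchRandom('YOUR_ACCESS_TOKEN').then(function (obj) {
  console.log(obj.id, obj.title);
});
```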

And here is the end result.

To be quite honest, we probably won’t be uploading this to the iTunes store. It’s really just a “Hello World” app and only meant to be a conversation starter for staff members who happen by the Labs area. But it does make me wonder — what else could museums do with a device like this?

The device itself is a curious one, with plenty of built-in human interface challenges and opportunities. Sitting back, clicking the remote and checking out collection objects isn’t really my idea of an exciting way to spend an evening at home, but take this a few steps further, maybe a few additional API calls, and who knows what might unfold.

Slowly improving Copyright clarity

Ever since the online collection first properly went live in 2012, our collection images have had a little line under them that said “please don’t steal our images, yeah?”. Whilst it was often commented that this was a friendly, casual approach that felt in keeping with the prevailing winds of the Internet, the statement was purposely vague and, at the end of the day, pretty unhelpful.

[Screenshot: the old image-rights line on a collection object page]

After all, what is “stealing” an image? Isn’t the Smithsonian, as a public institution, already owned by the “public”? What about “fair use”? And, as many pointed out, why are you claiming some kind of rights over images of objects that clearly date from before the 20th century? Some also spotted the clear disconnect between the ‘please don’t steal’ language and our other visible commitments to open licensing and open source.

First, a bit of history.

The majority of the Cooper Hewitt collection predates its acquisition by the Smithsonian. The collection was originally at Cooper Union until the museum there closed in 1963. It was officially acquired by the Smithsonian in 1968 and the Cooper Hewitt was opened in the Andrew Carnegie Mansion in 1976. The effect of this history is that much of the pre-1968 collection is unevenly documented and its provenance very much still under active research. Post-1976 it is possible to see, in the metadata, the different waves of museum management and collection documentation, as new objects were added to the collection and new collection policies became formalised. Being a ‘new museum’ in 1976 also meant that much of the focus was on exhibitions, not so much on the business of documenting collections. Add to this the rise of computer-based catalogues and you have a very ‘layered’ history.

Cooper Hewitt has not had the resources or staff to undertake the type of multi-year Copyright audits that museums like the V&A have done, and as a result, with provenance and documentation in many cases quite scant, the museum has had to make ‘best efforts’.

With the recent tweaks to the online collection, we have finally been able to make some clarifying changes.

As at all Smithsonian museums, all of our online content is subject to the institution-wide ‘Terms of use‘. This governs the ‘permitted uses’ of anything on our websites, irrespective of underlying rights. These terms are not created at an individual museum level but are part of Smithsonian-wide policy. You can see that whilst these terms allow only ‘personal, educational, and other non-commercial uses’, they encourage the use of Fair Use under US Copyright law.

That said, we think it is important to be clear on what is definitely out of Copyright, and what may not be. And over time, as the collection gets better documented, more of the unknowns will become known.

So here’s what we have done – it’s not perfect – but at least it’s better than it was. And, to be perfectly honest, we’re only talking about the possible rights inherent in the underlying object in the image, as the digital image itself was created by the Smithsonian. Some of the types of object in our collection may not be eligible for Copyright protection in the first place.

For objects from our permanent collection

1. acquired before 1923 then we say “This object has no known Copyright restrictions. You are welcome to use this image in compliance with our Terms of Use.” For example, this medal acquired in 1907.

2. acquired in or after 1923 but has a known creation date [‘end date’ in our collection database] that is before 1923, then we say “This object has no known Copyright restrictions. You are welcome to use this image in compliance with our Terms of Use.” This 1922 textile acquired in 2015 is a good example.

3. acquired in or after 1923 but without a known, documented, creation date [‘end date’ in our collection database], then we say “This object may be subject to Copyright or other restrictions. You are welcome to make fair use of this image under U.S. Copyright law and in compliance with our terms of use. Please note that you are responsible for determining whether your use is fair and for responding to any claims that may arise from your use.” For example this ‘early 20th century’ Indonesian textile.

This scenario is far too common and you will come across objects that clearly appear to be pre-20th century that have not been formally dated, as well as objects that say in their name or description that they are pre-20th century but have not been correctly entered into the database and don’t have their ‘end date’ field completed. An especially egregious example is this 18th century French textile that has incomplete cataloguing. In the collection database it has no ‘end date’ (it should have 1799 as an ‘end date’) and clearly should have no Copyright restrictions.

4. acquired in or after 1923 with a known creation date also in or after 1923 [‘end date’ in our collection database], then we say “This object may be subject to Copyright or other restrictions. You are welcome to make fair use of this image under U.S. Copyright law and in compliance with our terms of use. Please note that you are responsible for determining whether your use is fair and for responding to any claims that may arise from your use.” For example this 2010 wallpaper.

Many of the ‘utilitarian objects’ in our collection – clocks, tables, chairs, much of the product design collection – are legally untested in terms of whether Copyright applies; however, in many of these cases other IP protection may apply.

As the US Copyright Office states,

“Copyright does not protect the mechanical or utilitarian aspects of such works of craftsmanship. It may, however, protect any pictorial, graphic, or sculptural authorship that can be identified separately from the utilitarian aspects of an object. Thus a useful article may have both copyrightable and uncopyrightable features. For example, a carving on the back of a chair or a floral relief design on silver flatware could be protected by copyright, but the design of the chair or flatware itself could not. Some designs of useful articles may qualify for protection under the federal patent law.” [source]

For objects on loan from other institutions, companies or individuals

5. irrespective of its known age, we now say “This object may be subject to Copyright, loan conditions or other restrictions”.

As you can see we have had to make some very conservative decisions, largely as a result of the incompleteness of our data and museum records.
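Boiled down, the rules above form a small decision tree. Here’s a minimal sketch of that logic — the field names are hypothetical stand-ins for our collection database fields, and the statement text is abbreviated from the wording quoted above.

```javascript
// Sketch of the rights-statement decision tree described above. Field names
// (isLoan, acquisitionYear, endDate) are hypothetical stand-ins for the
// collection database; statement text is abbreviated.
var NO_KNOWN_RESTRICTIONS =
  'This object has no known Copyright restrictions. ' +
  'You are welcome to use this image in compliance with our Terms of Use.';
var MAY_BE_RESTRICTED =
  'This object may be subject to Copyright or other restrictions. ' +
  'You are welcome to make fair use of this image under U.S. Copyright law.';
var LOAN_RESTRICTED =
  'This object may be subject to Copyright, loan conditions or other restrictions.';

function rightsStatement(object) {
  if (object.isLoan) {
    return LOAN_RESTRICTED;       // rule 5: loan objects, irrespective of age
  }
  if (object.acquisitionYear < 1923) {
    return NO_KNOWN_RESTRICTIONS; // rule 1: acquired before 1923
  }
  if (object.endDate && object.endDate < 1923) {
    return NO_KNOWN_RESTRICTIONS; // rule 2: known creation date before 1923
  }
  return MAY_BE_RESTRICTED;       // rules 3 and 4: unknown or post-1922 creation date
}

// An object acquired after 1923 with no recorded 'end date' falls through to
// "may be restricted" -- the situation described for the 18th-century French
// textile above. (The acquisition year here is made up for the example.)
console.log(rightsStatement({ isLoan: false, acquisitionYear: 1950, endDate: null }));
```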

If you spot any of these (you could download the entire metadata from GitHub to do this programmatically), log them with their accession number in our Zendesk and they will be prioritised to be fixed.

Small steps.



Update: Steven Lubar asked us on Twitter to share the number of object records that fall into each of the categories. Here are those numbers:

Acquired before 1923: 32,442
Acquired on or after 1923, with a known creation date before 1923: 5,232
Acquired on or after 1923, with no known creation date: 136,372
Acquired on or after 1923, with a known creation date on or after 1923: 30,357
Loan objects: 13,477

Mailchimp & Tessitura together at last


The short version is:

There is now an integration between Tessitura & Mailchimp. If you are a Tessitura Licensee, and have access to their BitBucket account, you can get it here.

So you can use this lovely Mailchimp Interface to create your emails…

[Screenshot: the Mailchimp email editor]

… and connect it with all of the power of Tessitura, through this easy to use tool.


The long version is:

It’s the last day of the Tessitura Learning & Community Conference, and I’m all checked out of my hotel, sitting in the conference hall, thinking about all of the things I’ve learned this week, and all of the people I’ve met.

So many of the people I’ve talked to have been asking about the Tessitura-Mailchimp Integration we launched in partnership with Mailchimp and JCA, Inc. this past week, and so I thought I’d write up a blog post to try and explain what it is, how you get it/use it/make it better, and more importantly, why we did it in the first place.

A long while ago, Cooper Hewitt had an enormous email list. Some 60K emails on one massive list powered by an e-marketing service that was clearly heading out of business. This giant list wasn’t working. We weren’t getting the results we thought we should, and what’s more, we had no way of measuring our success. So, we switched to Mailchimp. It was a pretty obvious choice. Mailchimp offered the museum an incredibly quick set-up time and a beautiful user interface, with super clean and easy to use templates. What’s more, Mailchimp placed a lot of emphasis on “list quality” and advised us to put out an appeal to our current bloated list to do an “opt-in” and create a whole new list made up of real people, with valid email addresses, who actually wanted to receive mail from us.

The list dropped down. Way down. After a few “last chance” appeals, our 60K subscribers were whittled down to about 2,500. As at almost every non-profit, this was challenging territory for many departments in the museum, which relied on the large numbers more for a sense of security than for their effectiveness.

But we pressed on, and noticed quickly that our open rates were dramatically high. Our click-through rates were excellent, and it was clear that people were actually reading our emails and acting on our calls to action. If you haven’t noticed by now, I’m trying to include as many marketing buzzwords in this post as possible. You know, due diligence and all.

This is a long way around to explain that we all started to fall in love with Mailchimp. Its ease of use and deep analytics and reporting tools were a huge win for the museum as a whole. Our list continues to grow and our “satisfaction” rating remains pretty steadily on the high end. The staff seem to enjoy working in Mailchimp, especially following the recent redesign of the user interface.

One day along the way, the museum decided to implement Tessitura as our CRM ( constituent relationship management ) and ticketing system. It’s a super robust, enterprise-class system that is sort of the Swiss Army knife for non-profits, performing arts centers and, more recently, museums.

In the long term strategic plan for Tessitura, it appeared as though we would have to ditch Mailchimp and move to one of the two providers that offer an integration with Tessitura. We looked at both of them, and while they both did the job at hand, neither of them offered the pleasant experience and incredible analytics tools that Mailchimp did. It would have been a tough sell to our staff to move them off something they clearly all had grown to love and on to a system that would probably work just fine, but not make their hearts any warmer.

So, we talked with Mailchimp. Mailchimp has a wide variety of third party integrations, and we started to converse about what an integration with Tessitura would look like. We all got really excited at the possibilities, and so once a small amount of funding was secured, we partnered with JCA, Inc. to build us something.

Mailchimp was really excited about the idea, and being a forward thinking tech company, they pushed us to make the integration free, and open source. This is something we strongly believe in at Cooper Hewitt as well, so we worked with the staff at Tessitura, and figured out a way to share the code within the Tessitura Network, so as not to violate any non-disclosure agreements. Things were starting to take shape.

So what will it do, and how does it work?


We tried to limit the scope of the project to the bare essentials. We wanted to stay within our budget, and build a simple tool that does what it says on the tin. The hope here is that Tessitura licensees will try it out, see that it’s a good thing, and run with it, adding features and customizing it to suit their needs. Open source goodness.

At the moment, the project is a pretty simple .NET application that anyone can install on a Windows machine that can talk to Tessitura and Mailchimp. You fill out some initial config information, and then schedule a nightly synchronization job. This allows Tessitura licensees to export their primary lists on a nightly basis into Mailchimp.

You can also perform synchronizations on an ad-hoc basis, meaning, any Tessitura user can easily create a segmented list in Tessitura for a specific purpose, and sync that list to Mailchimp for immediate sending.

This is a really nice feature, because it actually creates or updates a segment in Mailchimp. Rather than create many bespoke email lists, you can then just use a single master list in Mailchimp, and use the exported segments so you are only sending to the addresses you are interested in.
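The actual integration is a .NET application, but the shape of a sync run is easy to sketch. Here’s a rough JavaScript outline, assuming a hypothetical exportTessituraSegment helper on the Tessitura side and Mailchimp’s v3 member-upsert endpoint — treat the endpoint details as assumptions and check them against Mailchimp’s API documentation rather than this sketch.

```javascript
// Rough outline of one sync run; not the actual .NET integration.
// exportTessituraSegment is a hypothetical helper, and the Mailchimp URL follows
// the v3 pattern but should be checked against Mailchimp's API docs.
var crypto = require('crypto');

function subscriberHash(email) {
  // Mailchimp identifies list members by the MD5 hash of the lowercased address.
  return crypto.createHash('md5').update(email.toLowerCase()).digest('hex');
}

async function syncSegmentToMailchimp(config, exportTessituraSegment) {
  // 1. Pull the list or segment out of Tessitura (hypothetical helper).
  var members = await exportTessituraSegment(config.tessituraListId);

  // 2. Upsert each address into the single master Mailchimp list.
  for (var member of members) {
    var url = 'https://' + config.dc + '.api.mailchimp.com/3.0/lists/' +
              config.mailchimpListId + '/members/' + subscriberHash(member.email);
    await fetch(url, {
      method: 'PUT',
      headers: {
        'Authorization': 'Basic ' + Buffer.from('anystring:' + config.apiKey).toString('base64'),
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({
        email_address: member.email,
        status_if_new: 'subscribed',
        merge_fields: { FNAME: member.firstName || '', LNAME: member.lastName || '' }
      })
    });
  }
}
```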

What it’s not

It’s important to understand that this is an open source tool and is provided “as is.” There is no support staff waiting to take your calls and answer your questions. This remains the responsibility of the Tessitura community.

As I mentioned, it’s a simple tool, and at the moment, it basically does the two functions I’ve outlined above. There is no syncing of analytics data back into Tessitura, for example. We really love the analytics tools built right into Mailchimp, and so for most this may not be a deal breaker. These are the kinds of features we hope will get added by the community down the road.

What it is, again.

It’s a super exciting thing for us to all think about! The Tessitura community really needs to take more control over the entire eco-system of third-party applications and extensions. Without a vested interest in building our own tools, open sourcing the work we are all doing, and joining in the conversation with regards to direction and strategy, the community will always be waiting on the next update from those vendors who have chosen to build products from the system.

How do I get it?

First, you need to be a Tessitura Network licensee. Then, you just need access to the Tessitura Network code sharing site on BitBucket, which you can request by email. Once you are there, you can download the code, or the binaries, to try it out on your system. The repository has a README with all the relevant info on how to install it from scratch, build from the source, and set things up in Mailchimp. If you can’t build from source, you can also download the compiled binaries and just try it out.

How do I contribute?

If you are a Tessitura Network licensee, and you’ve gotten this far, read the README to get the full picture on how to fork the code and contribute. For the time being, feel free to log issues, and send feature requests, and I will do my best to follow up on them and help get them resolved, but eventually, we hope that someone within the community will pick up the torch and help us to continue to develop what we think is a really valuable integration and option for the broader Tessitura community.

Reminder: first, you need to be a Tessitura Network licensee. Then, you just need access to the Tessitura Network code sharing site on BitBucket, which you can request by email.

Content sharing and ambient display with Electric Objects EO1

Scenic panel El Dorado, designed by Joseph Fuchs, Eugène Ehrmann and Georges Zipélius and manufactured by Zuber & Cie, 1915-25, Gift of Dr. and Mrs. William Collis. From the Cooper Hewitt collection, displayed on an EO1. Photo by Zoe Salditch.

One of the cornerstones of Cooper Hewitt’s very visible digital strategy has been promiscuity. From the first steps in early 2012 when the online collection was released, we’ve partnered with many people from Google Art Project and Artsy to Artstor and now Electric Objects.

Electric Objects is a little different from the others in that we’ve worked with them to share a very select and small number of collection objects, much in the way that Pam Horn and Chad Phillips have worked to grow the museum’s ‘licensed product’ lines of merchandise.

Electric Objects is a New York startup that raised a significant amount of money on Kickstarter to build and ship a ‘system for displaying digital art’. Jake Levine, Zoe Salditch and their team have now developed the EO1 into a small ecosystem of screens deployed in the homes and offices of about 2500 ‘early adopters’ and digital artists who have been creating bespoke commissions for the system.

Cooper Hewitt joined the New York Public Library in providing a selection of collection materials to see what this community might make of it – and, internally, to think about what it might mean to have a future in which digital art might become ‘ambient’ in people’s homes.

I spoke to Jake and Zoe late last week in their office in New York.

Seb Chan – I like how the EO1 has ‘considered limitations’ – the lack of a slideshow mode, the lack of a landscape mode – can you tell us a bit more about what went into these decisions? And now that EO1s are in homes and offices around the world, what the response has been like?

Jake Levine – Computing has for the last 50 to 60 years been characterized by interaction, generally for the sake of productivity or entertainment. Largely as a result, we’ve built software whose basis for success is defined by volume of interaction. Most companies start with: ‘how often can we get users to engage with our product?’

What we’ve been left with is a world filled with software competing for our attention, demanding our interaction. And we feel like crap. We feel overwhelmed.

EO1 was an experiment in a kind of computing that, by definition, could not demand anything from us. We asked whether we could build a computer that brought value into its environment without asking for user interaction. How do we ensure that the experiment remains valid? We make interaction impossible. You can’t ‘use’ EO1, just like you can’t ‘use’ art.

In the interest of exploring a different kind of computing, we made sure not to take any existing software paradigms for granted. The slideshow, of course, is ubiquitous in digital photo frames, to which we are often compared. For that decision, we went back to first principles — why? Why do we want slideshows? My experience with slideshows is characterized by distraction. The image changes, it catches my eye, it interrupts my conversation. Change demands our attention.

We say we want slideshows, but how much of that has to do with expectations informed by how screens have behaved in the past, without enough time spent thinking about how they might behave in the future? We’re so accustomed to the speed of the web, that even while we complain about it, when we’re presented with an alternative, we decide that we miss it.

But what is the value of change on the Internet? For me it’s not about randomness, it’s not about timers and playlists and settings. Change at its most meaningful happens in social contexts, in software that lives on top of a network, where ephemerality is actually just conversation, people talking. Twitter, Facebook, Instagram, Tumblr — these services aren’t an overwhelming flood of information, they are people talking to each other, and that’s why we keep coming back.

So you will likely see change enter the Electric Objects experience in the future, but it won’t be programmatic. It will be social.

Electric Objects, like all networked media discovery software, is a shared experience. And that’s also why we lack landscape. It’s important that everyone experiences Electric Objects in the same way, to create a deeper connection among its members. It also makes for a better user experience.

SC – Defaults matter, I think we all learned that from Flickr, and I really like that EO1 is ‘by default’ Public. This obviously limits the use of the EO1 as a digital photo frame, so what sort of things are you seeing as ‘popular’?

JL – People love water! So many subtly moving water images! But beyond the collective fascination with water, a lot of people are displaying the artwork we’re producing for Art Club, our growing collection of new and original art made for EO1 (including the awesome collection of wallpaper from Cooper Hewitt!).

Sidewall, wallpaper with stylised trees, ca 1920, designed by René Crevel and manufactured by C. H. H. Geffroy and distributed by Nancy McClelland, Inc. Gift of Nancy McClelland. From the Cooper Hewitt collection, displayed on an EO1. Photo by Zoe Salditch.

SC – Cooper Hewitt joined the Art Club early on and we’re excited to see a selection of our historic wallpapers available on the device. This wasn’t as straightforward as any of us had expected, though. Can you tell us about the process of getting our ‘digitised wallpapers’ ready and prepared for the EO1?

JL – When you’re bringing any art onto a screen, you have to deal with a fixed aspect ratio. Software designers and engineers know the pain of accommodating varying screen sizes all too well. In many ways what we offer artists — a single aspect ratio across all of our users — is a welcome relief. What’s more challenging is “porting” existing work into the new dimensions.

Wallpapers were actually a great starting point, because they’re designed to be tiled. Still, we hand cropped and tiled each object, to ensure an optimal experience for the user (and the art!).

SC – Our friends at Ghostly and NYPL took a slightly different route. Can you tell us about how both of those collaborators chose and supplied the works that they have made available?

JL – Ghostly is a label that represents a fantastic group of artists and musicians. Together, we selected a few artists to participate in the Ghostly x EO collection, featuring original work made specifically for Electric Objects.

And NYPL was somewhere between Ghostly and what we did with Cooper Hewitt. NYPL has this incredible collection of maps that they’ve digitized. We knew we didn’t want to simply show a cropped version of the maps on EO1, so we turned to the artist community, and starting taking proposals. We asked: what would you do with these beautiful maps as source material?

Natural Elements by Jenny Odell, from the NYPL x EO Collection.

Jenny Odell produced an incredible series of collages. She spent ninety-two hours in Photoshop cutting out the illustrations that cartographers often include on the edges of maps — these beautiful illustrations that rarely get any attention, since the maps have a primarily functional purpose. In this case we used something old to make something new, something designed with and for the screen. It was perfect.

SC – Art Club feels like it could be sort of a ‘Bandcamp for net art’. I know you’ve been commissioning specific works for the EO1 and making sure artists get paid, so tell us more about how you see this might work in the future?

Zoe Salditch – Without art, EO1 would just be any other screen. And we’ve known since the early days that art made for EO1 is always a better experience.

There are many ways people engage with and have historically paid for art, so we’re exploring a couple different ideas. Right now, we commission artists upfront and ask them to create small series for EO1, and this collection is available for free for EO1 owners for now. Our plan is to eventually put this ever-growing collection behind a subscription, so that the customer can subscribe to gain access to the entire collection.

Other strategies we’re exploring include limited editions, and a commission service for those who want to have something that feels more exclusive and custom. We believe that artists should be paid for their work, and that people will pay for great art. Other than that, we’re open to experimenting, and we have a lot to learn from our community now that EO1 is out in the wild!

SC – Cooper Hewitt’s wallpapers have been up for a little while as you’ve been shipping out units to Kickstarter backers. What can you tell us about how people have been showing them? What sorts of stats are we looking at?

JL – Art from the Cooper Hewitt collection has been displayed 783 times in homes all over the world, with an aggregate on-display time of over 217 days! The three El Dorado scenic panels have been most popular!

Explore the Cooper Hewitt objects available for ambient viewing through Electric Objects, or visit Shop Cooper Hewitt in-store at 2 East 91st in New York to buy an EO1 unit from the museum tax-free [sorry, not currently available via our online store].

5 months with the Pen: data, data, data

It’s been five hectic months since the Pen started being distributed to visitors at the ticket counter, and we’ve been learning a lot. We last made some basic stats available at the 100 day mark, but how has usage changed – especially now that almost every area of the museum has been changed over in terms of exhibitions and objects? And what are the tweaks that have made the difference?

Take up rates are improving

March 10 to August 10 total number of times the Pen has been distributed – 62,015
March 10 to August 10 total number of eligible visitors – 65,935
March 10 to August 10 mean take up rate – 94.05%

The Pen launched on March 10, four months after the museum opened its doors, and by the end of March the Pen had a take up rate of 80.44%. By the end of April this had improved to 96.88% and by the end of July to 97.44%. A huge amount of effort by the front-of-house team to improve their scripting and ‘pitch to visitors’ made the difference upfront, and that was backed up by optimisations to the Pen distribution processes later. Late July also saw the introduction of the Pen into Pay-What-You-Wish Saturday evenings which relied a lot on having more streamlined Pen handout processes being implemented. Still to come is the integration of the Pen into education visits and school groups.

Pen usage is improving

March 10 to August 10 total objects collected – 1,394,030
March 10 to August 10 total visitor-made designs saved – 54,029
March 10 to August 10 mean zero collection rate – 26.7%

Not everyone who takes the Pen ends up using it. Some visitors wander around with it but choose not to save anything.

In April we saw a high of 31.28% not using their Pen, and we believe that a sizeable portion of this was actually the result of some backend issues that saw some visitors not being able to ‘write’ the contents of their Pen to their account. We noticed an uptick in “I visited, used the Pen, but there’s nothing when I go to my ticket” emails coming in to our Zendesk customer service helpdesk. Throughout May and June we tracked down the source of some of these problems and began to resolve them. By the end of July the non-use rate was down to 22.4% and is tracking under 20% for August so far.

Those who do use the Pen, though, use it a lot. The average number of objects saved by a visitor has varied between 33.2 (March) and 26.99 (June) – significantly more than expected. The average number of ‘visitor-made designs’ (wallpapers, 3d models, Sketchbot portraits) has stayed relatively steady at 1.2 per visitor.

Time on campus is stable

March 10 to August 10 mean time on campus – 99.56 minutes

Cooper Hewitt is not a large museum. There’s a lot to do, but it is physically quite small at 16,000 square feet of gallery space. One of the aims of the new museum experience and redesign was to extend the time that visitors spent on site. As the Pen is handed out at the moment of admission and returned upon exit, the time between these two events is a pretty accurate indication of the time each visitor spends in our building (inclusive of shop and cafe).

Month to month the average has oscillated between 91.84 minutes and 104.31 minutes. Because of changes in the way that Pens are collected at the end of the visit, times from July onwards have to be adjusted downwards by 30 minutes. In order to speed up the museum exit experience, front-of-house staff clear the Pen deposit box every 30 minutes instead of individually, meaning that some Pens may have been sitting ‘unreturned’ for a while.

Post-visit logins need improvement

March 10 to August 10 post visit website retrieval rate – 33.8%

Each ticket that is paired with a Pen contains a unique URL which allows a visitor to login after their visit to see what they collected and designed. For well over 20 years this has been seen, perhaps misguidedly, as the holy grail of museum experiences – “they came to the museum and they enjoyed themselves so much, they went back to the website for more afterwards”. Falk & Dierking, amongst others, have emphasised that visitors recall their museum visits as an amalgam of experiences and often not in the categories or strict differentiations of specific exhibitions, programs, or objects that museum professionals expect.

For the first 4 months, March through June, the percentage of visitors retrieving their visit data from the unique URL on their ticket was flat at 35%. In July we started to see this drop to 30.65%. We’re looking into some of the potential causes for this drop – this may be related to the Pen box at the exit operating in a less-staffed mode (previously every Pen was collected by a front-of-house staff member who would verbally remind the visitor to check out their visit using the URL on their ticket as they left the museum). We will soon be trialling a slightly redesigned ticket with a simpler, clearer call-to-action and URL, as well as better exit signage as a reminder.

That said, these figures for post-visit access are vastly better than most other known initiatives in the museum sector where post-visit web use is usually well under 10%.

Soon, too, the post-visit experience online will see some small tweaks and improvements deployed that will make it easier to navigate, explore, and export your visit.


Visitors continue to surprise us. Many of the creations that are being drawn in the Immersion Room are astounding in their complexity and they remain a firm favourite on social media. A simple look at Instagram photos posted from our location makes it very clear that visitors love the interactivity and the ability to ‘put themselves into the museum’. Popular objects, too, continue to be a balance of ‘unexpected gems’ and ‘known favourites’.

We’re in the process of drawing up some maps that will help us visualise the distribution of ‘popularity’ throughout the physical gallery spaces. This sort of spatial visualisation, coupled with new data as the objects on all three floors of the museum are switched out for new exhibitions, will help the museum differentiate between the effect of ‘location as an attractor’ [are things closer to doors/thresholds more popular than things in the middle of the room etc], ‘aesthetic qualities as an attractor’ [are bold objects more popular than more subtly displayed/lit objects], and the influence of ‘known classics’ or the concept of ‘landmark objects’ in design of exhibitions (see the work of Stephen Bitgood).

We’re also interested in sequencing. What order do visitors move through spaces? Does this change by visitor type or by the type of exhibitions on view? How long do visitors of different types take to make their ‘first collection’?

So many questions!

You can always keep an eye on the top line numbers and very basic Pen statistics on our site, and Labs will continue to blog results at periodic milestones.

The digital experience at Cooper Hewitt is supported by Bloomberg Philanthropies.

Long live RSS

[Screenshot: the Recently Digitized Design Tumblr]

I just made a new Tumblr. It’s called “Recently Digitized Design.” It took me all of five minutes. I hope this blog post will take me all of ten.

But it’s actually kinda cool, and here’s why. Cooper Hewitt is in the midst of a mass digitization project where we will have digitized our entire collection of over 215K objects by mid to late next year. Wow! 215K objects. That’s impressive, especially when you consider that probably 5000 of those are buttons!

What’s more is that we now have a pretty decent “pipeline” up and running. This means that as objects are being digitized and added to our collections management system, they are automatically winding up on our collections website after winding their way through a pretty hefty series of processing tasks.

Over on the West Coast, Aaron felt the need to make a little RSS feed for these “recently digitized” objects so we could all easily watch the new things come in. RSS, which stands for “Rich Site Summary”, has been around forever, and many have said that it is now a dead technology.

Lately I’ve been really interested in the idea of Microservices. I guess I never really thought of it this way, but an RSS or ATOM feed is kind of a microservice. Here’s a highlight from “Building Microservices by Sam Newman” that explains this idea in more detail.

Another approach is to try to use HTTP as a way of propagating events. ATOM is a REST-compliant specification that defines semantics ( among other things ) for publishing feeds of resources. Many client libraries exist that allow us to create and consume these feeds. So our customer service could just publish an event to such a feed when our customer service changes. Our consumers just poll the feed, looking for changes.

Taking this a bit further, I’ve been reading this blog post, which explains how one might turn around and publish RSS feeds through an existing API. It’s an interesting concept, and I can see us making use of it for something just like Recently Digitized Design. It sort of brings us back to the question of how we publish our content on the web in general.

In the case of Recently Digitized Design the RSS feed is our little microservice that any client can poll. We then use IFTTT as the client, and Tumblr as the output where we are publishing the new data every day. 
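A toy version of that polling client fits in a few lines. The feed URL below is a placeholder and the parsing is deliberately crude — a real client (or IFTTT) would use a proper feed parser — but it shows the whole “poll the feed, look for changes” pattern.

```javascript
// Toy RSS poller: fetch the feed, pull out item links with a crude regex,
// and report anything we haven't seen before. The feed URL is a placeholder.
var FEED_URL = 'https://example.org/recently-digitized/rss';
var seen = new Set();

async function pollOnce() {
  var response = await fetch(FEED_URL);
  var xml = await response.text();

  // Grab the contents of every <link>...</link> inside an <item>.
  var links = [];
  var itemRegex = /<item>[\s\S]*?<link>(.*?)<\/link>[\s\S]*?<\/item>/g;
  var match;
  while ((match = itemRegex.exec(xml)) !== null) {
    links.push(match[1].trim());
  }

  for (var link of links) {
    if (!seen.has(link)) {
      seen.add(link);
      console.log('New digitized object:', link); // e.g. hand off to Tumblr here
    }
  }
}

// Poll on a schedule, the same way IFTTT checks the feed periodically.
setInterval(pollOnce, 15 * 60 * 1000);
pollOnce();
```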

RSS certainly lives up to its nickname ( Really Simple Syndication ), offering a really simple way to serve up new data, and that to me makes it a useful thing for making quick and dirty prototypes like this one. It’s not a streaming API or a fancy push notification service, but it gets the job done, and if you log in to your Tumblr Dashboard, please feel free to follow it. You’ll be presented with 10-20 newly photographed objects from our collection each day.


So this happened: