Category Archives: Backends

Mailchimp & Tessitura together at last


The short version is:

There is now an integration between Tessitura & Mailchimp. If you are a Tessitura Licensee, and have access to their BitBucket account, you can get it here.

So you can use this lovely Mailchimp Interface to create your emails…


… and connect it with all of the power of Tessitura, through this easy-to-use tool.


The long version is:

It’s the last day of the Tessitura Learning & Community Conference, and I’m all checked out of my hotel, sitting in the conference hall, thinking about all of the things I’ve learned this week, and all of the people I’ve met.

So many of the people I’ve talked to have been asking about the Tessitura-Mailchimp Integration we launched in partnership with Mailchimp and JCA, Inc. this past week, and so I thought I’d write up a blog post to try and explain what it is, how you get it/use it/make it better, and more importantly, why we did it in the first place.

A long while ago, Cooper Hewitt had an enormous email list. Some 60K emails on one massive list powered by an e-marketing service that was clearly heading out of business. This giant list wasn’t working. We weren’t getting the results we thought we should, and what’s more, we had no way of measuring our success. So, we switched to Mailchimp. It was a pretty obvious choice. Mailchimp offered the museum an incredibly quick setup, a beautiful user interface, and super clean, easy-to-use templates. What’s more, Mailchimp placed a lot of emphasis on “list quality” and advised us to put out an appeal to our current bloated list to do an “opt-in” and create a whole new list made up of real people, with valid email addresses, who actually wanted to receive mail from us.

The list dropped down. Way down. After a few “last chance” appeals, our 60K subscribers were whittled down to about 2500. As at almost every non-profit, this was challenging territory for many departments in the museum, which had relied on the big number more for a sense of security than for its effectiveness.

But we pressed on, and quickly noticed that our open rates were dramatically higher. Our click-through rates were excellent, and it was clear that people were actually reading our emails, and acting on our calls to action. If you haven’t noticed by now, I’m trying to include as many marketing buzzwords in this post as possible. You know, due diligence and all.

This is a long way around to explain that we all started to fall in love with Mailchimp. Its ease of use and deep analytics and reporting tools were a huge win for the museum as a whole. Our list continues to grow and our “satisfaction” rating remains pretty steadily on the high end. The staff seem to enjoy working in Mailchimp, especially following the recent redesign of the user interface.

One day along the way, the museum decided to implement Tessitura as our CRM ( constituent relationship management ) and ticketing system. It’s a super robust, enterprise-class system that is sort of the Swiss Army knife for non-profits, performing arts centers, and more recently, museums.

In the long term strategic plan for Tessitura, it appeared as though we would have to ditch Mailchimp and move to one of the two providers that offer an integration with Tessitura. We looked at both of them, and while they both did the job at hand, neither of them offered the pleasant experience and incredible analytics tools that Mailchimp did. It would have been a tough sell to our staff to move them off something they clearly all had grown to love and on to a system that would probably work just fine, but not make their hearts any warmer.

So, we talked with Mailchimp. Mailchimp has a wide variety of third party integrations, and we started to converse about what an integration with Tessitura would look like. We all got really excited at the possibilities, and so once a small amount of funding was secured, we partnered with JCA, Inc. to build us something.

Mailchimp was really excited about the idea, and being a forward thinking tech company, they pushed us to make the integration free, and open source. This is something we strongly believe in at Cooper Hewitt as well, so we worked with the staff at Tessitura, and figured out a way to share the code within the Tessitura Network, so as not to violate any non-disclosure agreements. Things were starting to take shape.

So what will it do, and how does it work?


We tried to limit the scope of the project to the bare essentials. We wanted to stay within our budget, and build a simple tool that does what it says on the tin. The hope here is that Tessitura licensees will try it out, see that it’s a good thing, and run with it, adding features and customizing it to suit their needs. Open source goodness.

At the moment, the project is a pretty simple .NET application that anyone can install on a Windows machine that can talk to Tessitura and Mailchimp. You fill out some initial config information, and then schedule a nightly synchronization job. This allows Tessitura licensees to export their primary lists into Mailchimp on a nightly basis.

You can also perform synchronizations on an ad-hoc basis, meaning, any Tessitura user can easily create a segmented list in Tessitura for a specific purpose, and sync that list to Mailchimp for immediate sending.

This is a really nice feature, because it actually creates or updates a segment in Mailchimp. Rather than create many bespoke email lists, you can then just use a single master list in Mailchimp, and use the exported segments so you are only sending to the addresses you are interested in.
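Under the hood, that synchronization is just the Mailchimp API at work. Purely to illustrate the concept ( the integration itself is a .NET app; the list ID, segment name and address below are made up ), creating a static segment via Mailchimp’s v3.0 API looks something like this:

$> curl -X POST 'https://us1.api.mailchimp.com/3.0/lists/abc123def4/segments' \
        --user 'anystring:YOUR_API_KEY' \
        -d '{"name": "Members Preview Night", "static_segment": ["visitor@example.com"]}'

Re-running that kind of call with a fresh list of addresses is what keeps a segment up to date from night to night.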

What it’s not

It’s important to understand that this is an open source tool and is provided “as is.” There is no support staff waiting to take your calls and answer your questions. This remains the responsibility of the Tessitura community.

As I mentioned, it’s a simple tool, and at the moment, it basically does the two functions I’ve outlined above. There is no syncing of analytics data back into Tessitura, for example. We really love the analytics tools built right into Mailchimp, and so for most this may not be a deal breaker. These are the kinds of features we hope will get added by the community down the road.

What it is, again.

It’s a super exciting thing for us to all think about! The Tessitura community really needs to take more control over the entire eco-system of third-party applications and extensions. Without a vested interest in building our own tools, open sourcing the work we are all doing, and joining in the conversation with regards to direction and strategy, the community will always be waiting on the next update from those vendors who have chosen to build products from the system.

How do I get it?

First, you need to be a Tessitura Network licensee. Then, you just need access to the Tessitura Network code sharing site on BitBucket, which you can get by sending an email to the Tessitura Network. Once you are there, you can go to the repository and download the code, or the binaries, to try it out on your system. The repository has a README with all the relevant info on how to install it from scratch, build from source, and set things up in Mailchimp. If you don’t have the capability to build from source, you can also just download the compiled binaries and try it out.

How do I contribute?

If you are a Tessitura Network licensee, and you’ve gotten this far, read the README to get the full picture on how to fork the code and contribute. For the time being, feel free to log issues, and send feature requests, and I will do my best to follow up on them and help get them resolved, but eventually, we hope that someone within the community will pick up the torch and help us to continue to develop what we think is a really valuable integration and option for the broader Tessitura community.

Reminder: First, you need to be a Tessitura Network licensee. Then, you just need access to the Tessitura Network code sharing site on BitBucket, which you can get by sending an email to the Tessitura Network.

Long live RSS


I just made a new Tumblr. It’s called “Recently Digitized Design.” It took me all of five minutes. I hope this blog post will take me all of ten.

But it’s actually kinda cool, and here’s why. Cooper Hewitt is in the midst of a mass digitization project where we will have digitized our entire collection of over 215K objects by mid to late next year. Wow! 215K objects. That’s impressive, especially when you consider that probably 5000 of those are buttons!

What’s more is that we now have a pretty decent “pipeline” up and running. This means that as objects are being digitized and added to our collections management system, they automatically wind up on our collections website after making their way through a pretty hefty series of processing tasks.

Over on the West Coast, Aaron felt the need to make a little RSS feed of these “recently digitized” objects so we could all easily watch the new things come in. RSS, which stands for “Rich Site Summary”, has been around forever, and many have said that it is now a dead technology.

Lately I’ve been really interested in the idea of Microservices. I guess I never really thought of it this way, but an RSS or ATOM feed is kind of a microservice. Here’s a highlight from Building Microservices by Sam Newman that explains this idea in more detail.

Another approach is to try to use HTTP as a way of propagating events. ATOM is a REST-compliant specification that defines semantics ( among other things ) for publishing feeds of resources. Many client libraries exist that allow us to create and consume these feeds. So our customer service could just publish an event to such a feed when our customer service changes. Our consumers just poll the feed, looking for changes.

Taking this a bit further, I’ve been reading this blog post, which explains how one might turn around and publish RSS feeds through an existing API. It’s an interesting concept, and I can see us making use of it for something just like Recently Digitized Design. It sort of brings us back to the question of how we publish our content on the web in general.

In the case of Recently Digitized Design the RSS feed is our little microservice that any client can poll. We then use IFTTT as the client, and Tumblr as the output where we are publishing the new data every day. 
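If you would rather skip IFTTT and roll your own client, polling an RSS feed takes just a few lines. Here is a minimal sketch in PHP ( the feed URL is a placeholder, not the real one ):

// Poll the feed and print each newly digitized object.
// NOTE: the URL below is a placeholder.
$feed = simplexml_load_file('https://collection.cooperhewitt.org/recent/feed');

foreach ($feed->channel->item as $item) {
    echo $item->title . ' - ' . $item->link . "\n";
}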

RSS certainly lives up to its nickname ( Really Simple Syndication ), offering a really simple way to serve up new data, and that to me makes it a useful thing for making quick and dirty prototypes like this one. It’s not a streaming API or a fancy push notification service, but it gets the job done, and if you log in to your Tumblr Dashboard, please feel free to follow it. You’ll be presented with 10-20 newly photographed objects from our collection each day.



Happy Staff = Happy Visitors: Improving Back-of-House Interfaces

“You have to make the back of the fence that people won’t see look just as beautiful as the front, just like a great carpenter would make the back of a chest of drawers … Even though others won’t see it, you will know it’s there, and that will make you more proud of your design.”

—Steve Jobs

In my last post I talked about improvements to online ticketing based on observations made in the first weeks after launching the Pen.

Today’s post is about an important internal tool: the registration station whose job is to pair a new ticket with a new pen. Though visitors will never see this interface, it’s really important that it be simple, easy, clear, and fast. It is also critical that staff are able to understand the feedback from this app because if a pen is incorrectly paired with a ticket then the visitor’s data (collections and creations) will be lost.

Like a Steve-Jobs-approved iPod or a Van Cleef & Arpels ruby brooch, the “inside” of our system should be as carefully and thoughtfully designed as the outside.

the view from behind a desk with screens and wires everywhere. a tablet positioned upright with some tiny text and bars of color.

Version 1 of the app was functional but cluttered, with too much text, and no clear point of focus for the eye.

Because the first version of the app was built to be procedurally functional, its visual design was given little consideration. However, the application as a whole was designed so that the user interface – running in a web browser – was completely separate from the underlying pen pairing functionality, which makes updating the front-end a relatively straightforward task.

Also, we were getting a few complaints from visitors who returned home eager to see their visit diary, and were disappointed to see that their custom URL contained no data. We suspected this could have been a result of the poor UI at ticketing.

With this in mind, I sat behind the desk to observe our staff in action with real customers. I did about three sessions, for about ten minutes each, sometimes during heavy visitor traffic and sometimes during light traffic. Here’s what I kept an eye on while observing:

  • How many actions are required per transaction? Is there any way to minimize the number of “clicks” (in this case, “taps”) required from staff?
  • Is the visual feedback clear enough to be understood with only partial attention? Or do typography, colors, and composition require an operator’s full attention to understand what’s going on?
  • What extraneous information can we minimize or omit?
  • What’s the critical information we should enlarge or emphasize?

After observing, I tried my hand at the app myself. This was actually more edifying than doing observations. Kathleen, our head of Visitor Services, had a batch of about 30 Pens to pair for a group, and I offered to help. I was very slow with the app, so I wasn’t really of much help, moving through my batch of pens at about half the speed of Kathleen’s staff.

Some readers may be thinking that since the desk staff had adjusted to a less-than-excellent visual design and were already moving pretty fast with it, that could be a reason not to improve it. But as designers, we should always be helping and improving. Nobody should have to live with a crappy interface, even if they’ve adjusted to it! And, there will be new staff, and they will get to skip the adjustment process and start on the right foot with a better-designed tool.

My struggle to use the app was fuel for its redesign, which you can see germinating in my drawings below.

some marker sketches of a tablet interface with lots of scribbled notes

After several rounds of paper sketches like these, the desk reps and I decided on this sequence as the starting point for version two of the app.

These were the last in a series of drawings that I worked through with the desk staff. So our first few “iterative prototypes” were created and improved upon in a matter of minutes, since they were simply scribbled on paper. We arrived at the above stopping point, which Sam turned into working code.

Here’s what’s new in version 2:

  • The most important information—the alphanumeric shortcode— is emphasized. The font is about 6 or 7 times bigger, with exaggerated spacing and lots of padding (white space) on all sides for increased legibility. Or as I like to call it, “glanceability.” This helps make sure that the front of house staff pair the correct pen with the correct ticket.
  • Fewer words. For example, “Check Out Pen With This Shortcode” changed to “GO”, “Pen has been successfully checked out and written with shortcode ABCD” changed to “Success,” etc. This makes it easier for staff to know, quickly, that the process has worked and they can move on to the next ticket/pen/customer.

“I didn’t have time to write a short letter, so I wrote a long one instead.”
—attributed to Mark Twain

  • More accurate words. Our team uses a different vernacular from the people working at the desk. This is normal, since we don’t work together often, and like any neighboring tribes, we’ve developed subtly different words for different things. Since this app is used by desk staff, I wanted it to reflect their language, not ours. For example, “Pair” is what they call “check-out” and “Return” is what they call “check-in.”
  • Better visual hierarchy: The original app had many competing horizontal bands of content, with no clear visual clue as to which band needed the operator’s attention at any given time. We used white space, color (green/yellow/red for go/wait/stop), and re-arranging of elements (less-used features to the bottom, more-used features to the top) to better direct the eye and make it clear to the user what she ought to be looking at.
  • Simple animations to help the user understand when the app is “working” and they should just wait.

Still to come are added features (bulk pairing, maintenance mode) and any ideas the desk reps might develop after a couple of weeks of using the new version.

Imagine how difficult this process would have been if the museum had outsourced all of its design and programming work, or if it were all encased in a proprietary system.

Understanding how the Pen interacts with the API

Detail of instructional postcard now available to museum visitors at entry to accompany The Pen.

The Pen has been up and running now for five weeks and the museum as a whole has been coming to terms with exactly what that means. Some things can be planned for, others can be hedged against, but inevitably there will be surprises – pleasant and unpleasant. We can report that our expectations of usage have been far exceeded, with extremely high take-up rates, over 400,000 ‘acts of collection’ (saving museum objects with the Pen), and a great post-visit log-in rate.

The Pen touches almost every operation of the museum – even though the museum was able to operate completely without it from our opening in December until March. At its most simple, object labels need NFC tags, which in turn need up-to-the-minute location information entered into our collection management system (TMS); the ticketing system needs a constant connection not only to its own servers but also to our API functions that create unique shortcodes for each visitor’s visit; and the Pens need regular cleaning and a monthly battery change. So everyone in the museum has been continuously improving and altering backend systems, improving workflows, and even the front-end UI on tablets that the ticket staff use to pair Pens with tickets.

It’s complex.

Katie drew up (another) useful diagram of the journey of a Pen through a visit and how it interacts with our API.

Single visit ‘lifecycle’ of The Pen. Illustration by Katie Shelly, 2015. [click to enlarge]

Even more details of the overall system design and development saga can be found in the (long) Museums and the Web 2015 paper by Chan & Cope.

The digital experience at Cooper Hewitt is supported by Bloomberg Philanthropies. The Pen is the result of a collaboration between Cooper Hewitt, SistelNetworks, GE, MakeSimply, Undercurrent, and an original concept by Local Projects with Diller Scofidio + Renfro.

A new feature you may never see – ticketing follow up emails

A few weeks ago we rolled out a small update to the ticketing website that sends a reminder email to anyone who has purchased tickets in advance.

This is sort of an experiment. But first, some background.

Recently I attended an event at another cultural institution ( which I won’t name ). A few days prior to the event I received a reminder email heavy on the up-sell, reminding me of membership discounts and other events I might like. The links within the email took me to the landing page for the event, and offered little information I actually found useful, unless I wanted to buy even more tickets, or combine my purchase with a book in the shop.

To make things worse, on the night of the event, and just as it was finishing up, I received a “follow up email”, which I found really annoying. It was literally timed to send exactly as the event was ending and while I was on my way out the door, as if to say, “wait, come back in and buy the book too!”

In fact, the subject line read “How did I enjoy [insert event title here]?” But, the email itself didn’t offer me a way to answer that question ( even if I wanted to ) and instead simply pointed me to the same landing page of the event I had just attended, along with links to their social media channels and other upcoming shows I might be interested in. The whole thing made me cringe a little as I pressed the delete button on my phone.

I thought to myself, “let’s not do that.”

I really just wanted to send a gentle reminder email, full of actually useful info, to people who were planning to come visit the museum. I thought it would be nice, had I booked tickets in advance, to get something like this the day before my visit. Something with a map and some info on how to get here, and potentially a little synopsis of what I might do once I arrived.

So, here was my thought process.

Like I said, it’s an experiment, and so I’m still just sort of beta-testing this feature, and trying to analyze how useful/annoying people find it. We already get so many emails, so I really wanted to make sure I wasn’t bombarding our visitors with additional garbage, or even worse, confusing them with unneeded information like what I’d recently experienced.

First of all, it would be all about timing. While talking out loud in the Labs about this one, Aaron’s comment was simply “time zones.” Computers have time zones ( all of ours are set to UTC ), and people are in time zones. It was clearly something to consider.

Right now, we can only assume that you will be here sometime during our open hours on the day you purchased the ticket. We don’t presently do timed tickets, and unlike an event space, each day’s “performance” spans the entirety of our hours.

So we decided to try out sending the reminders the day before at 4pm, our time. I guess it’s generally safe to say that visitors are nearing our time zone the day before their visit, but it’s really still a best guess. Also, we are not going to “remind you” if you are booking for the same day, as that’s probably overkill. So for now, 4pm the day before your visit is when the email goes out.

Next I had some fun coming up with a way to extract all the relevant info from our Ticketing CRM ( Tessitura ).

I needed the following info:

  • All the things going on tomorrow ( this is sort of future proofing for when we let you book other things beyond general admission )
  • All the current orders for all the things going on tomorrow.
  • All the email addresses for all the orders for all the things going on tomorrow.

Getting “tomorrow” was pretty easy in PHP.

$datetime = new DateTime('tomorrow');
$tomorrow = $datetime->format('Y-m-d');

4pm EST is 9pm UTC on the same day, so all good there in calculating “tomorrow.”

Our Tessitura API wrapper I mentioned in my last post has a method that lets us get all the “performances” in Tessitura for a given date range. Simply passing it “tomorrow” yields us all the things we are looking for. We also have a method that can get all the orders placed for a given performance. Finally, we have a method that gets the email for the user that placed the order.

Now we can send the actual email.

Obviously, people place multiple orders and buy multiple tickets per order. I really only want to send one email regardless of what you’ve booked. So when I am looping through all the orders, I only add the email address to the list once.
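Pulled together, the gathering step looks something like this ( a sketch only; the methods on $api are stand-ins, not our wrapper’s actual interface, and send_reminder() is the hypothetical bit that hands the template to Mandrill ):

$emails = array();

// all the things going on tomorrow
foreach ($api->get_performances($tomorrow) as $performance) {

    // all the orders for each of those things
    foreach ($api->get_orders($performance['id']) as $order) {

        // one entry per address, no matter how many orders or tickets
        $email = $api->get_email_for_order($order['id']);
        $emails[$email] = true;
    }
}

// send each person a single reminder
foreach (array_keys($emails) as $email) {
    send_reminder($email);
}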

The last step was to make a cron job that runs this script once a day at 4pm. Done!
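For the record, the crontab entry is a one-liner along these lines ( the script path is made up ):

0 16 * * * php /path/to/ticket-reminder.php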

( Incidentally, all of our servers are set to UTC, but for some reason our RedHat server’s crontab doesn’t seem to care, and somehow ( possibly magically ) thinks it’s on EST. I have yet to figure out why this is, but for now I am just going with it. )

Right now, the email is a fixed template. We are sending out emails via Mandrill, so we get some decent analytics and can track open rates, and click rates. We also added Google Analytics tracking codes to all the links in the email so we can see what people are clicking right in GA.

So far we’ve experienced an open rate of about 75% and a click rate of about 20%, which seems pretty good to me.

Our open and click rate for the last 30 days.

And here are the GA results for the “Ticket Reminder” campaign from the same time period. From here you can dive deeper into the analytics to see what pages people are heading to once they are on the site, and all sorts of other metrics.


Since you can only really “see” this feature if you book an advance ticket I’ve posted an image of what the email looks like below. We went through a few design iterations to get it to look the way it looks, and I’d really love to hear your thoughts about it. If you were visiting us, and received this email the day before your visit at 4pm, would you find it useful, annoying, or confusing?


Our reminder email template

Of course this will change again when the Pen goes live shortly.

How re-opening the museum enhanced our online collection: new views, new API methods

At the backend of our museum’s new interactive experiences lies our API, which is responsible for providing the frontend with all the data necessary to flesh out the experience. From everyday information like an object’s title to more novel features such as tags, videos and people relationships, the API gathers and organizes everything that you see on our digital tables before it gets displayed.

In order to meet the needs of the experiences designed for us by Local Projects on our interactive tables, we added a lot of new data to the API. Some of it was already sitting there and we just had to go find it; other aspects we had to generate anew.

Either way, this marks a huge step towards a more complete and meaningful representation of our collection on the internet.

Today, we’re happy to announce that all of this newly-gathered data is live on our website and is also publicly available over the API (head to the API methods documentation to see more about that if you’re interested in playing with it programmatically).


For the Hewitt Sisters Collect exhibition, Local Projects designed a front-end experience for the multitouch tables that highlights the early donors to the museum’s collection and how they were connected to each other. Our in-house “TMS liaison”, Sara Rubinow, worked to gather and structure this information before adding it to TMS, our collection management system, as “constituent associations”. From there I extracted the structured data to add to our website.

We created the following new views on the web frontend to house this data:

We also added a few new biography-related fields: portraits or photographs of Hewitt Sisters people, and two new biographies, one of 75 words and the other of 50 characters. These changes are viewable on applicable people pages (e.g. Eleanor Garnier Hewitt) and the search results page.

The overall effect of this is to make more use of this ‘people-related’ data, and to encourage its further expansion over time. We can already imagine a future where other interfaces examine and reveal the network of relationships behind the people in our collection.

Object Locations and Things On Display

Some of the more difficult tasks in updating our backend to meet the new requirements related to dealing with objects no longer being static – but moving on and off display. As far as the website was concerned, it was a luxury in our three years of renovation that objects weren’t moving around a whole lot because it meant we didn’t have to prioritize the writing of code to handle their movement.

But now that we are open, we need to better distinguish the objects in storage from those that are on display. More importantly, if an object is on display, we also need to say in which exhibition, and in which room, it is on display.

Object locations have a lot of moving parts in TMS, and I won’t get into the specifics here. In brief, object movements from location to location are stored chronologically in a database. The “movement” is its own row that references where the object moved and why it moved there. By appropriately querying this history we can say which objects have ever been in the galleries (like all museums, we have a large portion of objects that have never been part of an exhibition) and which objects are there right now.
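To make that concrete, here is a rough sketch of the “what is on display right now” query. To be clear, the table and column names below are invented stand-ins for illustration, not TMS’s actual schema:

// Sketch only: invented schema, not TMS's. For each object, grab its
// most recent movement and keep it if it landed in a gallery location.
$sql = "
    SELECT m.object_id, m.location_id
    FROM movements m
    JOIN ( SELECT object_id, MAX(moved_on) AS last_moved
           FROM movements GROUP BY object_id ) latest
      ON m.object_id = latest.object_id
     AND m.moved_on = latest.last_moved
    WHERE m.location_id IN ( SELECT id FROM locations WHERE is_gallery = 1 )
";
$on_display = $db->query($sql);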

We created the following views to house this information:


The additions we’ve made to exhibitions are:

There is still some work to be done with exhibitions. This includes figuring out a way to handle object rotations (the process of swapping out some objects mid-exhibition) and outgoing loans (the process of lending objects to other institutions for their exhibitions). We’re expecting that objects on loan should say where they are, and in which external exhibition they are part of — creating a valuable public ‘trail’ of where an object has traveled over its life.


Over the summer, we began an ongoing effort to ‘tag’ all the objects that would appear on the multitouch tables. This includes everything on display, plus about 3,000 objects related to those. The express purpose for tags was to provide a simple, curated browsing experience on the interactive tables, loosely based around the themes ‘user’ and ‘motif’. Importantly, these are not unstructured, and Sara Rubinow did a great job normalizing them where possible, but there haven’t been enough exhibitions yet to release a public thesaurus of tags.

We also added tags to the physical object labels to help visitors draw their own connections between our objects as they scan the exhibitions.

On the website, we’ve added tags in a few places:

That’s it for now – happy exploring! I’ll follow up with more new features once we’re able to make the associated data public.

Until then, our complete list of API methods is available here.

Our new ticketing website


This past week we launched our new online ticketing system, which leverages our Constituent Relationship Management application, Tessitura, as its “source of truth.”

It’s a simple application, really. It lets you pick the day you want to come visit us, select the kind of tickets you want to buy, and then you fill out your basic info, plug in your credit card digits and off you go. Moments later you receive an email with PDF versions of your tickets attached.

On the user-facing side of things, it is designed to be as simple as possible. You don’t need to log in, there is no “shopping cart”, and above all, you can do all of this from your phone if you want to skip ahead of the lines this Winter on your first visit back since we closed nearly three years ago.

A little background

The idea to pre-book tickets online came at us from a number of directions. Some time last year we decided to invest in Tessitura to handle all of our CRM needs. Tessitura, if you have never heard of it before, is an enterprise-class battleship that grew out of the Met Opera House and has made its way around the performing arts sector. It’s a great tool if you are looking to centralize everything there is to know about a Constituent. As a museum, it is also appealing in that it does many of the things that non-profit cultural institutions need to do out of the box.

So, Tessitura. It is now a thing at our museum. Everyone on our staff started ramping up on the software and getting settled into the idea of using Tessitura for one thing or another. Our department began to get requests.

Obviously our membership department would like to use Tessitura to sell and manage memberships. Development would like to use it to manage and collect donations and gifts. Education would like to centralize all public programs, book tours, manage special events, and all of the other crazy things they do. And did I mention we have a museum that sells tickets?

This is how it always starts. The avalanche of ideas, whiteboard sessions, product demos and gentle emails that say things like “When will Tessitura be ready?” begin to pour in. You have to soak it all in and then wring it all out.

The Simplest, Dumbest Thing

Aaron says that quite often. “Just do the simplest, dumbest thing…” and he’s right. Oftentimes you have to boil things down a bit to get to the real core issues at hand. It was clear from the start that this would be an essential part of the “design process” on this project.

So, I started out by asking myself this question: “What is the most basic thing we want to do with Tessitura?”

I wound up with two clear answers.

  1. We wanted to be able to sell tickets online. Just basic, general admission tickets. Nothing fancy yet.
  2. We eventually want to use Tessitura as our identity provider, and as a way to pair your ticket with the Pen. More on that towards the end…

Tackling Tickets

So to get started, I thought about the challenge of selling a ticket online. I looked at other sites I liked such as StubHub, EventBrite, and other venue websites that I knew used Tessitura like BAM, Jazz at Lincoln Center, and the 92Y. I did some research, I bought some tickets, and I asked all my friends who used these sites what they liked and disliked. Eventually I started to find my way gravitating towards the Eventbrite way of doing things. We have been using Eventbrite for a couple of years here at Cooper Hewitt, for the most part as a way to sell tickets to education programs and events.

To tell you the truth, Eventbrite has been a dream come true for us, and our sales for these events, both paid and free, have been very good. So, what is Eventbrite doing right? Simply put, they’ve made the process of purchasing a ticket to an event online stupidly simple.

I wanted to know more. So, I spent some time and slowly walked myself through the process of booking all kinds of things on Eventbrite. I tried to step through each page in the process, I tried to notice what kind of user feedback I got, and what sort of emails and notifications I got. I tried the same on mobile devices and through their iPhone app. Here are a few takeaways.

  1. You don’t need to register in order to purchase a ticket on Eventbrite.
  2. If you don’t register when making an initial purchase, you can register later and see your purchase history.
  3. As soon as you book your tickets, you get them in an email.
  4. The Thank You page is just as useful when you are logged out as when you are logged in.
  5. Most importantly, you can only buy one thing at a time. In other words, there is no idea of a Shopping Cart.

That last one was pretty huge. Most eCommerce sites are built around the idea that users put items in a cart and then “Checkout.” Eventbrite doesn’t do it this way. Instead, you simply pick the thing you are wanting to attend, select the kinds of tickets you want ( student, senior, etc ) and then put in your credit card info. Once you hit submit, you’ve paid for your tickets and your transaction is complete.

I felt this flow was incredibly powerful and probably one of the reasons Eventbrite was working so well for our education programs. There are simply fewer chances to change your mind, less confusion over what you are buying, and the end-to-end process of picking something out and paying for it is just so much shorter than the more traditional shopping cart experience.

I began to think of it kind of like the difference between getting your weekly groceries and just picking up a six pack. The behaviors are totally different because you are trying to accomplish two totally different tasks. One is very routine, requires a little creativity and some patience, and a willingness to wander around and “pick.” The other is a strategic strike, designed to get in and get out so you can get home and relax with a nice cold one.

The Eventbrite concept seemed like what we wanted. I had my simplest dumbest thing, and something to model it on.

Technical Challenges

With every new project comes some kind of technical challenge. Tessitura is a “new to us” application and our staff at Cooper Hewitt were clearly at the bottom left of a steep learning curve when we started the project. We also had many challenges we knew we were going to have to face because we are a “Governmental Institution.” So things like PCI compliance, complex network configurations, and security scans were all things I was going to need to learn about.

Tessitura comes with two APIs. One is a somewhat older ( as in the first thing they built ) SOAP API, and the other is a newer ( as in still under development ) REST API. Both allow data to get in and out of Tessitura in a variety of forms.

In addition to the standard SOAP and REST APIs, Tessitura has the facility to expose just about anything you can build into an MS SQL Stored Procedure through its API. This is an incredibly powerful feature, which can also be quite dangerous if you think about it.

When I attended the Tessitura Learning Conference & Convention this past summer, it became clear to me that many institutions that use Tessitura are building some kind of API wrapper, or some type of middleware that helps them make sense of it all. We chose to do the same. To accomplish this, I chose to model the API wrapper off our own Collections API, which is a REST-ish API based on Flamework, and uses oAuth2 for authentication. Having this API wrapper allows us to all speak the same language and use the same interface. It is also very, very similar to the Collections API, so among our own staff, it is pretty easy to navigate. The API wrapper, wraps methods from both the Tessitura SOAP and the REST APIs and presents a unified interface to both of them. It doesn’t implement every single API method, and it exposes “new” methods that we have custom built via those Stored Procedures I mentioned.
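To give a flavor of what the wrapper buys us, a call through it might look something like this ( illustrative only; the class and method names are stand-ins, and under the hood the wrapper decides whether the SOAP API, the REST API or a stored procedure answers the question ):

// stand-in names, for illustration only
$api = new TessituraWrapper($oauth2_token);

$rsp = $api->call('performances.getList', array(
    'date_from' => '2014-12-17',
    'date_to'   => '2014-12-17',
));

The nice part is that code like the Tickets website never needs to know which of the underlying APIs did the work.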

The Tickets website is a separate project that talks directly to the API. It is also a Flamework project, written primarily in PHP. It uses a MySQL database to store a small amount of local data, but for the most part it is making calls to the API wrapper, which in turn is making calls to either the SOAP or REST Tessitura APIs. Tessitura is the source of truth for most of the things the Ticketing website does.

The Front End


The Tickets site from the user’s perspective is designed to be extremely simple. I worked with Sam ( our in-house front-end guru ) to build a responsive, simple web application that does basically one thing, but the devil is always in the details.

At first glance, all the site does is allow you to select the day you want to come visit us, pick out what kinds of tickets you want, and then fill out your billing info and receive your tickets. It’s basically a calendar and a form and not much more. But like I said, the devil is in the details.

First, Sam built a beautiful calendar like one I’d never seen before. We talked at length about how dumb most website calendars were, and we tried to push things in a new direction. Our calendar starts out by showing you what you are most likely looking for: today. It displays the next bunch of days, up to two weeks’ worth by default, and if you are looking for a special date, it lets you drop down and navigate around until you find it. On mobile it’s slightly different in that it doesn’t show you any past dates ( why would you want to book the past? ) and it limits things a little so you’re not as overwhelmed by the interface. We call this “designing for context” and we thought that users might be using their mobile phones to buy a ticket online and jump up to the front of the queue.

Once you’ve selected the date you want, the app loads up the available tickets right below the calendar. You can easily change your mind and pick a different date. From here you just select the type and quantity of each ticket you want. Sam’s code does a bunch of front-end validations to make sure everything you are trying to do makes sense ( you can only purchase a youth ticket with another paid ticket for example ). Between the two of us, we try to do as much validation to what you are selecting as possible, both in Javascript on the front-end and in PHP on the back.
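Server-side, the same rules get re-checked before we ever talk to Tessitura. Here is a simplified sketch of the youth-ticket rule mentioned above ( the ticket type names are stand-ins for illustration ):

// Sketch of one back-end validation rule: a youth ticket is only
// valid alongside at least one other paid ticket.
function validate_selection(array $quantities) {

    $youth = isset($quantities['youth']) ? $quantities['youth'] : 0;

    $paid = 0;
    foreach (array('adult', 'senior', 'student') as $type) {
        if (isset($quantities[$type])) {
            $paid += $quantities[$type];
        }
    }

    if (($youth > 0) && ($paid == 0)) {
        return 'Youth tickets can only be purchased with another paid ticket.';
    }

    return null; // all good
}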

Once you hit Order Now, an order form is generated and displayed. I think it’s important to note here that nothing has really happened yet in Tessitura-land. We asked Tessitura for some details about the tickets you are interested in, but we haven’t “added them to your cart” or anything like that as of yet.


You can then fill out your vitals. We ask you to give us your name, email, credit card details and billing address. We store all of this, with the exception of your credit card, in Tessitura. We make you an account, and at this point we send you an activation email which allows you to set up your password at your leisure. If all goes well with your credit card, we build your tickets ( I chose to do all this with FPDF rather than try and use Tessitura’s built in Print at Home server thing ) send them in an email, and then take you to a Thank You Page. You never had to register, or log in, and you technically never do. Your PDF tickets arrive in your inbox and that is technically all you need.
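For the curious, FPDF makes the ticket-building step pleasantly boring. A stripped-down sketch might look like the following ( the layout and field names are illustrative; the real tickets carry a barcode image and a lot more metadata ):

// Sketch only: build a print-at-home ticket and return the PDF as a
// string, ready to be attached to the confirmation email.
require('fpdf.php');

function build_ticket($order) {

    $pdf = new FPDF();
    $pdf->AddPage();

    $pdf->SetFont('Arial', 'B', 16);
    $pdf->Cell(0, 10, 'Cooper Hewitt - General Admission', 0, 1);

    $pdf->SetFont('Arial', '', 12);
    $pdf->Cell(0, 10, 'Date: ' . $order['date'], 0, 1);
    $pdf->Cell(0, 10, 'Order: ' . $order['id'], 0, 1);

    // pre-rendered barcode image, for scanning at the door
    $pdf->Image($order['barcode_png'], 10, 50, 60);

    return $pdf->Output('S');
}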


As a little bonus, we just stick the barcode and some basic metadata about each ticket you bought on the Thank You Page so you can just present your phone at the door. This part is still a little rough and I chose to leave it that way for the time being so we can do some user research in the galleries. It’s a nice feature, but only time will tell if people actually use it or if it needs some finesse.


Now that you are in the system, you can buy more tickets using the same email address and they will be connected to your same account, even if you still have never logged in. If you do choose to activate your account and login, you can look at your order history, and reprint your tickets if you’ve lost your email copy.


Tessitura & The Pen

A while back in this blog post, I mentioned that we also wanted to use Tessitura as our identity provider and as a way to pair the ticket you’ve purchased with the pen we’ve handed you. This work is nearly done, but not yet in production. It will go live when our pen is available sometime in early 2015. But, the short story is, when you buy a ticket, either online or in person, we generate a special coded version of your ticket. This code gets paired up with the internal ID of the pen we gave you and that pairing gets stored in a database. What this all amounts to is that when you get home, and you want to see all the cool things you’ve done with your pen, you simply enter the code ( or go to a custom short URL ) on our website. We look up your pairing and are able to connect your Identity ( Tessitura ) with your Visit ( on the Collections site ). But that is all the topic of a future series of blog posts.


Now that the Tickets site is up and running, and we are watching the sales roll in, it’s easy to start thinking of more features and new ways to expand what the site can do. I’ve already started building simple admin tools and have been thinking about building a basic check-in app for off-site events. It’s too early to talk about all of the things we aim to do and how we plan to expand our online sales, but I’m hopeful that we will stay focused and narrow in our approach, offering our users the most elegant visitor experience possible. Or at the very least, the simplest, dumbest thing.

API methods (new and old) to reflect reality


A quick end-of-week blog post to mention that now that the museum has re-opened we have updated the cooperhewitt.galleries.openingHours and cooperhewitt.galleries.isOpen API methods to reflect… well, reality.

In addition to the cooperhewitt.galleries API methods we’ve also published corresponding openingHours and isOpen methods for the cafe!

For example, cooperhewitt.galleries.isOpen.

curl '***'

	"open": 0,
	"holiday": 0,
	"hours": {
		"open": "10:00",
		"close": "18:00"
	"time": "18:01",
	"timezone": "America/New_York",
	"stat": "ok"


curl -X GET '***'

	"hours": {
		"Sunday": {
			"open": "07:30",
			"close": "18:00"
		"Monday": {
			"open": "07:30",
			"close": "18:00"
		"Tuesday": {
			"open": "07:30",
			"close": "18:00"
		"Wednesday": {
			"open": "07:30",
			"close": "18:00"
		"Thursday": {
			"open": "07:30",
			"close": "18:00"
		"Friday": {
			"open": "07:30",
			"close": "18:00"
		"Saturday": {
			"open": "07:30",
			"close": "21:00"
	"timezone": "America/New_York",
	"stat": "ok"

Because coffee, right?

Sharing our videos, forever

This is one in a series of Labs blog posts exploring the in-house-built technologies and tools that enable everything you see in our galleries.

Our galleries and Pen experience are driven by the idea that everything a visitor can see or do in the museum itself should be accessible later on.

Part of getting the collections site and API (which drives all the interfaces in the galleries designed by Local Projects) ready for reopening has involved the gathering and, in some cases, generation of data to display with our exhibits and on our new interactive tables. In the coming weeks, I’ll be playing blogger catch-up and will write about these new features. Today, I’ll start with videos.

jazz hands

Besides the dozens of videos produced in-house by Katie – such as the amazing Design Dictionary series – we have other videos relating to people, objects and exhibitions in the museum. Currently, these are all streamed on our YouTube channel. While this made hosting much easier, it meant that videos were not easily related to the rest of our collection and therefore much harder to find. In the past, there were also many videos that we simply didn’t have the rights to show after their related exhibition had ended, and all the research and work that went into producing the video was lost to anyone who missed it in the gallery. A large part of this effort was ensuring that we have the rights to keep these videos public, and so we are immensely grateful to Matthew Kennedy, who handles all our image rights, for doing that hard work.

A few months ago, we began the process of adding videos and their metadata into our collections website and API. As a result, when you take a look at our page for Tokujin Yoshioka’s Honey-Pop chair, below the object metadata, you can see its related video in which our curators and conservators discuss its unique qualities. Similarly, when you visit our page for our former director, the late Bill Moggridge, you can see two videos featuring him, which in turn link to their own exhibitions and objects. Or, if you’d prefer, you can just see all of our videos here.

In addition to its inclusion in the website, video data is also now available over our API. When calling an API method for an object, person or exhibition from our collection, paths to the various video sizes, formats and subtitle files are returned. Here’s an example response for one of Bill’s two videos:

  "id": "68764297",
  "youtube_url": "",
  "title": "Bill Moggridge on Interaction Design",
  "description": "Bill Moggridge, industrial designer and co-founder of IDEO, talks about the advent of interaction design.",
  "formats": {
    "mp4": {
      "1080": "",
      "1080_subtitled": "",
      "720": "",
      "720_subtitled": ""
  "srt": ""

The first step in accomplishing this was to process the videos into all the formats we would need. To facilitate this task, I built VidSmanger, which processes source videos of multiple sizes and formats into consistent, predictable derivative versions. At its core, VidSmanger is a wrapper around ffmpeg, an open-source multimedia encoding program. As its input, VidSmanger takes a folder of source videos and, optionally, a folder of SRT subtitle files. It outputs various sizes (currently 1280×720 and 1920×1080), various formats (currently only mp4, though any ffmpeg-supported codec will work), and will bake-in subtitles for in-gallery display. It gives all of these derivative versions predictable names that we will use when constructing the API response.
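Each of those derivatives boils down to an ffmpeg invocation along these lines ( abridged; the real script sets more encoding options than this ):

$> ffmpeg -i source.mov -vf scale=-2:720 -c:v libx264 video_720.mp4
$> ffmpeg -i source.mov -vf "scale=-2:720,subtitles=source.srt" -c:v libx264 video_720_subtitled.mp4

The second command uses ffmpeg’s subtitles filter to burn the SRT file into the frames for in-gallery display.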

a flowchart showing two icons passing through an arrow that says "vidsmang" and resulting in four icons

Because VidSmanger is a shell script composed mostly of simple command line commands, it is easily augmented. We hope to add animated gif generation for our thumbnail images and automatic S3 uploading into the process soon. Here’s a proof-of-concept gif generated over the command line using these instructions. We could easily add the appropriate commands into VidSmanger so these get made for every video.


For now, VidSmanger is open-source and available on our GitHub page! To use it, first clone the repo and then run:


This will initialize the folder structure and install any dependencies (homebrew and ffmpeg). Then add all your videos to the source-to-encode folder and run:


Now you’re smanging!

HTTP ponies

Most of the image processing for the collections website is done using the Python programming language. This includes things like: extracting colours or calculating an image’s entropy (its “busy-ness”) or generating those small halftone versions of images that you might see while you wait for a larger image to load.

Soon we hope to start doing some more sophisticated computer vision related work which will almost certainly mean using the OpenCV tool chain. This likely means that we’ll continue to use Python because it has easy to use and easy to install bindings to hide most of the fiddly bits required to look at images with “robot eyes”.


The collections website itself is not written in Python and that’s okay. There are lots of ways for different languages to hold hands inside of a single “application” and we’ve used many of them. But we also think that most of these little pieces of functionality are useful in and of themselves and shouldn’t require that a person (including us) have to burden themselves with the minutiae of the collections website infrastructure to use them.

We’ve slowly been taking the various bits of code we’ve written over the years and putting them in to discrete libraries that can then be wrapped up in little standalone HTTP “pony” or “plumbing” servers. This idea of exposing bespoke pieces of functionality via a web server is hardly new. Dave Winer has been talking about “fractional horsepower HTTP servers” since 1997. What’s changed between then and now is that it’s more fun to say “HTTP pony” and it’s much easier to bake a little web server into an application because HTTP has become the lingua franca of the internet and that means almost every programming language in use today knows how to “speak” it.

In practice we end up with a “stack” of individual pieces that looks something like this:

  1. Other people’s code that usually does all the heavy-lifting. An example of this might be Giv Parvaneh’s RoyGBiv library for extracting colours from images or Mike Migurski’s Atkinson library for dithering images.
  2. A variety of cooperhewitt.* libraries to hide the details of other people’s code.
  • The cooperhewitt.flask.http_pony library which exports a set of helper utilities for running Flask-based HTTP servers. Things like: doing a minimum amount of sanity checking for uploads and filenames or handling (common) server configuration details.
  4. A variety of plumbing-SOMETHING-server HTTP servers which export functionality via HTTP GET and POST requests. For example: plumbing-atkinson-server, plumbing-palette-server and so on.
  5. Flask, a self-described “micro-framework” which is what handles all the details of the HTTP call and response life cycle.
  • Optionally, a WSGI-compliant server-container-thing-y for managing requests to a number of Flask instances. Personally we like gunicorn but there are many to choose from.

Here is a not-really-but-treat-it-like-pseudo-code-anyway example without any error handling for the sake of brevity of a so-called “plumbing” server:

# Let's pretend this file is called 'example-server.py'.

import flask
from flask_cors import cross_origin
import cooperhewitt.example.code as code
import cooperhewitt.flask.http_pony as http_pony

app = http_pony.setup_flask_app('EXAMPLE_SERVER')

@app.route('/', methods=['GET', 'POST'])
@cross_origin(methods=['GET', 'POST'])
def do_something():

    if flask.request.method == 'POST':
        # POST requests are file uploads, so stash the file locally
        path = http_pony.get_upload_path(app)
    else:
        # otherwise assume we've been handed the path to a local file
        path = http_pony.get_local_path(app)

    rsp = code.do_something(path)
    return flask.jsonify(rsp)

if __name__ == '__main__':
    # the original elided the startup call; plain Flask works for testing
    app.run()
So then if we just wanted to let Flask take care of handling HTTP requests we would start the server like this:

$> python example-server.py -c example-server.cfg

And then we might talk to it like this:

$> curl -X POST -F 'file=@/path/to/file' http://localhost:5000

Or from the programming language of our choosing:

function example_do_something($path){
        $url = "http://localhost:5000";
        $file = curl_file_create($path);
        $body = array('file' => $file);
        $rsp = http_post($url, $body);
        return $rsp;
}
Notice the way that all the requests are being sent to localhost? We don’t expose any of these servers to the public internet or even between different machines on the same network. But we like having the flexibility to do that if necessary.

Finally if we just need to do something natively or want to write a simple command-line tool we can ignore all the HTTP stuff and do this:

$> python
>>> import cooperhewitt.example.code as code
>>> code.do_something("/path/to/file")

Which is a nice separation of concerns. It doesn’t mean that programs write themselves but they probably shouldn’t anyway.

If you think about things in terms of bricks and mortar you start to notice that there is a bad habit in (software) engineering culture of trying to standardize the latter or to treat it as if, with enough care and forethought, it might achieve sentience.

That’s a thing we try to guard against. Bricks, in all their uniformity, are awesome but the point of a brick is to enable a multiplicity of outcomes so we prefer to leave those details, and the work they entail, to people rather than software libraries.


Most of this code has been open-sourced and hiding in plain sight for a while now but since we’re writing a blog post about it all, here is a list of related tools and libraries. These all fall into categories 2, 3 or 4 in the list above.

  • cooperhewitt.flask — Utility functions for writing Flask-based HTTP applications. The most important thing to remember about things in this category is that they are utility functions. They simply wrap some of the boilerplate tasks required to set up a Flask application but you will still need to take care of all the details.

Everything has a standard Python setup.py for installing all the required bits (and more importantly dependencies) in all the right places. Hopefully this will make it easier for us to break out little bits of awesomeness as free agents and share them with the world. The proof, as always, will be in the doing.


We’ve also released go-ucd, which is a set of libraries and tools written in Go for working with Unicode data. Or more specifically, for the time being, for looking up the corresponding ASCII name for a Unicode character ( they are not general-purpose Unicode tools yet ).

For example:

$> ucd 䍕


$> ucd THIS → WAY

There is, of course, a handy “pony” server (called ucd-server) for asking these questions over HTTP:

$> curl -X GET -s 'http://localhost:8080/?text=♕%20HAT' | python -mjson.tool
{
    "Chars": [
        {
            "Char": "\u2655",
            "Hex": "2655",
            "Name": "WHITE CHESS QUEEN"
        },
        {
            "Char": " ",
            "Hex": "0020",
            "Name": "SPACE"
        },
        {
            "Char": "H",
            "Hex": "0048",
            "Name": "LATIN CAPITAL LETTER H"
        },
        {
            "Char": "A",
            "Hex": "0041",
            "Name": "LATIN CAPITAL LETTER A"
        },
        {
            "Char": "T",
            "Hex": "0054",
            "Name": "LATIN CAPITAL LETTER T"
        }
    ]
}
This one, potentially, has a very real and practical use-case but it’s not something we’re quite ready to talk about yet. In the meantime, it’s a fun and hopefully useful tool so we thought we’d share it with you.

Note: There are equivalent libraries and an HTTP pony for ucd written in Python but they are incomplete compared to the Go version and may eventually be deprecated altogether.

Comments, suggestions and gentle clue-bats are welcome and encouraged. Enjoy!