Author Archives: Micah Walter

About Micah Walter

Micah is the Director of Digital & Emerging Media.

Parting gifts

Today is my last day at Cooper Hewitt, Smithsonian Design Museum. It’s been an incredible seven years. You can read all about my departure over on my personal blog if you are interested. I’ve got some exciting things on the horizon, so would love it if you’d follow me!

Parting gifts

Before I leave, I have been working on a couple of “parting gifts” which I’d like to talk a little about here. The first one is a new feature on our Collections website, which I’ve been working on in the margins with former Labs member, Aaron Straup Cope. He did most of the work, and has written extensively on the topic. I was mainly the facilitator in this story.

Zoomable Object Images

The feature, which I am calling “Zoomable Object Images” is live now and lets you zoom in on any object in our collection with a handy interface. Just go to any object page and add “/zoom” to the URL to try it out. It’s very much an experiment/prototype at this point, so things may not work as expected, and not ALL objects will work, but for the majority that do, I think it’s pretty darn cool!

Here’s an example, zoomed all the way in – https://collection.cooperhewitt.org/objects/18382347/zoom

Notice that there is also a handy camera button that lets you grab a snapshot of whatever you have currently showing in the window!

Click on the camera to download a snapshot of the view!

This project started out as a fork of some code developed around the relatively new IIIF standard that seems to be making some waves within the cultural sector. So, it’s not perfectly compliant with the standard at the moment (Aaron says “there are capabilities in the info.json files that don’t actually exist, because static files”), but we think it’s close enough.
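
If you’re not familiar with IIIF, the heart of its Image API is a little info.json file that describes what an image endpoint can do. Here’s a rough sketch of one for a static tile set — the URL, dimensions, and tile sizes below are invented for illustration:

    {
      "@context": "http://iiif.io/api/image/2/context.json",
      "@id": "https://images.collection.cooperhewitt.org/iiif/18382347",
      "protocol": "http://iiif.io/api/image",
      "width": 4096,
      "height": 3072,
      "tiles": [{"width": 256, "scaleFactors": [1, 2, 4, 8, 16]}],
      "profile": ["http://iiif.io/api/image/2/level0.json"]
    }

The “level 0” profile is the one intended for pre-generated static files, and that’s where the not-quite-compliance comes in: a pile of static tiles can only serve the exact sizes and regions it was baked with, regardless of what the info.json advertises.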

You can’t tell from this image, but you can pinch and zoom this on your mobile device.

To do the job of creating zoomable image tiles for over 200,000 objects we needed a few things. First, we built a very large EC2 server. We tried a few instance sizes and eventually landed on an m4.2xlarge which seemed to have enough CPU and RAM to get the job done. I also added a terabyte of storage to the instance so we’d have enough room to store all the images as we processed them.

Aaron wrote the code to download all of our high res image files and process them into tiles. The code is designed to run with multiple threads so we can get the most efficiency out of our big-giant-ec2 server as possible (otherwise we’d be waiting all year for the results). We found out after a few attempts that there was one major flaw in the whole operation. As we started to generate thousands and thousands of small files, we also started to run up against the limit of inodes available to us on our boot drive. I don’t know if we ever really figured this out, but it seemed that the OS was trying to index all those tiny files in some way that was causing the extra overhead. There is likely a very simple way to turn that off, but in the interest of getting the job done we decided to take an alternative route.

Instead of processing and saving the images locally and then transferring them all to our S3 storage bucket, Aaron rewrote the code so that it would upload the files as they were processed. S3 would be their final resting place in any case, so they needed to get there one way or another and this meant that we wouldn’t need so much storage during processing. We let the script run and after a few weeks or so, we had all our tiles, neatly organized and ready to go on S3.
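
Aaron’s actual tiling code is written in Go (it’s linked below), but the pattern is easy to sketch. Here’s a hedged Python version of the idea, with an invented bucket name and key layout, assuming each finished tile arrives as a Pillow image:

    import io

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "cooperhewitt-tiles"  # invented bucket name

    def upload_tile(tile, object_id, zoom, x, y):
        """Send a finished tile straight to S3 rather than writing it to
        disk, so millions of tiny files never pile up locally (and never
        exhaust the boot drive's inodes)."""
        buf = io.BytesIO()
        tile.save(buf, format="JPEG")  # `tile` is assumed to be a Pillow image
        buf.seek(0)
        key = "%s/%s/%s_%s.jpg" % (object_id, zoom, x, y)
        s3.upload_fileobj(buf, BUCKET, key)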

Last but not least, Aaron wrote some code that allowed me to plug it into our collections website, resulting in the /zoom pages that are now available. Woosh!

Check out the code here https://github.com/thisisaaronland/go-iiif – and dive into the tl;dr discussion over here if you’re into this kinda thing.

Cooper Hewitt in a box

The second little gift is around some work I’ve been doing to try and make developing and maintaining code at Cooper Hewitt a tiny bit better.

Since we started all this work we’ve been utilizing a server that sits on the third-floor (right next to Josue) called, affectionately, “Bill.” Bill (pictured above) has been our development machine for many years, and has also served as the server in charge of extracting all of our collection data and images from TMS and getting them published to the web. Bill is a pretty important piece of equipment.

The pros to doing things this way are pretty clear. We always have, at the ready, a clone of our entire technology stack available to use for development. All a developer needs to do is log in to Bill and get coding. Being within the Smithsonian network also means we get built-in security, so we don’t need to think about putting passwords in place or trying to hide in plain sight.

The cons are many.

For one, you need to be aware of each other. If more than one developer is working on the same file at the same time, bad things can happen. Sam sort of fixed this by creating individual user instances of the major web applications, but that requires a good bit of work each time you set up a new developer. Also, you are pretty much forced to use either Emacs or Vi. We’ve all grown to love Emacs over time, but it’s a pain if you’ve never used it before as it requires a good deal of muscle memory. Finally, you have to be sitting pretty close to Bill. You at least need to be on the internal network to access it easily, so remote work is not really possible.

To deal with all of this and make our development environment a little more developer friendly, I spent some time building a Vagrant machine to essentially clone Bill and make it portable.

Vagrant is a popular system amongst developers since it can easily replicate just about any production environment, and allows you to work locally on your MacBook from your favorite coffee shop. Usually, Vagrant is set up on a project-by-project basis, but since our tech stack has grown in complexity beyond a single project (I’ve had to chew on lots of server diagrams over the years), I chose to build more of a “workspace.”

I got the idea from Dan Phiffer at Mapzen who did the same for their Who’s on First project.

Essentially, there is a simple Vagrantfile that builds the machine to match our production server type, and then there is a setup.sh script that does the work of installing everything. Within each project repository there is also a /vagrant/setup.sh script that installs the correct components, customized a little for ease of use within a Vagrant instance. There is also a folder in each repo called /data with tools that are designed to fetch the most recent data-snapshots from each application so you can have a very recent clone of what’s in production. To make this as seamless as possible, I’ve also added nightly scripts to create “latest” snapshots of each database in a place where the Vagrant tools can easily download them.
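
I won’t reproduce the real scripts here, but the nightly snapshot job is conceptually just “dump each database and overwrite a well-known key on S3.” A minimal Python sketch, with invented database and bucket names:

    import subprocess

    import boto3

    DATABASES = ["collection", "pen"]   # invented database names
    BUCKET = "cooperhewitt-snapshots"   # invented bucket name

    s3 = boto3.client("s3")

    for db in DATABASES:
        dump_path = "/tmp/%s-latest.sql.gz" % db
        # mysqldump piped through gzip; credentials come from ~/.my.cnf
        with open(dump_path, "wb") as out:
            dump = subprocess.Popen(["mysqldump", db], stdout=subprocess.PIPE)
            subprocess.check_call(["gzip", "-c"], stdin=dump.stdout, stdout=out)
            dump.wait()
        # Overwriting the same "latest" key gives the Vagrant tools a
        # stable URL to fetch each morning.
        s3.upload_file(dump_path, BUCKET, "latest/%s.sql.gz" % db)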

This is all starting to sound very nerdy, so I’ll just sum it up by saying that now anyone who winds up working on Cooper Hewitt’s tech stack will have a pretty simple way to get started quickly, will be able to use their favorite code editor, and will be able to do so from just about anywhere. Woosh!

Lastly

Lastly, it’s been an incredible journey working here at Cooper Hewitt and being part of the “Labs.” We’ve done some amazing work and have tried to talk about it here as openly as possible–I hope that continues after I’m gone. Everyone who has been a part of the Labs in one way or another has meant a great deal to me. It’s a really exciting place to be.

There is a ton more work to do here at Cooper Hewitt Labs, and I am really excited to continue to watch this space and see what unfolds. You should too!

Thanks for everything.

-micah

Traveling our technology to the U.K.

Visitors to the London Design Biennale use our “clone” of the Wallpaper Immersion Room.

Recently, we launched a major initiative at the inaugural London Design Biennale at Somerset House. The installation was up from September 7th through the 27th and now that it has closed and the dust has settled, I thought I’d try and explain the details behind all the technology that went into making this project come alive.

Quite a while back, an invitation was extended to Cooper Hewitt to represent the United States in the London Design Biennale, an exhibition featuring 37 countries from around the world. Our initial idea was to spin up a clone of our very popular “Wallpaper Immersion Room” and hand out Cooper Hewitt Pens.

The idea of traveling our technology outside the walls of the Carnegie Mansion has been of great interest to the museum ever since we reopened our doors in 2014. The process of figuring out how to make our technology portable, and have it make sense in different environments and contexts was definitely a challenge we were up for, and this event seemed like the perfect candidate to put that idea through its paces.

So we started out gathering up the basic requirements and working through all that would be needed to make it all come together, including some very generous support from the Secretary of the Smithsonian and the Smithsonian National Board, Bloomberg Philanthropies, and Amita and Purnendu Chatterjee.

The short version is, this was a huge undertaking. But it all worked in the end, and visitors at the first-ever London Design Biennale were able to use Cooper Hewitt Pens to explore 100 wallpapers from our collection, create their own designs and save them. Plus, visitors could collect and save installations from other Biennale participants.

Thanks to a whole bunch of people, there's an Immersion Room in London @london_design_biennale #ldb16

A photo posted by Micah Walter (@micahwalter)

Thanks to a whole lot of people, there are @cooperhewitt pens in London @london_design_biennale #ldb16

A photo posted by Micah Walter (@micahwalter)

The long version is as follows.

An Immersion Room in England

First and foremost, we wanted to bring the Immersion Room over as our installation for the London Design Biennale. So, let’s break down what makes the Immersion Room what it is.

The original Immersion Room, designed by Cooper Hewitt and Local Projects, made its debut when the museum reopened in December 2014, following a major renovation. It is essentially an interactive experience where visitors can manipulate a digital interactive touch-table to browse our collection of wallpapers and view them at scale, in real time, via twin projectors mounted to the ceiling. Additionally, visitors can switch into design mode and create their own wallpapers; adjusting the scale, orientation, and positioning of a repeating pattern on the wall. This latter feature is arguably what makes the experience what it is. Visitors from all walks of life love spending time drawing in the Immersion Room, typically resulting in a selfie or two like the ones you see in the images below.

#cooperhewitt #londondesignbiennale

A photo posted by Sibel Yalcin (@sibellyalcin)

#londondesignbiennale #immersionroom #cooperhewitt #doodling

A photo posted by Helen (@helen3210)

What I’ve just described is essentially the minimum viable product for this entire effort. One interactive table, two ceiling-mounted projectors, a couple of computers, and a couple of walls to project on.

From bar napkin to fabrication–we've managed to clone the Immersion Room!

A photo posted by Micah Walter (@micahwalter)

The Immersion Room uses two separate computers, each running an application written in OpenFrameworks. There is the “projector app,” which manages what is displayed to the two projectors, and there is the “table app,” which manages what visitors see and interact with on the 55” Ideum table. The two apps communicate with each other over a local network, with the table app essentially instructing the projector app with what it should be displaying in real time.

Here is a basic diagram of how that all fits together.

Twin projector and computer setup for Wallpaper Immersion Room

Each application loads in content on startup. This is provided to the application by a giant json file that is managed by our Collections API and meant to be updated each night through a cron job. When the applications start up, they look at the json file and pull down any new or changed assets they might need.
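
The real applications are OpenFrameworks (C++), but the startup sync boils down to something like this sketch — the field names in the json are guesses:

    import json
    import os
    import urllib.request

    def sync_assets(manifest_path, asset_dir):
        """Walk the nightly JSON manifest and fetch anything that is new
        or has changed since the last run."""
        with open(manifest_path) as f:
            manifest = json.load(f)
        for wp in manifest["wallpapers"]:  # hypothetical field names throughout
            local = os.path.join(asset_dir, wp["filename"])
            if not os.path.exists(local) or wp["modified"] > os.path.getmtime(local):
                urllib.request.urlretrieve(wp["url"], local)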

At Cooper Hewitt, this means that our curators are able to update content whenever they want using our collections management system, The Museum System (TMS). Updates they make in TMS get reflected on the digital table following a data-deploy and reboot of the table and projector applications. This is essentially the workflow at Cooper Hewitt. Curators fill in object data in TMS, and through a series of tubes, that data eventually finds its way to the interactive tables and our collections website and API. For this project in London, we’d do essentially the same process, with a few caveats.

Make it do all the things

We started asking ourselves a number of questions. It’s a mix of feature creep and a strong desire to put some of the technology we’ve built through its paces–to determine if it’s possible to recontextualize much of what we’ve created at Cooper Hewitt and have it work outside the museum walls.

Questions like:

  • What if we want to allow visitors to save the wallpapers and the designs they create?
  • What if we wanted to hand out a Cooper Hewitt Pen to each visitor?
  • What if we want to let people use the Pen to save their creations, wallpapers, and ALL the other installations around Somerset House?!

All of a sudden, the project becomes a bit more complicated, but still, a great way to figure out how we would translate a ton of the technology we’ve built at Cooper Hewitt into something useful for the rest of the world. We had loads of other ideas, features, and add-ons, but at some point you have to decide what falls in and out of scope.

Unpacking 700 Cooper Hewitt Pens we shipped to the U.K., batteries not included!

So this is what we decided to do.

  • We would devise a way to construct the physical build out of a second Immersion Room. This would essentially be a “set” with walls and a truss system for suspending two rather heavy projectors. It would have a floor, and would be slightly off the ground so we could conceal wiring and create a place for the 55” touch table to rest.
  • We’d pre-fabricate the entire rig in New York and ship it to London to be assembled onsite.
  • We’d enable the Immersion Room to allow visitors to save from a selection of 101 wallpapers from our permanent collection. These would be curated for the Utopia theme of the London Design Biennale.
  • We’d enable the design feature of the Immersion Room and allow visitors to save their designs.
  • We’d hand out Cooper Hewitt Pens to each visitor who wanted one, along with a printed receipt containing a URL and a unique code.
  • We’d post coded NFC tags all throughout Somerset House to allow visitors to use their Pens to collect information about each participating country, including our own.
  • We’d build a bespoke website where visitors would go following their visit to see all the things they’ve collected or created.

These are all of the things we decided to do from a technology standpoint. Here is how we did it.

pen-www

The first step to making this all work was to extract the relevant code from our production collections website and API. We named this “pen-www” and intended that this codebase serve as a mini framework for developing a collecting system and website. In essence it’s simply a web application (written in PHP) and a REST API (also PHP). It really needed to be “just the code” required to make all the above work. So here is another list, explaining what all those requirements are.

  • It needs to somehow generate a simple collections website that is capable of storing relevant info about all the things one could potentially collect. This was very similar to our current codebase at Cooper Hewitt, but we added the idea of “organizations” so that you could have multiple participants contributing info, and not just Cooper Hewitt.
  • It needs all the API methods that make the Pen work. There are actually just a handful that do all the hard work. I’ll get to those in a bit.
  • It needs to handle image uploads and processing of those images (saved designs from the Immersion Room table).
  • It needs to create “visits” which are the pages a visitor lands on when entering their unique code.
  • It needs a series of scripts to help us import data and set things up.
  • We would also need some new code to allow us to generate paper receipts with unique codes printed on them. At Cooper Hewitt this is all done via our Tessitura ticket printing system, so since we wouldn’t have that at Somerset House, we’d need to devise a new way of dealing with registering and pairing pens, and printing out some kind of receipt.

So, pen-www would become this sort of boilerplate framework for the Pen. The idea being, we’d distill the giant codebase we’ve developed at Cooper Hewitt down to the most essential parts, and then make it specific to what we wanted to do for London. This is an important point. We aren’t attempting to build an actual framework. What we are trying to do is to boil out the necessary code as a starting point, and then use that code as the basis for a new project altogether.

From our point of view, this exercise allows us to understand how everything works, and gets us close enough to the core code so that we can think of repeating this a third or a fourth time—or more.

The API at the center of everything

We built the Cooper Hewitt API with the intention of making it flexible enough to be easily expanded upon or altered. It tries to adhere to the REST API pattern as much as it can, but it’s probably better described as “REST-ish.” What’s nice about this approach has been that we’ve been able to build lots and lots of internal interfaces using this same pattern and code base. This means that when we want to do something as bespoke as building an entire replica of our seemingly complex Pen/Visit system, and deploy it in another country, we have some ground to stand on.

In fact, just about all of the systems we have built use the API in some way. So, in theory, spinning up a new API for the London project should just mean pointing things like the Immersion Room interactive table at a new API endpoint. Since the methods are the same, and the responses use the same pattern, it should all just work!

So let’s unpack the API methods required to make the Pen and Immersion Room come to life. These are all internal/private API methods, so you can’t take them for a spin, and I can’t share the actual code that lies beneath, but I think you’ll get the idea.

Pens – there’s a whole class of API methods that deal with the Pen itself. Here are the relevant ones:

  • pens.checkoutPen – This marks a Pen as having been checked out for an associated visit
  • pens.getCurrentCheckout – This gets the currently checked out Pen for a specific visit
  • pens.getCurrentVisit – This does the opposite of getCurrentCheckout, and returns a visit for a specific Pen.
  • pens.returnPen – This marks the Pen as having been returned.

Visits – There is another class of API methods that deal with the idea of “visits.” A visit is meant to represent one individual’s visit to the museum or exhibition, or some other physical location. Each visit has an ID and a corresponding unique code (the thing we print on a visitor’s paper receipt).

  • visits.getActivity – Returns all the activity associated with a visit
  • visits.getInfo – Returns detailed info about a specific visit
  • visits.processPenActivity – This is a major API method that takes any activity recorded by the Pen and processes it before storing the info in the appropriate location in the database. This one gets called frequently and is the method invoked when you tap your Pen on a reader board at one of our digital tables. The reader board downloads all the info on the Pen, and calls this API method to deal with whatever came across.
  • visits.registerVisit – This marks a visit as having been registered. It’s what generates your unique code for the visit.

Believe it or not, that is basically it. It’s just a handful of actions that need to be performed to make this whole thing work. With these methods in place, we can:

  • Pair pens with newly created visits so we can hand Pens out to visitors.
  • Process data collected by the Pen, either from NFC stickers it has read, or via our Interactive Table.
  • Do a final read of the Pen and return the Pen to the pool of possible pens.
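
To make that concrete, here’s roughly what the lifecycle looks like from a client’s point of view. The method names are the real ones listed above; the endpoint URL, parameter names, and response shapes are stand-ins:

    import requests

    API = "https://londonbiennale.cooperhewitt.org/api/rest/"  # stand-in URL
    TOKEN = "..."  # a private API access token

    def call(method, **params):
        params.update(method=method, access_token=TOKEN)
        return requests.post(API, data=params).json()

    # 1. Visitor arrives: create a visit and pair a Pen with it.
    visit = call("visits.registerVisit")
    call("pens.checkoutPen", pen_id="abc123", visit_id=visit["visit"]["id"])

    # 2. Visitor taps the Pen on a table's reader board: the board reads
    #    the Pen's contents and hands them to the API for processing.
    call("visits.processPenActivity", pen_id="abc123", activity="...")

    # 3. Visitor leaves: one final read, then the Pen goes back in the pool.
    call("pens.returnPen", pen_id="abc123")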

So, now that we have an API, and all the relevant methods we can start building the website and massaging the API code to do things in the slightly different ways that make this whole thing live up to its bespokiness.

On the website end of things we will follow the KISS principle or “Keep it simple, stupid.” The site will be devoid of fancy image display features, extended relationship mapping and tagging, and all the goodies we’ve spent years developing for the Cooper Hewitt Collections website. It won’t have search, or fancy search, or search by color, or search by anything. It won’t have a shoebox or even a random button (ok, maybe I’ll add that later today). For all intents and purposes, the website will simply be a place to enter your unique code, and see all your stuff.

https://londonbiennale.cooperhewitt.org

The website and its API will live at https://londonbiennale.cooperhewitt.org. It consists of just two web front ends running Apache and sitting behind an NGINX load balancer, and one MySQL instance via Amazon’s RDS service. It’s very, very similar to just about all of our other systems and services except that it doesn’t have fancy extras like Logstash logging, or an Elasticsearch index. I did take the time to install server monitoring and alerting, just so we can sleep at night, but really, it’s pretty bare bones.

At first glance there isn’t much there to look at. You can browse the different participants and you can create a Cooper Hewitt account or sign in using our Single Sign On service, but other than that, there is really just one thing to do–enter your code and see all your stuff.

Participants

All your content are belong to us

In order for this project to really work, we’d need to have content. Not only our own Cooper Hewitt content, but content from all the participants representing the 36 other countries from around the world.

So here is the breakdown:

  • Each participant or organization will have a page, like this one for Australia https://londonbiennale.cooperhewitt.org/participants/australia/
  • Each participant will have one “object.” In the case of all 37 participants, this object will represent their “booth” like this one from Australia – https://londonbiennale.cooperhewitt.org/objects/37643049/
  • Each “booth” will contain an image and the catalog text provided by the London Design Biennale team. If there is time, we will consider adding additional information from each participant (we haven’t done this as of yet).
  • Cooper Hewitt’s record will have some more stuff. In addition to the object representing Cooper Hewitt’s booth, we will also have 100 wallcoverings from our permanent collection.
  • You can collect all of these via the Immersion Room table and your Pen. Here is our page – https://londonbiennale.cooperhewitt.org/participants/usa/. There are also two physical wallpapers that are part of our installation, which you can of course collect as well.

All told, that means 140 objects in this little microsite/sitelet. You can actually browse them all at once if you are so inclined here – https://londonbiennale.cooperhewitt.org/objects/

“Booth” pages

Visit Pages

So what does a visitor get when they go to the webpage and type in their unique code? Well, the answer to that question is “it depends.” For objects that we imported from our permanent collection (the 101 wallpapers) you get a nice photo of the wallpaper and a chatty description written by our curator, Greg Herringshaw, having to do with “Utopia” — the theme of this year’s London Design Biennale. You also get a link back to the collection page on the Cooper Hewitt website. For the 37 booths, you get a photo and the catalog info for each participant, and if you created and saved your own design in the Wallpaper Immersion Room, you get a copy of the PNG version of your design, which you can, of course, download and do with what you like. (Hint: they make cool wall posters.)

Additionally, you get timestamps related to your visit. This way, just like on the Cooper Hewitt website, you get to retain a record of your visit–the date and time of each collected object and a way to recall your visit anytime in the future.

Visit page example

Slow Progress

All of this code replication, extraction, and re-configuring took quite a long time. The team spent long hours, nights, and weekends trying to sort it all out. In theory this should all just work, but like any project, there are unique aspects to the thing you are currently trying to accomplish, which means that, no matter what, you’re gonna be writing some new code.

Ok, so let’s check in with what we’ve got so far.

  • A physical manifestation of the Wallpaper Immersion Room and all its hardware, computers, wires, etc.
  • A website and API to power all the fun stuff.
  • A bunch of content from our own permanent collection and the catalog info from the London Design Biennale team.
  • Visit pages

We still need the following:

  • A way to issue Pens to visitors as they arrive.
  • A way to print a unique code on some kind of receipt, which we give to the visitors as well.
  • A way to check in Pens as visitors return them.
  • The means to get the table pointing at the right API endpoint so it can save things and call processPenActivity as well.

To accomplish the first three items on the list, we enlisted the help of Rev Dan Catt.

That time @revdancatt assembled 700 pens for @london_design_biennale #ldb16

A photo posted by Micah Walter (@micahwalter)

Dan is planning to write another extensive blog post about his role in all of this, but in a nutshell, he took our Pen registration code and built his own little mini-registration station and ticket printer. It’s pictured below and performs all of the functions above (1 through 3). It uses a small Adafruit thermal printer to print the receipts and unique codes, and it is simple enough to use, with a small web-based UI to give the operator some basic feedback. Other than that, you tap a pen and it does the rest.
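
Dan can speak to the details, but conceptually the station is one small loop: wait for a Pen, pair it with a fresh visit, print the code. A hedged sketch, where the reader and printer objects stand in for the NFC reader-board and thermal printer drivers, and the response field names are guesses:

    def register_pen(api_call, reader, printer):
        """One registration cycle: tap a Pen, pair it, print a receipt."""
        pen_id = reader.wait_for_tap()            # stand-in driver call
        visit = api_call("visits.registerVisit")  # generates the unique code
        api_call("pens.checkoutPen", pen_id=pen_id,
                 visit_id=visit["visit"]["id"])
        code = visit["visit"]["code"]             # hypothetical response field
        printer.println("londonbiennale.cooperhewitt.org")
        printer.println("Your code: %s" % code)
        printer.feed(3)                           # advance paper past the tear bar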

Dan’s Raspberry Pi powered Pen registration and ticket printing station.

Tickets printing for the first time

For the last item on the list, I had to re-compile the code Local Projects delivered to us. In the code I had to find the references to the Cooper Hewitt API endpoints and adjust them to point at the London API endpoint. Once I did this and recompiled the OpenFrameworks project, we were in business. For a while, I had it all set up for development and testing on my laptop using Parallels and Visual Studio. Eventually I compiled a final version and we installed it on the actual Immersion Room Table.

Working on the OpenFrameworks code on Parallels on my MacBook Pro

Cracking open the Local Projects code was a little scary. I’m not really an OpenFrameworks programmer, or at least I haven’t been since grad school, and the Local Projects code base is pretty vast. We’ve had this code compiled and running on all the interactive tables at Cooper Hewitt since December of 2014. This is the first time I (or anyone I know of) have attempted to recompile it from source, not to mention make changes to it beforehand.

That said, it all worked just fine. I had to find an old copy of Visual Studio 2012, but other than that, and tracking down a few dependencies, it wasn’t a very big deal. Now we had a copy of the Immersion Room table application set up to talk to the London API endpoint. As I mentioned before, all the API methods are named the same, and set up the same way, so the data began to flow back and forth pretty quickly.

Content Management

I mentioned above that we had to import 100 wallpapers from our collection as well as the data for all 37 booths. To accomplish all of this, we wrote a bunch of Python and PHP scripts.

We needed to do the following with regard to content:

  • Create a record for each of the 37 participants
  • Import the catalog info as an object for each of the 37 participants
  • Import the 100 wallcoverings from the Cooper Hewitt collection. We just used, you guessed it, our own API to do this.
  • Massage the JSON files that live on the Projector and Table applications so they have the correct 100 wallpapers and all their metadata.
  • Display the emoji flag for each country, because emoji.

In the end, this was just a matter of building the necessary scripts and running them a number of times until we had what we wanted. As a sort of side note, we decided to use London Integers for this project instead of Brooklyn Integers, which we normally use at Cooper Hewitt, but that’s probably a topic for a future post.
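
As a flavor of those scripts, importing the wallcoverings was mostly a loop over our own public API. A simplified sketch — the method name is a real one from our API, while the London-side import function and token are stand-ins:

    import requests

    CH_API = "https://api.collection.cooperhewitt.org/rest/"
    CH_TOKEN = "..."              # your Cooper Hewitt API access token
    WALLPAPER_IDS = ["18382347"]  # the curated list of wallcovering object IDs

    def ch_call(method, **params):
        params.update(method=method, access_token=CH_TOKEN)
        return requests.get(CH_API, params=params).json()

    def import_object(obj):
        # Stand-in for the London-side import script, which creates the
        # local object record and queues its images for download.
        print("importing", obj["id"], obj["title"])

    for object_id in WALLPAPER_IDS:
        obj = ch_call("cooperhewitt.objects.getInfo", object_id=object_id)["object"]
        import_object(obj)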

Shipping code, literally

At some point we would need to put all the hardware and construction pieces into crates and ship them across the pond. At the time, our thinking was to get the code running on the digital table and projector computers as close to production ready as we could. We installed all the final builds of the code on the two computers, packed them up with the 55” interactive table, and shipped them over to London, along with six other crates full of the “set” and all its hardware and parts. It was, in a nutshell, impressive.

As the freight went to London, we continued working on the website code back home—making the site look the way we wanted it to look and behave the way we wanted it to behave. As I mentioned before, it’s pretty feature free, but it still required some spit and polish in the form of some of Rachel’s Sassy-CSS. Eventually we all settled on the aesthetics of the site, added a lockup that reflected both the Cooper Hewitt and London Design Biennale brands (both happen to be by Pentagram) and called it a day. We continued testing the table application and Dan continued working on the Pen registration app and receipt printer so it would be ready when we landed.

Building the set with the team at Somerset House.

We landed, started to build the set, and many, many things started to go wrong. I think all of the things that went wrong are probably the topic of yet another blog post, but let’s just say for now: if you ever decide to travel a whole bunch of A/V equipment and computers to another country, get everything working with the local power standard and don’t try to transform anything.

2500 batteries #ldb16

A photo posted by Micah Walter (@micahwalter)

Eventually, through a lot of long days and sleepless nights, and with the help of many, many kind-hearted people, we managed to get all the systems up and running, and everything began to work.

We flipped the switch and the whole thing came to life and visitors started to walk up to our booth, curious and excited to see what it would do. We started handing out Pens and I started watching the data flow through.

By the close of the show, visitors had used the Pen to collect over 27,000 objects. Eventually, I’ll do a deeper data analysis, but for now, the feeling is really great. We created a portable version of the Pen and all of its underlying systems. We traveled a giant kit of A/V tech and parts overseas, and now people in a country other than the United States can experience what Cooper Hewitt is all about: a dynamic, interactive deep dive into design.

Design your own Utopia at the London Design Biennale

-m

Object Phone: The continued evolution of a little chatbot

Object Phone is a project that started small, took less than a day to code, and consisted of about a page of code. Initially it was just an experiment–a way for me to explore a new interface to our API. Object Phone allowed users to call or text objects in our collection, and receive some kind of response. It was met with mild fanfare.

Next, I was curious about using Object Phone in our galleries. I looked towards developing some better audio content, and we decided to produce a short audio tour of the David Adjaye Selects exhibit. It was somewhat cumbersome to use but an interesting experiment and one of my first “in-gallery beta-tests.” Needless to say, I tried to be as clear as possible that this was an “experiment.”

Later I started thinking about the broader uses for a system like Object Phone. Could it replace an expensive audio guide? Could it be used as an accessibility device? I started to think of many possible uses for the platform, and started to rewrite the code to support multiple outputs. In a way, I was thinking about the code for Object Phone as a mini framework for building voice and text based interactions with our content.

All of this got put on the back burner for a while. Object Phone is after all my little side project. Something I come back to when I need to center myself and let my brain think through a few problems. It’s very much a project I meditate on when I need to do that kind of thing.

About 6 months later I started playing with the code again. I realized it was pretty trivial to deliver images via MMS using Twilio’s API and I had also started to notice that MMS worked pretty nicely on devices like an Apple Watch, and looked pretty good in the notification screen on my iPhone. All of a sudden it was kind of fun again to receive texts from Object Phone. So, I set up a subscription service.

Inspired by a few chatty SMS based apps out there like Poncho and The Edit, I built a simple subscription service that would send random objects and images to subscribers once a day at noon. Again, I set this up quickly, sent out a request for some people to try it out, and started to make realizations.
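
For the curious, the daily send is about as small as it sounds. Here’s a sketch using Twilio’s current Python client — the credentials, phone number, and object lookup are all placeholders:

    import random

    from twilio.rest import Client

    client = Client("ACxxxxxxxx", "auth-token")  # placeholder credentials

    def objects_with_images():
        # Stand-in for an API call that returns collection objects
        # that have images attached.
        return [{"title": "Sidewall (example)", "image_url": "https://..."}]

    def send_daily_object(subscriber):
        """The daily noon send: one random object, delivered as an MMS."""
        obj = random.choice(objects_with_images())
        client.messages.create(
            to=subscriber,
            from_="+15550000000",          # placeholder Object Phone number
            body=obj["title"],
            media_url=[obj["image_url"]],  # attaching media makes it an MMS
        )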

Object Phone is getting some upgrades. Feature requests welcome.

A photo posted by Micah Walter (@micahwalter)

The main realization I had was that Object Phone had just become a chatbot. To be clear, Object Phone has technically always been a chatbot. You send it messages, and it replies with some response. But now that it sends you something periodically based on your preferences (currently just the preference that you want to continue receiving messages) it seems more like a real chatbot. More importantly, this experiment has started to make me “think” of Object Phone as a chatbot–something I should have likely realized from the start.

I also realized that Object Phone’s chattiness happens in multiple directions. It indeed chats with its subscribers. It can send you messages once a day, and it can reply to your requests for info about objects with ease. But, I also added a back end feature which follows this same line of thinking. If a user sends Object Phone a message that it doesn’t understand, Object Phone asks me for some assistance. Here is the flow:

  1. A user messages Object Phone something like “Tell me about spanking cat.”
  2. Object Phone isn’t smart enough yet to decipher the message.
  3. Object Phone replies “OK, I don’t really understand what you are saying but I’ll ask around and get back to you.”
  4. Object Phone then sends our Cooper Hewitt Slack channel a message.
  5. The Slack message contains the user’s phone number, their message, and a link to an admin page where the operator can reply directly to the user.
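
Step 4 is nothing fancier than a Slack incoming webhook. Roughly — the webhook URL and admin path here are placeholders:

    import requests

    SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

    def ask_for_help(from_number, message, admin_url):
        text = ('Object Phone needs help! %s said: "%s"\n'
                'Reply here: %s' % (from_number, message, admin_url))
        requests.post(SLACK_WEBHOOK, json={"text": text})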

A Slack Channel where Object Phone can tell our staff when it needs a little assistance.

An Object Phone admin page where our staff can reply directly to users

All of a sudden Object Phone is a conduit between Cooper Hewitt staff and its visitors. It’s talking directly to visitors, but also relaying messages back and forth to more knowledgeable staff when it needs assistance.

What the cool kids are doing

Conversational user experiences are all the rage right now. Facebook has recently opened up their Messenger platform and API to developers, which means anyone can build a simple chatbot on Facebook and reach all their followers with ease. Many other messaging services have open APIs as well. WeChat, LINE, WhatsApp and Slack are just a few examples.

Screenshot of the CNN chatbot for Facebook Messenger

It’s pretty clear that messaging apps are increasing in popularity, with users spending much of their days talking on platforms like Snapchat rather than thumbing through their Facebook feeds. Apple too has followed suit by announcing a much upgraded Messages app in their latest update to iOS.

Chatbots have also become much more sophisticated, with huge advancements in Natural Language Processing and Natural Language Understanding. There is now a wealth of information and publicly available code and APIs out there, making it easier than ever to spin up a pretty intelligent chatbot with little overhead.

The Future of Object Phone

My next steps are to make Object Phone more intelligent. It should be able to learn about your tastes and preferences. If you only want to receive objects from our Textiles department, you should be able to say so. If you want to get your daily update at 5am, you should be able to just tell it that.

More importantly, you should be able to interact with more than just objects. Users should be able to find out general info about our museum. Are we open today? How do I get to Cooper Hewitt? Can I buy tickets right here, right now?

Lastly, I’d love to see Object Phone make its way onto the platform of your choice. I think this is a critical next step. SMS is great, and available to nearly anyone with a cell phone, but apps like FB Messenger, WhatsApp, and LINE have the ability to connect a service like Object Phone with a captive audience, all over the world.

I think institutions like museums have a great opportunity in the chatbot space. If anything it represents a new way to broaden our reach and connect with people on the platforms they are already using. What’s more interesting to me is that chatbots themselves represent a way to interact with people that is, by its very nature, bi-directional. It presents us with the challenge of conversation, and forces us to listen to our constituents in a very close and connected kind of a way. We should already be doing this.

If you’d like to participate in testing out Object Phone, please go to https://objectphone.cooperhewitt.org and sign up. You will receive an object every day at 12pm EST until you reply STOP.

A Very Happy & Open Birthday for the Pen

Today marks the first birthday of our beloved Pen. It’s been an amazing year, filled with many iterations, updates, and above all, visits! Today is a celebration of the Pen, but also of all of our amazing partners whose continued support has helped to make the Pen a reality. So I’d like to start with a special thank you first and foremost to Bloomberg Philanthropies for their generous support of our vision from the start, and to all of our team partners at Sistel Networks, GE, Undercurrent, Local Projects, and Tellart.

Updates

Over the course of the past year, we’ve been hard at work, making the Pen Experience at Cooper Hewitt the best it can be. Right after we launched the Pen, we immediately realized there was quite a bit of work to do behind the scenes so that our Visitor Experience staff could better deal with deploying the Pen, and so that our visitors have the best experience possible.

Here are some highlights:

Redesigning post-purchase touchpoints – We quickly realized that our ticket purchase flow needed to be better. This article goes over how we tried to make improvements so that visitors would have a more streamlined experience at the Visitor Experience desk and afterwards.

Exporting your visits – The idea of “downloading” your data seemed like an obvious necessity. It’s always nice to be able to “get all your stuff.” Aaron built a download tool that archives all the things you collected or created and packages it in a nice browser friendly format. (Affectionately known as parallel-visit)

Improving Back-of-House Interactions – We spent a lot of time behind the visitor services desk trying to understand where the pain points were. This is an ongoing effort, which we have iterated on numerous times over the year, but this post recounts the first major change we made, and it made all the difference.

Collecting all the things – We realized pretty quickly that visitors might want to extend their experience after they’ve visited, or more simply, save things on our website. So we added the idea of a “shoebox” so that visitors to our website could save objects, just as if they had a Pen and were in our galleries.

Label Writer – In order to deploy and rotate new exhibitions and objects, Sam built an Android-based application that allows our exhibition staff to easily program our NFC based wall labels. This tool means any staff member can walk around with an Android device and reprogram any wall label using our API. Cool!

Improving visitor information with paper – Onboarding new visitors is a critical component. We’ve since iterated on this design, but the basic concept is still there–hand out postcards with visual information about how to use the Pen. It works.

Visual consistency – This has more to do with our collections website, but it applies to the Pen as well, in that it helps maintain a consistent look and feel for our visitors during their post visit. This was a major overhaul of the collections website that we think makes things much easier to understand and helps provide a more cohesive experience across all our digital and physical platforms.

Iterating the Post-Visit Experience – Another major improvement to our post-visit end of things. We changed the basic ticket design so that visitors would be more likely to find their way to their stuff, and we redesigned what it looks like when they get there.

Press and hold to save your visit – This is another experimental deployment where we are trying to find out if a new component of our visitor experience is helpful or confusing.

On Exhibitions and Iterations – Sam summarizes the rollout of a major exhibition and the changes we’ve had to make in order to cope with a complex exhibition.

Curating Exhibition Video for Digital Platforms – Lisa makes her Labs debut with this excellent article on how we are changing our video production workflow and what that means when someone collects an object in our galleries that contains video content.

The Big Numbers

Back in August we published some initial numbers. Here are the high level updates.

Here are some of the numbers we reported in August 2015:

  • March 10 to August 10 total number of times the Pen has been distributed – 62,015
  • March 10 to August 10 total objects collected – 1,394,030
  • March 10 to August 10 total visitor-made designs saved – 54,029
  • March 10 to August 10 mean zero collection rate – 26.7%
  • March 10 to August 10 mean time on campus – 99.56 minutes
  • March 10 to August 10 post visit website retrieval rate – 33.8%

And here are the latest numbers from March 10, 2015 through March 9, 2016:

  • March 10, 2015 to March 9, 2016 total number of times the Pen has been distributed – 154,812
  • March 10, 2015 to March 9, 2016 total objects collected – 3,972,359
  • March 10, 2015 to March 9, 2016 total visitor-made designs saved – 122,655
  • March 10, 2015 to March 9, 2016 mean zero collection rate – 23.8%
  • March 10, 2015 to March 9, 2016 mean time on campus – 110.63 minutes
  • Feb 25, 2016 to March 9, 2016 post visit website retrieval rate – 28.02%

That last number is interesting. A few weeks ago we added some new code to our backend system to better track this data point. Previously we had relied on Google Analytics to tell us what percentage of visitors access their post visit website, but we found this to be pretty inaccurate. It didn’t account for multiple access to the same visit by multiple users (think social sharing of a visit) and so the number was typically higher than what we thought reflected reality.

So, we are now tracking a visit page’s “first access” in code and storing that value as a timestamp. This means we now have a very accurate picture of our post visit website retrieval rate and we are also able to easily tell how much time there is between the beginning of a visit and the first access of the visit website–currently at about 1 day and 10 hours on average.
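
The mechanics are simple: a nullable timestamp that only ever gets written once. Something like this sketch, where the column and helper names are invented:

    import time

    def record_first_access(visit):
        """Stamp the visit the first time anyone loads its page. Later
        loads (revisits, social shares) leave the timestamp alone, which
        is what keeps the retrieval rate honest."""
        if visit.get("first_accessed") is None:   # hypothetical column name
            visit["first_accessed"] = int(time.time())
            # ...persist the updated row here...

    # Retrieval rate = visits with first_accessed set / visits registered.
    # Time to first access = mean(first_accessed - created_at); currently
    # about 1 day and 10 hours on average.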

The Pen generates a massive amount of data. So, we decided to publish some of the higher level statistics on a public webpage which you can always check in on at https://collection.cooperhewitt.org/stats. This page reports daily and includes a few basic stats including a list of the most popular objects of all time. Yes, it’s the staircase models. They’ve been the frontrunners since we launched.

Those staircase models!

As you can see, we are just about to hit the 4 million objects collected mark. This is pretty significant and it means that our visitors on average have used the Pen to collect 26 objects per visit.

But it’s hard to gain a real sense of what’s going on if you just look at the high level numbers, so let’s track some things over time. Below is a chart that shows objects collected by day for the last year.

Objects collected by day since March 10, 2015

On the right you can easily see a big jump. This corresponds with the opening of the exhibition Beauty–Cooper Hewitt Design Triennial. It’s partly due to increased visitation following the opening, but what’s really going on here is a heavy use of object bundling. If you follow this blog, you’ll have recently read the post by Sam where he talks about the need to bundle many objects on one tag. This means that when a visitor taps his or her pen on a tag, they very often collect multiple objects. Beauty makes heavy use of this feature, bundling a dozen or so objects per tag in many cases and resulting in a dramatic increase in collected objects per day.

Pen checkouts per day since March 10, 2015

We can easily see that this is, in fact, what is happening if we look at our daily pen checkouts. Here we see a reasonable increase in checkouts following the launch of Beauty, but it’s not nearly as dramatic as the number of objects being collected each day.

Immersion room creations by day since March 10, 2015

Above is a chart that shows how many designs were created in the immersion room each day over the past year. It’s also directly connected to the number of visitors we have, but it’s interesting to see its overall shape across this period of time. The immersion room is one of our more popular interactive installations and it has been on view since we launched. So it’s not a big surprise that it has a pretty steady curve to it. Also, keep in mind that this is only representative of “things saved” as we are not tracking the thousands of drawings that visitors make and walk away from.

We can slice and dice the Pen data all we want. I suppose we could take requests. But I have a better idea.

Open Data

Today we are opening up the Pen Data. This means a number of things, so listen closely.

  1. The data we are releasing is an anonymized and obfuscated version of some of the actual data.
  2. If you saved your visit to an account within thirty days of this post (and future data updates) we won’t include your data in this public release.
  3. This data is being licensed under Creative Commons – Attribution, Non-Commercial. This means a company can’t use this data for commercial purposes.
  4. The data we are releasing today is meant to be used in conjunction with our public domain collection metadata or our public API.

The data we are releasing is meant to facilitate the development of an understanding of Cooper Hewitt, its collection and interactive experiences. The idea here is that designers, artists, researchers and data analysts will have easy access to the data generated by the Pen and will be able to analyze and create data visualizations so that we can better understand the impact our in-gallery technology has on visitors.

We believe there is a lot more going on in our galleries than we might currently understand. Visitors are spending incredible amounts of time at our interactive tables, and have been using the Pen in ways we hadn’t originally thought of. For example, we know that some visitors (children especially) try to collect every single object on view. We call these our treasure hunters. We also know that a percentage of our visitors take a pen and don’t use it to collect anything at all, though they tend to use the stylus end quite a bit. Through careful analysis of this kind of data, we believe that we will be able to begin to uncover new behavior patterns and aspects of “collecting” we haven’t yet discovered.

If you fit this category and are curious enough to take our data for a spin, please get in touch, we’d love to see what you create!

Press and hold to save your visit

During the development of a major project it’s inevitable that certain features just won’t make it into production in time for launch. Sometimes things fall out of scope, or they get left off the priority list and put on the back burner. Hopefully these features are not critical to the project, but invariably they were good ideas, and in many cases should warrant a revisit sometime down the road, when the dust has settled.

At Cooper Hewitt, one such “feature” was affectionately known as the “transfer stations.” The basic idea was that throughout the museum there would be small kiosk-like stations where visitors could tap their pens and “transfer” the data from their pen to our database so that they could immediately see their collections from a mobile phone.

It was a nice idea, and we began to implement it, collecting specs and designing the steel stands that would support the kiosks. Eventually the parts showed up, but by that time, we were pretty knee deep in launching the Pen and the museum, so the transfer station idea got set aside.

Fast forward to today, nearly a year since the Pen has been in visitors’ hands, and we’ve been thinking about how we can better onboard our visitors, and how we can remind them that there is something to do after they leave the museum. It’s a complex problem that we’ve tried to address in several ways.

  1. Visitors arriving at the museum typically don’t know anything about the Pen. At our Visitor Experience desk our staff are trained to quickly teach each visitor what they can do with the Pen, while in the background processing their orders and “pairing” each Pen to a ticket. It’s a critical part of the process and one we’ve spent a good deal of time optimizing.
  2. While visitors are waiting in line, there is an opportunity to help people learn about the Pen. We have a looping video playing in that spot that tries to do this job visually, and additionally, we have small postcards available that explain things further.
  3. On the way out the door, visitors are reminded that they should hold on to their tickets. This is supposed to happen at the door, at the moment when they are returning their Pen.

There are lots of other visual cues and verbal reminders happening while you walk through the galleries, but no matter what, we find ticket stubs left behind. We know from our data that lots of our visitors are checking out their visit websites after their visits, but maybe we can do better. Also, part of the whole concept behind the Pen is that you “can” look at your collection from your mobile phone right away–we should make that happen more seamlessly.

Technically speaking, the transfer stations have been in play all along. When you walk up to one of our interactive tables and “dock” your pen, we read all the data on your pen (the things you’ve collected so far) and store them in our database. So, if you just keep walking up to tables and docking your pen, you’d be able to visit your collection on your mobile phone–no problem. But this doesn’t really do a great job of reminding you, or even letting you know that it’s possible. The tables are about browsing the collection on the tables, and that’s pretty much what their UI describes.

Also, we’ve been using an early version of the transfer station behind the scenes to do a final dump of your pen after you’ve left. This way, if you collected objects but never docked at one of the tables, you’re still covered.

All along though, a few of us have been a little skeptical of the function and design of the transfer stations. Will they just create confusion with the visitor? Are they even necessary? Should they have a responsive visual user interface? To get to the bottom of some of these questions, well, we need to birth something into the world and see how it goes.

To get started, we chose to deploy two transfer stations in two areas on the second floor. There was a good deal of work that needed to happen. The transfer station parts needed to be identified, assembled and configured. We’d need to set up their built-in Raspberry Pi computers to behave properly, and we’d need to work through their connection to power and network within the galleries. Enter Mary Fe! She is our Gallery Technologist, the person you might see performing maintenance on some part of the technology throughout the galleries the next time you visit Cooper Hewitt. Mary Fe is the person who shows up at 8am before the museum opens to make sure everything is working and looking good.

I asked Mary Fe to work on this project from start to finish, and she’s written up a little documentation on how things went. She says:

I was called in to *clone* the existing and fully working Register station. The stations consist of a Raspberry Pi mini computer connected to our museum network over ethernet, and an NFC reader board designed by Sistel Networks that is able to download data from a Pen. The Raspberry Pi is mounted in the base of the extremely heavy stands you see in the photo below, and its corresponding NFC reader board is located at the top, behind the “plus” icon.

What we’d need

  • Raspberry Pi units programmed to save only (vs. save and check in a pen)
  • Functioning data and power at the locations where we wish to deploy the stands
  • The stands
  • Easy to understand signage

The first part was pretty easy to accomplish. I began by cloning SD cards for use with the Raspberry Pi. These had to be configured to “save” Pen data, and not “check in” the Pen so that the visitor could continue their visit, saving as many times as they like. After cloning, we assigned new names and unique IPs to the Raspberry Pi stations.

I ran into a little trouble when I started testing the data ports. Long story short, I had to learn how to tone/probe data ports from their locations in the galleries to their corresponding positions in the network closet. After a good deal of troubleshooting with our IT specialists in DC, the ports came to life and we assigned their IP addresses.

Once the ports were figured out, and the Raspberry Pis set up and configured, everything began to work. Right away I noticed visitors starting to use the transfer stations.
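
The difference between a save-only station and the check-in station at the exit really does come down to one flag. Conceptually, with stand-in endpoint and helper names:

    import requests

    API = "https://collection.cooperhewitt.org/api/rest/"  # stand-in URL

    def api_call(method, **params):
        params["method"] = method
        return requests.post(API, data=params).json()

    def handle_tap(pen_id, pen_data, check_in=False):
        """Process whatever is on the Pen; only the exit station passes
        check_in=True to return the Pen to the pool."""
        api_call("visits.processPenActivity", pen_id=pen_id, activity=pen_data)
        if check_in:
            api_call("pens.returnPen", pen_id=pen_id)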

What’s next

We have more transfer stations waiting in the wings. However, as I mentioned above, deploying these two transfer stations is sort of an experiment. We want to watch and see how visitors react to them, if they cause more confusion, and how often they are getting used.

We’ve already been thinking of ways we might incorporate a screen to add a visual user interface to the stations. Perhaps a more guided experience would get visitors more involved with them. What kinds of problems will introducing a screen add to the device? Maybe we should think about e-ink, a touch screen, or a thermal printer? It’s hard to say at this point.

The next step is to collect some visitor feedback, look at the data, and start prototyping new ideas.

Getting the IT department your museum needs

You may have noticed, we have a new position currently open for an IT Specialist. It will be open for the next three weeks, so if you are at all interested, please read through this post first and then go apply before the cutoff. This is a really exciting position, and so I thought it might be worth it to try and explain why, as the job description doesn’t exactly “get into it.”

But first, a little background. Cooper Hewitt has gone through a real, honest to goodness, digital transformation. For the last four years, while the doors were closed, the museum underwent a complete restoration and renovation to its physical building, and at the same time completely overhauled all things technology and digital throughout the museum and online. We built some groundbreaking stuff, and when we finally reopened just one year ago, we delivered a truly amazing experience, the likes of which we haven’t seen much of anywhere, up until now.

Okay, so this is a pitch. I’m trying to draw you in. It’s such a great place to work, and we do really awesome stuff all day long, every single day. We rock! Are you hooked yet? Just watch the video below and you’ll understand.

But, really, we have done some pretty amazing things at Cooper Hewitt, and all the while with minimum staff resources and short deadlines. If you are a museum person, you’ve probably read the articles. If you are a design or technology person, same goes for you. Across the board, each and every staff person has had to morph into something new, something present in the digital world. In many cases, without even realizing it, each and every aspect of Cooper Hewitt has transformed.

Looking back, we were able to do a lot of this work because of a lot of incredible coincidences, generous people, and dedicated staff members. And, although many of us were annoyed or uncomfortable at the time, I think we can all see the payoff now. We all seem to now understand the vision behind it all. It is all starting to fall into place for many of us. I’m starting to get teary-eyed, so I’ll move on.

As you may imagine, our needs for IT infrastructure have had to keep in lock step with all of the innovation we’ve imposed along the way. We’ve adopted a complete constituent relationship management system ( Tessitura ) which is directly connected to the way we sell tickets, which is directly connected to the way our visitors engage with our technology, which is directly connected to the way we analyze the visitor experience, which circles back to inform us on how we continue to develop these systems over and over again.

Our museum is online. The physical building is the number one consumer of our own API. When a visitor walks around the building, interacting with our interactives, our API is involved each step of the way. Sometimes I can feel the museum pulsating with activity: messages moving back and forth from the galleries to the cloud, from one server to another. It’s sort of beautiful if you think about it.

We’re proud to announce that Cooper Hewitt has achieved LEED Silver certification for our museum campus, including the Carnegie Mansion and adjoining Miller/Fox townhouses. The project to attain certification began with the renovation planning phase in 2006, and was supported by architect of record Beyer Blinder Belle, design architect Gluckman Mayner Architects, and Atelier 10, who joined the project in 2011. Developed by the U.S. Green Building Council, the LEED rating system is the foremost program for buildings, homes and communities that are designed, constructed, maintained and operated for improved environmental and human health performance. Cooper Hewitt was awarded certification for optimizing energy performance, purchasing green-e certified electricity supply, maintaining 95 percent of existing structure and envelope, water use reduction, community connectivity and public transportation access. #cooperhewitt #leedcertified

A photo posted by Cooper Hewitt (@cooperhewitt)

Of course with all of this comes an incredibly increased need for infrastructure management. We are no longer a museum that just requires some basic desktop support–someone to fix the projector bulb, or replace the toner cartridge and set up your network account. These days we are looking for creative people who can solve complex problems. We need the type of person who can look at our AWS account and explain to us how we could make better use of “reserved” or “spot instances”, come up with better ways for our developers to continuously test and deploy new code, and someone who can log on to any kind of machine and figure out what the problem is, what service is down, what needs to be patched, what log messages mean what.

All design, all the time. ✒️ Regram @choppingblock #cooperhewitt

A photo posted by Cooper Hewitt (@cooperhewitt)

So how are things different these days? Well, first of all, the major shift is that IT is now a part of Digital & Emerging Media. They used to be two separate entities, with IT reporting into our Director of Operations. It made sense before the transformation, because IT was mainly about the day to day operational types of problems like I’ve mentioned above. Ultimately, IT was in charge of making sure everyone’s computers were working and that there was toner in all the printers. The need for this still exists today, but it’s a much smaller part of the big picture. Our staff are far more self sufficient now, and we are able to lean more heavily on our pseudo-outsourced desktop support line through our mothership down in D.C. ( Did I mention we are part of the Smithsonian? )

Now, IT is a central player in the world of Digital & Emerging Media and it’s critical that we approach our new hires with this in mind. Like I said above, we need creative, curious souls who want to be part of something really exciting.

  • If you are obsessed with getting the last drop of performance out of 25 servers, please apply.
  • If you ran your own BBS in 1989, your own phpBB in 2005 or your own Discourse in 2016, please apply.
  • If you see equal design brilliance in both ESU 400 and RFC 2246, please apply.
  • If you like to $ sudo apt-get update; sudo apt-get upgrade -y, please apply.
  • If you prefer mapping a network drive over just using Dropbox… well… still apply and we’ll see…

OK, what next? Rush right over to the posting and apply. This position is a Federal job, which means it comes with all the benefits government workers receive, including a nice retirement plan, great health care and a free unlimited ride subway card! But, do be sure to put time and effort into the application. I know it may seem a little outdated ( trust me, this is something we need to work on in #GovClub ), but be sure to respond to each aspect of the application very carefully. And of course, if you have any questions about the position or the application itself, feel free to reach out, I’d love to know if you’ve applied.

Random Button Television

AppleTV SDK Fun

Recently, Apple announced a number of updates to their product line, including a pretty major update to AppleTV, the small set top box that allows people to listen to their music collection, rent movies and TV shows, and stream audio from their phones to their televisions. The biggest update to AppleTV was definitely the fact that it now supports “Apps”, allowing iOS developers to design and build whatever they can imagine.

I threw my hat in the ring and applied for Apple’s lottery, and about a week later a brand new AppleTV Software Development Kit showed up at the Labs.

To be honest, I haven’t had much interest in developing apps for a long time now. It’s problematic at best to go down the road of building something for iOS ( or any brand-specific device ) in the context of a museum, and yes, it’s been a long while since I even glanced at Objective-C. But, the device is a curious object, and at the very least made me wonder what it might be like to introduce a way to open up access to our collections through the warm and inviting glow of a television screen. Imagine it for a moment, sitting there atop your dresser, or mounted to your living room wall, next to the fireplace, in full HD. The television, no matter how you divide it up over the years, has a pretty permanently fixed position in our homes, and in our minds.

As a little side project, I decided to see what I could do with the AppleTV SDK and our Collections API. I decided ahead of time I wouldn’t spend too much time on this, and although I wound up spending at least one night reading up on NSDictionary and a few other oddball data-types in Objective-C, I was able to stick with my original plan and quickly built a little “Hello AppleTV” app that simply allows the “viewer” to flip through objects in our collection by pressing the “select” button on their remotes.

It uses one API method, our old favorite cooperhewitt.objects.getRandom, and yeah, that’s all it does. Keep pressing the select button and you continue to get objects. It’s quite fun!
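
If you want to play along at home, that same method is just one HTTP call away from any language. Here’s a rough sketch of the equivalent call in PHP ( the access token is a placeholder, and the response keys assume the standard object response shape ):

<?php
// Ask the Cooper Hewitt Collections API for a random object.
// The access token below is a placeholder; sign up for your own.
$query = http_build_query(array(
    'method'       => 'cooperhewitt.objects.getRandom',
    'access_token' => 'YOUR_ACCESS_TOKEN',
));

$response = file_get_contents('https://api.collection.cooperhewitt.org/rest/?' . $query);
$data = json_decode($response, true);

// The title and ID are the bits the AppleTV app pushes into its Labels.
echo $data['object']['title'] . ' (' . $data['object']['id'] . ')' . PHP_EOL;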


So here’s how it works. As we say over here at the Cooper Hewitt Labs, “Working code always wins.”

  1. There is a ViewController. This is the thing that represents the screen on your AppleTV, and the thing you can apply all of the subsequent properties to.
  2. There is an ImageView. This is where the image gets displayed.
  3. There are a couple Labels which simply allow you to display the title and object ID for each object.
  4. There is an asynchronous way of calling the API.

Here’s the code. There’s basically just the two files for the ViewController that define everything we’re talking about. Beyond that, there is some “wiring up” of the ImageView and Labels so the visuals know what code they are connected to, and there is really the one method “fetchRandom” that does the work of calling the API, parsing the response, and storing the things we are interested in.

And here is the end result.

To be quite honest, we probably won’t be uploading this to the iTunes store. It’s really just a “Hello World” app and only meant to be a conversation starter for staff members who happen by the Labs area. But it does make me wonder — what else could museums do with a device like this?

The device itself is a curious one, with plenty of built in human interface challenges and opportunities. Sitting back, clicking the remote and checking out collection objects isn’t really my idea of an exciting way to spend an evening at home, but take this a few steps further, maybe a few additional API calls, and who knows what might unfold.

Mailchimp & Tessitura together at last


The short version is:

There is now an integration between Tessitura & Mailchimp. If you are a Tessitura Licensee, and have access to their BitBucket account, you can get it here.

So you can use this lovely Mailchimp Interface to create your emails…


… and connect it with all of the power of Tessitura, through this easy to use tool.


The long version is:

It’s the last day of the Tessitura Learning & Community Conference, and I’m all checked out of my hotel, sitting in the conference hall, thinking about all of the things I’ve learned this week, and all of the people I’ve met.

So many of the people I’ve talked to have been asking about the Tessitura-Mailchimp Integration we launched in partnership with Mailchimp and JCA, Inc. this past week, and so I thought I’d write up a blog post to try and explain what it is, how you get it/use it/make it better, and more importantly, why we did it in the first place.

A long while ago, Cooper Hewitt had an enormous email list. Some 60K emails on one massive list, powered by an e-marketing service that was clearly heading out of business. This giant list wasn’t working. We weren’t getting the results we thought we should, and what’s more, we had no way of measuring our success. So, we switched to Mailchimp. It was a pretty obvious choice. Mailchimp offered the museum an incredibly quick set up time and a beautiful user interface, with super clean and easy to use templates. What’s more, Mailchimp placed a lot of emphasis on “list quality” and advised us to put out an appeal to our current bloated list to do an “opt-in” and create a whole new list made up of real people, with valid email addresses, who actually wanted to receive mail from us.

The list dropped down. Way down. After a few “last chance” appeals, our 60K subscribers were whittled down to about 2500. This was challenging territory for many departments in the museum who, like at almost every non-profit, had relied on the big number more for a sense of security than for its effectiveness.

But we pressed on, and noticed quickly that our open rates were dramatically high. Our click through rates were excellent, and it was clear that people were actually reading our emails, and acting on our calls to action. If you haven’t noticed by now, I’m trying to include as many marketing buzzwords into this post as possible. You know, due diligence and all.

This is a long way around to explain that we all started to fall in love with Mailchimp. Its ease of use and deep analytics and reporting tools were a huge win for the museum as a whole. Our list continues to grow, and our “satisfaction” rating remains pretty steadily on the high end. The staff seem to enjoy working in Mailchimp, especially following the recent redesign of the user interface.

One day along the way, the museum decided to implement Tessitura as our CRM ( constituent relationship management ) and ticketing system. It’s a super robust, enterprise class system that is sort of the Swiss Army knife for non-profits, performing arts centers, and more recently, museums.

In the long term strategic plan for Tessitura, it appeared as though we would have to ditch Mailchimp and move to one of the two providers that offer an integration with Tessitura. We looked at both of them, and while they both did the job at hand, neither of them offered the pleasant experience and incredible analytics tools that Mailchimp did. It would have been a tough sell to our staff to move them off something they clearly all had grown to love and on to a system that would probably work just fine, but not make their hearts any warmer.

So, we talked with Mailchimp. Mailchimp has a wide variety of third party integrations, and we started to converse about what an integration with Tessitura would look like. We all got really excited at the possibilities, and so once a small amount of funding was secured, we partnered with JCA, Inc. to build us something.

Mailchimp was really excited about the idea, and being a forward thinking tech company, they pushed us to make the integration free, and open source. This is something we strongly believe in at Cooper Hewitt as well, so we worked with the staff at Tessitura, and figured out a way to share the code within the Tessitura Network, so as not to violate any non-disclosure agreements. Things were starting to take shape.

So what will it do, and how does it work?


We tried to limit the scope of the project to the bare essentials. We wanted to stay within our budget, and build a simple tool that does what it says on the tin. The hope here is that Tessitura licensees will try it out, see that it’s a good thing, and run with it, adding features and customizing it to suit their needs. Open source goodness.

At the moment, the project is a pretty simple .NET application that anyone can install on a Windows machine that can talk to Tessitura and Mailchimp. You fill out some initial config information, and then schedule a nightly synchronization job. This allows Tessitura licensees to export their primary lists to Mailchimp on a nightly basis.
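
Scheduling the nightly job is left up to you and Windows. A scheduled task along these lines would do it ( the task name and path here are hypothetical ):

REM run the Tessitura-Mailchimp sync every night at 2am
schtasks /Create /SC DAILY /ST 02:00 /TN "Tessitura Mailchimp Sync" /TR "C:\Integrations\TessituraMailchimp\Sync.exe"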

You can also perform synchronizations on an ad-hoc basis, meaning, any Tessitura user can easily create a segmented list in Tessitura for a specific purpose, and sync that list to Mailchimp for immediate sending.

This is a really nice feature, because it actually creates or updates a segment in Mailchimp. Rather than create many bespoke email lists, you can then just use a single master list in Mailchimp, and use the exported segments so you are only sending to the addresses you are interested in.
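
To give a flavor of what a segment push looks like under the hood, here’s a hypothetical sketch against Mailchimp’s v3 API. The integration itself is .NET; this is just the same idea in PHP for consistency with the rest of this blog, and the API key, list ID and addresses are all placeholders:

<?php
// Create a static segment on a master list via Mailchimp's v3 API.
// All identifiers below are placeholders.
$apikey  = 'YOUR_API_KEY-us1';
$list_id = 'YOUR_LIST_ID';
$dc      = substr($apikey, strrpos($apikey, '-') + 1); // data center suffix, e.g. "us1"

$payload = json_encode(array(
    'name'           => 'Fall Exhibition Members',
    'static_segment' => array('alice@example.com', 'bob@example.com'),
));

$ch = curl_init("https://{$dc}.api.mailchimp.com/3.0/lists/{$list_id}/segments");
curl_setopt($ch, CURLOPT_USERPWD, 'anystring:' . $apikey);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $payload);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

$result = curl_exec($ch);
curl_close($ch);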

What it’s not

It’s important to understand that this is an open source tool and is provided “as is.” There is no support staff waiting to take your calls and answer your questions. This remains the responsibility of the Tessitura community.

As I mentioned, it’s a simple tool, and at the moment, it basically does the two functions I’ve outlined above. There is no syncing of analytics data back into Tessitura, for example. We really love the analytics tools built right into Mailchimp, and so for most this may not be a deal breaker. These are the kinds of features we hope will get added by the community down the road.

What it is, again.

It’s a super exciting thing for us all to think about! The Tessitura community really needs to take more control over the entire ecosystem of third-party applications and extensions. Without a vested interest in building our own tools, open sourcing the work we are all doing, and joining in the conversation with regards to direction and strategy, the community will always be waiting on the next update from the vendors who have chosen to build products on top of the system.

How do I get it?

First, you need to be a Tessitura Network licensee. Then, you just need access to the Tessitura Network code sharing site on BitBucket. You can get this by sending an email to web_dev@tessituranetwork.com. Once you are there, you can go here and download the code or the binaries to try it out on your system. The repository has a README with all the relevant info on how to install it from scratch, build it from source, and set things up in Mailchimp. If you don’t want to build from source, you can just download the compiled binaries and try it out.

How do I contribute?

If you are a Tessitura Network licensee, and you’ve gotten this far, read the README to get the full picture on how to fork the code and contribute. For the time being, feel free to log issues and send feature requests, and I will do my best to follow up on them and help get them resolved. Eventually, though, we hope that someone within the community will pick up the torch and help us continue to develop what we think is a really valuable integration and option for the broader Tessitura community.

Reminder: First, you need to be a Tessitura Network licensee. Then, you just need access to the Tessitura Network code sharing site on BitBucket. You can get this by sending an email to web_dev@tessituranetwork.com.

Long live RSS


I just made a new Tumblr. It’s called “Recently Digitized Design.” It took me all of five minutes. I hope this blog post will take me all of ten.

But it’s actually kinda cool, and here’s why. Cooper Hewitt is in the midst of a mass digitization project where we will have digitized our entire collection of over 215K objects by mid to late next year. Wow! 215K objects. That’s impressive, especially when you consider that probably 5000 of those are buttons!

What’s more, we now have a pretty decent “pipeline” up and running. This means that as objects are digitized and added to our collections management system, they automatically wind up on our collections website after working their way through a pretty hefty series of processing tasks.

Over on the West Coast, Aaron felt the need to make a little RSS feed of these “recently digitized” objects so we could all easily watch the new things come in. RSS, which stands for “Rich Site Summary”, has been around forever, and many have said that it is now a dead technology.

Lately I’ve been really interested in the idea of Microservices. I guess I never really thought of it this way, but an RSS or ATOM feed is kind of a microservice. Here’s a highlight from Sam Newman’s Building Microservices that explains this idea in more detail.

Another approach is to try to use HTTP as a way of propagating events. ATOM is a REST-compliant specification that defines semantics ( among other things ) for publishing feeds of resources. Many client libraries exist that allow us to create and consume these feeds. So our customer service could just publish an event to such a feed when our customer service changes. Our consumers just poll the feed, looking for changes.

Taking this a bit further, I’ve been reading this blog post, which explains how one might turn around and publish RSS feeds through an existing API. It’s an interesting concept, and I can see us making use of it for something just like Recently Digitized Design. It sort of brings us back to the question of how we publish our content on the web in general.
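
As a back-of-the-napkin sketch, publishing an RSS feed on top of an API really is only a few lines of PHP. Everything here is illustrative, and fetch_recent_objects() is a made-up helper standing in for whatever API call returns your newest records:

<?php
// A tiny RSS "microservice": wrap recent API records in an RSS 2.0 envelope.
// fetch_recent_objects() is hypothetical; swap in your own API call.
header('Content-Type: application/rss+xml; charset=utf-8');

$rss     = new SimpleXMLElement('<rss version="2.0"><channel/></rss>');
$channel = $rss->channel;
$channel->addChild('title', 'Recently Digitized Design');
$channel->addChild('link', 'https://collection.cooperhewitt.org/');
$channel->addChild('description', 'Newly photographed objects from the collection');

foreach (fetch_recent_objects() as $object) {
    $item = $channel->addChild('item');
    $item->addChild('title', htmlspecialchars($object['title']));
    $item->addChild('link', $object['url']);
    $item->addChild('guid', $object['url']);
    $item->addChild('pubDate', date(DATE_RSS, $object['digitized_at']));
}

echo $rss->asXML();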

In the case of Recently Digitized Design the RSS feed is our little microservice that any client can poll. We then use IFTTT as the client, and Tumblr as the output where we are publishing the new data every day. 

RSS certainly lives up to its nickname ( Really Simple Syndication ), offering a really simple way to serve up new data, and that to me makes it a useful thing for making quick and dirty prototypes like this one. It’s not a streaming API or a fancy push notification service, but it gets the job done, and if you log in to your Tumblr Dashboard, please feel free to follow it. You’ll be presented with 10-20 newly photographed objects from our collection each day.

UPDATE:

So this happened: https://twitter.com/recentlydigital

A new feature you may never see – ticketing follow up emails

A few weeks ago we rolled out a small update to the ticketing website that sends a reminder email to anyone who has purchased tickets in advance.

This is sort of an experiment. But first, some background.

Recently I attended an event at another cultural institution ( which I won’t name ). A few days prior to the event I received a very up-selling reminder email, reminding me of membership discounts and other events I might like. The links within the email took me to the landing page for the event, and offered little actual information that I found useful unless I wanted to buy even more tickets, or combine my purchase with a book in the shop.

To make things worse, on the night of the event, and just as it was finishing up, I received a “follow up email”, which I found really annoying. It was literally timed to send exactly as the event was ending, while I was on my way out the door, as if to say, “wait, come back in and buy the book too!”

In fact, the subject line read “How did I enjoy [insert event title here]?” But, the email itself didn’t offer me a way to answer that question ( even if I wanted to ) and instead simply pointed me to the same landing page of the event I had just attended, along with links to their social media channels and other upcoming shows I might be interested in. The whole thing made me cringe a little as I pressed the delete button on my phone.

I thought to myself, “let’s not do that.”

I really just wanted to send a gentle reminder email, full of actually useful info, to people who were planning to come visit the museum. I thought it would be nice, had I booked tickets in advance, to get something like this the day before my visit. Something with a map and some info on how to get here, and potentially a little synopsis of what I might do once I arrived.

So, here was my thought process.

Like I said, it’s an experiment, and so I’m still just sort of beta-testing this feature, and trying to analyze how useful/annoying people find it. We already get so many emails, so I really wanted to make sure I wasn’t bombarding our visitors with additional garbage, or even worse, confusing them with unneeded information like what I’d recently experienced.

First of all, it would be all about timing. While talking out loud in the Labs about this one, Aaron’s comment was simply “time zones.” Computers have time zones ( all of ours are set to UTC ), and people are in time zones. It was clearly something to consider.

Right now, we can only assume that you will be here sometime during our open hours on the day you purchased the ticket. We don’t presently do timed tickets, and unlike an event space, each day’s “performance” spans the entirety of our hours.

So we decided to try out sending the reminders the day before, at 4pm our time. I guess it’s generally safe to say that visitors are nearing our time zone the day before their visit, but it’s really still a best guess. Also, we are not going to “remind you” if you are booking for the same day, as that’s probably overkill. So for now, 4pm the day before your visit is when the email goes out.

Next I had some fun coming up with a way to extract all the relevant info from our Ticketing CRM ( Tessitura ).

I needed the following info:

  • All the things going on tomorrow ( this is sort of future proofing for when we let you book other things beyond general admission )
  • All the current orders for all the things going on tomorrow.
  • All the email addresses for all the orders for all the things going on tomorrow.

Getting “tomorrow” was pretty easy in PHP.

$datetime = new DateTime('tomorrow');
$tomorrow = $datetime->format('Y-m-d');

4pm EST is 9pm UTC on the same day, so all good there in calculating “tomorrow.”

Our Tessitura API wrapper I mentioned in my last post has a method that lets us get all the “performances” in Tessitura for a given date range. Simply passing it “tomorrow” yields us all the things we are looking for. We also have a method that can get all the orders placed for a given performance. Finally, we have a method that gets the email for the user that placed the order.

Now we can send the actual email.

Obviously, people place multiple orders and buy multiple tickets per order. I really only want to send one email regardless of what you’ve booked. So when I am looping through all the orders, I only add the email address to the list once.
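
In rough PHP, the whole loop looks something like this. The $tessitura wrapper’s method names and send_reminder() are made up for illustration; keying the array by address is what de-dupes the list:

<?php
// Collect one email per visitor across every performance and order for tomorrow.
// The $tessitura wrapper and send_reminder() are hypothetical stand-ins.
$emails = array();

foreach ($tessitura->get_performances($tomorrow, $tomorrow) as $performance) {
    foreach ($tessitura->get_orders($performance['id']) as $order) {
        $email = $tessitura->get_email($order['customer_id']);
        // Using the address as the key means each visitor is only added once.
        $emails[strtolower($email)] = true;
    }
}

foreach (array_keys($emails) as $email) {
    send_reminder($email); // this is where the actual sending happens
}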

The last step was to make a cron job that runs this script once a day at 4pm. Done!

( Incidentally, all of our servers are set to UTC, but for some reason our RedHat server’s crontab doesn’t seem to care, and somehow ( possibly magically ) thinks it’s on EST. I have yet to figure out why this is, but for now I am just going with it. )
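
For the record, the crontab entry is about as simple as they come ( the script path is a placeholder ):

# send ticket reminder emails every day at 4pm ( EST, apparently! )
0 16 * * * php /path/to/send-reminders.php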

Right now, the email is a fixed template. We are sending out emails via Mandrill, so we get some decent analytics and can track open rates, and click rates. We also added Google Analytics tracking codes to all the links in the email so we can see what people are clicking right in GA.
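
Tagging the links just means appending the standard Google Analytics campaign parameters to each URL in the template. Something like this, where the parameter values are our own naming and yours may differ:

<?php
// Append Google Analytics campaign parameters to a link in the email template.
$url    = 'https://cooperhewitt.org/visit/';
$params = http_build_query(array(
    'utm_source'   => 'mandrill',
    'utm_medium'   => 'email',
    'utm_campaign' => 'ticket-reminder',
));

echo $url . '?' . $params;
// https://cooperhewitt.org/visit/?utm_source=mandrill&utm_medium=email&utm_campaign=ticket-reminder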

So far we’ve experienced an open rate of about 75% and a click rate of about 20%, which seems pretty good to me.

Our open and click rate for the last 30 days.

And here are the GA results for the “Ticket Reminder” campaign from the same time period. From here you can dive deeper into the analytics to see what pages people are heading to once they are on the site, and all sorts of other metrics.


Since you can only really “see” this feature if you book an advance ticket, I’ve posted an image of what the email looks like below. We went through a few design iterations to get it to look the way it looks, and I’d really love to hear your thoughts about it. If you were visiting us, and received this email at 4pm the day before your visit, would you find it useful, annoying, or confusing?


Our reminder email template

Of course this will change again when the Pen goes live shortly.