Monthly Archives: March 2015

From concept to video prototype: the early form of the Pen

It was in late 2012 that the concept for the Pen was pitched to the museum by Local Projects, working then as subcontractors to Diller Scofidio + Renfro. The concept portrayed the Pen as an alternative to a mobile experience and, importantly, as a symbol meant to activate visitors.


Original concept for the Pen by Local Projects with Diller Scofidio + Renfro, late 2012.

“Design Fiction is making things that tell stories. It’s like science-fiction in that the stories bring into focus certain matters-of-concern, such as how life is lived, questioning how technology is used and its implications, speculating about the course of events; all of the unique abilities of science-fiction to incite imagination-filling conversations about alternative futures.” (Julian Bleecker, 2009)

In late 2013, Hanne Delodder and our media technologist, Katie Shelly, were tasked with making a short instructional video – a piece of ‘internal design fiction’ to help us expand the context of the Pen, beyond just the technology. (Hanne was spending three weeks observing work in the Labs courtesy of the Belgian Government as part of her professional development at Het Huis van Alijn, a history museum in Ghent.)

The video used the vWand from Sistelnetworks, an existing product that became the starting point from which the final Pen developed. At the time of production the museum had not yet begun the final development path that engaged Sistelnetworks, GE, Makesimply, Tellart and Undercurrent who would help augment and transform the vWand into the new product we now have.

The brief for the video was simply to create an instructional video of the kind that the museum might play in the Great Hall and on our website to show visitors how they might use the Pen. As it turned out, the video ended up being a hugely valuable tool in the ‘socialisation’ of the Pen as the entirety of the museum – from curators to security staff – started to get its head around the what/how/when, well before we had any working prototypes.

It ended up informing our design sprints with GE and Sistelnetworks which resulted in the form, operation and interaction design for the Pen; as well as a ‘stewardship’ sprint with SVA’s Products of Design where we worked through operational issues around distribution and return.

The video was also the starting point for the instructional video we ended up having produced that now plays online and in the Great Hall. You will notice that the emphasis in the final video has changed dramatically – focussing on collecting inside the museum and the importance of the visitor’s ticket (in contrast to the public collection of email addresses in the original).

The digital experience at Cooper Hewitt is supported by Bloomberg Philanthropies.

Things people make with our API #347: Nick Bartzokas

Shortly after Cooper Hewitt opened on December 12, 2014, the museum hosted a private event. At a preliminary scoping for the event, I bumped into Nick Bartzokas, who had written a spiffy little application that he was planning to use for visuals on the night. We got talking, and it turned out he’d made it using the Cooper Hewitt API – all with no prompting. Even though it didn’t end up being fully used, he has released it along with the source code.

Tell me a bit about yourself, what do you do, where do you do it?

I’m a creative coder. I like trying out new things. That’s led me to develop a wide variety of projects: educational games, music visualizations, a Kinect flight simulator, an interactive API-fed wall of Arduinos and Raspberry Pis. These days I’m making interactive installations for the LAB at Rockwell Group. I came to the LAB from the American Museum of Natural History, so museums are in my blood, too.

The LAB is a unique place. We’re a team of designers, thinkers, and technologists exploring ways to connect the digital with the physical.

Here are a couple of links to our work: (1 / 2)

You made a web app for an event at Cooper Hewitt, what was the purpose of it, what does it do?

Our friends at Metropolis celebrated their magazine’s redesign at the Cooper Hewitt in December 2014. The LAB worked on a one-night-only interactive installation that ran on one of the museum’s 84″ touchtables. We love to experiment, so when opportunities like this come up, we jump at the chance to pick up a new tool and create.

In preparation for the event, I decided to prototype using Phaser, a 2D JavaScript game framework. It markets itself as a tool for making web platformers, but it’s excellent for 2D projects of all kinds.

It gives you an update and render cycle that’s familiar territory for those who work with other game engines or creative coding toolkits like openFrameworks. It handles user input and asset management well. It has three physics engines of varying sophistication, from simple Arcade collisions to full-body physics. You can choreograph sprites using built-in tweening. It has Pixi integrated under the hood, which supplies fast graphics with useful shaders and the ability to roll your own. So, lots of range. It’s a great tool for rapid browser-based prototyping.
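The update/render split mentioned here can be sketched framework-agnostically. This is not Phaser code – just a minimal sketch of the shape of the loop that engines like Phaser or openFrameworks run on your behalf:

```javascript
// Framework-agnostic sketch of an update/render cycle. An engine like
// Phaser calls your update() and render() callbacks once per frame.
function makeLoop(state, update, render) {
  return function step(dtMs) {
    update(state, dtMs); // advance game logic: input, physics, tweens
    render(state);       // draw the current state
  };
}

// Example: a sprite drifting steadily right, like the photos
// "drifting like leaves on a pond" in the Metropolis prototype.
const state = { x: 0, vx: 0.05, frames: [] };
const step = makeLoop(
  state,
  (s, dt) => { s.x += s.vx * dt; },          // update: move the sprite
  (s) => { s.frames.push(Math.round(s.x)); } // render: record drawn position
);

step(16); // one frame at ~60 fps
step(16);
```

The value of the framework is that it drives this loop, timing and all, so the prototype code only has to fill in the two callbacks.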

The prototype we completed for the event brought Metropolis magazine’s digital assets to life. Photos drifted like leaves on a pond. When touched, they attracted photos of similar objects, assembling into flower petals and fans. If held, they grew excited until bursting apart. It ran in a fullscreened browser and was responsive to over 40 simultaneous touch points. Here’s that version in action.

For the other prototype, I used Cooper Hewitt’s API to generate fireworks made of images from the museum’s collection. Since the collection is organized by color, I could ask the API for all the red images in the collection and turn them into a red firework burst.

I thought this project was really cool, so while it wasn’t selected for the Metropolis event, I decided to complete it anyway and post it.

OMG! You used the Cooper Hewitt API! How did you find out about the API? What was it like to work with the API? What was the best and the worst thing about the API?

When the LAB begins a project, we start by considering the story. We were celebrating the Metropolis magazine redesign. Of course that was the main focus. But their launch party was being held at the Cooper Hewitt, and they wrote about Caroline Baumann of the Cooper Hewitt in their launch issue, so the museum was a part of the story. We began gathering source material from Metropolis and Cooper Hewitt. It was then that I re-discovered the Cooper Hewitt API. It was something I’d heard about in the buzz leading up to the museum’s reopening, but this was my first time encountering it in the wild.

You all did a great job! Working with the API was so straightforward. Everything was well designed. The API website is simple and useful. The documentation is clear and complete, with the ability to test-drive API methods in the browser. The structure of the API is sensible and intuitive. I taught a class on API programming for beginners. It was a challenge to select APIs with a low barrier to entry that beginners would be excited about and capable of navigating. Cooper Hewitt’s API is on my list now. I think beginners would find it quick, easy, and rewarding.

The pyramid diagram on the home page was a nice touch, a modest infographic with a big story behind it. It gives the newcomer a bird’s-eye view of the API, the new gallery apps, the redesigned museum – all the culmination of a tremendous collaboration.

The ability to search the collection by color immediately jumped out to me. That feature is just rife with creative possibilities. My favorite part, no doubt. In fact, I think it’s worth expanding on the API’s knowledge of color. It knows an image contains blue, but perhaps it could have some sense of how much blue the image contains, perhaps a color average or a histogram.
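The “how much blue” idea could be approximated client-side from raw pixel data. This is a hypothetical sketch, not anything the API provides: given flat RGBA data (as returned by a canvas `getImageData` call), estimate the fraction of pixels where blue dominates.

```javascript
// Hypothetical sketch of quantifying "how much blue" an image contains.
// rgba is a flat array of [r, g, b, a, r, g, b, a, ...] byte values.
function blueFraction(rgba) {
  let blue = 0;
  const total = rgba.length / 4;
  for (let i = 0; i < rgba.length; i += 4) {
    const r = rgba[i], g = rgba[i + 1], b = rgba[i + 2];
    if (b > r && b > g) blue++; // count pixels where blue dominates
  }
  return total ? blue / total : 0;
}

// Two pixels: one blue-dominant, one red-dominant.
const fraction = blueFraction([0, 0, 255, 255,  255, 0, 0, 255]); // 0.5
```

A full histogram (counts per color bucket) would be a straightforward extension of the same loop.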

In preparing a Node.js app to pull images for the fireworks, I checked to see if someone had written a Node module for the Cooper Hewitt API, expecting I’d have to write my own. I was pleasantly surprised to see that the museum’s own Micah Walter had authored one. That was another wow moment. When an institution opens up an API, that’s good. But this is really where Cooper Hewitt is building a bridge to the development community. It’s the little things.

So if others want to play with what you made where can they find it?

Folks can interact with the prototype here and they can peek at the source code on GitHub.

Thanks for having me, and congratulations on the API, the museum’s reopening, and a job well done!

We choose Bao Bao!

So, the Pen went live on March 10. We’re handing them out to every visitor and people are collecting objects all over the place. Yay!

The Pen not only represents a whole world of brand-new for the museum but an equally enormous world of change for staff and the ways they do their jobs. One of the places this has manifested itself is the awkward reality of being able to collect an object in the galleries only to discover that the image for that object – or, sometimes, the object itself – still hasn’t been marked as public in the collections database.

It’s unfortunate but we’ll sort it all out over time. The more important question right now is how we handle objects that people have collected in the galleries (that are demonstrably public) but whose ground truth hasn’t bubbled back up to our own canonical source of truth.

In the early days, when we were building and testing the API methods for recording the objects that people collected, the site would return a freak-out-and-die error the moment it encountered something that a visitor didn’t have permission to see. This is a pretty normal approach in software and systems development but it made testing the overall system complicated and time-consuming.

In the interest of expediency we replaced the code that threw a temper tantrum with code that effectively said la la la la la… I can’t hear you! If a visitor tried to collect something that they didn’t have permissions to see we would simply drop it on the floor and pretend it never happened. This was useful in fleshing out the rest of the overall workflow of the system but we also understood that it was temporary at best.


Allowing a user to collect something in the gallery and then denying any evidence of the event on their visit webpage would be… not good. So now we record the item being collected but we also record a status flag next to that event assuming that the disconnect between reality and the database will work itself out in favour of the visitor.
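The shape of that approach might look something like the following. This is a sketch, not the museum’s actual code; the function, field names, and URL scheme are all hypothetical:

```javascript
// Hypothetical sketch of recording a collect event with a status flag,
// so the event survives even when the object isn't public yet.
function recordCollect(visit, objectId, isPublic) {
  const event = {
    objectId: objectId,
    collectedAt: Date.now(),
    // Assume the disconnect resolves in the visitor's favour later.
    status: isPublic ? 'ok' : 'pending_public',
    // Hypothetical permalink scheme: the event is shareable either way.
    permalink: '/visits/' + visit.id + '/items/' + objectId
  };
  visit.items.push(event);
  return event;
}

const visit = { id: '1234', items: [] };
const pending = recordCollect(visit, '18382391', false);
const ok = recordCollect(visit, '18382392', true);
```

The key design choice is that nothing is dropped: every collect gets a permalink immediately, and the flag marks which records are waiting on the database to catch up.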

It also means that the act of collecting an object still has a permalink; something that a visitor can share or just hold on to for future reference even if the record itself is incomplete. And that record exists in the context of the visit itself. If you can see the other objects that you collected around the same time as a not-quite-public-yet object then they can act as a device to remember what that mystery thing is.

Which raises an important question: What should we use as a placeholder? Until a couple of days ago this is what we showed visitors.


Although the “Google Street View Cat” has a rich pedigree of internet meme-iness, it remains something of an acquired taste. This was a case of early debugging and blowing-off-steam code leaking into production. It was also the result of a bug ticket that I filed for Sam on January 21 being far enough down the list of things to do before and immediately after the launch of the Pen that it didn’t get resolved until this week. The ticket was simply titled “Animated pandas”.

As in, this:

This is the same thread that we’ve been pulling on ever since we started rebuilding the collections website: When we are unable to show something to a visitor (for whatever reason) what do we replace the silence with?

We choose Bao Bao!