
Making ‘Dive into Color’

Guest post by Olivia Vane

‘Dive into Color’ is an interactive timeline for exploring the Cooper Hewitt collection by colour/colour harmony and time. It is exhibited in ‘Saturated: The Allure and Science of Color’ 11 May 2018 – 13 Jan 2019.

Since spending time at Cooper Hewitt last year on a fellowship, I returned to London where I’m a PhD student at Royal College of Art (RCA) in Innovation Design Engineering supervised by Professor Stephen Boyd Davis. At Cooper Hewitt, I developed a prototype timeline tool for visualising the museum collection by tags.

This post describes further work on that prototype, shifting the tool to exploring the collection by colour. As a curator explained: “[visualising by] colour, I think, is useful for the purposes of the study of the taste for different colours, but it’s also a more interesting exercise for the public just to be able to do and get pleasure out of.” Colour is enjoyable – it’s eye-catching and vibrant – and it offers a visual, intuitive way to explore a digitised collection without needing specialist knowledge. With a design collection like Cooper Hewitt’s, tracing colour through history is also interesting for looking at fashions and innovation in colour technology.

I’ve been asked a few times recently what my process is for designing visualisations. So in this post I’m going to step through the early prototypes and retrace my design decisions. Along the way, I will talk through practical points for working with colour (and colour harmonies!) in collection data, and working between digital and artist colour systems.

Previous colour-collections visualisations
Colour has previously been used both as a facet for search and for visualising collections. Geoff Hinchcliffe’s ‘Tate Explorer’ offers colour as a search facet paired with a timeline. This prototype from the Swedish National Heritage Board (write-up forthcoming) combines colours and tags for exploring a fashion collection. Richard Barrett-Small’s ‘ColourLens’ searches over Rijksmuseum and Walters Art Museum data by colour with a graphic for each item visualising its colour proportions. And Google Arts & Culture’s ‘Art Palette’ is a search engine that finds artworks based on a chosen colour palette.

Collections visualised by colour include the Library of Congress, where Laura Wrubel created this tool for overviewing the colour palettes of objects across a collection, and Jer Thorp visualised the colour names present in the titles of works. Also using Cooper Hewitt data, Rubén Abad created this visualisation of the colours present by decade in Cooper Hewitt’s objects. Lev Manovich has visualised artworks, for example Mondrian and Rothko paintings, by colour characteristics including hue and saturation. Everardo Reyes visualised Paul Klee’s paintings by hue. And Brian Foo’s visualisation of the New York Public Library digitised collection has an option to organise items by colour.

I was interested to explore colour alongside the time dimension. And since Cooper Hewitt was preparing for an exhibition, ‘Saturated’, focusing on colour theory and design, I was intrigued to see if I could trace colour harmonies across the collection.

Colour harmonies are combinations of colours that are pleasing to the eye. The relative positions of colours in a colour wheel can be used to describe different harmonies like complementary colours (opposites on the colour wheel), or analogous colours (neighbours on the colour wheel). Artists and designers create different visual effects and contrasts with different harmony types.

Colour harmony examples, image from studiobinder

Extracting colours across the Cooper Hewitt collection
It’s already possible to search by colour on Cooper Hewitt’s collection site. Colour data was extracted using Giv Parvaneh’s great tool RoyGBiv (described in this Cooper Hewitt Labs post). Roughly, RoyGBiv works by checking the colour value of each pixel in an image, clustering colour values that are similar enough to be considered the same and returning up to 5 dominant colours in an image.
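RoyGBiv does the hard work here, but to make the idea concrete, here is a minimal sketch of dominant-colour extraction by clustering pixel values. It is not the actual RoyGBiv code; the use of k-means and the Pillow/scikit-learn libraries are my own choices for illustration.

```python
# Minimal sketch of dominant-colour extraction in the spirit of RoyGBiv
# (not the actual RoyGBiv code): cluster pixel colours and report the
# biggest clusters as hex codes. Assumes Pillow, NumPy and scikit-learn.
from PIL import Image
import numpy as np
from sklearn.cluster import KMeans

def dominant_colours(path, n_colours=5):
    img = Image.open(path).convert("RGB")
    img.thumbnail((200, 200))                 # downsample for speed
    pixels = np.asarray(img).reshape(-1, 3)

    km = KMeans(n_clusters=n_colours, n_init=10).fit(pixels)
    counts = np.bincount(km.labels_)
    order = np.argsort(counts)[::-1]          # most common cluster first

    return ["#%02x%02x%02x" % tuple(int(c) for c in km.cluster_centers_[i])
            for i in order]

print(dominant_colours("object.jpg"))         # e.g. ['#d8c9a3', '#2b3a4f', ...]
```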

The colours extracted from Cooper Hewitt’s collection with RoyGBiv are good on the whole, but errors sometimes occur. The background colour is sometimes picked up. The effect of light and shadow on a 3D object can introduce multiple, illusory colours:

As always there are quirks working with digitised collections, like these Dutch tiles which had coloured stickers on them when they were photographed:

Or lace photographed against a dark background for contrast:

Since the possible number of unique colours extracted across the collection is huge, searching by colour on the Cooper Hewitt website is simplified by snapping extracted colours to the closest value in a standardised palette (the default is the CSS4 web colour palette, but the CSS3 and Crayola palettes are also options). On the Cooper Hewitt website, you can search the collection by 116 CSS4 colours. Both the original and ‘snapped to’ colour palettes are available in the Cooper Hewitt data – all stored as hex codes (six hexadecimal digits representing the levels of red, green and blue).
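As a rough illustration of the ‘snapping’ step, here is a small sketch that finds the closest palette colour by distance in RGB space. The three palette entries are just a sample, and the real tool may well measure colour distance differently.

```python
# Sketch of 'snapping' an extracted colour to the closest entry in a named
# palette. The three palette entries here are just illustrative; the real
# CSS3/CSS4/Crayola palettes are much larger, and the actual tool may
# measure distance in a different colour space than plain RGB.
CSS_SAMPLE = {"orangered": "#ff4500", "steelblue": "#4682b4", "olivedrab": "#6b8e23"}

def hex_to_rgb(h):
    h = h.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def snap_to_palette(hex_colour, palette=CSS_SAMPLE):
    r, g, b = hex_to_rgb(hex_colour)
    def dist(name):
        pr, pg, pb = hex_to_rgb(palette[name])
        return (r - pr) ** 2 + (g - pg) ** 2 + (b - pb) ** 2
    return min(palette, key=dist)

print(snap_to_palette("#ff5a1f"))   # -> 'orangered'
```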

Prototyping
As a first step, I adapted my code to visualise collection items matching a CSS4 colour on a timeline (see visualisations below).

Although Cooper Hewitt has an API (an Application Programming Interface: a way for someone to make use of Cooper Hewitt’s data through a set of pre-defined requests made over the web), there is no method for returning all the objects matching a colour. Instead, I used collection data Cooper Hewitt had put on GitHub – an argument for institutions to offer both!

‘Orangered’

‘Steelblue’

‘Olivedrab’

I then started exploring how I might visualise objects featuring a colour harmony, first trying complementary colours (opposites on the colour wheel).

I initially tried to do this by matching a chosen CSS4 colour with the nearest CSS4 colour of opposite hue. The HSL and HSV (hue, saturation, lightness/value) colour systems define hue as an angle round a circle (0-360°), so I inverted hues by converting hex codes to HSL. The visualised results were unsatisfying though, as often the search failed to find any matches. That’s not to say there was a lack of objects with complementary colours, but rather that my search was too precise (and artificially precise, since which CSS4 palette colours can be considered complementary is fuzzy, and the colour data is imprecise anyway, e.g. the illusory colours extracted for 3D objects).
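For reference, here is roughly what that hue inversion looks like in code. This is a sketch of the approach described above rather than the project’s actual code (which ran in the browser); Python’s colorsys module uses HLS ordering, with all values between 0 and 1.

```python
# Sketch of the complementary-hue lookup described above: convert a hex code
# to HSL, rotate the hue 180 degrees, and convert back. Python's colorsys
# module uses HLS ordering (hue, lightness, saturation) with values in 0-1.
import colorsys

def hex_to_rgb01(h):
    h = h.lstrip("#")
    return tuple(int(h[i:i + 2], 16) / 255.0 for i in (0, 2, 4))

def complementary_hex(hex_colour):
    r, g, b = hex_to_rgb01(hex_colour)
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    h = (h + 0.5) % 1.0                       # +180 degrees on the wheel
    r2, g2, b2 = colorsys.hls_to_rgb(h, l, s)
    return "#%02x%02x%02x" % tuple(round(c * 255) for c in (r2, g2, b2))

print(complementary_hex("#ff4500"))           # orangered -> a cyan-blue
```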

I tried extending the reach of my search to matching several colours close to the inverted hue, but it felt very frustrating not to have a visual reference to the range of colours in a region and what colours were being searched over.

So I started experimenting with using a colour wheel input as a way to pick colour combinations and simultaneously see possible hue relations. I first tried mapping the colours from the standardised palettes by HSL round a circle.

‘Snapped to’ colours in the Cooper Hewitt collection. CSS4 (left) and Crayola (right) palettes mapped by hue (in HSL). Angle = hue, radius = lightness.

And to make it easier to see the possible colours, I wrote some code to map the CSS4 palette colours to a wheel.

CSS4 palette colours → mapped round a wheel. Hue (HSL) = angle, ordered by lightness
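The layout logic is simple enough to sketch: hue becomes the angle round the wheel and lightness the radius, as in the first mapping above. The actual wheel was drawn in the browser; this just computes the positions.

```python
# Sketch of laying palette colours out on a wheel: hue -> angle,
# lightness -> radius. Only computes (x, y) positions; drawing is left
# to whatever charting library is in use.
import colorsys, math

def wheel_position(hex_colour, radius_px=200):
    h = hex_colour.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) / 255.0 for i in (0, 2, 4))
    hue, lightness, _ = colorsys.rgb_to_hls(r, g, b)
    angle = hue * 2 * math.pi
    x = math.cos(angle) * lightness * radius_px
    y = math.sin(angle) * lightness * radius_px
    return x, y

print(wheel_position("#6b8e23"))   # olivedrab lands in the yellow-green sector
```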

I realised at this point, though, that the resulting design doesn’t match a typical artist’s/pigment colour wheel (which has red, yellow, blue – RYB – primary colours). HSL is a simple transformation of RGB colour space, and therefore the wheel has red, green, blue primaries. If this colour wheel is used to search over design artefacts, surely it would be more appropriate to use a design closer to the norm for artists and designers? (These in-depth articles by David Briggs – part 1, part 2 – explain the differences between traditional and modern colour theory, and colour training for artists).

There is no ‘correct’ colour wheel to adopt, but converting my HSL colour wheel to something closer to an artist’s version (using code from Ben Knight’s implementation of Adobe’s ‘Kuler’ colour wheel) felt like a reasonable compromise here.

(Left) Colours mapped by hue (HSL). Angle = hue, radius = lightness. (Right) Colour mapping adjusted to more closely resemble an artist’s colour wheel.

Using this colour wheel as a guide and an input, I could see and choose which colours to query. By searching over multiple colours, the visualised results were better. (In the images below, white and black borders around tiles in the colour wheel indicate the searched-over colour combinations):

Purple against olive timeline

Orangered against cyan/blue timeline

Querying against colour data in HSV
While the results looked better with this prototype, the user interface was a mess and complicated to use. And the search query did not exclude objects that featured other colours in addition to the searched colour combination.

Sticking with the CSS4 palette was greatly complicating the task, so I abandoned using it. I converted all the original extracted colours (not ‘snapped’) from hex codes to HSV and created my own Elasticsearch index with the HSV colours stored as a nested datatype. This way I could search over a hue range with thresholds on saturation and value, exclude objects also featuring other hues, and easily define more complex colour harmony searches (e.g. analogous, triadic, quadratic and split complementary).
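To give a sense of what such a query looks like, here is a hedged sketch of a hue-range search against a nested colour field. The index name, field names and thresholds are assumptions for illustration, not the actual ‘Dive into Color’ mapping, and excluding objects that also feature other hues would need an extra must_not clause on top of this.

```python
# Hedged sketch of a hue-range query over HSV colours stored as a nested
# field. Index/field names ("objects", "palette", "palette.h/s/v") are
# assumptions for illustration, not the actual mapping used in the project.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def objects_with_hue(hue, spread=15, min_s=0.4, min_v=0.4):
    """Objects featuring a colour within +/- `spread` degrees of `hue`,
    above minimum saturation and value thresholds. (Wrap-around at 0/360
    would need an extra clause, left out here for brevity.)"""
    body = {
        "query": {
            "nested": {
                "path": "palette",
                "query": {
                    "bool": {
                        "must": [
                            {"range": {"palette.h": {"gte": hue - spread,
                                                     "lte": hue + spread}}},
                            {"range": {"palette.s": {"gte": min_s}}},
                            {"range": {"palette.v": {"gte": min_v}}},
                        ]
                    }
                },
            }
        }
    }
    return es.search(index="objects", body=body)   # 7.x-style client call

hits = objects_with_hue(210)   # blues
```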

Different colour harmonies tried in prototyping

Colour wheel graphic for objects
As a by-product, I realised I could repurpose my code to map individual object palettes round a colour wheel too. Thus, you get a compact graphic describing the colour relationships present in a single design. This is a nice example of the serendipity of designing, where you identify new possibilities as a result of seeing what you have already made.



(Above) Object with colour palette, (below) palette mapped to a wheel: easy to see complementary harmony


Colour wheel graphic examples

I adapted a simple artist’s (RYB) colour wheel to use as an input and tested the prototype with visitors at Royal College of Art’s January 2018 ‘Work in Progress’ show.

Testing the prototype with visitors at Royal College of Art’s Jan 2018 ‘Work in Progress’ show

In order to avoid reducing the size of the images (so it’s still possible to see what the objects are), I’ve capped the number of visualised objects to the 100 most saturated in colour.

There were few hits for the more complex harmonies (quadratic, tetradic, split complementary) and the results felt less convincing. I had widened the hue range to search over in order to increase the small number of hits, so the results were less visually cohesive anyway. And, in conversations with the museum curators, we decided to drop these more complex harmonies from the interface.

As mentioned earlier, there are some errors in the colour data. At this stage, since this setup only allows a fixed set of possible searches, with repeatable results, it was worth it for me to do some manual editing of the colour data to remove obvious errors.

Final interface design
For the final (more polished!) interface design, which is now on display, I settled on a colour wheel input inspired by this Hilaire Hiler design in Cooper Hewitt’s collection. (This wheel actually has 4 ‘psychological’ colour primaries and features 30 hues). The interface has 4 harmony options: monochromatic, complementary, analogous and spectrum (a rainbow colour option).

Color wheel picker, inspired by Hiler’s design, used in ‘Dive into Color’

 

‘Dive into Color’ installed at Cooper Hewitt. Photo credit: Caroline Koh Smith

‘Dive into Color’ installed at Cooper Hewitt. Photo credit: Caroline Koh Smith

What does the tool reveal?
Visualising the Cooper Hewitt data this way gives some sense of when colours appear in time. There are no results for purple pre-19th century. Perhaps because of the difficulty and expense of producing purple before synthetic dyes/pigments were developed in the 19th century?

Though, as is often the case when interpreting collection visualisations, it is difficult to disentangle historical trends from the biases and character of what has been collected and how it has been catalogued. (And bear in mind I’m only visualising the 100 objects most saturated in colour for a search). For example, these green and purple Japanese prints of irises are clearly part of a set rather than indicating some colour trend around 1910. Using the images themselves as data points is helpful for diagnosing this.

The same purple-green visualisation demonstrates how the tool can connect artefacts across time, in this case by similar colour scheme/design:

   
(Left) Frieze (USA), 1890–1910; Manufactured by Hobbs, Benton & Heath, (right) Sidewall, Anemone, 1960–66; Designed by Phoebe Hyde 

The tool surfaces colours, used in a particular material, that are strongly attached to design types. For example these English vivid blue and white late-18th century ceramic buttons/medallions:


(From left) Medallion (England), late 18th century, stoneware; Medallion (England), late 18th century, stoneware; Button (England), late 18th century, stoneware

Blue and white ceramics manufactured in the Netherlands in the late-17th century, early-18th century:

  
(From left) Plate (Netherlands), 1675–1725, tin-glazed earthenware; Plaque (Netherlands), 1675–1725, tin-glazed earthenware; Obelisk (Netherlands), ca. 1700–25, tin-glazed earthenware

And French red and white textiles in the late 18th/ early 19th century:


(From left) Textile (France), late 18th century, cotton; Textile (France), ca. 1850, cotton; Textile (France), 18th century, cotton

Visualising blue-yellow shows more saturated colour from mid-19th century onwards. Is this signalling changing fashion, or the availability of new synthetic dyes/pigments? Can we connect the more saturated harmony in designs from the mid-1800s with Chevreul’s influential text from the time ‘The Law of Simultaneous Colour Contrast’, describing how colour harmony can be used to create a more vibrant effect? Though a number of the earlier objects are textiles and the colours will have faded over time. Possibly a combination of these factors is at play here.

User evaluations
While I’ve discussed historical trends and the Cooper Hewitt collection, I’m also interested in how others might use a tool like this in their own projects, and with other collections. I conducted a number of interviews around this tool design with history of design students and colour history specialists, exploring their impressions of it and if/how they might use it in their own work. (I was also interested to talk with designers about using a tool like this for design inspiration, but struggled with recruiting!)

The history of design students (Masters students in History of Design from the Royal College of Art/Victoria & Albert Museum programme) discussed examples from their own work where such a tool could be useful. Example projects included: tracing the use of blue through time in anti-vaccination movement posters to convey trust; or pink clothing in the history of women’s protest movements. In both these examples a hue, rather than a more specific colour, was of interest. Out of these conversations, the most useful features to add would be a filter by object type and a way to narrow down the time period.

For the specialist colour audience, though, this tool design has some issues. Not least because I do not know how accurately the photos represent the true colours of the objects (what lighting conditions the photographs were taken in, if they have been retouched etc.). While the overall extracted colours seem generally good, they may not be precise enough for some. The control in searching by colour – only by hue – may also be too limited for some needs.

Seeing is believing?
Colour data is computationally extracted, in contrast with manually added metadata. Do these different cases require different considerations for designing visualisations?

In this post, I’ve used arguments like it ‘looked better’, or the results were ‘more satisfying’ to explain my design decision-making. Working with colour data I knew had errors, I was more comfortable adjusting parameters in my search queries and editing obviously wrong colour data to return what looked ‘better’ to me. For colour, you can immediately see when images appearing in the visualisation don’t match. (Though, of course, looking at the visualised results will not tell you if there are absent items). In interviews, I asked whether this data massaging to produce more satisfying results bothered interviewees, and was often told that it didn’t, but that they would worry if obvious errors appeared in the visualisation.

While prototyping the design in conversation with curators at Cooper Hewitt, we discussed the possibility of different versions of this tool: offering more control in search and not massaging parameters for in-depth researchers. But there is also value in visually satisfying results. As a curator expressed it: “We have a large number of professional designers and design students who come here … Just seeing beautiful examples of how people have used particular colour schemes is research. So the visually satisfying… seeing the most compelling works has a value as well. For the professional designers who say, ‘wow, this is really incredible use of this colour scheme. I want to share this with my students.’”

‘Dive into Color’ has since been exhibited in the London Design Festival 2018, and will hopefully go online at some point. Any feedback is very welcome: olivia.fletcher-vane@network.rca.ac.uk

Many thanks to Cooper Hewitt for their help with this project: especially Pamela Horn, Jennifer Cohlman Bracchi, Susan Brown and the technical team for getting ‘Dive into Color’ up and running in the galleries. Thanks to Neil Parkinson who showed me the Colour Reference Library at RCA, to Dr Alexandra Loske, Patrick Baty and RCA students for their thoughts, and to Jonny Jiang for help with the final UI design. And thanks to Stephen Boyd Davis for his continued help and support!

 

Parting gifts

Today is my last day at Cooper Hewitt, Smithsonian Design Museum. It’s been an incredible seven years. You can read all about my departure over on my personal blog if you are interested. I’ve got some exciting things on the horizon, so would love it if you’d follow me!

Parting gifts

Before I leave, I have been working on a couple last “parting gifts” which I’d like to talk a little about here. The first one is a new feature on our Collections website, which I’ve been working on in the margins with former Labs member, Aaron Straup Cope. He did most of the work, and has written extensively on the topic. I was mainly the facilitator in this story.

Zoomable Object Images

The feature, which I am calling “Zoomable Object Images” is live now and lets you zoom in on any object in our collection with a handy interface. Just go to any object page and add “/zoom” to the URL to try it out. It’s very much an experiment/prototype at this point, so things may not work as expected, and not ALL objects will work, but for the majority that do, I think it’s pretty darn cool!

Here’s an example, zoomed all the way in – https://collection.cooperhewitt.org/objects/18382347/zoom

Notice that there is also a handy camera button that lets you grab a snapshot of whatever you have currently showing in the window!

Click on the camera to download a snapshot of the view!

This project started out as a fork of some code developed around the relatively new IIIF standard that seems to be making some waves within the cultural sector. So, it’s not perfectly compliant with the standard at the moment (Aaron says “there are capabilities in the info.json files that don’t actually exist, because static files”), but we think it’s close enough.

You can’t tell from this image, but you can pinch and zoom this on your mobile device.

To do the job of creating zoomable image tiles for over 200,000 objects, we needed a few things. First, we built a very large EC2 server. We tried a few instance sizes and eventually landed on an m4.2xlarge which seemed to have enough CPU and RAM to get the job done. I also added a terabyte of storage to the instance so we’d have enough room to store all the images as we processed them.

Aaron wrote the code to download all of our high res image files and process them into tiles. The code is designed to run with multiple threads so we can get the most efficiency out of our big-giant-ec2 server as possible (otherwise we’d be waiting all year for the results). We found out after a few attempts that there was one major flaw in the whole operation. As we started to generate thousands and thousands of small files, we also started to run up against the limit of inodes available to us on our boot drive. I don’t know if we ever really figured this out, but it seemed to be that the OS was trying to index all those tiny files in some way that was causing the extra overhead. There is likely a very simple way to turn that off, but in the interest of getting the job done we decided to take an alternative route.

Instead of processing and saving the images locally and then transferring them all to our S3 storage bucket, Aaron rewrote the code so that it would upload the files as they were processed. S3 would be their final resting place in any case, so they needed to get there one way or another and this meant that we wouldn’t need so much storage during processing. We let the script run and after a few weeks or so, we had all our tiles, neatly organized and ready to go on S3.
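The real tile-cutting code is Aaron’s go-iiif (linked below), so the following is only a sketch of the general pattern described here: several worker threads, with each tile pushed straight to S3 rather than piling up millions of tiny files (and inodes) on the local disk. The bucket, keys and tile scheme are placeholders, and a proper IIIF pyramid involves more than a single zoom level.

```python
# Sketch only: process images in several threads and upload each tile
# straight to S3 instead of keeping it on local disk. Bucket and key
# names are placeholders; the real work was done by go-iiif.
import io
from concurrent.futures import ThreadPoolExecutor

import boto3
from PIL import Image

s3 = boto3.client("s3")
BUCKET = "example-tiles-bucket"   # placeholder

def tile_and_upload(image_path, object_id, tile_size=256):
    img = Image.open(image_path)
    for x in range(0, img.width, tile_size):
        for y in range(0, img.height, tile_size):
            tile = img.crop((x, y, min(x + tile_size, img.width),
                             min(y + tile_size, img.height)))
            buf = io.BytesIO()
            tile.save(buf, format="JPEG")
            buf.seek(0)
            key = f"tiles/{object_id}/{x}_{y}.jpg"
            s3.upload_fileobj(buf, BUCKET, key)   # straight to S3, nothing kept locally

with ThreadPoolExecutor(max_workers=8) as pool:
    for object_id, path in [("18382347", "18382347.jpg")]:   # really ~200,000 images
        pool.submit(tile_and_upload, path, object_id)
```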

Last but not least, Aaron wrote some code that allowed me to plug it into our collections website, resulting in the /zoom pages that are now available. Woosh!

Check out the code here https://github.com/thisisaaronland/go-iiif – and dive into the tl;dr discussion over here if you’re into this kinda thing.

Cooper Hewitt in a box

The second little gift is around some work I’ve been doing to try and make developing and maintaining code at Cooper Hewitt a tiny bit better.

Since we started all this work we’ve been utilizing a server that sits on the third floor (right next to Josue) called, affectionately, “Bill.” Bill (pictured above) has been our development machine for many years, and has also served as the server in charge of extracting all of our collection data and images from TMS and getting them published to the web. Bill is a pretty important piece of equipment.

The pros to doing things this way are pretty clear. We always have, at the ready, a clone of our entire technology stack available to use for development. All a developer needs to do is log in to Bill and get coding. Being within the Smithsonian network also means we get built-in security, so we don’t need to think about putting passwords in place or trying to hide in plain sight.

The cons are many.

For one, you need to be aware of each other. If more than one developer is working on the same file at the same time, bad things can happen. Sam sort of fixed this by creating individual user instances of the major web applications, but that requires a good bit of work each time you set up a new developer. Also, you are pretty much forced to use either Emacs or Vi. We’ve all grown to love Emacs over time, but it’s a pain if you’ve never used it before as it requires a good deal of muscle memory. Finally, you have to be sitting pretty close to Bill. You at least need to be on the internal network to access it easily, so remote work is not really possible.

To deal with all of this and make our development environment a little more developer friendly, I spent some time building a Vagrant machine to essentially clone Bill and make it portable.

Vagrant is a popular system amongst developers since it can easily replicate just about any production environment, and allows you to work locally on your MacBook from your favorite coffee shop. Usually, Vagrant is set up on a project-by-project basis, but since our tech stack has grown in complexity beyond a single project (I’ve had to chew on lots of server diagrams over the years), I chose to build more of a “workspace.”

I got the idea from Dan Phiffer at Mapzen who did the same for their Who’s on First project.

Essentially, there is a simple Vagrantfile that builds the machine to match our production server type, and then there is a setup.sh script that does the work of installing everything. Within each project repository there is also a /vagrant/setup.sh script that installs the correct components, customized a little for ease of use within a Vagrant instance. There is also a folder in each repo called /data with tools that are designed to fetch the most recent data-snapshots from each application so you can have a very recent clone of what’s in production. To make this as seamless as possible, I’ve also added nightly scripts to create “latest” snapshots of each database in a place where the Vagrant tools can easily download them.

This is all starting to sound very nerdy, so I’ll just sum it up by saying that now anyone who winds up working on Cooper Hewitt’s tech stack will have a pretty simple way to get started quickly, will be able to use their favorite code editor, and will be able to do so from just about anywhere. Woosh!

Lastly

Lastly, it’s been an incredible journey working here at Cooper Hewitt and being part of the “Labs.” We’ve done some amazing work and have tried to talk about it here as openly as possible–I hope that continues after I’m gone. Everyone who has been a part of the Labs in one way or another has meant a great deal to me. It’s a really exciting place to be.

There is a ton more work to do here at Cooper Hewitt Labs, and I am really excited to continue to watch this space and see what unfolds. You should too!

Thanks for everything.

-micah

Traveling our technology to the U.K.

Visitors to the London Design Biennale use our “clone” of the Wallpaper Immersion Room.

Recently, we launched a major initiative at the inaugural London Design Biennale at Somerset House. The installation was up from September 7th through the 27th and now that it has closed and the dust has settled, I thought I’d try and explain the details behind all the technology that went into making this project come alive.

Quite a while back, an invitation was extended to Cooper Hewitt to represent the United States in the London Design Biennale, an exhibition featuring 37 countries from around the world. Our initial idea was to spin up a clone of our very popular “Wallpaper Immersion Room” and hand out Cooper Hewitt Pens.

The idea of traveling our technology outside the walls of the Carnegie Mansion has been of great interest to the museum ever since we reopened our doors in 2014. The process of figuring out how to make our technology portable, and have it make sense in different environments and contexts was definitely a challenge we were up for, and this event seemed like the perfect candidate to put that idea through its paces.

So we started out gathering up the basic requirements and working through all that would be needed to make it all come together, including some very generous support from the Secretary of the Smithsonian and the Smithsonian National Board, Bloomberg Philanthropies, and Amita and Purnendu Chatterjee.

The short version is, this was a huge undertaking. But it all worked in the end, and visitors at the first-ever London Design Biennale were able to use Cooper Hewitt Pens to explore 100 wallpapers from our collection, create their own designs and save them. Plus, visitors could collect and save installations from other Biennale participants.

Thanks to a whole bunch of people, there's an Immersion Room in London @london_design_biennale #ldb16

A photo posted by Micah Walter (@micahwalter)

Thanks to a whole lot of people, there are @cooperhewitt pens in London @london_design_biennale #ldb16

A photo posted by Micah Walter (@micahwalter)

The long version is as follows.

An Immersion Room in England

First and foremost, we wanted to bring the Immersion Room over as our installation for the London Design Biennale. So, let’s break down what makes the Immersion Room what it is.

The original Immersion Room, designed by Cooper Hewitt and Local Projects, made its debut when the museum reopened in December 2014, following a major renovation. It is essentially an interactive experience where visitors can manipulate a digital interactive touch-table to browse our collection of wallpapers and view them at scale, in real time, via twin projectors mounted to the ceiling. Additionally, visitors can switch into design mode and create their own wallpapers; adjusting the scale, orientation, and positioning of a repeating pattern on the wall. This latter feature is arguably what makes the experience what it is. Visitors from all walks of life love spending time drawing in the Immersion Room, typically resulting in a selfie or two like the ones you see in the images below.

#cooperhewitt #londondesignbiennale

A photo posted by Sibel Yalcin (@sibellyalcin)

#londondesignbiennale #immersionroom #cooperhewitt #doodling

A photo posted by Helen (@helen3210)

What I’ve just described is essentially the minimum viable product for this entire effort. One interactive table, two ceiling mounted projectors, a couple of computers, and a couple of walls to project on.

From bar napkin to fabrication–we've managed to clone the Immersion Room!

A photo posted by Micah Walter (@micahwalter)

The Immersion Room uses two separate computers, each running an application written in OpenFrameworks. There is the “projector app,” which manages what is displayed to the two projectors, and there is the “table app,” which manages what visitors see and interact with on the 55” Ideum table. The two apps communicate with each other over a local network, with the table app essentially instructing the projector app with what it should be displaying in real time.

Here is a basic diagram of how that all fits together.

Twin projector and computer setup for Wallpaper Immersion Room


Each application loads in content on startup. This is provided to the application by a giant json file that is managed by our Collections API and meant to be updated each night through a cron job. When the applications start up, they look at the json file and pull down any new or changed assets they might need.
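In other words, something conceptually like the sketch below runs on a schedule: fetch the manifest, download anything new or changed, and leave the JSON where the apps can load it on startup. The URL and field names here are placeholders rather than the actual feed the tables use.

```python
# Hedged sketch of the nightly content refresh described above: fetch the
# latest JSON manifest and download any image assets that are new or have
# changed. The URL and JSON field names are placeholders.
import json, os, hashlib
import requests

MANIFEST_URL = "https://example.org/immersion-room/content.json"   # placeholder
ASSET_DIR = "assets"

def sha1(path):
    h = hashlib.sha1()
    with open(path, "rb") as f:
        h.update(f.read())
    return h.hexdigest()

def refresh_content():
    manifest = requests.get(MANIFEST_URL, timeout=30).json()
    os.makedirs(ASSET_DIR, exist_ok=True)

    for item in manifest["wallpapers"]:                 # placeholder field name
        local_path = os.path.join(ASSET_DIR, item["filename"])
        # Re-download if the file is missing or its checksum has changed.
        if not os.path.exists(local_path) or sha1(local_path) != item["sha1"]:
            data = requests.get(item["url"], timeout=60).content
            with open(local_path, "wb") as f:
                f.write(data)

    with open("content.json", "w") as f:                # what the apps load on startup
        json.dump(manifest, f)

if __name__ == "__main__":    # run nightly from cron, then restart the apps
    refresh_content()
```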

At Cooper Hewitt, this means that our curators are able to update content whenever they want using our collections management system, The Museum System (TMS). Updates they make in TMS get reflected on the digital table following a data-deploy and reboot of the table and projector applications. This is essentially the workflow at Cooper Hewitt. Curators fill in object data in TMS, and through a series of tubes, that data eventually finds its way to the interactive tables and our collections website and API. For this project in London, we’d do essentially the same process, with a few caveats.

Make it do all the things

We started asking ourselves a number of questions. It’s a mix of feature creep and a strong desire to put some of the technology we’ve built through its paces–to determine if it’s possible to recontextualize much of what we’ve created at Cooper Hewitt and have it work outside the museum walls.

Questions like:

  • What if we want to allow visitors to save the wallpapers and the designs they create?
  • What if we wanted to hand out a Cooper Hewitt Pen to each visitor?
  • What if we want to let people use the Pen to save their creations, wallpapers, and ALL the other installations around Somerset House?!

All of a sudden, the project becomes a bit more complicated, but still, a great way to figure out how we would translate a ton of the technology we’ve built at Cooper Hewitt into something useful for the rest of the world. We had loads of other ideas, features, and add-ons, but at some point you have to decide what falls in and out of scope.

Unpacking 700 Cooper Hewitt Pens we shipped to the U.K., batteries not included!


So this is what we decided to do.

  • We would devise a way to construct the physical build out of a second Immersion Room. This would essentially be a “set” with walls and a truss system for suspending two rather heavy projectors. It would have a floor, and would be slightly off the ground so we could conceal wiring and create a place for the 55” touch table to rest.
  • We’d pre-fabricate the entire rig in New York and ship it to London to be assembled onsite.
  • We’d enable the Immersion Room to allow visitors to save from a selection of 101 wallpapers from our permanent collection. These would be curated for the Utopia theme of the London Design Biennale.
  • We’d enable the design feature of the Immersion Room and allow visitors to save their designs.
  • We’d hand out Cooper Hewitt Pens to each visitor who wanted one, along with a printed receipt containing a URL and a unique code.
  • We’d post coded NFC tags all throughout Somerset House to allow visitors to use their Pens to collect information about each participating country, including our own.
  • We’d build a bespoke website where visitors would go following their visit to see all the things they’ve collected or created.

These are all of the things we decided to do from a technology standpoint. Here is how we did it.

pen-www

The first step to making this all work was to extract the relevant code from our production collections website and API. We named this “pen-www” and intended that this codebase serve as a mini framework for developing a collecting system and website. In essence it’s simply a web application (written in PHP) and a REST API (also PHP). It really needed to be “just the code” required to make all the above work. So here is another list, explaining what all those requirements are.

  • It needs to somehow generate a simple collections website that is capable of storing relevant info about all the things one could potentially collect. This was very similar to our current codebase at Cooper Hewitt, but we added the idea of “organizations” so that you could have multiple participants contributing info, and not just Cooper Hewitt.
  • It needs all the API methods that make the Pen work. There are actually just a handful that do all the hard work. I’ll get to those in a bit.
  • It needs to handle image uploads and processing of those images (saved designs from the Immersion Room table).
  • It needs to create “visits” which are the pages a visitor lands on when entering their unique code.
  • It needs a series of scripts to help us import data and set things up.
  • We would also need some new code to allow us to generate paper receipts with unique codes printed on them. At Cooper Hewitt this is all done via our Tessitura ticket printing system, so since we wouldn’t have that at Somerset House, we’d need to devise a new way of dealing with registering and pairing pens, and printing out some kind of receipt.

So, pen-www would become this sort of boilerplate framework for the Pen. The idea being, we’d distill the giant codebase we’ve developed at Cooper Hewitt down to the most essential parts, and then make it specific to what we wanted to do for London. This is an important point. We aren’t attempting to build an actual framework. What we are trying to do is to boil out the necessary code as a starting point, and then use that code as the basis for a new project altogether.

From our point of view, this exercise allows us to understand how everything works, and gets us close enough to the core code so that we can think of repeating this a third or a fourth time—or more.

The API at the center of everything

We built the Cooper Hewitt API with the intention of making it flexible enough to be easily expanded upon or altered. It tries to adhere to the REST API pattern as much as it can, but it’s probably better described as “REST-ish.” What’s nice about this approach has been that we’ve been able to build lots and lots of internal interfaces using this same pattern and code base. This means that when we want to do something as bespoke as building an entire replica of our seemingly complex Pen/Visit system, and deploy it in another country, we have some ground to stand on.

In fact, just about all of the systems we have built use the API in some way. So, in theory, spinning up a new API for the London project should just mean pointing things like the Immersion Room interactive table at a new API endpoint. Since the methods are the same, and the responses use the same pattern, it should all just work!

So let’s unpack the API methods required to make the Pen and Immersion Room come to life. These are all internal/private API methods, so you can’t take them for a spin, and I can’t share the actual code with you that lies beneath, but I think you’ll get the idea.

Pens – there’s a whole class of API methods that deal with the Pen itself. Here are the relevant ones:

  • pens.checkoutPen – This marks a Pen as having been checked out for an associated visit
  • pens.getCurrentCheckout – This gets the currently checked out Pen for a specific visit
  • pens.getCurrentVisit – This does the opposite of the getCurrentCheckout, and returns a visit for a specific Pen.
  • pens.returnPen – This marks the Pen as having been returned.

Visits – There is another class of API methods that deal with the idea of “visits.” A visit is meant to represent one individual’s visit to the museum or exhibition, or some other physical location. Each visit has an ID and a corresponding unique code (the thing we print on a visitor’s paper receipt).

  • visits.getActivity – Returns all the activity associated with a visit
  • visits.getInfo – Returns detailed info about a specific visit
  • visits.processPenActivity – This is a major API method that takes any activity recorded by the Pen and processes it before storing the info in the appropriate location in the database. This one gets called frequently and is the method that happens when you tap your Pen on a reader board at one of our digital tables. The reader board downloads all the info on the Pen, and calls this API method to deal with whatever came across.
  • visits.registerVisit – This marks a visit as having been registered. It’s what generates your unique code for the visit.

Believe it or not, that is basically it. It’s just a handful of actions that need to be performed to make this whole thing work. With these methods in place, we can:

  • Pair pens with newly created visits so we can hand Pens out to visitors.
  • Process data collected by the Pen, either from NFC stickers it has read, or via our Interactive Table (there’s a rough sketch of this flow after this list).
  • Do a final read of the Pen and return the Pen to the pool of possible pens.
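Since these are internal/private methods, I can’t show the real calls, but a rough, hypothetical sketch of how a registration station and a reader board might string them together looks something like this (the endpoint shape and parameter names are all assumptions made for illustration):

```python
# Hypothetical sketch only: the endpoint shape and parameter names below are
# assumptions, since these API methods are private. The flow is the point:
# register a visit, pair a pen with it, hand pen activity off to
# visits.processPenActivity, and return the pen at the end.
import requests

API = "https://londonbiennale.cooperhewitt.org/api/rest/"   # assumed endpoint
TOKEN = "xxxxxxxx"                                          # placeholder

def call(method, **params):
    params.update({"method": method, "access_token": TOKEN})
    rsp = requests.post(API, data=params, timeout=10)
    rsp.raise_for_status()
    return rsp.json()

# 1. Visitor arrives: register a visit and pair a pen with it.
visit = call("visits.registerVisit")                   # returns the unique code
call("pens.checkoutPen", pen_id="12345", visit_id=visit["visit_id"])

# 2. Visitor taps the pen on the table: hand whatever the pen recorded to the API.
call("visits.processPenActivity", pen_id="12345", activity='[{"tag": "nfc:abc123"}]')

# 3. Visitor leaves: do a final read and put the pen back in the pool.
call("pens.returnPen", pen_id="12345")
```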

So, now that we have an API, and all the relevant methods we can start building the website and massaging the API code to do things in the slightly different ways that make this whole thing live up to its bespokiness.

On the website end of things we will follow the KISS principle or “Keep it simple, stupid.” The site will be devoid of fancy image display features, extended relationship mapping and tagging, and all the goodies we’ve spent years developing for the Cooper Hewitt Collections website. It won’t have search, or fancy search, or search by color, or search by anything. It won’t have a shoebox or even a random button (ok, maybe I’ll add that later today). For all intents and purposes, the website will simply be a place to enter your unique code, and see all your stuff.

https://londonbiennale.cooperhewitt.org


The website and its API will live at https://londonbiennale.cooperhewitt.org. It consists of just two web front ends running Apache and sitting behind an NGINX load balancer, and one MySQL instance via Amazon’s RDS service. It’s very, very similar to just about all of our other systems and services except that it doesn’t have fancy extras like Logstash logging, or an Elasticsearch index. I did take the time to install server monitoring and alerting, just so we can sleep at night, but really, it’s pretty bare bones.

At first glance there isn’t much there to look at. You can browse the different participants and you can create a Cooper Hewitt account or sign in using our Single Sign On service, but other than that, there is really just one thing to do–enter your code and see all your stuff.

Participants


All your content are belong to us

In order for this project to really work, we’d need to have content. Not only our own Cooper Hewitt content, but content from all the participants representing the 36 other countries from around the world.

So here is the breakdown:

  • Each participant or organization will have a page, like this one for Australia https://londonbiennale.cooperhewitt.org/participants/australia/
  • Each participant will have one “object.” In the case of all 37 participants, this object will represent their “booth” like this one from Australia – https://londonbiennale.cooperhewitt.org/objects/37643049/
  • Each “booth” will contain an image and the catalog text provided by the London Design Biennale team. If there is time, we will consider adding additional information from each participant (we haven’t done this as of yet).
  • Cooper Hewitt’s record will have some more stuff. In addition to the object representing Cooper Hewitt’s booth, we will also have 100 wallcoverings from our permanent collection.
  • You can collect all of these via the Immersion Room table and your Pen. Here is our page – https://londonbiennale.cooperhewitt.org/participants/usa/ There are also two physical wallpapers that are part of our installation, which you can of course collect as well.

All told, that means 140 objects in this little microsite/sitelet. You can actually browse them all at once if you are so inclined here – https://londonbiennale.cooperhewitt.org/objects/

"Booth" pages


Visit Pages

So what does a visitor get when they go to the webpage and type in their unique code? Well, the answer to that question is “it depends.” For objects that we imported from our permanent collection (the 101 wallpapers) you get a nice photo of the wallpaper and a chatty description written by our curator, Greg Herringshaw, having to do with “Utopia” — the theme of this year’s London Design Biennale. You also get a link back to the collection page on the Cooper Hewitt website. For the 37 booths, you get a photo and the catalog info for each participant, and if you created and saved your own design in the wallpaper immersion room, you get a copy of the PNG version of your design, which you can, of course, download and do what you like with. (Hint: they make cool wall posters.)

Additionally, you get timestamps related to your visit. This way, just like on the Cooper Hewitt website, you get to retain a record of your visit–the date and time of each collected object and a way to recall your visit anytime in the future.

Visit page example


Slow Progress

All of this code replication, extraction, and re-configuring took quite a long time. The team spent long hours, nights, and weekends trying to sort it all out. In theory this should all just work, but like any project, there are unique aspects to the thing you are currently trying to accomplish, which means that, no matter what, you’re gonna be writing some new code.

Ok, so let’s check in with what we’ve got so far.

  • A physical manifestation of the Wallpaper Immersion Room and all its hardware, computers, wires, etc.
  • A website and API to power all the fun stuff.
  • A bunch of content from our own permanent collection and the catalog info from the London Design Biennale team.
  • Visit pages

We still need the following:

  • A way to issue Pens to visitors as they arrive.
  • A way to print a unique code on some kind of receipt, which we give to the visitors as well.
  • A way to check in Pens as visitors return them.
  • The means to get the table pointing at the right API endpoint so it can save things and call processPenActivity as well.

To accomplish the first three items on the list, we enlisted the help of Rev Dan Catt.

That time @revdancatt assembled 700 pens for @london_design_biennale #ldb16

A photo posted by Micah Walter (@micahwalter)

Dan is planning to write another extensive blog post about his role in all of this, but in a nutshell, he took our Pen registration code and built his own little mini-registration station and ticket printer. It’s pictured below and performs all of the functions above (1 through 3). It uses a small Adafruit thermal printer to print the receipts and unique codes, and it is simple enough to use with a small web based UI to give the operator some basic feedback. Other than that, you tap a pen and it does the rest.

Dan's Raspberry Pi powered Pen registration and ticket printing station.


Tickets printing for the first time
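Dan will cover his build in his own post, so treat the following only as a guess at its general shape: read a pen, register a visit through the API, and print the unique code on the little thermal printer. The library usage, endpoint and parameter names are assumptions, and the pen-reading part is left as a stub.

```python
# Guess at the general shape of the registration station; not Dan's code.
# Assumes Adafruit's Python thermal-printer library and leaves the pen
# reader as a stub. Endpoint and parameter names are placeholders.
import requests
from Adafruit_Thermal import Adafruit_Thermal   # Adafruit's thermal printer library

API = "https://londonbiennale.cooperhewitt.org/api/rest/"   # assumed endpoint
printer = Adafruit_Thermal("/dev/serial0", 19200, timeout=5)

def read_pen_id():
    """Stub: in the real station this comes from the pen reader hardware."""
    raise NotImplementedError

def register_and_print():
    pen_id = read_pen_id()
    # Assumed parameter names; the real (private) API may differ.
    visit = requests.post(API, data={"method": "visits.registerVisit",
                                     "pen_id": pen_id}).json()
    printer.println("London Design Biennale")
    printer.println("See your visit at:")
    printer.println("londonbiennale.cooperhewitt.org")
    printer.println("Your code: %s" % visit["code"])
    printer.feed(3)
```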


For the last item on the list, I had to re-compile the code Local Projects delivered to us. In the code I had to find the references to the Cooper Hewitt API endpoints and adjust them to point at the London API endpoint. Once I did this and recompiled the OpenFrameworks project, we were in business. For a while, I had it all set up for development and testing on my laptop using Parallels and Visual Studio. Eventually I compiled a final version and we installed it on the actual Immersion Room Table.

Working on the OpenFrameworks code on Parallels on my MacBook Pro


Cracking open the Local Projects code was a little scary. I’m not really an OpenFrameworks programmer, or at least I haven’t been since grad school, and the Local Projects code base is pretty vast. We’ve had this code compiled and running on all the interactive tables at Cooper Hewitt since December of 2014. This was the first time I (or anyone I know of) had attempted to recompile it from source, not to mention make changes to it beforehand.

That said, it all worked just fine. I had to find an old copy of Visual Studio 2012, but other than that, and tracking down a few dependencies, it wasn’t a very big deal. Now we had a copy of the Immersion Room table application set up to talk to the London API endpoint. As I mentioned before, all the API methods are named the same, and set up the same way, so the data began to flow back and forth pretty quickly.

Content Management

I mentioned above that we had to import 100 wallpapers from our collection as well as the data for all 37 booths. To accomplish all of this, we wrote a bunch of Python and PHP scripts.

We needed to do the following with regard to content:

  • Create a record for each of the 37 participants
  • Import the catalog info as an object for each of the 37 participants
  • Import the 100 wallcoverings from the Cooper Hewitt collection. We just used, you guessed it, our own API to do this.
  • Massage the JSON files that live on the Projector and Table applications so they have the correct 100 wallpapers and all their metadata.
  • Display the emoji flag for each country, because emoji.

In the end, this was just a matter of building the necessary scripts and running them a number of times until we had what we wanted. As a sort of side note, we decided to use London Integers for this project instead of Brooklyn Integers, which we normally use at Cooper Hewitt, but that’s probably a topic for a future post.
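As an example of the kind of script involved, here is a rough sketch of pulling wallcovering metadata through the public Cooper Hewitt API. The object IDs, token and the handful of fields kept here are placeholders for illustration, not the actual import script.

```python
# Rough sketch of the import step: pull each wallcovering's metadata from the
# public Cooper Hewitt API and store it for the microsite. Object IDs and the
# access token are placeholders; the fields kept are illustrative.
import json
import requests

API = "https://api.collection.cooperhewitt.org/rest/"
ACCESS_TOKEN = "xxxxxxxx"                     # placeholder
WALLPAPER_IDS = ["18446501", "18446503"]      # placeholder IDs; really ~100 of them

records = []
for object_id in WALLPAPER_IDS:
    rsp = requests.get(API, params={
        "method": "cooperhewitt.objects.getInfo",
        "access_token": ACCESS_TOKEN,
        "object_id": object_id,
    }).json()
    obj = rsp["object"]
    records.append({
        "id": obj["id"],
        "title": obj["title"],
        "images": obj.get("images", []),
    })

with open("wallcoverings.json", "w") as f:    # later loaded into the microsite's database
    json.dump(records, f, indent=2)
```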

Shipping code, literally

At some point we would need to put all the hardware and construction pieces into crates and ship them across the pond. At the time, our thinking was to get the code running on the digital table and projector computers as close to production ready as we could. We installed all the final builds of the code on the two computers, packed them up with the 55” interactive table, and shipped them over to London, along with six other crates full of the “set” and all its hardware and parts. It was, in a nutshell, impressive.

As the freight went to London, we continued working on the website code back home—making the site look the way we wanted it to look and behave the way we wanted it to behave. As I mentioned before, it’s pretty feature free, but it still required some spit and polish in the form of some of Rachel’s Sassy-CSS. Eventually we all settled on the aesthetics of the site, added a lockup that reflected both the Cooper Hewitt and London Design Biennale brands (both happen to be by Pentagram) and called it a day. We continued testing the table application and Dan continued working on the Pen registration app and receipt printer so it would be ready when we landed.

Building the set with the team at Somerset House.


We landed, started to build the set, and many, many things started to go wrong. I think all of the things that went wrong are probably the topic of yet another blog post, but let’s just say for now: if you ever decide to travel a whole bunch of A/V equipment and computers to another country, get everything working with the local power standard and don’t try to transform anything.

2500 batteries #ldb16

A photo posted by Micah Walter (@micahwalter)

Eventually, through a lot of long days and sleepless nights, and with the help of many, many kind-hearted people, we managed to get all the systems up and running, and everything began to work.


We flipped the switch, the whole thing came to life, and visitors started to walk up to our booth, curious and excited to see what it would do. We started handing out Pens and I started watching the data flow through.

By the close of the show, visitors had used the Pen to collect over 27,000 objects. Eventually, I’ll do a deeper data analysis, but for now, the feeling is really great. We created a portable version of the Pen and all of its underlying systems. We traveled a giant kit of A/V tech and parts overseas, and now people in a country other than the United States can experience what Cooper Hewitt is all about: a dynamic, interactive deep dive into design.

Design your own Utopia at the London Design Biennale


-m

A Very Happy & Open Birthday for the Pen


Today marks the first birthday of our beloved Pen. It’s been an amazing year, filled with many iterations, updates, and above all, visits! Today is a celebration of the Pen, but also of all of our amazing partners whose continued support has helped to make the Pen a reality. So I’d like to start with a special thank you first and foremost to Bloomberg Philanthropies for their generous support of our vision from the start, and to all of our team partners at Sistel Networks, GE, Undercurrent, Local Projects, and Tellart.

Updates

Over the course of the past year, we’ve been hard at work, making the Pen Experience at Cooper Hewitt the best it can be. Right after we launched the Pen, we immediately realized there was quite a bit of work to do behind the scenes so that our Visitor Experience staff could better deal with deploying the Pen, and so that our visitors have the best experience possible.

Here are some highlights:

Redesigning post-purchase touchpoints – We quickly realized that our ticket purchase flow needed to be better. This article goes over how we tried to make improvements so that visitors would have a more streamlined experience at the Visitor Experience desk and afterwards.

Exporting your visits – The idea of “downloading” your data seemed like an obvious necessity. It’s always nice to be able to “get all your stuff.” Aaron built a download tool that archives all the things you collected or created and packages it in a nice browser friendly format. (Affectionately known as parallel-visit)

Improving Back-of-House Interactions – We spent a lot of time behind the visitor services desk trying to understand where the pain points were. This is an ongoing effort, which we have iterated on numerous times over the year, but this post recounts the first major change we made, and it made all the difference.

Collecting all the things – We realized pretty quickly that visitors might want to extend their experience after they’ve visited, or more simply,  save things on our website. So we added the idea of a “shoebox” so that visitors to our website could save objects, just as if they had a Pen and were in our galleries.

Label Writer – In order to deploy and rotate new exhibitions and objects, Sam built an Android-based application that allows our exhibition staff to easily program our NFC based wall labels. This tool means any staff member can walk around with an Android device and reprogram any wall label using our API. Cool!

Improving visitor information with paper – Onboarding new visitors is a critical component. We’ve since iterated on this design, but the basic concept is still there–hand out postcards with visual information about how to use the Pen. It works.

Visual consistency – This has more to do with our collection’s website, but it applies to the Pen as well, in that it helps maintain a consistent look and feel for our visitors during their post visit. This was a major overhaul of the collections website that we think makes things much easier to understand and helps provide a more cohesive experience across all our digital and physical platforms.

Iterating the Post-Visit Experience – Another major improvement to our post-visit end of things. We changed the basic ticket design so that visitors would be more likely to find their way to their stuff, and we redesigned what it looks like when they get there.

Press and hold to save your visit – This is another experimental deployment where we are trying to find out if a new component of our visitor experience is helpful or confusing.

On Exhibitions and Iterations – Sam summarizes the rollout of a major exhibition and the changes we’ve had to make in order to cope with a complex exhibition.

Curating Exhibition Video for Digital Platforms – Lisa makes her Labs debut with this excellent article on how we are changing our video production workflow and what that means when someone collects an object in our galleries that contains video content.

The Big Numbers

Back in August we published some initial numbers. Here are the high level updates.

Here are some of the numbers we reported in August 2015:

  • March 10 to August 10 total number of times the Pen has been distributed – 62,015
  • March 10 to August 10 total objects collected – 1,394,030
  • March 10 to August 10 total visitor-made designs saved – 54,029
  • March 10 to August 10 mean zero collection rate – 26.7%
  • March 10 to August 10 mean time on campus – 99.56 minutes
  • March 10 to August 10 post visit website retrieval rate – 33.8%

And here are the latest numbers from March 10, 2015 through March 9, 2016:

  • March 10, 2015 to March 9, 2016 total number of times the Pen has been distributed – 154,812
  • March 10, 2015 to March 9, 2016 total objects collected – 3,972,359
  • March 10, 2015 to March 9, 2016 total visitor-made designs saved – 122,655
  • March 10, 2015 to March 9, 2016 mean zero collection rate – 23.8%
  • March 10, 2015 to March 9, 2016 mean time on campus – 110.63 minutes
  • Feb 25, 2016 to March 9, 2016 post visit website retrieval rate – 28.02%

That last number is interesting. A few weeks ago we added some new code to our backend system to better track this data point. Previously we had relied on Google Analytics to tell us what percentage of visitors access their post visit website, but we found this to be pretty inaccurate. It didn’t account for multiple accesses to the same visit by multiple users (think social sharing of a visit) and so the number was typically higher than what we thought reflected reality.

So, we are now tracking a visit page’s “first access” in code and storing that value as a timestamp. This means we now have a very accurate picture of our post visit website retrieval rate and we are also able to easily tell how much time there is between the beginning of a visit and the first access of the visit website–currently at about 1 day and 10 hours on average.
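With that timestamp in place, both numbers fall out of a simple calculation. Here is a sketch, assuming each visit record carries a start time and a nullable first-access time (the field names are made up for illustration):

```python
# Hedged sketch of the two post-visit numbers described above. Assumes each
# visit row has a start timestamp and a nullable "first accessed" timestamp;
# field names are made up for illustration.
from datetime import datetime, timedelta

def post_visit_stats(visits):
    """visits: list of dicts with 'started_at' and optional 'first_access_at'."""
    accessed = [v for v in visits if v.get("first_access_at")]
    retrieval_rate = len(accessed) / len(visits) * 100

    lags = [v["first_access_at"] - v["started_at"] for v in accessed]
    mean_lag = sum(lags, timedelta()) / len(lags)
    return retrieval_rate, mean_lag

visits = [
    {"started_at": datetime(2016, 3, 1, 10, 0),
     "first_access_at": datetime(2016, 3, 2, 19, 30)},
    {"started_at": datetime(2016, 3, 1, 11, 0)},          # never retrieved
]
rate, lag = post_visit_stats(visits)
print("retrieval rate: %.1f%%, mean time to first access: %s" % (rate, lag))
```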

The Pen generates a massive amount of data, so we decided to publish some of the higher-level statistics on a public webpage, which you can always check in on at https://collection.cooperhewitt.org/stats. This page updates daily and reports a few basic stats, including a list of the most popular objects of all time. Yes, it’s the staircase models. They’ve been the frontrunners since we launched.

Those staircase models!

As you can see, we are just about to hit the 4 million objects collected mark. This is pretty significant, and it means that, on average, our visitors have used the Pen to collect roughly 26 objects per visit (3,972,359 objects over 154,812 checkouts).

But it’s hard to get a real sense of what’s going on if you just look at the high-level numbers, so let’s track some things over time. Below is a chart that shows objects collected by day for the last year.


Objects collected by day since March 10, 2015

On the right you can easily see a big jump. This corresponds with the opening of the exhibition Beauty–Cooper Hewitt Design Triennial. It’s partly due to increased visitation following the opening, but what’s really going on here is heavy use of object bundling. If you follow this blog, you’ll have recently read the post by Sam where he talks about the need to bundle many objects on one tag. This means that when a visitor taps their Pen on a tag, they very often collect multiple objects at once. Beauty makes heavy use of this feature, bundling a dozen or so objects per tag in many cases, resulting in a dramatic increase in collected objects per day.
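Purely as an illustration of why bundling moves the numbers so much (this is not the Pen backend, and the tag and object IDs below are invented): a single NFC tag can point at a whole list of object IDs, so one tap writes several “collected” records at once.

    # Invented IDs; the real tag infrastructure does not look like this.
    bundles = {
        "tag:beauty:042": [18704235, 18704237, 18704241, 18704244],  # bundled label
        "tag:perm:007":   [18446435],                                # single-object label
    }

    def collect(visit, tag_id):
        """One Pen tap on a tag collects every object bundled on it."""
        objects = bundles.get(tag_id, [])
        visit["collected"].extend(objects)
        return len(objects)

    visit = {"collected": []}
    collect(visit, "tag:beauty:042")   # one tap, four objects
    collect(visit, "tag:perm:007")     # one tap, one object
    print(len(visit["collected"]))     # 5 objects from just 2 taps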

Pen checkouts per day since March 10, 2015

We can easily see that this is, in fact, what is happening if we look at our daily Pen checkouts. Here we see a reasonable increase in checkouts following the launch of Beauty, but it’s not nearly as dramatic as the increase in the number of objects being collected each day.


Immersion room creations by day since March 10, 2015

Above is a chart that shows how many designs were created in the immersion room each day over the past year. It’s also directly connected to the number of visitors we have, but it’s interesting to see its overall shape across this period of time. The immersion room is one of our more popular interactive installations and it has been on view since we launched, so it’s not a big surprise that it has a pretty steady curve to it. Also, keep in mind that this only represents “things saved,” as we are not tracking the thousands of drawings that visitors make and walk away from.

We can slice and dice the Pen data all we want. I suppose we could take requests. But I have a better idea.

Open Data

Today we are opening up the Pen Data. This means a number of things, so listen closely.

  1. The data we are releasing is an anonymized and obfuscated version of some of the actual data.
  2. If you saved your visit to an account within thirty days of this post (and future data updates), we won’t include your data in this public release.
  3. This data is licensed under Creative Commons – Attribution, Non-Commercial, which means it can’t be used for commercial purposes.
  4. The data we are releasing today is meant to be used in conjunction with our public domain collection metadata or our public API.

The data we are releasing is meant to help people build an understanding of Cooper Hewitt, its collection and its interactive experiences. The idea here is that designers, artists, researchers and data analysts will have easy access to the data generated by the Pen and will be able to analyze it and create data visualizations, so that we can better understand the impact our in-gallery technology has on visitors.
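As a starting point, here is roughly how someone might combine the release with our public API in Python. The API endpoint and the cooperhewitt.objects.getInfo method are real, but the file name, the column names and the exact shape of the JSON response are assumptions on my part, so check them against the actual download and the API documentation.

    import csv
    import requests  # third-party: pip install requests

    API = "https://api.collection.cooperhewitt.org/rest/"
    ACCESS_TOKEN = "YOUR-ACCESS-TOKEN"  # create one on the collections website

    def object_info(object_id):
        """Look up a single object in the public collection API."""
        r = requests.get(API, params={
            "method": "cooperhewitt.objects.getInfo",
            "access_token": ACCESS_TOKEN,
            "object_id": object_id,
        })
        r.raise_for_status()
        return r.json().get("object", {})

    # Hypothetical file and column names; the real release may differ.
    with open("pen-collected-items.csv") as f:
        rows = list(csv.DictReader(f))

    print("total collected rows:", len(rows))

    # Join a handful of rows against collection metadata.
    for row in rows[:5]:
        obj = object_info(row["object_id"])
        print(row["object_id"], obj.get("title"))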

We believe there is a lot more going on in our galleries than we currently understand. Visitors are spending incredible amounts of time at our interactive tables, and have been using the Pen in ways we hadn’t originally thought of. For example, we know that some visitors (children especially) try to collect every single object on view. We call these our treasure hunters. We also know that a percentage of our visitors take a Pen and don’t use it to collect anything at all, though they tend to use the stylus end quite a bit. Through careful analysis of this kind of data, we believe we will be able to begin to uncover new behavior patterns and aspects of “collecting” we haven’t yet discovered.

If this sounds like you and you’re curious enough to take our data for a spin, please get in touch. We’d love to see what you create!

Random Button Television

AppleTV SDK Fun

Recently, Apple announced a number of updates to their product line, including a pretty major update to AppleTV, the small set-top box that allows people to listen to their music collection, rent movies and TV shows, and stream audio from their phones to their televisions. The biggest update to AppleTV was definitely the fact that it now supports “Apps,” allowing iOS developers to design and build whatever they can imagine.

I put my hat in the ring and applied for Apple’s lottery, and about a week later a brand new AppleTV Software Development Kit showed up at the Labs.

To be honest, I haven’t had much interest in developing apps for a long time now. It’s problematic at best to go down the road of building something for iOS ( or any brand-specific device ) in the context of a museum, and yes, it’s been a long while since I even glanced at Objective-C. But the device is a curious object, and at the very least it made me wonder what it might be like to open up access to our collections through the warm and inviting glow of a television screen. Imagine it for a moment, sitting there atop your dresser, or mounted to your living room wall, next to the fireplace, in full HD. The television, no matter how you divide it up over the years, has a pretty permanently fixed position in our homes, and in our minds.

As a little side project, I decided to see what I could do with the AppleTV SDK and our Collections API. I decided ahead of time I wouldn’t spend too much time on this, and although I wound up spending at least one night reading up on NSDictionary and a few other oddball data-types in Objective-C, I was able to stick with my original plan and quickly built a little “Hello AppleTV” app that simply allows the “viewer” to flip through objects in our collection by pressing the “select” button on their remotes.

It uses one API method, our old favorite cooperhewitt.objects.getRandom, and yeah, that’s all it does. Keep pressing the select button and you continue to get objects. It’s quite fun!

AppleTV XCode

So here’s how it works. As we say over here at the Cooper Hewitt Labs, “Working code always wins.”

  1. There is a ViewController. This is the thing that represents the screen on your AppleTV, and the thing you apply all of the subsequent properties to.
  2. There is an ImageView. This is where the image is displayed.
  3. There are a couple of Labels which simply display the title and object ID for each object.
  4. There is an asynchronous way of calling the API.

Here’s the code. There’s basically just the two files for the ViewController that define everything we’re talking about. Beyond that, there is some “wiring up” of the ImageView and Labels so the visuals know what code they are connected to, and there is really the one method “fetchRandom” that does the work of calling the API, parsing the response, and storing the things we are interested in.
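The app itself is written in Objective-C, but the gist of fetchRandom is easy to sketch in a few lines of Python: call cooperhewitt.objects.getRandom, parse the JSON, and keep the title, the object ID and an image URL. The method name is real; the response keys below are from memory, so treat them as assumptions and check the API docs.

    import requests  # third-party: pip install requests

    API = "https://api.collection.cooperhewitt.org/rest/"
    ACCESS_TOKEN = "YOUR-ACCESS-TOKEN"

    def fetch_random():
        """Roughly what the app's fetchRandom does: ask the API for a random
        object and pull out the bits the screen needs."""
        r = requests.get(API, params={
            "method": "cooperhewitt.objects.getRandom",
            "access_token": ACCESS_TOKEN,
        })
        r.raise_for_status()
        obj = r.json().get("object", {})
        images = obj.get("images") or [{}]
        return {
            "id": obj.get("id"),
            "title": obj.get("title"),
            "image_url": images[0].get("b", {}).get("url"),  # "b" = big, if memory serves
        }

    # Each call stands in for one press of the remote's select button.
    print(fetch_random())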

And here is the end result.

To be quite honest, we probably won’t be uploading this to the iTunes store. It’s really just a “Hello World” app and only meant to be a conversation starter for staff members who happen by the Labs area. But it does make me wonder — what else could museums do with a device like this?

The device itself is a curious one, with plenty of built-in human interface challenges and opportunities. Sitting back, clicking the remote and checking out collection objects isn’t really my idea of an exciting way to spend an evening at home, but take this a few steps further, maybe a few additional API calls, and who knows what might unfold.

Long live RSS


I just made a new Tumblr. It’s called “Recently Digitized Design.” It took me all of five minutes. I hope this blog post will take me all of ten.

But it’s actually kinda cool, and here’s why. Cooper Hewitt is in the midst of a mass digitization project where we will have digitized our entire collection of over 215K objects by mid to late next year. Wow! 215K objects. That’s impressive, especially when you consider that probably 5,000 of those are buttons!

What’s more, we now have a pretty decent “pipeline” up and running. This means that as objects are digitized and added to our collections management system, they automatically wind up on our collections website after passing through a pretty hefty series of processing tasks.

Over on the West Coast, Aaron felt the need to make a little RSS feed for these “recently digitized” objects so we could all easily watch the new things come in. RSS, which stands for “Rich Site Summary,” has been around forever, and many have said that it is now a dead technology.

Lately I’ve been really interested in the idea of Microservices. I guess I never really thought of it this way, but an RSS or ATOM feed is kind of a microservice. Here’s a highlight from “Building Microservices” by Sam Newman that explains this idea in more detail.

Another approach is to try to use HTTP as a way of propagating events. ATOM is a REST-compliant specification that defines semantics ( among other things ) for publishing feeds of resources. Many client libraries exist that allow us to create and consume these feeds. So our customer service could just publish an event to such a feed when our customer service changes. Our consumers just poll the feed, looking for changes.

Taking this a bit further, I’ve been reading this blog post, which explains how one might turn around and publish RSS feeds through an existing API. It’s an interesting concept, and I can see us making use of it for something just like Recently Digitized Design. It sort of brings us back to the question of how we publish our content on the web in general.

In the case of Recently Digitized Design the RSS feed is our little microservice that any client can poll. We then use IFTTT as the client, and Tumblr as the output where we are publishing the new data every day. 
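If you wanted to skip IFTTT and poll the feed yourself, a tiny client takes only a few lines. Here is a sketch using the feedparser library; the feed URL is a placeholder rather than the real address.

    import time
    import feedparser  # third-party: pip install feedparser

    FEED_URL = "https://example.org/recently-digitized/rss"  # placeholder URL

    def new_entries(seen, url=FEED_URL):
        """Fetch the feed and yield entries we have not seen before."""
        for entry in feedparser.parse(url).entries:
            key = entry.get("id") or entry.get("link")
            if key and key not in seen:
                seen.add(key)
                yield entry

    seen = set()
    while True:
        for entry in new_entries(seen):
            # IFTTT would push these to Tumblr; here we just print them.
            print(entry.get("title"), entry.get("link"))
        time.sleep(60 * 60)  # the feed only changes daily, so poll gently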

RSS certainly lives up to its nickname ( Really Simple Syndication ), offering a really simple way to serve up new data, and that to me makes it a useful thing for making quick and dirty prototypes like this one. It’s not a streaming API or a fancy push notification service, but it gets the job done, and if you log in to your Tumblr Dashboard, please feel free to follow it. You’ll be presented with 10-20 newly photographed objects from our collection each day.

UPDATE:

So this happened: https://twitter.com/recentlydigital