Tag Archives: alpha

A Timeline of Event Horizons

We’ve added a new experimental feature to the collections website: an interactive visualization depicting when an object was produced and when it was collected, using some of the major milestones and individuals in the Cooper-Hewitt’s own history as a bracketing device.

Specifically, the years 1835, when Andrew Carnegie was born, and 2014, when the museum will re-open after a major renovation of Carnegie’s New York City mansion, where the collection is now housed. It’s not that Andrew Carnegie’s birth signals the beginning of time, but rather that it is the first in a series of events that shaped the Cooper-Hewitt as we know it today.

The timeline’s goal is to visualize an individual object’s history relative to the velocity of major events that define the larger collection.

Many of those events overlap. The lives of Andrew Carnegie and the Hewitt Sisters all overlapped one another, and they were all alive during the construction of Carnegie’s mansion and the creation of the Hewitt Sisters’ Cooper Union Museum for the Arts of Decoration. The life of the mansion overlaps the Cooper-Hewitt becoming part of the Smithsonian in 1976 and assuming the mantle of the National Design Museum in the mid-1990s.

Wherever possible we show both the start and end dates for an object represented as its own underlined event span. If we only know the start date for an object we indicate that using a blue arrow. The date that the object was acquired by the museum is indicated using a white arrow.

The soundtrack of histories that surround an object is depicted as a series of sequential, semi-transparent blocks layered one atop the other to reflect the density of proximate events. If you mouse over the label for an event it is highlighted, in orange, in the overall timeline.

We had three motivations in creating the timeline:

  • To continue to develop a visual language to represent the richness and the complexity of our collection. To create views that allow a person to understand the outline of a history and invite further investigation.
  • To start understanding the ways in which we need to expose the collection metadata so that it can play nicely with data visualization tools.
  • To get our feet wet with the D3 JavaScript library which is currently the (friendly) 800-pound gorilla in the data visualization space. D3 is incredibly powerful but also a bit of a head-scratcher to get started with, so this is us, getting started.

This is only the first of many more visualizations to come and we are hoping to develop a series of building blocks and methodologies that allow us to build more and more experimental features as quickly as we can think of them.

So head over to the experimental section of the collections website, enable the feature flag for the Object Timeline, have a play, and let us know what you think!

We’ve also made the GitHub repository for the underlying JavaScript library that powers the timeline public and released the code under a BSD license. It should be generic enough to work for any dataset that follows a similar pattern to ours and is not specific to a museum collection.

If you look under the hood you might be horrified at what you see. We made a very conscious decision, at this stage of things while we get to know D3, to focus more on the functionality of the timeline itself rather than the elegance of the code. This is a very early experiment and we would be grateful for bug fixes and suggestions for how to make it better.

Default Sort, or what would Shannon do?

Claude Shannon

Up until recently our collections website displayed search results ordered by, well, nothing in particular. This wasn’t necessarily by design; we just didn’t have any idea of how we should sort the results. We tossed around the idea of sorting things by date or alphabet, but this seemed kind of arbitrary. And as search results get more complex, ‘keyword frequency’ isn’t necessarily equivalent to ‘relevance’.

We added the ability on our ‘fancy search’ to sort by all of these things, but we still needed a way to present search results by default.

Enter Claude Shannon and a bit of high-level math (or ‘maths’ if, like some of the team, you are from this thing called ‘the Commonwealth’).

Claude Shannon was a pretty smart guy, and back in 1948, in a paper titled “A Mathematical Theory of Communication”, he presented the idea of entropy, the foundation of information theory. The concept is actually rather simple, and relies on a quick analysis of a dataset to discover the probability of different parts of the data within the set.

For images you can think about it by looking at a histogram and treating the height of each bar as the probability that a particular pixel value will be present. With this in mind you can get a sense of how “complex” an image is. Images with really flat histograms (lots of pixel values present, lots of times) will have a very high Entropy, whereas images with severely spiked histograms (all black or all white, for example) will have a very low Entropy.

In other words, images with more fine detail have a higher Entropy and are more complicated to express, and usually take up more room on disk when compressed.

Sidewall, a-w-793, 193040

Think of an image of a wallpaper pattern like this one. It has a really high Entropy value because within the image there is lots of fine detail and texture. If we look at the histogram for the image we can see that there are lots of pixel values represented pretty evenly across the graph, with a few spikes in the middle most likely representing the overall palette of the image.

On the other hand, check out this image of a pretty smooth vase on a white background. The histogram for this image is less evenly distributed, leaning towards the right of the graph, and thus it has a much lower Entropy value.

Vase, 2010-6-3, 188388

We thought it might be interesting to sort all of the images in our collection by Entropy, displaying the more complex and finer-detailed images first, so I built a simple Python script that takes an image as input and returns its “Shannon Entropy” as a float.

https://gist.github.com/micahwalter/5237697
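
The gist is the actual script we run; purely as an illustration of the idea (the function below is a sketch, not the code in the gist), calculating entropy from an image’s histogram with the Python Imaging Library looks something like this:

    # A sketch of the idea only -- the actual script lives in the gist above.
    # Shannon entropy: H = -sum(p * log2(p)) over the probability p of each
    # histogram bin.
    import math
    import sys

    from PIL import Image


    def shannon_entropy(path):
        """Return the Shannon entropy of an image as a float."""
        image = Image.open(path).convert("L")    # greyscale, 256 histogram bins
        histogram = image.histogram()
        total = float(sum(histogram))

        entropy = 0.0
        for count in histogram:
            if count == 0:
                continue
            p = count / total
            entropy -= p * math.log(p, 2)

        return entropy


    if __name__ == "__main__":
        print(shannon_entropy(sys.argv[1]))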

To chew through the entire collection we built this into a simple “httpony” and set up a background task to run through every image in our collection and add its Shannon Entropy as a value in the collection database. We then indexed these values in Solr and added the option to sort by “image complexity” on our Fancy Search page.

Sorting by Shannon Entropy is kind of interesting, and we noticed right away that a small byproduct of this process is that objects that simply don’t have an image wind up at the end of the sort. In the end we liked the search results so much that we made “image complexity” the default sort across the entire website. You can always go into Fancy Search and change the sort criteria to your liking, but we thought image complexity seemed to be a pretty good place to start.

But what is the relationship between Claude Shannon and Shannen Doherty? Well, it looks like Shannen, herself, has a very high Shannon Entropy…

And another award!

This time we picked up a Gold award from the American Association of Museums’ Media and Technology MUSE awards. We won in the ‘APIs and applications’ category against some stiff competition from some very polished tablet and mobile apps. The category rewards “digital presentations, applications, and mashups that utilize existing data and online resources to transform content into new meaningful tools or experiences.”

Once again it is nice to see recognition, this time from the broader museum sector, for the value of ‘public alpha’ releases.

We won an award

The annual international gathering that is Museums and the Web has just passed and this year we were lucky enough to win one of the Best of the Web Awards in the Research/Collections category.

We are especially proud of this award because it represents critical evaluation by our peers. And we love that they called out its tone, its experimental nature, and its early alpha release. These are exactly the qualities that we believe offer the most to others in the field – something that shiny, polished, and ‘finished’ projects often don’t. What we are doing can (and perhaps, should) be copied by others.

We dedicate the award to Bill Moggridge and we’d like to particularly thank the generosity of curatorial and registration staff in letting us experiment with re-inventing the collections online paradigm – a task that is far from over.

Congratulations to all the other winners – it is nice to be in such great company!

tms-tools == this is a blog post about code

icebergs are kind of like giant underwater unicorns when you think about it

tms-tools

This is a blog post about code. Which means it’s really a blog post about data.

tms-tools is a suite of libraries and scripts to extract data from TMS as CSV files. Each database table is dumped as a separate CSV file. That’s it really.

It’s a blog post about data. Which means it’s really a blog post about control. It’s a blog post about preserving a measure of control over your own data.

At the end of it all, TMS is an MS-SQL database and, in 2013, it still feels like an epic struggle just to get the raw data out of TMS, so that single task is principally what these tools deal with.

tms-tools is the name we gave to the first set of scripts and libraries we wrote when we undertook to rebuild the collections website in the summer of 2012. The first step in that journey was creating a read-only clone of the collections database.

Quite a lot of this functionality can be accomplished from the TMS or MS-SQL applications themselves but that involves running a Windows machine and pressing a lot of buttons. This code is designed to be part of an otherwise automated system for working with your data.
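
To give a flavour of what “dump each table as a CSV file” means in practice, here is a rough sketch (not the actual tms-tools code) of the kind of table-to-CSV loop involved, using pyodbc; the connection string, table name and output path are all placeholders:

    # Sketch only: dump a single MS-SQL table to a CSV file. The real code
    # lives in the tms-tools repository; the DSN, table name and output path
    # below are placeholders.
    import csv

    import pyodbc


    CONNECTION_STRING = "DSN=tms-readonly;UID=user;PWD=secret"    # placeholder


    def dump_table(table_name, out_path):
        conn = pyodbc.connect(CONNECTION_STRING)
        cursor = conn.cursor()
        cursor.execute("SELECT * FROM %s" % table_name)

        with open(out_path, "w") as fh:
            writer = csv.writer(fh)
            # First row: the column names, taken from the cursor metadata.
            writer.writerow([column[0] for column in cursor.description])
            for row in cursor:
                writer.writerow(row)

        conn.close()


    dump_table("Objects", "csv/Objects.csv")    # one CSV file per table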

TMS will remain the ultimate source of truth for our collection metadata but for us TMS didn’t turn out to be the best choice for developing and managing the public face of that data. The code in the tms-tools repository is meant to act as a bridge between those two different needs.

There is no attempt to interpret the data or to reconcile the twisty maze of relationships between the many tables in TMS. That is left as an exercise for the reader. This is not a one-button magic pony. This is code that works for us today. It has issues. If you choose to use it you will probably discover new issues. Yay, adventure!

We’re making the tms-tools code available today on GitHub, released under a BSD license.

We are making this code available because we know many others in our community face similar challenges. Maybe the work we’ve done so far can help others and going forward we can try to make things a little better, together.

Welcome to object phone. Your call has been placed in a queue.

I made another small thing. Again, another way for me to experiment with the Collection API, and again, another way to experiment with new ways of accessing the collection. This time there aren’t many screenshots to display, because there is no website to look at. This time, it’s “Welcome to object phone!”

(718) 213-4915

“Object Phone” is (presently) a very, very simple implementation of a way to explore our collection by dialing a telephone or sending a text message. I had been thinking of a few of the more popular museum-oriented audio tour products, and how they all seem to be very CMS-style in their design, and wondering if we could just use our own API.

For example, TourML and TAP (which offer the web programmer a very powerful framework for programming a mobile guide using the Drupal CMS) are very nice, but they are still very dependent on content production. The developer or content manager has to build and curate all of the content for the “tour.” This might be a good way to go about things, especially if you are leaning on an existing Drupal installation for a good deal of your content, but I was looking for a way to access existing data, and specifically the data in our collection website.

In the beginning of developing our collection website, we went through the process of assigning EVERYTHING a unique “bigint” in the form of what we are referring to as an “artisanal integer.” This means that each object record, each person record and each, well, everything else has a unique integer which no other thing can have. This is not in place of accession numbers – we will probably always have accession numbers. The nice thing about unique integers is that they’re really easy to deal with on a programmatic level.

For example, if you text 18704235 to 718-213-4915 you should get a response that looks like the screenshot below. In fact you can text any object id number from our collection and get a similar response.

You can also dial that same number and use your keypad to either search the collection by object ID, or ask for a random object. The application will respond to you using a text to speech converter, which is usually pretty good.

Presently, the app is not replying with a whole lot of information. You essentially get the object’s title and medium field if it has one. In many cases, asking for a random object may just result in something like “Drawing.” Many of our object records don’t have much more useful information than this, and I am also trying to wrangle with the idea of how much information is useful in a voice or text message (with a 160-character limit per SMS).

The whole system is leveraging the Twilio service and API. Twilio offers quite a range of possibilities, and I am very excited to experiment with more. For example, instead of text to speech, Twilio can play back .wav files. Additionally, Twilio can do things like dial another phone number, forward calls and record the caller’s voice. There are so many possibilities here that I won’t even begin to list them, but for example, I could easily see us using this to capture user feedback in our galleries by phone and text.
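
To give a sense of how the SMS side of something like this gets wired up: Twilio POSTs the incoming message to a URL you control and you reply with a small piece of TwiML. Here is a sketch (not the actual Object Phone code, which is in the Gist below; look_up_object() is a stand-in for a call to our collection API):

    # Sketch: answering an incoming SMS with TwiML. Not the actual Object
    # Phone code; look_up_object() is a stand-in for a collection API call.
    from flask import Flask, request

    app = Flask(__name__)


    def look_up_object(object_id):
        # Placeholder: in reality this would fetch the object's title and
        # medium from the collection and build a short description.
        return "Object %s: Drawing" % object_id


    @app.route("/sms", methods=["POST"])
    def incoming_sms():
        # Twilio sends the text of the incoming message in the "Body" parameter.
        object_id = request.form.get("Body", "").strip()
        reply = look_up_object(object_id)

        # Respond with TwiML; Twilio relays the <Message> body back as an SMS.
        twiml = "<Response><Message>%s</Message></Response>" % reply
        return twiml, 200, {"Content-Type": "text/xml"}


    if __name__ == "__main__":
        app.run(port=5000)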

I’m very interested in figuring out a way to search by voice. I’m sort of dreaming of programming the thing to go “Why don’t you just tell me the object number!” as in this great episode of Seinfeld which you can watch by clicking the image below.

If you are interested, I have also made the code public on this Gist. It’s pretty messy and redundant right now, but you’ll get the idea.

One of the more complicated aspects of this project will be designing the phone interface so it makes sense. Currently, once you hear an object play back, the system just hangs up on you. It would be nice to offer the user a better way to manipulate the system that is still pleasant and easy to understand. By the same token, a completely different approach is needed for the SMS end of things, as you don’t really have a menu tree but instead a list of possible commands the user needs to learn. Fortunately, there is a ton of great work that has already been accomplished in this arena, specifically by the Walker Art Center’s very long-running and very yellow website Art on Call.

Source code at github.com/cooperhewitt/objectphone

"cmd-P"

I made us a print stylesheet for object pages on the collections website. (What does that mean? It means you can print out the webpage and it will look nice).

Printout of Object #18621871, before the stylesheet.

Printout of Object #18621871, after the stylesheet. Much better. Office carpet courtesy of Tandus flooring.

This should be very useful for us in-house, especially curators and education, and anyone doing exhibition planning (which right now is many of us).

It’s not very fancy or anything. Basically I just stripped away all the extraneous information and got right to the essential details, kind of like designing for mobile.

six printouts on standard paper from the collections website, taped in two rows to an iMac screen.

cascading style sheet is cascading.

In a moment of caffeinated Friday goofiness, Aaron printed out a bunch of weird objects he found (e.g. iPad described for aliens as “rectangular tablet computer with rounded corners”) and Scotch taped them all over Seb’s computer screen as a nice decorative touch for his return the next morning.

What we realized in looking at all the printouts, though, is that the simplified view of a collection record resembles a gallery wall label. And we’re currently knee-deep in the wall label discussion here at the Museum as we re-design the galleries (what does it need? what doesn’t it need? what can it do? how can it delight? how can it inform?).

I don’t yet have any conclusions to draw from that observation, other than that it’s a good frame for talking about our content and its presentation.

..to be continued!

Little Printer Experiments

We are fans of the Little Printer here in das labs, so when it was released last year and our Printers arrived, we started brainstorming ideas for a Cooper-Hewitt publication.

In a nutshell Little Printer is a cute little device that delivers a mini personalized newspaper to you every day. You choose which publications you want to receive, such as ‘Butterfly of the Day’ or ‘Birthday Reminders’. LP publications are created by everyone from the BBC to ARUP to individual illustrators and designers looking to share their content in a unique way.

some existing LP publications

The first thing we thought of doing was a simple print spinoff of the existing and popular series on our blog called Object of the Day.

Aaron’s first stab at simply translating our existing Object of the Day blog series into (Little) print format.

Then we tried a few more iterations that were more playful, taking advantage of Little Printer’s nichey-ness as a space for us to let our institutional hair down.

Little Printer printout with a collections object in the middle and graphics that borrow from the Carnegie Mansion architectural details.

We tried to go full-blown with the decorative arts kitsch, but it came out kind of boring/didn’t really work.

Another interesting way to take it was making the publication a two-way communication as opposed to one-way, i.e., not just announcing the Object of the Day, but rather asking people to do something with the printout, like using it as a voting ballot or a coloring book. (Rap Coloring Book is a publication that lets you color in a different rapper each week; I think it’s pretty popular. I was also thinking of the simple digital-to-analog-to-digital interaction behind Flickr’s famous “Our Tubes are Clogged” contest of 2006, which I read about in the book Designing for Emotion (great book, I highly recommend it).)

paper prototype for little printer publication with hand drawn images and text

Took a stab at a horizontal print format with a simple voting interaction. Why has nobody designed a horizontal Little Printer publication yet? Somebody should do that…

The idea everybody seemed to like most was asking people to draw their own versions of collection objects that currently have no image.

If you look on our Collections Online, you’ll see that there are plenty of things in the collection that “haven’t had their picture taken yet.”

Screenshot of the Cooper-Hewitt collections website showing placeholder thumbnails for three items.

Un-digitized (a.k.a. un-photographed) collections objects

I think this is a better interaction than simply voting for your favorite object because it actually generates something useful. Participants will help us give visual life to areas of our database that sorely need it, similar to how the V&A is using crowdsourcing to crop 120,000 database images or how Museum Victoria in Australia is generating alt-text for thousands of images with their “Describe Me” project. The Little Printer platform adds a layer of cute analog quirk to what many museums and libraries are already doing with crowdsourcing.

paper printout of little printer publication. big empty box indicating where drawing should go.

This prototype (now getting closer..) uses machine tags to allow people to link their drawings directly to our database. I printed this with an inkjet printer so it looks a little sharper than the Little Printer thermal paper will look.

Lately at the museum we’ve been talking about Nina Simon’s “golden rule” of asking questions of museum visitors—that you should only ask if you actually CARE about the answer. This carries over to interaction design: you shouldn’t ask people for a gratuitous vote, doodle, pic, tweet, or whatever. I think some of the enjoyment that people will get out of subscribing to this publication and sending in their drawings will be the feeling that they’re helping the Museum in some way. [We know that there aren’t that many Little Printers circulating out there in the world, but we do think that those early adopters who do have them will be entertained and perhaps predisposed to playing with us.]

flowchart style napkin sketch showing little printer's connection to the internet, collections site and database.

A typical Aaron diagram.

The edition runs as part of the collections website itself (aka “parallel-TMS“). We chose to do this instead of running it externally on its own and using the collection API because it’s “fewer moving parts to manage” (according to Aaron). Here’s a little picture that Aaron drew for me when he was explaining how & where the publication would run. If you’re interested in doing a standalone publication, though, there are several templates on GitHub you can use as a starting point.

We’ll see how people *actually* engage with the publication and iterate accordingly…

"All your color are belong to Giv"

Today we enabled the ability to browse the collections website by color. Yay!

Don’t worry — you can also browse by colour but since the Cooper-Hewitt is part of the Smithsonian I will continue to use US Imperial Fahrenheit spelling for the rest of this blog post.

Objects with images now have up to five representative colors attached to them. The colors have been selected by our robotic eye machines, which scour each image in small chunks to create color averages. We use a two-pass process to do this:

  • First, we run every image through Giv Parvaneh’s handy color analysis tool RoyGBiv. Giv’s tool calculates both the average color of an image and a palette of up to five predominant colors. This is all based on the work Giv did for version two of the Powerhouse Museum’s Electronic Swatchbook, back in 2009.

  • Then, for each color in the palette list (we aren’t interested in the average) we calculate the nearest color in the CSS3 color spectrum. We “snap” each color to the CSS3 grid, so to speak.

We store all the values but only index the CSS3 colors. When someone searches the collection for a given color we do the same trick and snap their query back down to a manageable set of 121 colors, rather than trying to search for things across the millions of shades and variations of colors that modern life affords us.
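
The “snap” itself is just a nearest-neighbour lookup. Here’s a rough sketch of the idea (not the code in the palette server, and the palette dictionary is truncated to a handful of entries for illustration):

    # Sketch: "snap" an arbitrary hex color to the nearest color in a fixed
    # palette by minimising the squared distance in RGB space. The palette
    # here is truncated for illustration; the real CSS3 list is much longer.
    CSS3_COLORS = {
        "#a0522d": "sienna",
        "#808080": "gray",
        "#ffffff": "white",
        "#000000": "black",
    }


    def hex_to_rgb(hex_color):
        hex_color = hex_color.lstrip("#")
        return tuple(int(hex_color[i:i + 2], 16) for i in (0, 2, 4))


    def snap_to_grid(hex_color, palette=CSS3_COLORS):
        """Return the hex value of the palette entry closest to hex_color."""
        r, g, b = hex_to_rgb(hex_color)

        def distance(candidate):
            cr, cg, cb = hex_to_rgb(candidate)
            return (r - cr) ** 2 + (g - cg) ** 2 + (b - cb) ** 2

        return min(palette, key=distance)


    # The murky green #957d34 in the sample output below snaps to #a0522d (sienna).
    print(snap_to_grid("#957d34"))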

Our databases aren’t set up for doing complicated color math across the entire collection so this is a nice way to reduce the scope of the problem, especially since this is just a “first draft”. It’s been interesting to see how well the CSS3 palette maps to the array of colors in the collection. There are some dubious matches but overall it has served us very well by sorting things into accurate-enough buckets that ensure a reasonable spread of objects for each query.

We also display the palette for the object’s primary image on the object page (for those things that have been digitized).

We’re not being very clever about how we sort the objects or how we let you choose to sort the objects (you can’t), which is mostly a function of knowing that the database layer for all of this will change soon and not wanting to get stuck working on fiddly bits we know we’re going to replace anyway.

There are lots of different palettes out there and as we start to make better sense of the boring technical stuff we plan to expose more of them on the site itself. In the process of doing all this work we’ve also released a couple more pieces of software on GitHub:

  • color-utils is mostly a grab bag of tools and tests and different palettes that I wrote for myself as we were building this. The palettes are plain vanilla JSON files and at the moment there are lists for the CSS3 colors, Wikipedia’s list of Crayola crayon colors, the various “shades of SOME-COLOR” pages on Wikipedia, both as a single list and bucketed by family (red, green, etc.), and the Scandawegian Natural Colour System, mostly just because Frankie Roberto told me about it this morning.

  • palette-server is a very small WSGI-compliant HTTP pony (or “httpony”) that wraps Giv’s color analyzer and the snap-to-grid code in a simple web interface. We run this locally on the machine with all the images and the site code simply passes along the path to an image as a GET parameter. Like this:

    curl 'https://localhost:8000?path=/Users/asc/Desktop/cat.jpg' | python -m json.tool

    {
        "reference-closest": "css3",
        "average": {
            "closest": "#808080",
            "color": "#8e895a"
        },
        "palette": [
            {
                "closest": "#a0522d",
                "color": "#957d34"
            }

            ... and so on ...
        ]
    }

This allows us to offload all the image processing to third-party libraries and people who are smarter about color wrangling than we are.

Both pieces of code are pretty rough around the edges so we’d welcome your thoughts and contributions. Near the top of my TO DO list is to merge the code to snap-to-grid using a user-defined palette back into the HTTP palette server.

As I write this, color palettes are not exposed in either the API or the collections metadata dumps, but that will happen in pretty short order. Also on the list: a page to select objects based on a random color, but I just thought of that as I was copy-pasting the links for those other things that I need to do first…

In the meantime, head on over to the collections website and have a poke around.

Albers boxes

We have a lot of objects in our collection. Unfortunately we are also lacking images for many of those same objects. There are a variety of reasons why we might not have an image for something in our collection.

  • It may not have been digitized yet (aka had its picture taken).
  • We may not have secured the reproduction rights to publish an image for an object.
  • Sometimes, we think we have an image for an object but it’s managed to get lost in the shuffle. That’s not awesome but it does happen.

What all of those examples point to though is the need for a way to convey the reason why an image can’t be displayed. Traditionally museum websites have done this using a single stock (and frankly, boring) image-not-available placeholder.

We recently — finally — updated the site to display list-style results with images, by default. Yay!

In the process of doing that we also added two different icons for images that have gone missing and images that we don’t have, either because an object hasn’t been digitized or because we don’t have the reproduction rights, which is kind of like not being digitized. This is what they look like:

The not digitized icon is courtesy Shelby Blair (The Noun Project).
The missing image icon is courtesy Henrik LM (The Noun Project).

So that’s a start but it still means that we can end up with pages of results that look like this:

What to do?

We have begun thinking of the problem as one of needing to develop a visual language (languages?) that a person can become familiar with over time, and use as a way to quickly scan a result set and gain some understanding in the absence of an image of the object itself.

Today, we let some of those ideas loose on the website (in a controlled and experimental way). They’re called Albers boxes. Albers boxes are a shout-out, and a whole lot of warm and sloppy kisses, for the artist Josef Albers and his book Interaction of Color.

This is what they look like:

The outer ring of an Albers box represents the department that an object belongs to. The middle ring represents the period that an object is part of. The inner ring denotes the type of object. When you mouse over an Albers box we display a legend for each one of the colors.

We expect that the Albers boxes will be a bit confusing to people at first but we also think that their value will quickly become apparent. Consider the following example. The Albers boxes allow us to look at this set of objects and understand that there are two different departments, two periods and three types of objects.

Or at least that there are different sorts of things which is harder to do when the alternative is a waterfall of museum-issued blank-faced placeholder images.

The Albers boxes are not enabled by default. You’ll need to head over to the new experimental section of the collections website and tell us that you’d like to see them. Experimental features are, well, experimental so they might go away or change without much notice but we hope this is just the first of many.

Enjoy!

Also: if you’re wondering how the colors are chosen, take a look at this lovely blog post from 2007 from the equally lovely kids at Dopplr. They had the right idea way back then so we’re just doing what they did!
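
If you don’t feel like clicking through: as we understand it, the Dopplr trick was to hash a name and use the first six hex digits of the digest as an RGB colour, so the same label always gets the same colour. A minimal sketch of that idea (the label strings below are just examples, and our implementation may differ in its details):

    # Sketch: derive a stable color from a label by hashing the string and
    # using the first six hex digits as an RGB color, a la Dopplr. The labels
    # below are examples; the real rings use our department/period/type names.
    import hashlib


    def color_for_label(label):
        digest = hashlib.md5(label.encode("utf-8")).hexdigest()
        return "#" + digest[:6]


    # The three rings of an Albers box could then be colored like this:
    print(color_for_label("Product Design and Decorative Arts"))    # outer ring
    print(color_for_label("19th century"))                          # middle ring
    print(color_for_label("Drawing"))                               # inner ring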