Rethinking Search on the Collections Site

One of my longer-term projects since joining the museum has been rethinking how the search feature functions on the collections website. As we get closer to re-opening the museum with a suite of new technologies, our work in collaboration with Local Projects has prompted us to take a close look at the moving pieces that comprise the backend of our collections site and API. Search, naturally, forms a large piece of that. Last week, after a few weeks of research and experimentation, I pushed the first iteration live. In this post, I’ll share some of the thoughts and challenges that informed our changes.

First, a glossary of terms for readers who (like me, a month ago) have little-to-no experience with the inner-workings of a search engine:

  • Platform: The software that actually does the searching. The general process is that we feed data to the platform (see “index”), and then we ask it for results matching a certain set of parameters (see “query”). Everything else is handled by the platform itself. Part of what I’ll get into below involves our migration from one platform, Apache Solr, to another, Elasticsearch.
  • Index: An index is the database that the search platform runs its searches against. The search index is a lot like the primary database (it could probably fill that role if it had to), but it adds extra functionality to facilitate quick and accurate retrieval of search results.
  • Query: The rules to follow in selecting things that are appropriate to provide as search results. For users, the query could be something like “red concert poster,” but we have to translate that into something that the search provider will understand before results can be retrieved. Search providers give us a lot of different ways we can query things (ranges of a number, geographic distance or word matching to name a few), and a challenge for us as interface designers is to decide how transparent we want to make that translation. Queries also allow us to define how results should be sorted and how to facet results.
  • Faceting/Aggregation: A way of grouping results based on traits they possess. For example, faceting on “location” when you search our collection for “cat” reveals that 80 of our cat-related things are from the USA, 16 are from France, and so on. (A small example of a query with an aggregation appears after this list.)
  • Analysis (Tokenization/Stemming etc): A process that helps a computer work with sentences. Tokenization, for example, would split a search for “white porcelain vase” into the individual tokens: “white,” “porcelain” and “vase,” and then perform a search for any number of those tokens. Another example is stemming, which would allow the platform to understand that if a user searches for “running,” then items containing other words like “run” or “runner” are also valid search results. Analysis also gives us the opportunity to define custom rules that might include “marathon” and “track” as valid results in a search for “running.”
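To make the query and faceting pieces a bit more concrete, here is a minimal sketch of a full-text query with a terms aggregation, written in Python against a local Elasticsearch server. The index and field names are made up for illustration and don't reflect our actual setup.

import json
import requests

# A full-text search for "red concert poster" plus a facet (a "terms"
# aggregation) on a hypothetical "country" field. The index name
# ("collection") and field names are illustrative only.
query = {
    "query": {
        "match": {"description": "red concert poster"}
    },
    "aggs": {
        "by_country": {
            "terms": {"field": "country"}
        }
    }
}

rsp = requests.get(
    "http://localhost:9200/collection/_search",
    data=json.dumps(query),
    headers={"Content-Type": "application/json"}
)

results = rsp.json()

for hit in results["hits"]["hits"]:
    print(hit["_source"].get("title"))

for bucket in results["aggregations"]["by_country"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])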

The State of Search

Our old search functionality showed symptoms of under-performance in a few ways. For example, basic searches — phrases like “red concert poster” — turned up no results despite the presence of such objects in our collection, and searching for people would not return the person themselves, only their objects. These symptoms led me to identify what I considered the two big flaws in our search implementation.

On the backend, we were only indexing objects. This meant that if you searched for “Ray Eames,” you would see all of the objects we have associated with her, but to get to her individual person page, you would have to first click on an object and then click on her name. Considering that we have a lot of non-objects1, it makes sense to index them all and include them, where relevant, in the results. This made my first objective to find a way to facilitate the indexing and querying of different types of things.

On the frontend, we previously gave users two different ways to search our collection. The default method, accessible through the header of every page, performed a full text search on our Solr index and returned results sorted by image complexity. Users could also choose the “fancy search” option, which allowed for searches on one or more of the individual fields we index, like “medium,” “title,” or “decade.” We all agreed here that “fancy search” was confusing, and that its extra functionality — faceting, searching across many fields — shouldn’t be seen as “advanced” features. My second objective in rethinking how search works, then, was to unify “fancy” and “regular” search into just “search.”

Objective 1: Update the Backend

Our search provider, Solr, requires that a schema be present for every type of thing being indexed. The schema (an XML file) tells Solr what kind of value to expect for a certain field and what sort of analysis to perform on the field. This means I’d have to write a schema file — anticipating how I’d like to form all the indexed data — for each new type of thing we want to search on.

One of the features of Elasticsearch is that it is “schemaless,” meaning I can throw whatever kind of data I want at the index and it figures out how to treat it. This doesn’t mean Elasticsearch is always correct in its guesses — for example, it started treating our accession numbers as dates, which made them impossible to search on — so it also gives you the ability to define mappings, which have the same effect as Solr’s schemas. But if I want to add “people” to the index, or add a new “location” field to an object, using Elasticsearch means I don’t have to fiddle with any schemas. This trait of Elasticsearch alone made it worth the switch (see Larry Wall’s first great virtue of programmers, laziness: “the quality that makes you go to great effort to reduce overall energy expenditure”) because it’s important to us that we have the ability to make quick changes to any part of our website.
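When Elasticsearch does guess wrong, a one-off mapping is enough to pin a field's type down. Here's a rough sketch, in Python, of the kind of mapping that would tell Elasticsearch to treat accession numbers as plain strings rather than dates; the index, type and field names are illustrative, not our actual ones.

import json
import requests

# Tell Elasticsearch to treat accession numbers as un-analyzed strings
# instead of auto-detecting them as dates. The index ("collection"),
# type ("objects") and field names here are made up for illustration.
mapping = {
    "objects": {
        "properties": {
            "accession_number": {"type": "string", "index": "not_analyzed"}
        }
    }
}

requests.put(
    "http://localhost:9200/collection/_mapping/objects",
    data=json.dumps(mapping)
)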

Before building anything into our web framework, I spent a few days getting familiar with Elasticsearch on my own computer. I wrote a Python script that loops through all of the CSVs from our public collections repository and indexes them in a local Elasticsearch server. From there, I started writing queries just to see what was possible. I was quickly able to come up with a lot of the functionality we already have on our site (full-text search, date range search) and get started with some complex queries as well (“most common medium in objects between 1990-2000,” for example, which is “paper”). This code is up on GitHub, so you can get started with your own Cooper Hewitt search engine at home!
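The script on GitHub has more going on, but the core of it is roughly the following sketch; the directory layout and id handling here are assumptions for illustration.

import csv
import glob
import json
import requests

# Loop over the CSVs from the public collections repository and index
# each row as a document in a local Elasticsearch server. This is a
# simplified sketch; the directory name and id field are assumptions.
ES_URL = "http://localhost:9200/collection"

for path in glob.glob("collection/*.csv"):
    doc_type = path.split("/")[-1].replace(".csv", "")
    with open(path) as f:
        for row in csv.DictReader(f):
            requests.put(
                "%s/%s/%s" % (ES_URL, doc_type, row["id"]),
                data=json.dumps(row)
            )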

Once I felt that I had a handle on how to index and query Elasticsearch, I got started building it into our site. I created a modified version of our Solr indexing script (in PHP) that copied objects, people, roles and media from MySQL and added them to Elasticsearch. Then I got started on the endpoint, which would take search parameters from a user and generate the appropriate query. The code for this would change a great deal as I worked on the frontend and occasionally refactored and abstracted pieces of functionality, but all the pieces of the pipeline were complete and I could begin rethinking the frontend.
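The endpoint itself is PHP, but the translation step (turning user-supplied parameters into an Elasticsearch query body) looks roughly like this Python sketch; the parameter and field names are simplified stand-ins, not our real ones.

def build_query(params):
    """Translate user-supplied search parameters into an Elasticsearch
    query body. A simplified sketch of what the (PHP) endpoint does;
    the field names are illustrative."""
    must = []

    if params.get("query"):
        must.append({"match": {"_all": params["query"]}})

    if params.get("year_acquired"):
        must.append({"range": {"year_acquired": {"gte": params["year_acquired"]}}})

    return {
        "query": {"bool": {"must": must or [{"match_all": {}}]}},
        "aggs": {"by_country": {"terms": {"field": "country"}}}
    }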

Objective 2: Update the Frontend

Updating the frontend involved a few changes. Since we were now indexing multiple categories of things, there was still a case for keeping a per-category search view that gave users access to each field we have indexed. To accommodate these views, I added a tab bar across the top of the search forms, which defaults to the full-collection search. This also eliminates the confusion around what “fancy search” did, as the search categories are now clearly labeled.

Showing the tabbed view for search options

The next challenge was how to display sorting. Previously, the drop-down menu containing sort options was hidden in a “filter these results” collapsible menu. I wanted to lay out all of the sorting options for the user to see at a glance and easily switch between sorting modes. Instead of placing them across the top in a container that would push the search results further down the page, I moved them to a sidebar which would also house search result facets (more on that soon). While it does cut into our ability to display the pictures as big as we’d like, it’s the only way we can avoid hiding information from the user. Placing these options in a collapsible menu creates two problems: if the menu is collapsed by default, we’re basically ensuring that nobody will ever use them; if the menu is expanded by default, then the actual results are no longer the most important thing on the page (which, on a search results page, they clearly are). The sidebar gives us room to lay out a lot of options in an unobtrusive but easily-accessible way2.

Switching between sort mode and sort order.

The final challenge on the frontend was how to handle faceting. Faceting is a great way for users who know what they’re looking for to narrow down options, and a great way for users who don’t know what they’re looking for to be exposed to the various buckets we’re able to place objects into.

Previously on our frontend, faceting was only available on fancy search. We displayed a few of the faceted fields across the top of the results page, and if users wanted further control, they could select individual fields to facet on using a drop-down menu at the bottom of the fancy search form. When they used this, though, the results page displayed only the facets, not the objects. In my updates, I’ve turned faceting on for all searches. Facets appear alongside the search results in the sidebar.

Relocating facets from across the top of the page to the sidebar

Doing it Live

We initially rolled these changes out about 10 days ago, though they were hidden from users who didn’t know the URL. This was to prove to ourselves that we could run Elasticsearch and Solr alongside each other without the whole site blowing up. We’re still using Solr for a bit more than just the search (for example, to show which people have worked with a given person), so until we migrate completely to Elasticsearch, we need to have both running in parallel.

A few days later, I flipped the switch to make Elasticsearch the default search provider and passed the link around internally to get some feedback from the rest of the museum. The feedback I got was important not just for working out the initial bugs and kinks, but also (and especially for myself as a relative newbie to the museum world) to help me get the language right and consider all the different expectations users might have when searching our collection. This resulted in some tweaks to the layout and copy, and some added functionality, but mostly it will inform my bigger-picture design decisions going forward.

A Few Numbers…

Improving performance wasn’t a primary objective in our changes to search, but we got some speed boosts nonetheless.

Query | Before (Solr) | After (Elasticsearch)
query=cat, facets on | 162 results in 1240-1350ms | 167 results in 450-500ms
year_acquired=gt1990, facets on | 13,850 results in 1430-1560ms | 14,369 results in 870-880ms
department_id=35347493&period_id=35417101, facets on | 1,094 results in 1530-1580ms | 1,150 results in 960-990ms

There are also cases where queries that turned up nothing before now produce relevant results, like “red concert poster” (0 -> 11 results), “German drawings” (0 -> 101 results) and “checkered Girard samples” (0 -> 10 results).

Next Steps

Getting the improved search in front of users is the top priority now – that means you! We’re very interested in hearing about any issues, suggestions or general feedback that you might have — leave them in the comments or tweet us @cooperhewittlab.

I’m also excited about integrating some more exciting search features — things like type-ahead search and related search suggestions — onto the site in the future. Additionally, figuring out how to let users make super-specific queries (like the aforementioned “most common medium in objects between 1990-2000”) is a challenge that will require a lot of experimentation and testing, but it’s definitely an ability we want to put in the hands of our users in the future.

New Search is live on our site right now – go check it out!

1 We’ve been struggling to find a word to use for things that are “first-class” in our collection (objects, people, countries, media etc.) that makes sense to both museum-folk and laypeople. We can’t use “objects” because that already refers to the things that might go on display in the museum. We’ve also tried “items,” “types” and “isas” (as in, “what is this? it is a person”). But nothing seems to fit the bill.

2 We’re not in complete agreement here at the labs over the use of a sidebar to solve this design problem, but we’re going to leave it on for a while and see how it fares with time. Feedback is requested!

The Medium is the Message (and pubsocketd)


Have you ever wanted to see a real-time view of all the objects that people are looking at on the collections website? Now you can!

At least for objects with images. There are lots of opportunities to think about interesting ways to display objects without images but since everything that follows has been a weekends-and-mornings project we’ve opted to start with the “simple” thing first.

We have lots of different ways of describing media: 12,865 ways at last count, to be precise. The medium with the most objects associated with it (2,963) is cotton, but all of these numbers are essentially misleading. The history of the cataloging of the collection has favored precision and detail over the kind of rough bucketing (for example, tags) that lots of people are used to these days.

It’s a practice that can sometimes seem frustrating in the moment but, in the long-run, we’re better served for it. In time we will get around to assigning high-level categorizations for equally high-level browsing but it’s worth remembering that the practice of describing objects in minute detail predates things like databases, which we take for granted today. In fact these classifications, and their associated conventions and rituals, were the de-facto databases before computers or databases had even been invented.

But 13,000 different media, most of which only describe a single object, can be overwhelming. Where do you start? How do you know what to look for? Given the breadth of our collection, what don’t we have? And given the level of detail we try to assign to objects, how do you know whether a search doesn’t yield any results because the thing isn’t in our collection, or simply because we’re using a different name for it than you are?

This is a genuinely Big and Hairy Problem and we have not solved it yet. But the ability to relay objects as they are viewed by the public, in real-time, offers an interesting opportunity: What if we just displayed (and where possible, read aloud) the medium for that object?


That’s all The Medium is the Message does: it is an ambient display that lets you keep an eye on the kinds of things that are in our collection and offers a gentle, polite way to start to see the shape of all the different things that tell the story of the museum. It’s not a tool to help you take a quiz so much as a way to absorb an awareness of the collection as if by osmosis: to show people an aspect of the collection as an avenue to begin understanding its entirety.

We’re not thinking enough about sound. If we want all these things to communicate with us, and we don’t want to be staring at screens and they’re going to do more than flash a couple of lights, then we need to work with sound. Either ‘sound effects’ that mean something or devices that talk to us. Personally, I think it’ll be the latter morphing into the former. And this is worth thinking about because it’s already creeping up on us. Self-serve checkouts are talking at us, reversing trucks are beeping at us, trucks turning left are barking at us, incoherently – all with much less apparent thought and ‘design’ than we devote to screens.

— Russell Davies, the internet of talking

While The Medium is the Message is a full-screen application that displays a scaled-up version of the square-crop thumbnail for an object, it also tries to use your browser’s text-to-speech capabilities to read aloud that object’s medium. It may not be the kind of thing you want playing in a room full of people, but alone in your room, or under a pair of headphones, it’s fun to imagine it as a kind of Music for Airports for cultural heritage.


Text-to-speech is currently best supported in Chrome and Safari. Conversely the best support for crisp and pixelated image-rendering is in Firefox. Because… computers, right?

For the time being The Medium is the Message lives in a little sand-box all by itself over here:

http://medium.collection.cooperhewitt.org

Eventually we hope to merge it back into the main collections website, but since it’s all brand-new we’re going to put it some place where it can, if necessary, have little melt-downs and temper-tantrums without adversely affecting the rest of the collections website. It’s also worth noting that some internal networks – like at a big company or organization – might still disallow WebSockets traffic, which is what we’re using for this. If that’s the case, try waiting until you’re home.

And now, for the Nerdy Bits: the rest of this blog post is capital-T Technical, so you can stop reading now if that’s not your thing (though we think it’s still pretty interesting even if the details sound like gibberish).

The Medium is the Message is part of a larger project to investigate a few different tools in order to understand how they might fit together and to what effect. They are:

  • Redis and in particular its implementation of the Publish/Subscribe messaging paradigm – Every once in a while a piece of software is released that feels like genuine magic. Arguably one of the last examples of this was memcached, originally written by Brad Fitzpatrick for the website LiveJournal, and without which entire slices of the web as we now know it wouldn’t exist. Both Redis and memcached are similar in spirit in that their feature-set is limited by design, but what they claim to do Just Works™ and both have broad support across the landscape of programming languages. That last piece is incredibly important since it means we can use Redis to bridge applications written in whatever language suits the problem best. We’ll return to that idea in the discussion of “step 0” below.
  • Websockets – WebSockets are a way for a web browser and a server to create and maintain a persistent connection and to shuttle messages back and forth. Normally the chatter between a browser and a server happens akin to the way two people might send each other postcards in the mail; WebSockets are more like a pair of teenagers calling each other on the phone and talking for hours and hours and hours. Sort of like Pub/Sub for a web browser, right? WebSockets have been around for a few years now but they are still a bit of new territory; super-cool but not without some pitfalls.
  • Go – Go is a programming language from the nice people at Google, that recently celebrated its fourth anniversary. It is part of a growing trend in language design to find a middle ground between loosely typed languages and the need to develop stable applications with a minimum of fussiness. Go is probably not the language we would develop a complex user-facing application in, but for long-running services with well-defined boundaries it seems kind of perfect. (Go’s notion of code-based channels is a fascinating parallel to both Pub/Sub and WebSockets, but that’s a whole other blog post.)

Fun fact: The Labs’ very own Sam Brenner‘s ITP thesis project called Adventures of Teen Bloggers is an archive of old LiveJournal accounts in the shape of an 8-bit video game!

Adventures of Teen Bloggers

In order to test all of those technologies and how they might play together, we built pubsocketd, a simple daemon written in Go that subscribes to a Pub/Sub channel and ferries those messages to a browser using WebSockets (WS). It does three things:

  1. Listen for messages from a specific (Redis) Pub/Sub channel
  2. Accept incoming WS requests
  3. Shuttle any messages from the Pub/Sub channel to all the open WS connections

That’s it. It is left up to WS clients (your web browser) to figure out what to do with those messages.

$> ./pubsocketd -ws-origin=http://example.com
2014/08/01 17:23:38 [init] listening for websocket requests on 127.0.0.1:8080/, from http://example.com
2014/08/01 17:23:38 [init] listening for pubsub messages from 127.0.0.1:6379 sent to the pubsocketd channel
2014/08/01 17:23:44 [10.20.30.40][10.20.30.40:56401][handshake] OK
2014/08/01 17:23:44 [10.20.30.40][10.20.30.40:56401][request] OK
2014/08/01 17:23:44 [10.20.30.40][10.20.30.40:56401][connect] OK
2014/08/01 17:23:53 [10.20.30.40][10.20.30.40:56401][send] OK
2014/08/01 17:24:05 [10.20.30.40][10.20.30.40:56401][send] OK
# and so on...
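A web browser is the intended client, but any WebSocket client will do. Here is a minimal test client sketched in Python with the third-party websockets package; the URL and origin match the defaults and the -ws-origin flag shown in the log above.

import asyncio
import websockets

# Connect to a locally running pubsocketd server and print every message
# it relays from the Redis Pub/Sub channel. The origin needs to match
# whatever was passed to the -ws-origin flag.
async def listen():
    uri = "ws://127.0.0.1:8080/"
    async with websockets.connect(uri, origin="http://example.com") as ws:
        async for message in ws:
            print(message)

asyncio.run(listen())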

The “step 0″ in all of this is the ability for the collections website itself to connect to a Redis server and send a Pub/Sub message, whenever someone views an object, to the same channel that the pubsocketd server is listening to.
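On the collections site that publish call happens in PHP, but the idea fits in a few lines. Here is a sketch in Python using the redis package; the channel name comes from the log output above, while the message payload is an invented example rather than the exact shape the site sends.

import json
import redis

# Publish a message to the same channel that pubsocketd subscribes to,
# as the collections site does whenever someone views an object. The
# payload here is an invented example.
r = redis.StrictRedis(host="127.0.0.1", port=6379)

message = {
    "object_id": 18805769,
    "title": "Folding Fan, 1900-05",
    "image": "http://example.com/images/18805769.jpg"
}

r.publish("pubsocketd", json.dumps(message))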


This allows for a nice clean separation of concerns and provides a simple way for related, but fundamentally discrete, applications to interact without getting up in each other’s business.

Given the scope of the project we probably could have accomplished the same thing, with less scaffolding, using Server-Sent Events (SSE) but this was as much an exercise designed to get our feet wet with both WebSockets and Go so it’s been worth doing it the “hard way”.

Matthew Rothenberg, creator of the popular EmojiTracker, was nice enough to open-source the Go-based SSE server endpoint he wrote to feed his application and we may eventually re-write The Medium is the Message, or future applications like it, to use that.


We’ve open-sourced the code for pubsocketd under a BSD license and we welcome suggestions, patches and (gentle) clue-bats:

https://github.com/cooperhewitt/go-pubsocketd

Enjoy!

Robot Rothko


Now that I’ve written this blog post it occurs to me that it would be trivial to build something similar on top of the Cooper Hewitt Collections API — since that’s ultimately where all this colour stuff comes from — so I will probably do that shortly and stick it in the Play section.

That’s something I wrote last week on my personal weblog. I was writing about a little web “application” that I’d made to generate algorithmic “multiforms” that recall the work of the late painter Mark Rothko. The colors used to create these robot-multiforms are derived from photo uploads and extracted using the same code that the Cooper Hewitt uses to generate color palettes for the objects in our collection. We wrote about that process last year.

These robot “paintings” are built by fetching three photos and using each one’s primary color to fill one of three stacked rectangles that make up the canvas. The dominant color of a fourth photo is used, along with an inset CSS3 box-shadow, to give the illusion of a fuzzy, hazy background on which the rectangles sit. Every 60 seconds a new version is generated and the colors (and boxes) gently transition from old to new.


In that original blog post, I also wrote:

That’s it. It doesn’t do anything else and that’s part of the charm for me. It just sits in the background running in second-screen-mode stamping out robot-Rothko paintings. … It’s nice to have a new screen friend to spend the days with.

They’re not really Rothko paintings, obviously, and to suggest that they are would do the painter a disservice. Rothko’s paintings are not just any random set of colors stacked on top of one another. Rothko worked long and hard to choose the arrangement of his paintings and it’s easy to imagine that he would have been horrified by some of the combinations that Robot Rothko offers up. But like the experimental Albers Boxes feature they are a nod and gesture – and a wink – towards the real thing.


Having gotten things working for a personal, non-museum, and not-really-for-strangers project, I decided that it would be nice to do something similar for the museum, which is absolutely for everyone. So, today we are launching Robot Rothko, which is exactly the same as the application described above except that it uses objects from our collection instead of photos as its source material. Like this:

https://collection.cooperhewitt.org/play/robot-rothko/#info

 

See the #info part of that URL? That will cause the application to load with an information box explaining what you’re looking at (it will close itself automatically after 30 seconds). If you just want to jump straight to the application, all you have to do is remove the #info from the URL.

https://collection.cooperhewitt.org/play/robot-rothko/

Robot Rothko will automatically update itself using random object records to create a new multiform every 60 seconds. Mouse over any color to see the object it represents. Click on the text to see our collection record for the object itself.


You can also filter by person or decade. You can filter by the year we acquired an object, too, if you can guess where it is; that one still feels a little buggy so we’re going to hold off publishing the URL until we can figure out what’s wrong. Here are some examples of the first two:

https://collection.cooperhewitt.org/play/robot-rothko/people/18046041

https://collection.cooperhewitt.org/play/robot-rothko/decade/1910

Robot Rothko is native to the web, which means it will work in any modern web browser whether it’s on your desktop or your phone or your tablet. It can be put into fullscreen mode (by pressing shift-F) and if you save the website’s URL to your homescreen on your phone, or tablet, it is configured to launch without any of the usual browser chrome. If you use a Mac you can plug the URL for Robot Rothko into Todd Ditchendorf’s handy Fluid.app, which will turn it all into a shiny desktop application. I am guessing there are equivalent tools for Windows or Linux but I don’t know what they are.


If you’d like to generate your own Robot Rothkos there’s an API method for doing just that:

https://collection.cooperhewitt.org/api/methods/cooperhewitt.play.robotRothko

And of course it works with our recently announced support for DSON as a response format:

curl -X GET 'https://api.collection.cooperhewitt.org/rest/?method=cooperhewitt.play.robotRothko&access_token=SEEKRET&person_id=18041501&format=dson'
such "rothko" is such "canvas" is so "49" and "28" and "23" many and "palette" is so such "colour" is "#b8ab5b" , "id" is "18805769" , "epitaph" is "Folding Fan, 1900\u201305. Medium: silk, wood, horn, metal, metal spangles. Gift of Lillian C. Hart. 1985-89-1." wow ? such "colour" is "#c7c7c7" . "id" is "18640557" ! "epitaph" is "Drawing, "Two Studies for Rectangul", ca. 1965. Pen and black ink on white wove paper. Gift of Vladimir Kagan. 1992-56-7." wow , such "colour" is "#db8952" , "id" is "18133219" , "epitaph" is "Fragment, mid-18th century. Medium: silk\nTechnique: plain weave patterned by supplementary warp floats and complementary weft floats. Gift of John Pierpont Morgan. 1902-1-811." wow many ? "background" is such "colour" is "#c7a9af" . "id" is "18761047" ! "epitaph" is "Booklet Cover Sheet, 1916. Color woodcut on lavender wove paper paper. Museum purchase from Drawings and Prints Council Fund and through gift of Margery and Edgar Masinter and Merrill C. Berman. 1999-50-1-3." wow wow , "filters" is so many and "stat" is "ok" wow

Robot Rothko lives in a new section of the collections website called “Play”. The distinction between the Play section and the Experimental Features section of the website is probably easiest to think of this way: Experimental Features are things that apply to the entirety of the collections website, while Play things are small, contained applications that use the collections API and focus on, or build off, a particular aspect of the collection. The first of these was Sam Brenner’s SkyDesigner, and Robot Rothko is actually the third such application.


In between those two was What Would Micah Say? (WWMS), a quick end-of-day project to test out the W3C’s Text-to-Speech APIs that are starting to appear in some web browsers (read: Chrome and Safari as of this writing, and make sure you have the volume turned up). The WWMS “application” was mostly a simple 20-minute exercise to test whether fetching some content dynamically and feeding it to the text-to-speech APIs actually works and produces something usable. It does, which is very exciting because it opens up any number of accessibility-related improvements we can start thinking about adding to the collections website.

That we happened to use the cooperhewitt.labs.whatWouldMicahSay API method and then configured the text-to-speech API to read his words as if spoken by a “French” robot made it all a little bit silly and a little more fun but those are important considerations. Because sometimes playing at – or making interesting – a technical problem is the best way to work through whether it is even worth pursuing in the first place.


Making of: Design Dictionary Video Series

We often champion processes of iterative prototyping in our exhibitions and educational workshops about design. Practicing what we preach by actually adopting iterative prototyping workflows in-house is something we’ve been working on internally at Cooper Hewitt for the last few years.

In the 3.5 years that I’ve been here, I’ve observed some inspiring progress on this front. Here’s one story of iterative prototyping and inter-departmental collaboration in-house, this time for our new Design Dictionary web video series.

Design Dictionary is a 14-part video series that aims to demystify everything from tapestry weaving to 3D printing in a quick and highly visual way. With this project, we aimed not only to produce a fun and educationally valuable new video series, but also to shake up our internal workflow.

Content production isn’t the first thing you’d think of when discussing iterative prototyping workflows, but it’s just as useful for media production as it is for hardware, software, graphic design, and other more familiar design processes.

The origin of Design Dictionary traces back to a new monthly meeting series that was kicked off about two years ago. The purpose of the meetings was to get Education, Curatorial, and Digital staff in the same room to talk about the content being developed for our new permanent collection exhibition, Making Design. We wanted everything from the wall labels to the digital interactive experiences to really resonate with our various audiences. Though logistically clunkier and more challenging than allowing content development to happen in a small circle, big-ish monthly meetings held the promise of diverse points of view and the potential for unexpected and interesting ideas.

At one of these meetings, when talking about videos to accompany the exhibition, the curators and educators both expressed a desire to illustrate the various design techniques employed in our collection via video. It was noted that video of most any technique is already available online, but since these videos are of varying quality, accuracy, and copyright allowances, it might be worth it to produce our own series.

I got the ball rolling by creating a list of techniques that would appear more than once in Making Design.

Then I collected a handful of similar videos online, to help center the conversation about project goals. Even the habitual “lurkers” on Basecamp were willing to chime in when it came to criticizing other orgs’ educational videos: “so boring!” “so dry!” they said. This was interesting, because as a media producer it can be hard to 1) get people to actually participate and submit their thoughts and 2) break it to someone that their idea for a new video is extremely boring.

Once we were critiquing *somebody else’s* educational videos, and not our own darling ideas, people seemed more able to see video content from a viewer’s perspective (impatient, wanting excitement) as opposed to a curator/educator’s perspective (fixated on detail, accuracy, thoroughness, less concerned with the viewer’s interests & attention span).

a green post it note with four goals written on it as follows: 1) express new brand (as personality/mood) 2) generate online buzz 3) help docents/visitors grasp techniques in gallery-fast (research opinions) 4) help us start thinking about content creation in an audience-centered, purposeful way

I kept this note taped to my screen as a reminder of the 4 project goals.

It is amazingly easy to get confused and lost mid-project if you don’t keep your goals close. This is why I clung tightly to the sticky note shown above. When everyone involved can agree on goals up-front, the project itself can shape-shift quite nicely and organically, but the goals stay firm. Stakeholders’ concerns can be evaluated against the goals, not against your org. hierarchy or any other such evil criteria.

Even with all the viewer-centric empathy in the world, it can still be hard to predict what your audience will like and dislike. Would a video about tapestry weaving get any views on YouTube? What about 3D printing?

Screen shot of a tweet that says: Last chance! Tell us which design techniques interest you most in this one-question survey: http://bit.ly/Museum4U

We asked our Twitter followers which techniques interest them most.

We created a quick survey on SurveyMonkey and blasted it out to our followers on Facebook and Twitter to gauge the temperature.

a list of design techniques, each with an orange bar showing percentage of people who voted for that technique.

Surveying our Twitter and Facebook fans with SurveyMonkey, to learn which techniques they’d be interested in learning more about.

We also hosted the same survey on Qualaroo, which pops up on our website. My hunch about what people would say was all wrong. We used these survey results to help choose which techniques would get a video.

By this point, it was mid-winter 2014, and our new brand from Pentagram was starting to get locked in. It was a good opportunity to play with the idea of expressing this new brand via video. What should the pacing and rhythm be like? How should animations feel? What kind of music should we use?

grid of various images, each with a caption, like a mood board or bulletin board.

Public mood-boarding with Pinterest.

Seb & I are fans of “Look Around You” and we liked the idea of a somewhat cheeky approach to the dreaded “educational video.” How about an educational video that (lovingly, artfully) mocks the very format of educational videos? I created a Pinterest board to help with the art direction. We couldn’t go too kitsch with the videos, however, because our new brand is pretty slick and that would have clashed.

Then I made a low-stakes, low-cost prototype, recycling footage from a previous project. I sent this out to the curatorial/education team for feedback using Basecamp.

In retrospect I can now see that this video is awful. But at the time, it seemed pretty good to me. This is why we prototype, people!

With feedback from colleagues via Basecamp (less book, more live action, more prominent type), I made the next prototype:

I got mixed reactions about the new typography. Some found it distracting. And I was still getting a lot of mixed reactions to the book. So here was my third pass:

I was starting to reach out to artists and designers to lend their time to the shoots, and was cycling that fresh footage into the project, and cycling the new video drafts back to the group for feedback. Partially because we were on a deadline and partially because it works well in iterative projects, we didn’t wait for closure on step 1 before moving on to step 2.

a pile of scrap papers, each with different lists saying things like: "copy pattern, cover pattern with contact paper, mount pattern" or "embroidery steps: 1) cut fabric 2) stretch main fabric onto hoop 3) cut thread" et cetera

I got a crash course in 14 different techniques.

Every new shoot presented a new chance to test the look and feel and get reactions from my colleagues. Here was a video where I tried my own hand at graphical “annotations” (dovetail, interlock, slit):

By this point my prototype was refined enough to share with Pentagram, who were actively working on our digital collateral. I asked them to style a typographic solution for the series, which could serve as the basis for other museum videos as well. Whenever you can provide a designer with real content, do it, because it’s so much better than using dummy content. Dummy content is soft and easy, allowing itself to be styled in a way that looks good, but meets no real requirements when put through a real stress test (long words, bulky text, realistic quantities of donor credits, real stakeholders wanting their interests represented prominently).

Here is a revised video that takes Pentagram’s new, crisp typography into account:

This got very good feedback from education and curatorial. And I liked it too. Yay.

All in all, it took about 8 rounds of revision to get from the first cruddy prototype to the final polished result.

And here are the final versions.

Announcing SkyDesigner! Sam Brenner joins the Labs

Greetings readers! My name is Sam and I’m the new Interactive Media Developer here at the Cooper-Hewitt’s Digital and Emerging Media department. I’m thrilled to be here with the opportunity to help design and build the future of the museum, both online and in-house.

As part of my application for the position, I built SkyDesigner, a web application that lets users replace the color of the sky with a picture of a similarly-colored object from the Cooper-Hewitt’s collection. The “sky” idea comes from the original assignment, which was to create an application using both a weather API and the Cooper-Hewitt API, but you can use SkyDesigner to swap out colors from anything you can take a picture of (meaning, it’s great for selfies). Give it a try now!


Here’s how it works: first, users take a picture. If they’re on a computer, they can use their webcam. If they’re on a smartphone, they can use the built-in camera. Android users get (in my opinion) the better experience, because Android supports getUserMedia – this means that users can start their camera and take a picture without ever having to leave the application. iOS doesn’t support getUserMedia yet, so they are sent off to the native iOS camera app to take their picture, which then gets passed back to the browser. Once I receive the picture, I load it into a canvas.

In the next step, users tap on their picture to select a color. The color’s hex code is sent straight to the Cooper-Hewitt API’s search method, where I search for similarly-colored objects that have an associated image. While waiting for a response from the API, I also tell the canvas to make every pixel within range of the selected color become transparent. When I get the image back from the API, I load it in behind the canvas and presto! It shows through where the selected color used to be. Finally, the image is titled based on the object’s creator and your current weather information.
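That color-to-objects lookup is easy to reproduce against the public API. Here is a rough sketch in Python; the method and parameter names are my assumptions for illustration (check the API documentation for the exact search call), not necessarily what SkyDesigner uses.

import requests

# Search the collection for objects whose palette is close to a given
# hex color. The method and parameter names here are assumptions for
# illustration; consult the API docs for the exact search call.
params = {
    "method": "cooperhewitt.search.objects",
    "access_token": "SEEKRET",
    "color": "87ceeb",      # a sky-ish blue, without the leading "#"
    "has_images": 1
}

rsp = requests.get("https://api.collection.cooperhewitt.org/rest/", params=params)

for obj in rsp.json().get("objects", []):
    print(obj.get("id"), obj.get("title"))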

It’s built using HTML, CSS and JavaScript. The original application used PHP to talk to the API, but that has since been ported to JavaScript because I now have the luxury of running the site on the collections website itself, where we have our own built-in API hooks.

Being a weekend project, there are some missing features – sharing is a big one – but I think it demonstrates the API’s ability to provide fresh, novel ways into a museum’s vast collection. Here’s the link again, and you can also find the source on GitHub.