
Rethinking Search on the Collections Site

One of my longer-term projects since joining the museum has been rethinking how the search feature functions on the collections website. As we get closer to re-opening the museum with a suite of new technologies, our work in collaboration with Local Projects has prompted us to take a close look at the moving pieces that comprise the backend of our collections site and API. Search, naturally, forms a large piece of that. Last week, after a few weeks of research and experimentation, I pushed the first iteration live. In this post, I’ll share some of the thoughts and challenges that informed our changes.

First, a glossary of terms for readers who (like me, a month ago) have little-to-no experience with the inner-workings of a search engine:

  • Platform: The software that actually does the searching. The general process is that we feed data to the platform (see “index”), and then we ask it for results matching a certain set of parameters (see “query”). Everything else is handled by the platform itself. Part of what I’ll get into below involves our migration from one platform, Apache Solr, to another, Elasticsearch.
  • Index: The database that the search platform performs its searches against. The search index is a lot like the primary database (it could probably fill that role if it had to), but it adds extra functionality to facilitate quick and accurate retrieval of search results.
  • Query: The rules to follow in selecting things that are appropriate to provide as search results. For users, the query could be something like “red concert poster,” but we have to translate that into something that the search provider will understand before results can be retrieved. Search providers give us a lot of different ways we can query things (ranges of a number, geographic distance or word matching to name a few), and a challenge for us as interface designers is to decide how transparent we want to make that translation. Queries also allow us to define how results should be sorted and how to facet results.
  • Faceting/Aggregation: A way of grouping results based on traits they possess. For example, faceting on “location” when you search our collection for “cat” reveals that 80 of our cat-related things are from the USA, 16 are from France, and so on.
  • Analysis (Tokenization/Stemming etc): A process that helps a computer work with sentences. Tokenization, for example, would split a search for “white porcelain vase” into the individual tokens: “white,” “porcelain” and “vase,” and then perform a search for any number of those tokens. Another example is stemming, which would allow the platform to understand that if a user searches for “running,” then items containing other words like “run” or “runner” are also valid search results. Analysis also gives us the opportunity to define custom rules that might include “marathon” and “track” as valid results in a search for “running.”
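
To make that last point concrete, here is a minimal sketch (not our actual configuration; the analyzer and filter names are invented for illustration) of how tokenization, stemming and a custom synonym rule like the “running” example can be expressed as Elasticsearch index settings:

```python
# A minimal sketch, not our production config: names are illustrative.
analysis_settings = {
    "settings": {
        "analysis": {
            "filter": {
                # custom rule: treat "marathon" and "track" as related to "running"
                "activity_synonyms": {
                    "type": "synonym",
                    "synonyms": ["running, marathon, track"],
                },
                # stemming: reduces "running" to "run" so documents
                # containing "run" also match
                "english_stemmer": {
                    "type": "stemmer",
                    "language": "english",
                },
            },
            "analyzer": {
                "collection_text": {
                    "type": "custom",
                    "tokenizer": "standard",  # "white porcelain vase" -> three tokens
                    # synonyms are expanded before stemming so the rule above matches
                    "filter": ["lowercase", "activity_synonyms", "english_stemmer"],
                }
            },
        }
    }
}
```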

The State of Search

Our old search functionality showed symptoms of under-performance in a few ways. For example, basic searches — phrases like “red concert poster” — turned up no results despite the presence of such objects in our collection, and searching for people would not return the person themselves, only their objects. These symptoms led me to identify what I considered the two big flaws in our search implementation.

On the backend, we were only indexing objects. This meant that if you searched for “Ray Eames,” you would see all of the objects we have associated with her, but to get to her individual person page, you would have to first click on an object and then click on her name. Considering that we have a lot of non-objects1, it makes sense to index them all and include them, where relevant, in the results. My first objective, then, was to find a way to facilitate the indexing and querying of different types of things.

On the frontend, we previously gave users two different ways to search our collection. The default method, accessible through the header of every page, performed a full text search on our Solr index and returned results sorted by image complexity. Users could also choose the “fancy search” option, which allows for searches on one or more of the individual fields we index, like “medium,” “title,” or “decade.” We all agreed here that “fancy search” was confusing, and all of its extra functionality — faceting, searching across many fields — shouldn’t be seen as “advanced” features. My second objective in rethinking how search works, then, was to unify “fancy” and “regular” search into just “search.”

Objective 1: Update the Backend

Our search provider, Solr, requires that a schema be present for every type of thing being indexed. The schema (an XML file) tells Solr what kind of value to expect for a certain field and what sort of analysis to perform on the field. This means I’d have to write a schema file — anticipating how I’d like to form all the indexed data — for each new type of thing we want to search on.

One of the features of Elasticsearch is that it is “schemaless,” meaning I can throw whatever kind of data I want at the index and it figures out how to treat it. This doesn’t mean Elasticsearch is always correct in its guesses — for example, it started treating our accession numbers as dates, which made them impossible to search on — so it also gives you the ability to define mappings, which have the same effect as Solr’s schema. But if I want to add “people” to the index, or add a new “location” field to an object, using Elasticsearch means I don’t have to fiddle with any schemas. This trait of Elasticsearch alone made the switch worth it (see Larry Wall’s first great virtue of programmers, laziness: “the quality that makes you go to great effort to reduce overall energy expenditure”) because it’s important to us that we have the ability to make quick changes to any part of our website.
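
As a rough sketch of what defining a mapping looks like (the index, type and field names here are illustrative rather than our actual schema, and the string/not_analyzed syntax reflects the Elasticsearch versions of that era):

```python
# A hedged sketch, not our actual mapping: names are illustrative.
from elasticsearch import Elasticsearch

es = Elasticsearch()
es.indices.create(
    index="collection",
    body={
        "mappings": {
            "objects": {
                # stop Elasticsearch from guessing that hyphenated
                # accession numbers are dates
                "date_detection": False,
                "properties": {
                    "accession_number": {
                        "type": "string",
                        "index": "not_analyzed",  # match the whole number verbatim
                    }
                },
            }
        }
    },
)
```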

Before building anything into our web framework, I spent a few days getting familiar with Elasticsearch on my own computer. I wrote a Python script that looped through all of the CSVs from our public collections repository and indexed them in a local Elasticsearch server. From there, I started writing queries just to see what was possible. I was quickly able to come up with a lot of the functionality we already have on our site (full-text search, date range search) and get started with some complex queries as well (“most common medium in objects between 1990-2000,” for example, which is “paper”). This code is up on GitHub, so you can get started with your own Cooper Hewitt search engine at home!
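
The shape of that experiment was roughly the following (a simplified sketch rather than the actual script on GitHub; the file layout, column names and index/type names are assumptions for illustration):

```python
# Simplified sketch of the experiment: loop over the public collection
# CSVs and bulk-index each row into a local Elasticsearch server.
import csv
import glob

from elasticsearch import Elasticsearch, helpers

es = Elasticsearch()  # assumes Elasticsearch is running on localhost:9200


def actions(path):
    """Yield one bulk-index action per CSV row."""
    with open(path) as f:
        for row in csv.DictReader(f):
            yield {
                "_index": "collection",
                "_type": "objects",   # hypothetical type name
                "_id": row["id"],     # assumes an "id" column in the CSV
                "_source": row,
            }


for path in glob.glob("collection/*.csv"):
    helpers.bulk(es, actions(path))
```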

Once I felt that I had a handle on how to index and query Elasticsearch, I got started building it into our site. I created a modified version of our Solr indexing script (in PHP) that copied objects, people, roles and media from MySQL and added them to Elasticsearch. Then I got started on the endpoint, which would take search parameters from a user and generate the appropriate query. The code for this would change a great deal as I worked on the frontend and occasionally refactored and abstracted pieces of functionality, but all the pieces of the pipeline were complete and I could begin rethinking the frontend.
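
Our endpoint is written in PHP, but the translation step it performs looks roughly like this Python sketch: user-facing parameters in, an Elasticsearch query body out. The field names and parameters here are illustrative, not the actual ones we expose.

```python
# Not our actual PHP endpoint: a rough Python sketch of the same idea,
# turning user-facing search parameters into an Elasticsearch query body.
def build_query(params):
    must = []

    if params.get("query"):
        # free-text search across several of the indexed fields
        must.append({
            "multi_match": {
                "query": params["query"],
                "fields": ["title", "description", "medium"],
            }
        })

    if params.get("year_acquired", "").startswith("gt"):
        # e.g. "gt1990" becomes a range clause on the acquisition year
        must.append({
            "range": {"year_acquired": {"gt": int(params["year_acquired"][2:])}}
        })

    return {
        "query": {"bool": {"must": must or [{"match_all": {}}]}},
        "sort": [params.get("sort", "_score")],
    }
```

A call like build_query({"query": "red concert poster"}) then goes straight into the body of the search request.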

Objective 2: Update the Frontend

Updating the frontend involved a few changes. Since we were now indexing multiple categories of things, there was still a case for keeping a per-category search view that gave users access to each field we have indexed. To accommodate these views, I added a tab bar across the top of the search forms, which defaults to the full-collection search. This also eliminates confusion as to what “fancy search” did, since the search categories are now clearly labeled.

Showing the tabbed view for search options

The next challenge was how to display sorting. Previously, the drop-down menu containing sort options was hidden in a “filter these results” collapsible menu. I wanted to lay out all of the sorting options for the user to see at a glance and easily switch between sorting modes. Instead of placing them across the top in a container that would push the search results further down the page, I moved them to a sidebar which would also house search result facets (more on that soon). While it does cut into our ability to display the pictures as big as we’d like, it’s the only way we can avoid hiding information from the user. Placing these options in a collapsible menu creates two problems: if the menu is collapsed by default, we’re basically ensuring that nobody will ever use them; if the menu is expanded by default, then the actual results are no longer the most important thing on the page (which, on a search results page, they clearly are). The sidebar gives us room to lay out a lot of options in an unobtrusive but easily-accessible way2.

Switching between sort mode and sort order.

The final challenge on the frontend was how to handle faceting. Faceting is a great way for users who know what they’re looking for to narrow down options, and a great way for users who don’t know what they’re looking for to be exposed to the various buckets we’re able to place objects into.

Previously on our frontend, faceting was only available on fancy search. We displayed a few of the faceted fields across the top of the results page, and if users wanted further control, they could select individual fields to facet on using a drop-down menu at the bottom of the fancy search form. When they used this, though, the results page displayed only the facets, not the objects. In my updates, I’ve turned faceting on for all searches; the facets appear alongside the search results in the sidebar.
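
Under the hood, each facet is just an aggregation requested alongside the query. A hedged example, assuming a terms aggregation on a hypothetical “location” field:

```python
# Hedged example: request facet buckets alongside the search results.
from elasticsearch import Elasticsearch

es = Elasticsearch()

results = es.search(
    index="collection",
    body={
        "query": {"match": {"title": "cat"}},
        "aggs": {
            # bucket the matching objects by country, like the
            # "80 from the USA, 16 from France" example above
            "by_location": {"terms": {"field": "location", "size": 10}}
        },
    },
)

for bucket in results["aggregations"]["by_location"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```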

Relocating facets from across the top of the page to the sidebar

Doing it Live

We initially rolled these changes out about 10 days ago, though they were hidden from users who didn’t know the URL. This was to prove to ourselves that we could run Elasticsearch and Solr alongside each other without the whole site blowing up. We’re still using Solr for a bit more than just the search (for example, to show which people have worked with a given person), so until we migrate completely to Elasticsearch, we need to have both running in parallel.

A few days later, I flipped the switch to make Elasticsearch the default search provider and passed the link around internally to get some feedback from the rest of the museum. The feedback I got was important not just for working out the initial bugs and kinks, but also (and especially for myself as a relative newbie to the museum world) to help me get the language right and consider all the different expectations users might have when searching our collection. This resulted in some tweaks to the layout and copy, and some added functionality, but mostly it will inform my bigger-picture design decisions going forward.

A Few Numbers…

Improving performance wasn’t a primary objective in our changes to search, but we got some speed boosts nonetheless.

Query | Before (Solr) | After (Elasticsearch)
query=cat, facets on | 162 results in 1240-1350ms | 167 results in 450-500ms
year_acquired=gt1990, facets on | 13,850 results in 1430-1560ms | 14,369 results in 870-880ms
department_id=35347493&period_id=35417101, facets on | 1,094 results in 1530-1580ms | 1,150 results in 960-990ms

There are also cases where queries that turned up nothing before now produce relevant results, like “red concert poster” (0 -> 11 results), “German drawings” (0 -> 101 results) and “checkered Girard samples” (0 -> 10 results).

Next Steps

Getting the improved search in front of users is the top priority now – that means you! We’re very interested in hearing about any issues, suggestions or general feedback that you might have — leave them in the comments or tweet us @cooperhewittlab.

I’m also excited about integrating some more exciting search features — things like type-ahead search and related search suggestions — into the site in the future. Additionally, figuring out how to let users make super-specific queries (like the aforementioned “most common medium in objects between 1990-2000”) is a challenge that will require a lot of experimentation and testing, but it’s definitely an ability we want to put in the hands of our users in the future.

New Search is live on our site right now – go check it out!

1 We’ve been struggling to find a word to use for things that are “first-class” in our collection (objects, people, countries, media etc.) that makes sense to both museum folk and laypeople. We can’t use “objects” because that already refers to a thing that might go on display in the museum. We’ve also tried “items,” “types” and “isas” (as in, “what is this? it is a person”). But nothing seems to fit the bill.

2 We’re not in complete agreement here at the labs over the use of a sidebar to solve this design problem, but we’re going to leave it on for a while and see how it fares with time. Feedback is requested!

Interns React to…the Whitney's Audio Guide

I spent my summer as an intern in the Digital and Emerging Media department here at the Cooper-Hewitt National Design Museum. Next week, I head home to San Francisco where I will return to the graduate program in design at California College of the Arts. One of my projects this summer has been to visit museums, observe how visitors are using their devices (cell phones, iPads, etc), and to examine audio guides through the lens of an interaction designer.

When I went to check out MoMA’s new mobile guide, Audio+, it was the beginning of my stint at the Cooper-Hewitt; I had never done a museum audio tour with an iPod touch before, and my expectations were lofty. Now that I have spent a summer in Museum World, my perspective and my expectations have changed, so I wanted to repeat the exercise of going to a museum and critiquing an audio guide experience.

Last Sunday I spent my afternoon at the Whitney. I arrived at the museum around 1pm. There was no wait for the audio guide; I just walked up and handed over my ID in exchange. Like many other museums, the Whitney uses encased iPods for their audio guides. I was a bit surprised, however, to notice that the battery charge on my guide was around 40%. This turned out not to be a problem for me, but I did overhear other guests complaining that their guides had run out of batteries partway through the visit.

Two different views of the Whitney’s audio guide

The Whitney’s audio guide interface is simple and straightforward. All of the guide’s navigation is text based, and this digital affordance reinforced the fact that the guide does not offer endless paths and options. After just one or two minutes of clicking around the app, there was no more mystery. The minimalist design helped me to immediately understand what I was going to do with the device and made it easier for me to focus attention on what was on the walls rather than what was on the screen.

I was, and still am, pretty taken with the quality of content available on the Whitney’s guide. From what I could tell, in each room of each gallery there is audio content for at least two pieces. It may not sound like a lot, but it ended up being more than enough for me. The consistency of content allowed and encouraged me to use the guide throughout my visit and I was surprised to see how many visitors were using the audio guides – and not just tourists, but locals too.

According to Audio Guide stop 501, Oscar Bluemner once wrote, “Listen to my work as you listen to music…try to feel.” The Whitney’s audio guide embraces this idea by playing mood setting sounds and music to complement their audio descriptions.

Situation in Yellow on the Whitney’s web player

One of my favorites is the description of Burchfield’s Cricket Chorus in the Arbor, which you can listen to here. With crickets chirping in the background, it’s much easier to put yourself into the world described (late summer, thick trees and bushes, crickets, sunset). The first time I came across one of the audio guide stops with background music I was surprised and delighted. It was a subtle gesture, but one that really elevated the content and did wonders for putting me in the mood, so to speak.

Compared to its counterpart at MoMA, the Audio+, this guide has less content, and that was clear from the start. Less content may sound like one point for the “con” list, but I do not think it is always a bad thing. Yes, there were a few times when I missed being able to look up more info about the artist and see related works, but not having the option at all definitely made me more focused on the art in front of me. In the case of the Whitney, the content is good enough that the “less is more” approach is working well. Plus, all of their audio content is available online, so when I am back at my desk, I can browse through and re-listen or learn more about the clips I found particularly interesting.

A Whitney visitor using the audio guide

In both the guide for the Whitney and the guide for the MoMA, the museum’s own style is reflected. The MoMA states that their “mission is helping you understand and enjoy the art of our time,” whereas the Whitney is “dedicated to collecting, preserving, interpreting, and exhibiting American art.” It makes sense, then, that the MoMA (focused on education) includes much more educational content in their mobile guide and that the Whitney does not.

From what I’ve learned this summer (through working with the labs team and through visits to other museums), I know that the visitor experience at home is just as important as the visitor experience in the museum. For myself, I liked the Whitney guide so much because I didn’t feel compelled to do an incredibly deep dive on a mobile device when I should be focusing my attention on what is in front of me. However, where was my at-home experience? Once I left the museum, I had nothing to take with me that would prompt me to visit their website and learn more about what I saw. I would love to follow up the great audio guide experience with a great at-home experience in the vein of what MoMA is doing with the option to take your visit home. There is opportunity here and I hope the Whitney has plans to fill it.

Interns React to…MoMA's Audio+

I spent my summer as an intern in the Digital and Emerging Media department here at the Cooper-Hewitt National Design Museum. Next week, I head home to San Francisco where I will return to the graduate program in design at California College of the Arts. One of my projects this summer has been to visit museums, observe how visitors are using their devices (cell phones, iPads, etc), and to examine audio guides through the lens of an interaction designer.

Before you start, it’s important to note that I ran over to try out the Audio+ as soon as I could. The new guides are technically still in a pre-release phase and the team at MoMA is actively rolling out tweaks and fixes.

I arrived at the museum around 11 am (they open at 10.30) and already there was a pretty significant line for the mobile guide. After a bit of a wait, I exchanged my photo ID (note: passports and credit cards are not accepted) for the encased iPod touch. Although they are commonly used by museums as audio guides, this was the first time I had ever done an audio tour with an iPod touch and my expectations were lofty from the start. Hanging around my neck from a lanyard was a device full of content, and a device that I knew could connect my experience at MoMA to the world wide web! Cue sunburst and music from the heavens.

MoMA visitors waiting in line to pick up an audio guide

The guide is handed to you with the prompt to “Take your visit home” and here you can enter your email address, which I did, or skip this step and do it later (or not at all). The in-museum functionality is the same regardless of whether you decide to give them your email address or not.

Email address entered, ready to go.

…Or so I thought. After a few network connection failure screens, I took the guide back to the Audio+ desk and they inserted what looked like a folded up paperclip into a slot in the back of the case and pushed some kind of reset button. Not a big deal since I was still in the lobby, but it would have been nice to, as Nielsen’s 10 usability heuristics suggest, include some help and documentation outside of “network connection failure.” I assume this is one of the kinks being worked out.

Image of the Audio+ guide

As I ambled around the third floor, I couldn’t help but get pulled into the guide’s glowing screen; it was a bit distracting, actually. The interface looks like a website; there are clickable images, clickable text, videos, a camera, and an icon-based navigation system. I spent at least 15 minutes playing around with the app, not only trying to figure out what it could do, but trying to figure out what I should do. I had too many options and my attention was on the device in my hands rather than on the walls where it should have been. I wondered what else (besides explore content) I could do with the device. Is it going to navigate me through the maze of MoMA and tell me to turn right at the Gilda Mantilla drawings in order to get to the A Trip from Here to There exhibit? Does it know where I am? No, but I wish it did. This may be unavoidable, but I would be surprised if most visitors don’t feel the same way. It’s an iPod touch and I therefore expect it to do the things a Wi-Fi enabled iPod touch can do (mainly help me to find my way), but it doesn’t…and I really want it to.

Once I stopped fidgeting with the new toy, the first thing I did with the guide was listen to an audio description of Alighiero Boetti’s process in the piece Viaggi postali (Postal Voyages). The audio content was engaging, and with the guide I could also read information, see the location within the museum, and…see related works! This last feature was my favorite; I love having a connection to something on the wall and immediately being able to see more from the artist. This was significantly harder, however, when the piece I was interested in did not have the little audio icon on the label (which is true for the majority of the work at MoMA).

Audio content icon

I want info with a single tap or a simple search for all of the pieces on the walls, as I got for the Boetti piece, not just the ones with the audio icon. Unfortunately, access to extended content for artworks outside of the official tours meant effort because (without a clear alternative) my instinct was to search either the artist name or the title. As you can imagine, unfamiliar names and looooong titles made this a tiresome process. The cognitive load was on me, the user, rather than on the technology. I want access to the content without having to think.

Screens on the Audio+ guide: menu options, audio content page, and search screen

One thing I loved about the Audio+ guide was the built-in camera. Not having to juggle my own camera along with the audio guide was a relief and it was an easy way to ensure that content would be available for me to really explore on my own time, outside of the museum. Throughout my visit I used the guide to take photos and add stars to the pieces I liked, but unless the piece was part of an official tour (i.e. I didn’t have to search by artist/title), I was not compelled to look up any extended info. Call me lazy, but going through the search process was too much effort for the return. After about an hour I returned the guide and realized it was lucky I arrived when I did; by 11.50 am they were completely out of mobile guides and there was still a line of people waiting.

Post visit, I was emailed a link to My Path at MoMA; it is elegantly designed and responsive to screen size, so it is just as comfortable to view on a mobile device as it is on the desktop.

Screen capture of My Path at MoMA

In My Path there are three different categories: Dashboard, Timeline, and Type. Dashboard gives a handful of metrics about the visit — duration (52 mins), works viewed (11 out of 1064), artists viewed (14 out of 590), and years explored (65 out of 132). My first thought: “What?! I saw more than 11 works!” I like what Dashboard tells me, but it is an incomplete story and I want to know more. Now that I’m back at my desk and I’ve walked through about ten doorways, my memory is fuzzy. What are these 11 works that I allegedly viewed, and how is the guide determining what I did or did not see? 1984 is supposedly my Most Explored Year; well, what was it I saw from 1984?

Screen capture of the years explored section of My Path

The Timeline and Type sections show the stuff I did within the guide: audio listened to, photos taken, and items searched for. It’s the same content in each section, just sorted differently. It’s really great to see, and excellent to have it all in one place. However, this is where I think there is a big opportunity lost from an interaction design perspective. I can play the audio tours in the My Path interface, but the links do not take me to the MoMA website where I can get more information about the artists, related works, etc. Basically, all of the functionality in the Audio+ mobile guide that made it easy to contextualize and relate individual works within a greater context is lost when I am back at my desk and most able to use it. I prefer to have more content and connections available when I’m at home processing my visit (and have a full-sized monitor to use), but the Audio+ experience gives you the most content when you are on site (and looking at a tiny screen) and takes it away when you leave.

There are some errors with the My Path interface, and I assume that some of these are tweaks being worked out. For example, when I open a photo, the sharing options are convenient, but they don’t include an option for downloading the original. Even the email link, which I expected would email me the photo, just sends a link to My Path. This again brings me to the point that with something like this, users shouldn’t have to think.

Screen capture of the photo share options in My Path

Beyond my gripes, let me say that overall my experience with the Audio+ was strongly positive. I generally hate using audio guides (because they are usually boring and clunky), but kudos to the folks behind the Audio+ because this first iteration is fun to use and provides delightful content above and beyond what I am used to. There is huge potential both with the guide and with the post-visit interface and I look forward to giving it another go in a few months once they have had a chance to work out the bumps.