
Making ‘Dive into Color’

Guest post by Olivia Vane

‘Dive into Color’ is an interactive timeline for exploring the Cooper Hewitt collection by colour/colour harmony and time. It is exhibited in ‘Saturated: The Allure and Science of Color’, 11 May 2018 – 13 Jan 2019.

After spending time at Cooper Hewitt last year on a fellowship, I returned to London, where I’m a PhD student in Innovation Design Engineering at the Royal College of Art (RCA), supervised by Professor Stephen Boyd Davis. At Cooper Hewitt, I developed a prototype timeline tool for visualising the museum collection by tags.

This post describes further work on that prototype, shifting the tool to exploring the collection by colour. As a curator explained: “[visualising by] colour, I think, is useful for the purposes of the study of the taste for different colours, but it’s also a more interesting exercise for the public just to be able to do and get pleasure out of.” Colour is enjoyable – it’s eye-catching and vibrant – and it offers a visual, intuitive way to explore a digitised collection without needing specialist knowledge. With a design collection like Cooper Hewitt’s, tracing colour through history is also interesting for looking at fashions and innovation in colour technology.

I’ve been asked a few times recently what my process is for designing visualisations. So in this post I’m going to step through the early prototypes and retrace my design decisions. Along the way, I’ll talk over practical points for working with colour (and colour harmonies!) in collection data, and for working between digital and artists’ colour systems.

Previous colour-collections visualisations
Colour has previously been used both as a facet for search and for visualising collections. Geoff Hinchcliffe’s ‘Tate Explorer’ offers colour as a search facet paired with a timeline. This prototype from the Swedish National Heritage Board (write-up forthcoming) combines colours and tags for exploring a fashion collection. Richard Barrett-Small’s ‘ColourLens’ searches over Rijksmuseum and Walters Art Museum data by colour with a graphic for each item visualising its colour proportions. And Google Arts & Culture’s ‘Art Palette’ is a search engine that finds artworks based on a chosen colour palette.

Collections visualised by colour include the Library of Congress, where Laura Wrubel created this tool for surveying the colour palettes of objects across the collection and Jer Thorp visualised the colour names present in the titles of works. Also using Cooper Hewitt data, Rubén Abad created this visualisation of the colours present by decade in Cooper Hewitt’s objects. Lev Manovich has visualised artworks, for example Mondrian and Rothko paintings, by colour characteristics including hue and saturation. Everardo Reyes visualised Paul Klee’s paintings by hue. And Brian Foo’s visualisation of the New York Public Library digitised collection has an option to organise items by colour.

I was interested to explore colour alongside the time dimension. And since Cooper Hewitt was preparing for an exhibition, ‘Saturated’, focusing on colour theory and design, I was intrigued to see if I could trace colour harmonies across the collection.

Colour harmonies are combinations of colours that are pleasing to the eye. The relative positions of colours in a colour wheel can be used to describe different harmonies like complementary colours (opposites on the colour wheel), or analogous colours (neighbours on the colour wheel). Artists and designers create different visual effects and contrasts with different harmony types.

Colour harmony examples, image from studiobinder

Extracting colours across the Cooper Hewitt collection
It’s already possible to search by colour on Cooper Hewitt’s collection site. Colour data was extracted using Giv Parvaneh’s great tool RoyGBiv (described in this Cooper Hewitt Labs post). Roughly, RoyGBiv works by checking the colour value of each pixel in an image, clustering colour values that are similar enough to be considered the same, and returning up to five dominant colours.
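
RoyGBiv is its own tool and I’m not reproducing it here; this Python sketch (assuming Pillow for image loading, with a made-up distance threshold) just illustrates the general pixel-clustering idea:

```python
from PIL import Image  # assumed dependency, used only to read pixels

def dominant_colours(path, max_colours=5, threshold=40):
    """Crude sketch: sample pixels, merge similar colours into running
    clusters, and return the largest clusters as hex codes."""
    img = Image.open(path).convert("RGB").resize((100, 100))  # downsample for speed
    clusters = []  # each cluster: [sum_r, sum_g, sum_b, count]
    for r, g, b in img.getdata():
        for c in clusters:
            cr, cg, cb = c[0] / c[3], c[1] / c[3], c[2] / c[3]  # cluster mean
            if (r - cr) ** 2 + (g - cg) ** 2 + (b - cb) ** 2 < threshold ** 2:
                c[0] += r; c[1] += g; c[2] += b; c[3] += 1
                break
        else:  # no cluster close enough: start a new one
            clusters.append([r, g, b, 1])
    clusters.sort(key=lambda c: -c[3])  # most common colour first
    return ['#%02x%02x%02x' % (round(c[0] / c[3]), round(c[1] / c[3]), round(c[2] / c[3]))
            for c in clusters[:max_colours]]
```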

The colours extracted from Cooper Hewitt’s collection with RoyGBiv are good on the whole, but errors sometimes occur. The background colour is sometimes picked up. The effect of light and shadow on a 3D object can introduce multiple, illusory colours:

As always there are quirks working with digitised collections, like these Dutch tiles which had coloured stickers on them when they were photographed:

Or lace photographed against a dark background for contrast:

Since the possible number of unique colours extracted across the collection is huge, searching by colour on the Cooper Hewitt website is simplified by snapping extracted colours to the closest value in a standardised palette (the default is the CSS4 web colour palette, but the CSS3 and Crayola palettes are also options). On the Cooper Hewitt website, you can search the collection by 116 CSS4 colours. Both the original and ‘snapped to’ colour palettes are available in the Cooper Hewitt data – all stored as hex codes (six hexadecimal digits representing the levels of red, green and blue).
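
Snapping is essentially a nearest-neighbour lookup in RGB space. A toy Python version with just three palette entries (the real site snaps against the full 116-colour CSS4 palette, and the production code may use a different distance measure):

```python
# Toy palette snapping: pick the nearest palette entry by squared
# distance in RGB space. Only three entries shown for brevity.
CSS4_SAMPLE = {'orangered': '#ff4500', 'steelblue': '#4682b4', 'olivedrab': '#6b8e23'}

def hex_to_rgb(h):
    h = h.lstrip('#')
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def snap(extracted_hex, palette=CSS4_SAMPLE):
    target = hex_to_rgb(extracted_hex)
    return min(palette,
               key=lambda name: sum((t - c) ** 2
                                    for t, c in zip(target, hex_to_rgb(palette[name]))))

print(snap('#ff5510'))  # -> 'orangered'
```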

Prototyping
As a first step, I adapted my code to visualise collection items matching a CSS4 colour on a timeline (see visualisations below).

Although Cooper Hewitt has an API (an Application Programming Interface: a way for someone to make use of Cooper Hewitt’s data through a set of pre-defined requests made over the web), there is no method for returning all the objects matching a colour. Instead, I used collection data Cooper Hewitt had put on GitHub – an argument for institutions to offer both!

‘Orangered’

‘Steelblue’

‘Olivedrab’

I then started exploring how I might visualise objects featuring a colour harmony, first trying complementary colours (opposites on the colour wheel).

I initially tried to do this by matching a chosen CSS4 colour with the nearest CSS4 colour of opposite hue. The HSL and HSV (hue, saturation, lightness/value) colour systems define hue as an angle round a circle (0–360°), so I inverted hues by converting hex codes to HSL. The visualised results were unsatisfying, though, as the search often failed to find any matches. That is not to say there was a lack of objects with complementary colours; rather, my search was too precise (and artificially precise, since which CSS4 palette colours count as complementary is fuzzy, and the colour data is imprecise anyway, e.g. the illusory colours extracted for 3D objects).
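
A minimal sketch of that inversion (in Python, using the standard-library colorsys module; the prototype itself was presumably JavaScript):

```python
import colorsys

def complement(hex_colour):
    """Rotate a colour's hue by 180 degrees in HSL, as described above."""
    r, g, b = (int(hex_colour.lstrip('#')[i:i + 2], 16) / 255 for i in (0, 2, 4))
    h, l, s = colorsys.rgb_to_hls(r, g, b)  # note: colorsys orders hue, lightness, saturation
    r2, g2, b2 = colorsys.hls_to_rgb((h + 0.5) % 1.0, l, s)  # hue is 0-1 here, so +0.5 = +180 deg
    return '#%02x%02x%02x' % (round(r2 * 255), round(g2 * 255), round(b2 * 255))

print(complement('#4682b4'))  # steelblue -> a warm orange-brown
```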

I tried extending the reach of my search to matching several colours close to the inverted hue, but it felt very frustrating not to have a visual reference to the range of colours in a region and what colours were being searched over.

So I started experimenting with using a colour wheel input as a way to pick colour combinations and simultaneously see possible hue relations. I first tried mapping the colours from the standardised palettes by HSL round a circle.

‘Snapped to’ colours in the Cooper Hewitt collection. CSS4 (left) and Crayola (right) palettes mapped by hue (in HSL). Angle = hue, radius = lightness.

And to make it easier to see the possible colours, I wrote some code to map the CSS4 palette colours to a wheel.

CSS4 palette colours → mapped round a wheel. Hue (HSL) = angle, ordered by lightness
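
The mapping itself is simple polar geometry. A minimal Python sketch of the idea (my own names; again, the prototype was presumably JavaScript):

```python
import colorsys, math

def wheel_position(hex_colour, wheel_radius=100):
    """Place a colour on a wheel: hue sets the angle, lightness the distance from centre."""
    r, g, b = (int(hex_colour.lstrip('#')[i:i + 2], 16) / 255 for i in (0, 2, 4))
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    angle = h * 2 * math.pi  # hue 0-1 -> 0-2*pi radians
    return (l * wheel_radius * math.cos(angle),
            l * wheel_radius * math.sin(angle))
```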

I realised at this point, though, that the resulting design doesn’t match a typical artist’s/pigment colour wheel (which has red, yellow, blue – RYB – primary colours). HSL is a simple transformation of RGB colour space, and therefore the wheel has red, green, blue primaries. If this colour wheel is used to search over design artefacts, surely it would be more appropriate to use a design closer to the norm for artists and designers? (These in-depth articles by David Briggs – part 1, part 2 – explain the differences between traditional and modern colour theory, and colour training for artists).

There is no ‘correct’ colour wheel to adopt, but converting my HSL colour wheel to something closer to an artist’s version (using code from Ben Knight’s implementation of Adobe’s ‘Kuler’ colour wheel) felt like a reasonable compromise here.
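
I’m not reproducing Ben Knight’s code here; the sketch below only illustrates the general technique of warping hue angles by piecewise-linear interpolation between anchor points. The anchor values are illustrative, not the ones ‘Dive into Color’ actually uses:

```python
# Piecewise-linear remap from RGB-wheel hue angles to an RYB-style artist's
# wheel. Each anchor pairs an input hue (HSL/RGB wheel) with where it should
# land on the artist's wheel; hues between anchors are interpolated.
ANCHORS = [(0, 0), (60, 120), (120, 180), (180, 215), (240, 240), (300, 300), (360, 360)]

def to_artists_hue(hue):
    for (x0, y0), (x1, y1) in zip(ANCHORS, ANCHORS[1:]):
        if x0 <= hue <= x1:
            t = (hue - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return hue % 360

print(to_artists_hue(60))  # yellow moves from 60 deg (RGB wheel) to 120 deg (RYB wheel)
```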

(Left) Colours mapped by hue (HSL). Angle = hue, radius = lightness. (Right) Colour mapping adjusted to more closely resemble an artist’s colour wheel.

Using this colour wheel as a guide and an input, I could see and choose which colours to query. By searching over multiple colours, the visualised results were better. (In the images below, white and black borders around tiles in the colour wheel indicate the searched-over colour combinations):

Purple against olive timeline

Orangered against cyan/blue timeline

Querying against colour data in HSV
While the results looked better with this prototype, the user interface was a mess and complicated to use. And the search query did not exclude objects that featured other colours in addition to the searched-for combination.

Sticking with the CSS4 palette was greatly complicating the task, so I abandoned using it. I converted all the original extracted colours (not ‘snapped’) from hex codes to HSV and created my own Elasticsearch index with the HSV colours stored as a nested datatype. This way I can search over a hue range with thresholds on saturation and value, exclude objects that also feature other hues, and easily define more complex colour-harmony searches (e.g. analogous, triadic, quadratic and split complementary).
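
As a rough sketch, a complementary-harmony query body might look like the following; the index and field names are my own stand-ins, not the production schema:

```python
# Complementary search around hue 30 deg: one nested clause per required hue
# band, plus "must_not" clauses rejecting objects that also contain a strong
# colour outside those bands (wrap-around at 0/360 omitted for brevity).
def hue_clause(lo, hi, min_sat=0.4, min_val=0.4):
    return {"nested": {"path": "colors", "query": {"bool": {"filter": [
        {"range": {"colors.h": {"gte": lo, "lte": hi}}},
        {"range": {"colors.s": {"gte": min_sat}}},
        {"range": {"colors.v": {"gte": min_val}}},
    ]}}}}

query = {
    "query": {"bool": {
        "must": [hue_clause(15, 45), hue_clause(195, 225)],      # base hue + complement
        "must_not": [hue_clause(46, 194), hue_clause(226, 360)], # no other strong hues
    }},
    "size": 100,  # cap the results, e.g. to the 100 most saturated
}
```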

Different colour harmonies tried in prototyping

Colour wheel graphic for objects
As a by-product, I realised I could repurpose my code to map individual object palettes round a colour wheel too. Thus, you get a compact graphic describing the colour relationships present in a single design. This is a nice example of the serendipity of designing, where you identify new possibilities as a result of seeing what you have already made.



(Above) Object with colour palette, (below) palette mapped to a wheel: easy to see complementary harmony


Colour wheel graphic examples

I adapted a simple artist’s (RYB) colour wheel to use as an input and tested the prototype with visitors at Royal College of Art’s January 2018 ‘Work in Progress’ show.

Testing the prototype with visitors at Royal College of Art’s Jan 2018 ‘Work in Progress’ show

In order to avoid reducing the size of the images (so it’s still possible to see what the objects are), I’ve capped the number of visualised objects to the 100 most saturated in colour.

There were few hits for the more complex harmonies (quadratic, tetradic, split complementary) and the results felt less convincing. I had widened the hue range to search over in order to increase the small number of hits, so the results were less visually cohesive anyway. And, in conversations with the museum curators, we decided to drop these more complex harmonies from the interface.

As mentioned earlier, there are some errors in the colour data. At this stage, since this setup only allows a fixed set of possible searches, with repeatable results, it was worth it for me to do some manual editing of the colour data to remove obvious errors.

Final interface design
For the final (more polished!) interface design, which is now on display, I settled on a colour wheel input inspired by this Hilaire Hiler design in Cooper Hewitt’s collection. (This wheel actually has 4 ‘psychological’ colour primaries and features 30 hues.) The interface has 4 harmony options: monochromatic, complementary, analogous and spectrum (a rainbow colour option).

Color wheel picker, inspired by Hiler’s design, used in ‘Dive into Color’

 

‘Dive into Color’ installed at Cooper Hewitt. Photo credit: Caroline Koh Smith

What does the tool reveal?
Visualising the Cooper Hewitt data this way gives some sense of when colours appear in time. There are no results for purple pre-19th century. Perhaps because of the difficulty and expense of producing purple before synthetic dyes/pigments were developed in the 19th century?

Though, as is often the case when interpreting collection visualisations, it is difficult to disentangle historical trends from the biases and character of what has been collected and how it has been catalogued. (And bear in mind I’m only visualising the 100 most saturated objects for a given search.) For example, these green and purple Japanese prints of irises are clearly part of a set rather than indicating some colour trend around 1910. Using the images themselves as data points is helpful for diagnosing this.

The same purple-green visualisation demonstrates how the tool can connect artefacts across time, in this case by similar colour scheme/design:

   
(Left) Frieze (USA), 1890–1910; Manufactured by Hobbs, Benton & Heath, (right) Sidewall, Anemone, 1960–66; Designed by Phoebe Hyde 

The tool surfaces colours, used in a particular material, that are strongly attached to design types. For example, these vivid blue and white English late-18th-century ceramic buttons/medallions:


(From left) Medallion (England), late 18th century, stoneware; Medallion (England), late 18th century, stoneware; Button (England), late 18th century, stoneware

Blue and white ceramics manufactured in the Netherlands in the late 17th and early 18th centuries:

  
(From left) Plate (Netherlands), 1675–1725, tin-glazed earthenware; Plaque (Netherlands), 1675–1725, tin-glazed earthenware; Obelisk (Netherlands), ca. 1700–25, tin-glazed earthenware

And French red and white textiles in the late 18th/ early 19th century:


(From left) Textile (France), late 18th century, cotton; Textile (France), ca. 1850, cotton; Textile (France), 18th century, cotton

Visualising blue-yellow shows more saturated colour from the mid-19th century onwards. Is this signalling changing fashion, or the availability of new synthetic dyes/pigments? Can we connect the more saturated harmonies in designs from the mid-1800s with Chevreul’s influential text from the time, ‘The Law of Simultaneous Colour Contrast’, which describes how colour harmony can be used to create a more vibrant effect? Then again, a number of the earlier objects are textiles, whose colours will have faded over time. Possibly a combination of these factors is at play here.

User evaluations
While I’ve discussed historical trends and the Cooper Hewitt collection, I’m also interested in how others might use a tool like this in their own projects, and with other collections. I conducted a number of interviews around this tool design with history of design students and colour history specialists, exploring their impressions of it and if/how they might use it in their own work. (I was also interested to talk with designers about using a tool like this for design inspiration, but struggled with recruiting!)

The history of design students (Masters students in History of Design from the Royal College of Art/Victoria & Albert Museum programme) discussed examples from their own work where such a tool could be useful. Example projects included tracing the use of blue through time in anti-vaccination movement posters to convey trust, or pink clothing in the history of women’s protest movements. In both these examples a hue, rather than a more specific colour, was of interest. Out of these conversations, the most useful features to add would be a filter by object type and a way to narrow down the time period.

For the specialist colour audience, though, this tool design has some issues. Not least because I do not know how accurately the photos represent the true colours of the objects (what lighting conditions the photographs were taken in, whether they have been retouched, etc.). While the overall extracted colours seem generally good, they may not be precise enough for some. The control in searching by colour – only by hue – may also be too limited for some needs.

Seeing is believing?
Colour data is computationally extracted, in contrast with manually added metadata. Do these different cases require different considerations for designing visualisations?

In this post, I’ve used arguments like it ‘looked better’, or the results were ‘more satisfying’, to explain my design decision-making. Working with colour data I knew had errors, I was more comfortable adjusting parameters in my search queries and editing obviously wrong colour data to return what looked ‘better’ to me. For colour, you can immediately see when images appearing in the visualisation don’t match. (Though, of course, looking at the visualised results will not tell you if there are absent items.) In interviews, I asked whether this massaging of the data to produce more satisfying results bothered interviewees; I was often told it didn’t, but that they would worry if obvious errors appeared in the visualisation.

While prototyping the design in conversation with curators at Cooper Hewitt, we discussed the possibility of different versions of this tool: offering more control in search, without massaged parameters, for in-depth researchers. But there is also value in visually satisfying results. As a curator expressed it: “We have a large number of professional designers and design students who come here … Just seeing beautiful examples of how people have used particular colour schemes is research. So the visually satisfying… seeing the most compelling works has a value as well. For the professional designers who say, ‘wow, this is really incredible use of this colour scheme. I want to share this with my students.’”

‘Dive into Color’ has since been exhibited in the London Design Festival 2018, and will hopefully go online at some point. Any feedback is very welcome: olivia.fletcher-vane@network.rca.ac.uk

Many thanks to Cooper Hewitt for their help with this project: especially Pamela Horn, Jennifer Cohlman Bracchi, Susan Brown and the technical team for getting ‘Dive into Color’ up and running in the galleries. Thanks to Neil Parkinson who showed me the Colour Reference Library at RCA, to Dr Alexandra Loske, Patrick Baty and RCA students for their thoughts, and to Jonny Jiang for help with the final UI design. And thanks to Stephen Boyd Davis for his continued help and support!

 

Exploring the Cooper Hewitt collection with timelines and tags: guest post by Olivia Vane

‘Black & white’ timeline detail, Cooper Hewitt data

A physical museum is itself a sort of data set — an aggregation of the micro in order to glimpse the macro. One vase means little on its own, beyond perhaps illustrating a scene from daily life. But together with its contemporaries, it means the contours of a civilization. And when juxtaposed against all vases, it helps create a first-hand account of the history of the world.
From ‘An Excavation Of One Of The World’s Greatest Art Collections’

“The ability to draw on historic examples from various cultures, to access forgotten techniques and ideas and juxtapose them with contemporary works, creates provocative dialogues and amplifies the historic continuum. This range is an asset few museums have or utilize and provides a continuing source of inspiration to contemporary viewers and designers.”
From ‘Making Design: Cooper Hewitt, Smithsonian Design Museum Collection’ p.28

Guest post by Olivia Vane

I’m 4 months into a 5-month fellowship at Cooper Hewitt working with their digitised collection. I’m normally based in London where I’m a PhD student in Innovation Design Engineering at the Royal College of Art supervised by Stephen Boyd Davis, Professor of Design Research. My PhD topic is designing and building interactive timelines for exploring cultural data (digitised museum, archive and library collections). And, in London, I have been working with partners at the V&A, the Wellcome Library and the Science Museum.

The key issue in my PhD is how we ‘make sense’ of history using interactive diagrams. This is partly about visualisation of things we already know in order to communicate them to others. But it is also about visual analytics – using visuals for knowledge discovery. I’m particularly interested in what connects objects to one another, across time and through time.

I am very fortunate to be spending time at Cooper Hewitt as they have digitised their entire collection, more than 200,000 objects, and it is publicly available through an API. The museum is also known for its pioneering work in digital engagement with visitors and technical innovations in the galleries. It is a privilege to be able to draw on the curatorial, historical and digital expertise of the staff around me here for developing and evaluating my designs.

As I began exploring the collection API, I noticed many of the object records had ‘tags’ applied to them (like ‘birds’, ‘black & white’, ‘coffee and tea drinking’, ‘architecture’, ‘symmetry’ or ‘overlapping’). These tags connect diverse objects from across the collection: they represent themes that extend over time and across the different museum departments. This tagging interested me because it seemed to offer different paths through the data around shape, form, style, texture, motif, colour, function or environment. (It’s similar to the way users on platforms like Pinterest group images into ‘boards’ around different ideas). An object can have many tags applied to it suggesting different ways to look at it, and different contexts to place it in.

Where do these tags come from? Here, the tags are chosen and applied by the museum when objects are included in an exhibition. They provide a variety of ways to think about an object, highlighting different characteristics, and purposely offer a contrasting approach to more scholarly descriptive information. The tags are used to power recommendation systems on the museum collection website and applications in the galleries. They constitute both personal and institutional interpretation of the collection, and situate each item in a multi-dimensional set of contexts.


Some examples of tags and tagged objects in the Cooper Hewitt collection

I was interested to trace these themes over the collection and, since objects often have multiple tags, to explore what it would be like to situate or view each object through these various lenses.

The temporal dimension is important for identifying meaningful connections between items in cultural collections, so my first thoughts were to map tagged objects by date.

I’m working on a prototype interface that allows users to browse in a visually rich way through the collection by tags. A user starts with one object image and a list of the tags that apply to that object. They may be interested to see what other objects in the collection share a given tag and how the starting image sits in each of those contexts. When they click a tag, a timeline visualisation is generated of images of the other objects sharing that tag – arranged by date. The user can then click on further tags, to generate new timeline visualisations around the same starting image, viewing that image against contrasting historical narratives. And if they see a different image that interests them in one of these timelines, they can click on that image making it the new central image with a new list of tags through which to generate timelines and further dig into the collection. By skipping from image to image and tag to tag, it’s easy to get absorbed in exploring the dataset this way; the browsing can be undirected and doesn’t require a familiarity with the dataset.


‘Coffee and tea drinking’ timeline: designs in the collection stretch from 1700 to the present with a great diversity of forms and styles, elaborate and minimal.

‘Water’ timeline. Here there are many different ways of thinking about water: images of garden plans with fountains and lakes from the 16th–18th century, or modern interventions for accessing and cleaning water in developing countries. Contrasting representations (landscape painting to abstracted pattern) and functions (drinking to boating) stretch between.

‘Water’ timeline, detail


‘Space’ timeline: 1960s ‘space age’ souvenirs (Soviet and American) precede modern telescope imaging. And a 19th-century telescope reminds us of the long history of mankind’s interest in space.

I’m plotting the object images themselves as data points so users can easily make visual connections between them and observe trends over time (for instance in how an idea is visually represented or embodied in objects, or the types of objects present at different points in time). The images are arranged without overlapping, but in an irregular way. I hoped to emulate a densely packed art gallery wall or mood board to encourage these visual connections. Since the tags are subjective and haven’t been applied across the whole collection, I also felt this layout would encourage users to view the data in a more qualitative way.


Yale Center for British Art: Long Gallery, image credit Richard Caspole, YCBA & Elizabeth Felicella, Esto

Moodboard, image credit ERRE

Dealing with dates

How to work with curatorial dating?

While most of the post-1800 objects in the dataset have a date/date span expressed numerically, pre-1800 objects often only have date information as it would appear on a label: for example ‘Created before 1870s’, ‘late 19th–early 20th century’, ‘ca. 1850’ or ‘2012–present’. My colleagues at the Royal College of Art have previously written about the challenges of visualising temporal data from cultural collections (Davis, S.B. and Kräutli, F., 2015. The Idea and Image of Historical Time: Interactions between Design and Digital Humanities. Visible Language, 49(3), p.101).

In order to process this data computationally, I translated the label date text to numbers using the yearrange library (which is written for working with curatorial date language). This library works by converting, for example, ‘late 18th century’ to ‘start: 1775, end: 1799’. For my purposes this seems to work well, though I am unsure how to deal with some cases (my current handling is sketched in code after the list):

  • How should I deal with objects whose date is ‘circa X’ or ‘ca. X’ etc.? At the moment I’m just crudely extending the date span by ±20 years.
  • How should I deal with ‘before X’? How much ‘before’ does that mean? I’m currently just using X as the date in this case.
  • The library does not translate BC dates (though I could make adjustments to the code to enable this…). I am just excluding these at the moment.
  • There are some very old objects in the Cooper Hewitt collection for example ‘1.85 million years old’, ‘ca. 2000-1595 BCE’ and ‘300,000 years old’. These will create problems if I want to include them on a uniformly scaled timeline! Since these are rare cases, I’m excluding them at the moment.
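
Pulling those rules together, here is a condensed sketch of my current handling. The yearrange call and its attribute names are assumptions for illustration; check the library’s actual API before copying this:

```python
import re
from yearrange import parse  # hypothetical import path; see the library's docs

CIRCA_PAD = 20  # the crude +/- 20-year padding for 'ca.' dates mentioned above

def label_to_span(label):
    """Turn a curatorial date label into a (start, end) year pair, or None."""
    if re.search(r'\bBCE?\b|years old', label):
        return None                      # BC dates and very old objects excluded for now
    span = parse(label)                  # e.g. 'late 18th century' -> start 1775, end 1799
    start, end = span.start, span.end    # attribute names assumed
    if re.match(r'\s*(circa|ca\.?)\s', label, re.I):
        start, end = start - CIRCA_PAD, end + CIRCA_PAD
    return start, end
```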

Skewing the timeline scale

The Cooper Hewitt collection is skewed towards objects dating post-1800, so to even out image distribution over the timeline I am using a power scale. Some tags, however – such as ‘neoclassical’ or ‘art nouveau’ – have a strong temporal component, and the power scale fails to even out image distribution in these cases.
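
A minimal illustration of the idea, with made-up domain and pixel values (the prototype itself presumably used something like d3.scalePow):

```python
# Power-scale sketch: an exponent above 1 compresses the sparse early
# centuries and gives the crowded post-1800 region more of the axis.
def power_scale(year, domain=(1500, 2018), px=(0, 1000), exponent=2):
    t = (year - domain[0]) / (domain[1] - domain[0])  # normalise year to 0-1
    return px[0] + (t ** exponent) * (px[1] - px[0])

print(round(power_scale(1800)))  # -> 335: 1800 sits a third of the way along,
                                 # leaving two thirds of the axis for 1800-2018
```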

How are the images arranged?

My layout algorithm aims to separate images so that they are not overlapping, but still fairly closely packed. I am using a rule that images can be shifted horizontally to avoid overlaps so long as there is still some part of the image within its date span. Since images are large data markers, it is already not possible to read dates precisely from this timeline. And the aim here is for users to observe trends and relationships, rather than read off exact dates, so I felt it was not productive to worry too much about exact placement horizontally. (Also, this is perhaps an appropriate design feature here since dating cultural objects is often imprecise and/or uncertain anyway). This way the images are quite tightly packed, but don’t stray too far from their dates.
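
A much-simplified sketch of that rule, one-dimensional only (ignoring the vertical packing) and with my own field names:

```python
def layout_x(images, scale, gap=4):
    """Greedy sketch: start at the centre of each image's date span and nudge
    sideways by growing offsets until it clears everything already placed,
    rejecting nudges that push the image entirely outside its span."""
    placed = []
    for img in images:  # img: dict with 'start'/'end' years and pixel width 'w'
        lo, hi = scale(img['start']), scale(img['end'])
        for offset in sorted(range(-300, 301), key=abs):  # smallest shifts first
            x = (lo + hi) / 2 + offset                    # candidate centre position
            if x + img['w'] / 2 < lo or x - img['w'] / 2 > hi:
                continue  # keep at least part of the image within its date span
            if all(abs(x - q['x']) > (img['w'] + q['w']) / 2 + gap for q in placed):
                img['x'] = x
                placed.append(img)
                break
    return placed
```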

‘Personal environmental control’ timeline: a dry juxtaposition of these decorated fans against modern Nest thermostats.

‘Foliate’ timeline, detail

‘Squares’ timeline

I’ve also tried to spread images out within date spans, rather than just use the central point, to avoid misleading shapes forming (such as a group of objects dating 18th century forming a column at the midpoint, 1750).

Things to think about

Interface design

  • The layout algorithm slows when there are many (100 or more) images visualised. Is there a more efficient way to do this?
  • I’m considering rotating the design 90° for web-use; I anticipate users will be interested to scroll along time, and scrolling vertically may improve usability with a mouse.
  • Would a user be interested to see different timeline visualisations next to each other, to compare them?
  • It could be interesting to apply this interface to other ways of grouping objects such as type, colour, country or other descriptor.
  • I need to build in a back button, or some way to return to previously selected images. Maybe a search option for tags? Or a way to save images to return to?

Tags

  • This visualisation design relies on curator-applied tags and, therefore, would be difficult to apply to other datasets: might there be a way to automate part of this? Maybe using computer vision technologies?
  • Since objects are only tagged if they are featured in an exhibition, the interface misses many relevant objects in the collection when visualising a theme. For instance there are 23 objects tagged ‘Japanese’, but keyword searching the collection for ‘Japanese’ returns 453 objects. While the interface works well with the current quantities of images (up to about 100), what changes to the design would be needed to increase this number?
  • What about grouping tags together? There is no dictionary or hierarchy to them so some are very similar, for instance: ‘floral’, ‘floral bouquets’, ‘floral swag’, ‘flower’, ‘flowering vine’, and ‘flowers’. Though it can be interesting to see the subtle differences in how related tags have been applied. For instance: ‘biomorphic’ is more often applied to modern objects; ‘nature’ is generally applied to depictions of nature such as landscape paintings; while ‘organic’ is applied in a more abstract sense to describe objects’ form.

I’m at a stage where I’d like to get user feedback from a range of audiences (general and scholarly) to explore some of these questions.

This is very much a work in progress, and feedback is welcome! (olivia.fletcher-vane@network.rca.ac.uk to get in touch by email)

Redesigning Post-Purchase Touchpoints

We re-opened the museum with a “minimum viable product” for online ticket orders. Visitor-facing touchpoints like confirmation emails, eTicket PDFs and “thank you for your order” webpages were built to be simple and efficient. Once we put them to the test with real visitors, the room for improvement became obvious.

Here’s how we used staff feedback and designerly observation to iterate and improve upon 3 important touchpoints. The goal of this undertaking was to make things smoother for our front-of-house staff (who turned out to have quite a bit to juggle, given the new Pen and its backend complexities), and simpler for visitors (some of whom were confused by our system… how dare they!).

The original confirmation webpage was designed with visitors buying on mobile (perhaps even while en route to the museum) in mind:

screen shot of a webpage with order number and a barcode for each ticket.

The original “Thank You” webpage was stripped of information, with the idea of getting you through the front desk transaction as efficiently as possible.

The original confirmation email was a few lines of text:

Screen shot of an email confirming cooper hewitt ticket order

Made in a pre-opening vacuum without real visitors to test upon, the original confirmation email was more self-promotional than anticipatory of visitors’ needs.

The original PDF attached to this confirmation email was designed for visitors who like to print things out and have something on paper:

The original eTicket PDF had one page (one “ticket”) per visitor. The email went to the purchasing visitor’s inbox.

Over a few weeks of heavy visitor traffic (with about 20% of visitors buying advance tickets online), I sat behind the front desk staff to quietly observe a handful of transactions every day. I initiated my observation sessions knowing that we needed to make the front desk move smoother and faster, but I didn’t yet know which touchpoints/services/operations would need changing.

These 3 touchpoints stood out to me as something that needed re-addressing if we wanted to make the front desk run more smoothly. (My daily observations also led to many efficiency-boosting changes made to internal tools, IT concerns, staffing needs, signage, and more.) This experience has made me a big believer in quiet observation as a direct route to improving services and systems. “Conference room conjecture” is worth very little compared to real observations and listening-based chats with your public-facing staff.

My advice on Observing and Listening for service design:

  •  You may observe a staff person answer a question incorrectly, or a problem that you could resolve yourself on the spot. Don’t intervene, tempting as it might be! You’re not there to fix problems, you’re there to fix problem patterns. Your mission is long-term.
  • When chatting with staff, listen quietly and attentively. It’s OK if you can’t offer an instant fix. You may not have a magic wand, but listening with empathy is at least half as good.
  • Focus on building trust with the staff you are observing over a period of days or weeks, so they will become comfortable sharing bad news as easily as they share the good. Remind them repeatedly that your intention is to improve their daily work situation.
  • Remember it can be very intimidating to feel “interrogated” or “observed” by someone who is your direct/indirect superior. Make sure they know your questions are motivated by a spirit of service, not by “tattle-telling” to other staff that things might be going amiss. You will get more honesty, and thereby, better design insights.

Here are the observation-based insights that motivated our choices:

  • Visitors sometimes get confused by the barcodes. They think something has to be scanned after their visit in order for their pen diary to get “Saved” or “sent to their email.”
  • Because this collateral is called an “eTicket,” some visitors are marching right up to the gallery entrance with their “eTicket,” and bypassing the front desk. “I already bought my ticket, why do I have to wait on this line?”
  • Visitors don’t know what the Pen is, and explaining it takes several minutes, slowing down the line.
  • Visitors may not have great cell service in our lobby, and probably haven’t gotten the wifi working yet, so if their email attachment hasn’t pre-downloaded, this will slow everything down.
  • Front desk staff each have different ways of handling eTickets. Most staff ask for the order number verbally. A few staff take the printout or phone and scan the barcode, avoiding the need to re-print a ticket (this is how the barcode was intended to be used).
  • The diversity of collateral that visitors may bring to the transaction makes things more complicated for our staff. “Is my customer looking at a webpage, an email, or a PDF? Should I tell them to look for an order number, hand me a barcode, or open the attachment?”
two gentlemen at a large white desk in a dark room full of wood paneling. a third gentleman sits behind the desk.

For their own ease of use, most desk reps were initiating the transaction by asking: “What’s your Order number?” so we designed to accommodate that preference instead of working against it.

The ideas we cycled through:

  • A picture of the Pen with an “enticing” explanation of what it does might help offset the burden on the front desk to explain it all very quickly.
  • We thought one barcode per visitor displayed in a list might let us hold on to our original “paperless dream.” (The “paperless dream” entailed scanning each barcode and pairing immediately with pens, bypassing our CRM and house-printed tickets.) When we ran this idea by our colleagues at the desk, though, we learned quickly that this would be extraordinarily confusing for guests, who need to remember their personal URL (usually printed on the ticket) to access their post-visit diary. What if a group of 5 friends come together, will we put the burden on the visitor to remember which URL goes with which friend? Will they have to write it down, or forward around the ticket email with added whose-URL-is-whose notes? That’s too much of a burden on guests, who are already working to assimilate new information about our Pen, which has already buffeted their expectations (and tried their transaction-length-patience) about what to expect during a museum front desk experience.
printouts of an email confirming tickets with barcodes and giant pen scribbled "x" with handwritten pen notes

What seems like a good idea at your desk may not seem so smart after you’ve shown it around to ground-level users.

The current solution (after all, our work is never final):

screen shot of an email with lots of information about cafe, hours, map, the pen, and an image of museum interior and pen usage.

The order number is large and at the top of the email. It’s also in the subject line. Click this image to enlarge.

  • This solution makes the front desk staffer’s job simpler when a pre-order person arrives. It’s all about the order number. There is no more choice involved about whether to ask for the order number, or the barcode, or the purchaser’s name… or….
  • There is still a confirmation webpage, and it looks exactly like this.
  • There is no more PDF attachment to the email.
  • Since this is a “will-call” paradigm instead of an “eTicket” paradigm, we hope this solution will keep visitors from expecting that they can enter the museum directly without talking to a desk attendant first.
  • The order number is in the subject line, so if your email hasn’t fully downloaded, you won’t slow down the line.
  • The original idea was to save paper by letting a visitor’s PDF work as their ticket/URL reminder. The new approach, though it does involve re-printing tickets at the desk, may mean fewer user printouts, since we’re simply asking folks to “bring” their order number, not any printouts.

This is just one piece of an elaborate service design puzzle. More posts will be coming about other touchpoints we’ve created and re-designed based on observations made in the first months of running our new Pen service.

Making of: Design Dictionary Video Series

We often champion processes of iterative prototyping in our exhibitions and educational workshops about design. Practicing what we preach by actually adopting iterative prototyping workflows in-house is something we’ve been working on internally at Cooper Hewitt for the last few years.

In the 3.5 years that I’ve been here, I’ve observed some inspiring progress on this front. Here’s one story of iterative prototyping and inter-departmental collaboration in-house, this time for our new Design Dictionary web video series.

Design Dictionary is a 14-part video series that aims to demystify everything from tapestry weaving to 3D printing in a quick and highly visual way. With this project, we aimed not only to produce a fun and educationally valuable new video series, but also to shake up our internal workflow.

Content production isn’t the first thing you’d think of when discussing iterative prototyping workflows, but it’s just as useful for media production as it is for hardware, software, graphic design, and other more familiar design processes.

The origin of Design Dictionary traces back to a new monthly meeting series that was kicked off about two years ago. The purpose of the meetings was to get Education, Curatorial, and Digital staff in the same room to talk about the content being developed for our new permanent collection exhibition, Making Design. We wanted everything from the wall labels to the digital interactive experiences to really resonate with our various audiences. Though logistically clunkier and more challenging than allowing content development to happen in a small circle, big-ish monthly meetings held the promise of diverse points of view and the potential for unexpected and interesting ideas.

At one of these meetings, when talking about videos to accompany the exhibition, the curators and educators both expressed a desire to illustrate the various design techniques employed in our collection via video. It was noted that video of almost any technique is already available online, but since these videos are of varying quality, accuracy, and copyright allowances, it might be worth producing our own series.

I got the ball rolling by creating a list of techniques that would appear more than once in Making Design.

Then I collected a handful of similar videos online, to help center the conversation about project goals. Even the habitual “lurkers” on Basecamp were willing to chime in when it came to criticizing other orgs’ educational videos: “so boring!” “so dry!” they said. This was interesting, because as a media producer it can be hard to 1) get people to actually participate and submit their thoughts and 2) break it to someone that their idea for a new video is extremely boring.

Once we were critiquing *somebody else’s* educational videos, and not our own darling ideas, people seemed more able to see video content from a viewer’s perspective (impatient, wanting excitement) as opposed to a curator/educator’s perspective (fixated on detail, accuracy, thoroughness, less concerned with the viewer’s interests & attention span).

a green post it note with four goals written on it as follows: 1) express new brand (as personality/mood) 2) generate online buzz 3) help docents/visitors grasp techniques in gallery-fast (research opinions) 4) help us start thinking about content creation in an audience-centered, purposeful way

I kept this note taped to my screen as a reminder of the 4 project goals.

It is amazingly easy to get confused and lost mid-project if you don’t keep your goals close. This is why I clung tightly to the sticky note shown above. When everyone involved can agree on goals up-front, the project itself can shape-shift quite nicely and organically, but the goals stay firm. Stakeholders’ concerns can be evaluated against the goals, not against your org. hierarchy or any other such evil criteria.

Even with all the viewer-centric empathy in the world, it can still be hard to predict what your audience will like and dislike. Would a video about tapestry weaving get any views on YouTube? What about 3D printing?

Screen shot of a tweet that says: Last chance! Tell us which design techniques interest you most in this one-question survey: https://bit.ly/Museum4U

We asked our Twitter followers which techniques interest them most.

We created a quick survey on SurveyMonkey and blasted it out to our followers on Facebook and Twitter to gauge the temperature.

a list of design techniques, each with an orange bar showing percentage of people who voted for that technique.

Surveying our Twitter and Facebook fans with SurveyMonkey, to learn which techniques they’d be interested in learning more about.

We also hosted the same survey on Qualaroo, which pops up on our website. My hunch about what people would say was all wrong. We used these survey results to help choose which techniques would get a video.

By this point, it was mid-winter 2014, and our new brand from Pentagram was starting to get locked in. It was a good opportunity to play with the idea of expressing this new brand via video. What should the pacing and rhythm be like? How should animations feel? What kind of music should we use?

grid of various images, each with a caption, like a mood board or bulletin board.

Public mood-boarding with Pinterest.

Seb & I are fans of “Look Around You” and we liked the idea of a somewhat cheeky approach to the dreaded “educational video.” How about an educational video that (lovingly, artfully) mocks the very format of educational videos? I created a Pinterest board to help with the art direction. We couldn’t go too kitsch with the videos, however, because our new brand is pretty slick and that would have clashed.

Then I made a low-stakes, low-cost prototype, recycling footage from a previous project. I sent this out to the curatorial/education team for feedback using Basecamp.

In retrospect I can now see that this video is awful. But at the time, it seemed pretty good to me. This is why we prototype, people!

With feedback from colleagues via Basecamp (less book, more live action, more prominent type), I made the next prototype:

I got mixed reactions about the new typography. Some found it distracting. And I was still getting a lot of mixed reactions to the book. So here was my third pass:

I was starting to reach out to artists and designers to lend their time to the shoots, and was cycling that fresh footage into the project, and cycling the new video drafts back to the group for feedback. Partially because we were on a deadline and partially because it works well in iterative projects, we didn’t wait for closure on step 1 before moving on to step 2.

a pile of scrap papers, each with different lists saying things like: "copy pattern, cover pattern with contact paper, mount pattern" or "embroidery steps: 1) cut fabric 2) stretch main fabric onto hoop 3) cut thread" et cetera

I got a crash course in 14 different techniques.

Every new shoot presented a new chance to test the look and feel and get reactions from my colleagues. Here was a video where I tried my own hand at graphical “annotations” (dovetail, interlock, slit):

By this point my prototype was refined enough to share with Pentagram, who were actively working on our digital collateral. I asked them to style a typographic solution for the series, which could serve as the basis for other museum videos as well. Whenever you can provide a designer with real content, do it, because it’s so much better than using dummy content. Dummy content is soft and easy, allowing itself to be styled in a way that looks good, but meets no real requirements when put through a real stress test (long words, bulky text, realistic quantities of donor credits, real stakeholders wanting their interests represented prominently).

Here is a revised video that takes Pentagram’s new, crisp typography into account:

This got very good feedback from education and curatorial. And I liked it too. Yay.

All in all, it took about 8 rounds of revision to get from the first cruddy prototype to the final polished result.

And here are the final versions.

Designing the responsive footer

We now have a responsive main website. To a degree.

Like everything, it is a stopgap measure before we do a full overhaul of Cooper-Hewitt’s online presence – timed to go live before we reopen our main campus (2014).

With the proportion of mobile traffic to our web properties increasing every month, we couldn’t wait for a full redesign to implement a mobile-friendly version of the site. So we did some tweaking and, with the help of Orion, pulled responsiveness into the scope of a backend migration from Drupal 6 to Drupal 7.

Katie did the wireframing and design of the new funky fat footer – which, you’ll notice, changes arrangement as it switches between enormous (desktop), large (tablet) and mini (mobile) modes.

Here she is explaining the what and why.

Why did you do paper prototypes for the responsive design?

A few months ago I was working on a design for the Arts Achieve website. I showed my screen to Bill, our museum director, to get his thoughts. Bill is a former industrial designer and one of the pioneers of interaction design. The first thing he said was “ok, let’s print out a screenshot.” He then drew his suggestions right onto the printed page. We didn’t really look at the screen much during the conversation. Writing directly onto the paper was more immediate and direct, and made his suggestions feel very possible to me. Looking at a site design on a screen makes me feel like I’m looking at something final, even if it’s just a mockup. The same thing printed on paper seems more malleable. It’s a mind trick!

Paper also lets me print out many versions and compare them side-by-side (you can’t do that on a single monitor).

Paper also ALSO lets me walk around showing my print-outs to others and ask for rapid reactions without pulling everyone into a screen hover session. This is a simple body/communication thing: when everyone is facing toward a screen to talk about a design, you’re not in a natural conversational position. Everyone’s face and body is oriented toward the screen. I can’t see people’s faces and expressions unless I twist around. When you’re just holding a paper, and there’s no screen, it’s more like a natural conversation.

Post-its stuck to the monitor as a way to quickly agree on our initial ideas

Why do some of the elements move around in the responsive footer? (why do the icons and signups move)

They move around to be graphically pleasing. And to make sure the stuff we wanted people to notice and click on is most prominent.

We had a strong desire for the social media icons to be really prominent. So they’re front and center in the monitor-width design (940px width). They’re on the right hand side in the tablet-size design (700px wide) and in the mobile-size design (365px wide) because I think it looks sharpest when the rectilinear components are left-justified and the round stuff is on the right.

What were the challenges for the responsive design?

We had a really clear hierarchy in mind from the beginning (we knew what we really wanted people to notice and click) so that eliminated a lot of complexity. The only challenge was how to serve that hierarchy cleanly.

One challenge was the footer doesn’t always graphically harmonize with the body of the page, because the page content is always changing.

Another challenge was getting the latest tweet to be clear and legible, but still appear quiet and ambient and classy.

What were some of the things you are going to be looking out for as the site goes live?

I want to see how the footer harmonizes with our varying page body content and then decide if it makes sense to change the footer to match the body, or re-style the body content to sit better atop the footer.

I wonder if people on Twitter will start saying stuff @Cooperhewitt just because they know they’ll get a few minutes of fame on our homepage. That participation could be awesome or spammy. We’ll see.

I’m really excited to see the analytics. I want to see if this new layout really does boost our newsletter signup and social media participation and everything. It will be super gratifying if it does.

Of course, we’ll iterate and revise based on all the analytics and feedback.