Category Archives: Design

Making ‘Dive into Color’

Guest post by Olivia Vane

‘Dive into Color’ is an interactive timeline for exploring the Cooper Hewitt collection by colour/colour harmony and time. It is exhibited in ‘Saturated: The Allure and Science of Color’ 11 May 2018 – 13 Jan 2019.

After spending time at Cooper Hewitt last year on a fellowship, I returned to London, where I’m a PhD student in Innovation Design Engineering at the Royal College of Art (RCA), supervised by Professor Stephen Boyd Davis. At Cooper Hewitt, I developed a prototype timeline tool for visualising the museum collection by tags.

This post describes further work on that prototype, shifting the tool to exploring the collection by colour. As a curator explained: “[visualising by] colour, I think, is useful for the purposes of the study of the taste for different colours, but it’s also a more interesting exercise for the public just to be able to do and get pleasure out of.” Colour is enjoyable – it’s eye-catching and vibrant – and it offers a visual, intuitive way to explore a digitised collection without needing specialist knowledge. With a design collection like Cooper Hewitt’s, tracing colour through history is also interesting for looking at fashions and innovation in colour technology.

I’ve been asked a few times recently what my process is for designing visualisations. So in this post I’m going to step through the early prototypes and retrace my design decisions. Along the way, I’ll talk through practical points for working with colour (and colour harmonies!) in collection data, and working between digital and artist colour systems.

Previous colour-collections visualisations
Colour has previously been used both as a facet for search and for visualising collections. Geoff Hinchcliffe’s ‘Tate Explorer’ offers colour as a search facet paired with a timeline. This prototype from the Swedish National Heritage Board (write-up forthcoming) combines colours and tags for exploring a fashion collection. Richard Barrett-Small’s ‘ColourLens’ searches over Rijksmuseum and Walters Art Museum data by colour with a graphic for each item visualising its colour proportions. And Google Arts & Culture’s ‘Art Palette’ is a search engine that finds artworks based on a chosen colour palette.

Collections visualised by colour include the Library of Congress, where Laura Wrubel created this tool for overviewing the colour palettes of objects over a collection and Jer Thorp visualised the colour names present in the titles of works. Also using Cooper Hewitt data, Rubén Abad created this visualisation of the colours present by decade in Cooper Hewitt’s objects. Lev Manovich has visualised artworks, for example Mondrian and Rothko paintings, by colour characteristics including hue and saturation. Everardo Reyes visualised Paul Klee’s paintings by hue. And Brian Foo’s visualisation of the New York Public Library digitised collection has an option to organise items by colour.

I was interested to explore colour alongside the time dimension. And since Cooper Hewitt was preparing for an exhibition, ‘Saturated’, focusing on colour theory and design, I was intrigued to see if I could trace colour harmonies across the collection.

Colour harmonies are combinations of colours that are pleasing to the eye. The relative positions of colours in a colour wheel can be used to describe different harmonies like complementary colours (opposites on the colour wheel), or analogous colours (neighbours on the colour wheel). Artists and designers create different visual effects and contrasts with different harmony types.

Colour harmony examples, image from studiobinder

Extracting colours across the Cooper Hewitt collection
It’s already possible to search by colour on Cooper Hewitt’s collection site. Colour data was extracted using Giv Parvaneh’s great tool RoyGBiv (described in this Cooper Hewitt Labs post). Roughly, RoyGBiv works by checking the colour value of each pixel in an image, clustering colour values that are similar enough to be considered the same and returning up to 5 dominant colours in an image.
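RoyGBiv itself is more sophisticated, but the core idea – cluster similar pixel values together and rank the clusters by size – can be sketched in a few lines of Python using Pillow’s median-cut quantisation (an approximation for illustration, not RoyGBiv’s actual code):

    from PIL import Image

    def dominant_colours(path, n=5):
        """Approximate the n dominant colours in an image, as hex codes."""
        im = Image.open(path).convert('RGB')
        # Median-cut quantisation clusters similar pixel values together
        quantised = im.quantize(colors=n)
        palette = quantised.getpalette()
        # getcolors() returns (pixel_count, palette_index) pairs
        ranked = sorted(quantised.getcolors(), reverse=True)
        colours = []
        for count, idx in ranked:
            r, g, b = palette[idx * 3:idx * 3 + 3]
            colours.append('#{:02x}{:02x}{:02x}'.format(r, g, b))
        return colours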

The colours extracted from Cooper Hewitt’s collection with RoyGBiv are good on the whole, but errors sometimes occur. The background colour is sometimes picked up. The effect of light and shadow on a 3D object can introduce multiple, illusory colours:

As always there are quirks working with digitised collections, like these Dutch tiles which had coloured stickers on them when they were photographed:

Or lace photographed against a dark background for contrast:

Since the possible number of unique colours extracted across the collection is huge, searching by colour on the Cooper Hewitt website is simplified by snapping extracted colours to the closest value in a standardised palette (the default is the CSS4 web colour palette, but the CSS3 and Crayola palettes are also options). On the Cooper Hewitt website, you can search the collection by 116 CSS4 colours. Both the original and ‘snapped to’ colour palettes are available in the Cooper Hewitt data – all stored as hex codes (six hexadecimal digits representing the levels of red, green and blue).
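Snapping an extracted colour to a standardised palette just means finding the nearest palette entry. A minimal sketch (using squared distance in RGB space; a production version might work in a perceptual colour space instead):

    def hex_to_rgb(hex_code):
        """'#ff4500' -> (255, 69, 0)"""
        h = hex_code.lstrip('#')
        return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

    def snap_to_palette(hex_code, palette):
        """Return the name of the palette colour closest to hex_code.
        `palette` maps colour names to hex codes, e.g. a CSS4 dict."""
        target = hex_to_rgb(hex_code)
        def distance(name):
            return sum((a - b) ** 2 for a, b in zip(target, hex_to_rgb(palette[name])))
        return min(palette, key=distance)

    # snap_to_palette('#fe4301', {'orangered': '#ff4500', 'steelblue': '#4682b4'})
    # -> 'orangered'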

Prototyping
As a first step, I adapted my code to visualise collection items matching a CSS4 colour on a timeline (see visualisations below).

Although Cooper Hewitt has an API (an Application Programming Interface: a way for someone to make use of Cooper Hewitt’s data through a set of pre-defined requests made over the web), there is no method for returning all the objects matching a colour. Instead, I used collection data Cooper Hewitt had put on GitHub – an argument for institutions to offer both!

‘Orangered’

‘Steelblue’

‘Olivedrab’

I then started exploring how I might visualise objects featuring a colour harmony, first trying complementary colours (opposites on the colour wheel).

I initially tried to do this by matching a chosen CSS4 colour with the nearest CSS4 colour of opposite hue. The HSL and HSV (hue, saturation, lightness/value) colour systems define hue as an angle round a circle (0–360°), so I inverted hues by converting hex codes to HSL. The visualised results were unsatisfying though, as the search often failed to find any matches. That’s not to say there was a lack of objects with complementary colours, but that my search was too precise (and artificially precise: which CSS4 palette colours count as complementary is fuzzy, and the colour data is imprecise anyway – e.g. the illusory colours extracted for 3D objects).
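The inversion itself is straightforward with Python’s standard library; colorsys works in unit-interval coordinates, so 180° is 0.5 (a sketch of the approach, not my exact code):

    import colorsys

    def complementary_hex(hex_code):
        """Rotate a colour's hue by 180 degrees, keeping lightness and saturation."""
        h = hex_code.lstrip('#')
        r, g, b = (int(h[i:i + 2], 16) / 255 for i in (0, 2, 4))
        hue, lightness, saturation = colorsys.rgb_to_hls(r, g, b)
        hue = (hue + 0.5) % 1.0  # 180 degrees in colorsys' 0-1 hue scale
        r, g, b = colorsys.hls_to_rgb(hue, lightness, saturation)
        return '#{:02x}{:02x}{:02x}'.format(round(r * 255), round(g * 255), round(b * 255))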

I tried extending the reach of my search to matching several colours close to the inverted hue, but it felt very frustrating not to have a visual reference to the range of colours in a region and what colours were being searched over.

So I started experimenting with using a colour wheel input as a way to pick colour combinations and simultaneously see possible hue relations. I first tried mapping the colours from the standardised palettes by HSL round a circle.

‘Snapped to’ colours in the Cooper Hewitt collection. CSS4 (left) and Crayola (right) palettes mapped by hue (in HSL). Angle = hue, radius = lightness.

And to make it easier to see the possible colours, I wrote some code to map the CSS4 palette colours to a wheel.

CSS4 palette colours → mapped round a wheel. Hue (HSL) = angle, ordered by lightness
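The mapping is a simple polar transformation – hue becomes the angle, lightness the radius – along these lines (an illustrative sketch):

    import colorsys
    import math

    def wheel_position(hex_code):
        """Place a colour on a unit wheel: angle = hue, radius = lightness."""
        h = hex_code.lstrip('#')
        r, g, b = (int(h[i:i + 2], 16) / 255 for i in (0, 2, 4))
        hue, lightness, _ = colorsys.rgb_to_hls(r, g, b)
        angle = hue * 2 * math.pi
        return lightness * math.cos(angle), lightness * math.sin(angle)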

I realised at this point, though, that the resulting design doesn’t match a typical artist’s/pigment colour wheel (which has red, yellow, blue – RYB – primary colours). HSL is a simple transformation of RGB colour space, and therefore the wheel has red, green, blue primaries. If this colour wheel is used to search over design artefacts, surely it would be more appropriate to use a design closer to the norm for artists and designers? (These in-depth articles by David Briggs – part 1, part 2 – explain the differences between traditional and modern colour theory, and colour training for artists).

There is no ‘correct’ colour wheel to adopt, but converting my HSL colour wheel to something closer to an artist’s version (using code from Ben Knight’s implementation of Adobe’s ‘Kuler’ colour wheel) felt like a reasonable compromise here.

(Left) Colours mapped by hue (HSL). Angle = hue, radius = lightness. (Right) Colour mapping adjusted to more closely resemble an artist’s colour wheel.
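The adjustment boils down to a piecewise remapping of hue, stretching and squeezing ranges so that yellow lands opposite purple rather than opposite blue. Something like the following captures the spirit of it (the breakpoints here are illustrative, not the values from Ben Knight’s implementation):

    # (RGB-wheel hue, artist-wheel hue) pairs in degrees: red stays at 0,
    # yellow moves from 60 to 120, blue stays at 240. Illustrative values only.
    BREAKPOINTS = [(0, 0), (60, 120), (120, 180), (240, 240), (300, 300), (360, 360)]

    def artist_hue(hue):
        """Piecewise-linearly remap an HSL hue (degrees) towards an RYB-style wheel."""
        for (x0, y0), (x1, y1) in zip(BREAKPOINTS, BREAKPOINTS[1:]):
            if x0 <= hue <= x1:
                t = (hue - x0) / (x1 - x0)
                return y0 + t * (y1 - y0)
        return hue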

Using this colour wheel as a guide and an input, I could see and choose which colours to query. By searching over multiple colours, the visualised results were better. (In the images below, white and black borders around tiles in the colour wheel indicate the searched-over colour combinations):

Purple against olive timeline

Orangered against cyan/blue timeline

Querying against colour data in HSV
While the results looked better with this prototype, the user interface was a mess and complicated to use. And the search query was not excluding objects that featured other colours in addition to the searched colour combination.

Sticking with the CSS4 palette was greatly complicating the task, so I abandoned using it. I converted all the original extracted colours (not ‘snapped’) from hex codes to HSV and created my own Elasticsearch index with the HSV colours stored as a nested datatype. This way I can search over a hue range with thresholds on saturation and value; exclude objects that also feature other hues; and simply define more complex colour harmony searches (e.g. analogous, triadic, quadratic and split complementary).
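As a rough sketch of what those queries look like (using the elasticsearch-py client; the index and field names here are illustrative, not my production setup), a complementary search combines two nested hue-range clauses:

    from elasticsearch import Elasticsearch

    es = Elasticsearch()

    def hue_clause(hue, spread=15, min_sat=0.3, min_val=0.3):
        """Match objects with a colour within `spread` degrees of `hue`,
        thresholding saturation/value to skip washed-out colours.
        (Wrap-around at 0/360 would need the range split in two; omitted here.)"""
        return {"nested": {
            "path": "colours",
            "query": {"bool": {"must": [
                {"range": {"colours.h": {"gte": hue - spread, "lte": hue + spread}}},
                {"range": {"colours.s": {"gte": min_sat}}},
                {"range": {"colours.v": {"gte": min_val}}},
            ]}},
        }}

    hue = 30  # orange
    complementary = {"query": {"bool": {"must": [
        hue_clause(hue),
        hue_clause((hue + 180) % 360),
    ]}}}
    results = es.search(index="objects", body=complementary)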

Different colour harmonies tried in prototyping

Colour wheel graphic for objects
As a by-product, I realised I could repurpose my code to map individual object palettes round a colour wheel too. Thus, you get a compact graphic describing the colour relationships present in a single design. This is a nice example of the serendipity of designing, where you identify new possibilities as a result of seeing what you have already made.



(Above) Object with colour palette, (below) palette mapped to a wheel: easy to see complementary harmony


Colour wheel graphic examples

I adapted a simple artist’s (RYB) colour wheel to use as an input and tested the prototype with visitors at Royal College of Art’s January 2018 ‘Work in Progress’ show.

Testing the prototype with visitors at Royal College of Art’s Jan 2018 ‘Work in Progress’ show

In order to avoid reducing the size of the images (so it’s still possible to see what the objects are), I’ve capped the number of visualised objects to the 100 most saturated in colour.

There were few hits for the more complex harmonies (quadratic, tetradic, split complementary) and the results felt less convincing. I had widened the hue range to search over in order to increase the small number of hits, so the results were less visually cohesive anyway. And, in conversations with the museum curators, we decided to drop these more complex harmonies from the interface.

As mentioned earlier, there are some errors in the colour data. At this stage, since this setup only allows a fixed set of possible searches, with repeatable results, it was worth it for me to do some manual editing of the colour data to remove obvious errors.

Final interface design
For the final (more polished!) interface design, which is now on display, I settled on a colour wheel input inspired by this Hilaire Hiler design in Cooper Hewitt’s collection. (This wheel actually has 4 ‘psychological’ colour primaries and features 30 hues.) The interface has 4 harmony options: monochromatic, complementary, analogous and spectrum (a rainbow colour option).

Color wheel picker, inspired by Hiler’s design, used in ‘Dive into Color’


‘Dive into Color’ installed at Cooper Hewitt. Photo credit: Caroline Koh Smith

What does the tool reveal?
Visualising the Cooper Hewitt data this way gives some sense of when colours appear in time. There are no results for purple pre-19th century – perhaps because of the difficulty and expense of producing purple before synthetic dyes/pigments were developed in the 19th century?

Though, as is often the case when interpreting collection visualisations, it is difficult to disentangle historical trends from the biases and character of what has been collected and how it has been catalogued. (And bear in mind I’m only visualising the 100 objects most saturated in colour for a search.) For example, these green and purple Japanese prints of irises are clearly part of a set rather than indicating some colour trend around 1910. Using the images themselves as data points is helpful for diagnosing this.

The same purple-green visualisation demonstrates how the tool can connect artefacts across time, in this case by similar colour scheme/design:

(Left) Frieze (USA), 1890–1910; Manufactured by Hobbs, Benton & Heath, (right) Sidewall, Anemone, 1960–66; Designed by Phoebe Hyde 

The tool surfaces colours, used in a particular material, that are strongly attached to design types. For example these English vivid blue and white late-18th century ceramic buttons/medallions:


(From left) Medallion (England), late 18th century, stoneware; Medallion (England), late 18th century, stoneware; Button (England), late 18th century, stoneware

Blue and white ceramics manufactured in the Netherlands in the late 17th/early 18th century:

  
(From left) Plate (Netherlands), 1675–1725, tin-glazed earthenware; Plaque (Netherlands), 1675–1725, tin-glazed earthenware; Obelisk (Netherlands), ca. 1700–25, tin-glazed earthenware

And French red and white textiles in the late 18th/ early 19th century:


(From left) Textile (France), late 18th century, cotton; Textile (France), ca. 1850, cotton; Textile (France), 18th century, cotton

Visualising blue-yellow shows more saturated colour from mid-19th century onwards. Is this signalling changing fashion, or the availability of new synthetic dyes/pigments? Can we connect the more saturated harmony in designs from the mid-1800s with Chevreul’s influential text from the time ‘The Law of Simultaneous Colour Contrast’, describing how colour harmony can be used to create a more vibrant effect? Though a number of the earlier objects are textiles and the colours will have faded over time. Possibly a combination of these factors is at play here.

User evaluations
While I’ve discussed historical trends and the Cooper Hewitt collection, I’m also interested in how others might use a tool like this in their own projects, and with other collections. I conducted a number of interviews around this tool design with history of design students and colour history specialists, exploring their impressions of it and if/how they might use it in their own work. (I was also interested to talk with designers about using a tool like this for design inspiration, but struggled with recruiting!)

The history of design students (Masters students in History of Design from the Royal College of Art/Victoria & Albert Museum programme) discussed examples from their own work where such a tool could be useful. Example projects included tracing the use of blue through time in anti-vaccination movement posters to convey trust, or pink clothing in the history of women’s protest movements. In both these examples a hue, rather than a more specific colour, was of interest. Out of these conversations, the most useful features to add would be a filter by object type and a way to narrow down the time period.

For the specialist colour audience, though, this tool design has some issues. Not least because I do not know how accurately the photos represent the true colours of the objects (what lighting conditions the photographs were taken in, if they have been retouched etc.). While the overall extracted colours seem generally good, they may not be precise enough for some. The control in searching by colour – only by hue – may also be too limited for some needs.

Seeing is believing?
Colour data is computationally extracted, in contrast with manually added metadata. Do these different cases require different considerations for designing visualisations?

In this post, I’ve used arguments like it ‘looked better’, or the results were ‘more satisfying’ to explain my design decision-making. Working with colour data I knew had errors, I was more comfortable adjusting parameters in my search queries and editing obviously wrong colour data to return what looked ‘better’ to me. For colour, you can immediately see when images appearing in the visualisation don’t match. (Though, of course, looking at the visualised results will not tell you if there are absent items.) In interviews, I asked whether this data massaging to produce more satisfying results bothered interviewees; I was often told that it didn’t, but that they’d worry if obvious errors appeared in the visualisation.

While prototyping the design in conversation with curators at Cooper Hewitt, we discussed the possibility of different versions of this tool: offering more control in search, and not massaging parameters, for in-depth researchers. But there is also value in visually satisfying results. As a curator expressed it: “We have a large number of professional designers and design students who come here … Just seeing beautiful examples of how people have used particular colour schemes is research. So the visually satisfying… seeing the most compelling works has a value as well. For the professional designers who say, ‘wow, this is really incredible use of this colour scheme. I want to share this with my students.’”

‘Dive into Color’ has since been exhibited in the London Design Festival 2018, and will hopefully go online at some point. Any feedback is very welcome: olivia.fletcher-vane@network.rca.ac.uk

Many thanks to Cooper Hewitt for their help with this project: especially Pamela Horn, Jennifer Cohlman Bracchi, Susan Brown and the technical team for getting ‘Dive into Color’ up and running in the galleries. Thanks to Neil Parkinson who showed me the Colour Reference Library at RCA, to Dr Alexandra Loske, Patrick Baty and RCA students for their thoughts, and to Jonny Jiang for help with the final UI design. And thanks to Stephen Boyd Davis for his continued help and support!


Three low white pillars glow with light while one person leans over each, pressing a button on the right side of each pillar releasing a unique scent.

Accessible Exhibition Guide: Delivering Content to All

When the user is included in the design process from the start, choices multiply for everyone’s benefit.

In Cooper Hewitt’s newest exhibitions, Access+Ability (through September 3, 2018) and The Senses: Design Beyond Vision (through October 28, 2018), designing with the user and visitor at the center is integral. Accessibility and inclusion are a key focus of these current exhibitions and essential strategic goals for the museum, now and for the years ahead. There are many exciting ways we’re stepping up our efforts even more, aiming to provide all audiences the same quality of experience on campus and online: https://www.cooperhewitt.org/accessibility-at-cooper-hewitt

In The Senses, visitors are welcomed into a multisensory playground, touching, smelling, hearing, and seeing in dynamic new ways.

Four adults are lined along and facing a curving wall covered in black synthetic fur. They stand on a wooden floor with backs facing outward as they each touch the wall, moving both hands along and around the wall’s surface.

Visitors touching a furry wall activate orchestral musical compositions. Designed by Studio Roos Meerman and KunstLAB Arnhem

While preparing for the exhibition we knew that there would be many touchable objects and installations (43 to be exact) in The Senses. What we lacked was a medium to deliver the exhibition content to every visitor. Cooper Hewitt’s Pen offers agency and a new way to explore in a museum for people who are sighted, but without audio, how might visitors with low vision or who are blind explore the content?

Making Cooper Hewitt Content Accessible
In 2017, we launched our large print label feature on our collection website. This system makes each exhibition’s label content (text and image) available, responsive to the user’s device and customizable in six different font sizes. Our Visitor Experience team also has an efficient way to produce large print label binders, directly from our collection database, for the galleries to share with visitors. We began investigating solutions for making content labels and gallery cues readable for people who are blind. We considered Braille labels, using beacon notifications, and even floor mats in front of each object and installation to message that an interaction is calling. We asked questions such as “which text would be more meaningful to print in Braille on the labels—object tombstone (museum-speak for object stats) or text about the object?” Wrong questions! All of this research and internal discussion wasn’t leading to viable solutions. Our group of five sighted colleagues, well intentioned as we were, weren’t getting at what the user might want. It was during a gallery visit with Sina Bahram—computer scientist, consultant, researcher, and President and Founder of Prime Access Consulting (PAC), which advises museums and other institutions on website and digital accessibility—who was onsite for a meeting, that he simply said: “Identification and verification.” So obvious, but somehow we weren’t seeing it! Every visitor wants agency to determine what information to access, about which object, when he or she wants it.

White label displays a bright yellow textured bar that messages “touch | hear” in Braille. Also printed in Braille is the object ID number that the user will enter into the app.

Accessible Exhibition Content Design

The Scaffolding
Using our API and label rail system—designed for access to the collection in the galleries—we determined that identification would be an assigned object ID number, e.g. 0103, and that verification is why we have the name of the object in Braille as well. This way, once you type the number of the “identified” object into your phone, you can “verify” that you’ve got the right one in front of you. An added benefit: users can choose whether they even want to read more about that particular object. An app could call the API and deliver the exhibition content. After discussing with our colleagues we prototyped, tested, and moved forward—with about 15 days to go before exhibition opening—to develop a version 1 native app for iOS, with a v.2 web-based application for Android.
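Conceptually, the lookup the app performs is small: map the typed-in number to an object, fetch its content from the API, and echo back the title so the user can verify the match. A hypothetical sketch in Python (the Cooper Hewitt API is method-based; the ID mapping and token here are placeholders, not our actual data):

    import requests

    API = 'https://api.collection.cooperhewitt.org/rest/'

    # Hypothetical mapping of exhibition label numbers to collection object IDs
    LABEL_TO_OBJECT = {'0103': '18704235', '0104': '18704241'}

    def fetch_object(label_number, access_token):
        """Identification: the visitor types a label number.
        Verification: the returned title confirms they have the right object."""
        response = requests.post(API, data={
            'method': 'cooperhewitt.objects.getInfo',
            'access_token': access_token,  # placeholder token
            'object_id': LABEL_TO_OBJECT[label_number],
        })
        return response.json()['object']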

Cooper Hewitt Accessible Exhibition Guide printed in white type on black background, above a blank box where the keypad below is used to type in a content ID number. HELP appears in the top right corner.

Content number entry screen for the Cooper Hewitt app.

Accessible Systems
Bahram and Joshua Lerner of Monorail built the app. The tactile labels were produced by Steven Landau, whose company Touch Graphics creates accessible audio and tactile graphics. Every label in The Senses features an object title and number, printed in Braille and Latin characters. Smartphones are used to access the associated content. Labels are discoverable as they are installed at a consistent height throughout the exhibition on our label rail system. Cooper Hewitt’s first Accessible Exhibition Guide was uploaded and went live on April 18. Download it from the Apple App Store or connect from our website. The guide contains the exhibition’s descriptive and interpretive content in both text and audio formats. A visitor can choose to read the text, hear it with a screen reader, or listen to an audio recording.

From top down: Black background with drop-out white type: 0104, Dialect for a New Era, 2017–18; Image of six glowing white pillars; label text.

Image of screen that appears when Dialect for a New Era content ID is entered into the app.

The Cooper Hewitt Accessible Exhibition Guide was conceived as we looked for multi-modal channels of content delivery for users with differing abilities. Designing for multiple senses highlights the principle that the methodology of access should not dictate the level of access, which is the goal for the Accessible Exhibition Guide and for Cooper Hewitt.

New Relationships: A Summer at Cooper Hewitt

This summer I was the Peter A. Kruger Cross-Platform Publishing intern. When asked about my responsibilities, many people want to know: “What does ‘Cross-Platform’ mean?”

At Cooper Hewitt, Cross-Platform Publishing sits at the nexus of the Digital and Emerging Media, Communications, Curatorial, Education, and Exhibitions departments. During my internship I have helped to research, develop, and manage all forms of content for print and electronic publications. As a part of the Cross-Platform Publications team, I have had the opportunity to participate in decision-making that affects the design and content of museum channels, printed books, and digital tables.

One of my favorite projects this summer was collaborating with the Product Design and Decorative Arts department to develop their plan for the digital tables in the museum’s upcoming exhibition, Jewelry of Ideas: Gifts from the Susan Grant Lewin Collection. The process for developing an application for one of the museum’s digital tables begins with thinking about what stories are not being told in the exhibition didactics. When Cooper Hewitt launched its digital tables in 2014, the team built an application that shows relationships between the founding donors of the Cooper Hewitt collection; it has been in use in the exhibition Hewitt Sisters Collect. So I was asked: how might we modify that application for the constituents in the Jewelry of Ideas exhibition? I began looking at research about the designers in the exhibition to uncover meaningful relationships and connections between them. Working with the curator, we decided which relationships we believed were the most important to highlight. From there, I, with the help of our registrar, created relationship hierarchies in TMS. For the Susan Grant Lewin exhibition we decided that the most important relationships to feature on the digital table were those created by the schools that various designers attended or taught at. With this information we hope visitors can see how various styles and techniques arose from certain schools and how these designers’ works influenced one another.

To build the foundation of the interactive content in TMS, I recorded the connections between designers based on school, mentorship, and history of collaboration. Currently, new code is being written to modify the donor application. Once completed, the collection site records and the digital table will reveal these relationships to users. The table interface is designed with a “river” where objects and designer images flow across the digital table. When a designer is selected, a short biography appears. Underneath the biography, related designers are listed who either attended the same school or worked together in some way. We hope that this interactive digital experience will help visitors visualize the interconnected nature of the collection in a new way.
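The underlying data is essentially a small graph. As an illustrative sketch (hypothetical records, not the actual TMS schema), the “related designers” list can be derived by grouping designers who share a school:

    from collections import defaultdict

    # Hypothetical (designer, school) records exported from TMS
    RECORDS = [
        ('Designer A', 'Rhode Island School of Design'),
        ('Designer B', 'Rhode Island School of Design'),
        ('Designer C', 'Fachhochschule Pforzheim'),
    ]

    def related_designers(records):
        """Group designers by school, then relate each designer to the
        others who attended or taught at the same school."""
        by_school = defaultdict(set)
        for designer, school in records:
            by_school[school].add(designer)
        related = defaultdict(set)
        for designers in by_school.values():
            for d in designers:
                related[d] |= designers - {d}
        return related

    # related_designers(RECORDS)['Designer A'] -> {'Designer B'}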


Designing for Digital and Print

As this year’s intern in Digital and Emerging Media, I was tasked with creating designs that needed to work as a static .pdf in both print and digital formats. This presented a significant challenge: a design created for printable paper dimensions is approached differently from a design for digital display. To accommodate both, the use of space in a limited size must be efficient and readable.

The first project I tackled this issue with was Cooper Hewitt’s Fall 2017 Public Programs calendar. The design brief was to create a brochure that visitors can view on their screens as a landscape orientation .pdf page on the Public Programs website. We designed this .pdf so that visitors could print at home. For easy transport, the brochure needed to fold up small enough to fit in a pocket. The desired layout and size of this brochure presented a fun but difficult design challenge.

The image below is what the .pdf will look like when opened on the website. The dashes represent folding lines, which when printed onto a double-sided letter-size piece of paper act as a guide for folding it into 8 small rectangles for a pocket size calendar or 4 longer vertical rectangles for a brochure size. Should we decide to print on-site, an 11×17 layout was also created.

The second project I worked on was less complex to design for print, but much more complex in creating the digital layout. The book Enwheeled by Penny Wolfson is the latest publication in Cooper Hewitt’s DesignFiles e-book series. The e-books in the series are currently offered through online stores for $2.99 per book. Cooper Hewitt will be shifting this distribution model for the DesignFiles series beginning with Enwheeled, publishing in December 2017. Enwheeled will be the first web-based publication available on cooperhewitt.org, but that is a whole other post! My task was to create a traditional InDesign layout for the text and then use the instructions developed by last year’s Digital and Emerging Media intern, Emma Weil, for creating Markdown-friendly HTML. The text was first entered into threaded text boxes with the related images dispersed accordingly. Once that process was complete, I followed Emma’s instructions for tagging the text and images so that they appear in the correct order when exported to HTML.

Certain images needed to be placed in between threaded text boxes; in this case, the images had to be anchored to their corresponding caption, and the caption anchored to the end of the paragraph that it coincides with.

The final step in preparing the InDesign document for export to HTML was to assign paragraph and character styles to the text. Each text format needs to be assigned a specific style setting in order for the text to appear correctly in the HTML. Character styles include all italic, bolded, and superscripted text. Paragraph styles act the same way with entire bodies of text, as well as signifying the headers and sub-headers of the book.

When the document is exported to HTML, the order of the text and images remains intact and correctly formatted in simple text form. The HTML reflows to the size of the window or screen it is being viewed on, and, after being run through Emma’s Python script, is ready to be converted to Markdown.
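I can’t reproduce Emma’s script here, but the flavour of the HTML-to-Markdown step is simple to sketch (a toy version with BeautifulSoup, handling just headers and paragraphs):

    from bs4 import BeautifulSoup

    def html_to_markdown(html):
        """Walk the exported HTML in order, mapping tags to Markdown."""
        soup = BeautifulSoup(html, 'html.parser')
        prefixes = {'h1': '# ', 'h2': '## ', 'p': ''}
        lines = []
        for el in soup.find_all(['h1', 'h2', 'p']):
            text = el.get_text(strip=True)
            if text:
                lines.append(prefixes[el.name] + text)
        return '\n\n'.join(lines)

    # html_to_markdown('<h1>Enwheeled</h1><p>Introduction...</p>')
    # -> '# Enwheeled\n\nIntroduction...'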

Interactive timeline design: seeking feedback!

Guest post by Olivia Vane

I’ve been visiting Cooper Hewitt for the last few months designing a new way of exploring the collection using timelines and tags. (For more background and details of the project, I’ve written a post here).

I’m set up with a prototype on a touchscreen in the Cooper Hewitt galleries today seeking impressions and feedback from visitors. Do drop in and have a play! I would love to hear your thoughts.


Parting gifts

Today is my last day at Cooper Hewitt, Smithsonian Design Museum. It’s been an incredible seven years. You can read all about my departure over on my personal blog if you are interested. I’ve got some exciting things on the horizon, so would love it if you’d follow me!

Parting gifts

Before I leave, I have been working on a couple last “parting gifts” which I’d like to talk a little about here. The first one is a new feature on our Collections website, which I’ve been working on in the margins with former Labs member, Aaron Straup Cope. He did most of the work, and has written extensively on the topic. I was mainly the facilitator in this story.

Zoomable Object Images

The feature, which I am calling “Zoomable Object Images” is live now and lets you zoom in on any object in our collection with a handy interface. Just go to any object page and add “/zoom” to the URL to try it out. It’s very much an experiment/prototype at this point, so things may not work as expected, and not ALL objects will work, but for the majority that do, I think it’s pretty darn cool!

Here’s an example, zoomed all the way in – https://collection.cooperhewitt.org/objects/18382347/zoom

Notice that there is also a handy camera button that lets you grab a snapshot of whatever you have currently showing in the window!

Click on the camera to download a snapshot of the view!

This project started out as a fork of some code developed around the relatively new IIIF standard that seems to be making some waves within the cultural sector. So, it’s not perfectly compliant with the standard at the moment (Aaron says “there are capabilities in the info.json files that don’t actually exist, because static files”), but we think it’s close enough.

You can’t tell from this image, but you can pinch and zoom this on your mobile device.

To do the job of creating zoomable image tiles for over 200,000 objects we needed a few things. First, we built a very large EC2 server. We tried a few instance sizes and eventually landed on an m4.2xlarge, which seemed to have enough CPU and RAM to get the job done. I also added a terabyte of storage to the instance so we’d have enough room to store all the images as we processed them.

Aaron wrote the code to download all of our high-res image files and process them into tiles. The code is designed to run with multiple threads so we can get as much efficiency out of our big-giant-EC2 server as possible (otherwise we’d be waiting all year for the results). We found out after a few attempts that there was one major flaw in the whole operation. As we started to generate thousands and thousands of small files, we also started to run up against the limit of inodes available to us on our boot drive. I don’t know if we ever really figured this out, but it seemed that the OS was trying to index all those tiny files in some way that was causing the extra overhead. There is likely a very simple way to turn that off, but in the interest of getting the job done we decided to take an alternative route.

Instead of processing and saving the images locally and then transferring them all to our S3 storage bucket, Aaron rewrote the code so that it would upload the files as they were processed. S3 would be their final resting place in any case, so they needed to get there one way or another and this meant that we wouldn’t need so much storage during processing. We let the script run and after a few weeks or so, we had all our tiles, neatly organized and ready to go on S3.
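The shape of that pipeline – a thread pool where each image’s tiles are uploaded as soon as they’re cut, so nothing piles up on local disk – looks roughly like this (a simplified Python sketch with placeholder names; go-iiif is the real implementation):

    import concurrent.futures
    import boto3

    s3 = boto3.client('s3')
    BUCKET = 'example-tiles-bucket'  # placeholder

    def cut_tiles(image_id):
        """Stand-in for the real tiler: yield (s3_key, jpeg_bytes) pairs."""
        yield 'tiles/{}/0/0/full/default.jpg'.format(image_id), b'...'

    def process_object(image_id):
        # Upload each tile as soon as it's generated, instead of writing
        # thousands of tiny files (and exhausting inodes) locally first
        for key, tile in cut_tiles(image_id):
            s3.put_object(Bucket=BUCKET, Key=key, Body=tile)

    image_ids = ['18382347']  # placeholder list of object image IDs
    with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
        list(pool.map(process_object, image_ids))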

Last but not least, Aaron wrote some code that allowed me to plug it into our collections website, resulting in the /zoom pages that are now available. Woosh!

Check out the code here https://github.com/thisisaaronland/go-iiif – and dive into the tl;dr discussion over here if you’re into this kinda thing.

Cooper Hewitt in a box

The second little gift is around some work I’ve been doing to try and make developing and maintaining code at Cooper Hewitt a tiny bit better.

Since we started all this work we’ve been utilizing a server that sits on the third floor (right next to Josue) called, affectionately, “Bill.” Bill (pictured above) has been our development machine for many years, and has also served as the server in charge of extracting all of our collection data and images from TMS and getting them published to the web. Bill is a pretty important piece of equipment.

The pros to doing things this way are pretty clear. We always have, at the ready, a clone of our entire technology stack available for development. All a developer needs to do is log in to Bill and get coding. Being within the Smithsonian network also means we get built-in security, so we don’t need to think about putting passwords in place or trying to hide in plain sight.

The cons are many.

For one, you need to be aware of each other. If more than one developer is working on the same file at the same time, bad things can happen. Sam sort of fixed this by creating individual user instances of the major web applications, but that requires a good bit of work each time you set up a new developer. Also, you are pretty much forced to use either Emacs or Vi. We’ve all grown to love Emacs over time, but it’s a pain if you’ve never used it before as it requires a good deal of muscle memory. Finally, you have to be sitting pretty close to Bill. You at least need to be on the internal network to access it easily, so remote work is not really possible.

To deal with all of this and make our development environment a little more developer friendly, I spent some time building a Vagrant machine to essentially clone Bill and make it portable.

Vagrant is a popular system amongst developers since it can easily replicate just about any production environment, and allows you to work locally on your MacBook from your favorite coffee shop. Usually, Vagrant is set up on a project-by-project basis, but since our tech stack has grown in complexity beyond a single project (I’ve had to chew on lots of server diagrams over the years), I chose to build more of a “workspace.”

I got the idea from Dan Phiffer at Mapzen who did the same for their Who’s on First project.

Essentially, there is a simple Vagrantfile that builds the machine to match our production server type, and then there is a setup.sh script that does the work of installing everything. Within each project repository there is also a /vagrant/setup.sh script that installs the correct components, customized a little for ease of use within a Vagrant instance. There is also a folder in each repo called /data with tools that are designed to fetch the most recent data snapshots from each application so you can have a very recent clone of what’s in production. To make this as seamless as possible, I’ve also added nightly scripts to create “latest” snapshots of each database in a place where the Vagrant tools can easily download them.
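A /data helper script doesn’t need to be fancy; something along these lines would do it (the URL and naming scheme here are hypothetical):

    import urllib.request

    SNAPSHOT_BASE = 'https://example-snapshots.s3.amazonaws.com'  # hypothetical

    def fetch_latest(database, dest_dir='.'):
        """Download the nightly 'latest' snapshot of a database dump."""
        url = '{}/{}/latest.sql.gz'.format(SNAPSHOT_BASE, database)
        dest = '{}/{}-latest.sql.gz'.format(dest_dir, database)
        urllib.request.urlretrieve(url, dest)
        return dest

    # e.g. fetch_latest('collection') from inside a repo's /data folder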

This is all starting to sound very nerdy, so I’ll just sum it up by saying that now anyone who winds up working on Cooper Hewitt’s tech stack will have a pretty simple way to get started quickly, will be able to use their favorite code editor, and will be able to do so from just about anywhere. Woosh!

Lastly

Lastly, it’s been an incredible journey working here at Cooper Hewitt and being part of the “Labs.” We’ve done some amazing work and have tried to talk about it here as openly as possible–I hope that continues after I’m gone. Everyone who has been a part of the Labs in one way or another has meant a great deal to me. It’s a really exciting place to be.

There is a ton more work to do here at Cooper Hewitt Labs, and I am really excited to continue to watch this space and see what unfolds. You should too!

Thanks for everything.

-micah

Process Lab: Citizen Designer Digital Interactive, Design Case Study

Fig. 1. Process Lab: Citizen Designer exhibition and signage, on view at Cooper Hewitt

Background

The Process Lab is a hands-on educational space where visitors are invited to get involved in the design process. Process Lab: Citizen Designer complemented the exhibition By the People: Designing a Better America, exploring the poverty, income inequality, stagnating wages, rising housing costs, limited public transport, and diminishing social mobility facing America today.

In Process Lab: Citizen Designer participants moved through a series of prompts and completed a worksheet [fig. 2]. Selecting a value they care about, a question that matters, and design tactics they could use to make a difference, participants used these constraints to create a sketch of a potential solution.

Design Brief

Cooper Hewitt’s Education Department asked Digital & Emerging Media (D&EM) to build an interactive experience that would encourage visitors to learn from each other by allowing them to share and compare their participation in the exhibition Process Lab: Citizen Designer.

I served as project manager and user-experience/user-interaction designer, working closely with D&EM’s developer, Rachel Nackman, on the project. Interface Studio Architects (ISA) collaborated on concept and provided environmental graphics.

Fig. 2. Completed worksheet with question, value and tactic selections, along with a solution sketch

Process: Ideation

Project collaborators—D&EM, the Education Department, and ISA—came together for the initial steps of ideation. Since the exhibition concept and design were well established at this time, it was clear how participants would engage with the activity. Through the process of using cards and prompts to complete a worksheet, they would generate several pieces of information: a value, a question, one to two tactic selections, and a solution sketch. The group decided that these elements would provide the content for the “sharing and comparing” specification in the project brief.

Of the participant-generated information, the solution sketch stood out as the only non-discrete element. We determined that given the available time and budget, a simple analog solution would be ideal. This became a series of wall-mounted display bins in which participants could deposit their completed worksheets. This left value, question, and tactic information to work with for the content of the digital interactive.

From the beginning, the Education Department mentioned a “broadcast wall.” Through conversation, we unpacked this term and found a core value statement within it. Phrased as a question, we could now ask:

“How might we empower participants to think about themselves within a community so that they can be inspired to design for social change?”

Framing this question allowed us to outline project objectives, knowing the solution should:

  • Help form a virtual community of exhibition participants.
  • Allow individual participants to see themselves in relation to that community.
  • Encourage participants to apply learnings from the exhibition to other communities.

Challenges

As the project team clarified project objectives, we also identified a number of challenges that the design solution would need to navigate:

  • Adding Value, Not Complexity. The conceptual content of Process Lab: Citizen Designer was complex. The design activity had a number of steps and choices. The brief asked that D&EM add features to the experience, but the project team also needed to mitigate a potentially heavy cognitive load on participants.
  • Predetermined Technologies. An implicit part of the brief required that D&EM incorporate the Pen into user interactions. Since the Pen’s NFC-reading technology is embedded throughout Cooper Hewitt, the digital interactive needed to utilize this functionality.
  • Spatial Constraints. Data and power drops, architectural features, and HVAC components created limitations for positioning the interactive in the room.
  • Time Constraints. D&EM had two months to conceptualize and implement a solution in time for the opening of the exhibition.
  • Adapting to an Existing Design. D&EM entered the exhibition design process at its final stages. The solution for the digital interactive had to work with the established participant-flow, environmental graphics, copy, furniture, and spatial arrangement conceived by ISA and the Education Department.
  • Budget. Given that the exhibition design was nearly complete, there was virtually no budget for equipment purchases or external resourcing.

Process: Defining a Design Direction

From the design brief, challenges, objectives, and requirements established so far, we could now begin to propose solutions. Data visualization surfaced as a potential way to fulfill the sharing, comparing and broadcasting requirements of the project. A visualization could also accommodate the requirement to allow individual participants to compare themselves to the virtual exhibition community, by displaying individual data in relation to the aggregate.

ISA and I sketched ideas for the data visualization [figs. 3 and 4], exploring a variety of structures. As the project team shared and reviewed the sketches, discussion revealed some important requirements for the data organization:

  • The question, value and tactic information should be hierarchically nested.
  • The hierarchy should be arranged so that question was the parent of value, and value was the parent of tactics.
Fig. 3. My early data visualization sketches

Fig. 4. ISA’s data visualization sketch

With this information in hand, Rachel proceeded with the construction of the database that would feed the visualization. The project team identified an available 55-inch monitor to display the data visualization in the gallery; oriented vertically, it could fit into the room. As I sketched ideas for data visualizations I worked within the given size and aspect ratio. Soon it became clear that the number of possible combinations within the given data structure made it impossible to accommodate the full aggregate view in the visualization. To illustrate the improbability of showing all the data, I created a leaderboard with mock values for the hundreds of permutations that result from the combination of 12 value, 12 question and 36 tactic selections [fig. 5, left]. Not only was the volume of information on the leaderboard overwhelming, but Rachel and I agreed that the format made no interpretive sense of the data. If the solution was to serve the project goal to “empower participants to think about themselves within a community so that they can be inspired to design for social change,” it needed to have a clear message. This insight led to a series of steps towards narrativizing the data with text [fig. 5].

Concurrently, the data visualization component was taking shape as an enclosure chart, also known as a circle-packing representation. This format could accommodate both hierarchical information (nesting of the circles) and values for each component (size of the circles). With the full project team on board with the design direction, Rachel began development on the data visualization using the D3.js library.
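Circle packing in D3 wants the data as a nested {name, children} hierarchy with a count at each leaf. Building the question → value → tactic tree from raw entries is straightforward; a sketch in Python with illustrative entries (not our production code, which lives in the display website):

    import json
    from collections import Counter

    # Illustrative (question, value, tactic) selections
    entries = [
        ('Question A', 'Equality', 'Prototype'),
        ('Question A', 'Equality', 'Map'),
        ('Question B', 'Health', 'Prototype'),
    ]

    def to_hierarchy(entries):
        """Nest question > value > tactic with counts, in the shape
        d3.hierarchy()/d3.pack() expect."""
        tree = {}
        for (q, v, t), n in Counter(entries).items():
            tree.setdefault(q, {}).setdefault(v, {})[t] = n
        return {'name': 'responses', 'children': [
            {'name': q, 'children': [
                {'name': v, 'children': [
                    {'name': t, 'value': n} for t, n in tactics.items()
                ]} for v, tactics in values.items()
            ]} for q, values in tree.items()
        ]}

    print(json.dumps(to_hierarchy(entries), indent=2))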

Fig. 5. Series of mocks moving from a leaderboard format to a narrativized presentation of the data with an enclosure chart

Process: Refining and Implementing a Solution

Through parallel work and constant communication, Rachel and I progressed through a number of decisions around visual presentation and database design. We agreed that to enhance legibility we should eliminate tactics from the visualization and present them separately. I created a mock that applied Cooper Hewitt’s brand to Rachel’s initial implementation of the enclosure chart. I proposed copy that wrapped the data in understandable language, and compared the latest participant to the virtual community of participants. I opted for percentage values to reinforce the relationship of individual response to aggregate. Black and white overall, I used hot pink to highlight the relationship between the text and the data visualization. A later iteration used pink to indicate all participant data points. I inverted the background in the lower quarter of the screen to separate tactic information from the data visualization so that it was apparent this data was not feeding into the enclosure chart, and I utilized tactic icons provided by ISA to visually connect the digital interactive to the worksheet design [fig. 2].

Next, I printed a paper prototype at scale to check legibility and ADA compliance. This let us analyze the design in a new context and invited feedback from officemates. As Rachel implemented the design in code, we worked with Education to hone the messaging through copy changes and graphic refinements.

Fig. 5. A paper prototype made to scale invited people outside the project team to respond to the design, and helped check for legibility

The next steps towards project realization involved integrating the data visualization into the gallery experience and into the web experience on collection.cooperhewitt.org, the collection website. The Pen bridges these two user-flows by allowing museum visitors to collect information in the galleries. The Pen is associated with a unique visit ID for each new session. NFC tags in the galleries are loaded with data by curatorial and exhibitions staff so that visitors can use the Pen to save information to its onboard memory. When a visit ends, the Pen data is uploaded by museum staff to a central database that feeds into unique URLs established for each visit on the collection site.

The Process Lab: Citizen Designer digital interactive project needed to work with the established system of Pens, NFC tags, and the collection site, but also accommodate a new type of data. Rachel connected the question/value/tactic database to the Cooper Hewitt API and collection site. A reader-board at a freestanding station would allow participants to upload Pen data to the database [fig. 6]. The remaining parts of the participant-flow to engineer were the presentation of real-time data on the visualization screen, and the leap from the completed worksheet to digitized data on the Pen.

Rachel found that her code could ping the API frequently to look for new database information to display on the monitor—this would allow for near real-time responsiveness of the screen to reader-board Pen data uploads. Rachel and I decided on the choreography of the screen display together: a quick succession of entries would result in a queue. A full queue would cycle through entries. New entries would be added to the back of the queue. An empty queue would hold on the last entry. This configuration meant that participants might not see their data immediately if the queue was full when they added their entry. We agreed to offload the challenge of designing visual feedback about the queue length and succession to a subsequent iteration, in service of meeting the launch deadline. The queue length has not proven problematic so far, and most participants see their data on screen right away.
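In Python-flavoured pseudocode, that choreography amounts to a small bounded queue (a sketch of the behaviour, not the production JavaScript):

    from collections import deque

    class DisplayQueue:
        """Cycle through recent entries; hold on the last one when empty."""

        def __init__(self, maxlen=10):
            self.queue = deque(maxlen=maxlen)  # a full queue drops the oldest
            self.last_shown = None

        def add(self, entry):
            self.queue.append(entry)  # new entries join the back

        def next_to_show(self):
            if self.queue:
                self.last_shown = self.queue.popleft()
            return self.last_shown  # an empty queue holds on the last entry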

Fig. 6. Monitor displaying the data visualization website; to the left is the reader-board station

As Rachel and I brought the reader board, data visualization database, and website together, ISA worked on the graphic that would connect the worksheet experience to the digital interactive. The project team agreed that NFC tags placed under a wall graphic would serve as the interface for participants to record their worksheet answers digitally [fig. 7].

Fig. 7. ISA-designed “input graphic” where participants record their worksheet selections; NFC tags beneath the circles write question, value and tactic data to the onboard memory of the Pen

Process: Installation, Observation & Iteration

Rachel and I had the display website ready just in time for exhibition installation. Exhibitions staff and the project team negotiated the placement of all the elements in the gallery. Because of obstacles in the room, as well as data and power drop locations, the input wall graphic [fig. 7] had to be positioned apart from the reader-board and display screen. This was unfortunate given the interconnection of these steps. Also non-ideal was the fact that ISA’s numeric way-finding system omitted the steps of uploading Pen data at the reader-board and viewing the data on-screen [fig. 1]. After installation we had concerns that there would be low engagement with the digital interactive because of its disconnect from the rest of the experience.

As soon as the exhibition was open to the public we could see database activity. Engagement metrics looked good, with 9,560 instances of use in the first ten days. The quality of those interactions, however, was poor. Only 5.8% satisfied the data requirements written into the code, which looked for at least one question, one value, and one tactic in order to process the information and display it on-screen. Any partial entries were discounted.
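The validation itself was as simple as it sounds – an entry only counted if all three fields were present (illustrative field names, not our actual schema):

    def is_complete(entry):
        """Display an entry only if it has at least one question, value and tactic."""
        return all([entry.get('questions'), entry.get('values'), entry.get('tactics')])

    # Partial entries (e.g. a tactic with no question) were discounted, which
    # is why so few of the 9,560 interactions made it on-screen.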

Fig. 8. A snippet of database entries from the first few days of the exhibition showing a high number of missing question, value and tactic entries

Conclusion

The project team met the steep challenges of limited time and budget—we designed and built a completely new way to use the Pen technology. High engagement with the digital interactive showed that what we created was inviting and fit into the participatory context of the exhibition. Database activity, however, showed points of friction for participants. Most had trouble selecting a question, value and tactic on the input graphic, and most did not successfully upload their Pen data at the reader-board. Stringent database requirements added further difficulty.

Based on these observations, it is clear that the design of the digital interactive could be optimized. We also learned that some of the challenges facing the project could have been mitigated by closer involvement of D&EM with the larger exhibition design effort. Our next objective is to stabilize the digital interactive at an acceptable level of usability. We will continue observing participant behavior in order to inform our next iterations toward a minimum viable product. Once we meet the usability requirement, our next goal will be to hand-off the interactive to gallery staff for continued maintenance over the duration of the exhibition.

As an experience, the Process Lab: Citizen Designer digital interactive has a ways to go, but we are excited by the project’s role in expanding how visitors use the Pen. This is the first time that we’ve configured Pen interactivity to allow visitors to input information and see that input visualized in near real-time. There’s significant potential to reuse the infrastructure of this project in a different exhibition context, adapting the input graphic and data output design to a new educational concept, and the database to new content.