
Post-Launch Update on Exhibition Channels: Metrics Analysis

To date, Cooper Hewitt has published several groupings of exhibition-related content in the channels editorial web format. You can read about the development of channels in my earlier post on the topic. This article will focus on post-launch observations of the two most content-rich channels currently on cooperhewitt.org: Scraps and By the People. The Scraps channel contains a wonderful series of posts about sustainability and textiles by Magali An Berthon, and the By the People channel has a number of in-depth articles written by the Curator of Socially Responsible Design at Cooper Hewitt, Cynthia E. Smith. This article focuses on channels as a platform, but I’d like to note that the metrics cited throughout reflect the appeal of the fabulous photography, illustration, research, and writing of channel contributors.

The Scraps exhibition installed in Cooper Hewitt’s galleries.

Since launch, there’s been a positive reaction among staff to channels. Overall they seem excited to have a considered editorial space in which to communicate with readers and exhibition-goers. There has also been strong user engagement with channels. Through Google Analytics we’ve seen two prominent user stories emerge in relation to channels. The first is a user who is planning or considering a trip to the museum. These users most commonly reach channel pages through the Current Exhibitions page; from the channel page, they then enter the ticket purchase path through the sidebar link. [Fig. 1] 4.25% of channel visitors proceeded to tickets.cooperhewitt.org from the Scraps channel; 6.09% did the same from the By the People channel. Web traffic through the Scraps channel contributed 13.31% of all web sales since launch, and By the People contributed 15.7%.

Fig. 1. The Scraps channel sidebar contains two well-trafficked links: one to purchase tickets to the museum, and one to the Scraps exhibition page on the collection website.

The second most prominent group of users demonstrates interest in diving into content. 16.32% of Scraps channel visitors used the sidebar link to visit the corresponding exhibition page that houses extended curatorial information about the objects on display; 10.99% used the navigation buttons to view additional channel posts. 19.11% of By the People channel visitors continued to the By the People exhibition page, and 2.7% navigated to additional channel posts.

Navigation patterns indicate that the two main types of users — those who are planning a visit to the museum and those who dive into editorial content — are largely distinct. There is little conversion from post reads to ticket sales, or vice versa. Future iterations on channels could be directed at improving the cross-over between these user behaviors. Alternatively, we could aim to disentangle the two user journeys to create clearer navigational pathways. Further investigation is required to know which is the right course of development.

Through analytics we’ve also observed some interesting behaviors in relation to channels and social media. One social-friendly affordance of the channel structure is that each post contains a digestible chunk of content with a dedicated URL. Social buttons on posts also encourage sharing. Pinterest has been the most active site for sharing content to date. Channel posts cross-promoted through Cooper Hewitt’s Object of the Day email subscription service are by far the most read and most shared. Because posts were shared so widely, 8.65% of traffic to the Scraps channel originated from posts. (By the People content has had less impact on social media and has driven a negligible amount of site traffic.)

Since posts are well suited to distribution, we realized they needed to serve as effective landing pages to drive discovery of channel content. As a solution, Publications department staff developed language to append to the bottom of each post to help readers understand the editorial context of the posts and navigate to the channel page. [Fig. 2] To make better use of posts as points of entry, future channel improvements could develop discovery features on posts, such as suggested content. Currently, cross-post navigation is limited to a single increment forward or backward.

Fig. 2. Copy appended to each post contextualizes the content and leads readers to the channel home page or the exhibition page on the collection website.

Further post-launch iterations focused on the appearance of posts on the channel page. Publications staff began using an existing feature in WordPress to create customized preview text for posts. [Fig. 3] These crafted appeals are much more inviting to potential readers than the large blocks of excerpted text that show up automatically. [Fig. 4]

Fig. 3. View of a text-based post on a channel page, displaying customized preview text and read time.

Fig. 4. View of a text-based post on a channel page, displaying automatically excerpted preview text.

Digital & Emerging Media (D&EM) department developer, Rachel Nackman, also implemented some improvements to the way that post metadata displays in channels. We opted to calculate and show read time for text-based posts. I advocated for the inclusion of this feature because channel posts range widely in length. I hypothesized that showing read time to users would set appropriate expectations and would mitigate potential frustration that could arise from the inconsistency of post content. We also opted to differentiate video and publication posts in the channel view by displaying “post type” and omitting post author. [Fig. 5 and 6] Again, these tweaks were aimed at fine-tuning the platform UX and optimizing the presentation of content.

Fig. 5. View of a video post on a channel page, displaying “post type” metadata and omitting post author information.

Fig. 6. View of a publication post on a channel page, displaying “post type” metadata and omitting post author information.

The channels project is as much an expansion of user-facing features as it is an extension of the staff-facing CMS. It has been useful to test both new methods of content distribution and new editorial workflows. Initially I intended channels to lean heavily on existing content creation workflows, but we have found that it is crucial to tailor content to the format in order to optimize user experience. It’s been an unexpectedly labor-intensive initiative for content creators, but we’ve seen a return on effort through the channel format’s contribution to Cooper Hewitt’s business goals and educational mission.

Based on observed navigation patterns and engagement analytics, it remains an open question whether the two main user journeys through channels — toward ticket purchases and toward deep-dive editorial content — should be intertwined. We’ve seen little conversion between the two paths, so perhaps user needs would be better served by maintaining a separation between informational content (museum hours, travel information, what’s on view, ticket purchasing, etc.) and extended editorial and educational content. The question certainly bears further investigation — as we’ve seen, even the smallest UI changes to a content platform can have a big impact on the way content is received.

Exhibition Channels on Cooperhewitt.org

There’s a new organizational function on cooperhewitt.org that we’re calling “channels.” Channels are a filtering system for WordPress posts that allow us to group content in a blog-style format around themes. Our first iteration of this feature groups posts into exhibition-themed channels. Subsequent iterations can expand the implementation of channels to broader themed groupings that will help break cooperhewitt.org content out of the current menu organization. In our long-term web strategy this is an important progression to making the site more user-focused and less dictated by internal departmental organization.

The idea is that channels will promote browsing across different types of content on the site because any type of WordPress post—publication, event, Object of the Day, press, or video—can be added to a channel. Posts can also live in multiple channels at once. In this way, the channel configuration moves us toward our goal of creating pathways through cooperhewitt.org content that focus on user needs; as we develop a clearer picture of our web visitors, we can start implementing channels that cater to specific sets of users with content tailored to their interests and requirements. Leaning more heavily on posts and channels than pages in WordPress also leads us into shifting our focus from website = a static archive to website = an ever-changing flow of information, which will help keep our web content fresher and more engaged with concurrent museum programs and events.

The Fragile Beasts exhibition channel page. Additional posts in the channel load as snippets below the main exhibition post (pictured here). The sidebar is populated with metadata entered into custom fields in the CMS.

In WordPress terms, channels are a type of taxonomy added through the CustomPress plugin. We enabled the channel taxonomy for all post types so that in the CMS our staff can flag posts to belong to whichever channels they wish. For the current exhibition channel system to work we also created a new type of post specifically for exhibitions. When an exhibition post is added to a channel, the channel code recognizes that this should be the featured post, which means its “featured image” (designated in the WordPress CMS) becomes the header image for the whole channel and the post is pinned to the top of the page. The exhibition post content is configured to appear in its entirety on the channel page, while all other posts in the channel display as snippets, cascading in reverse chronological order.
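The pinning and ordering logic reduces to a simple rule. Here is a hedged TypeScript sketch of the idea (illustrative names only; the real implementation lives in the WordPress theme, in PHP):

```typescript
interface ChannelPost {
  type: "exhibition" | "event" | "object-of-the-day" | "publication" | "press" | "video";
  publishedAt: Date;
}

// The exhibition post is pinned to the top of the channel page (its
// featured image becomes the channel header); all other posts cascade
// in reverse chronological order.
function orderChannelPosts(posts: ChannelPost[]): ChannelPost[] {
  const featured = posts.filter((p) => p.type === "exhibition");
  const rest = posts
    .filter((p) => p.type !== "exhibition")
    .sort((a, b) => b.publishedAt.getTime() - a.publishedAt.getTime());
  return [...featured, ...rest];
}
```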

Through CustomPress we also created several custom fields for exhibition posts, which populate the sidebar with pertinent metadata and links. The new custom fields on exhibition posts are: Exhibition Title, Collection Site Exhibition URL, Exhibition Start Date, and Exhibition End Date. The sidebar accommodates important “at-a-glance” information provided by the custom field input: for example, if the date range falls in the present, the sidebar displays a link to online ticketing. Tags show up as well to act as short descriptors of the exhibition and channel content. The collection site URL builds a bridge to our other web presence at collection.cooperhewitt.org, where users can find extended curatorial information about the exhibition.
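The “at-a-glance” ticketing link boils down to a date-range check against those custom fields. Roughly, as a sketch (field names are mine, not the actual CMS keys):

```typescript
// Show the online-ticketing link only while the exhibition is on view,
// i.e. when the custom-field date range spans the present.
function shouldShowTicketLink(exhibitionStart: Date, exhibitionEnd: Date, now = new Date()): boolean {
  return exhibitionStart <= now && now <= exhibitionEnd;
}
```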

The sidebar on the Fragile Beasts exhibition channel page displays quick reference information and links.

On a channel page, clicking on a snippet (below the leading exhibition post) directs users to a post page where they can read extended content. On the post page we added an element in the sidebar called “Related Channels.” This link provides navigation back to the channel from which users flowed. It can also be a jumping-off point to a new channel. Since posts can live in multiple channels at once this feature promotes the lateral cross-content navigation we’re looking to foster.

The sidebar on post pages provides “Related Channel” navigation, which can be a hub to jump into several editorial streams.

Our plan over the coming weeks is to onboard CMS users to the requirements of the new channel system. As we launch new channels, we will help keep information flowing by maintaining a publishing schedule and identifying content that can fit into channel themes. Our upcoming exhibition Scraps: Fashion, Textiles and Creative Reuse will be our first major test of the channels system. The Scraps channel will include a wealth of extra-exhibition content, which we’re looking forward to showcasing with this new system.

My mock-up for the exhibition channel structure and design. Some of the features on the mock were knocked off the to-do list in service of getting an MVP on the live site. Additional feature roll-out will be ongoing.

Museums and the Web Conference Recap: Administrative Tools at Cooper Hewitt

The Labs team had a great time at Museums and the Web this year. We published two papers for the conference and presented them both to the audience of cultural heritage thinkers, makers, planners and administrators. Sam Brenner and I shared our paper, “Winning (and losing) hearts and minds of museum staff: Administrative interfaces at Cooper Hewitt,” which outlines the process of designing, developing and iterating two in-house built, staff-facing tools: Tagatron and the Pen Pairing Station. Both administrative tools are essential aides to staff managing new responsibilities associated with visitor-facing gallery technologies.

Here is the deck from our presentation:

Administrative interfaces at Cooper Hewitt

Introduction

  • Cooper Hewitt, Smithsonian Design Museum. New York, New York.
  • Our strategy around presenting design is to expose process—how things are made, how they are conceived, how they are designed.
  • This presentation will speak to our philosophy of openness around design process in sharing part of the back-story of how our current visitor-facing experience came together and how it’s maintained.

Visitor Interfaces

  • The visitor-facing technologies in the museum, introduced in 2014, invite new forms of engagement with the Cooper Hewitt collection. They encourage active participation, letting visitors play, design and collect through multi-touch table applications and the Pen.
  • Before we were able to re-design the visitor’s relationship to the museum we went through comprehensive changes at every level.

Comprehensive Re-design / Institutional Shift

  • We began a restoration of the mansion, stripping it down to its Carnegie steel girders.
  • To a similar degree we rethought the organizational infrastructure of Cooper Hewitt with a comprehensive re-design of operations, workflows and responsibilities.

New Responsibilities (for Everyone)

  • There were new jobs created to support the new visitor experience, including that of our Gallery Technology Manager, Mary Fe, whose job responsibilities include maintaining the Pens and troubleshooting touch tables and gallery interactives.
  • The re-design affects every staff member at Cooper Hewitt:
  • Registrars: aggressive timetable to enter data
  • Security: understand the mission and visitor experience, teaching visitors how to use the Pen
  • Exhibitions: label programming, maintenance
  • Curators: tags, relations, chat formatting for length
  • Visitor services: Pen pairing – a whole new step in between “welcome” and ticket sale
  • Before we got to this stage there was the task of onboarding staff to new responsibilities, which fell largely to the Digital & Emerging Media department. With the allocation of new responsibilities also came the opportunity to create tools that could facilitate some of the work.

Defining the Need for Considered Interfaces

  • Why did we decide that new interfaces were necessary in certain parts of the workflow?
  • We started with observation, watching workflows as they emerged. We created tools to assist where necessary. The need for interfaces was in part logistical, in part technical and also in part human.
  • Candidates for interface development are parts of the new digital ecosystem where there is:
  • High volume of data
  • Large number of users
  • Complex tasks
  • Something that needs constraints or enforcement
  • Example: the job of assigning tags and related objects to everything we put on display for the reopening. The touch table interfaces utilize tag and related object information. This data does not live in TMS, so it is housed in a custom database.
  • The task of creating the data fell to the curators. Originally this was stored in Excel files. While the curators were happy using spreadsheets, we identified a few major issues with them. The biggest one was that every department had devised its own schema for storing the data, which would ultimately have to be reconciled.
  • This example fits all of the criteria above.

Case Study 1: Tagatron

  • Explicit purpose of the Tagatron tool: make the work quicker; make the metadata consistent; make the organization of the metadata consistent
  • Making this tool highlighted for the digital team the complex relationship between the work, the tool, and the people responsible for each—even though we believed the tool made things easier, the tool had its own set of ongoing technical and usability issues
  • We found that those issues bred a degree of distrust, or a lack of confidence, in the larger project. Some of this was due to bugs in the tool, but some of it was simply that the work was now known to be “enforced” or taken more seriously, which made users uncomfortable.
  • Key idea: the interface takes on a symbolic value in representing “new responsibilities” and can bring about issues that it might not have been designed to address. It takes on a complex position between human needs and technical needs.

Tagatron (continued)

  • These graphs illustrate how prolific the task of tagging and relating objects is. It was important to build Tagatron because it is a crucial tool in the ongoing operation of the museum’s digital experience. More so than the spreadsheets ever could, it allows for scalability.
  • Since the re-opening the tool went through one major design and backend overhaul, and continues to see small iterations.

Case Study 2: Pen Pairing Stations

  • Context of Pen Pairing: Every visitor to the museum receives a Pen. At the museum’s front desk each Pen is paired with a unique admission ticket. Every ticket has a shortcode identifier that allows visitors to retrieve their Pen visit data online when they enter the code on their ticket.
  • Pen pairing is done at a very critical point in the visitor experience when the interaction needs to be quick and frictionless. Visitor Services Associates have to coordinate a number of simultaneous tasks.

Pen Pairing Station (continued)

  • This video depicts the Pen pairing process behind the front desk. It documents the first version of the Pen Pairing application, and shows the exposed Pen-reading circuit board before housing was built.
  • Pen pairing is one of the most demanding of the new responsibilities created by the “new experience”: it has to fit between welcoming a visitor, taking their money, answering any questions, and looking up their member ID.
  • Each use of the tool only lasts 5–10 seconds, but we’ve spent many hours and built many versions of this tool to figure out exactly what needs to happen in that time to accomplish all the tasks, including updating databases, handling failures, and managing serial communication.
  • Every one of these iterations gave us an opportunity to be connected to the staff using the tools, not only to make something that works better, but to be a part of the conversation.

Administrative Interfaces: What does success look like? How does it feel?

  • In informal interviews with Tagatron users we found trust to be a central theme of users’ response to the interface
  • Since Tagatron augments the curators’ use of TMS, they were less trusting of its database as a long-lasting data repository
  • Improving user feedback (like confirmation messages) helped build trust in the interface
  • Bill Moggridge, Designing Interactions: designing interaction is designing the relationship between people and things
  • We came to realize the responsibility of designing interfaces—validating and responding to users’ concerns; acknowledging the burden of new responsibilities
  • Administrative interfaces sit at the crux of the staff relationship to the new Cooper Hewitt experience
  • Anticipating issues in developing and maintaining administrative interfaces (when success feels like failure):
  • First, the human factor: being open to the feedback and creating an environment where the channels exist to communicate staff thoughts on the tool.
  • Second, the technical factor: being able to act on what you hear from staff and make the required changes to complete the feedback loop.
  • It is our responsibility as facilitators of technology in the museum to hear and act on concerns.

Questions to ask when starting an administrative application, to anticipate issues and accommodate feedback.

Question 1: To what degree should the (administrative) tool fit with pre-existing notions?

  • This question addresses the need to understand contextual use of the tool
  • Tagatron: curatorial culture around spreadsheets and TMS
  • Pen Pairing Station: this tool disrupted the expected ticket-selling workflow. We learned that the tool needed to make Pen pairing as unobtrusive as possible

Question 2: How much of the underlying technology should come through to the interface?

  • Infrastructure & interfaces are layers of an onion—the best mental model for a tool’s interface might not reflect the best technical model for its back end
  • Tagatron: the filtering tools were a reflection of how data was stored in the database, not of how curators expected it to be organized
  • Pen Pairing Station: error messages from all parts of the application stack came through to the user unaltered, which was not helpful to users
  • Highlights the need for a technical solution that allows for flexibility in the middle, “translation layer” of an application

Question 3: What kinds of feedback does the tool provide?

  • Feedback is the voice of the interface, its personality: is it finicky or reliable? Annoying or supportive?
  • Tagatron: missing feedback created distrust
  • Pen Pairing: too much feedback caused confusion (error messages, validation handshake)
  • Our design and production methodology: working code always wins; learning through doing; build small, working prototypes and continually iterate.
  • A more anticipatory form of design (like design thinking) could have helped us find answers to this question sooner

Question 4: Is it an appropriate time for experimentation?

  • Tagatron’s v1 included technology that was relatively unknown to us, like MongoDB and Node.js. We should have used more familiar technology or done small-scale tests before implementing a project of this scale; the choice severely hindered our ability to accommodate feedback
  • Other tools we built that involved experimental tech were only successful because their scale and userbase were far smaller (e.g., Label Writer)

The result of everything: bridges, lines of communication opened

  • Building administrative tools for staff created cross-departmental conversation—in taking on the role of building and maintaining Tagatron and the Pen Pairing Station, the Digital & Emerging Media team engaged users from departments across the museum and observed closely how the tools fit into staff members’ larger roles

Curating Exhibition Video for Digital Platforms

First, let me begin this post with a hearty “hello”! This is my first Labs blog post, though I’ve been on board with the Digital and Emerging Media team since July 2015 as Media Technologist. Day-to-day I participate in much of the Labs activity that you’ve read about here: maintaining and improving our website; looking for ways to enhance visitor experience; and expanding the meaningful implementation of technology at Cooper Hewitt. In this post I will focus on the slice of my work that pertains to video content and exhibitions.

Detail: Brochure, Memphis (Condominiums): Portfolio, 1985

The topic of exhibition video is fresh in my mind since we are just off the installation of Beauty—Cooper Hewitt Design Triennial. This is a multi-floor exhibit that contains twenty-one videos hand-picked or commissioned by the exhibition curators. My part in the exhibition workflow is to format, brand, caption and quality-check videos, ushering them through a production flow that results in their display in the galleries and distribution online. Along with the rest of the Labs team, I also advise on the presentation and installation of videos and interactive experiences in exhibitions and on the web, and help steer the integration of Pen functionality with exhibition content. This post gathers some of my video-minded observations collected on the road to installing Beauty.

The Beauty curators and the Labs team came together when content for the show began to arrive—both loans of physical objects and digital file transfers. At this time, my video workflow shifted into high gear, and I began to really see the landscape of digital content planned for the exhibit. Videos in Beauty fall into roughly two categories: those that are the primary highlighted object on display and those that supplement the display of another object. Sam Brenner recently posted about reformatting our web presentation of video content when it stands in as primary collection object and has a medium that is “video, animation or other[wise] screen-based.” This change was a result of thinking through the flag we raised earlier for the curators around linking collection records to tags, i.e. “what visitors get when they collect works with the Pen.” As has been mentioned before on this blog, the relationship of collecting points (NFC tags) to collection objects does not need to be one-to-one; Beauty expanded our exploration of the tags-to-collection records relationship in a few interesting ways.

When visitors collect at the Neri Oxman tag they save a cluster of collections database records, including 12 glass vessels and a video.

In the Beauty exhibition, collecting points are presented uniformly: one tag in each object label. Additionally, tags positioned beside wall text panels allow visitors to save chunks of written exhibition content. The curatorial format of the Triennial exhibition organized around designers (sometimes with multiple works on display), however, encouraged us to think carefully about the tag-collecting relationship. I was impressed to see the curators curating the Pen experience, including notes to me along with each video, like “the works in the show are jewelry pieces; the video will supplement,” “video is primary object; digital prints supplement,” and “video clips sequenced together for display but each video is separately collectible.” They were really thinking about the user flow of the Pen and the post-visit experience, extending their role in organizing and presenting information to all aspects of the museum experience.

Another first in the Beauty exhibition is the video content created specifically for interactive tables. With the curators’ encouragement, the designers featured in the exhibition considered the tables as a unique environment to present bonus content. For example, Olivier van Herpt provided a video of his 3D printer at work on the ceramic vessels on display in the exhibition. It was interesting to see the possibilities that the tables and post-visit outlets opened up—for one thing, the quality standards can be more relaxed for videos shown outside the monitors in the galleries. Also notable is the fact that the Beauty curators selected behind-the-scenes-type videos for tables and post-visit, suggesting that these outlets make room for content that might not typically make it onto gallery walls.

The video “3D Printed Ceramic Process” by Olivier van Herpt is an example of behind-the-scenes video content that was made for tables and website display only.

The practical fallout on my end was that these supplemental videos added to the already video-heavy exhibition, putting increased pressure on the video workflow. In turn, this revealed a major lack of optimization. The diagram of my video workflow shows, for example, several repeated instances of formatting, captioning and exporting. Multiplied by twenty-one, each of these redundant procedures takes up significant time. Application of branding is probably the biggest time-hog in the workflow—all of it is done manually, and locking in the information of maker, credit line and video title with curators and designers is a substantial task. It’s funny: the amount of video content in Cooper Hewitt exhibitions is increasing and it’s receiving increased attention from curators, but the supplementary videos in the galleries are not treated as first-order exhibition objects, so they don’t go through as rigorous a documentation process as other works in the show. Because of this, video-specific information required for my workflow remained in flux until the very last minute. Even the video content itself continued to shift as designers pushed past my deadlines to request more time to make changes and additions. In truth, the deadlines related to in-gallery video content are much stricter than those for table/post-visit-only content, because gallery videos require hardware installation. The environment of the tables and website affords continual change, but deadlines act as benchmarks to keep those interfaces stocked with new content that stays in sync with objects in the physical exhibition.

[Diagram: exhibition video workflow]

The workflow that videos follow to get to gallery screens, interactive tables and the collections website.

I maintain a spreadsheet to collect video information and keep order over my exhibition video workflow. These are the column headings.

All the steps and data points that need to be checked off in my exhibition video workflow.

By the exhibition opening, I had all video information confirmed, and all branding and formatting completed. The running spreadsheet I keep as a video to-do list was filled with check-marks and green (good-to-go) highlighting. I had created media records in TMS and connected them to exhibition videos uploaded to YouTube; this allows a script to pull in the embed code so videos appear within the YouTube player on the collections site. I also linked the media records to other database entries so that they would show up on the collections site in relation to other objects and people. For example, since I linked the “Afreaks Process” video record to all of the records for the beaded Afreak objects, the video appears on each object page, like the one for The Haas Brothers and Haas Sisters’ “Evelyn”. Related videos like this one (that are not the primary object) are configured to appear at the bottom of an object page with the language, “We have 1 video that features Sculpture, Evelyn, from the Afreaks series, 2015.” Since the video has its own record in the database, there is also a corresponding “video page” for the same clip that presents the video at the top with related objects in a grid view below. I also connected object records to database entries for people, ensuring that visitors who click on a name find videos among the list of associated objects.
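Conceptually, this linking work builds a many-to-many relationship between media records and object (or person) records. A sketch of how an object page might gather its related videos, with assumed data shapes rather than the actual TMS schema:

```typescript
interface MediaRecord {
  id: number;
  title: string;
  youtubeId: string;          // used to build the YouTube embed on the collections site
  relatedObjectIds: number[]; // links created by hand in TMS
}

// Collect the videos linked to an object so its page can render them at
// the bottom with the "We have N videos that feature..." copy.
function videosForObject(objectId: number, media: MediaRecord[]): MediaRecord[] {
  return media.filter((m) => m.relatedObjectIds.includes(objectId));
}
```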

The webpage for the Haas Brothers record includes a video among the related objects.

It is highly gratifying to seed videos into this web of database connections. The content is so rich and so interesting that it really enhances the texture of the collections site and of exhibitions. Cooper Hewitt curators demonstrated their appreciation for the value of video by honoring video works as primary objects on display. They also utilized video in a demonstrative way to enhance the presentation of highlighted works. Beauty opened the doors for curating video works on interactive tables, and grouping videos in with clusters of data linked to collecting points (a.k.a. tags). I’m pleased with the overall highly integrated and considered take on video content in the latest exhibition, and I hope we can push the integration even further as the curators become increasingly invested in adapting their practice to the extended exhibition platforms we have in place, like the tables, tags and the web.

Iterating the “Post-Visit Experience”

The final phase of a visitor’s experience at Cooper Hewitt, after they’ve left the museum, is what we call the “post-visit experience.” Introduced along with the Pen in March, it is a personalized website that displays a visitor’s interactions with the museum as a grid of images, including objects they collected from the galleries and wallpapers they created in the Immersion Room.

Our focus leading up to its launch was just to have it working, and as such, some of the details of a visitor’s experience with the application were overlooked. As a result of this, our theoretically simple interface became cluttered with extra buttons, calls to action and explanatory texts. In this post, I’ll present the experience as it existed before and describe some of the steps we took in the past month to iterate on the post-visit experience.

The “Before” Experience

[Image: the original ticket design, printed with the short URL]

First, let’s walk through the experience as it existed up until this week. The post-visit begins when a visitor accesses their personal website, which they could do by going to a URL on their physical ticket. On the ticket above, that URL is https://cprhw.tt/v/brr6. The domain is our “URL shortener,” https://cprhw.tt, followed by /v/ to indicate a visit (the shortener also supports /o/ for objects or /p/ for people), followed by a four or five-character alphanumeric code which we call the “shortcode.” If a visitor recognized this whole thing as a URL, they would get access to their visit. If a visitor didn’t recognize this as a URL, they would hopefully go to our homepage and find the link that took them to the “visit shortcode page” seen below.
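For illustration, here is how such a short URL might be broken apart, following the structure described above (a sketch: the validation rules are guesses on my part, not the production code):

```typescript
// Parse a cprhw.tt short URL like https://cprhw.tt/v/brr6 into its parts.
// "v" = visit, "o" = object, "p" = person; the shortcode is the four- or
// five-character alphanumeric code described above.
function parseShortUrl(url: string): { kind: "visit" | "object" | "person"; shortcode: string } | null {
  const match = url.match(/^https:\/\/cprhw\.tt\/([vop])\/([a-z0-9]{4,5})$/i);
  if (!match) return null;
  const kinds = { v: "visit", o: "object", p: "person" } as const;
  return { kind: kinds[match[1].toLowerCase() as "v" | "o" | "p"], shortcode: match[2] };
}
```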

[Screenshot: the visit shortcode entry page]

From here, they would enter their shortcode and get their visit. A visit page contains a grid of all the images of items you collected and created during your visit to the museum, which looked like this:

[Screenshot: the visit page, a grid of collected and created items]

You will notice the unwieldy CTA (call to action). It’s big, it’s ugly and it gets in the way of what we’re all here to do, but this was our first opportunity to present the concept of “visit claiming” to the visitor. Visit claiming is the idea that your visit is initially anonymous, but you can create an account and claim it as your own. Let’s say the visitor engages the CTA and claims their account. They are taken through a log in / sign up flow and return to their visit page, which has now been linked to their account.

After claiming a visit, the visitor has access to some new functionality. At the top of the page are the token share tools. Under every image now live privacy controls, in the form of a repeated paragraph. At the very bottom of the page are buttons to make everything public, export the visit and delete the visit.

[Screenshots: the share tools and privacy controls on a claimed visit]

What to Work On?

The goal for this work was to redesign the post-visit experience to put the visitor’s experience above all of our functional and technical requirements. At this point, we were all familiar with the many complex details along the way, so we met to discuss the end-to-end experience. Taking a step back and thinking in terms of expectations — both ours and the visitors’ — helped us rebuild the experience from the ground up. Feedback we had collected both anecdotally and through our online feedback form was helpful in this process. Once we had an idea of a visitor’s overall expectations of the post-visit experience, we were able to turn that into actionable tasks.

Step 1: Redesigning Visit Retrieval

[Flow chart: the visit retrieval process]

The first pain point we identified was the beginning of the experience: visit retrieval. Katie, our former Labs technologist, has written before about some of the ways we’ve tried to get visitors quickly up to speed on “how everything works” — the idea that you get a pen, you use the pen to collect objects, you go to a website and you get your objects. Her work focused on informational postcards and the introductory script used by the visitor experience staff. In the case of the visit retrieval flow chart above, this helped reduce the number of “no” answers to the two questions: “do I have my ticket?” and “do I recognize the URL on my ticket?”

That second question — “do I recognize the URL on my ticket?” — is not a question we would have expected visitors to even be asking. To us, the no-vowel/non-standard-TLD “URL-shortener”-style URL, a la bit.ly or t.co, has an instantly recognizable purpose. Through visitor feedback, we learned that for some visitors, it understandably looked more like an internal tracking number than the actual website we wanted people to go visit.

For these visitors, the best-case scenario is that they would go to our main website, where we provide links, both in the header and on the homepage, to the “visit retrieval” page. Here it is again, for reference:

[Screenshot: the visit shortcode entry page]

Since we expected users to go straight to the URL on their ticket, this page was more of a backup and as such hadn’t received a lot of attention. As a consequence, there were a few things on the page that confused users. First, the confirm button’s CTA is “fetch,” which is different from the “retrieve” used in the header and the “access” used on the ticket. Second, the placeholder text in the input field is cut off. Third, the introduction of the word “shortcode,” which we’ve always used internally to refer to a visitor’s visit ID, had no meaning in the visitor’s mind. We tried to explain it by saying that it means “the alphanumeric code after the final slash on your ticket,” which is a useless jumble of words.

Our approach to this was to eliminate the “do I recognize the URL?” question and its resulting outcomes (the dotted box in the flow chart above) and replace it with self-evident instructions. To that end, we redesigned both the visit retrieval page and the ticket itself. Here’s the new ticket:

[Image: the redesigned ticket, with www.cooperhewitt.org/you and a separate code]

We’ve provided a much more human-friendly URL in “www.cooperhewitt.org/you” and established the shortcode (now just called “code”) as a separate entity. Regardless of whether or not visitors were confused by the short URL, the language on the new ticket fits with our desire to use natural language wherever we can to avoid having the digital experience feel unnecessarily technical.

The visit retrieval page (which is accessed via the URL on the ticket) also got an update. The code entry field got much bigger and we tucked a small FAQ below it. We also standardized on the word “retrieve” as the imperative.

[Screenshots: the redesigned visit retrieval page, with a larger code entry field and a small FAQ]

Step 2: Redesigning the Visit Page

The next pain point we identified was the visit page itself, and specifically how we used it to explain claiming and privacy. Here’s the page again for reference:

[Screenshot: the visit page]

The problem concerning how we explained claiming is fairly straightforward. The visual design of the CTA is obtrusive, but it was our only opportunity to explain the benefits of claiming a visit. We sought to find a less obtrusive, more intuitive way to explain why claiming a visit is an option our visitors might want to take advantage of.

The problem concerning how we explained privacy is the more complicated of the two issues. It specifically regards the concept of the “anonymous visit.” Visits aren’t connected with visitors’ identities in any way except in that only they know the code. We do this because we need a way to uniquely identify each museum visit, and the shortcode keeps that unique ID at a reasonable length. We also want to allow visitors to have an anonymous post-visit experience, meaning they can see everything they did in the museum without having to sign up for an account. But we don’t expect everyone to remember their shortcode or hold on to their ticket forever, so we allow visitors to create an account on our website and “claim” their visits. A claimed visit is linked to a visitor’s email address, so now they can throw out their ticket and forget their shortcode. Over time, we also hope that visitors will claim multiple visits with their accounts so they get a complete history of their relationship with our museum.

The problem this presents is that we have to treat every visitor who has a code as if they are the owner of that visit. This manifests itself in a specific (but important) use case. If a visitor shares their visit on social media while it is unclaimed, then any person who accesses the visit will also have the option to sign up and claim it as their own.

Further compounding this issue is the fact that we automatically make claimed visits private. We do this because in claiming an account, the visitor is effectively de-anonymizing it. Claimed visits are linked to real-world identities (in the form of a username) and for that reason we make it an opt-in choice to go public with that connection.
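Stripped of the UI, the claiming rules described in the last few paragraphs reduce to a small piece of business logic. A sketch, with assumed field names rather than our actual schema:

```typescript
interface Visit {
  shortcode: string;          // the only key an anonymous visitor holds
  ownerEmail: string | null;  // null until the visit is claimed
  isPublic: boolean;
}

// Claiming links a visit to an account; because that de-anonymizes the
// visit, it is automatically made private, and going public is opt-in.
function claimVisit(visit: Visit, accountEmail: string): Visit {
  return { ...visit, ownerEmail: accountEmail, isPublic: false };
}
```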

The goal of redesigning this page, then, was to allow the visitor to navigate the complex business logic without having to fully comprehend it. In talking this through we concluded that by consolidating the visit controls (which previously only appeared on the claimed visit page) and adding them (greyed out) to the unclaimed visit we could solve many of our problems. Why have a paragraph of explanatory text about why you should claim your visit when we could just show you the control panel that claimed visits have access to? A control panel presents the functionality plainly and concisely, without confusing language.

This also allowed us to establish a language of icons that we could reuse elsewhere to replace explanatory sentences. We also agreed to standardize on the word “claim” as the action that we wanted visitors to engage in, as it more effectively conveys the idea that other people have visits as well but we need to know that this one was yours.

Best of all, it allowed us to build off the work we’d done earlier this year which had the explicit purpose of organizing our code and visual hierarchy to better support future iterations.

Here’s what that ended up looking like.

[Screenshot: the redesigned visit page with the consolidated control panel]

Interacting with any of the controls invokes a modal dialog that prompts the user to claim their visit. If they’re not logged in, they are presented with a login / signup prompt. Otherwise they are asked to confirm their desire to claim the visit. Once claimed, the controls function as expected. Like the changes we made to the ticket design, it moves towards a more self-evident experience that requires less information processing time on the visitor’s part.

[Screenshot: the claim prompt modal]

Finally, some bonus gifs to show off the interaction details. The control panel has some rollover action:

[Animation: control panel rollover states]

We use modal dialogs to confirm privacy changing, deleting and claiming actions:

[Animation: modal dialogs confirming privacy changes]

A Brief Bit About Code

Powering the redesign was a complete overhaul of the JavaScript that powers these pages. Specifically, we reorganized it to remove inline code and decouple API logic from DOM logic. In lay terms, this means separating the code that says “when I click this thing…” from the code that says “…perform this action.” When those separate intentions are tightly coupled, the website is less flexible, and doing maintenance work or experimenting with alternate user flows requires more effort than necessary. When separate, reusing code is much more straightforward, which will allow us to tweak and test with ease going forward. Recent frameworks such as Angular or React, which we’ve only just started experimenting with, excel at this. For now, we opted for a slightly modified module pattern, which gives us just enough structure to keep things organized without having to learn a new framework.
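As an illustration of that separation (a sketch in TypeScript, not the site's actual code; the endpoint is hypothetical), the module pattern keeps the “perform this action” API logic apart from the “when I click this thing” DOM logic:

```typescript
// API module: knows how to perform actions, nothing about the page.
const VisitApi = (() => {
  async function setPublic(shortcode: string, isPublic: boolean): Promise<void> {
    await fetch(`/api/visits/${shortcode}/privacy`, { // hypothetical endpoint
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ isPublic }),
    });
  }
  return { setPublic };
})();

// DOM module: knows which clicks trigger which actions, nothing about HTTP.
const VisitControls = (() => {
  function bind(root: HTMLElement, shortcode: string): void {
    root.querySelector(".make-public")?.addEventListener("click", () => {
      VisitApi.setPublic(shortcode, true);
    });
  }
  return { bind };
})();
```

Swapping the DOM module for a different user flow then requires no changes to the API module, which is what makes later experiments cheaper.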

What’s Next?

The changes have only been live for a few days now, so it will take some time to build up enough numbers to see where to focus our future improvements on this part of the site. Specifically, we will be looking at the percentage of visitors who visit their personal website and the percentage of those visitors who create accounts, and we hope to see the rate of change increasing for both of those numbers.

One part of this visitor flow where we hope to do structured A/B tests is with the “sign up” functionality. Right now, when a visitor enters their code and clicks the “Retrieve” button, they are taken immediately to their visit page. We want to test whether adding in a guided “visit claiming” flow, which would optionally hold the user’s hand through the account creation process before they’ve seen their objects, results in more account creations. We’ll wait and collect enough “A” data before rolling the “B” test out.
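A deterministic bucket assignment is one simple way to run such a split. A sketch (an assumption on my part, not a description of any tooling we have in place):

```typescript
// Deterministically assign a visit to the "A" (current flow) or "B"
// (guided claim flow) arm based on its shortcode, so that repeat page
// loads for the same visit always land in the same arm.
function abBucket(shortcode: string): "A" | "B" {
  let hash = 0;
  for (const ch of shortcode) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return hash % 2 === 0 ? "A" : "B";
}
```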

Of course, there are big questions we can start answering as well. How can we enhance the value of a visitor’s personal collection? Right now we have rudimentary note-taking functionality which is severely underutilized. What do we do with that? What about new features? We have all of our object metadata sitting right there waiting to be turned into personalized visualizations. (Speaking of that – we have public API methods for visit data!) Finally, how can we complete the cycle and turn the current “post-visit” into the next “pre-visit” experience?

With each iteration, we strive not only to apply what we’ve learned from visitors, colleagues and peers to our digital ecosystem, but also to improve the ease with which future iterations can be made. We are better able to answer questions both big and small with these iterations, which we hope over time will result in a stronger and more meaningful relationship between Cooper Hewitt and our visitors.

Label Writer: Connecting NFC tags to collection objects

Labels, for better or worse, are central to the museum experience. They provide visitors with access to basic object information (metadata) and a tiny glimpse into the curatorial research for everything in the galleries, helping to place objects in context. At Cooper Hewitt, they are also the gateway through which the Pen‘s “collect” interaction is realized.

In order for the Pen to know which object label you’re trying to collect, every label in the museum contains an NFC tag that is written with the object’s ID. When an object gets added to our database we give it an ID, an integer that is unique across our entire online collections database. Our beloved Spanking Cat, for example, has the ID number 18382391. Writing that number to an NFC tag is a simple task, but doing it hundreds of times for every new exhibition we roll out will get tedious very quickly. Thus, Label Writer was born.

Label Writer is an Android app that writes, reads and locks NFC tags based on the object to which the label refers. The staff member can look up the objects that are in a given room of our museum, select one or more of them, and assign them to the label in question. They can search for specific objects in case an object’s location hasn’t been updated yet. They can also write tags for videos and shop items.

From left to right: the back and front of the NFC tags we use in our labels, and pennies for scale.

Planning

After thinking about the app we came up with the following requirements:

  1. When processing a user’s visit, we need to know what type of thing they’ve collected. When the Pen launched, this was either objects or videos, and has since grown to include shop items. To facilitate that process, Label Writer would have to distinguish between types of things and write tags that indicated that.
  2. It would need to write multiple things to a tag, including things of different types. One label might contain three objects. Another label might contain one video and two objects.
  3. It would need to lock tags. Leaving the tags unlocked would enable anyone with an NFC-enabled smartphone to walk around the galleries and overwrite our tags. Locking the tags prevents this.
  4. It would need to read tags and display images of what’s on a tag. This is so we can double-check what is on a tag before we lock it. We only print one copy of every label – sometimes through an offsite service – and the wall labels (as opposed to the rail labels) have their NFC tags glued in and cannot be replaced.
  5. Label Writer would have to present objects in a constrained format — having to find the object on a label from our total collection of 210,000 objects every time, through accession number lookup or other traditional searches, would get annoying very quickly.

The NFC tags on our wall labels are built into the label.

The NFC tags on our rail labels are interchangeable.

Production

I decided to build the app in Android because it has great support for NFC and we have plenty of Nexus 9 tablets at the museum for use in the galleries. I started with this boilerplate for an Android read/write NFC app and performed initial tests to make sure we could write a tag that could be read by some of the early Pen prototypes. Once that was established, I began fleshing out the UI of the app and worked on hooking it up to our API.

The API gives us so much to work with on the app’s frontend. Being able to display an object’s image is a much better way to confirm that a label is written correctly than by comparing IDs or accession numbers. The API also lets us see all of the objects in a given room of the museum, which means that the user can write labels in an ordered fashion. When the labels arrive from the printer they are grouped by room, and often we will not write the tags until they have been installed in the galleries, so “by room” is a convenient way to organize objects on the frontend. It also gives us easy access to videos and shop items, and allows the app to easily be expanded to write labels for more things from our collections database. Since our collections site alpha, we have stressed the importance of an easily-accessed permanent ID for everything: people, objects, videos, exhibitions, locations etc., and now with the Pen we can prepare labels that allow users to collect any one of those things during their visit to the museum.

[Screenshot: the app’s opening screen]

When I took all these screenshots, the app was called “Tag Writer”, as in “NFC Tag Writer.” But “Label Writer” sounds better.

When the app is opened, the user is prompted with a few ways to group objects. Since we added videos and shop items to the app, this intro screen has grown a bit so it will probably get a redesign when we next expand its capabilities. But for now, users have a few options here:

  1. They can select a room from a dropdown menu (here’s a list of all of our rooms)
  2. They can enter an individual accession number
  3. They can enter a video’s ID
  4. They can search the shop (see Aaron’s recent post about adding shop items to our online collection)

When one of these options is used, the relevant objects appear on the screen. For example, selecting Room 106 brings up some of the posters from our current How Posters Work exhibition. Being able to display the images of the posters makes it much easier for the user to confirm that they are connecting the dots accurately — accession numbers and object IDs are easily confused (not to mention boring to look at).

[Screenshot: posters from Room 106 displayed in the app]

The user can then tap one or more objects to add them to a label. In the screenshot below, you can see that two objects have been selected and the orange bar at the bottom has formatted them to be written to a tag — in this case, chsdm:o:68730187;18708395. The way that things get written to tags follows a format we agreed upon early in the Pen design process, as various developers would be building applications that relied on reading and parsing a Pen’s content. In brief, chsdm is a namespace for our museum that is not particularly necessary but serves as a header for what follows. o stands for object and then the ID (or semicolon-delimited IDs) that follow are the IDs of objects. The letter can change: v for video, s for shop, and on and on for whatever other things we might eventually write to tags. We add a pipe character (|) to delimit multiple types of things on a tag, so a tag with an object and a video might look like chsdm:o:18714653|chsdm:v:68764195. But all of this is handled by the app based on what the user selects in the interface.
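Encoding and parsing that format is simple string work. A sketch of both directions, following the description above (helper names are mine, and this is not the app's actual Android code):

```typescript
// Encode collectible things into the agreed tag format, e.g. the map
// {o: [68730187, 18708395], v: [68764195]} becomes
// "chsdm:o:68730187;18708395|chsdm:v:68764195".
type ThingType = "o" | "v" | "s"; // object, video, shop item

function encodeTag(things: Map<ThingType, number[]>): string {
  return [...things.entries()]
    .filter(([, ids]) => ids.length > 0)
    .map(([type, ids]) => `chsdm:${type}:${ids.join(";")}`)
    .join("|");
}

function decodeTag(payload: string): Map<ThingType, number[]> {
  const out = new Map<ThingType, number[]>();
  for (const chunk of payload.split("|")) {
    const [, type, ids] = chunk.split(":"); // drop the "chsdm" namespace
    out.set(type as ThingType, ids.split(";").map(Number));
  }
  return out;
}
```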

[Screenshot: two objects selected, with the formatted tag data in the orange bar]

Next, a user can hold the tablet up to the object label to write the NFC tag. When the tag is written, the orange bar at the bottom turns green to let the user know it went okay. Later, using the “Read Tags” functionality of the app, the user can confirm the tag’s contents by reading the NFC tag. The app parses the tag and loads the things it thinks the tag refers to. When this is confirmed, the user can lock the tag to make sure nobody overwrites it.

[Screenshot: reading a tag to confirm its contents]

Here’s everything, from start to finish, using the object-lookup-by-accession-number functionality.

Next Steps

I mentioned that the home screen of this app will get a redesign as we allow more types of things to be written to tags. The user experience of the tag writing process also needs a little finessing — a bug in how success messages get displayed has resulted in a few tags being written with bunk data. Fortunately that is caught in the “read” phase of the workflow, but it should be corrected earlier.

Overall, as we keep swapping out exhibitions, Label Writer will get more and more use. We will use these opportunities to collect feedback from the app’s users and make changes to the app accordingly.

Collect all the things – shoeboxes, shop items and the Pen

[Screenshot: “collect” icons on the collections website]

You can now collect any object in the collection, or on display, from the collections website itself. Just like in the galleries, there is a small “collect” icon on the top right-hand side of every object page on the collections website. It’s not just individual object pages but all the object list pages, too. So many “collect” icons!

[Image: the three states of the collect icon]

  Objects that haven’t been collected yet have a grey icon.

  Objects that have been collected in the galleries, as part of a visit to the museum, have a pink icon.

  Objects that have been collected on the collections website have an orange icon.

Simply click the grey icon to collect an object or click one of the orange or pink icons to remove or un-collect that object.

That’s it!
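The icon color is a pure function of how (and whether) an object was collected. A sketch of the mapping just described:

```typescript
type CollectState = "not-collected" | "collected-in-gallery" | "collected-online";

// Grey: not yet collected; pink: collected with the Pen during a museum
// visit; orange: collected here on the collections website.
function iconColor(state: CollectState): "grey" | "pink" | "orange" {
  switch (state) {
    case "not-collected": return "grey";
    case "collected-in-gallery": return "pink";
    case "collected-online": return "orange";
  }
}
```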

Just like visit items, things you collect on the website have a permanent URL that can be made public to share with other people and can be given a bespoke title or description. Objects that you collect on the collections website live in something we’re calling the “shoebox”.

You can get to your shoebox by visiting https://collection.cooperhewitt.org/users/YOUR-USERNAME/shoebox or, if you’re already logged in to your Cooper Hewitt account, by visiting https://collection.cooperhewitt.org/you/shoebox/.

There is also a handy link in the Your stuff menu, located at the top-left of every page on the collections website.

The shoebox is the set of all the objects you’ve collected (or created) on the website or during your visits to the museum. Although visits and visit items overlap with things in your shoebox, we still treat them differently: you need to be logged in to your Cooper Hewitt account to add things to your shoebox, but a visit to the museum can be entirely anonymous if a visitor so chooses.

The default view for the shoebox is to display everything together in reverse-chronological order, but you can filter the view to show only things collected online or only things collected during a visit. You can also see the set of all the objects you’ve made public or private.
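
A small sketch of those views, with assumed field names rather than the site’s actual schema:

```python
# Sketch of the shoebox views: everything newest-first by default, with
# optional filters for where an item was collected and its visibility.

def shoebox_view(items, source=None, public=None):
    """items are dicts like {"object_id": 18714653, "source": "online"
    or "visit", "public": True, "collected_at": "2015-05-27T18:15:22"}."""
    selected = [
        item for item in items
        if (source is None or item["source"] == source)
        and (public is None or item["public"] == public)
    ]
    # Default view: reverse-chronological (ISO timestamps sort correctly).
    return sorted(selected, key=lambda item: item["collected_at"],
                  reverse=True)
```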

Logged-out view of the shoebox.

Logged-in view of the shoebox.

But it’s not just objects, either. You can already collect videos during your museum visit, so those are included too. Ultimately the only limit to what you might collect with the Pen is time-and-typing. Things we’re thinking about making collect-able include entire exhibitions, the introductory texts on the wall for an exhibition, people, and individual rooms in the Mansion.

Museum retail

We’ve started this process by allowing you to collect things in the museum Shop.

By “things in the Shop” we mean all the things that have ever been sold in the Shop over the years. And by “all the things” we mean almost all the things. There is some technical hoop-jumping related to inventory management systems, which is why we don’t have everything yet, but we’ll get there in time.

We are a capital-D design museum with a capital-D design shop, and many of the things that have been available in the Shop have gone on to become part of our permanent collection, so it only makes sense to give them a home on the collections website. In fact, MoMA already does something similar with its “find related products in the MoMA Store” feature, though ours is a bit different.

You can see for yourself at https://collection.cooperhewitt.org/shop.

The /shop section is divided into two parts: Brands and Items (and all the items for a given brand, of course). There isn’t a whole lot of extra information beyond titles and links to the SHOP Cooper Hewitt website for those items that are currently in stock, but it’s a start. Like the rest of the collections website, we’ve started from the idea that by providing permanent, stable URLs that people can have confidence in, we create something that can be improved on over time.

Shop items and brands don’t get updated as regularly as we’d like yet. We are still working through the fiddly details of bridging our systems with the Shop’s ecommerce and POS system, and some things still need to be done by hand. We’ve been able to get this far, though, so we expect things will only get better.

You might be wondering…

You might be reading this and starting to wonder, Hmmm… does that mean I can also collect things in the Shop as I walk around the museum with the Pen? The answer is… Yes!

As of this writing there are only one or two items that can be collected with the Pen, because the Shop staff are still getting familiar with the tools and thinking about how making collect-able labels changes their day-to-day workflow. The obvious future of this might be the infamous ‘wedding register’, but we believe that many museum visitors would actually like to bookmark objects to possibly buy later, or just to remember as part of their overall visit to the ‘museum campus’.

Practically, what that has meant is some changes to Sam’s “tag writer” application (the subject of a future blog post) to fetch shop items via our API, and then letting the Shop folks decide what they want to tag and when they want to do it.
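
For the curious, fetching shop items probably looks something like the sketch below. The request style (a method parameter against the /rest/ endpoint) follows the public collections API, but the method name cooperhewitt.shop.getItems is my assumption for illustration and may not match the real one.

```python
# Hedged sketch of pulling shop items from the collections API.
import requests

API_ENDPOINT = "https://api.collection.cooperhewitt.org/rest/"

def get_shop_items(access_token, page=1):
    response = requests.get(API_ENDPOINT, params={
        "method": "cooperhewitt.shop.getItems",  # assumed method name
        "access_token": access_token,
        "page": page,
    })
    response.raise_for_status()
    return response.json()
```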

There has been a whole lot of change here over the course of the last three years, and it’s really important that we let the various parts of the museum warm up to the possibilities the Pen affords at their own pace, with not only a minimum of fuss but plenty of wiggle-room for experimentation.

In the meantime we hope that you enjoy collecting at least more, if not all, of the things that make up the museum.

Happy Staff = Happy Visitors: Improving Back-of-House Interfaces

“You have to make the back of the fence that people won’t see look just as beautiful as the front, just like a great carpenter would make the back of a chest of drawers … Even though others won’t see it, you will know it’s there, and that will make you more proud of your design.”

—Steve Jobs

In my last post I talked about improvements to online ticketing based on observations made in the first weeks after launching the Pen.

Today’s post is about an important internal tool: the registration station, whose job is to pair a new ticket with a new pen. Though visitors will never see this interface, it’s really important that it be simple, easy, clear, and fast. It is also critical that staff are able to understand the feedback from this app, because if a pen is incorrectly paired with a ticket, the visitor’s data (collections and creations) will be lost.

Like a Steve-Jobs-approved iPod or a Van Cleef & Arpels ruby brooch, the “inside” of our system should be as carefully and thoughtfully designed as the outside.

Version 1 of the app was functional but cluttered, with too much text, and no clear point of focus for the eye.

Because the first version of the app was built to be procedurally functional, its visual design was given little consideration. However, the application as a whole was designed so that the user interface – running in a web browser – was completely separate from the underlying pen pairing functionality, which makes updating the front-end a relatively straightforward task.

Also, we were getting a few complaints from visitors who returned home eager to see their visit diary, and were disappointed to see that their custom URL contained no data. We suspected this could have been a result of the poor UI at ticketing.

With this in mind, I sat behind the desk to observe our staff in action with real customers. I did about three sessions, for about ten minutes each, sometimes during heavy visitor traffic and sometimes during light traffic. Here’s what I kept an eye on while observing:

  • How many actions are required per transaction? Is there any way to minimize the number of “clicks” (in this case, “taps”) required from staff?
  • Is the visual feedback clear enough to be understood with only partial attention? Or do typography, colors, and composition require an operator’s full attention to understand what’s going on?
  • What extraneous information can we minimize or omit?
  • What’s the critical information we should enlarge or emphasize?

After observing, I tried the app myself, which was actually more edifying than watching. Kathleen, our head of Visitor Services, had a batch of about 30 Pens to pair for a group, and I offered to help. I was very slow with the app and wasn’t really of much help, moving through my batch of pens at about half the speed of Kathleen’s staff.

Some readers may be thinking that since the desk staff had adjusted to a less-than-excellent visual design and were already moving pretty fast with it, this could be a reason not to improve it. But as designers, we should always be helping and improving; nobody should have to live with a crappy interface, even if they’ve adjusted to it! And there will be new staff, who will get to skip the adjustment process and start on the right foot with a better-designed tool.

My struggle to use the app was fuel for its redesign, which you can see germinating in my drawings below.

After several rounds of paper sketches like these, the desk reps and I decided on this sequence as the starting point for version two of the app.

These were the last in a series of drawings that I worked through with the desk staff, so our first few “iterative prototypes” were created and improved upon in a matter of minutes, since they were simply scribbled on paper. We arrived at the stopping point above, which Sam turned into working code.

Here’s what’s new in version 2:

  • The most important information, the alphanumeric shortcode, is emphasized. The font is about six or seven times bigger, with exaggerated spacing and lots of padding (white space) on all sides for increased legibility, or as I like to call it, “glanceability.” This helps make sure that the front-of-house staff pair the correct pen with the correct ticket.
  • Fewer words. For example, “Check Out Pen With This Shortcode” changed to “GO”, “Pen has been successfully checked out and written with shortcode ABCD” changed to “Success,” etc. This makes it easier for staff to know, quickly, that the process has worked and they can move on to the next ticket/pen/customer.

“I didn’t have time to write a short letter, so I wrote a long one instead.”

—Mark Twain

  • More accurate words. Our team uses a different vernacular from the people working at the desk. This is normal, since we don’t work together often, and like any neighboring tribes, we’ve developed subtly different words for different things. Since this app is used by desk staff, I wanted it to reflect their language, not ours. For example, “Pair” is what they call “check-out” and “Return” is what they call “check-in.”
  • Better visual hierarchy: The original app had many competing horizontal bands of content, with no clear visual clue as to which band needed the operator’s attention at any given time. We used white space, color (green/yellow/red for go/wait/stop), and re-arranging of elements (less-used features to the bottom, more-used features to the top) to better direct the eye and make it clear to the user what she ought to be looking at.
  • Simple animations to help the user understand when the app is “working” and they should just wait.

Still to come are added features (bulk pairing, maintenance mode) and any ideas the desk reps might develop after a couple of weeks of using the new version.

Imagine how difficult this process would have been if the museum had outsourced all of its design and programming work, or if it were all encased in a proprietary system.

From concept to video prototype: the early form of the Pen

It was in late 2012 that the concept for the Pen was pitched to the museum by Local Projects, working then as subcontractors to Diller Scofidio + Renfro. The concept portrayed the Pen as an alternative to a mobile experience and, importantly, as a symbol meant to activate visitors.

Original concept for the Pen by Local Projects with Diller Scofidio + Renfro, late 2012.

“Design Fiction is making things that tell stories. It’s like science-fiction in that the stories bring into focus certain matters-of-concern, such as how life is lived, questioning how technology is used and its implications, speculating about the course of events; all of the unique abilities of science-fiction to incite imagination-filling conversations about alternative futures.” (Julian Bleecker, 2009)

In late 2013, Hanne Delodder and our media technologist, Katie Shelly, were tasked with making a short instructional video – a piece of ‘internal design fiction’ to help us expand the context of the Pen, beyond just the technology. (Hanne was spending three weeks observing work in the Labs courtesy of the Belgian Government as part of her professional development at Het Huis van Alijn, a history museum in Ghent.)

The video used the vWand from Sistelnetworks, an existing product that became the starting point from which the final Pen developed. At the time of production, the museum had not yet begun the final development path that engaged Sistelnetworks, GE, Makesimply, Tellart and Undercurrent, who would help augment and transform the vWand into the new product we now have.

The brief for the video was simply to create an instructional video of the kind that the museum might play in the Great Hall and on our website to show visitors how they might use the Pen. As it turned out, the video ended up being a hugely valuable tool in the ‘socialisation’ of the Pen as the entire museum, from curators to security staff, started to get its head around the what/how/when, well before we had any working prototypes.

It ended up informing our design sprints with GE and Sistelnetworks, which resulted in the form, operation, and interaction design of the Pen, as well as a ‘stewardship’ sprint with SVA’s Products of Design, where we worked through operational issues around distribution and return.

The video was also the starting point for the instructional video we ended up having produced that now plays online and in the Great Hall. You will notice that the emphasis in the final video has changed dramatically – focussing on collecting inside the museum and the importance of the visitor’s ticket (in contrast to the public collection of email addresses in the original).

The digital experience at Cooper Hewitt is supported by Bloomberg Philanthropies.