
Curating Exhibition Video for Digital Platforms

First, let me begin this post with a hearty “hello”! This is my first Labs blog post, though I’ve been on board with the Digital and Emerging Media team since July 2015 as Media Technologist. Day-to-day I participate in much of the Labs activity that you’ve read about here: maintaining and improving our website; looking for ways to enhance visitor experience; and expanding the meaningful implementation of technology at Cooper Hewitt. In this post I will focus on the slice of my work that pertains to video content and exhibitions.

Detail: Brochure, Memphis (Condominiums): Portfolio, 1985

The topic of exhibition video is fresh in my mind since we are just off the installation of Beauty—Cooper Hewitt Design Triennial. This is a multi-floor exhibit that contains twenty-one videos hand-picked or commissioned by the exhibition curators. My part in the exhibition workflow is to format, brand, caption and quality-check videos, ushering them through a production flow that results in their display in the galleries and distribution online. Along with the rest of the Labs team, I also advise on the presentation and installation of videos and interactive experiences in exhibitions and on the web, and help steer the integration of Pen functionality with exhibition content. This post gathers some of my video-minded observations collected on the road to installing Beauty.

The Beauty curators and the Labs team came together when content for the show began to arrive—both loans of physical objects and digital file transfers. At this time, my video workflow shifted into high gear, and I began to really see the landscape of digital content planned for the exhibit. Videos in Beauty fall into roughly two categories: those that are the primary highlighted object on display and those that supplement the display of another object. Sam Brenner recently posted about reformatting our web presentation of video content when it stands in as primary collection object and has a medium that is “video, animation or other[wise] screen-based.” This change was a result of thinking through the flag we raised earlier for the curators around linking collection records to tags, i.e. “what visitors get when they collect works with the Pen.” As has been mentioned before on this blog, the relationship of collecting points (NFC tags) to collection objects does not need to be one-to-one; Beauty expanded our exploration of the tags-to-collection records relationship in a few interesting ways.

When visitors collect at the Neri Oxman tag they save a cluster of collections database records, including 12 glass vessels and a video.

In the Beauty exhibition, collecting points are presented uniformly: one tag in each object label. Additionally, tags positioned beside wall text panels allow visitors to save chunks of written exhibition content. The Triennial's curatorial format, organized around designers (sometimes with multiple works on display), encouraged us to think carefully about the tag-collecting relationship. I was impressed to see the curators curating the Pen experience, including notes to me along with each video, like "the works in the show are jewelry pieces; the video will supplement," "video is primary object; digital prints supplement," and "video clips sequenced together for display but each video is separately collectible." They were really thinking about the user flow of the Pen and the post-visit experience, extending their role in organizing and presenting information to all aspects of the museum experience.

Another first in the Beauty exhibition is the video content created specifically for interactive tables. With the curators’ encouragement, the designers featured in the exhibition considered the tables as a unique environment to present bonus content. For example, Olivier van Herpt provided a video of his 3D printer at work on the ceramic vessels on display in the exhibition. It was interesting to see the possibilities that the tables and post-visit outlets opened up—for one thing, the quality standards can be more relaxed for videos shown outside the monitors in the galleries. Also notable is the fact that the Beauty curators selected behind-the-scenes-type videos for tables and post-visit, suggesting that these outlets make room for content that might not typically make it onto gallery walls.

The video "3D Printed Ceramic Process" by Olivier van Herpt is an example of behind-the-scenes video content that was made for tables and website display only.

The practical fallout on my end was that these supplemental videos added to an already video-heavy exhibition, putting increased pressure on the video workflow. In turn, this revealed a major lack of optimization. The diagram of my video workflow shows, for example, several repeated instances of formatting, captioning and exporting; multiplied by twenty-one, each of these redundant procedures takes up significant time. Applying branding is probably the biggest time-hog in the workflow: all of it is done manually, and locking in the maker, credit line and video title with curators and designers is a substantial task.

Interestingly, although video content is increasingly present in Cooper Hewitt exhibitions and receiving more attention from curators, the supplementary videos in the galleries are not treated as first-order exhibition objects, so they don't go through as rigorous a documentation process as other works in the show. Because of this, video-specific information required for my workflow remained in flux until the very last minute. Even the video content itself continued to shift as designers pushed past my deadlines to request more time for changes and additions. In truth, the deadlines for in-gallery video content are much stricter than those for table/post-visit-only content, because gallery videos require hardware installation. The environment of the tables and website affords continual change, but deadlines act as benchmarks that keep those interfaces stocked with new content, in sync with the objects in the physical exhibition.

Exhibition Video Workflow

The workflow that videos follow to get to gallery screens, interactive tables and the collections website.

I maintain a spreadsheet to collect video information and keep order over my exhibition video workflow. These are the column headings.

All the steps and data points that need to be checked off in my exhibition video workflow.

By the exhibition opening, I had all video information confirmed, and all branding and formatting completed. The running spreadsheet I keep as a video to-do list was filled with check-marks and green (good-to-go) highlighting. I had created media records in TMS and connected them to exhibition videos uploaded to YouTube; this allows a script to pull in the embed code so videos appear within the YouTube player on the collections site. I also linked the media records to other database entries so that they would show up on the collections site in relation to other objects and people. For example, since I linked the “Afreaks Process” video record to all of the records for the beaded Afreak objects, the video appears on each object page, like the one for The Haas Brothers and Haas Sisters’ “Evelyn”. Related videos like this one (that are not the primary object) are configured to appear at the bottom of an object page with the language, “We have 1 video that features Sculpture, Evelyn, from the Afreaks series, 2015.” Since the video has its own record in the database, there is also a corresponding “video page” for the same clip that presents the video at the top with related objects in a grid view below. I also connected object records to database entries for people, ensuring that visitors who click on a name find videos among the list of associated objects.

The webpage for the Haas Brothers record includes a video among the related objects.
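
As a concrete illustration of that linkage, here is a minimal sketch, in Python rather than our actual production code, of the kind of script that pulls a YouTube embed onto a page from a media record. The field names and values are assumptions for the example.

```python
# Hypothetical sketch: turn a TMS media record that stores a YouTube ID
# into embed markup for the collections site. Field names are invented
# for illustration; the real records and script may differ.

def youtube_embed(media_record, width=640, height=360):
    """Build iframe markup for a media record's YouTube video."""
    video_id = media_record["youtube_id"]  # assumed field name
    return (
        f'<iframe width="{width}" height="{height}" '
        f'src="https://www.youtube.com/embed/{video_id}" '
        f'frameborder="0" allowfullscreen></iframe>'
    )

# Example with placeholder values:
record = {"media_id": 12345678, "youtube_id": "VIDEO_ID"}
print(youtube_embed(record))
```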

It is highly gratifying to seed videos into this web of database connections. The content is so rich and so interesting that it really enhances the texture of the collections site and of exhibitions. Cooper Hewitt curators demonstrated their appreciation for the value of video by honoring video works as primary objects on display. They also used video in a demonstrative way to enhance the presentation of highlighted works. Beauty opened the doors for curating video works on interactive tables, and for grouping videos into the clusters of data linked to collecting points (aka tags). I'm pleased with the highly integrated, considered treatment of video content in the latest exhibition, and I hope we can push the integration even further as the curators become increasingly invested in adapting their practice to the extended exhibition platforms we have in place: tables, tags and web.

On Exhibitions and Iterations

Since reopening in December 2014, we've found that the approach of an exhibition opening is a big driver of iteration. Preparing an exhibition engages the whole museum and is one of the most coordinated, planned-out things we do, and because of this, new exhibitions push us to improve in a number of ways.

First, new exhibitions can highlight existing gaps or inefficiencies in our systems. Our tagging tool, for example, always sees a round of bug fixes or new features before an exhibition because it coincides with a time when it will see heavy use. Second, exhibitions present us with new technical challenges. Objects in the Heatherwick exhibition, for example, were displayed in the galleries grouped into “projects,” which is also how we wanted users to collect them with their Pens and view them on the website. To accomplish this we had to figure out a way that TMS, our collections management software, could store both the individual objects (for internal purposes) and the grouped projects (which would hold all the public-facing images and text), and figure out how to see that through to the website in a way that made registrars, curators and ourselves comfortable. Finally, a new exhibition can present an opportunity for experimentation. David Adjaye Selects gave us the opportunity to scale up Object Phone, a telephone-based riff on the audio guide, which originally started as a small, rough prototype.
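
For illustration, here is one way the object/project relationship could be modeled. This is a hedged sketch under assumed names, not TMS's actual schema.

```python
# A hedged sketch (not TMS's real schema) of the Heatherwick arrangement:
# individual objects remain intact for internal purposes, while a project
# record carries the public-facing images and text and points at its members.
from dataclasses import dataclass, field

@dataclass
class ObjectRecord:
    object_id: int           # unique ID in the collections database
    accession_number: str    # registrarial identifier, internal-facing

@dataclass
class ProjectRecord:
    project_id: int
    title: str               # public-facing text lives on the project
    description: str
    member_object_ids: list = field(default_factory=list)

# Collecting a project with the Pen would save one public-facing record
# that resolves to all of its member objects on the website.
```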

Last week was the opening of our triennial exhibition “Beauty,” which similarly presented us with a number of technical challenges and opportunities to experiment. In this post I’ll share some of those challenges and the work we did to approach them.

Collecting Exhibition Text

Triennial's wall text, with the collect icon in the lower-right corner

Since the beginning of the pen project we’ve been saying that the Pens don’t just have to collect objects. Aaron and Seb wrote in their paper on the project that “nothing would prevent the museum from allowing visitors to ‘collect’ individual designers, entire exhibitions or even architectural elements from the building itself in the future.” To that end, we’ve experimented with collecting shop items and decided that with the triennial we would allow visitors to collect exhibition text as well.

Exhibition text (in museum argot, the "A-Panel" is the main text at the beginning of an exhibition and "B-Panels" are any additional texts you might find along the way) makes total sense as something that a visitor should be able to remember for later. It explains and contextualizes an exhibition's goals, contents and organization. We've had this text on our collections site since we reopened, but it took a few clicks to get to from a visitor's post-visit website. Now, the text will be right there alongside all of a visitor's objects.

The exhibition text on a post-visit website

The open-ended part of this is what visitors will expect when they collect an “exhibition.” We installed the collection points with no helper text, i.e. it doesn’t say “press here to collect this exhibition’s text.” We think it’s clear that the crosshairs refer to the text, but one of our original ideas was that we could have a way for the visitor to automatically collect every object in the exhibition and I wonder if that might be the implied function of the text tag. We will have to observe and adapt accordingly on that point.

Videos Instead of Images

When we first added videos to our collections site, we found that the fastest way to accomplish what we needed was to use TMS for relating videos to objects but use custom software for the formatting and uploading of the videos. We generate four versions of every video file — subtitled and not subtitled at two resolutions each — which we use in the galleries, on the tables and on the website. One of the weaknesses of this pipeline is that because the videos don’t live in the usual asset repository the way all of our images do, the link between TMS and the actual file’s location is made by nothing more than a “magic string” and a bit of guesswork. This makes it difficult to work with the video records in TMS: users get no preview and it can be difficult to know which video ID refers to which specific video. All of this is something we’ll be taking another look at in the near future, but there is one small chunk of this problem we approached in advance of the Triennial: how to make our website show the video in place of the primary image if it would be more appropriate to do so.
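
To make the "magic string" problem concrete, here is a guess at how file locations might be derived from a video ID and a naming convention. Every name here, from the base URL to the suffix scheme, is hypothetical.

```python
# Hypothetical illustration of the "magic string" linkage: four variants of
# each video (subtitled or not, at two resolutions) whose locations are
# reconstructed from a naming convention rather than looked up in the asset
# repository. The base URL and naming scheme are invented for this sketch.

def video_variants(video_id, base="https://media.example.org/video"):
    return {
        (res, subtitled): "%s/%s_%s%s.mp4"
        % (base, video_id, res, "_subs" if subtitled else "")
        for res in ("1080p", "720p")
        for subtitled in (True, False)
    }

# video_variants(68764195)[("720p", True)]
# -> "https://media.example.org/video/68764195_720p_subs.mp4"
```

If the convention ever changes, nothing complains loudly; the link simply goes stale, which is exactly the weakness described above.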

Here’s an example. Daniel Brown’s On Growth and Form is an animation on display in the Triennial. Before, it would have looked like this — the primary image is a still rendering that has been added in TMS, and the video appears as related content further down the page.

The original object page for On Growth and Form, with the still rendering as the primary image.

What we did is to say if the object is itself a video, animation or other screen-based media and we have an associated video record linked to the object, remove the primary image and put the video there instead. That looks like this:

The updated object page, with the video in place of the primary image.
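
The rule itself is compact enough to express in code. Here is a minimal sketch of the logic described above; our site is not written in Python, and the field names and medium strings are assumptions.

```python
# Minimal sketch of the display rule described above. The field names and
# medium strings are assumptions; only the logic mirrors the post.

SCREEN_BASED_TERMS = ("video", "animation", "screen-based")

def primary_media(obj, related_videos):
    """Pick what to show in the primary slot of an object page."""
    medium = (obj.get("medium") or "").lower()
    if related_videos and any(t in medium for t in SCREEN_BASED_TERMS):
        return {"type": "video", "video": related_videos[0]}
    return {"type": "image", "image": obj.get("primary_image")}
```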

Like all good iterations, this one opened up a bunch of next steps. First, we need to figure out how to add videos into our main digital asset pipeline so that the guesswork can be removed from picking primary videos and a curator or image specialist can select it as “primary” the same way they would do with an image. Next, it brought up an item that’s been on the backburner for a while, which is a better way to display alternate images of an object. Currently, they have their own page, which gets the job done, but it would be nice to present some alternate views on the main object page as well.

Just a Reflektor Sandbox

It's fun!

We had a great opportunity to do some experimentation on our collections site due to the inclusion of Aaron Koblin and Vincent Morisset’s interactive video for Arcade Fire’s Just a Reflektor. The project’s source code is already available online and contains a “sandbox” environment, a tool that demonstrates some of the interactive visual effects created for the music video in a fun, open-ended environment. We were able to quickly adapt the sandbox’s source code to fit on our collections site so that visitors who collect the video with their Pen will be able to explore a more barebones version of the final interactive piece. You can check that out here.

Fully Loaded Labels

When we were working on the Pen prototypes, we tried six different NFC tags before getting to the one that met all of our requirements. We ended up with these NTAG203 tags whose combination of size and antenna design made them work well with our Pens and our wall labels. Their onboard memory of 144 bytes, combined with the system we devised for encoding collection data on them, meant that we could store a maximum of 11 objects on a tag. Of course we didn’t see that ever being a problem… until it was. The labels in the triennial exhibition are grouped by designer, not by object, and in some cases we have 35 objects from a designer on display that all need to be collected with one Pen press. There were two solutions: find tags with more memory (aka “throw more hardware at it”) or figure out a new way to encode the tags using fewer bytes and update the codebase to support both the new and old ways (aka “maintenance nightmare”). Fortunately for us, the NTAG216 series of tags have become more commonly available in the past year, which feature 888 bytes of memory, enough for around 70 objects on a tag. After a few rounds of end-to-end testing (writing the tag, collecting it with a pen and having it show up on the post-visit website), we rolled the new tags out to the galleries for the dozen or so “high capacity” labels.
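
The arithmetic behind those capacity figures can be sanity-checked against the payload format described in the Label Writer post below (a "chsdm:o:" header followed by eight-digit object IDs separated by semicolons). A rough sketch, ignoring NDEF record overhead:

```python
# Back-of-the-envelope capacity check, using the payload format described in
# the Label Writer post below: "chsdm:o:" plus eight-digit object IDs joined
# by semicolons. NDEF record overhead is ignored, so real capacity is lower.

PREFIX = len("chsdm:o:")  # 8 bytes of header
ID_LEN = 8                # object IDs are 8-digit integers
SEP = 1                   # one semicolon between consecutive IDs

def bytes_needed(n_objects):
    return PREFIX + n_objects * ID_LEN + (n_objects - 1) * SEP

print(bytes_needed(11))  # 106 -> fits the NTAG203's 144 bytes, with room for overhead
print(bytes_needed(70))  # 637 -> fits the NTAG216's 888 bytes
```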

The new tag (smaller, on the left) and the old tag (right)

The most interesting iteration that’s been made overall, I think, is how our exhibition workflow has changed over time to accommodate the Pen. With each new exhibition, we take what sneaked up on us the last time and try to anticipate it. As the most recent exhibition, Beauty’s timeline included more digitally-focused milestones from the outset than any other exhibition yet. Not only did this allow us to anticipate the tag capacity issue many months in advance, but it also gave us more time to double check and fix small problems in the days before opening and gave us more time to try new, experimental approaches to the collections website and post-visit experience. We’re all excited to keep this momentum going as work ramps up on the next exhibitions!

 

Label Writer: Connecting NFC tags to collection objects


Labels, for better or worse, are central to the museum experience. They provide visitors with access to basic object information (metadata) and a tiny glimpse into the curatorial research for everything in the galleries, helping to place objects in context. At Cooper Hewitt, they are also the gateway through which the Pen‘s “collect” interaction is realized.

In order for the Pen to know which object label you’re trying to collect, every label in the museum contains an NFC tag that is written with the object’s ID. When an object gets added to our database we give it an ID, an integer that is unique across our entire online collections database. Our beloved Spanking Cat, for example, has the ID number 18382391. Writing that number to an NFC tag is a simple task, but doing it hundreds of times for every new exhibition we roll out will get tedious very quickly. Thus, Label Writer was born.

Label Writer is an Android app that writes, reads and locks NFC tags based on the object to which the label refers. The staff member can look up the objects that are in a given room of our museum, select one or more of them, and assign them to the label in question. They can search for specific objects in case an object’s location hasn’t been updated yet. They can also write tags for videos and shop items.

From left to right: the back and front of the NFC tags we use in our labels, and pennies for scale.

Planning

After thinking about the app we came up with the following requirements:

  1. When processing a user’s visit, we need to know what type of thing they’ve collected. When the Pen launched, this was either objects or videos, and has since grown to include shop items. To facilitate that process, Label Writer would have to distinguish between types of things and write tags that indicated that.
  2. It would need to write multiple things to a tag, including things of different types. One label might contain three objects. Another label might contain one video and two objects.
  3. It would need to lock tags. Leaving the tags unlocked would enable anyone with an NFC-enabled smartphone to walk around the galleries and overwrite our tags. Locking the tags prevents this.
  4. It would need to read tags and display images of what’s on a tag. This is so we can double-check what is on a tag before we lock it. We only print one copy of every label – sometimes through an offsite service – and the wall labels (as opposed to the rail labels) have their NFC tags glued in and cannot be replaced.
  5. Label Writer would have to present objects in a constrained format — having to find the object on a label from our total collection of 210,000 objects every time, through accession number lookup or other traditional searches, would get annoying very quickly.

The NFC tags on our wall labels are built into the label.


The NFC tags on our rail labels are interchangeable.

Production

I decided to build the app in Android because it has great support for NFC and we have plenty of Nexus 9 tablets at the museum for use in the galleries. I started with this boilerplate for an Android read/write NFC app and performed initial tests to make sure we could write a tag that could be read by some of the early Pen prototypes. Once that was established, I began fleshing out the UI of the app and worked on hooking it up to our API.

The API gives us so much to work with on the app’s frontend. Being able to display an object’s image is a much better way to confirm that a label is written correctly than by comparing IDs or accession numbers. The API also lets us see all of the objects in a given room of the museum, which means that the user can write labels in an ordered fashion. When the labels arrive from the printer they are grouped by room, and often we will not write the tags until they have been installed in the galleries, so “by room” is a convenient way to organize objects on the frontend. It also gives us easy access to videos and shop items, and allows the app to easily be expanded to write labels for more things from our collections database. Since our collections site alpha, we have stressed the importance of an easily-accessed permanent ID for everything: people, objects, videos, exhibitions, locations etc., and now with the Pen we can prepare labels that allow users to collect any one of those things during their visit to the museum.
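
For a sense of what the app asks of the API, here is a hedged sketch of the by-room lookup. The REST endpoint style matches our public API, but the method name and parameters here are assumptions for illustration, not the documented interface.

```python
# Hedged sketch of the app's "objects in a room" lookup. The method name
# and its parameters are hypothetical; consult the API docs for the real ones.
import requests

API_ENDPOINT = "https://api.collection.cooperhewitt.org/rest/"

def objects_in_room(room_id, access_token):
    resp = requests.get(API_ENDPOINT, params={
        "method": "cooperhewitt.objects.getList",  # hypothetical method name
        "room_id": room_id,
        "access_token": access_token,
    })
    resp.raise_for_status()
    return resp.json().get("objects", [])
```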


When I took all these screenshots, the app was called “Tag Writer”, as in “NFC Tag Writer.” But “Label Writer” sounds better.

When the app is opened, the user is prompted with a few ways to group objects. Since we added videos and shop items to the app, this intro screen has grown a bit so it will probably get a redesign when we next expand its capabilities. But for now, users have a few options here:

  1. They can select a room from a dropdown menu (here’s a list of all of our rooms)
  2. They can enter an individual accession number
  3. They can enter a video’s ID
  4. They can search the shop (see Aaron’s recent post about adding shop items to our online collection)

When one of these options is used, the relevant objects appear on the screen. For example, selecting Room 106 brings up some of the posters from our current How Posters Work exhibition. Being able to display the images of the posters makes it much easier for the user to confirm that they are connecting the dots accurately — accession numbers and object IDs are easily confused (not to mention boring to look at).


The user can then tap one or more objects to add them to a label. In the screenshot below, you can see that two objects have been selected and the orange bar at the bottom has formatted them to be written to a tag — in this case, chsdm:o:68730187;18708395. The way that things get written to tags follows a format we agreed upon early in the Pen design process, as various developers would be building applications that relied on reading and parsing a Pen’s content. In brief, chsdm is a namespace for our museum that is not particularly necessary but serves as a header for what follows. o stands for object and then the ID (or semicolon-delimited IDs) that follow are the IDs of objects. The letter can change: v for video, s for shop, and on and on for whatever other things we might eventually write to tags. We add a pipe character (|) to delimit multiple types of things on a tag, so a tag with an object and a video might look like chsdm:o:18714653|chsdm:v:68764195. But all of this is handled by the app based on what the user selects in the interface.
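
Because the format is spelled out above, a toy encoder and decoder make it concrete. The format itself comes from the Pen design process; this implementation is illustrative only, not the app's actual code.

```python
# Toy encoder/decoder for the tag payload format described above. The format
# (chsdm:<type>:<id>[;<id>...], multiple types joined with "|") is from the
# post; this particular implementation is illustrative only.

def encode_tag(things):
    """things maps a type letter to IDs, e.g. {"o": [68730187, 18708395]}."""
    return "|".join(
        "chsdm:%s:%s" % (kind, ";".join(str(i) for i in ids))
        for kind, ids in things.items()
    )

def decode_tag(payload):
    things = {}
    for chunk in payload.split("|"):
        _, kind, ids = chunk.split(":")
        things.setdefault(kind, []).extend(int(i) for i in ids.split(";"))
    return things

print(encode_tag({"o": [68730187, 18708395]}))   # chsdm:o:68730187;18708395
print(decode_tag("chsdm:o:18714653|chsdm:v:68764195"))
# {'o': [18714653], 'v': [68764195]}
```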


Next, a user can hold the tablet up to the object label to write the NFC tag. When the tag is written, the orange bar at the bottom turns green to let the user know it went okay. Later, using the “Read Tags” functionality of the app, the user can confirm the tag’s contents by reading the NFC tag. The app parses the tag and loads the things it thinks the tag refers to. When this is confirmed, the user can lock the tag to make sure nobody overwrites it.


Here’s everything, from start to finish, using the object-lookup-by-accession-number functionality.

Next Steps

I mentioned that the home screen of this app will get a redesign as we allow more types of things to be written to tags. The user experience of the tag writing process needs a little finessing — a bug in how success messages get displayed has resulted in a few tags that get written with bunk data. Fortunately that is caught in the “read” phase of the workflow, but should be corrected earlier.

Overall, as we keep swapping out exhibitions, Label Writer will get more and more use. We will use these opportunities to collect feedback from the app’s users and make changes to the app accordingly.

Happy Staff = Happy Visitors: Improving Back-of-House Interfaces

“You have to make the back of the fence that people won’t see look just as beautiful as the front, just like a great carpenter would make the back of a chest of drawers … Even though others won’t see it, you will know it’s there, and that will make you more proud of your design.”

—Steve Jobs

In my last post I talked about improvements to online ticketing based on observations made in the first weeks after launching the Pen.

Today’s post is about an important internal tool: the registration station, whose job is to pair a new ticket with a new Pen. Though visitors will never see this interface, it’s really important that it be simple, easy, clear, and fast. It is also critical that staff are able to understand the feedback from this app, because if a Pen is incorrectly paired with a ticket then the visitor’s data (collections and creations) will be lost.

Like a Steve-Jobs-approved iPod or a Van Cleef & Arpels ruby brooch, the “inside” of our system should be as carefully and thoughtfully designed as the outside.

Version 1 of the app was functional but cluttered, with too much text and no clear point of focus for the eye.

Because the first version of the app was built to be procedurally functional, its visual design was given little consideration. However, the application as a whole was designed so that the user interface – running in a web browser – was completely separate from the underlying pen pairing functionality, which makes updating the front-end a relatively straightforward task.

We were also getting a few complaints from visitors who returned home eager to see their visit diary, only to find that their custom URL contained no data. We suspected this could have been a result of the poor UI at ticketing.

With this in mind, I sat behind the desk to observe our staff in action with real customers. I did about three sessions, for about ten minutes each, sometimes during heavy visitor traffic and sometimes during light traffic. Here’s what I kept an eye on while observing:

  • How many actions are required per transaction? Is there any way to minimize the number of “clicks” (in this case, “taps”) required from staff?
  • Is the visual feedback clear enough to be understood with only partial attention? Or do typography, colors, and composition require an operator’s full attention to understand what’s going on?
  • What extraneous information can we minimize or omit?
  • What’s the critical information we should enlarge or emphasize?

After observing, I tried my hand at the app myself. This was actually more edifying than doing observations. Kathleen, our head of Visitor Services, had a batch of about 30 Pens to pair for a group, and I offered to help. I was very slow with the app, so I wasn’t really of much help, moving through my batch of pens at about half the speed of Kathleen’s staff.

Some readers may be thinking that since the desk staff had adjusted to a less-than-excellent visual design and were already moving pretty fast with it, this could be a reason not to improve it. As designers, we should always be helping and improving. Nobody should have to live with a crappy interface, even if they’ve adjusted to it! And, there will be new staff, and they will get to skip the adjustment process and start on the right foot with a better-designed tool.

My struggle to use the app was fuel for its redesign, which you can see germinating in my drawings below.

After several rounds of paper sketches like these, the desk reps and I decided on this sequence as the starting point for version two of the app.

These were the last in a series of drawings that I worked through with the desk staff. So our first few “iterative prototypes” were created and improved upon in a matter of minutes, since they were simply scribbled on paper. We arrived at the above stopping point, which Sam turned into working code.

Here’s what’s new in version 2:

  • The most important information—the alphanumeric shortcode—is emphasized. The font is about 6 or 7 times bigger, with exaggerated spacing and lots of padding (white space) on all sides for increased legibility. Or as I like to call it, “glanceability.” This helps make sure that the front-of-house staff pair the correct Pen with the correct ticket.
  • Fewer words. For example, “Check Out Pen With This Shortcode” changed to “GO”, “Pen has been successfully checked out and written with shortcode ABCD” changed to “Success,” etc. This makes it easier for staff to know, quickly, that the process has worked and they can move on to the next ticket/pen/customer.

“I didn’t have time to write a short letter, so I wrote a long one instead.”

—Mark Twain

  • More accurate words. Our team uses a different vernacular from the people working at the desk. This is normal, since we don’t work together often, and like any neighboring tribes, we’ve developed subtly different words for different things. Since this app is used by desk staff, I wanted it to reflect their language, not ours. For example, “Pair” is what they call “check-out” and “Return” is what they call “check-in.”
  • Better visual hierarchy: The original app had many competing horizontal bands of content, with no clear visual clue as to which band needed the operator’s attention at any given time. We used white space, color (green/yellow/red for go/wait/stop), and re-arranging of elements (less-used features to the bottom, more-used features to the top) to better direct the eye and make it clear to the user what she ought to be looking at.
  • Simple animations to help the user understand when the app is “working” and they should just wait.

Still to come are added features (bulk pairing, maintenance mode) and any ideas the desk reps might develop after a couple of weeks of using the new version.

Imagine how difficult this process would have been if the museum had outsourced all of its design and programming work, or if it were all encased in a proprietary system.

Redesigning Post-Purchase Touchpoints

We re-opened the museum with a “minimum viable product” for online ticket orders. Visitor-facing touchpoints like confirmation emails, eTicket PDFs and “thank you for your order” webpages were built to be simple and efficient. Once we put them to the test with real visitors, the room for improvement became obvious.

Here’s how we used staff feedback and designerly observation to iterate and improve upon 3 important touchpoints. The goal of this undertaking was to make things smoother for our front-of-house staff (who turned out to have quite a bit to juggle, given the new Pen and its backend complexities), and simpler for visitors (some of whom were confused by our system... how dare they!).

The original confirmation webpage was designed with visitors buying on mobile (perhaps even while en route to the museum) in mind:

The original “Thank You” webpage was stripped of information, with the idea of getting you through the front desk transaction as efficiently as possible.

The original confirmation email was a few lines of text:

Made in a pre-opening vacuum, without real visitors to test upon, the original confirmation email was more self-promotional than it was anticipatory of visitors’ needs.

The original PDF attached to this confirmation email was designed for visitors who like to print things out and have something on paper:

The original eTicket PDF had one page (one “ticket”) per visitor. The email went to the purchasing visitor’s inbox.

Over a few weeks of heavy visitor traffic (with about 20% of visitors buying advance tickets online), I sat behind the front desk staff to quietly observe a handful of transactions every day. I initiated my observation sessions knowing that we needed to make the front desk move smoother and faster, but I didn’t yet know which touchpoints/services/operations would need changing.

These 3 touchpoints stood out as needing re-addressing if we wanted to make the front desk run more smoothly. (My daily observations also led to many efficiency-boosting changes to internal tools, IT concerns, staffing needs, signage, and more.) This experience has made me a big believer in quiet observation as a direct route to improving services and systems. “Conference room conjecture” is worth very little compared to real observations and listening-based chats with your public-facing staff.

My advice on Observing and Listening for service design:

  •  You may observe a staff person answer a question incorrectly, or a problem that you could resolve yourself on the spot. Don’t intervene, tempting as it might be! You’re not there to fix problems, you’re there to fix problem patterns. Your mission is long-term.
  • When chatting with staff, listen quietly and attentively. It’s OK if you can’t offer an instant fix. You may not have a magic wand, but listening with empathy is at least half as good.
  • Focus on building trust with the staff you are observing over a period of days or weeks, so they will become comfortable sharing bad news as easily as they share the good. Remind them repeatedly that your intention is to improve their daily work situation.
  • Remember it can be very intimidating to feel “interrogated” or “observed” by someone who is your direct/indirect superior. Make sure they know your questions are motivated by a spirit of service, not by “tattle-telling” to other staff that things might be going amiss. You will get more honesty, and thereby, better design insights.

Here are the observation-based insights that motivated our choices:

  • Visitors sometimes get confused by the barcodes. They think something has to be scanned after their visit in order for their pen diary to get “Saved” or “sent to their email.”
  • Because this collateral is called an “eTicket,” some visitors are marching right up to the gallery entrance with their “eTicket,” and bypassing the front desk. “I already bought my ticket, why do I have to wait on this line?”
  • Visitors don’t know what the Pen is, and explaining it takes several minutes, slowing down the line.
  • Visitors may not have great cell service in our lobby, and probably haven’t gotten the wifi working yet, so if their email attachment hasn’t pre-downloaded, this will slow everything down.
  • Front desk staff each have different ways of handling eTickets. Most staff ask for the order number verbally. A few staff take the printout or phone and scan the barcode, avoiding the need to re-print a ticket (this is how the barcode was intended to be used).
  • The diversity of collateral that visitors may bring to the transaction makes things more complicated for our staff. “Is my customer looking at a webpage, an email, or a PDF? Should I tell them to look for an order number, hand me a barcode, or open the attachment?”

For their own ease of use, most desk reps were initiating the transaction by asking: “What’s your Order number?” so we designed to accommodate that preference instead of working against it.

The ideas we cycled through:

  • A picture of the Pen with an “enticing” explanation of what it does might help offset the burden on the front desk to explain it all very quickly.
  • We thought one barcode per visitor, displayed in a list, might let us hold on to our original “paperless dream.” (The “paperless dream” entailed scanning each barcode and pairing immediately with Pens, bypassing our CRM and house-printed tickets.) When we ran this idea by our colleagues at the desk, though, we learned quickly that it would be extraordinarily confusing for guests, who need to remember their personal URL (usually printed on the ticket) to access their post-visit diary. If a group of 5 friends comes together, will we put the burden on the visitors to remember which URL goes with which friend? Will they have to write it down, or forward around the ticket email with added whose-URL-is-whose notes? That’s too much of a burden on guests, who are already working to assimilate new information about our Pen, which has already upended their expectations (and tried their patience with transaction length) at the museum front desk.
What seems like a good idea at your desk may not seem so smart after you’ve shown it around to ground-level users.

The current solution (after all, our work is never final):

The order number is large and at the top of the email. It’s also in the subject line.

  • This solution makes the front desk staffer’s job simpler when a pre-order person arrives. It’s all about the order number. There is no more choice involved about whether to ask for the order number, or the barcode, or the purchaser’s name… or….
  • There is still a confirmation webpage, and it looks exactly like this.
  • There is no more PDF attachment to the email.
  • Since this is a “will-call” paradigm instead of an “eTicket” paradigm, we hope this solution will keep visitors from expecting that they can enter the museum directly without talking to a desk attendant first.
  • The order number is in the subject line, so if your email hasn’t fully downloaded, you won’t slow down the line.
  • The original idea was to save paper by letting a visitor’s PDF serve as their ticket/URL reminder. The new approach, though it involves printing tickets at the desk, may result in fewer user printouts, since we’re simply asking folks to “bring” their order number, not any printouts.

This is just one piece of an elaborate service design puzzle. More posts will be coming about other touchpoints we’ve created and re-designed based on observations made in the first months of running our new Pen service.