Exhibition Channels on Cooperhewitt.org

There’s a new organizational function on cooperhewitt.org that we’re calling “channels.” Channels are a filtering system for WordPress posts that allows us to group content in a blog-style format around themes. Our first iteration of this feature groups posts into exhibition-themed channels. Subsequent iterations can expand channels to broader themed groupings that will help break cooperhewitt.org content out of the current menu organization. In our long-term web strategy this is an important progression toward making the site more user-focused and less dictated by internal departmental organization.

The idea is that channels will promote browsing across different types of content on the site because any type of WordPress post—publication, event, Object of the Day, press, or video—can be added to a channel. Posts can also live in multiple channels at once. In this way, the channel configuration moves us toward our goal of creating pathways through cooperhewitt.org content that focus on user needs; as we develop a clearer picture of our web visitors, we can start implementing channels that cater to specific sets of users with content tailored to their interests and requirements. Leaning more heavily on posts and channels than on pages in WordPress also shifts our focus from the website as a static archive to the website as an ever-changing flow of information, which will help keep our web content fresher and more closely tied to current museum programs and events.

The Fragile Beasts exhibition channel page. Additional posts in the channel load as snippets below the main exhibition post (pictured here). The sidebar is populated with metadata entered into custom fields in the CMS.

In WordPress terms, channels are a type of taxonomy added through the CustomPress plugin. We enabled the channel taxonomy for all post types so that in the CMS our staff can flag posts to belong to whichever channels they wish. For the current exhibition channel system to work we also created a new type of post specifically for exhibitions. When an exhibition post is added to a channel, the channel code recognizes that this should be the featured post, which means its “featured image” (designated in the WordPress CMS) becomes the header image for the whole channel and the post is pinned to the top of the page. The exhibition post content is configured to appear in its entirety on the channel page, while all other posts in the channel display as snippets, cascading in reverse chronological order.
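Conceptually, the ordering logic on a channel page is only a few lines. Here is a sketch in Python rather than the actual WordPress/PHP implementation, with invented field names:

```python
def build_channel_page(posts):
    """Pin the exhibition post to the top of its channel, then cascade
    every other post as a snippet in reverse chronological order."""
    featured = next((p for p in posts if p["type"] == "exhibition"), None)
    snippets = sorted(
        (p for p in posts if p is not featured),
        key=lambda p: p["published"],
        reverse=True,
    )
    return {
        "header_image": featured and featured["featured_image"],  # channel header
        "featured": featured,   # rendered in full at the top of the page
        "snippets": snippets,   # rendered as excerpts below
    }
```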

Through CustomPress we also created several custom fields for exhibition posts, which populate the sidebar with pertinent metadata and links. The new custom fields on exhibition posts are: Exhibition Title, Collection Site Exhibition URL, Exhibition Start Date, and Exhibition End Date. The sidebar accommodates important “at-a-glance” information provided by the custom field input: for example, if the date range falls in the present, the sidebar displays a link to online ticketing. Tags show up as well to act as short descriptors of the exhibition and channel content. The collection site URL builds a bridge to our other web presence at collection.cooperhewitt.org, where users can find extended curatorial information about the exhibition.
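The ticketing link in the sidebar amounts to a simple date-range check against those custom fields. A minimal sketch, assuming the start and end dates are available as date values:

```python
from datetime import date

def show_ticketing_link(start, end, today=None):
    """Display the online-ticketing link only while the exhibition is open."""
    today = today or date.today()
    return start <= today <= end

# e.g. show_ticketing_link(date(2016, 3, 10), date(2016, 9, 5))
```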

The sidebar on the Fragile Beasts exhibition channel page displays quick reference information and links.

On a channel page, clicking on a snippet (below the leading exhibition post) directs users to a post page where they can read extended content. On the post page we added an element in the sidebar called “Related Channels.” This link provides navigation back to the channel from which users flowed. It can also be a jumping-off point to a new channel. Since posts can live in multiple channels at once, this feature promotes the lateral cross-content navigation we’re looking to foster.

The sidebar on post pages provides “Related Channel” navigation, which can be a hub to jump into several editorial streams.

Our plan over the coming weeks is to onboard CMS users to the requirements of the new channel system. As we launch new channels we will help keep information flowing by maintaining a publishing schedule and identifying content that can fit into channel themes. Our upcoming exhibition Scraps: Fashion, Textiles and Creative Reuse will be our first major test of the channels system. The Scraps channel will include a wealth of extra-exhibition content, which we’re looking forward to showcasing with this new system.

My mock-up for the exhibition channel structure and design. Some of the features on the mock were knocked off the to-do list in service of getting an MVP on the live site. Additional feature roll-out will be on-going.

Object Phone: The continued evolution of a little chatbot

Object Phone is a project that started small: it took less than a day to build and consisted of about a page of code. Initially it was just an experiment–a way for me to explore a new interface to our API. Object Phone allowed users to call or text objects in our collection, and receive some kind of response. It was met with mild fanfare.

Next, I was curious about using Object Phone in our galleries. I looked towards developing some better audio content, and we decided to produce a short audio tour of the David Adjaye Selects exhibit. It was somewhat cumbersome to use but an interesting experiment and one of my first “in-gallery beta-tests.” Needless to say, I tried to be as clear as possible that this was an “experiment.”

Later I started thinking about the broader uses for a system like Object Phone. Could it replace an expensive audio guide? Could it be used as an accessibility device? I started to think of many possible uses for the platform, and started to rewrite the code to support multiple outputs. In a way, I was thinking about the code for Object Phone as a mini framework for building voice and text based interactions with our content.

All of this got put on the back burner for a while. Object Phone is after all my little side project. Something I come back to when I need to center myself and let my brain think through a few problems. It’s very much a project I meditate on when I need to do that kind of thing.

About six months later I started playing with the code again. I realized it was pretty trivial to deliver images via MMS using Twilio’s API, and I had also started to notice that MMS worked pretty nicely on devices like an Apple Watch and looked pretty good in the notification screen on my iPhone. All of a sudden it was kind of fun again to receive texts from Object Phone. So, I set up a subscription service.

Inspired by a few chatty SMS based apps out there like Poncho and The Edit, I built a simple subscription service that would send random objects and images to subscribers once a day at noon. Again, I set this up quickly, sent out a request for some people to try it out, and started to make realizations.
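The moving parts of a service like that are small. Here is a rough sketch of the noon job using Twilio’s Python client (credentials, phone numbers, and data shapes are placeholders, not the actual Object Phone code):

```python
import random
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")  # Twilio credentials (placeholders)

def send_daily_object(subscribers, objects):
    """Send one random collection object, with its image, to every
    subscriber. Run once a day at noon, e.g. from cron."""
    obj = random.choice(objects)
    body = f"{obj['title']} ({obj['date']}) {obj['url']}"
    for number in subscribers:
        client.messages.create(
            to=number,
            from_="+15555550123",          # the service's Twilio number (placeholder)
            body=body,
            media_url=[obj["image_url"]],  # attaching media makes it an MMS
        )
```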

Object Phone is getting some upgrades. Feature requests welcome.


The main realization I had was that Object Phone had just become a chatbot. To be clear, Object Phone has technically always been a chatbot. You send it messages, and it replies with some response. But now that it sends you something periodically based on your preferences (currently just the preference that you want to continue receiving messages) it seems more like a real chatbot. More importantly, this experiment has started to make me “think” of Object Phone as a chatbot–something I should have likely realized from the start.

I also realized that Object Phone’s chattiness happens in multiple directions. It indeed chats with its subscribers. It can send you messages once a day, and it can reply to your requests for info about objects with ease. But I also added a back-end feature that follows this same line of thinking. If a user sends Object Phone a message that it doesn’t understand, Object Phone asks me for some assistance. Here is the flow (with a sketch of the Slack hand-off after the list):

  1. A user messages Object Phone something like “Tell me about spanking cat.”
  2. Object Phone isn’t smart enough yet to decipher the message.
  3. Object Phone replies “OK, I don’t really understand what you are saying but I’ll ask around and get back to you.”
  4. Object Phone then sends our Cooper Hewitt Slack channel a message.
  5. The Slack message contains the user’s phone number, their message, and a link to an admin page where the operator can reply directly to the user.
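The Slack side of that hand-off needs nothing more than an incoming webhook. A minimal sketch in Python (the webhook URL and admin route are placeholders, not the actual Object Phone code):

```python
import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def ask_staff_for_help(phone_number, message, reply_id):
    """Post a message Object Phone couldn't parse to our Slack channel,
    with a link to the admin page where an operator can reply."""
    admin_url = f"https://example.org/admin/replies/{reply_id}"  # hypothetical route
    text = (
        f"Object Phone needs help! {phone_number} said: \"{message}\"\n"
        f"Reply to them here: {admin_url}"
    )
    requests.post(SLACK_WEBHOOK_URL, json={"text": text})
```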

A Slack Channel where Object Phone can tell our staff when it needs a little assistance.


An Object Phone admin page where our staff can reply directly to users

All of a sudden Object Phone is a conduit between Cooper Hewitt staff and its visitors. It’s talking directly to visitors, but also relaying messages back and forth to more knowledgeable staff when it needs assistance.

What the cool kids are doing

Conversational user experiences are all the rage right now. Facebook has recently opened up its Messenger platform and API to developers, which means anyone can build a simple chatbot on Facebook and reach all their followers with ease. Many other messaging services have open APIs as well. WeChat, LINE, WhatsApp and Slack are just a few examples.


Screenshot of the CNN chatbot for Facebook Messenger

It’s pretty clear that messaging apps are increasing in popularity, with users spending much of their days talking on platforms like Snapchat rather than thumbing through their Facebook feeds. Apple too has followed suit by announcing a much-upgraded Messages app in the latest update to iOS.

Chatbots have also become much more sophisticated, with huge advancements in Natural Language Processing and Natural Language Understanding. There is now a wealth of information and publicly available code and APIs out there, making it easier than ever to spin up a pretty intelligent chatbot with little overhead.

The Future of Object Phone

My next steps are to make Object Phone more intelligent. It should be able to learn about your tastes and preferences. If you only want to receive objects from our Textiles department, you should be able to say so. If you want to get your daily update at 5am, you should be able to just tell it that.

More importantly, you should be able to interact with more than just objects. You should be able to find out general info about our museum. Are we open today? How do I get to Cooper Hewitt? Can I buy tickets right here, right now?

Lastly, I’d love to see Object Phone make its way onto the platform of your choice. I think this is a critical next step. SMS is great, and available to nearly anyone with a cell phone, but apps like FB Messenger, WhatsApp, and LINE have the ability to connect a service like Object Phone with a captive audience, all over the world.

I think institutions like museums have a great opportunity in the chatbot space. If anything it represents a new way to broaden our reach and connect with people on the platforms they are already using. What’s more interesting to me is that chatbots themselves represent a way to interact with people that is, by its very nature, bi-directional. It presents us with the challenge of conversation, and forces us to listen to our constituents in a very close and connected kind of way. We should already be doing this.

If you’d like to participate in testing out Object Phone, please go to http://objectphone.cooperhewitt.org and sign up. You will receive an object every day at 12pm EST until you reply STOP.

Mass Digitization: Workflows and Barcodes

This is my first post in a four-part series about digitization. My name is Allison Hale, Digital Imaging Specialist at Cooper Hewitt. I started working at the museum in 2014 during the preparations for a mass digitization project. Before the start of digitization, 3,690 collection objects had high-resolution, publication-quality photography. The museum is currently in phase two of the project and has completed photography of more than 180,000 collection objects.


Workflows

Cooper Hewitt was the first Smithsonian unit to take on digitization of an entire collection. Smithsonian’s Digitization Program Office directed the project and an onsite vendor completed the imaging. Museum staff played an intensive role, allocating up to fifty percent of a workweek to digitization administration. Additional hires in the Registration and Conservation departments eased the daily organization and handling of the objects.

The goal was to take a physical object from storage shelf to public-facing digital image within 24-48 hours.

Here is a simplified version of the workflow:

Physical or Object Workflow

  • The vendor’s photographic setup was located in our collections storage facility
  • Art handling technicians pulled pre-organized groups of objects from storage shelves to a staging area
  • A group of objects located on the same shelf or container was carted into the staging area and then placed individually on a photographic set
  • The object barcode was scanned to create a file name
  • The photograph was taken
  • The object was placed back in the staging area, matched with its barcode tags, and returned to storage

Data Workflow

  • Assets from the project were stored on a production server
  • Museum staff completed daily upload of assets to Smithsonian’s Digital Asset Management System (DAMS)
  • DAMS became the repository for digital assets
  • CDIS (Collection DAMS Integration System) provided synchronization of metadata from TMS, and delivered images to the object records in the collections database, The Museum System
  • IDS (Image Delivery Service) provided public images for use on the Collections Website

Let me repeat the word simplified. Mass digitization applied to a uniform collection can be simple, but applying it to objects of varying dimensions, sizes, and materials was new territory. I will point out some of the challenges we faced during the process, and the digital innovations that improved efficiency and helped us complete the project in 18 months.

Barcoding: Bridging the Physical to the Digital

A barcode tag attached to an object in the metalwork sub-collection.

A barcode tag attached to a wooden panel.

The first stage of digitization was assessment and barcoding of 258,000 objects. Museum objects are categorized in four curatorial departments: Drawings and Prints, Product Design and Decorative Arts, Textiles, and Wallcoverings. A Project Registrar was hired to oversee barcoding equipment and printing, reconcile object locations, and maintain the project’s pace. Thirteen barcoding technicians with expertise in object handling and conservation were hired to complete a conservation assessment and barcode each object.

The Museum System, the collections database, contains a barcode number in each object record. This unique identifier was printed as a barcode and affixed to each object. The Registration staff decided that the most efficient way to barcode was not by “cherry picking” random objects, but by systematically working on one shelf or container at a time. A SQL query of a TMS location was used to find all objects on one shelf or in a single container. A software program called Label Matrix would pull, format, and print barcodes from the query results. A 2-D barcode could be printed as a sticker, on a larger “cover sheet,” or as a non-adhesive tag.
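A query along these lines, run against TMS and handed to Label Matrix, gathers everything sharing one location; the table and column names here are hypothetical stand-ins for the real TMS schema:

```python
import pyodbc  # TMS sits on SQL Server, so pyodbc is a plausible client

# Hypothetical schema; the real TMS tables and columns differ.
SHELF_QUERY = """
    SELECT o.ObjectID, o.ObjectNumber, o.Barcode
    FROM Objects AS o
    JOIN ObjLocations AS l ON l.ObjectID = o.ObjectID
    WHERE l.LocationString = ?
"""

def objects_on_shelf(conn, location):
    """Return every object recorded at a single shelf or container location."""
    cursor = conn.cursor()
    cursor.execute(SHELF_QUERY, location)
    return cursor.fetchall()
```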

Here were some of the challenges:

1. The object’s recorded storage location needed to be accurate

In an ideal workflow, the entire collection would be inventoried before digitization. This would provide accurate locations for each object. Instead, during the barcoding process the technicians were responsible for noting any inaccuracies on a spreadsheet. The Project Registrar then reconciled the locations in TMS. The technicians barcoded locations and containers to improve tracking. The barcodes can be differentiated by the last number in the sequence: “2” indicates an object, “1” a container, and “0” a location.
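That trailing digit makes the barcode type machine-checkable at scan time. A sketch:

```python
BARCODE_KINDS = {"2": "object", "1": "container", "0": "location"}

def barcode_kind(barcode):
    """Classify a barcode by its final digit: 2 = object, 1 = container, 0 = location."""
    return BARCODE_KINDS.get(barcode[-1], "unknown")
```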

2. Objects in storage must be accessible to the digitization technicians

The Product Design and Decorative Arts Department contained sub-collections that were stored in temporary housing. The temporary housing was designed to transport the objects, but not for permanent storage. Conservators and technicians built permanent storage containers that allowed technicians to easily remove and replace objects during digitization. An example of this is the matchsafe sub-collection.

(from left) Matchsafes packed for travel; Technicians make new containers and barcode objects; Matchsafes in storage with barcodes in each container and a corresponding cover sheet.

3. Objects required special handling due to fragility or component assembly 

The collection contains 211,000 objects. Due to conservation concerns and component assembly, approximately 10 percent of the collection could not be digitized. A visual system was created to alert digitization staff to the conservation “status” of the object: green=ready for digitization, yellow=digitization handling by conservator only, and red=too fragile for digitization. The visual system allowed the vendor staff to work independently in storage, rather than referencing TMS records.

Digitization technicians used the visual system to identify containers ready for imaging.

4. The barcode needed to be scanned by the reader in a timely manner

A 2-D barcode was used so that the technicians could scan efficiently while holding the reader in different positions.

A technician scans a barcode on a coversheet during digitization.

5. The project needed a higher-level organizational structure so that everyone involved could plan the timing of digitization

A chart was made to organize the departments’ sub-collections. An initial count of the sub-collection categories in the TMS database gave an approximate number of items to be digitized. From this number, the Smithsonian’s Digitization Program Office could estimate the digitization rate, a sub-collection schedule, and cost per image. Curators, conservators and digitization staff would meet before the beginning of each sub-collection to decide on the aesthetic of the images, conservation concerns, and handling specifications.

Outcomes:

Object barcoding was a necessary step before the start of digitization. During the imaging workflow, technicians scanned the barcode to input the filename. Eliminating the manual entry of filenames saved an average of 14 seconds per file, amounting to 103 working days. It also greatly decreased the rate of human error from manual entry.
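The back-of-the-envelope math behind that figure, assuming roughly 180,000 files and a seven-hour working day (my assumptions; the exact inputs aren’t spelled out above):

```python
files = 180_000                      # assumption: roughly the images captured
seconds_saved = files * 14           # 14 seconds saved per file
hours_saved = seconds_saved / 3600   # 700 hours
working_days = hours_saved / 7       # assumption: 7-hour working day
print(round(working_days))           # 100 -- in line with the quoted 103 days
```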

The barcode filename became the metadata link between DAMS (Digital Asset Management System) and the collections database TMS. (A good lead-in to my next post!)



Next Post: DAMS and Metadata Mapping!

Museums and the Web Conference Recap: Administrative Tools at Cooper Hewitt

The Labs team had a great time at Museums and the Web this year. We published two papers for the conference and presented them both to the audience of cultural heritage thinkers, makers, planners and administrators. Sam Brenner and I shared our paper, “Winning (and losing) hearts and minds of museum staff: Administrative interfaces at Cooper Hewitt,” which outlines the process of designing, developing and iterating two in-house built, staff-facing tools: Tagatron and the Pen Pairing Station. Both administrative tools are essential aides to staff managing new responsibilities associated with visitor-facing gallery technologies.

Here is the deck from our presentation:


Administrative interfaces at Cooper Hewitt

Introduction

  • Cooper Hewitt, Smithsonian Design Museum. New York, New York.
  • Our strategy around presenting design is to expose process—how things are made, how they are conceived, how they are designed.
  • This presentation will speak to our philosophy of openness around design process in sharing part of the back-story of how our current visitor-facing experience came together and how it’s maintained.


Visitor Interfaces

  • The visitor-facing technologies in the museum, introduced in 2014, invite new forms of engagement with the Cooper Hewitt collection. They encourage active participation, letting visitors play, design and collect through multi-touch table applications and the Pen.
  • Before we were able to re-design the visitor’s relationship to the museum we went through comprehensive changes at every level.


Comprehensive Re-design / Institutional Shift

  • We began a restoration of the mansion, stripping it down to its Carnegie steel girders.
  • To a similar degree we rethought the organizational infrastructure of Cooper Hewitt with a comprehensive re-design of operations, workflows and responsibilities.


New Responsibilities (for Everyone)

  • There were new jobs created to support the new visitor experience, including that of our Gallery Technology Manager, Mary Fe, whose job responsibilities include maintaining the Pens and troubleshooting touch tables and gallery interactives
  • The re-design affects every staff member at Cooper Hewitt:
  • Registrars: aggressive timetable to enter data
  • Security: understand the mission and visitor experience, teach visitors how to use the Pen
  • Exhibitions: label programming, maintenance
  • Curators: tags, relations, chat formatting for length
  • Visitor services: pen pairing – whole new step in between “welcome” and ticket sale
  • Before we got to this stage there was the task of onboarding staff to new responsibilities, which fell largely to the Digital & Emerging Media department. With the allocation of new responsibilities also came the opportunity to create tools that could facilitate some of the work.


Defining the Need for Considered Interfaces

  • Why did we decide that new interfaces were necessary in certain parts of the workflow?
  • We started with observation, watching workflows as they emerged. We created tools to assist where necessary. The need for interfaces was in part logistical, in part technical and also in part human.
  • Candidates for interface development are parts of the new digital ecosystem where there is:
  • High volume of data
  • Large number of users
  • Complex tasks
  • Something that needs constraints or enforcement
  • Example: the job of assigning tags and related objects to everything we put on display for the reopening. The touch table interfaces utilize tag and related object information. This data does not live in TMS, so it is housed in a custom database.
  • The task of creating the data fell to the curators. Originally this was stored in Excel files. While the curators were happy using spreadsheets, we identified a few major issues with them. The biggest one was that every department had devised its own schema for storing the data, which would ultimately have to be reconciled.
  • This example fits all of the criteria above.


Case Study 1: Tagatron

  • Explicit purpose of the Tagatron tool: make the work quicker; make the metadata consistent; make the organization of the metadata consistent
  • Making this tool highlighted for the digital team the complex relationship between the work, the tool, and the people responsible for each—even though we believed the tool made things easier, the tool had its own set of ongoing technical and usability issues
  • We found that those issues bred distrust and a lack of confidence in the larger project. Some of this was due to bugs in the tool, but some of it was simply that the work was now “enforced” and taken more seriously, which made users uncomfortable.
  • Key idea: the interface takes on a symbolic value in representing “new responsibilities” and can bring about issues that it might not have been designed to address. It takes on a complex position between human needs and technical needs.


Tagatron (continued)

  • These graphs illustrate how prolific the task of tagging and relating objects is. It was important to build Tagatron because it is a crucial tool in the ongoing operation of the museum’s digital experience. More so than the spreadsheets ever could, it allows for scalability.
  • Since the re-opening the tool went through one major design and backend overhaul, and continues to see small iterations.


Case Study 2: Pen Pairing Stations

  • Context of Pen Pairing: Every visitor to the museum receives a Pen. At the museum’s front desk each Pen is paired with a unique admission ticket. Every ticket has a shortcode identifier that allows visitors to retrieve their Pen visit data online when they enter the code on their ticket.
  • Pen pairing is done at a very critical point in the visitor experience when the interaction needs to be quick and frictionless. Visitor Services Associates have to coordinate a number of simultaneous tasks.

Pen Pairing Station (continued)

  • This video depicts the Pen pairing process behind the front desk. It documents the first version of the Pen Pairing application, and shows the exposed Pen-reading circuit board before housing was built.
  • Pen pairing is one of the most demanding of the new responsibilities created by the “new experience”: it has to fit between welcoming a visitor, taking their money, answering any questions, and looking up their member ID.
  • Each use of the tool lasts only 5-10 seconds, but we’ve spent many hours and built many versions of this tool to figure out exactly what needs to happen in that time to accomplish all the tasks, including updating databases, handling failures, and serial communication
  • Every one of these iterations gave us an opportunity to be connected to the staff using the tools, not only to make something that works better, but to be a part of the conversation


Administrative Interfaces: What does success look like? How does it feel?

  • In informal interviews with Tagatron users we found trust to be a central theme of users’ response to the interface
  • Since Tagatron augments the curators’ use of TMS, they were less trusting of its database as a long-lasting data repository
  • Improving user feedback (like confirmation messages) helped build trust in the interface
  • Bill Moggridge, Designing Interactions: designing interaction is designing the relationship between people and things
  • We came to realize the responsibility of designing interfaces—validating and responding to users’ concerns; acknowledging the burden of new responsibilities
  • Administrative interfaces at the crux of the staff relationship to the new Cooper Hewitt experience
  • Anticipating issues in developing and maintaining administrative interfaces (when success feels like failure):
  • First, the human factor: being open to the feedback and creating an environment where the channels exist to communicate staff thoughts on the tool.
  • Second, the technical factor: being able to act on what you hear from staff and make the required changes to complete the feedback loop.
  • Our responsibility as facilitators of technology in the museum to hear and act on concerns.


Questions to ask when starting an administrative application, to anticipate issues and accommodate feedback.

Question 1: To what degree should the (administrative) tool fit with pre-existing notions?

  • This question addresses the need to understand contextual use of the tool
  • Tagatron: curatorial culture around spreadsheets and TMS
  • Pen Pairing Station: this tool disrupted the expected ticket-selling workflow. We learned that the tool needed to make Pen pairing as unobtrusive as possible


Question 2: How much of the underlying technology should come through to the interface?

  • Infrastructure & interfaces are layers of an onion—the best mental model for a tool’s interface might not reflect the best technical model for its back end
  • Tagatron: the filtering tools were a reflection of how data was stored in the database, not of how curators expected to find it
  • Pen Pairing Station: error messages from all parts of the application stack came through to the user unaltered, which was not helpful
  • Highlights the need for a technical solution that allows for flexibility in the middle, “translation layer” of an application


Question 3: What kinds of feedback does the tool provide?

  • Feedback is the voice of the interface, its personality: is it finicky or reliable? Annoying or supportive?
  • Tagatron: missing feedback created distrust
  • Pen Pairing: too much feedback caused confusion (error messages, validation handshake)
  • Our design and production methodology: working code always wins/ learning through doing; build small, working prototypes and continually iterate.
  • A more anticipatory form of design (like design thinking) could have helped us find answers to this question sooner


Question 4: Is it an appropriate time for experimentation?

  • Tagatron’s v1 included technology that was relatively unknown to us, like MongoDB and Node.js. We should have used more familiar technology or done small-scale tests before implementing a project of this scale; the unfamiliarity severely hindered our ability to accommodate feedback
  • Other tools we built that involved experimental tech were only successful because their scale and userbase were far smaller (label writer)


The result of everything: bridges, lines of communication opened

  • Building administrative tools for staff created cross-departmental conversation—in taking on the role of building and maintaining Tagatron and the Pen Pairing Station, the Digital & Emerging Media team engaged users from departments across the museum and observed closely how the tools fit into staff members’ larger roles

A Very Happy & Open Birthday for the Pen


Today marks the first birthday of our beloved Pen. It’s been an amazing year, filled with many iterations, updates, and above all, visits! Today is a celebration of the Pen, but also of all of our amazing partners whose continued support has helped to make the Pen a reality. So I’d like to start with a special thank you first and foremost to Bloomberg Philanthropies for their generous support of our vision from the start, and to all of our team partners at Sistel Networks, GE, Undercurrent, Local Projects, and Tellart.

Updates

Over the course of the past year, we’ve been hard at work making the Pen experience at Cooper Hewitt the best it can be. Right after we launched the Pen, we realized there was quite a bit of work to do behind the scenes so that our Visitor Experience staff could better deal with deploying the Pen, and so that our visitors have the best experience possible.

Here are some highlights:

Redesigning post-purchase touchpoints – We quickly realized that our ticket purchase flow needed to be better. This article goes over how we tried to make improvements so that visitors would have a more streamlined experience at the Visitor Experience desk and afterwards.

Exporting your visits – The idea of “downloading” your data seemed like an obvious necessity. It’s always nice to be able to “get all your stuff.” Aaron built a download tool that archives all the things you collected or created and packages them in a nice browser-friendly format. (Affectionately known as parallel-visit)

Improving Back-of-House Interactions – We spent a lot of time behind the visitor services desk trying to understand where the pain points were. This is an ongoing effort, which we have iterated on numerous times over the year, but this post recounts the first major change we made, and it made all the difference.

Collecting all the things – We realized pretty quickly that visitors might want to extend their experience after they’ve visited, or, more simply, save things on our website. So we added the idea of a “shoebox” so that visitors to our website could save objects, just as if they had a Pen and were in our galleries.

Label Writer – In order to deploy and rotate new exhibitions and objects, Sam built an Android-based application that allows our exhibition staff to easily program our NFC-based wall labels. This tool means any staff member can walk around with an Android device and reprogram any wall label using our API. Cool!

Improving visitor information with paper – Onboarding new visitors is a critical component. We’ve since iterated on this design, but the basic concept is still there–hand out postcards with visual information about how to use the Pen. It works.

Visual consistency – This has more to do with our collection’s website, but it applies to the Pen as well, in that it helps maintain a consistent look and feel for our visitors during their post visit. This was a major overhaul of the collections website that we think makes things much easier to understand and helps provide a more cohesive experience across all our digital and physical platforms.

Iterating the Post-Visit Experience – Another major improvement to our post-visit end of things. We changed the basic ticket design so that visitors would be more likely to find their way to their stuff, and we redesigned what it looks like when they get there.

Press and hold to save your visit – This is another experimental deployment where we are trying to find out if a new component of our visitor experience is helpful or confusing.

On Exhibitions and Iterations – Sam summarizes the rollout of a major exhibition and the changes we’ve had to make in order to cope with a complex exhibition.

Curating Exhibition Video for Digital Platforms – Lisa makes her Labs debut with this excellent article on how we are changing our video production workflow and what that means when someone collects an object in our galleries that contains video content.

The Big Numbers

Back in August we published some initial numbers. Here are the high-level updates.

Here are some of the numbers we reported in August 2015, covering March 10 to August 10, 2015:

  • Total number of times the Pen was distributed – 62,015
  • Total objects collected – 1,394,030
  • Total visitor-made designs saved – 54,029
  • Mean zero-collection rate – 26.7%
  • Mean time on campus – 99.56 minutes
  • Post-visit website retrieval rate – 33.8%

And here are the latest numbers, covering March 10, 2015 through March 9, 2016:

  • Total number of times the Pen was distributed – 154,812
  • Total objects collected – 3,972,359
  • Total visitor-made designs saved – 122,655
  • Mean zero-collection rate – 23.8%
  • Mean time on campus – 110.63 minutes
  • Post-visit website retrieval rate (measured Feb 25 to March 9, 2016) – 28.02%

That last number is interesting. A few weeks ago we added some new code to our backend system to better track this data point. Previously we had relied on Google Analytics to tell us what percentage of visitors access their post-visit website, but we found this to be pretty inaccurate. It didn’t account for multiple accesses of the same visit by multiple users (think social sharing of a visit), so the number was typically higher than what we thought reflected reality.

So, we are now tracking a visit page’s “first access” in code and storing that value as a timestamp. This means we now have a very accurate picture of our post visit website retrieval rate and we are also able to easily tell how much time there is between the beginning of a visit and the first access of the visit website–currently at about 1 day and 10 hours on average.
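The tracking itself is deliberately simple: stamp the visit record the first time its page is served and never overwrite it. A sketch of the idea, with invented names:

```python
from datetime import datetime, timezone

def record_first_access(visit):
    """Stamp a visit the first time its post-visit page is served.

    Later views (social shares, repeat visits) leave the timestamp
    alone, so each visit counts once toward the retrieval rate."""
    if visit.get("first_access") is None:
        visit["first_access"] = datetime.now(timezone.utc)
    return visit
```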

The Pen generates a massive amount of data. So, we decided to publish some of the higher level statistics on a public webpage which you can always check in on at https://collection.cooperhewitt.org/stats. This page reports daily and includes a few basic stats including a list of the most popular objects of all time. Yes, it’s the staircase models. They’ve been the frontrunners since we launched.

Those staircase models!

As you can see, we are just about to hit the 4 million objects collected mark. This is pretty significant and it means that our visitors on average have used the Pen to collect 26 objects per visit.
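That per-visit average falls straight out of the topline numbers:

```python
objects_collected = 3_972_359
pen_checkouts = 154_812
print(objects_collected / pen_checkouts)  # ~25.7, i.e. about 26 objects per visit
```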

But it’s hard to gain a real sense of what’s going on if you just look at the high-level numbers, so let’s track some things over time. Below is a chart that shows objects collected by day for the last year.


Objects collected by day since March 10, 2015

On the right you can easily see a big jump. This corresponds with the opening of the exhibition Beauty–Cooper Hewitt Design Triennial. It’s partly due to increased visitation following the opening, but what’s really going on here is a heavy use of object bundling. If you follow this blog, you’ll have recently read the post by Sam where he talks about the need to bundle many objects on one tag. This means that when a visitor taps his or her pen on a tag, they very often collect multiple objects. Beauty makes heavy use of this feature, bundling a dozen or so objects per tag in many cases and resulting in a dramatic increase in collected objects per day.

Pen checkouts per day since March 10, 2015

We can easily see that this is, in fact, what is happening if we look at our daily Pen checkouts. Here we see a reasonable increase in checkouts following the launch of Beauty, but it’s not nearly as dramatic as the increase in the number of objects being collected each day.


Immersion room creations by day since March 10, 2015

Above is a chart that shows how many designs were created in the immersion room each day over the past year. It, too, is directly connected to the number of visitors we have, but it’s interesting to see its volume over this period of time. The immersion room is one of our more popular interactive installations and it has been on view since we launched, so it’s not a big surprise that it has a pretty steady curve. Also, keep in mind that this only represents “things saved,” as we are not tracking the thousands of drawings that visitors make and walk away from.

We can slice and dice the Pen data all we want. I suppose we could take requests. But I have a better idea.

Open Data

Today we are opening up the Pen Data. This means a number of things, so listen closely.

  1. The data we are releasing is an anonymized and obfuscated version of some of the actual data.
  2. If you saved your visit to an account within thirty days of this post (and future data updates) we won’t include your data in this public release.
  3. This data is being licensed under Creative Commons – Attribution, Non-Commercial. This means a company can’t use this data for commercial purposes.
  4. The data we are releasing today is meant to be used in conjunction with our public domain collection metadata or our public API.

The data we are releasing is meant to facilitate the development of an understanding of Cooper Hewitt, its collection and interactive experiences. The idea here is that designers, artists, researchers and data analysts will have easy access to the data generated by the Pen and will be able to analyze it and create data visualizations so that we can better understand the impact our in-gallery technology has on visitors.

We believe there is a lot more going on in our galleries than we might currently understand. Visitors are spending incredible amounts of time at our interactive tables, and have been using the Pen in ways we hadn’t originally thought of. For example, we know that some visitors (children especially) try to collect every single object on view. We call these our treasure hunters. We also know that a percentage of our visitors take a Pen and don’t use it to collect anything at all, though they tend to use the stylus end quite a bit. Through careful analysis of this kind of data, we believe that we will be able to begin to uncover new behavior patterns and aspects of “collecting” we haven’t yet discovered.

If you fit this category and are curious enough to take our data for a spin, please get in touch; we’d love to see what you create!

Curating Exhibition Video for Digital Platforms

First, let me begin this post with a hearty “hello”! This is my first Labs blog post, though I’ve been on board with the Digital and Emerging Media team since July 2015 as Media Technologist. Day-to-day I participate in much of the Labs activity that you’ve read about here: maintaining and improving our website; looking for ways to enhance visitor experience; and expanding the meaningful implementation of technology at Cooper Hewitt. In this post I will focus on the slice of my work that pertains to video content and exhibitions.

Detail: Brochure, Memphis (Condominiums): Portfolio, 1985

The topic of exhibition video is fresh in my mind since we are just off the installation of Beauty—Cooper Hewitt Design Triennial. This is a multi-floor exhibit that contains twenty-one videos hand-picked or commissioned by the exhibition curators. My part in the exhibition workflow is to format, brand, caption and quality-check videos, ushering them through a production flow that results in their display in the galleries and distribution online. Along with the rest of the Labs team, I also advise on the presentation and installation of videos and interactive experiences in exhibitions and on the web, and help steer the integration of Pen functionality with exhibition content. This post gathers some of my video-minded observations collected on the road to installing Beauty.

The Beauty curators and the Labs team came together when content for the show began to arrive—both loans of physical objects and digital file transfers. At this time, my video workflow shifted into high gear, and I began to really see the landscape of digital content planned for the exhibit. Videos in Beauty fall into roughly two categories: those that are the primary highlighted object on display and those that supplement the display of another object. Sam Brenner recently posted about reformatting our web presentation of video content when it stands in as primary collection object and has a medium that is “video, animation or other[wise] screen-based.” This change was a result of thinking through the flag we raised earlier for the curators around linking collection records to tags, i.e. “what visitors get when they collect works with the Pen.” As has been mentioned before on this blog, the relationship of collecting points (NFC tags) to collection objects does not need to be one-to-one; Beauty expanded our exploration of the tags-to-collection records relationship in a few interesting ways.

When visitors collect at the Neri Oxman tag they save a cluster of collections database records, including 12 glass vessels and a video.

In the Beauty exhibition, collecting points are presented uniformly: one tag in each object label. Additionally, tags positioned beside wall text panels allow visitors to save chunks of written exhibition content. The curatorial format of the Triennial exhibition, organized around designers (sometimes with multiple works on display), encouraged us to think carefully about the tag-collecting relationship. I was impressed to see the curators curating the Pen experience, sending notes to me along with each video, like “the works in the show are jewelry pieces; the video will supplement,” “video is primary object; digital prints supplement,” and “video clips sequenced together for display but each video is separately collectible.” They were really thinking about the user flow of the Pen and the post-visit experience, extending their role in organizing and presenting information to all aspects of the museum experience.

Another first in the Beauty exhibition is the video content created specifically for interactive tables. With the curators’ encouragement, the designers featured in the exhibition considered the tables as a unique environment to present bonus content. For example, Olivier van Herpt provided a video of his 3D printer at work on the ceramic vessels on display in the exhibition. It was interesting to see the possibilities that the tables and post-visit outlets opened up—for one thing, the quality standards can be more relaxed for videos shown outside the monitors in the galleries. Also notable is the fact that the Beauty curators selected behind-the-scenes-type videos for tables and post-visit, suggesting that these outlets make room for content that might not typically make it onto gallery walls.

The video “3D Printed Ceramic Process” by Olivier van Herpt is an example of behind-the-scenes video content that was made for tables and website display only.

The practical fallout on my end was that these supplemental videos added to an already video-heavy exhibition, putting increased pressure on the video workflow. In turn, this revealed a major lack of optimization. The diagram of my video workflow shows, for example, several repeated instances of formatting, captioning and exporting. Multiplied by twenty-one, each of these redundant procedures takes up significant time. Application of branding is probably the biggest time-hog in the workflow—all of it is done manually, and locking in the maker, credit line and video title with curators and designers is a substantial task. It’s funny: the amount of video content in Cooper Hewitt exhibitions is increasing and it’s receiving more attention from curators, but the supplementary videos in the galleries are not treated as first-order exhibition objects, so they don’t go through as rigorous a documentation process as other works in the show. Because of this, video-specific information required for my workflow remained in flux until the very last minute. Even the video content itself continued to shift as designers pushed past my deadlines to request more time to make changes and additions.

In truth, the deadlines related to in-gallery video content are much stricter than those for table/post-visit-only content because gallery videos require hardware installation. The environment of the tables and website affords continual change, but deadlines act as benchmarks to keep those interfaces stocked with new content that stays in sync with objects in the physical exhibition.

Exhibition Video Workflow

The workflow that videos follow to get to gallery screens, interactive tables and the collections website.

I maintain a spreadsheet to collect video information and keep order over my exhibition video workflow. These are the column headings.

All the steps and data points that need to be checked off in my exhibition video workflow.

By the exhibition opening, I had all video information confirmed, and all branding and formatting completed. The running spreadsheet I keep as a video to-do list was filled with check-marks and green (good-to-go) highlighting. I had created media records in TMS and connected them to exhibition videos uploaded to YouTube; this allows a script to pull in the embed code so videos appear within the YouTube player on the collections site. I also linked the media records to other database entries so that they would show up on the collections site in relation to other objects and people. For example, since I linked the “Afreaks Process” video record to all of the records for the beaded Afreak objects, the video appears on each object page, like the one for The Haas Brothers and Haas Sisters’ “Evelyn”. Related videos like this one (that are not the primary object) are configured to appear at the bottom of an object page with the language, “We have 1 video that features Sculpture, Evelyn, from the Afreaks series, 2015.” Since the video has its own record in the database, there is also a corresponding “video page” for the same clip that presents the video at the top with related objects in a grid view below. I also connected object records to database entries for people, ensuring that visitors who click on a name find videos among the list of associated objects.
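The script’s work is mostly string assembly: given the YouTube ID stored with a TMS media record, emit the player embed for the collections site. A sketch with an invented record shape:

```python
def youtube_embed(media_record, width=640, height=360):
    """Build an iframe embed from the YouTube ID stored on a media record."""
    video_id = media_record["youtube_id"]  # assumption: where the ID is stored
    return (
        f'<iframe width="{width}" height="{height}" '
        f'src="https://www.youtube.com/embed/{video_id}" '
        f'frameborder="0" allowfullscreen></iframe>'
    )
```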

Screenshot of Haas Brothers record webpage

The webpage for the Haas Brothers record includes a video among the related objects.

It is highly gratifying to seed videos into this web of database connections. The content is so rich and so interesting that it really enhances the texture of the collections site and of exhibitions. Cooper Hewitt curators demonstrated their appreciation for the value of video by honoring video works as primary objects on display. They also utilized video in a demonstrative way to enhance the presentation of highlighted works. Beauty opened the doors for curating video works on interactive tables, and grouping videos in with clusters of data linked to collecting points (aka. tags). I’m pleased with the overall highly integrated and considered take on video content in the latest exhibition, and I hope we can push the integration even further as the curators become increasingly invested in adapting their practice to the extended exhibition platforms we have in place like tables, tags and web.

On Exhibitions and Iterations

Since reopening in December 2014, we’ve found that an upcoming exhibition opening is a big driver of iteration. Preparing an exhibition involves the whole museum and is one of the most coordinated and planned-out things we do, and because of this, new exhibitions push us to improve in a number of ways.

First, new exhibitions can highlight existing gaps or inefficiencies in our systems. Our tagging tool, for example, always sees a round of bug fixes or new features before an exhibition because it coincides with a time when it will see heavy use. Second, exhibitions present us with new technical challenges. Objects in the Heatherwick exhibition, for example, were displayed in the galleries grouped into “projects,” which is also how we wanted users to collect them with their Pens and view them on the website. To accomplish this we had to figure out a way that TMS, our collections management software, could store both the individual objects (for internal purposes) and the grouped projects (which would hold all the public-facing images and text), and figure out how to see that through to the website in a way that made registrars, curators and ourselves comfortable. Finally, a new exhibition can present an opportunity for experimentation. David Adjaye Selects gave us the opportunity to scale up Object Phone, a telephone-based riff on the audio guide, which originally started as a small, rough prototype.

Last week was the opening of our triennial exhibition “Beauty,” which similarly presented us with a number of technical challenges and opportunities to experiment. In this post I’ll share some of those challenges and the work we did to approach them.

Collecting Exhibition Text

Triennial’s wall text, with the collect icon in the lower-right corner

Since the beginning of the pen project we’ve been saying that the Pens don’t just have to collect objects. Aaron and Seb wrote in their paper on the project that “nothing would prevent the museum from allowing visitors to ‘collect’ individual designers, entire exhibitions or even architectural elements from the building itself in the future.” To that end, we’ve experimented with collecting shop items and decided that with the triennial we would allow visitors to collect exhibition text as well.

Exhibition text (in museum argot, “A-Panel” is the main text at the beginning of an exhibition and “B-Panels” are any additional texts you might find along the way) makes total sense as something that a visitor should be able to remember for later. It explains and contextualizes an exhibition’s goals, contents and organization. We’ve had the text on our collections site since we reopened, but it took a few clicks to get to from a visitor’s post-visit website. Now, the text will be right there alongside all of a visitor’s objects.

The exhibition text on a post-visit website

The open-ended part of this is what visitors will expect when they collect an “exhibition.” We installed the collection points with no helper text, i.e. it doesn’t say “press here to collect this exhibition’s text.” We think it’s clear that the crosshairs refer to the text, but one of our original ideas was that we could have a way for the visitor to automatically collect every object in the exhibition and I wonder if that might be the implied function of the text tag. We will have to observe and adapt accordingly on that point.

Videos Instead of Images

When we first added videos to our collections site, we found that the fastest way to accomplish what we needed was to use TMS for relating videos to objects but use custom software for the formatting and uploading of the videos. We generate four versions of every video file — subtitled and not subtitled at two resolutions each — which we use in the galleries, on the tables and on the website. One of the weaknesses of this pipeline is that because the videos don’t live in the usual asset repository the way all of our images do, the link between TMS and the actual file’s location is made by nothing more than a “magic string” and a bit of guesswork. This makes it difficult to work with the video records in TMS: users get no preview and it can be difficult to know which video ID refers to which specific video. All of this is something we’ll be taking another look at in the near future, but there is one small chunk of this problem we approached in advance of the Triennial: how to make our website show the video in place of the primary image if it would be more appropriate to do so.

Here’s an example. Daniel Brown’s On Growth and Form is an animation on display in the Triennial. Before, it would have looked like this — the primary image is a still rendering that has been added in TMS, and the video appears as related content further down the page.

[Screenshot: the On Growth and Form object page, with a still rendering as the primary image and the video as related content further down]

What we did is say: if the object is itself a video, animation or other screen-based media, and we have an associated video record linked to the object, remove the primary image and put the video there instead. That looks like this:

[Screenshot: the same object page with the video shown in place of the primary image]
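In template terms, the change is a single conditional. Here is a sketch of the logic in Python rather than our actual template code, with invented field names:

```python
SCREEN_BASED = ("video", "animation", "screen-based")

def primary_asset(obj, videos):
    """Use a linked video as the primary asset when the object itself is
    screen-based; otherwise fall back to the primary image."""
    is_screen_based = any(term in obj["medium"].lower() for term in SCREEN_BASED)
    if is_screen_based and videos:
        return {"kind": "video", "asset": videos[0]}
    return {"kind": "image", "asset": obj["primary_image"]}
```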

Like all good iterations, this one opened up a bunch of next steps. First, we need to figure out how to add videos into our main digital asset pipeline so that the guesswork can be removed from picking primary videos and a curator or image specialist can select one as “primary” the same way they would with an image. Next, it brought up an item that’s been on the backburner for a while, which is a better way to display alternate images of an object. Currently, they have their own page, which gets the job done, but it would be nice to present some alternate views on the main object page as well.

Just a Reflektor Sandbox

It’s fun!

We had a great opportunity to do some experimentation on our collections site due to the inclusion of Aaron Koblin and Vincent Morisset’s interactive video for Arcade Fire’s Just a Reflektor. The project’s source code is already available online and contains a “sandbox” environment, a tool that demonstrates some of the interactive visual effects created for the music video in a fun, open-ended environment. We were able to quickly adapt the sandbox’s source code to fit on our collections site so that visitors who collect the video with their Pen will be able to explore a more barebones version of the final interactive piece. You can check that out here.

Fully Loaded Labels

When we were working on the Pen prototypes, we tried six different NFC tags before getting to the one that met all of our requirements. We ended up with these NTAG203 tags, whose combination of size and antenna design made them work well with our Pens and our wall labels. Their onboard memory of 144 bytes, combined with the system we devised for encoding collection data on them, meant that we could store a maximum of 11 objects on a tag. Of course we didn’t see that ever being a problem… until it was. The labels in the Triennial are grouped by designer, not by object, and in some cases we have 35 objects from a designer on display that all need to be collected with one Pen press. There were two solutions: find tags with more memory (aka “throw more hardware at it”) or figure out a new way to encode the tags using fewer bytes and update the codebase to support both the new and old ways (aka “maintenance nightmare”). Fortunately for us, the NTAG216 series of tags, which features 888 bytes of memory (enough for around 70 objects on a tag), has become more commonly available in the past year. After a few rounds of end-to-end testing (writing the tag, collecting it with a Pen and having it show up on the post-visit website), we rolled the new tags out to the galleries for the dozen or so “high capacity” labels.
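The arithmetic is simple enough to sketch. The 13-byte record size below is an assumption chosen to reproduce the numbers above; the actual on-tag encoding isn’t documented here:

```python
# Back-of-the-envelope tag capacity. The 13-byte record size is an
# assumption chosen to reproduce the numbers above; the actual on-tag
# encoding isn't documented here.
RECORD_BYTES = 13  # hypothetical fixed-width record per collected object

for name, usable_bytes in [("NTAG203", 144), ("NTAG216", 888)]:
    print(f"{name}: up to {usable_bytes // RECORD_BYTES} objects per tag")

# NTAG203: up to 11 objects per tag
# NTAG216: up to 68 objects per tag ("around 70")
```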

The new tag (smaller, on the left) and the old tag (right)

The most interesting iteration that’s been made overall, I think, is how our exhibition workflow has changed over time to accommodate the Pen. With each new exhibition, we take what sneaked up on us the last time and try to anticipate it. Beauty, the most recent exhibition, had more digitally focused milestones in its timeline from the outset than any exhibition yet. Not only did this allow us to anticipate the tag capacity issue many months in advance, it also gave us more time to double-check and fix small problems in the days before opening, and to try new, experimental approaches to the collections website and post-visit experience. We’re all excited to keep this momentum going as work ramps up on the next exhibitions!

 

Press and hold to save your visit


During the development of a major project it’s inevitable that certain features just won’t make it into production in time for launch. Sometimes things fall out of scope, or they get left off the priority list and put on the back burner. Hopefully those features aren’t critical to the project, but invariably they were good ideas, and in many cases they warrant a revisit sometime down the road, when the dust has settled.

At Cooper Hewitt, one such “feature” was affectionately known as the “transfer stations.” The basic idea was that throughout the museum there would be small kiosk-like stations where visitors could tap their Pens and “transfer” the data from their Pen to our database, so that they could immediately see their collections from a mobile phone.


It was a nice idea, and we began to implement it, collecting specs and designing the steel stands that would support the kiosks. Eventually the parts showed up, but by that time we were pretty knee-deep in launching the Pen and the museum, so the transfer station idea got set aside.

Fast forward to today, nearly a year since the Pen went into visitors’ hands, and we’ve been thinking about how we can better onboard our visitors and how we can remind them that there is something to do after they leave the museum. It’s a complex problem that we’ve tried to address in several ways.

  1. Visitors arriving at the museum typically don’t know anything about the Pen. At our Visitor Experience desk, our staff are trained to quickly teach each visitor what they can do with the Pen, while in the background processing their orders and “pairing” each Pen to a ticket. It’s a critical part of the process and one we’ve spent a good deal of time optimizing.
  2. While visitors are waiting in line, there is an opportunity to help people learn about the Pen. We have a looping video playing in that spot that tries to do this job visually, and additionally, we have small postcards available that explain things further.
  3. On the way out the door, visitors are reminded that they should hold on to their tickets. This is supposed to happen at the door, in the moment when they are returning their Pen.

There are lots of other visual cues and verbal reminders happening while you walk through the galleries, but no matter what, we find ticket stubs left behind. We know from our data that lots of our visitors are checking out their post-visit websites, but maybe we can do better. Also, part of the whole concept behind the Pen is that you “can” look at your collection from your mobile phone right away–we should make that happen more seamlessly.

Technically speaking, the transfer stations have been in play all along. When you walk up to one of our interactive tables and “dock” your Pen, we read all the data on your Pen ( the things you’ve collected so far ) and store it in our database. So, if you just keep walking up to tables and docking your Pen, you’d be able to visit your collection on your mobile phone–no problem. But this doesn’t really do a great job of reminding you, or even letting you know that it’s possible. The tables are about browsing the collection, and that’s pretty much what their UI describes.

Also, we’ve been using an early version of the transfer station behind the scenes to do a final dump of your Pen after you’ve left. This is so that in case you collected objects and didn’t go to one of the tables, you’ll be okay.

All along though, a few of us have been a little skeptical of the function and design of the transfer stations. Will they just create confusion with the visitor? Are they even necessary? Should they have a responsive visual user interface? To get to the bottom of some of these questions, well, we need to birth something into the world and see how it goes.

To get started, we chose to deploy two transfer stations in two areas on the second floor. There was a good deal of work that needed to happen. The transfer station parts needed to be identified, assembled and configured. We’d need to set up their built-in Raspberry Pi computers to behave properly, and we’d need to work through their connection to power and network within the galleries. Enter Mary Fe! She is our Gallery Technologist, the person you might see performing maintenance on some part of the technology throughout the galleries the next time you visit Cooper Hewitt. Mary Fe is the person who shows up at 8am before the museum opens to make sure everything is working and looking good.

I asked Mary Fe to work on this project from start to finish, and she’s written up a little documentation on how things went. She says:

I was called in to *clone* the existing and fully working Register station. The stations consist of Raspberry Pi mini computers connected to our museum network over ethernet and an NFC reader board, designed by Sistel Networks, that is able to download data from a Pen. The Raspberry Pi is mounted in the base of the extremely heavy stand you see in the photo below, and its corresponding NFC reader board is located at the top, behind the “plus” icon.

One of the transfer station stands in the galleries.

What we’d need

  • Raspberry Pi units programmed to save only ( vs. save and check in a Pen )
  • Functioning data and power at the locations where we wish to deploy the stands
  • The stands
  • Easy to understand signage

The first part was pretty easy to accomplish. I began by cloning SD cards for use with the Raspberry Pi. These had to be configured to “save” Pen data, and not “check in” the Pen so that the visitor could continue their visit, saving as many times as they like. After cloning, we assigned new names and unique IPs to the Raspberry Pi stations.
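As a sketch of the difference between the two modes, something like the following; the endpoints and payloads are hypothetical stand-ins for our internal API, not its real interface:

```python
# A sketch of the two station behaviors; the endpoints and payloads are
# hypothetical stand-ins for the museum's internal API.
import requests

API = "https://api.example.org"  # stand-in for the internal API host

def handle_pen_dock(pen_id, collected_items, check_in=False):
    # Every station uploads whatever the Pen has collected so far,
    # so the visit shows up on the post-visit website right away.
    requests.post(f"{API}/visits/save",
                  json={"pen": pen_id, "items": collected_items})
    # Only the register stations also mark the Pen as returned; the
    # transfer stations skip this so visitors can keep collecting.
    if check_in:
        requests.post(f"{API}/pens/check-in", json={"pen": pen_id})

# Transfer station: handle_pen_dock(pen_id, items)                 (save only)
# Register station: handle_pen_dock(pen_id, items, check_in=True)
```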

I ran into a little trouble when I started testing the data ports. Long story short, I had to learn how to tone/probe data ports from their locations in the galleries to their corresponding positions in the network closet. After a good deal of troubleshooting with our IT specialists in DC, the ports came to life and we assigned their IP addresses.

Once the ports were figured out and the Raspberry Pis were set up and configured, everything began to work. Right away I noticed visitors starting to use the transfer stations.

What’s next

We have more transfer stations waiting in the wings. However, as I mentioned above, deploying these two transfer stations is sort of an experiment. We want to watch and see how visitors react to them, whether they cause more confusion, and how often they get used.

We’ve already been thinking of ways we might incorporate a screen to add a visual user interface to the stations. Perhaps a more guided experience would get visitors more involved with them. What kinds of problems will introducing a screen add to the device? Maybe we should think about e-ink, a touch screen, or a thermal printer? It’s hard to say at this point.

The next step is to collect some visitor feedback, look at the data, and start prototyping new ideas.


Getting the IT department your museum needs

You may have noticed we have a new position open for an IT Specialist. It will be open for the next three weeks, so if you are at all interested, please read through this post first and then go apply before the cutoff. This is a really exciting position, and so I thought it might be worth it to try and explain why, as the job description doesn’t exactly “get into it.”

But first, a little background. Cooper Hewitt has gone through a real, honest-to-goodness digital transformation. For the last four years, while the doors were closed, the museum underwent a complete restoration and renovation of its physical building and at the same time completely overhauled all things technology and digital throughout the museum and online. We built some groundbreaking stuff, and when we finally reopened just one year ago, we delivered a truly amazing experience–the likes of which we haven’t seen too much of, up until now.

Okay, so this is a pitch. I’m trying to draw you in. It’s such a great place to work, and we do really awesome stuff all day long, every single day. We rock! Are you hooked yet? Just watch the video below and you’ll understand.

But, really, we have done some pretty amazing things at Cooper Hewitt, and all the while with minimal staff resources and short deadlines. If you are a museum person, you’ve probably read the articles. If you are a design or technology person, same goes for you. Across the board, each and every staff person has had to morph into something new, something present in the digital world. In many cases, without even realizing it, every aspect of Cooper Hewitt has transformed.

Looking back, we were able to do a lot of this work because of a lot of incredible coincidences, generous people, and dedicated staff members. And, although many of us were annoyed or uncomfortable at the time, I think we can all see the payoff now. We all seem to now understand the vision behind it all. It is all starting to fall into place for many of us. I’m starting to get teary-eyed, so I’ll move on.

As you may imagine, our needs for IT infrastructure have had to keep in lockstep with all of the innovation we’ve imposed along the way. We’ve adopted a complete constituent relationship management system ( Tessitura ), which is directly connected to the way we sell tickets, which is directly connected to the way our visitors engage with our technology, which is directly connected to the way we analyze the visitor experience, which circles back to inform how we continue to develop these systems over and over again.

Our museum is online. The physical building is the number one consumer of our own API. When a visitor walks around the building, interacting with our interactives, our API is involved every step of the way. Sometimes I can feel the museum pulsating with activity–messages moving back and forth from the galleries to the cloud, from one server to another. It’s sort of beautiful if you think about it.

We’re proud to announce that Cooper Hewitt has achieved LEED Silver certification for our museum campus, including the Carnegie Mansion and adjoining Miller/Fox townhouses. The project to attain certification began with the renovation planning phase in 2006, and was supported by architect of record Beyer Blinder Belle, design architect Gluckman Mayner Architects, and Atelier 10, who joined the project in 2011. Developed by the U.S. Green Building Council, the LEED rating system is the foremost program for buildings, homes and communities that are designed, constructed, maintained and operated for improved environmental and human health performance. Cooper Hewitt was awarded certification for optimizing energy performance, purchasing green-e certified electricity supply, maintaining 95 percent of existing structure and envelope, water use reduction, community connectivity and public transportation access. #cooperhewitt #leedcertified

A photo posted by Cooper Hewitt (@cooperhewitt)

Of course with all of this comes an incredibly increased need for infrastructure management. We are no longer a museum that just requires some basic desktop support–someone to fix the projector bulb, or replace the toner cartridge and set up your network account. These days we are looking for creative people who can solve complex problems. We need the type of person who can look at our AWS account and explain to us how we could make better use of “reserved” or “spot instances”, come up with better ways for our developers to continuously test and deploy new code, and someone who can log on to any kind of machine and figure out what the problem is, what service is down, what needs to be patched, what log messages mean what.
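For a flavor of that kind of work, here’s a toy sketch of the reserved-instance question using boto3; the region is arbitrary and the billing nuances (availability-zone scope, platform, size flexibility) are deliberately ignored:

```python
# A toy version of the reserved-instance question: how much of our running
# EC2 fleet is covered by active reservations? Billing nuances are ignored.
from collections import Counter

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Count running instances by type.
running = Counter(
    inst["InstanceType"]
    for page in ec2.get_paginator("describe_instances").paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for res in page["Reservations"]
    for inst in res["Instances"]
)

# Count active reservations by type.
reserved = Counter()
for ri in ec2.describe_reserved_instances(
    Filters=[{"Name": "state", "Values": ["active"]}]
)["ReservedInstances"]:
    reserved[ri["InstanceType"]] += ri["InstanceCount"]

for itype in sorted(running):
    print(f"{itype}: running={running[itype]} reserved={reserved[itype]}")
```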

All design, all the time. ✒️ Regram @choppingblock #cooperhewitt

A photo posted by Cooper Hewitt (@cooperhewitt)

So how are things different these days? Well, first of all, the major shift is that IT is now a part of Digital & Emerging Media. They used to be two separate entities, with IT reporting to our Director of Operations. That made sense before the transformation, because IT was mainly about the day-to-day operational types of problems I’ve mentioned above. Ultimately, IT was in charge of making sure everyone’s computers were working and that there was toner in all the printers. The need for this still exists today, but it’s a much smaller part of the big picture. Our staff are far more self-sufficient now, and we are able to lean more heavily on our pseudo-outsourced desktop-support line through our mothership down in D.C. ( Did I mention we are part of the Smithsonian? )

Now, IT is a central player in the world of Digital & Emerging Media and it’s critical that we approach our new hires with this in mind. Like I said above, we need creative, curious souls who want to be part of something really exciting.

  • If you are obsessed with getting the last drop of performance out of 25 servers, please apply.
  • If you ran your own BBS in 1989, your own phpBB in 2005 or your own Discourse in 2016, please apply.
  • If you see equal design brilliance in both ESU 400 and RFC 2246, please apply.
  • If you like to $ sudo apt-get update; sudo apt-get upgrade -y, please apply.
  • If you prefer mapping a network drive over just using Dropbox… well… still apply and we’ll see…

OK, what next? Rush right over to the posting and apply. This position is a Federal job, which means it comes with all the benefits government workers receive, including a nice retirement plan, great health care and a free unlimited-ride subway card! But do be sure to put time and effort into the application. I know it may seem a little outdated ( trust me, this is something we need to work on in #GovClub ), but be sure to respond to each aspect of the application very carefully. And of course, if you have any questions about the position or the application itself, feel free to reach out–I’d love to know if you’ve applied.