Announcing the Digital Collection Materials Project

Cooper Hewitt, Smithsonian Design Museum is pleased to announce it will begin its first major initiative to address the conservation needs of digital materials in 2017. Supported by the Smithsonian Collections Care and Preservation Fund, the Digital Collection Materials Project will set standards, practices, and strategies related to digital materials in Cooper Hewitt’s permanent collection. Of the more than 210,000 design objects in the collection, it is estimated that roughly 150 items incorporate information conveyed in a digital form. Many of these objects are home and office electronics, personal computing and mobile devices, and media players with interfaces that span both hardware and software. Among the 150 items, there are also born-digital works: examples of design that originated in electronic form and are saved as digital data. These include both creative and useful software applications, as well as media assets, such as videos and computer-aided designs.

The first phase of the Digital Collection Materials Project will be the design and execution of a collection survey. The second phase will be case studies of select objects. The final phase will synthesize the survey results and case study findings in order to determine recommendations for a strategic plan of care, preservation, and responsible acquisition of digital materials for the collection.

The historical core of Cooper Hewitt, Smithsonian Design Museum’s collection comprises objects selected by the museum’s founders, Sarah and Eleanor Hewitt, to document outstanding technical and artistic accomplishments in the decorative arts. Established in 1897 as an educational resource for The Cooper Union for the Advancement of Science and Art, Cooper Hewitt’s collection continues to expand to encompass a range of materials exemplifying the broad category of human ingenuity and artistry that today we call design. The diversity of the museum’s collection exemplifies the core institutional belief that design is best understood through process, a framework that fosters understanding of human activity as it intersects with many materials and technologies, including the important fields of interface design, interaction design, and user experience design.

The Digital Collection Materials Project will help preserve long-term access to digital materials in the collection while maintaining the integrity of the designs they express. It will also allow Cooper Hewitt to move forward responsibly with acquisitions in the exciting realm of digital design. Since digital materials are especially vulnerable to the deleterious effects of technological obsolescence and decay, which can lead to inaccessibility and information loss, there is an urgent need to address their conservation. It is with an eye to these materials’ cultural significance and vulnerability that the museum moves forward with the Digital Collection Materials Project.

This project received Federal support from the Smithsonian Collections Care and Preservation Fund, administered by the National Collections Program and the Smithsonian Collections Advisory Committee.

Join Labs! Work with Digital Materials in the Collection

There is a goldmine of digital materials in Cooper Hewitt’s permanent collection—rarities like prototypes donated by interaction design pioneer Bill Moggridge; gaming classics like the Game Time wristwatch (which you should really see in action!); icons of product design like Apple’s iPhone; and artistic achievements in code by contemporary artist-designers like Aaron Koblin.

 Digital Project, Ten Thousand Cents, 2007–08; Designed by Aaron Koblin and Takashi Kawashima; USA; processing, adobe flash cs3, php/mysql, amazon mechanical turk, adobe photoshop, adobe after effects; Gift of Aaron Koblin and Takashi Kawashima; 2014-41-2; Object Record

And we need your help! We are looking for two ultra-talented and fearless media spelunkers to dive into the collection and surface all of the computer, product design, and interaction design history within. We want you to help research and invigorate this part of the collection so that we can share it with the world. It’s a noble cause, and one that will help give museum visitors an even better experience of design at Cooper Hewitt.

One Laptop Per Child XO Computer, 2007; Designed by Yves Béhar, Bret Recor and fuseproject; injection molded abs plastic and polycarbonate, printed rubber, liquid crystal display, electronic components; steel, copper wire (power plug); H x W x D (closed): 3.5 × 22.9 × 24.1 cm (1 3/8 in. × 9 in. × 9 1/2 in.); Gift of George R. Kravis II; 2015-5-8-a,b; Object Record

Project Positions

We are hiring for two contract positions: Media Preservation Specialist and Time-Based Media Curatorial Assistant. The contractors will work together on the first phase of the Digital Collection Materials Project to survey and document collection items. Check out the official project announcement below to understand the full scope of the project.

To Apply

To apply for the Media Preservation Specialist or Time-Based Media Curatorial Assistant position:

  1. Read the official project announcement.
  2. Download the Request for Proposal for the position to which you wish to apply:
  3. Follow the Proposal Submission Guidelines outlined in the Request for Proposal.
  4. Submit your proposal to cooperhewittdigital@si.edu by December 20, 2016.

Looking forward to seeing your applications—we can’t wait to partner with you for this important work!

SketchBot (USA), 2012; Industrial Design by Universal Design Studio (United Kingdom); aluminum, plastic, assorted electrical components, javascript, html, css and python source files; H x W x D: 137.2 × 137.2 × 137.2 cm (54 × 54 × 54 in.); Gift of Google Inc.; s-g-1; Object Record

This project received Federal support from the Smithsonian Collections Care and Preservation Fund, administered by the National Collections Program and the Smithsonian Collections Advisory Committee.

Process Lab: Citizen Designer Digital Interactive, Design Case Study

Fig. 1. Process Lab: Citizen Designer exhibition and signage, on view at Cooper Hewitt.

Background

The Process Lab is a hands-on educational space where visitors are invited to get involved in the design process. Process Lab: Citizen Designer complemented the exhibition By the People: Designing a Better America, exploring the poverty, income inequality, stagnating wages, rising housing costs, limited public transport, and diminishing social mobility facing America today.

In Process Lab: Citizen Designer participants moved through a series of prompts and completed a worksheet [fig. 2]. Selecting a value they care about, a question that matters, and design tactics they could use to make a difference, participants used these constraints to create a sketch of a potential solution.

Design Brief

Cooper Hewitt’s Education Department asked Digital & Emerging Media (D&EM) to build an interactive experience that would encourage visitors to learn from each other by allowing them to share and compare their participation in Process Lab: Citizen Designer.

I served as project manager and user-experience/user-interaction designer, working closely with D&EM’s developer, Rachel Nackman, on the project. Interface Studio Architects (ISA) collaborated on concept and provided environmental graphics.

Fig. 2. Completed worksheet with question, value and tactic selections, along with a solution sketch

Process: Ideation

Project collaborators—D&EM, the Education Department, and ISA—came together for the initial steps of ideation. Since the exhibition concept and design were well established by this time, it was clear how participants would engage with the activity. Through the process of using cards and prompts to complete a worksheet, they would generate several pieces of information: a value, a question, one to two tactic selections, and a solution sketch. The group decided that these elements would provide the content for the “sharing and comparing” specification in the project brief.

Of the participant-generated information, the solution sketch stood out as the only non-discrete element. We determined that given the available time and budget, a simple analog solution would be ideal. This became a series of wall-mounted display bins in which participants could deposit their completed worksheets. This left value, question, and tactic information to work with for the content of the digital interactive.

From the beginning, the Education Department mentioned a “broadcast wall.” Through conversation, we unpacked this term and found a core value statement within it. Phrased as a question, we could now ask:

“How might we empower participants to think about themselves within a community so that they can be inspired to design for social change?”

Framing this question allowed us to outline project objectives, knowing the solution should:

  • Help form a virtual community of exhibition participants.
  • Allow individual participants to see themselves in relation to that community.
  • Encourage participants to apply learnings from the exhibition to other communities.

Challenges

As the project team clarified project objectives, we also identified a number of challenges that the design solution would need to navigate:

  • Adding Value, Not Complexity. The conceptual content of Process Lab: Citizen Designer was complex. The design activity had a number of steps and choices. The brief asked that D&EM add features to the experience, but the project team also needed to mitigate a potentially heavy cognitive load on participants.
  • Predetermined Technologies. An implicit part of the brief required that D&EM incorporate the Pen into user interactions. Since the Pen’s NFC-reading technology is embedded throughout Cooper Hewitt, the digital interactive needed to utilize this functionality.
  • Spatial Constraints. Data and power drops, architectural features, and HVAC components created limitations for positioning the interactive in the room.
  • Time Constraints. D&EM had two months to conceptualize and implement a solution in time for the opening of the exhibition.
  • Adapting to an Existing Design. D&EM entered the exhibition design process at its final stages. The solution for the digital interactive had to work with the established participant-flow, environmental graphics, copy, furniture, and spatial arrangement conceived by ISA and the Education Department.
  • Budget. Given that the exhibition design was nearly complete, there was virtually no budget for equipment purchases or external resourcing.

Process: Defining a Design Direction

From the design brief, challenges, objectives, and requirements established so far, we could now begin to propose solutions. Data visualization surfaced as a potential way to fulfill the sharing, comparing, and broadcasting requirements of the project. A visualization could also accommodate the requirement to allow individual participants to compare themselves to the virtual exhibition community by displaying individual data in relation to the aggregate.

ISA and I sketched ideas for the data visualization [figs. 3 and 4], exploring a variety of structures. As the project team shared and reviewed the sketches, discussion revealed some important requirements for the data organization:

  • The question, value and tactic information should be hierarchically nested.
  • The hierarchy should be arranged so that question was the parent of value, and value was the parent of tactics.

Fig. 3. My early data visualization sketches

Fig. 4. ISA’s data visualization sketch

With this information in hand, Rachel proceeded with the construction of the database that would feed the visualization. The project team identified an available 55-inch monitor to display the data visualization in the gallery; oriented vertically, it could fit into the room. As I sketched ideas for data visualizations I worked within the given size and aspect ratio. Soon it became clear that the number of possible combinations within the given data structure made it impossible to accommodate the full aggregate view in the visualization. To illustrate the improbability of showing all the data, I created a leaderboard with mock values for the hundreds of permutations that result from the combination of 12 value, 12 question, and 36 tactic selections [fig. 5, left]. Not only was the volume of information overwhelming on the leaderboard, but Rachel and I agreed that the format made no interpretive meaning of the data. If the solution was to serve the project goal to “empower participants to think about themselves within a community so that they can be inspired to design for social change,” it needed to have a clear message. This insight led to a series of steps toward narrativizing the data with text [fig. 5].

Concurrently, the data visualization component was taking shape as an enclosure chart, also known as a circle packing representation. This format could accommodate both hierarchical information (nesting of the circles) and values for each component (size of the circles). With the full project team on board with the design direction, Rachel began development on the data visualization using the D3.js library.
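To make the data structure concrete, here is a rough Python sketch (the production visualization was built with D3.js, so this is purely illustrative, and the field names are assumptions) of how participant entries could be rolled up into the nested question > value > tactic hierarchy that an enclosure chart consumes:

```python
# Illustrative only: roll participant entries up into the nested
# question > value > tactic hierarchy that a circle-packing layout
# (such as d3.pack) consumes. Field names are assumptions.
from collections import defaultdict

def build_hierarchy(entries):
    """entries: iterable of dicts like {"question": "...", "value": "...", "tactics": ["..."]}."""
    tree = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))
    for e in entries:
        for tactic in e.get("tactics") or ["(none)"]:
            tree[e["question"]][e["value"]][tactic] += 1

    # Convert to the {"name": ..., "children": [...]} shape used by
    # hierarchical layouts, where the leaf "value" drives circle size.
    return {
        "name": "participants",
        "children": [
            {"name": question, "children": [
                {"name": value, "children": [
                    {"name": tactic, "value": count}
                    for tactic, count in tactics.items()
                ]}
                for value, tactics in values.items()
            ]}
            for question, values in tree.items()
        ],
    }
```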

Fig. 5. Series of mocks moving from a leaderboard format to a narrativized presentation of the data with an enclosure chart

Process: Refining and Implementing a Solution

Through parallel work and constant communication, Rachel and I progressed through a number of decisions around visual presentation and database design. We agreed that to enhance legibility we should eliminate tactics from the visualization and present them separately. I created a mock that applied Cooper Hewitt’s brand to Rachel’s initial implementation of the enclosure chart. I proposed copy that wrapped the data in understandable language and compared the latest participant to the virtual community of participants. I opted for percentage values to reinforce the relationship of individual response to aggregate. The design was black and white overall, with hot pink used to highlight the relationship between the text and the data visualization. A later iteration used pink to indicate all participant data points. I inverted the background in the lower quarter of the screen to separate tactic information from the data visualization, making it apparent that this data was not feeding into the enclosure chart, and I utilized tactic icons provided by ISA to visually connect the digital interactive to the worksheet design [fig. 2].

Next, I printed a paper prototype at scale to check legibility and ADA compliance. This let us analyze the design in a new context and invited feedback from officemates. As Rachel implemented the design in code, we worked with Education to hone the messaging through copy changes and graphic refinements.

Fig. 5. A paper prototype made to scale invited people outside the project team to respond to the design, and helped check for legibility

The next steps towards project realization involved integrating the data visualization into the gallery experience and into the web experience on collection.cooperhewitt.org, the collection website. The Pen bridges these two user-flows by allowing museum visitors to collect information in the galleries. The Pen is associated with a unique visit ID for each new session. NFC tags in the galleries are loaded with data by curatorial and exhibitions staff so that visitors can use the Pen to save information to its onboard memory. When they finish their visit, the Pen data is uploaded by museum staff to a central database that feeds into unique URLs established for each visit on the collection site.

The Process Lab: Citizen Designer digital interactive project needed to work with the established system of Pens, NFC tags, and collection site, but also accommodate a new type of data. Rachel connected the question/value/tactic database to the Cooper Hewitt API and collections site. A reader-board at a freestanding station would allow participants to upload Pen data to the database [fig. 6]. The remaining parts of the participant-flow to engineer were the presentation of real-time data on the visualization screen, and the leap from the completed worksheet to digitized data on the Pen.
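That leap from worksheet to Pen is easier to picture with a small example. The sketch below is hypothetical (the actual NFC payload format is not documented here); it simply shows how tag reads from the Pen might be bucketed into a question/value/tactic entry before being written to the database:

```python
# Hypothetical sketch: the tag identifier format ("kind:label") is an
# assumption for illustration, not the actual NFC payload.
def parse_pen_reads(tag_ids):
    """tag_ids: identifiers read from NFC tags under the input graphic."""
    entry = {"question": None, "value": None, "tactics": []}
    for tag in tag_ids:
        kind, _, label = tag.partition(":")   # e.g. "question:affordable-housing"
        if kind == "question":
            entry["question"] = label
        elif kind == "value":
            entry["value"] = label
        elif kind == "tactic":
            entry["tactics"].append(label)
    return entry
```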

Rachel found that her code could ping the API frequently to look for new database information to display on the monitor—this would allow for near real-time responsiveness of the screen to reader-board Pen data uploads. Rachel and I decided on the choreography of the screen display together: a quick succession of entries would result in a queue. A full queue would cycle through entries. New entries would be added to the back of the queue. An empty queue would hold on the last entry. This configuration assumed that if the queue was full when they added their entry, participants might not see their data immediately. We agreed to offload the challenge of designing visual feedback about the queue length and succession to a subsequent iteration in service of meeting the launch deadline. The queue length has not proven problematic so far, and most participants see their data on screen right away.
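The queue rules are easier to see in code. Here is a minimal Python sketch of the choreography described above (not Rachel's actual implementation):

```python
# Minimal sketch of the display queue rules: new entries join the back,
# a backlog is cycled through in order, and an empty queue holds on the
# last entry shown.
from collections import deque

class DisplayQueue:
    def __init__(self):
        self.queue = deque()
        self.last_shown = None

    def add_entry(self, entry):
        # Called when a participant uploads Pen data at the reader-board.
        # If there is a backlog, they may not see their data immediately.
        self.queue.append(entry)

    def next_to_display(self):
        # Called on each display cycle to decide what the screen shows.
        if self.queue:
            self.last_shown = self.queue.popleft()
        return self.last_shown
```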

Fig. 6. Monitor displaying the data visualization website; to the left is the reader-board station

As Rachel and I brought the reader board, data visualization database, and website together, ISA worked on the graphic that would connect the worksheet experience to the digital interactive. The project team agreed that NFC tags placed under a wall graphic would serve as the interface for participants to record their worksheet answers digitally [fig. 7].

Fig. 7. ISA-designed “input graphic” where participants record their worksheet selections; NFC tags beneath the circles write question, value and tactic data to the onboard memory of the Pen

Process: Installation, Observation & Iteration

Rachel and I had the display website ready just in time for exhibition installation. Exhibitions staff and the project team negotiated the placement of all the elements in the gallery. Because of obstacles in the room, as well as data and power drop locations, the input wall graphic [fig. 7] had to be positioned apart from the reader-board and display screen. This was unfortunate given the interconnection of these steps. Also non-ideal was the fact that ISA’s numeric way-finding system omitted the step of uploading Pen data at the reader-board and viewing the data on-screen [fig.1]. After installation we had concerns that there would be low engagement with the digital interactive because of its disconnect from the rest of the experience.

As soon as the exhibition was open to the public we could see database activity. Engagement metrics looked good, with 9,560 instances of use in the first ten days. The quality of those interactions, however, was poor. Only 5.8% satisfied the data requirements written into the code. The code was looking for at least one question, one value, and one tactic in order to process the information and display it on-screen. Any partial entries were not counted.
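In rough terms, the completeness rule worked like the sketch below (field names are assumptions; the real check lived in the display code):

```python
# An entry only counts if it has at least one question, one value, and
# one tactic; partial entries are ignored.
def is_complete(entry):
    return bool(entry.get("question")) and bool(entry.get("value")) and bool(entry.get("tactics"))

def completion_rate(entries):
    complete = sum(1 for e in entries if is_complete(e))
    return complete / len(entries) if entries else 0.0

# completion_rate(first_ten_days) came out to roughly 0.058 (5.8%)
```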

Fig. 8. A snippet of database entries from the first few days of the exhibition showing a high number of missing question, value and tactic entries

Conclusion

The project team met the steep challenges of limited time and budget—we designed and built a completely new way to use the Pen technology. High engagement with the digital interactive showed that what we created was inviting, and fit into the participatory context of the exhibition. Database activity, however, showed points of friction for participants. Most had trouble selecting a question, value, and tactic on the input graphic, and most did not successfully upload their Pen data at the reader-board. Stringent database requirements added to the difficulty.

Based on these observations, it is clear that the design of the digital interactive could be optimized. We also learned that some of the challenges facing the project could have been mitigated by closer involvement of D&EM with the larger exhibition design effort. Our next objective is to stabilize the digital interactive at an acceptable level of usability. We will continue observing participant behavior in order to inform our next iterations toward a minimum viable product. Once we meet the usability requirement, our next goal will be to hand off the interactive to gallery staff for continued maintenance over the duration of the exhibition.

As an experience, the Process Lab: Citizen Designer digital interactive has a ways to go, but we are excited by the project’s role in expanding how visitors use the Pen. This is the first time that we’ve configured Pen interactivity to allow visitors to input information and see that input visualized in near real-time. There’s significant potential to reuse the infrastructure of this project in a different exhibition context, adapting the input graphic and data output design to a new educational concept, and the database to new content.

Mass Digitization: Digital Asset Management

This is part two in a series on digitization. My name is Allison Hale, Digital Imaging Specialist at Cooper Hewitt. I started working at the museum in 2014 during the preparations for a mass digitization project. Before the start of digitization, there were 3,690 collection objects that had high-resolution, publication-quality photography. The museum has completed phase two of the project and has photographed more than 200,000 collection objects.


Prior to DAMS (Digital Asset Management System), image files were stored on optical discs and RAID storage arrays. This was not an ideal situation for our legacy files or for a mass digitization workflow. There was a need to connect image assets to the collections database, The Museum System, and to deliver files to our public-facing technologies.

Vendor Server to DAMS Workflow

The Smithsonian’s DAMS Team and Cooper Hewitt staff worked together to build a workflow that could be used daily to ingest hundreds of images. The images moved from a vendor server to the Smithsonian’s digital repository. Preparation for the project began with five months of planning, testing, and upgrades to network infrastructure to increase efficiency. During mass digitization, four Cooper Hewitt staff members shared the responsibility for daily “ingests,” or uploads of assets to DAMS. Here is the general workflow:

Cooper Hewitt to DAMS workflow.

  • Images are stored by the vendor on a staging server, bucketed into folders titled with shoot date and photography station. The vendor delivers 3 versions of each object image in separate folders: RAW (proprietary camera format or DNG file), TIF (full frame/with image quality target), JPG (full-scale, cropped and ready for public audience)
  • Images are copied from the server into a “hot folder”–a folder that is networked to DAMS. The folder contains a temporary transfer area and separate active folders called MASTER, SUBFILE, and SUB_SUBFILE
  • Once the files have moved to the transfer area, the RAW files move to the MASTER folder, the TIF files to the SUBFILE folder, and the JPG files to the SUB_SUBFILE folder. The purpose of the MASTER/SUB/SUB_SUB structure is to keep the images parent-child linked once they enter DAMS. The parent-child relationship keeps files “related” and indexable
  • An empty text file called “ready.txt” is put into the MASTER folder. Every 15 minutes a script runs to search for the ready.txt file, which signals that the batch is ready for ingest (a simplified sketch of this step follows the list)
  • Images are ingested from the hot folder into the DAMS storage repository
  • During the initial setup, the DAMS user created a “template” for the hot folder. The template automatically applies bulk administrative information to the image’s DAMS record, as well as categories and security policies
  • Once the images are in DAMS, security policies and categories can be changed to allow the images to be pushed to TMS (The Museum System) via CDIS (Collection Dams Integration System) and IDS (Image Delivery Service)
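Below is a simplified Python sketch of the hot-folder step referenced in the list. The paths, extensions, and sentinel handling are assumptions based on the workflow description, not the production script:

```python
# Simplified sketch of the hot-folder step: sort each delivered rendition
# into its DAMS-facing folder and drop the empty ready.txt sentinel that
# the scheduled ingest script (run every 15 minutes) looks for.
# Paths and extensions are assumptions.
import shutil
from pathlib import Path

HOT_FOLDER = Path("/mnt/dams_hot_folder")          # assumed mount point
DESTINATIONS = {
    ".dng": HOT_FOLDER / "MASTER",       # RAW masters (or proprietary camera format)
    ".tif": HOT_FOLDER / "SUBFILE",      # full-frame TIFs with image quality target
    ".jpg": HOT_FOLDER / "SUB_SUBFILE",  # cropped, public-ready JPGs
}

def stage_batch(transfer_area: Path):
    for image in transfer_area.rglob("*"):
        dest = DESTINATIONS.get(image.suffix.lower())
        if dest:
            dest.mkdir(parents=True, exist_ok=True)
            shutil.move(str(image), str(dest / image.name))
    # The empty sentinel tells DAMS that this batch is ready for ingest.
    (HOT_FOLDER / "MASTER" / "ready.txt").touch()
```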

DAMS to TMS and Beyond: Q&A with Robert Feldman

DAMS is repository storage and is designed to interface with databases. A considerable amount of planning and testing went into connecting mass digitization images to Cooper Hewitt’s TMS database. This is where I introduce Robert Feldman, who works with Smithsonian’s Digital Asset Management Team to manage all aspects of CDIS—Collection Dams Integration System. Robert has expertise in software development and systems analysis. A background in the telecommunications industry and experience with government agencies allows him to work in a matrixed environment while supporting many projects.

AH: Can you describe your role at Smithsonian’s DAMS?

RF: As a member of the DAMS team, I develop and support CDIS (Collection-DAMS Integration System). My role has recently expanded to creating and supporting new systems that assist the Smithsonian OCIO’s goal of integrating the Smithsonian’s IT systems beyond the scope of CDIS. One of these additional tools is VFCU (Volume File Copy Utility). VFCU validates and ingests large batches of media files into DAMS. CDIS and VFCU are coded in Java, and make use of Oracle and SQL-Server databases.

AH: We understand that CDIS was written to connect images in DAMS to the museum database. Can you tell us more about the purpose of the code?

RF: The primary idea behind CDIS is to identify and store the connection between the image in DAMS and the rendition in TMS. Once these connections are stored in the CDIS database, CDIS can use these connections to move data from the DAMS system to TMS, and from TMS to DAMS.

AH: Why is this important?

RF: CDIS provides the automation of many steps that would otherwise be performed manually. CDIS interconnects ‘all the pieces together’. The CDIS application enables Cooper Hewitt to manage its large collection in the Smithsonian IT systems in a streamlined, traceable and repeatable manner, reduces the ‘human error’ element, and more.

AH: How is this done?

RF: For starters, CDIS creates media rendition records in TMS based on the image in DAMS. This enables Cooper Hewitt to manage these renditions in TMS hours after they are uploaded into DAMS and assigned the appropriate DAMS category.

CDIS creates the media record in TMS by inserting new rows directly into 6 different tables in the TMS database. These tables hold information pertaining to the Media and Rendition records and the linkages to the object record. The thumbnail image in TMS is generated by saving a copy of the reduced-resolution image from DAMS into the database field in TMS that holds the thumbnail, and a reference to the full-resolution image is saved in the TMS MediaFiles table.

This reference to the full-resolution image consists of the DAMS UAN (the Unique Asset Name – a unique reference to the particular image in DAMS) appended to the IDS base pathname. By stringing together the IDS base pathname with the UAN, we have a complete URL pointing to the IDS derivative that is viewable in any browser.
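Assembling that URL is simple string concatenation. CDIS itself is written in Java; the quick Python sketch below uses a placeholder base path and a made-up UAN just to show the idea:

```python
# Placeholder values for illustration only.
IDS_BASE = "https://ids.si.edu/ids/deliveryService?id="   # assumed IDS base pathname

def ids_url(uan):
    """String the IDS base pathname together with the DAMS UAN."""
    return IDS_BASE + uan

print(ids_url("CHSDM-EXAMPLE_0001"))   # hypothetical UAN
```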

The full references to this DAMS UAN and IDS pathname, along with the object number and other descriptive information, populate a feed from TMS. The ‘Collections’ area of the Cooper Hewitt website uses this feed to display the images in its collection. The feed is also used for the digital tables and interactive displays within the museum, and more!

A museum visitor looking at an image on the Digital Table. Photo by Matt Flynn.

Another advantage of the integration with CDIS is that Cooper Hewitt no longer has to store a physical copy of the image on the TMS media drive. The digital media image is stored securely in DAMS, where it can be accessed and downloaded at any time, and a derivative of the DAMS image can be easily viewed by using the IDS URL. This flow reduces duplication of effort, and removes the need for Cooper Hewitt to support the infrastructure to store their images on optical discs and massive storage arrays.

When CDIS creates the media record in TMS, CDIS saves the connection to this newly created rendition. This connection allows CDIS to bring object and image descriptive data from TMS into DAMS metadata fields. If the descriptive information in TMS is altered at any point, these changes are automatically carried to DAMS via a nightly CDIS process. The transfer of metadata from TMS to DAMS is known as the CDIS ‘metadata sync’ process.

On the left, image of object record in The Museum System database. On right, object in the DAMS interface with mapped metadata from the TMS record.

Because CDIS carries the object descriptive data into searchable metadata fields in the DAMS, the metadata sync process makes it possible to locate images in the DAMS. When a DAMS user performs a simple search of any of the words that describe the object or image in TMS, all the applicable images will be returned, provided of course that the DAMS user has permissions to see those particular images!

Image of search functionality in DAMS.

The metadata sync is a powerful tool that not only provides the ability to locate Cooper Hewitt owned objects in the DAMS system, but also provides Cooper Hewitt control of how the Smithsonian IDS (Image Delivery Service) displays the image. Cooper Hewitt specifies in TMS a flag to indicate whether or not to make the image available to the general public, and the maximum resolution of the image to display on public-facing sites, on an image-by-image basis. With each metadata update, CDIS transfers these settings from TMS to DAMS along with descriptive metadata. DAMS in turn sends this information to IDS. CDIS thus is a key piece that bridges TMS to DAMS to IDS.

AH: Can you show us an example of the code? How was it written?

RF: Once a small utility, CDIS has since expanded into what may be considered a ‘suite’ of several tools. Each CDIS tool or ‘CDIS operation type’ serves a unique purpose.

For Cooper Hewitt, three operation types are used. The ‘Create Media Record’ tool creates the TMS media, then the ‘Metadata Sync’ tool brings over metadata to DAMS, and finally the ‘Timeframe Report’ is executed. The Timeframe Report emails details of the activity that has been performed (successes and failures) in the past day. Other CDIS operations are used to support the needs of other Smithsonian units.

The following is a screenshot of the listing of the CDIS code, developed in the NetBeans IDE with Java. The classes that drive each ‘operation type’ are highlighted in the top left.

A screenshot of the listing of the CDIS code, developed in the NetBeans IDE with Java.

It may be noted that more than half of the classes reside in folders that end in ‘Database’. These classes map directly to database tables of the corresponding name, and contain functions that act on those individual database tables. Thus MediaFiles.java in edu.si.CDIS.CIS.TMS.Database performs operations on the TMS table ‘MediaFiles’.

Something I find a little more interesting than the Java code is the configuration files. Each instance of CDIS requires two configuration files that enable OCIO to tailor the behavior of CDIS to each Smithsonian unit’s specific needs. We can examine one of these configuration files: the XML-formatted cdisSql.xml file.

The use of this file is twofold. First, it contains the criteria CDIS uses to identify which records are to be selected each time a CDIS process is run. The criteria are specified by the actual SQL statement that CDIS will use to find the applicable records. To illustrate the first use, here is an example from the cdisSql.xml file:

The cdisSql.xml file.

This query is part of the metadataSync operation type, as the XML tag indicates. It obtains a list of records that have been connected in CDIS, are owned by Cooper Hewitt (CHSDM), and have never been metadata synced before (there is no metadata sync record in the cdis_activity_log table).

A second use for the cdisSql.xml file is it contains the mappings used in the metadata sync. Each Smithsonian unit has different fields in TMS that are important to them. Because Cooper Hewitt has its own xml file, CDIS provides specialized metadata mapping for Cooper Hewitt.

A selection of code for the metadata sync mapping.

If we look at the first query, the creditLine in the TMS database table ‘object’ is mapped to the field ‘credit’ in DAMS. Likewise, the data in the TMS object table’s description column is carried over to the description field in DAMS, and so on. In the second query, three different fields in TMS are appended to each other (with spaces between them) to make up the ‘other_constraints’ field in DAMS. In the third query (which is indicated to be a ‘cursor append’ query with the delimiter ‘,’), a list of values may be returned from TMS. Each member of the list is concatenated into a single field in DAMS (the DAMS ‘caption’ field), with a comma (the specified delimiter) separating each value returned in the list from TMS. The metadata sync process combines the results of all three of these queries, and more, to generate the update for the metadata sync in DAMS.
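CDIS is Java, but the three mapping behaviors are easy to illustrate with a short Python sketch; the row dictionaries and the appended field names below are placeholders, not the real TMS schema:

```python
# Illustrative rendering of the three mapping behaviors: a direct copy,
# several fields appended with spaces, and a delimited "cursor append".
def map_tms_to_dams(object_row, caption_rows):
    dams = {}

    # 1. Direct field-to-field mapping, e.g. TMS creditLine -> DAMS credit.
    dams["credit"] = object_row["creditLine"]
    dams["description"] = object_row["description"]

    # 2. Several TMS fields appended together (separated by spaces) to
    #    populate a single DAMS field such as other_constraints.
    dams["other_constraints"] = " ".join(
        str(object_row[f]) for f in ("field_a", "field_b", "field_c")   # placeholder names
        if object_row.get(f)
    )

    # 3. "Cursor append": a list of values returned from TMS, concatenated
    #    into one DAMS field with the configured delimiter (here a comma).
    dams["caption"] = ",".join(caption_rows)

    return dams
```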

The advantage of locating these queries in the configuration file is that it allows each Smithsonian unit to be configured with different criteria for the metadata sync. This design also permits CDIS to perform a metadata sync on other CIS systems (besides TMS) that may use any variety of RDBMS systems. As long as the source data can be selected with a SQL query, it can be brought over to DAMS.

AH: To date, how many Cooper Hewitt images have been successfully synced with the CDIS code?

RF: For Cooper Hewitt, CDIS currently maintains the TMS to DAMS connections of nearly 213,000 images. This represents more than 172,000 objects.

AH: From my perspective, many of our team projects have involved mapping metadata. Are there any other parts of the code that you find challenging or rewarding?

RF: As for challenges – I deal with many different Smithsonian Units. They each have their own set of media records in various IT systems, and they all need to be integrated. There is a certain balancing act that must take place nearly every day: providing for the unique needs of each Smithsonian Unit while also identifying the commonalities among the units. Because CDIS is so flexible, without proper planning and examining the whole picture with the DAMS team, CDIS would be in danger of becoming too unwieldy to support.

As far as rewards – I have always valued projects that allow me to be creative. Investigating the most elegant ways of doing things allows me to keep learning and be creative at the same time. The design of new processes, such as the newly redesigned CDIS and VFCU, fulfills that need. But the most rewarding experience is discovering how my efforts are used by researchers and help educate the public in the form of public-facing websites and interactive displays. Knowing that I am a part of the historical digitization effort the Smithsonian is undertaking is very rewarding in itself.

AH: Has the CDIS code changed over the years? What types of upgrades have you recently worked on?

RF: CDIS has changed much since we integrated the first images for Cooper Hewitt. The sheer volume of the data flowing through CDIS has increased exponentially. CDIS now connects well over half a million images owned by nearly a dozen different Smithsonian Units, and that number is growing daily.

CDIS has undergone many changes to support such an increase in scale. In CDIS version 2, CDIS was intrinsically hinged to TMS and relied on database tables in each unit’s own TMS system. For CDIS version 3, we have taken issues such as this into account, and have migrated the backend database for CDIS to a dedicated schema within the DAMS database. Cooper Hewitt’s instance of CDIS was updated to version 3 less than two months ago.

Now that the CDIS database is no longer hinged to TMS, CDIS version 3 has opened the doors to mapping DAMS records to a larger variety of CIS systems. We no longer depend on the TMS database structure, or even on the CIS system using the SQL-Server RDBMS. This has enabled the Smithsonian OCIO to expand CDIS’s role beyond TMS and allow integration with other CIS systems such as the National Museum of Natural History’s EMuseum, the Archives of American Art’s proprietary system, and the Smithsonian Gardens’ IRIS-BG. All three are currently using the new CDIS today, and there are more coming on board to integrate with CDIS in the near future!


 

Conclusion

One challenge has been correcting mass digitization images that end up in the wrong object record. If an object was incorrectly barcoded, the image in the barcode’s corresponding collection record is also incorrect. Once the object record’s image is known to be incorrect, the asset must be exported, deleted, and purged from DAMS. The image must also be deleted from the media rendition in TMS. When the correct record is located, the file’s barcode or filename can be changed and the file re-ingested into DAMS. The process can take several days.

The adoption of the Smithsonian’s DAMS has greatly improved redundancy and our digitization and professional photography workflow. The flexibility of the CDIS coding has allowed me to work with photography assets of our collection’s objects, or “collection surrogates,” and with images from other departments, such as the Library. Overall, the change has been extremely user-friendly.

Thank you Smithsonian’s DAMS Team!

 

Traveling our technology to the U.K.

Visitors to the London Design Biennale use our “clone” of the Wallpaper Immersion Room.

Recently, we launched a major initiative at the inaugural London Design Biennale at Somerset House. The installation was up from September 7th through the 27th, and now that it has closed and the dust has settled, I thought I’d try and explain the details behind all the technology that went into making this project come alive.

Quite a while back, an invitation was extended to Cooper Hewitt to represent the United States in the London Design Biennale, an exhibition featuring 37 countries from around the world. Our initial idea was to spin up a clone of our very popular “Wallpaper Immersion Room” and hand out Cooper Hewitt Pens.

The idea of traveling our technology outside the walls of the Carnegie Mansion has been of great interest to the museum ever since we reopened our doors in 2014. The process of figuring out how to make our technology portable, and have it make sense in different environments and contexts was definitely a challenge we were up for, and this event seemed like the perfect candidate to put that idea through its paces.

So we started out gathering up the basic requirements and working through all that would be needed to make it all come together, including some very generous support from the Secretary of the Smithsonian and the Smithsonian National Board, Bloomberg Philanthropies, and Amita and Purnendu Chatterjee.

The short version is, this was a huge undertaking. But it all worked in the end, and visitors at the first-ever London Design Biennale were able to use Cooper Hewitt Pens to explore 100 wallpapers from our collection, create their own designs and save them. Plus, visitors could collect and save installations from other Biennale participants.

Thanks to a whole bunch of people, there's an Immersion Room in London @london_design_biennale #ldb16 (photo posted by Micah Walter, @micahwalter)

Thanks to a whole lot of people, there are @cooperhewitt pens in London @london_design_biennale #ldb16 (photo posted by Micah Walter, @micahwalter)

The long version is as follows.

An Immersion Room in England

First and foremost, we wanted to bring the Immersion Room over as our installation for the London Design Biennale. So, let’s break down what makes the Immersion Room what it is.

The original Immersion Room, designed by Cooper Hewitt and Local Projects, made its debut when the museum reopened in December 2014, following a major renovation. It is essentially an interactive experience where visitors can manipulate a digital interactive touch-table to browse our collection of wallpapers and view them at scale, in real time, via twin projectors mounted to the ceiling. Additionally, visitors can switch into design mode and create their own wallpapers; adjusting the scale, orientation, and positioning of a repeating pattern on the wall. This latter feature is arguably what makes the experience what it is. Visitors from all walks of life love spending time drawing in the Immersion Room, typically resulting in a selfie or two like the ones you see in the images below.

#cooperhewitt #londondesignbiennale (photo posted by Sibel Yalcin, @sibellyalcin)

#londondesignbiennale #immersionroom #cooperhewitt #doodling (photo posted by Helen, @helen3210)

What I’ve just described is essentially the minimum viable product for this entire effort. One interactive table, two ceiling mounted projectors, a couple of computers, and a couple of walls to project on.

From bar napkin to fabrication–we've managed to clone the Immersion Room! (photo posted by Micah Walter, @micahwalter)

The Immersion Room uses two separate computers, each running an application written in OpenFrameworks. There is the “projector app,” which manages what is displayed to the two projectors, and there is the “table app,” which manages what visitors see and interact with on the 55” Ideum table. The two apps communicate with each other over a local network, with the table app essentially instructing the projector app with what it should be displaying in real time.
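As a purely illustrative sketch of that table-to-projector channel (the real applications are written in OpenFrameworks/C++, and the message format, address, and port below are assumptions), the table app might send the projector app something like this whenever the wall display should change:

```python
# Hypothetical sketch of the table app instructing the projector app over
# the local network; the format, address, and port are assumptions.
import json
import socket

PROJECTOR_APP = ("192.168.1.20", 9000)   # assumed address of the projector machine

def tell_projector(wallpaper_id, scale, rotation):
    message = json.dumps({
        "wallpaper": wallpaper_id,   # hypothetical identifier
        "scale": scale,
        "rotation": rotation,
    }).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message, PROJECTOR_APP)
```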

Here is a basic diagram of how that all fits together.

Twin projector and computer setup for Wallpaper Immersion Room

Each application loads in content on startup. This is provided to the application by a giant JSON file that is managed by our Collections API and meant to be updated each night through a cron job. When the applications start up, they look at the JSON file and pull down any new or changed assets they might need.

At Cooper Hewitt, this means that our curators are able to update content whenever they want using our collections management system, The Museum System (TMS). Updates they make in TMS get reflected on the digital table following a data-deploy and reboot of the table and projector applications. This is essentially the workflow at Cooper Hewitt. Curators fill in object data in TMS, and through a series of tubes, that data eventually finds its way to the interactive tables and our collections website and API. For this project in London, we’d do essentially the same process, with a few caveats.
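A hedged sketch of that nightly refresh is below; the endpoint and field names are hypothetical stand-ins for the real Collections API output:

```python
# Hypothetical sketch of the nightly data-deploy: fetch the big JSON file
# and download any assets the app doesn't already have.
import json
import urllib.request
from pathlib import Path

CONTENT_URL = "https://example.cooperhewitt.org/immersion-room/content.json"   # hypothetical
CACHE_DIR = Path("assets")

def refresh_content():
    with urllib.request.urlopen(CONTENT_URL) as response:
        content = json.load(response)

    CACHE_DIR.mkdir(exist_ok=True)
    for item in content["wallpapers"]:          # assumed field names
        target = CACHE_DIR / item["filename"]
        if not target.exists():                 # simple stand-in for "new or changed"
            urllib.request.urlretrieve(item["url"], str(target))
```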

Make it do all the things

We started asking ourselves a number of questions. It’s a mix of feature creep and a strong desire to put some of the technology we’ve built through its paces–to determine if it’s possible to recontextualize much of what we’ve created at Cooper Hewitt and have it work outside the museum walls.

Questions like:

  • What if we want to allow visitors to save the wallpapers and the designs they create?
  • What if we wanted to hand out a Cooper Hewitt Pen to each visitor?
  • What if we want to let people use the Pen to save their creations, wallpapers, and ALL the other installations around Somerset House?!

All of a sudden, the project becomes a bit more complicated, but still, a great way to figure out how we would translate a ton of the technology we’ve built at Cooper Hewitt into something useful for the rest of the world. We had loads of other ideas, features, and add-ons, but at some point you have to decide what falls in and out of scope.

Unpacking 700 Cooper Hewitt Pens we shipped to the U.K., batteries not included!

So this is what we decided to do.

  • We would devise a way to construct the physical build out of a second Immersion Room. This would essentially be a “set” with walls and a truss system for suspending two rather heavy projectors. It would have a floor, and would be slightly off the ground so we could conceal wiring and create a place for the 55” touch table to rest.
  • We’d pre-fabricate the entire rig in New York and ship it to London to be assembled onsite.
  • We’d enable the Immersion Room to allow visitors to save from a selection of 101 wallpapers from our permanent collection. These would be curated for the Utopia theme of the London Design Biennale.
  • We’d enable the design feature of the Immersion Room and allow visitors to save their designs.
  • We’d hand out Cooper Hewitt Pens to each visitor who wanted one, along with a printed receipt containing a URL and a unique code.
  • We’d post coded NFC tags all throughout Somerset House to allow visitors to use their Pens to collect information about each participating country, including our own.
  • We’d build a bespoke website where visitors would go following their visit to see all the things they’ve collected or created.

These are all of the things we decided to do from a technology standpoint. Here is how we did it.

pen-www

The first step to making this all work was to extract the relevant code from our production collections website and API. We named this “pen-www” and intended that this codebase serve as a mini framework for developing a collecting system and website. In essence it’s simply a web application (written in PHP) and a REST API (also PHP). It really needed to be “just the code” required to make all the above work. So here is another list, explaining what all those requirements are.

  • It needs to somehow generate a simple collections website that is capable of storing relevant info about all the things one could potentially collect. This was very similar to our current codebase at Cooper Hewitt, but we added the idea of “organizations” so that you could have multiple participants contributing info, and not just Cooper Hewitt.
  • It needs all the API methods that make the Pen work. There are actually just a handful that do all the hard work. I’ll get to those in a bit.
  • It needs to handle image uploads and processing of those images (saved designs from the Immersion Room table).
  • It needs to create “visits” which are the pages a visitor lands on when entering their unique code.
  • It needs a series of scripts to help us import data and set things up.
  • We would also need some new code to allow us to generate paper receipts with unique codes printed on them. At Cooper Hewitt this is all done via our Tessitura ticket printing system, so since we wouldn’t have that at Somerset House, we’d need to devise a new way of dealing with registering and pairing pens, and printing out some kind of receipt.

So, pen-www would become this sort of boilerplate framework for the Pen. The idea being, we’d distill the giant codebase we’ve developed at Cooper Hewitt down to the most essential parts, and then make it specific to what we wanted to do for London. This is an important point. We aren’t attempting to build an actual framework. What we are trying to do is to boil out the necessary code as a starting point, and then use that code as the basis for a new project altogether.

From our point of view, this exercise allows us to understand how everything works, and gets us close enough to the core code so that we can think of repeating this a third or a fourth time—or more.

The API at the center of everything

We built the Cooper Hewitt API with the intention of making it flexible enough to be easily expanded upon or altered. It tries to adhere to the REST API pattern as much as it can, but it’s probably better described as “REST-ish.” What’s nice about this approach has been that we’ve been able to build lots and lots of internal interfaces using this same pattern and code base. This means that when we want to do something as bespoke as building an entire replica of our seemingly complex Pen/Visit system, and deploy it in another country, we have some ground to stand on.

In fact, just about all of the systems we have built use the API in some way. So, in theory, spinning up a new API for the London project should just mean pointing things like the Immersion Room interactive table at a new API endpoint. Since the methods are the same, and the responses use the same pattern, it should all just work!

So let’s unpack the API methods required to make the Pen and Immersion Room come to life. These are all internal/private API methods, so you can’t take them for a spin, and I can’t share the actual code with you that lies beneath, but I think you’ll get the idea.

Pens – there’s a whole class of API methods that deal with the Pen itself. Here are the relevant ones:

  • pens.checkoutPen – This marks a Pen as having been checked out for an associated visit
  • pens.getCurrentCheckout – This gets the currently checked out Pen for a specific visit
  • pens.getCurrentVisit – This does the opposite of the getCurrentCheckout, and returns a visit for a specific Pen.
  • pens.returnPen – This marks the Pen as having been returned.

Visits – There is another class of API methods that deal with the idea of “visits.” A visit is meant to represent one individual’s visit to the museum or exhibition, or some other physical location. Each visit has an ID and a corresponding unique code (the thing we print on a visitor’s paper receipt).

  • visits.getActivity – Returns all the activity associated with a visit
  • visits.getInfo – Returns detailed info about a specific visit
  • visits.processPenActivity – This is a major API method that takes any activity recorded by the Pen and processes it before storing the info in the appropriate location in the database. This one gets called frequently and is the method that runs when you tap your Pen on a reader board at one of our digital tables. The reader board downloads all the info on the Pen, and calls this API method to deal with whatever came across.
  • visits.registerVisit – This marks a visit as having been registered. It’s what generates your unique code for the visit.

Believe it or not, that is basically it. It’s just a handful of actions that need to be performed to make this whole thing work. With these methods in place, we can:

  • Pair pens with newly created visits so we can hand Pens out to visitors.
  • Process data collected by the Pen, either from NFC stickers it has read, or via our Interactive Table.
  • Do a final read of the Pen and return the Pen to the pool of possible pens (see the sketch below).
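Strung together, the Pen lifecycle looks roughly like the sketch below. The `api` object stands in for an authenticated client for these private API methods, and its call() signature is an assumption for illustration:

```python
# Hedged sketch of the Pen lifecycle built from the API methods above.
def issue_pen(api, pen_id):
    visit = api.call("visits.registerVisit")                 # creates the visit + unique code
    api.call("pens.checkoutPen", pen_id=pen_id, visit_id=visit["id"])
    return visit["code"]                                     # printed on the paper receipt

def handle_reader_board_tap(api, pen_id, pen_payload):
    visit = api.call("pens.getCurrentVisit", pen_id=pen_id)
    # Store whatever the Pen collected (NFC tag reads, saved designs).
    api.call("visits.processPenActivity", visit_id=visit["id"], activity=pen_payload)

def check_in_pen(api, pen_id, pen_payload):
    handle_reader_board_tap(api, pen_id, pen_payload)        # final read of the Pen
    api.call("pens.returnPen", pen_id=pen_id)                # back to the pool
```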

So, now that we have an API, and all the relevant methods we can start building the website and massaging the API code to do things in the slightly different ways that make this whole thing live up to its bespokiness.

On the website end of things we will follow the KISS principle or “Keep it simple, stupid.” The site will be devoid of fancy image display features, extended relationship mapping and tagging, and all the goodies we’ve spent years developing for the Cooper Hewitt Collections website. It won’t have search, or fancy search, or search by color, or search by anything. It won’t have a shoebox or even a random button (ok, maybe I’ll add that later today). For all intents and purposes, the website will simply be a place to enter your unique code, and see all your stuff.

https://londonbiennale.cooperhewitt.org

The website and its API will live at https://londonbiennale.cooperhewitt.org. It consists of just two web front ends running Apache and sitting behind an NGINX load balancer, and one MySQL instance via Amazon’s RDS service. It’s very, very similar to just about all of our other systems and services except that it doesn’t have fancy extras like Logstash logging, or an Elasticsearch index. I did take the time to install server monitoring and alerting, just so we can sleep at night, but really, it’s pretty bare bones.

At first glance there isn’t much there to look at. You can browse the different participants and you can create a Cooper Hewitt account or sign in using our Single Sign On service, but other than that, there is really just one thing to do–enter your code and see all your stuff.

Participants

All your content are belong to us

In order for this project to really work, we’d need to have content. Not only our own Cooper Hewitt content, but content from all the participants representing the 36 other countries from around the world.

So here is the breakdown:

  • Each participant or organization will have a page, like this one for Australia https://londonbiennale.cooperhewitt.org/participants/australia/
  • Each participant will have one “object.” In the case of all 37 participants, this object will represent their “booth” like this one from Australia – https://londonbiennale.cooperhewitt.org/objects/37643049/
  • Each “booth” will contain an image and the catalog text provided by the London Design Biennale team. If there is time, we will consider adding additional information from each participant (we haven’t done this as of yet).
  • Cooper Hewitt’s record will have some more stuff. In addition to the object representing Cooper Hewitt’s booth, we will also have 100 wallcoverings from our permanent collection.
  • You can collect all of these via the Immersion Room table and your Pen. Here is our page – https://londonbiennale.cooperhewitt.org/participants/usa/ There are also two physical wallpapers that are part of our installation, which you can of course collect as well.

All told, that means 140 objects in this little microsite/sitelet. You can actually browse them all at once if you are so inclined here – https://londonbiennale.cooperhewitt.org/objects/

"Booth" pages

Visit Pages

So what does a visitor get when they go to the webpage and type in their unique code? Well, the answer to that question is “it depends.” For objects that we imported from our permanent collection (the 101 wallpapers), you get a nice photo of the wallpaper and a chatty description written by our curator, Greg Herringshaw, having to do with “Utopia” — the theme of this year’s London Design Biennale. You also get a link back to the collection page on the Cooper Hewitt website. For the 37 booths, you get a photo and the catalog info for each participant, and if you created and saved your own design in the Wallpaper Immersion Room, you get a copy of the PNG version of your design, which you can, of course, download and do with what you like. (Hint: they make cool wall posters.)

Additionally, you get timestamps related to your visit. This way, just like on the Cooper Hewitt website, you get to retain a record of your visit–the date and time of each collected object and a way to recall your visit anytime in the future.
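Under the hood, the visit page is really just an API lookup keyed on that code. For illustration only (the method name, parameters, and response shape below are hypothetical stand-ins, not the actual londonbiennale API), fetching a visit might look something like this:

```python
# Hypothetical sketch of looking up a visit by its unique code.
# The method name, parameters, and response shape are assumptions
# for illustration, not the actual londonbiennale.cooperhewitt.org API.
import requests

API_ENDPOINT = "https://londonbiennale.cooperhewitt.org/api/rest/"

def get_visit(code, access_token):
    params = {
        "method": "visits.getItems",   # hypothetical method name
        "code": code,                  # the code printed on the visitor's receipt
        "access_token": access_token,
        "format": "json",
    }
    rsp = requests.get(API_ENDPOINT, params=params, timeout=10)
    rsp.raise_for_status()
    # Each item would carry the object's ID plus the timestamp of
    # when it was collected with the Pen.
    return rsp.json().get("items", [])
```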

Visit page example

Slow Progress

All of this code replication, extraction, and re-configuring took quite a long time. The team spent long hours, nights, and weekends trying to sort it all out. In theory this should all just work, but like any project, there are unique aspects to the thing you are currently trying to accomplish, which means that, no matter what, you’re gonna be writing some new code.

Ok, so let’s check in with what we’ve got so far.

  • A physical manifestation of the Wallpaper Immersion Room and all its hardware, computers, wires, etc.
  • A website and API to power all the fun stuff.
  • A bunch of content from our own permanent collection and the catalog info from the London Design Biennale team.
  • Visit pages

We still need the following:

  • A way to issue Pens to visitors as they arrive.
  • A way to print a unique code on some kind of receipt, which we give to the visitors as well.
  • A way to check in Pens as visitors return them.
  • A way to point the table application at the right API endpoint so it can save things and call processPenActivity as well.

To accomplish the first three items on the list, we enlisted the help of Rev Dan Catt.

That time @revdancatt assembled 700 pens for @london_design_biennale #ldb16

A photo posted by Micah Walter (@micahwalter)

Dan is planning to write another extensive blog post about his role in all of this, but in a nutshell, he took our Pen registration code and built his own little mini registration station and ticket printer. It’s pictured below and performs all of the functions above (1 through 3). It uses a small Adafruit thermal printer to print the receipts and unique codes, and it’s simple enough to use, with a small web-based UI that gives the operator some basic feedback. Other than that, you tap a Pen and it does the rest.
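Dan will cover the details in his post, but the basic move is straightforward: generate a short unique code, register it against the Pen, and push a few lines of text out to the thermal printer. Here is a minimal sketch of that last step, assuming a serial-connected printer and the pyserial package (the real station’s printer library and wiring may differ):

```python
# Minimal sketch of printing a visit receipt with a unique code.
# Assumes a serial-connected thermal printer and the pyserial package;
# the actual registration station and its printer library may differ.
import secrets
import string

import serial

ALPHABET = string.ascii_lowercase + string.digits

def make_code(length=6):
    """Generate a short, hard-to-guess visit code."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def print_receipt(port="/dev/ttyUSB0", baud=19200):
    code = make_code()
    lines = [
        "London Design Biennale 2016",
        "Your visit code:",
        code,
        "londonbiennale.cooperhewitt.org",
        "\n\n\n",  # feed some paper so the receipt can be torn off
    ]
    with serial.Serial(port, baud, timeout=1) as printer:
        printer.write("\n".join(lines).encode("ascii"))
    return code
```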

Dan’s Raspberry Pi powered Pen registration and ticket printing station.

Tickets printing for the first time

For the last item on the list, I had to recompile the code Local Projects delivered to us. In the code, I had to find the references to the Cooper Hewitt API endpoints and adjust them to point at the London API endpoint. Once I did this and recompiled the OpenFrameworks project, we were in business. For a while, I had it all set up for development and testing on my laptop using Parallels and Visual Studio. Eventually I compiled a final version and we installed it on the actual Immersion Room Table.

Working on the OpenFrameworks code on Parallels on my MacBook Pro

Cracking open the Local Projects code was a little scary. I’m not really an OpenFrameworks programmer, or at least I haven’t been since grad school, and the Local Projects code base is pretty vast. We’ve had this code compiled and running on all the interactive tables at Cooper Hewitt since December of 2014. As far as I know, this is the first time anyone has attempted to recompile it from source, not to mention make changes to it beforehand.

That said, it all worked just fine. I had to find an old copy of Visual Studio 2012, but other than that, and tracking down a few dependencies, it wasn’t a very big deal. Now we had a copy of the Immersion Room table application set up to talk to the London API endpoint. As I mentioned before, all the API methods are named the same, and set up the same way, so the data began to flow back and forth pretty quickly.

Content Management

I mentioned above that we had to import 100 wallpapers from our collection as well as the data for all 37 booths. To accomplish all of this, we wrote a bunch of Python and PHP scripts.

We needed to do the following with regard to content:

  • Create a record for each of the 37 participants
  • Import the catalog info as an object for each of the 37 participants
  • Import the 100 wallcoverings from the Cooper Hewitt collection. We just used, you guessed it, our own API to do this.
  • Massage the JSON files that live on the Projector and Table applications so they have the correct 100 wallpapers and all their metadata.
  • Display the emoji flag for each country, because emoji.

In the end, this was just a matter of building the necessary scripts and running them a number of times until we had what we wanted. As a sort of side note, we decided to use London Integers for this project instead of Brooklyn Integers, which we normally use at Cooper Hewitt, but that’s probably a topic for a future post.
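None of these scripts were anything fancy. As a rough sketch, a script to pull a wallcovering’s metadata from the Cooper Hewitt collection API before re-saving it into the London database might look something like the following (the fields kept here are simplified, and the response handling glosses over pagination and error cases):

```python
# Rough sketch of fetching one object's metadata from the Cooper Hewitt
# collection API so it can be re-saved into the London database.
# The fields kept here are simplified for illustration.
import requests

API = "https://api.collection.cooperhewitt.org/rest/"

def fetch_object(object_id, access_token):
    params = {
        "method": "cooperhewitt.objects.getInfo",
        "object_id": object_id,
        "access_token": access_token,
        "format": "json",
    }
    rsp = requests.get(API, params=params, timeout=10)
    rsp.raise_for_status()
    obj = rsp.json().get("object", {})
    # Keep just the bits the microsite needs.
    return {
        "id": obj.get("id"),
        "title": obj.get("title"),
        "description": obj.get("description"),
        "images": obj.get("images", []),
    }
```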

Shipping code, literally

At some point we would need to put all the hardware and construction pieces into crates and ship them across the pond. At the time, our thinking was to get the code running on the digital table and projector computers as close to production ready as we could. We installed all the final builds of the code on the two computers, packed them up with the 55” interactive table, and shipped them over to London, along with six other crates full of the “set” and all its hardware and parts. It was, in a nutshell, impressive.

As the freight went to London, we continued working on the website code back home—making the site look the way we wanted it to look and behave the way we wanted it to behave. As I mentioned before, it’s pretty feature free, but it still required some spit and polish in the form of some of Rachel’s Sassy-CSS. Eventually we all settled on the aesthetics of the site, added a lockup that reflected both the Cooper Hewitt and London Design Biennale brands (both happen to be by Pentagram) and called it a day. We continued testing the table application and Dan continued working on the Pen registration app and receipt printer so it would be ready when we landed.

Building the set with the team at Somerset House.

We landed, started to build the set, and many, many things started to go wrong. I think all of the things that went wrong are probably the topic of yet another blog post, but let’s just say for now: if you ever decide to travel a whole bunch of A/V equipment and computers to another country, get everything working with the local power standard and don’t try to transform anything.

2500 batteries #ldb16

A photo posted by Micah Walter (@micahwalter)

Eventually, through a lot of long days and sleepless nights, and with the help of many, many kind-hearted people, we managed to get all the systems up and running, and everything began to work.

We flipped the switch and the whole thing came to life and visitors started to walk up to our booth, curious and excited to see what it would do. We started handing out Pens and I started watching the data flow through.

By the close of the show, visitors had used the Pen to collect over 27,000 objects. Eventually, I’ll do a deeper data analysis, but for now, the feeling is really great. We created a portable version of the Pen and all of its underlying systems. We traveled a giant kit of A/V tech and parts overseas, and now people in a country other than the United States can experience what Cooper Hewitt is all about: a dynamic, interactive deep dive into design.

Design your own Utopia at the London Design Biennale

-m

Exhibition Channels on Cooperhewitt.org

There’s a new organizational function on cooperhewitt.org that we’re calling “channels.” Channels are a filtering system for WordPress posts that allow us to group content in a blog-style format around themes. Our first iteration of this feature groups posts into exhibition-themed channels. Subsequent iterations can expand the implementation of channels to broader themed groupings that will help break cooperhewitt.org content out of the current menu organization. In our long-term web strategy this is an important progression to making the site more user-focused and less dictated by internal departmental organization.

The idea is that channels will promote browsing across different types of content on the site because any type of WordPress post—publication, event, Object of the Day, press, or video—can be added to a channel. Posts can also live in multiple channels at once. In this way, the channel configuration moves us toward our goal of creating pathways through cooperhewitt.org content that focus on user needs; as we develop a clearer picture of our web visitors, we can start implementing channels that cater to specific sets of users with content tailored to their interests and requirements. Leaning more heavily on posts and channels than pages in WordPress also leads us into shifting our focus from website = a static archive to website = an ever-changing flow of information, which will help keep our web content fresher and more engaged with concurrent museum programs and events.

Screenshot of the Fragile Beasts exhibition channel page on cooperhewitt.org

The Fragile Beasts exhibition channel page. Additional posts in the channel load as snippets below the main exhibition post (pictured here). The sidebar is populated with metadata entered into custom fields in the CMS.

In WordPress terms, channels are a type of taxonomy added through the CustomPress plugin. We enabled the channel taxonomy for all post types so that in the CMS our staff can flag posts to belong to whichever channels they wish. For the current exhibition channel system to work we also created a new type of post specifically for exhibitions. When an exhibition post is added to a channel, the channel code recognizes that this should be the featured post, which means its “featured image” (designated in the WordPress CMS) becomes the header image for the whole channel and the post is pinned to the top of the page. The exhibition post content is configured to appear in its entirety on the channel page, while all other posts in the channel display as snippets, cascading in reverse chronological order.

Through CustomPress we also created several custom fields for exhibition posts, which populate the sidebar with pertinent metadata and links. The new custom fields on exhibition posts are: Exhibition Title, Collection Site Exhibition URL, Exhibition Start Date, and Exhibition End Date. The sidebar accommodates important “at-a-glance” information provided by the custom field input: for example, if the date range falls in the present, the sidebar displays a link to online ticketing. Tags show up as well to act as short descriptors of the exhibition and channel content. The collection site URL builds a bridge to our other web presence at collection.cooperhewitt.org, where users can find extended curatorial information about the exhibition.

Screenshot of the sidebar on the Fragile Beasts exhibition channel page.

The sidebar on the Fragile Beasts exhibition channel page displays quick reference information and links.

On a channel page, clicking on a snippet (below the leading exhibition post) directs users to a post page where they can read extended content. On the post page we added an element in the sidebar called “Related Channels.” This link provides navigation back to the channel from which users flowed. It can also be a jumping-off point to a new channel. Since posts can live in multiple channels at once this feature promotes the lateral cross-content navigation we’re looking to foster.

Screenshot of sidebar on a post page displaying Related Channel navigation.

The sidebar on post pages provides “Related Channel” navigation, which can be a hub to jump into several editorial streams.

Our plan over the coming weeks is to on-board CMS users to the requirements of the new channel system. As we launch new channels we will help keep information flowing by maintaining a publishing schedule and identifying content that can fit into channel themes. Our upcoming exhibition Scraps: Fashion, Textiles and Creative Reuse will be our first major test of the channels system. The Scraps channel will include a wealth of extra-exhibition content, which we’re looking forward to showcasing with this new system.

My mock-up for the exhibition channel structure and design. Some of the features on the mock were knocked off the to-do list in service of getting an MVP on the live site. Additional feature roll-out will be on-going.

Object Phone: The continued evolution of a little chatbot

Object Phone is a project that started small, took less than a day to code, and consisted of about a page of code. Initially it was just an experiment–a way for me to explore a new interface to our API. Object Phone allowed users to call or text objects in our collection, and receive some kind of response. It was met with mild fanfare.

Next, I was curious about using Object Phone in our galleries. I looked towards developing some better audio content, and we decided to produce a short audio tour of the David Adjaye Selects exhibit. It was somewhat cumbersome to use but an interesting experiment and one of my first “in-gallery beta-tests.” Needless to say, I tried to be as clear as possible that this was an “experiment.”

Later I started thinking about the broader uses for a system like Object Phone. Could it replace an expensive audio guide? Could it be used as an accessibility device? I started to think of many possible uses for the platform, and started to rewrite the code to support multiple outputs. In a way, I was thinking about the code for Object Phone as a mini framework for building voice and text based interactions with our content.

All of this got put on the back burner for a while. Object Phone is after all my little side project. Something I come back to when I need to center myself and let my brain think through a few problems. It’s very much a project I meditate on when I need to do that kind of thing.

About six months later I started playing with the code again. I realized it was pretty trivial to deliver images via MMS using Twilio’s API, and I had also started to notice that MMS worked pretty nicely on devices like an Apple Watch and looked pretty good in the notification screen on my iPhone. All of a sudden it was kind of fun again to receive texts from Object Phone. So, I set up a subscription service.

Inspired by a few chatty SMS based apps out there like Poncho and The Edit, I built a simple subscription service that would send random objects and images to subscribers once a day at noon. Again, I set this up quickly, sent out a request for some people to try it out, and started to make realizations.
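The mechanics of the daily send are simple enough: a scheduled job picks a random object and pushes its image and a short caption out over MMS. Here is a stripped-down sketch, assuming Twilio’s Python helper library; the pick_random_object() helper and the subscriber list are stand-ins for Object Phone’s real storage:

```python
# Stripped-down sketch of the daily noon send: pick a random object and
# MMS its image and caption to each subscriber via Twilio.
# pick_random_object() and the subscriber list are stand-ins for
# Object Phone's real storage; credentials below are placeholders.
from twilio.rest import Client

ACCOUNT_SID = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
AUTH_TOKEN = "your_auth_token"
FROM_NUMBER = "+15555550123"

client = Client(ACCOUNT_SID, AUTH_TOKEN)

def send_daily_object(subscribers, pick_random_object):
    obj = pick_random_object()  # e.g. {"title": ..., "image_url": ...}
    for number in subscribers:
        client.messages.create(
            to=number,
            from_=FROM_NUMBER,
            body="Today's object: {}".format(obj["title"]),
            media_url=[obj["image_url"]],
        )
```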

Object Phone is getting some upgrades. Feature requests welcome.

A photo posted by Micah Walter (@micahwalter)

The main realization I had was that Object Phone had just become a chatbot. To be clear, Object Phone has technically always been a chatbot. You send it messages, and it replies with some response. But now that it sends you something periodically based on your preferences (currently just the preference that you want to continue receiving messages) it seems more like a real chatbot. More importantly, this experiment has started to make me “think” of Object Phone as a chatbot–something I should have likely realized from the start.

I also realized that Object Phone’s chattiness happens in multiple directions. It indeed chats with its subscribers. It can send you messages once a day, and it can reply to your requests for info about objects with ease. But I also added a back-end feature that follows this same line of thinking. If a user sends Object Phone a message that it doesn’t understand, Object Phone asks me for some assistance. Here is the flow (a rough sketch of the Slack hand-off follows the list):

  1. A user messages Object Phone something like “Tell me about spanking cat.”
  2. Object Phone isn’t smart enough yet to decipher the message.
  3. Object Phone replies “OK, I don’t really understand what you are saying but I’ll ask around and get back to you.”
  4. Object Phone then sends our Cooper Hewitt Slack channel a message.
  5. The Slack message contains the user’s phone number, their message, and a link to an admin page where the operator can reply directly to the user.
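The Slack side of that hand-off is just a post to an incoming webhook. A minimal sketch, assuming a Slack incoming webhook URL and a made-up admin URL pattern:

```python
# Minimal sketch of step 4: posting the unrecognized message to Slack via
# an incoming webhook so a human can pick up the conversation.
# The webhook URL and the admin-page URL pattern are placeholders.
import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"

def ask_for_help(from_number, message_body,
                 admin_base="https://objectphone.example.org/admin/replies"):
    text = (
        "Object Phone needs a little assistance!\n"
        "From: {}\n"
        "Message: {}\n"
        "Reply here: {}/{}".format(from_number, message_body, admin_base, from_number)
    )
    requests.post(SLACK_WEBHOOK_URL, json={"text": text}, timeout=10)
```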

A Slack Channel where Object Phone can tell our staff when it needs a little assistance.

An Object Phone admin page where our staff can reply directly to users

All of a sudden, Object Phone is a conduit between Cooper Hewitt staff and its visitors. It’s talking directly to visitors, but also relaying messages back and forth to more knowledgeable staff when it needs assistance.

What the cool kids are doing

Conversational user experiences are all the rage right now. Facebook has recently opened up its Messenger platform and API to developers, which means anyone can build a simple chatbot on Facebook and reach all their followers with ease. Many other messaging services have open APIs as well. WeChat, LINE, WhatsApp, and Slack are just a few examples.

Screenshot of the CNN chatbot for Facebook Messenger

It’s pretty clear that messaging apps are increasing in popularity, with users spending much of their days talking on platforms like Snapchat rather than thumbing through their Facebook feeds. Apple, too, has followed suit by announcing a much-upgraded Messages app in its latest update to iOS.

Chatbots have also become much more sophisticated, with huge advancements in Natural Language Processing and Natural Language Understanding. There is now a wealth of information and publicly available code and APIs out there, making it easier than ever to spin up a pretty intelligent chatbot with little overhead.

The Future of Object Phone

My next steps are to make Object Phone more intelligent. It should be able to learn about your tastes and preferences. If you only want to receive objects from our Textiles department, you should be able to say so. If you want to get your daily update at 5am, you should be able to just tell it that.

More importantly, you should be able to interact with more than just objects. Users should be able to find out general info about our museum. Are we open today? How do I get to Cooper Hewitt? Can I buy tickets right here, right now?

Lastly, I’d love to see Object Phone make its way onto the platform of your choice. I think this is a critical next step. SMS is great, and available to nearly anyone with a cell phone, but apps like FB Messenger, WhatsApp, and LINE have the ability to connect a service like Object Phone with a captive audience, all over the world.

I think institutions like museums have a great opportunity in the chatbot space. If anything, it represents a new way to broaden our reach and connect with people on the platforms they are already using. What’s more interesting to me is that chatbots themselves represent a way to interact with people that is, by its very nature, bi-directional. It presents us with the challenge of conversation, and forces us to listen to our constituents in a very close and connected kind of way. We should already be doing this.

If you’d like to participate in testing out Object Phone, please go to https://objectphone.cooperhewitt.org and sign up. You will receive an object every day at 12pm EST until you reply STOP.

Mass Digitization: Workflows and Barcodes

This is my first post in a four-part series about digitization. My name is Allison Hale, Digital Imaging Specialist at Cooper Hewitt. I started working at the museum in 2014 during the preparations for a mass digitization project. Before the start of digitization, there were 3,690 collection objects that had high resolution, publication quality photography. The museum is currently in phase two of the project and has completed photography of more than 180,000 collection objects.


Workflows

Cooper Hewitt was the first Smithsonian unit to take on digitization of an entire collection. Smithsonian’s Digitization Project Office directed the project and an onsite vendor completed the imaging. Museum staff played an intensive role, allocating up to fifty percent of a workweek to digitization administration. Additional hires in the Registration and Conservation departments eased the daily organization and handling of the objects.

The goal was to take a physical object from storage shelf to public-facing digital image within 24-48 hours.

Here is a simplified version of the workflow:

Physical or Object Workflow

  • The vendor’s photographic setup was located in our collections storage facility
  • Art handling technicians pulled pre-organized groups of objects from storage shelves to a staging area
  • A group of objects located on the same shelf or in the same container was carted into the staging area and then placed individually on the photographic set
  • The object barcode was scanned to create a file name
  • The photograph was taken
  • The object was placed back in the staging area, matched with its barcode tags, and returned to storage

Data Workflow

  • Assets from the project were stored on a production server
  • Museum staff completed daily upload of assets to Smithsonian’s Digital Asset Management System (DAMS)
  • DAMS became the repository for digital assets
  • CDIS (Collection DAMS Integration System) provided synchronization of metadata from TMS, and delivered images to the object records in the collections database, The Museum System
  • IDS (Image Delivery Service) provided public images for use on the Collections Website

Let me repeat the word simplified. Mass digitization applied to a uniform collection can be simple, but applying it to objects of varying dimensions, sizes, and materials was new territory. I will point out some of the challenges we faced during the process, and the digital innovations that improved efficiency and helped us complete the process in 18 months.

Barcoding: Bridging the Physical to the Digital

A barcode tag attached to an object in the metalwork sub-collection.

A barcode tag attached to a wooden panel.

The first stage of digitization was assessment and barcoding of 258,000 objects. Museum objects are categorized in four curatorial departments: Drawings and Prints, Product Design and Decorative Arts, Textiles, and Wallcoverings. A Project Registrar was hired to oversee the barcoding equipment and printing, reconcile object locations, and maintain the pace of the project. Thirteen barcoding technicians with expertise in object handling and conservation were hired to complete a conservation assessment and barcode each object.

The Museum System, the collections database, contains a barcode number in each object record. This unique identifier was printed as a barcode and affixed to each object. The Registration staff decided that the most efficient way to barcode was not by “cherry picking” random objects, but by systematically working on one shelf or container at a time. A SQL query of a TMS location was used to find all objects on one shelf or in a single container. A software program called Label Matrix would pull, format, and print barcodes from the query results. A 2-D barcode could be printed as a sticker, on a larger “cover sheet,” or as a non-adhesive tag.
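The shelf-at-a-time query itself is nothing exotic. A hedged sketch of what it might look like follows; the driver, table, and column names here are assumptions for illustration, not the museum’s actual TMS schema or Label Matrix setup:

```python
# Hedged sketch of pulling the barcode numbers for every object on one
# shelf so the list can be handed off to the label-printing software.
# The table and column names are assumptions, not the real TMS schema.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=tms-db;DATABASE=TMS;Trusted_Connection=yes"
)

SQL = """
SELECT o.ObjectNumber, o.Barcode
FROM Objects AS o
JOIN ObjLocations AS l ON l.ObjectID = o.ObjectID
WHERE l.LocationString = ?
"""

def barcodes_for_shelf(location_string):
    conn = pyodbc.connect(CONN_STR)
    try:
        rows = conn.cursor().execute(SQL, location_string).fetchall()
    finally:
        conn.close()
    return [(row.ObjectNumber, row.Barcode) for row in rows]
```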

Here were some of the challenges:

1. The object’s recorded storage location needed to be accurate

In an ideal workflow, the entire collection would be inventoried before digitization, which would provide accurate locations for each object. Instead, during the barcoding process the technicians were responsible for noting any inaccuracies on a spreadsheet, and the Project Registrar then reconciled the locations in TMS. The technicians also barcoded locations and containers to improve tracking. The barcodes can be differentiated by the last number in the sequence: “2” indicates an object, “1” a container, and “0” a location.
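Because that trailing digit is the only thing that distinguishes the three kinds of barcodes, a tiny helper is enough to sort scans as they come in. A sketch, assuming the scanned value arrives as a plain string:

```python
# Sketch of sorting scanned barcodes by their trailing digit:
# "2" = object, "1" = container, "0" = location.
BARCODE_KINDS = {"2": "object", "1": "container", "0": "location"}

def classify_barcode(barcode):
    kind = BARCODE_KINDS.get(barcode.strip()[-1])
    if kind is None:
        raise ValueError("Unrecognized barcode suffix: {!r}".format(barcode))
    return kind

# e.g. classify_barcode("1012345672") -> "object"
```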

2. Objects in storage must be accessible to the digitization technicians

The Product Design and Decorative Arts Department contained sub-collections that were stored in temporary housing. The temporary housing was designed to transport the objects, but not for permanent storage. Conservators and technicians built permanent storage containers that allowed technicians to easily remove and replace objects during digitization. An example of this is the matchsafe sub-collection.

(from left) Matchsafes packed for travel; Technicians make new containers and barcode objects; Matchsafes in storage with barcodes in each container and a corresponding cover sheet.

3. Objects required special handling due to fragility or component assembly 

The collection contains 211,000 objects. Due to conservation concerns and component assembly, approximately 10 percent of the collection could not be digitized. A visual system was created to alert digitization staff to the conservation “status” of the object: green=ready for digitization, yellow=digitization handling by conservator only, and red=too fragile for digitization. The visual system allowed the vendor staff to work independently in storage, rather than referencing TMS records.

Digitization technicians used the visual system to identify containers ready for imaging.

4. The barcode needed to be scanned by the reader in a timely manner

A 2-D barcode was used so that technicians could scan efficiently while holding the reader in different positions.

A technician scans a barcode on a coversheet during digitization.

5. The project needed a larger organizational framework so that everyone involved could plan the timing of digitization

A chart was made to organize the departments’ sub-collections. An initial count of the sub-collection categories in the TMS database gave an approximate number of items to be digitized. From this number, the Smithsonian’s Digitization Project Office could estimate the digitization rate, a sub-collection schedule, and the cost per image. Curators, conservators, and digitization staff met before the start of each sub-collection to decide on the aesthetic of the images, conservation concerns, and handling specifications.

Outcomes:

Object barcoding was a necessary step before the start of digitization. During the imaging workflow, technicians scanned the barcode to input the filename. Eliminating the manual entry of filenames saved an average of 14 seconds per file, amounting to 103 working days. It also greatly decreased the rate of human error from manual entry.
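That savings figure holds up as back-of-the-envelope arithmetic. The file count below is an assumption based on the collection numbers above, not the project’s exact tally:

```python
# Back-of-the-envelope check on the time savings, assuming roughly
# 212,000 files and 8-hour working days; the exact counts the project
# used may differ slightly.
seconds_saved = 14 * 212000           # ~2,968,000 seconds
hours_saved = seconds_saved / 3600    # ~824 hours
working_days = hours_saved / 8        # ~103 eight-hour days
print(round(working_days))            # -> 103
```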

The barcode filename became the metadata link between DAMS (Digital Asset Management System) and the collections database TMS. (A good lead-in to my next post!)


 

Next Post: DAMS and Metadata Mapping!

Museums and the Web Conference Recap: Administrative Tools at Cooper Hewitt

The Labs team had a great time at Museums and the Web this year. We published two papers for the conference and presented them both to the audience of cultural heritage thinkers, makers, planners and administrators. Sam Brenner and I shared our paper, “Winning (and losing) hearts and minds of museum staff: Administrative interfaces at Cooper Hewitt,” which outlines the process of designing, developing and iterating two in-house built, staff-facing tools: Tagatron and the Pen Pairing Station. Both administrative tools are essential aides to staff managing new responsibilities associated with visitor-facing gallery technologies.

Here is the deck from our presentation:

Administrative interfaces at Cooper Hewitt

Introduction

  • Cooper Hewitt, Smithsonian Design Museum. New York, New York.
  • Our strategy around presenting design is to expose process—how things are made, how they are conceived, how they are designed.
  • This presentation will speak to our philosophy of openness around design process in sharing part of the back-story of how our current visitor-facing experience came together and how it’s maintained.

Administrative interfaces at Cooper Hewitt (1)

Visitor Interfaces

  • The visitor-facing technologies in the museum, introduced in 2014, invite new forms of engagement with the Cooper Hewitt collection. They encourage active participation, letting visitors play, design and collect through multi-touch table applications and the Pen.
  • Before we were able to re-design the visitor’s relationship to the museum we went through comprehensive changes at every level.

Administrative interfaces at Cooper Hewitt (2)

Comprehensive Re-design / Institutional Shift

  • We began a restoration of the mansion, stripping it down to its Carnegie steel girders.
  • To a similar degree we rethought the organizational infrastructure of Cooper Hewitt with a comprehensive re-design of operations, workflows and responsibilities.

Administrative interfaces at Cooper Hewitt (3)

New Responsibilities (for Everyone)

  • There were new jobs created to support the new visitor experience, including that of our Gallery Technology Manager, Mary Fe, whose responsibilities include maintaining the Pens and troubleshooting the touch tables and gallery interactives.
  • The re-design affects every staff member at Cooper Hewitt:
  • Registrars: aggressive timetable to enter data
  • Security: understand the mission and visitor experience, teach visitors how to use the Pen
  • Exhibitions: label programming, maintenance
  • Curators: tags, relations, chat formatting for length
  • Visitor services: pen pairing – whole new step in between “welcome” and ticket sale
  • Before we got to this stage there was the task of onboarding staff to new responsibilities, which fell largely to the Digital & Emerging Media department. With the allocation of new responsibilities also came the opportunity to create tools that could facilitate some of the work.

Administrative interfaces at Cooper Hewitt (4)

Defining the Need for Considered Interfaces

  • Why did we decide that new interfaces were necessary in certain parts of the workflow?
  • We started with observation, watching workflows as they emerged. We created tools to assist where necessary. The need for interfaces was in part logistical, in part technical and also in part human.
  • Candidates for interface development are parts of the new digital ecosystem where there is:
  • High volume of data
  • Large number of users
  • Complex tasks
  • Something that needs constraints or enforcement
  • Example: the job of assigning tags and related objects to everything we put on display for the reopening. The touch table interfaces utilize tag and related object information. This data does not live in TMS, so it is housed in a custom database.
  • The task of creating the data fell to the curators. Originally this was stored in Excel files. While the curators were happy using spreadsheets, we identified a few major issues with them. The biggest one was that every department had devised its own schema for storing the data, which would ultimately have to be reconciled.
  • This example fits all of the criteria above.

Administrative interfaces at Cooper Hewitt (5)

Case Study 1: Tagatron

  • Explicit purpose of the Tagatron tool: make the work quicker; make the metadata consistent; make the organization of the metadata consistent
  • Making this tool highlighted for the digital team the complex relationship between the work, the tool, and the people responsible for each—even though we believed the tool made things easier, the tool had its own set of ongoing technical and usability issues
  • We found that those issues bred a degree of distrust, or lack of confidence, in the larger project. Some of this was due to bugs in the tool, but some of it was simply that this work was now known to be “enforced,” or taken more seriously, which made users uncomfortable.
  • Key idea: the interface takes on a symbolic value in representing “new responsibilities” and can bring about issues that it might not have been designed to address. It takes on a complex position between human needs and technical needs.

Administrative interfaces at Cooper Hewitt (6)

Tagatron (continued)

  • These graphs illustrate how prolific the task of tagging and relating objects is. It was important to build Tagatron because it is a crucial tool in the ongoing operation of the museum’s digital experience. More so than the spreadsheets ever could, it allows for scalability.
  • Since the re-opening, the tool has gone through one major design and back-end overhaul, and it continues to see small iterations.

Administrative interfaces at Cooper Hewitt (7)

Case Study 2: Pen Pairing Stations

  • Context of Pen Pairing: Every visitor to the museum receives a Pen. At the museum’s front desk each Pen is paired with a unique admission ticket. Every ticket has a shortcode identifier that allows visitors to retrieve their Pen visit data online when they enter the code on their ticket.
  • Pen pairing is done at a very critical point in the visitor experience when the interaction needs to be quick and frictionless. Visitor Services Associates have to coordinate a number of simultaneous tasks.

Pen Pairing Station (continued)

  • This video depicts the Pen pairing process behind the front desk. It documents the first version of the Pen Pairing application, and shows the exposed Pen-reading circuit board before housing was built.
  • Pen pairing is one of the most demanding of the new responsibilities created by the “new experience”–it has to fit between welcoming a visitor, taking their money, answering any questions, and looking up their member ID.
  • Each use of the tool lasts only 5-10 seconds, but we’ve spent many hours and built many versions of this tool to figure out exactly what needs to happen in that time to accomplish all the tasks, including updating databases, handling failures, and serial communication.
  • Every one of these iterations gave us an opportunity to be connected to the staff using the tools, not only to make something that works better, but to be a part of the conversation.

Administrative interfaces at Cooper Hewitt (8)

Administrative Interfaces: What does success look like? How does it feel?

  • In informal interviews with Tagatron users we found trust to be a central theme of users’ response to the interface
  • Since Tagatron augments the curators’ use of TMS, they were less trusting of its database as a long-lasting data repository
  • Improving user feedback (like confirmation messages) helped build trust in the interface
  • Bill Moggridge, Designing Interaction: designing interaction is designing the relationship between people and things
  • We came to realize the responsibility of designing interfaces—validating and responding to users’ concerns; acknowledging the burden of new responsibilities
  • Administrative interfaces at the crux of the staff relationship to the new Cooper Hewitt experience
  • Anticipating issues in developing and maintaining administrative interfaces (when success feels like failure):
  • First, the human factor: being open to the feedback and creating an environment where the channels exist to communicate staff thoughts on the tool.
  • Second, the technical factor: being able to act on what you hear from staff and make the required changes to complete the feedback loop.
  • Our responsibility as facilitators of technology in the museum to hear and act on concerns.

Administrative interfaces at Cooper Hewitt-9

Questions to ask when starting an administrative application in order to anticipate issues and accommodate feedback.

Question 1: To what degree should the (administrative) tool fit with pre-existing notions?

  • This question addresses the need to understand contextual use of the tool
  • Tagatron: curatorial culture around spreadsheets and TMS
  • Pen Pairing Station: this tool disrupted the expected ticket-selling workflow. We learned that the tool needed to make Pen pairing as unobtrusive as possible

Administrative interfaces at Cooper Hewitt (10)

Question 2: How much of the underlying technology should come through to the interface?

  • Infrastructure & interfaces are layers of an onion—the best mental model for a tool’s interface might not reflect the best technical model for its back end
  • Tagatron: the filtering tools were a reflection of how data was stored in the database, not how curators expected it
  • Pen Pairing Station: error messages from all parts of the application stack came through to the user unaltered, which was not helpful
  • Highlights the need for a technical solution that allows for flexibility in the middle, “translation layer” of an application

Administrative interfaces at Cooper Hewitt (11)

Question 3: What kinds of feedback does the tool provide?

  • Feedback is the voice of the interface/ its personality–is it finicky or reliable? Annoying or supportive?
  • Tagatron: missing feedback created distrust
  • Pen Pairing: too much feedback caused confusion (error messages, validation handshake)
  • Our design and production methodology: working code always wins/ learning through doing; build small, working prototypes and continually iterate.
  • A more anticipatory form of design (like design thinking) could have helped us find answers to this question sooner

Administrative interfaces at Cooper Hewitt (12)

Question 4: Is it an appropriate time for experimentation?

  • Tagatron’s v1 included relatively unknown-to-us technology like MongoDB and Node.js. We should have used more familiar technology or done small-scale tests before implementing a project of this scale; it severely hindered our ability to accommodate feedback
  • Other tools we built that involved experimental tech were only successful because their scale and user base were far smaller (e.g., the label writer)

Administrative interfaces at Cooper Hewitt (13)

The result of everything: bridges, lines of communication opened

  • Building administrative tools for staff created cross-departmental conversation—in taking on the role of building and maintaining Tagatron and the Pen Pairing Station, the Digital & Emerging Media team engaged users from departments across the museum and observed closely how the tools fit into staff members’ larger roles.