The Congressional Papers Section (CPS) met during the Society of American Archivists (SAA) annual conference in Washington, D.C. on August 15. The venue in the nation’s capital provided session organizers a chance to bring in congressional staffers and gave archivists the opportunity to visit with their congressional delegations. The most prominent theme of the day was “advocacy”—advocacy for the history of undocumented groups, advocacy for saving important data, and advocacy for congressional offices to keep and donate their records—and the various ways this can be achieved.
The first session, sponsored by the CPS Diversity Task Force, focused on “archival silences,” or the “empty spaces in…archival records and/or practices.” Panel contributors were from a public research library (Andrea Copeland), a history/memory preservation project (Adrena Ifill), a public and civic programs foundation (Scott St. Louis), and the Congressman Charles Rangel Archive Project (Kimberly Peach). Talks covered the following: the records of the Indianapolis AME Bethel Church (founded in 1836) and how Copeland persuaded their custodians to donate them to her institution; how Ifill helps African-American members of Congress place their records in archives and draws documents from a variety of sources to tell their stories; how Peach uses the Rangel collection to illustrate local history; and how public policy foundations and archives can further civic education and illuminate previously under-studied areas. Copeland’s “lessons learned” included diversifying the profession (to better work with minority communities and access otherwise forgotten collections), securing participation from heritage groups and third-party organizations, and being open to working with living donors. She introduced the idea of “participatory” heritage, a new and more activist way of thinking about archives as “service to the community.”
Diversity Panel (with speaker Andrea Copeland)
The CPS Electronic Records Committee (ERC), in partnership with the CPS Constituent Services System (CSS) Task Force, put together the second session on preserving and accessing CSS data, which can contain important topical, administrative, and demographic information. The session’s aim was to “demystify” CSS data and argue for its inclusion in a collection. Along with a summary of the Task Force’s recent report and a Senate Historical Office survey of current CSS use, the session introduced West Virginia University’s promising new tool to make CSS data accessible to researchers. The tool has now been granted a Lyrasis Catalyst Grant for further research and development, and both the ERC and the Task Force plan to follow its progress.[1] Then the group got an opportunity to talk with a Senate systems administrator (Vik Kulkarni) and a House office legislative correspondent (Lucy Shaw). Questions covered duplication between CSS and shared drives, who uses CSS, the differences between the House and Senate vendor contracts and downloads, and when to start the conversation about getting CSS data (Vik joked you should ask before the Member is elected!). Vik urged archivists to be thorough in discussions about the CSS with offices: establish the context of the data, get internal codes defined, and gather technical details and any system migration information. Lucy pointed out that she sometimes finds her CSS clunky and glitchy, and that archivists should be aware of past problems and the presence of “junk data.”
CSS Panel (with Jessica Tapia showing the WVU CSS Tool)
The final panel, moderated by Nathan Gerth of the Advocacy Task Force, focused on the idea and practice of advocacy, on which CPS has a new brief, “CPS Advocacy Day 2018.”[2] The session featured “insider perspectives” from several Senate staffers from offices in various stages of archiving, along with the experiences of a repository archivist, Leigh McWhite of the University of Mississippi. It was an open and often frank discussion, ranging from how we as archivists can better advise offices, getting the attention of the Member, useful communications strategies, and why the word “archivist” still scares staffers(!) and what to do about it, to the challenges of finding the best and most suitable repository, even whether a repository will want the collection at all. On that last issue, which McWhite joked was a “dirty secret,” some institutions have internal collecting policies that limit what they can take, and bureaucratic barriers on their end may prevent more active outreach.
Hopefully, attendees left motivated by new ways to share and showcase their collections and to work with legislators and their staffers, and convinced that saving CSS data (which can contain those communications and interactions with underrepresented groups!) may be worth it.
[1] The WVU CSS tool can be downloaded at https://github.com/wvulibraries/rockefeller-css. The developers have worked with Senate and House office data sets, one using the “Archival” download (a limited number of fields) and the other using the more robust “SDIFF” Interchange. More information about the tool can be found in the CSS Task Force report.
In 2016 the Task Force on Technical Approaches to Email Archives was formed to “construct a working agenda for the community” and articulate “a conceptual and technical framework” for archiving email. A notable feature of the Task Force is that, alongside members from academic libraries and institutional archives, there are representatives from two private-sector tech companies, Microsoft and Google. Its final report, “The Future of Email Archives,” is now out.[1] The report’s “primary objective…is assessing and recommending methods by which archivists can engage with these new…technologies as well as identifying gaps and recommending additional development.” The report recognizes that, while most archives have a basic handle on capturing, preserving, and providing access to email, the work is not yet done systematically enough to be truly effective. It’s not enough to have the shared goal of archiving email; there must also be a deployment of new techniques and approaches, complemented by advanced technology and improved advocacy. The following is a synopsis and review of the report.
In the first section, the “Untapped Potential of Email,” the Task Force observes that, far from being a dying medium, email is still pervasive and powerful, and still contains the important communications which archivists need to capture and maintain to fulfill their critical responsibility of preserving our history. It’s imperative for archivists to understand email in order to become fully knowledgeable about their collections. The nature of email—its ephemerality, size, tendency to disorganization, and software requirements—means that it is fundamentally different from other electronic records. Among the many other points made is that email is already changing traditional archival practices, in that it is drawing more repositories to acquire born-digital accessions. The following section, the “Email Stewardship Lifecycle,” goes through the email lifecycle model in detail, with the associated preservation challenges and suggestions for managing each stage.
The more technical and substantive material exists in the next three sections, which are about defining email, assessing the landscape of email preservation, and identifying problems and promulgating potential solutions. The section “Email as a Documentary Technology” explains an email’s architectural characteristics (the structure of a message, operational and administrative features, software systems, and why email became so widely adopted), security vulnerabilities and their complications, and additional components (such as attachments and links). The next section, “Current Services and Trends,” is an excellent summary of the state of email communications in today’s society and how the IT industry and the archival/library community are addressing various challenges. Included are problems with email abuse and security (which can have a deleterious effect on ensuring authenticity) and the features industry is introducing to combat them (the report notes which solutions are the most and least valuable to archives). Also considered is the range of existing records management solutions and how useful they are (or not) to long-term email archiving, and the additional work the archival and library community has taken on in capturing, authenticating, and preserving messages. This section notes that the archival community has already consolidated around disk imaging and various types of data exports as effective ways to capture emails. Current preservation tools and strategies are discussed in “Potential Solutions and Sample Workflows.” As the report states, “regardless of the chosen approach, tools should be able to exchange data regarding both the email and relevant preservation actions.” The pros and cons of various preservation strategies, and suggestions for what a community data model should look like, are thoroughly discussed.
The report closes with the “Path Forward: Recommendations and Next Steps,” which provides both short-term and long-term proposals in the areas of community development/advocacy and tools support, testing, and development, helpfully targeted to repositories of varying levels of electronic records archiving and tech-savviness.
Some immediate takeaways from the report:
The documenting of process should be central to any email preservation strategy, and is part of an overall need to develop new kinds of archival practices (such as aggregate description).
The unique demands of email require extensive human intervention, and much earlier in the lifecycle than other electronic records because “decisions made or not made by those who send, receive, and manage email [at the start]…determine if and how it can be made available to future researchers.”
The openness that is key to email’s success and popularity is also the primary complication in preserving it long-term in accordance with archival principles.
Archivists should be aware that current “email archiving” industry solutions are largely geared to legal or records management issues of compliance, retention, and risk management, i.e. deletion of emails as soon as possible!
Since email is bulky and unorganized, automation (such as predictive coding) and even artificial intelligence in some form will play an increasing role in its management.
Scattered throughout the report are workflows/scenarios, term and format definitions, technical explanations, and case studies. One of the most valuable parts of the report is the series of appendices on email preservation tools and current email preservation research projects, including an automating system processes model. It’s a great resource for those who want to know about the latest initiatives and standards in email archiving.
The report provides one of the most comprehensive assessments of the subject, and many readers will find their challenges, concerns, and experiences with archiving email reflected and verified. It certainly succeeds in establishing an “interoperable toolkit” that provides useful information and solid ideas for creating or furthering an email archiving plan or strategy without getting bogged down by policy decision specifics or limitations of tools. The report not only “gets down in the weeds,” but also considers the bigger picture by making connections between current technology and process and available financial and labor resources. The report is not a one-off but part of a larger Task Force program to secure stakeholders, follow up on promising approaches, and augment existing recommendations. The Task Force’s message is that, at whatever level you perform email preservation and access, you can achieve a workable solution.
[1] Officially announced at the session “Email Archiving Comes of Age” #201 on August 16, 2018 at the Society of American Archivists annual conference in Washington, D.C.
The Digital Archives Specialist (DAS) is one of two certificate programs available through the Society of American Archivists (SAA). Through both online and in-person courses, the curriculum is “designed to provide you with the information and tools you need to manage the demands of born-digital records.”[1] SAA provides excellent information on their website for those interested in checking out individual courses to meet a specific need and for those who want to pursue the DAS certificate. To earn a DAS certification, you must complete courses in four areas: Foundational (four), Tactical and Strategic (three), Tools and Services (one) and Transformational (one). SAA allows 24 months to complete the coursework, then up to five months after that to pass the comprehensive exam. All tests for individual courses are timed and taken online, with testing out of Foundational courses as an option. The comprehensive exam is taken the same way, but is only available during the months of April, August and December.
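The course-count requirements above lend themselves to a quick sanity check. The following is a toy Python sketch: the area names and minimums come from the requirements just described, but the function and the sample transcript are purely illustrative, not anything SAA provides.

```python
# Minimum courses per area for the DAS certificate, as described above.
REQUIRED = {"Foundational": 4, "Tactical and Strategic": 3,
            "Tools and Services": 1, "Transformational": 1}

def meets_das_requirements(completed):
    """Check a tally of completed courses against the DAS minimums."""
    return all(completed.get(area, 0) >= n for area, n in REQUIRED.items())

# A hypothetical transcript: one extra Tools and Services course is fine.
my_courses = {"Foundational": 4, "Tactical and Strategic": 3,
              "Tools and Services": 2, "Transformational": 1}
print(meets_das_requirements(my_courses))
```

The nine-course minimum (4 + 3 + 1 + 1) must be completed within the 24-month window before the comprehensive exam.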
I’ve worked in the Baylor Libraries in Texas since 1995, first as a staff member in the fine arts library when I began my MLIS, and then as an assistant in library development when I completed my degree in December 2000. I moved to the W. R. Poage Legislative Library in June 2008, but didn’t begin working with archives until five years after arriving. I was strongly encouraged by the interim Dean to take the Certified Archivist (CA) exam and passed in August 2016.
I decided to pursue the DAS certificate to increase my understanding of the digital records received by congressional archives and to make better decisions about what to digitize. The nuts and bolts of getting the certificate were not overly taxing. SAA requires that you take at least two courses in person; however, I ended up taking seven on site and four online.[2] Two of the in-person courses were taught here at Baylor and hosted by the Poage Library. Hosting provided a good way for me to take two classes I needed, and also benefited other library staff members on campus. The farthest in-person course I took was in Kentucky. It was my last class and I wanted to take the comprehensive exam in April 2018, so I traveled for the timing to work out. The other courses were closer for me in Houston and Austin. Poage Library paid for most of the courses and I paid for travel.
Earning a DAS certificate facilitated my growth in knowledge of digital archives. I took courses in the essentials of electronic records – appraisal, arrangement and description, and preservation and access. I learned how to manage, curate and preserve digital archives. I even took a class on command line interface which was new to me, yet seemed familiar in an HTML kind of way.
What the process didn’t do was teach me the technical skills to run my own digital archives shop.[3] A few of the courses provided hands-on experiments with ingesting, arranging, and describing digital records. I did not possess a lot of technical skills other than HTML going in, and the DAS curriculum was not designed to teach them to me since the certificate requires only one “Tools and Services” course. I am very fortunate that at Baylor we have a digitization center with staff who possess the technical skills necessary to work with digital objects. However, they are not archivists trained in archival principles with an understanding of which artifacts possess significant research value, and are thus worthy of sustaining digitally across generations of technology. Digitizing material or retaining digital objects without applying archival principles is obviously expensive and unsustainable. The DAS curriculum has taught me to create the practices, procedures, and plans to complement and enhance the digital archives at my institution and library.
The DAS provided professional growth and will assist with my CA recertification. I am glad I made the effort to earn it, and I look forward to taking more courses next year to meet the 5-year recertification. Never stop growing. Never stop learning!
[2] A number of courses have been added since I started in October 2016, and will make it easier to complete more classes online.
[3] DAS courses can help with learning technical skills with classes such as Advanced Digital Forensics for Archivists, Research Data Curation, Preservation Formats, Tool Selection and Management, and Archival Collections Management Systems.
The Electronic Records Committee (ERC) of the Congressional Papers Section is pleased to announce a new set of modules for its electronic records manual.
The modules, by new committee members John Caldwell (University of Delaware) and Nathan Gerth (University of Nevada), are on workflows involving BitCurator (a suite of digital preservation tools) and Brunnhilde, a characterization and reporting tool for digital records. The BitCurator module sets out the steps needed to create a disk image, and the Brunnhilde module explains how an appraisal report can be generated from a disk image. Because these modules work with different aspects of disk images using BitCurator, they closely complement each other. BitCurator is an important resource in the archival community and we are pleased to have some examples of possible workflows for it. The modules feature diagrams, screenshots, and command line instructions, and cover the areas of accessioning and description. These modules bring our collection to 15!
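For readers curious what an appraisal report drawn from a characterization run might look like, here is a minimal Python sketch. It tallies a format report of the general kind a characterization tool such as Brunnhilde (via Siegfried) produces; the column names and sample rows are invented stand-ins, not the tool's actual output schema.

```python
import csv
import io
from collections import Counter

# Stand-in for a format report CSV; the columns here are illustrative.
sample_report = """path,format,size
/img/docs/letter1.doc,Microsoft Word,24576
/img/docs/letter2.doc,Microsoft Word,31744
/img/photos/rally.jpg,JPEG,2097152
/img/mail/inbox.pst,Outlook PST,104857600
"""

def summarize_formats(csv_text):
    """Tally file counts and total bytes per format for a quick appraisal view."""
    counts, sizes = Counter(), Counter()
    for row in csv.DictReader(io.StringIO(csv_text)):
        counts[row["format"]] += 1
        sizes[row["format"]] += int(row["size"])
    return counts, sizes

counts, sizes = summarize_formats(sample_report)
for fmt in counts:
    print(f"{fmt}: {counts[fmt]} file(s), {sizes[fmt]} bytes")
```

A summary like this lets an archivist spot, at a glance, where the bulk of a disk image lives (here, a single PST mailbox dwarfs everything else) before committing processing time.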
The idea of the modular manual is to provide documentation for a possible method to address a need in an electronic records workflow. An institution can mix and match them to create an electronic records workflow that meets its needs. Community members are invited to contribute their own process documentation. The goal is to build up a collection of modules that offer alternatives to each task that makes up the electronic records workflow, from donor discussions through access.
Modules are offered separately or together in a PDF portfolio.
When formulating this initiative, the ERC put together an outline for further modules. Please contact the ERC if you would like to contribute a module and we will provide you with suggestions.
Alison White, of the Senate Historical Office, and other current or former Congressional archivists, participated in a session on issues involving digital preservation of congressional records at the National Digital Stewardship Alliance’s Digital Preservation conference back in October. Instead of writing a regular blog post about the session, she put together a special “Storify” of the archivists’ tweets.
One of the core functions of the U.S. Congress is to represent the views and needs of constituents. Members of Congress serve as constituents’ connection to the federal government, representing constituents, state, and local issues while debating legislation of national importance.
Constituent correspondence, or issue mail and casework, has traditionally been maintained as voluminous paper files (and sometimes microfilm), but since the late 1970s, Congress has employed computerized systems. As systems have become proprietary and grown in complexity, the exported data poses real challenges to the archives and libraries seeking to preserve the documentation of this crucial relationship between constituents and their representatives.
Without a strategic effort by the archival community to process, preserve, and provide access to constituent data, an entire body of historical documentation of American democracy is in danger of disappearing.
With this reality in mind, the Congressional Papers Section formed the CSS/CMS Task Force in August 2016 and charged it with investigating the issue and making recommendations. Over the last year, the Task Force has pulled together the most comprehensive report to date about Constituent Services Systems (CSS) in the Senate and Correspondence Management Systems (CMS) in the House of Representatives.
The report provides an overview of how CSS and CMS developed over time, which vendors and systems are currently in use, how systems are deployed in congressional offices, and how data is transferred and exported. It looks at the common obstacles faced by archives and libraries that receive the data and the ways in which archivists have been trying to address these challenges. It also provides a list of academic work that has relied on analog constituent correspondence, while noting the vast potential for research, both inside and outside the academy, with constituent data.
Finally, and most importantly, the report provides guidance for short- and long-term management and preservation in collecting repositories. The short-term guidelines provide instructions for conversations with congressional offices about constituent data, as well as basic born-digital best practices to ensure data is stored safely.
In the long term, the report recommends building an advocacy coalition with a vested interest in supporting management guidelines in member offices, among the commercial vendors working with congressional offices, and in collecting repositories, and it calls on the community to develop a technological solution for processing, preserving, and providing access to constituent data that will benefit both large and small repositories.
The Electronic Records Committee is excited to host a CPS Day (July 26) session titled “Hands-On Introduction to Born-Digital Processing.” Our “Electronic Records Modular Manual,” which showcases electronic records archiving workflows using different processing methods and tools, has proven popular with CPS members, and we thought it would be useful to bring the modules to life for those who may be curious about some of the solutions available. Our session will feature demonstrations of four digital preservation tools—BagIt/Bagger, DataAccessioner, Bulk Extractor, and a timestamp analyzer—which cover such needs as packaging materials together for ingest, checksum generation, file migration, PII identification, and metadata preservation. The session will focus on a scenario involving records of a retiring Senator and will include moderators to walk participants through each tool. Participants will get a chance to try out the tools, either on their own or with a group, with supplied data sets. The end of the session will be reserved for questions and discussion.
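As a taste of what the checksum-generation step involves, here is a small Python sketch using only the standard library. It is not one of the session's tools; it simply builds a BagIt-style manifest-sha256.txt listing for a throwaway directory, with invented file names standing in for a Senator's transferred records.

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path):
    """Hash a file in chunks so large files don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def make_manifest(data_dir):
    """Return a BagIt-style manifest: one 'checksum  relative/path' line per file."""
    lines = []
    for p in sorted(Path(data_dir).rglob("*")):
        if p.is_file():
            lines.append(f"{sha256_of(p)}  {p.relative_to(data_dir)}")
    return "\n".join(lines)

# Demo on a temporary directory with one stand-in file.
with tempfile.TemporaryDirectory() as d:
    Path(d, "speech.txt").write_text("draft remarks")
    manifest = make_manifest(d)
    print(manifest)
```

Fixing checksums at ingest gives the repository a baseline against which later fixity checks can detect bit rot or tampering, which is the core idea behind tools like Bagger.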
Our session is at the end of the day, so consider staying a while longer to take advantage of this tutorial! Watch the CPS listserv for more details and instructions.
The Electronic Records Committee (ERC) of the Congressional Papers Section is pleased to announce a new set of three modules for its electronic records manual.
One of the modules covers the important issue of processing email, in this case Microsoft PST files. It goes through the steps of using Microsoft Outlook, a PST Reporter, and a PST Viewer to process an email account and provides a comparison table on their benefits. This module draws on the expertise of three Senate archivists who regularly use these tools. The other two modules, by ERC member Jim Havron, reflect his knowledge and experience helping archives find solutions to implementing electronic records storage, and use Google Drive as a collaboration, storage, and access tool. The modules include workflows, diagrams, and tool instructions, and cover the areas of processing, digital preservation, and access.
The idea of the modular manual is to provide documentation for a possible method to address a need in an electronic records workflow. Once enough modules are written, an institution can mix and match them to create an electronic records workflow that meets its needs. Community members are invited to contribute their own process documentation. The goal is to build up a collection of modules that offer alternatives to each task that makes up the electronic records workflow, from donor discussions through access.
Modules are offered separately or in a portfolio (to which institutions can add additional modules as they become available).
When formulating this initiative, the ERC put together an outline for further modules. Please contact the ERC if you would like to contribute a module and we will provide you with suggestions.
By Elisabeth Butler, Senate Historical Office, and John Caldwell, University of Delaware
This month, the Association of Centers for the Study of Congress (ACSC) held its annual meeting at the Library of Congress in one of its beautiful meeting rooms in the Jefferson Building. ACSC is an independent alliance of more than 50 political papers collecting organizations and institutions that promotes the history and understanding of the U.S. Congress. Since many ACSC members are also CPS members, John Caldwell, former Senate Historical Office Fellow and now of the University of Delaware, and I thought we should do a blog post on the meeting! The meeting featured some great sessions on the theme of “connecting and communicating”; a keynote by Dr. Colleen Shogan, Deputy Director of the Library’s National and International Outreach, on the Library’s core strategies related to the development and execution of its extensive outreach initiatives; a bit of “SpeedGeeking,” a fun learning and networking exercise; and thoughts and insights from congressional celebrities. We took notes on the sessions we thought would be of interest to CPS members.
ACSC was honored to host Senate Majority Leader Mitch McConnell (R-KY) who is an avid proponent of preserving the records of Congress. In 1991, Senator McConnell founded the McConnell Center at the University of Louisville to “nurture Kentucky’s next generation of great leaders,” with programs focusing on service, leadership and civic education. The Center is also the home of the Senator Mitch McConnell and Elaine L. Chao Archives. The Senator has been sending records periodically to the archive since the 1990s, and has had a professional archivist on his staff to assist in this process. The Senator regularly sits down with Deborah Skaggs Speth, the curator of his archives, to record an oral history of his time in leadership. His wife, former Secretary of Labor and sitting Transportation Secretary Elaine Chao, has also donated her personal papers to the archive.
When asked what advice he would give on the value of archiving, the Senator said that he would tell fellow members “you are more important than you think you are.” Members of Congress do things of great importance with wide-ranging consequences, and it’s important for members to preserve their role in these events through archiving.
Deborah Skaggs Speth (McConnell Archives) with Senator Mitch McConnell
In addition, ACSC always showcases former Members of Congress. This year’s guests were former Rep. Barney Frank and his long-serving Chief of Staff Peter Kovar. During the time with Rep. Frank, we learned about his experiences while in office, the sense of “legislative fellowship” that exists on the Hill, and his view that there is a lack of understanding of how the constituency drives the decisions and actions of Congressional members.
Left to Right: Peter Kovar, Barney Frank, and moderator Ray Smock
Not many people are aware that the Library of Congress is itself a congressional repository, a fact which was underlined in the session on the Library’s venerable Manuscripts Division. The Division has 900 congressional collections, which date from the 18th century to the present. First up was Connie Cartledge, Senior Archives Specialist, who talked about how the Division processes its congressional papers, with a focus on the Senator Daniel Patrick Moynihan and Rep. Jack Kemp papers. They survey the papers, prepare a processing plan, mostly process to the folder level, and decide on the arrangement of series. Cartledge illustrated her talk with slides showing the two collections going through the laborious process of archival processing. So what happens to the digital materials in these collections? Kathleen O’Neill, Senior Archives Specialist, spoke on this topic. Like most archival institutions, the Division is seeing more and more unique born-digital materials in recent collections and is facing the same issues of PII, scale, and ensuring authenticity and access for researchers. It uses digital forensic tools to produce reports and analysis on records, observes the “original order” rule, is committed to bit-level preservation, and serves up digital collections in “bags” to researchers. Intellectual order happens in the finding aid. Another speaker, Ryan Reft, gave an overview of the history and contents of the Division’s congressional collection section, but remarked that the Division is now shifting its focus to collecting the papers of the more significant political actors and related campaign and interest groups.
So how have congressional collections been used in the past year? The session “Researchers, Teachers, Oh My!” featured Hope Grebner Bibens, Political Papers Archivist at Drake University; graduate student Caitlin Rathe; and Brad Owens, a lecturer in journalism. Grebner Bibens spoke on how she uses the Senator Tom Harkin collection to create courses for students on archives and studying how Congress works. Rathe, whose dissertation focuses on the development of food assistance policy from the 1960s to 1980s in the U.S. and United Kingdom, talked about her research in various institutions, including the Senator Robert and Elizabeth Dole Institute, which she found particularly useful for studying farm policy. Of note was her observation that studying the Dole papers helped her trace the thinking that shaped food policy and what Congress’ intent was at the time. Finally, Owens told how he used the papers of his district’s 1930s-1940s Congressman, especially constituent correspondence, to teach his students about the “game of politics” and as a source of stories for debate. In the course of his research he discovered materials on racial attitudes and their influence on local politics which he found especially fascinating. All the speakers agreed that researchers must be open-minded with the materials because you never know what you may find!
In the session “Creating Connections with Existing Collections,” we heard from three individuals who are experimenting with new forms of use and access to existing archival resources. Jaime Mears, a Program Officer in the Library of Congress’ National Digital Initiatives office, talked about the education and outreach her office does. In the last year, they’ve developed a “hack to learn” event, teaching archivists and information professionals how to develop tools to perform complex computational analysis on existing data. They also hosted a data summit in September 2016, inviting librarians and archivists to tackle the daunting but vitally important role of “archives as data.” Leah Weinryb Grohsgal, a Senior Program Officer at the National Endowment for the Humanities, talked about the Chronicling America national digital newspaper program, which has digitized over 12 million newspaper pages. They recently hosted a data challenge, allowing users to take the API (application program interface) for Chronicling America and develop an open source tool to analyze the data. The grand prize winner designed a tool to analyze the use of biblical quotes in newspapers. Finally, Andrew Wilson, National Archives’ Director of Digital Engagement, talked about the agency’s Innovation Hub, where citizens can become archivists and transcribe digitized records or scan their own to contribute to the National Archives Catalog.
Friday morning afforded us two incredible sessions: the first—“Wikipedia and Civic Engagement”– discussed the value of Wikipedia in civil discourse, the hosting of Wikipedia edit-a-thons to enhance Wikipedia’s accuracy by correcting or adding articles, and the use of the Wikidata platform to add data elements to articles which cross all of the Wikipedia language-specific libraries. For instance, P485 is the Wikidata property “archives at;” using this attribute, we can link finding aids to existing articles (regardless of language), both increasing access to holdings and validating the information in Wikipedia.
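To make the P485 idea concrete, here is a hedged sketch of the kind of SPARQL one could run against the Wikidata Query Service to list people whose papers a given repository holds. The Python wrapper and the example identifier are illustrative; the QID below is a placeholder, not a specific repository.

```python
def archives_at_query(repository_qid):
    """Build a SPARQL query for items whose "archives at" (P485) is a given repository."""
    return f"""
SELECT ?person ?personLabel WHERE {{
  ?person wdt:P485 wd:{repository_qid} .
  SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
}}
""".strip()

# Placeholder repository item ID for illustration only.
query = archives_at_query("Q12345")
print(query)
```

Run against the live query service, a query like this works in either direction: a repository can find every Wikipedia/Wikidata subject already linked to it, or spot notable figures in its holdings that lack the P485 statement and add it, closing the loop between finding aids and encyclopedia articles.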
The second Friday session, “Digital Preservation at the Library of Congress,” covered the Library of Congress’ involvement in a range of digital preservation initiatives. Kate Murray, from Digital Collections and Management Services, talked about the Federal Agencies Digital Guidelines Initiative, a collaborative effort of more than 20 federal agencies working to set standards and develop tools to preserve both reformatted and born-digital records. Ted Westervelt, from the Acquisitions and Bibliographic Access Directorate, talked about the Recommended Formats Statement, a list of criteria to consider when preserving both analog and digital materials. The statement is not prescriptive, but rather gives institutions a set of practices and standards to use when measuring long-term preservation needs. Abbie Grotke, a member of the Web Archiving team, talked about the value of an active web archiving program, specifically the challenges of selecting resources to preserve and of managing and making accessible over one petabyte of web content. Abigail Potter, a Senior Innovation Specialist from the National Digital Initiatives Division, gave attendees a history of the National Digital Information Infrastructure and Preservation Program (NDIIPP) and its successors. NDIIPP was an important outlet for cultural heritage institutions, providing guidance, sharing tools, and fostering partnerships in digital preservation.
The Library of Congress panel also included a presentation from Elizabeth England, the National Digital Stewardship Resident at Johns Hopkins University. Elizabeth’s project focuses on developing a preservation program for 12 TB of born-digital photographs. Elizabeth walked the audience through her project, explaining her appraisal decisions (including sampling the collection for retention), sharing the functionality of two Python scripts that automate the accessioning process, and demonstrating open-source tools like OpenRefine that she uses to clean up her data. This presentation was particularly valuable for our community, which is accustomed to dealing with vast quantities of photographs in its collections.
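Elizabeth’s actual scripts were not distributed, but as an illustration of the kind of accessioning automation she described, here is a minimal sketch that walks an accession directory and records a fixity manifest (path, size, and checksum for every file). The function and file names are our own; treat this as one possible approach, not her implementation.

```python
import csv
import hashlib
from pathlib import Path

def checksum(path, algorithm="md5", chunk_size=1 << 20):
    """Compute a file checksum in chunks so large image files fit in memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(accession_dir, manifest_path):
    """Record path, size in bytes, and MD5 for every file in an accession."""
    accession_dir = Path(accession_dir)
    with open(manifest_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["path", "bytes", "md5"])
        for p in sorted(accession_dir.rglob("*")):
            if p.is_file():
                writer.writerow(
                    [p.relative_to(accession_dir), p.stat().st_size, checksum(p)]
                )
```

Running the same manifest step again later and comparing checksums is a common way to verify that nothing in the accession has changed or been corrupted.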
This year’s ACSC meeting was a wonderful mix of sessions that really spoke to the need to connect with one another and to communicate the value of our collections to researchers, students, and the public at large. There was a strong emphasis on digital records and how current technology can help us process, preserve, and use them, and we were especially fortunate to hear how the Library of Congress is contributing to that effort.
The full conference agenda is available here. There is also a great blog post on a session at last year’s ACSC conference.
The Electronic Records Committee (ERC) of the newly renamed Congressional Papers Section is pleased to announce a new set of five modules for our electronic records manual.
These five modules tackle such complex subjects as file format migration, file fixity, describing digital/paper accessions, and the digital transfer process. There is also a module on the technological and security aspects of configuring and using offline computers for researchers. Systems and programs featured include Archivists’ Toolkit and DataAccessioner. Most of the modules include extensive workflow steps, diagrams, and tool instructions, and cover the areas of digital preservation, accessioning, description, and access. Thanks to contributors Adriane Hanson, Erin Wolfe, Jim Havron, Katie Delacenserie, and Elisabeth Butler.
The idea of the modular manual is to provide documentation for a possible method to address a need in an electronic records workflow. Once enough modules are written, an institution can mix and match them to create an electronic records workflow that meets its needs. Community members are invited to contribute their own process documentation. The goal is to build up a collection of modules that offer alternatives to each task that makes up the electronic records workflow, from donor discussions through access.
Modules are offered individually or in a portfolio, which will be updated as additional modules become available.
When formulating this initiative, the ERC put together an outline of further modules. Please contact the ERC if you would like to contribute a module, and we will provide you with suggestions.