Journey into the archive with our new online gallery.
The Wonders of the ADS is a digital exhibition dedicated to highlighting the outstanding digital data held in the ADS archive.
The Wonders of the ADS digital exhibition developed out of a collaborative project with Carlotta Cammelli, a Leeds University MA Art Gallery and Museum Studies student, as part of her Masters dissertation. The project, entitled Unearthing the Archive: Exploring new methods for disseminating archaeological digital data, aimed to develop an innovative online approach to presenting specific digital objects (such as photographs, drawings, documents, videos and 3D data files) from the ADS collections, in order to increase public engagement with the data in our archive.
Traditionally the ADS is used by researchers with specific interests in mind. The structure of the ADS into individual archives also means that interesting material can sometimes be buried within the vast quantity of data held by the ADS.
This is the first part of a (much delayed) series of blogs investigating the storage requirements of the ADS. This began way back in late 2016/early 2017 as we began to think about refreshing our off-site storage, and I asked myself the very simple question of “how much space do we need?”. As I write, it’s evolving into a much wider study of historic trends in data deposition, and the effects of our current procedures and strategy on the size of our digital holdings. Aware that blogs are supposed to be accessible, I thought I’d break it into smaller and more digestible chunks of commentary, and a lot of time spent at Düsseldorf airport recently for ArchAIDE has meant I’ve been able to finish this piece.
Here at the ADS we take the long-term integrity and resilience of our data very seriously. Although what most people in archaeology know us for is the website and access to data, it’s the long-term preservation of that data that underpins everything we do. The ADS endeavour to work within a framework conforming to the ISO 14721:2003 specification of a reference model for an Open Archival Information System (OAIS). As you can see in the much-used schematic reproduced below, under the terminology and concepts of the OAIS model, ‘Archival Storage’ is right at the heart of the operation.
How we achieve this is actually a pretty complicated process, documented in our Preservation Policy; suffice to say it’s far more than simply copying files to a server! However, we shouldn’t discount storage space entirely. Even in the ‘Zettabyte Era’, where cloud-based storage is commonplace and people are used to streaming or downloading files that – 10 years ago – would have been viewed as prohibitive, we still need some sort of space on which to keep our archive.
At the moment we maintain multiple copies of data in order to facilitate disaster recovery – a common and necessary strategy for any organisation that wants to be seen as a Digital Archive rather than simply a place to keep files. Initially, all data is held on the main ADS production server maintained by ITS at the University of York, which is backed up via daily snapshots; these snapshots are stored for a month, and further backed up onto tape for three months.
In addition to this, all our preservation data is synchronised once a week from the local copy at the University of York to a dedicated off-site store, currently maintained in the machine room of the UK Data Archive at the University of Essex. This repository takes the form of a standalone server behind the University of Essex firewall. In the interests of security, outside access to this server is via an encrypted SSH tunnel from nominated IP addresses. Data is further backed up to tape by the UKDA. Quite simply, if something disastrous happened here in York, our data would still be recoverable.
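For the curious, that weekly off-site synchronisation can be sketched as an rsync transfer pushed over SSH. Everything here – the paths, the hostname and the exact flags – is my illustrative guess, not the actual ADS configuration:

```python
import subprocess

# Hypothetical paths and host; the real ADS setup is not public.
LOCAL_ROOT = "/archive/preservation/"       # local copy at York
REMOTE = "ads-sync@offsite.example.ac.uk"   # off-site store at Essex
REMOTE_ROOT = "/srv/ads-mirror/preservation/"

def build_sync_command(dry_run: bool = True) -> list[str]:
    """Compose an rsync-over-SSH command for a weekly mirror run."""
    cmd = [
        "rsync",
        "--archive",   # preserve permissions, timestamps, symlinks
        "--checksum",  # compare file content, not just size/mtime
        "--delete",    # mirror deletions so both copies stay in step
        "-e", "ssh",   # carry the transfer over an SSH tunnel
        LOCAL_ROOT,
        f"{REMOTE}:{REMOTE_ROOT}",
    ]
    if dry_run:
        cmd.insert(1, "--dry-run")  # rehearse without copying anything
    return cmd

if __name__ == "__main__":
    print(" ".join(build_sync_command()))
```

In practice a job like this would run from a scheduler (cron or similar), with the dry-run flag dropped once the transfer list has been checked.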
This system has served us well; however, a very large archive (laser scanning) was recently deposited with us. In its original form alone it was just under a quarter of the size of all our other archives combined, and almost filled the available server space at York and Essex. In the short term, getting access to more space is not a problem, as we’re lucky to be working with very helpful colleagues within both organisations. Longer term, however, I think it’s unrealistic simply to keep asking for more space at ad-hoc intervals, and this feeds into a wider debate over the merits of cloud-based solutions (such as Amazon) versus procuring traditional physical storage (i.e. servers) with a third party. However, I’ll save that dilemma for another blog!
However, regardless of which strategy we use in the future, for business reasons (i.e. any storage with a third party will cost money) it would be good to be able to predict or understand:
how much data we may receive in the future;
how size varies according to the contents of the deposit;
the impact of our collections policy (i.e. how we store the data);
the effect of our normalisation and migration strategy.
Thus was the genesis of this blog…
We haven’t always had the capacity to ask these questions. Traditionally we never held information about the files themselves in any kind of database, and any kind of overview was produced via home-brew scripts or command-line tools. In 2008 an abortive attempt to launch an “ADS Big Table”, which held basic details on file type, location and size, was scuppered by the difficulties of importing data by hand (my entry of “Comma Seperated Values” [sic] was a culprit). However, we took a great leap forward with the 3rd iteration of our Collections Management System, which incorporated a schema to record technical file-level metadata for every file we hold, and an application to generate and import this information automatically. As an aside, reaching this point required a great deal of work (thanks Paul!).
As well as aiding management of files (e.g. “where are all our DXF files?”), this means we can run some pretty gnarly queries against the database. For starters, I wanted to see how many deposits of data (Accessions) we received every year, and how big these were:
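To give a flavour of the shape of that query, here is a minimal sketch against a toy in-memory SQLite table. The schema and the figures are invented for illustration; the real CMS is a far richer beast:

```python
import sqlite3

# A toy stand-in for the accession records in the Collections
# Management System; the real (internal) schema will differ.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE accession (
    id INTEGER PRIMARY KEY, received DATE, size_bytes INTEGER)""")
con.executemany(
    "INSERT INTO accession (received, size_bytes) VALUES (?, ?)",
    [("2007-03-01", 50_000_000), ("2007-09-12", 120_000_000),
     ("2017-02-20", 900_000_000_000), ("2017-11-05", 2_100_000_000_000)],
)

# Accessions per year: how many deposits, and how big in total?
rows = con.execute("""
    SELECT strftime('%Y', received) AS year,
           COUNT(*)                 AS n_accessions,
           SUM(size_bytes)          AS total_bytes
    FROM accession
    GROUP BY year
    ORDER BY year
""").fetchall()

for year, n, total in rows:
    print(year, n, f"{total / 1e12:.3f} Tb")
```

The output of a query like this, run over the full holdings, is essentially what feeds the graphs below.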
As the graph above shows, over the years we’ve seen an ever-increasing number of Accessions, an Accession being the single act of giving us a batch of files for archiving (note: many collections contain more than one accession). Despite a noticeable dip in 2016, the trend has clearly been for people to give us more stuff, and for the combined size of this to increase. A notable statistic is that we’ve accessioned over 15 Tb in the last 5 years. In total last year (2017), we received just over 3 Terabytes of data, courtesy of over 1400 individual events; compared with 2007 (a year after I started work here), when we received c. 700Mb in 176 events. That’s an increase of 364% and 713% respectively over 10 years, and it’s interesting to note the disparity between those two values, which I’ll talk about later. However, at this point the clear message is that we’re working harder than ever in terms of throughput, both in number and size.
Is this to do with the type of Accessions we’re dealing with? Over the years our Collections Policy has changed to reflect a much wider appreciation of data, and community. A breakdown of the Accessions by broad type adds more detail to the picture:
Aside from showing an interesting (to me at least) historical change in what the ADS takes (the years 1998-2004 are really a few academic research archives and inventory loads for Archsearch), this data also shows how we’ve had to handle the explosion of ‘grey literature’ coming from the OASIS system, and a marked increase in the amount of Project Archives since we started taking more development-led work around 2014. The number of Project Archives should however come with a caveat, as in recent years these have been inflated by a number of ‘backlog’ type projects that have included a lot of individual accessions under one much larger project, for example:
This isn’t to entirely discount these, just that they could be viewed as exceptional to the main flow of archives coming in through research and development-led work. So without these, the number of archives looks like:
So, we can see that the ALSF was having an impact in 2006-2011, and that in 2014-2016 Jenny’s work on Ipswich and Exeter, and Ray’s reorganisation of CTRL, were inflating the figures somewhat. What is genuinely startling is that in 2017 this ceases to be the case: we really are taking 400+ ‘live’ Accessions from Project Archives now. How are these getting sent to us? Time for another graph!
The numbers clearly show that post-2014 we are seeing a lot more small archives being delivered semi-automatically via ADS-easy (limit of 300 files) and OASIS images (currently limited to 150 raster images). When I originally ran this query back in early 2017 it looked like ‘Normal’ deposits (*not that there’s anything we could really call normal; a study of that is yet more blogs and graphs!) were dropping off, but 2017 has blown this hypothesis out of the water. What’s behind this? Undoubtedly the influence of Crossrail, which has seen nearly 30 Accessions, but also HLCs, ACCORD, big research projects, and a lot of development-led work sent on physical media or via FTP sites (so perhaps bigger or more complex than could be handled by ADS-easy). Put simply, we really are getting a lot more stuff!
There is one final thing I want to ask myself before signing off; how is this increase in Accessions affecting size? We’ve seen that total size is increasing (3 Tb accessioned in 2017), but is this just a few big archives distorting the picture? Cue final graphs…
I’m somewhat surprised by the first graph, as I hadn’t expected the OASIS Grey Literature to be so high (1.5 Tb), although anecdotes from Jenny and Leontien attest to the size of files increasing as processing packages enable more content to be embedded (another blog to model this?). Aside from this, although the impact of large deposits of journal scans (uncompressed tiff) can be seen in most years, particularly 2015, it does seem as though we’re averaging around 1.5 Tb per year for archives. Remember, this is just what we’re being given, before any normalisation for our AIP (what we rely on for migration) and DIP (what we disseminate on the website). And, interestingly enough, the large amount of work we are getting through ADS-easy and OASIS images isn’t having a massive size impact: just under 400Gb combined for the last 3 years of these figures.
Final thoughts. First off, I’m going to need another blog or two (and more time at airports!) to go deeper into these figures, as I do want to look at average sizes of files according to type, and the impact of our preservation strategy on the size of what we store. However, I’m happy at this stage to reach the following conclusions:
Over the last 5 years we’ve Accessioned 15 Tb of data.
Even discounting singular backlog/rescue projects and big deposits of journal scans, this does seem to represent a longer-term trend in growth.
OASIS reports account for a significant proportion of this amount, at over a Tb a year.
ADS-easy and OASIS images are having a big impact on how many Accessions we’re getting, but not an equal impact on size.
After threatening to fall away, non-automated archives are back! And these account for at least 1.5Tb per year, even disregarding anomalies.
Right, I’ll finish there. If anyone has read this far, I’m amazed, but thanks!
ps. Still here? Want to see another graph? I’ve got lots…
On 30th November 2017 the first ever International Digital Preservation Day will draw together individuals and institutions from across the world to celebrate the collections preserved, the access maintained and the understanding fostered by preserving digital materials.
The aim of the day is to create greater awareness of digital preservation that will translate into a wider understanding which permeates all aspects of society – business, policy making and personal good practice.
To celebrate International Digital Preservation Day ADS staff members will be tweeting about what they are doing, as they do it, for one hour each before passing on to the next staff member. Each staff member will be focusing on a different aspect of our digital preservation work to give as wide an insight into our work as possible. So tune in live with the hashtags #ADSLive and #idpd17 on Twitter or follow our Facebook page for hourly updates. Here is a sneak preview of what to expect and when:
The ADS, Historic England and the Council for British Archaeology are pleased to announce the beta release of ADS Library.
Weaving a web of references.
The ADS Library is the fusion of existing datasets. These include journal and series backruns archived with the ADS, the Library of unpublished fieldwork reports (aka the Grey Literature Library), which is mostly populated with reports from OASIS, and last but not least the British and Irish Archaeological Bibliography (BIAB), which is itself a collection of different datasets gathered over the last hundred years.
The project to get these references online as a single resource has involved cleaning, mapping and enhancing the data from the different datasets, allowing them to share the same data structure and, hopefully, giving users consistent information about each item listed in the library. Some records simply show the existence of a report or publication, while others link out to the publication itself where available. There was some overlap in the combined datasets and we have endeavoured to merge records where appropriate, in order to limit the existence of duplicates in the lists of results.
In December of last year (2016), I completed the final stage of the digital archive and dissemination for the Rural Settlement of Roman Britain project. The first publication and (revised) online resource were launched at a meeting of the Society for the Promotion of Roman Studies at Senate House, University of London.
I’ve written previous blogs on the project, so won’t repeat myself here too much. Suffice to say that the final phase publishes the complete settlement evidence from Roman England and Wales, together with the related finds, environmental and burial data. These are produced alongside a series of integrative studies on rural settlement, economy, and people and ritual, published by the Society for the Promotion of Roman Studies as Britannia Monographs. The first volume, on rural settlement, has now been published, while the two remaining volumes will be released in 2017 and 2018.
The existing online resource has been updated both in content and functionality: the project database is available to download in CSV format, and most key elements of the finds, environmental and burial evidence have been added into the search and map interface. Hopefully the dissemination of the data in these forms allows re-use of this fantastic dataset in a variety of ways and, I hope, by a variety of users.
As with previous posts on this project, I’d like to say how much I’ve enjoyed working with the team at Reading and Cotswold. Producing an online archive and formal publication in tandem and in such a short time is no mean undertaking. I’m particularly happy/impressed with the determination by the researchers to make their data openly available at the earliest opportunity. Hopefully this is a benchmark that others will aspire to reach. A debt of thanks is also due to all those organisations that assisted the project, particularly the HERs of England and Wales who provided exports from their systems and aided the team at Cotswold with access to fieldwork reports. Finally, I’d have been lost without the awesome Digital Atlas of the Roman Empire created by Johan Åhlfeldt. At an early stage it became clear that creating any kind of ‘baseline mapping’ of Roman archaeology (combining NMP + HER data for example) would be problematic – both in terms of technical overheads and copyright. To do something on the scale of the EngLaId project’s ArcGIS WebApp simply wasn’t in the scope of the project! Johan’s work was thus timely and extremely useful in providing a broad backdrop of Roman Britain in which to compare the project results.
The rationale behind much of the interface work was to act as the data publication of an academic synthesis, and not to get tied down in building something akin to a Roman portal. Throughout the project we’ve been at pains to point out that this is very much a synthesis and interpretation of the excavated evidence in relation to a research question, not a complete inventory or atlas of every Roman site. Indeed, it became clear that as soon as the data collation had been completed (31st December 2014 for sites in England and March 2015 for sites in Wales), it was effectively missing all the discoveries made in the following years. Thus, although providing broad context was necessary in this case, if someone wanted to know everything about the Roman period (including sites not excavated) in a particular area they’d be best off consulting the relevant HER.
This in turn leads on to the $64,000 question which I was asked at every event around England and Wales (including the final one in London): “What plans are there to keep this database updated?” Without wishing to appear pessimistic, I would always answer “None”. Aside from the logistics and finances of keeping a database as large as this constantly updated, there’s also the fact that this is a very subjective synthesis of a much larger resource. To my mind, the key question is how we make it easier for other researchers to build on this, so that academic synthesis of a period or theme happens on a more regular basis. One of the answers to this is surely access to data, especially the published and non-published written sources. This isn’t really radical, and indeed increased access to data is being explored and recommended by the Historic England Heritage Information Access Strategy. The work of the Roman Rural Settlement project has many lessons to inform these strategies, some of which will form future papers by the project team. Out of curiosity I’ve undertaken my own analysis of the project database, ‘grey literature’ sources (a term I don’t like!) and the OASIS system, but will save that for a separate blog post.
At the post-launch meal I did end up asking the team a rather cheesy question: “which is your favourite record?” The responses were often based around the level of finds, or the relative level of information the site could add to a regional picture. My answer(s) were perhaps a little more prosaic; for example, I really like records such as Swinford Wind Farm (Leicestershire), which has fieldwork reports disseminated via OASIS, and a Museum Accession ID. However, my heart veers towards 42 London Road, Bagshot (Surrey): the site of my very first experience of archaeology as a somewhat geeky 16 year old. The site was never published, and thus it’s great to see it live on in this resource, with a link to the corresponding HER record to (hopefully) allow users to go and explore the wider area, perhaps even to undertake their own research project. To my mind, stimulating further work large and small would be a great legacy of the project.
Back in November (16th-18th), I was lucky enough to be invited to participate in the Cultural Heritage and New Technologies (CHNT) conference in Vienna. As detailed in my excitable post, written in advance of the event, my involvement was to represent the ADS at the session and subsequent round tables hosted by the ARIADNE project on the subject of Digital Preservation. One of the reasons I was so excited was that it was one of the few occasions on which the focus of such sessions was solely on the issues surrounding Digital Preservation: how it’s undertaken, problems, and the challenge of ensuring re-use. It was also the first time, in public at least, that individuals representing organisations undertaking Digital Preservation from across Europe came together and presented as a united front to the wider heritage community. In addition, the event took place at the beautiful Vienna town hall (see below), a fantastic venue.
It was incredibly heartening to hear from European colleagues on their experiences, successes and challenges. I also felt that all the papers in the session – no doubt due to the diligence of co-chairs from DANS, DAI IANUS and the Saxonian State Department for Archaeology – meshed together really well. Although there were common themes, each was unique and had a different tale to tell. Although somewhat biased, at the end of the formal session I came away thinking that I had not only contributed, but had learnt in equal measure. For those interested, IANUS have agreed to host the abstracts and presentations from the session on their website. I’d recommend these to everyone interested in a European-wide approach to the issues of digital archiving.
The first round table followed the formal session, and was listed as an open invitation for delegates to query the archivists in the room about where/when/what/how to archive. Surprisingly, considering the high-profile parallel sessions, the room was packed with an array of people from a variety of backgrounds and countries across Europe. As such, the conversation veered between the extreme poles of the subject matter – for example the basic need for metadata versus adherence to the CIDOC-CRM. Reading between the lines, what I thought the attendance and diverse topics showed was that this type of event was not only useful, but actually essential for archivists and non-archivists alike. Not only to correct misconceptions and to genuinely try and help, but also to alert us to the issues as perceived from the virtual work-face.
After a well-earned rest, and a quick visit to the Christmas markets for a small apfelwein, the next day was a chance for all the archivists to get together for an informal round table on issues affecting their long-term and shorter-term objectives. Issues ranged from the need for accreditation – one of the ADS’ goals in this regard is to learn from DANS’ experience of achieving NESTOR – to file identification and persistent identifiers. In this setting the ADS is perceived as very much the elder statesperson (!) in the room, having been in the business for 20 years now, and it’s a good feeling to be able to pass on to colleagues advice and lessons from our own undertakings. I think it’s important that we continue to do this, not only to be nice (and I like to think we’ve always been approachable!), but also to achieve a longer-term strategic strength. Although we (the ADS) are winning many of the challenges at home in terms of championing the need for consideration of digital archives, there’s always more to be done. When we can also point to equivalents in continental Europe, I feel we only make our cause stronger.
However I’m also conscious that this isn’t just a one-way street and that we’ve still a great deal to learn from our European colleagues. Not only in things like accreditation, but also shared experiences on tools, file formats, metadata standards and internal infrastructure. We often say that Digital Preservation never stands still, so in this regard it’s good to look at what others are doing and reflect on what we could do better. Events such as this – and the international community of archaeologists doing Digital Preservation built in its wake – serve to make us richer in knowledge, and renewed of purpose. Looking forward to the next one!
It’s long been known that the conservation and built heritage sector have not really engaged with OASIS, the ADS and digital archiving in general. We wanted to investigate why and what could be done about this.
The project aimed to:
Establish a state-of-the-sub-sector snapshot of digital archiving practice and awareness
Survey practitioners we have not traditionally engaged with – IHBC and RTPI members, facilities managers, local authority staff, etc.
Conduct outreach through event attendance, a video, a leaflet and a training workshop.
To mark our shared 20th anniversary year, Internet Archaeology and the Archaeology Data Service have combined forces to launch the Open Access Archaeology Fund, with the specific aim of supporting the journal publishing and archiving costs of researchers who have no means of institutional support. We are asking you to support our efforts by pledging a recurring or single gift.
We are grateful for all gifts and, to say thank you, everyone who donates over £25 will receive a token of our appreciation – one of our highly desirable red USB trowels. A limited number of special edition orange and purple trowels are also available for those who make donations of £50–£74.99 (orange) or £75 and over (purple).
Fund allocation will be prioritised to those without means of institutional support, namely early career researchers and independent scholars. As the Fund develops, we will publish the total raised and a list of the articles and archives assisted by your generosity.
Thank you for your support: by giving to the Open Access Archaeology Fund you help to reduce the barriers to open archaeological research and advance knowledge of our shared human past. Donate today – every gift helps!
Next month, the Archaeology Data Service (ADS) are contributing to an exciting session at the CHNT conference in Vienna: Preservation and re-use of digital archaeological research data with open archival information systems. The session is being organised by partners within the ARIADNE consortium, and chaired by members of the Data Archiving and Networked Services (DANS – Netherlands), the Research Data Centre Archaeology and Ancient Studies (DAI IANUS – Germany), the Saxonian State Department for Archaeology (Germany) and the ADS (UK).
The original rationale behind organizing the session was the need to ensure preservation and re-use of the ever-growing corpus of digital data produced through archaeological activity. Put simply, what we are creating must be available for future generations to consult, but also feed back into current research and practice. Accordingly, the focus of the session is on the services and duties of existing repositories and archives, including case studies and experiences of technical considerations such as formats, authenticity/validity and metadata. Participants will also offer wider perspectives on the rationale for curation, how it can be achieved, lessons learned, the relevance of the OAIS-standard and future challenges. Believing that there is no true preservation without re-use, the session also concentrates on dissemination; discussing accessibility, publicity (getting people to re-use data), and novel and creative methods of data publication as demonstrated through case studies.
The speakers are drawn from a range of cultural heritage institutions, representing a mix of established digital archives and current research projects that are investigating archival solutions, thus offering a range of international perspectives on the Session themes. From an ADS point of view, it will be great to meet up with familiar faces but also hear from (and get to know) new projects. In this vein the Session is followed by a Round Table, which will allow for further discussion of topics, as well as allowing those new to digital curation to discover more about the subject.
This inclusive participation, and learning from the experiences of international partners, is a key theme of the ARIADNE project, and personally I’m excited not only to offer a UK perspective but also to learn from my colleagues and to feed back into my day-to-day role at the ADS.
It’s hard to believe, but next week will mark my 10 year anniversary at the ADS. I originally started on a one-year contract to oversee the archiving of key digital outputs produced by English Heritage ALSF projects (with the job title of ALSF Curatorial Officer), but have since stayed on in the role of Digital Archivist, more recently taking over responsibility as the ADS’ Preservation Lead.
The realisation that I’d spent a decade in one organisation initially triggered a Proustian flashback of projects, archives and even files I’d worked on, and thus the idea of a blog was born. I was tempted to call this blog something like “In Search of Lost Time” (Time being a portmanteau of my first name and the initial of my surname), but that seemed perhaps a little floral, as well as erroneous: here at the ADS we never lose anything…
Curious as to what I’d achieved over this period (apart from a sense of satisfaction in safeguarding humanity’s digital heritage), I returned to the ADS Collections Management System (CMS) to query what it was I had worked on. In short, I’ve been responsible for:
1018 accessions (the act of receiving and ingesting data from a depositor)
1 x 3.5 inch floppy disc
298 x CD-ROMs
46 x DVDs
208 x Emails
337 x FTP downloads
12 x HTTP downloads
87 x USB hard drives
30 x USB memory sticks
5523 x Web uploads (via OASIS)
Archiving 377 collections
Updating/adding to a further 169 collections (Journals, collections of OASIS reports etc)
Curated 323,050 accessioned files (resulting in 800,000+ files across our AIPs and DIPs)
Undertaken 4094 processes (e.g. migrations)
Of which 966 processes related to the creation of Preservation PDF/A (12,592 files if you’re curious)
Drunk at least 11,300 cups of tea (a slightly spurious figure based on an average of 5 cups a day × 10 years × (annual working days minus holiday)).
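For anyone checking my arithmetic, the tea estimate can be reproduced in a couple of lines. The 226 working days per year (after holiday is deducted) is my own assumption, chosen to make the sum balance:

```python
# Back-of-envelope reproduction of the tea estimate.
cups_per_day = 5
years = 10
working_days = 226  # annual working days minus holiday (assumed)

total_cups = cups_per_day * years * working_days
print(total_cups)  # 11300
```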
Over that time, and all those cups of tea, there are definitely some projects that stick in my mind as being memorable. So, to commemorate my decade in data, here are my top 10 covering every year I’ve been at the ADS:
One of the first sizeable projects to come through as part of my ALSF work, this was instrumental in building up a strong start to the project. It’s also a useful dataset arising from a modern appraisal of an old rescue excavation.
Although tempted to opt for Gwithian (check out the photos!), I went for this project which was completed in 2008. It’s a nice mixture of reports, data and photos from (to my mind) quite an important site, especially if you’re interested in the dating of pit alignments.
Primarily because I worked on the fieldwork project (look carefully for pictures of a youthful Tim), but also because it was, at the time, the largest archive we held. A detailed archive for a very interesting site.
Although at first appearance this is a somewhat modest archive, it represents a great leap forward. This was the first archive from an agreement between ADS and Southampton Arts and Heritage, whereby digital archives arising from development-led work in the City of Southampton would be passed onto the ADS. We now have several agreements with Local Authorities to perform this role (for example see Worcestershire), and it all started here. As an aside, I often use this archive as an example to show to students as it comprises a compact, well-documented dataset including reports, images and a plan – essential material for anyone working in/researching the city.
The PaMELA database consists of two main parts: a literal digital transcription of Jacobi’s card index (the Jacobi Archive); and a searchable database with typological and chronological keys (the Colonisation of Britain database). I could spend hours browsing this archive!
I expected to put down the Roman Rural Settlement of Britain project, but I won’t consider that finished until the final interface (with access to all data) is finished later this year. So I’ve gone for this project, a rescue of a dataset that had been available on another website, but subsequently removed. The interface has a strong spatial element, and after some thought I moved away from Google Maps and ESRI products (such as ArcGIS Server) to embrace OpenLayers. In the end the hard-learnt lessons (e.g. how to close a polygon?) reaped dividends in my work on the large map for the Roman project.
Before working for the ADS, I’d spent most of my professional life working for Birmingham Archaeology (previously known as BUFAU). That organisation closed in 2012, and a project was subsequently undertaken to ensure that all key physical and digital materials were transferred to a suitable archive. We’re only halfway through the project, but already we have the majority of the c. 2000 reports written over the years, and a selection of digital materials. It’s been good to go back to where I started, and even to archive some of my own (not very good!) reports!
I’ll end the blog there, who knows, I may update this in another 10 years!