ADS Business Process Review

In early 2018, as part of the ADS strategic plan to maintain and develop our world-leading position in digital preservation and Open Access publishing in Archaeology, the ADS management team commissioned a Business Analyst at the University of York (Jamie Holliday) to provide an external, critical, yet friendly review of the work of the ADS and Internet Archaeology. The aim was to identify opportunities to improve our service delivery, processes, management practices and staff development. The review took a mainly qualitative approach, using a balanced scorecard methodology, looking at the ADS from the perspective of:

  • Customers
  • Finances
  • Internal Processes
  • Learning & Growth

The review also commented on more general strategic issues that emerged, including succession planning, achieving clarity of vision and improving our financial position to allow for increased reinvestment. A follow-up review, conducted by the University’s Assistant Director of Information Services and Head of IT Infrastructure, Arthur Clune, focused on ADS Technical Systems. The reports, recommendations and ADS Action Plans were received by the ADS Management Committee in October 2018, although there is ongoing work on charging models.

Staffing News

The most immediate and visible impacts of the review have been some changes to ADS roles and staffing. In September 2018, with the departure of Louisa Matthews to undertake a PhD at the University of Newcastle, we took the opportunity to create a new post, held by Katie Green. Whilst it has the job title of Collections Development Manager, it actually combines aspects of this role with that of her former job as Communications and Access Manager. Other aspects of the former CDM role have been taken on by Ray Moore, our new Archives Manager. Ray is now the first port of call for archive costings, and also oversees the day-to-day work of the archivists. The most recent change is that we have appointed a Deputy Director to oversee operations management: Tim Evans, who joined ADS in 2006 as ALSF Digital Archivist and is currently HERALD project manager, will take up this post from December. Tim will retain responsibility for oversight of HERALD, the OASIS redevelopment project, and will also begin to represent ADS in a broad range of external partnerships. Finally, we hope soon to be looking to appoint at least one Digital Archives Assistant, an entry-level trainee grade for budding archivists.

Watch this space!

Julian Richards

ADS Director

in the dark near the Tannhäuser Gate

“Blade Runner 1982” by Bill Lile. Image shared under a CC BY-NC-ND 2.0 licence.

As it’s World Digital Preservation Day I thought I’d finish the following blog about our work managing the digital objects within our collection. Like most of my blogs (including the much-awaited sequel to Space is the Place), it has languished for a while awaiting a final burst of input. To celebrate WDPD 2018, here we go….


I half-heartedly apologise for the self-indulgent title of this blog, which most readers will know is taken from Rutger Hauer’s speech in the film Blade Runner (apparently he improvised it). Aside from being an unashamed lover of the original film, like Roy Batty in the famous rooftop finale I’ve recently been prone to reflection on the events I’ve witnessed [at the ADS] over the last few years. In all honesty these aren’t quite on a par with “Attack ships on fire off the shoulder of Orion”, but perhaps they are as impressive in their own little way.

This reflection isn’t prompted by any impending doom – that I’m aware of – but rather by the fact that some of my recent work has involved looking at what the ADS has done over the last two decades: for example, looking at the history of OASIS as we move further into the redevelopment project, and revisiting the quantities and types of data we store as we find we’re rapidly filling up file servers. Along with this is a sudden realisation that after so long here I have become part of the furniture (I’ll let the reader decide which part!). However, as colleagues inevitably leave – and although we take the utmost care to document decisions (meeting minutes, copies of old procedural documents etc.) – the institutional memory sometimes becomes somewhat blurred, even taking on mythical status: “We’ve always done it like that”, “That never worked because…”, “So-and-so was working on that when they left”, “A million pounds” and so on.

Short of uploading Julian’s consciousness to an AI, which even with our best efforts we’re still some way off perfecting, there’s a danger of much of this internal history becoming lost (like tears in the rain). Over the past few years I’ve quite enjoyed talking – mainly to peers within the wider Digital Preservation community – about issues, problems and successes at the ADS. Just recently I gave a talk at CAA (UK) in Edinburgh about the twenty-year journey of the ADS, from one of the 5 AHDS centres to a self-sustaining accredited digital archive. The talk itself didn’t have a particularly large audience, perhaps a result of the previous night’s party (the conference as a whole was welcoming and well-organised) or the glittering papers in the parallel session; plus, on this occasion, I think I struggled to get 20 years of history into exactly 20 minutes!

The main thing I really wanted to communicate was quite how far the ADS have come, technically and conceptually, from our beginnings in 1996 to where we are now, and more importantly where we want to be. As a previous blog has covered in massive detail (WITH GRAPHS!), our holdings have grown considerably over the years, with associated problems in finding room to store things. Another issue, as we surge past 2.5 million files, is increasing the capacity for our users (and us!) to find things. As I showed an enraptured audience at CAA, we’ve come a long way from 2006 (when I joined), when we were running 2 or 3 physical servers, to the present day, where we have a dispersed system of nearly 40 virtual machines with a range of software, which in turn support a large array of tools, applications and services that underpin our website(s) and the flows of data we provide to third parties.

The legendary minerva 2. Bought in 2006, with its mighty 8Gb of RAM it was for many years the backbone of the ADS systems. Retired from the University server racks, it now sits quietly in our office. Contrary to the sticker, it does not contain the consciousness of Michael Charno.
A simplified representation of our current systems: 38 virtual machines supporting a range of server software, which in turn supports ADS Applications and Services.

I always think this is an unseen part – to many outsiders – of what the ADS do, and along with the procedures we have for actually being an archive there’s a whole lot of work going on underneath what we make visible to our users. In the talk at CAA I used the common analogy of a swan: what you see is the website; what you don’t see are the feet paddling away underneath. This doesn’t detract from the website of course – a commitment to providing access to data has always been a fundamental part of what and who we are. It’s as frustrating to us as to a user when someone can’t find what they’re looking for, especially when they know it exists. Which is why it is interesting (and I really think it is) to look at how we manage our data, and to make the ‘ADS Swan’ as efficient as possible.

For example, back in the old days (2006) interfaces to data were effectively hard-coded into web pages using the ColdFusion platform (CFML) as an interface between the XHTML and the underlying file server and database. This was OK in its way, although it still required someone either to code links to files into the page, or to generate file listings in the page (or via separate scripts or commands). A common source of the many broken links of this era is simply human error in generating these lists and replicating them in the web page.
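To illustrate the difference (a sketch in Python rather than CFML, with invented file names), the shift described above is from a hand-maintained list of links to a listing generated from the file store itself:

```python
from pathlib import Path

def hardcoded_listing():
    # The old approach: links typed into the page by hand, so a renamed
    # or forgotten file silently became a broken link.
    return ["report_part1.pdf", "report_part2.pdf"]

def generated_listing(directory):
    # Generating the list from the directory itself removes the manual
    # transcription step that caused many of those broken links.
    return sorted(p.name for p in Path(directory).iterdir() if p.is_file())
```

Only the second function stays correct as files are added, removed or renamed – though, as with the generated listings of the time, someone still has to decide which directory to point it at.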

The old way of doing things…

Of course, even at the time my colleagues were aware that this was not the most efficient way we could work, and even the functions of ColdFusion (and its successors Open BlueDragon and Lucee) that generated listings directly in the code still relied on someone setting which directory was needed, and how to handle the results, directly in the page. Not great for when we had to update things… There was also the issue of the information displayed in the page: effectively you came to an archive, scrolled through, and were presented with descriptions that were often little more than the file name. There was also the massive issue of a disconnect between the files and the interface: actual file-level metadata was only stored in the files themselves (e.g. CSV) in the file store. Our Collections Management System (CMS) stored lots of information about the collection, and we knew it had files in it, but not the details. Any fixing, updating, migrating or querying all had to be done by hand, which was fine when we only had a small number of collections but presented problems when scaling up. Effectively, we had to get our files (or objects/datastreams) into some sort of Digital Asset Management System. Cue project SWORD-ARM.

This project is probably deserving of its own essay; suffice to say we investigated using Fedora (Commons, as it later became) as a DAM for storing all the lovely rich technical and thematic metadata we collect, and perhaps most importantly had already collected (we already had several hundred collections, totalling nearly a million files, at this point). In short, an implementation of Fedora to suit our needs was deemed too complicated, with too high a level of subsequent software development and maintenance for us to sustain. At that point – and again for our understanding and needs – if even deleting a record required issuing a ticket to our systems team (the magnificent Michael and prodigious Paul at that point), then we were onto a loser. For our needs, perhaps all we needed was a database and a programming language…

The heroes of this story were undoubtedly Paul Young, Jenny Mitcham, Jo Gilham and Ray Moore, who between them created an extension to our existing CMS: the Object Management System (OMS). The OMS is really too big to explore in much detail here, but its design was based on three overarching principles:

  1. To manage our digital objects in-line with the PREMIS data model
  2. To store accurate and consistent technical file-metadata
  3. To store thematic metadata about the content (what does the file show/do?)

The ambition was, and still is, to have a situation where a user provides much of this information ready-formed, courtesy of an application such as ADS-easy or OASIS. But most important, I believe (and so that this blog doesn’t derail into masses of detail), was the move towards an implementation of the semantic units defined in PREMIS. To explain, consider the shapefiles below.

What’s an object?

In our traditional way of doing things we just had a bunch of files on a server. Here, we have the files in the database but also a way of classifying and grouping them to explain what they are. So for example, a Shapefile has commonly used the dBase IV format (.dbf) for storing attributes, but we also get .dbf files as stand-alone databases. We need to know that this .dbf is part of a larger entity, and should only be “handled” as part of that entity. In this case a Shapefile is normalised to GML (3.2) for preservation, and zipped up for easy dissemination. All of these things are part of the same representation object: we need to keep them together however dispersed they are across servers, associate them with the correct metadata, and plan their future migration accordingly.
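As a hypothetical sketch of that grouping (identifiers and file names invented, not the actual OMS structures), the component files of a Shapefile can be modelled as a single representation object:

```python
# Hypothetical sketch of the PREMIS idea described above (identifiers
# and file names invented, not the actual OMS structures): the component
# files of a Shapefile are handled as one representation object.
representation = {
    "object_id": "example-001",
    "label": "Site boundary (Shapefile)",
    "original": ["boundary.shp", "boundary.shx", "boundary.dbf", "boundary.prj"],
    "preservation": ["boundary.gml"],   # normalised to GML 3.2
    "dissemination": ["boundary.zip"],  # zipped for easy download
}

def all_files(rep):
    # However dispersed these files are across servers, they travel
    # together: any migration plan applies to the whole representation.
    return rep["original"] + rep["preservation"] + rep["dissemination"]
```

However the files are scattered physically, a migration or fixity check then operates on `all_files(representation)` rather than on loose files found on a server.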

And of course this is where we can store all our lovely technical and thematic metadata. For example I know for any object:

  • When it was created
  • What software created it
  • Who created it
  • Who holds copyright
  • Geographic location
  • Its subject (according to archaeological understanding)
  • The file type – according to international standards of classification
  • Its checksum
  • Its content type
  • If it’s part of a larger intellectual entity

And we’re close to also fully recording an object’s life-cycle within our system:

  • When it was accessioned
  • When it was normalized – and the details of this action
  • When it was migrated
  • If it was edited
  • etc etc
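As a minimal sketch of what such a record might look like (field names and values invented for illustration, not the actual OMS schema), the technical metadata and life-cycle events above boil down to a fixity checksum plus a list of dated, PREMIS-style events:

```python
import hashlib
from datetime import date

def checksum(data: bytes) -> str:
    # Fixity: storing a checksum lets later audits detect corruption.
    return hashlib.sha256(data).hexdigest()

# Hypothetical object record combining the technical metadata and the
# life-cycle events listed above (field names are illustrative only).
obj = {
    "checksum": checksum(b"example file content"),
    "content_type": "image",
    "part_of": "collection-123",   # the larger intellectual entity
    "events": [],
}

def record_event(obj, event_type, detail, when=None):
    # Each action on the object (accession, normalisation, migration,
    # edit...) is appended as a dated, PREMIS-style event.
    obj["events"].append({
        "type": event_type,
        "detail": detail,
        "date": (when or date.today()).isoformat(),
    })

record_event(obj, "accession", "received via ADS-easy", date(2018, 11, 29))
record_event(obj, "normalisation", "image normalised for preservation")
```

The point is that the object’s history becomes queryable data rather than institutional memory.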

I’ve deliberately over-simplified a very complicated process there, as I’m running out of words. But suffice to say that the hard work many people (including current colleagues Jenny and Kieron) have put in on developing this system is nearing a stage where the benefits of all this are tantalisingly close.

Now, readers from a Digital Preservation background will understand how that’s essential for how we need to work. The lay reader may well be wondering about the benefit to them. Put simply, this offers the chance to explore our objects independently of their parent collections. For example, when working on the British Institute in Eastern Africa Image Archive (https://doi.org/10.5284/1038987) Ray built a specialised interface for cross-searching all the images. In this case all the searching is done on the metadata for the object representation, so for example:

http://archaeologydataservice.ac.uk/archives/view/object.cfm?object_id=1187702

It’s not too much of a jump to see future versions of the ADS website incorporate cross-collection searching, allowing people quick, intuitive access to the wealth of data we store and perhaps a way to cite the object… Something to aim for in a sequel at least.

Anyway, as always, if you’ve made it this far thanks for reading.

Tim

Wonders of the ADS:

Journey into the archive with our new online gallery.

The Wonders of the ADS is a digital exhibition dedicated to highlighting the outstanding digital data held in the ADS archive.


Carlotta Cammelli

The Wonders of the ADS digital exhibition developed out of a collaborative project with Carlotta Cammelli, a Leeds University MA Art Gallery and Museum Studies student, as part of her Masters dissertation. The project, entitled Unearthing the Archive: Exploring new methods for disseminating archaeological digital data, aimed to develop an innovative online approach to presenting specific digital objects (such as photographs, drawings, documents, videos and 3D data files) from the ADS collections in order to increase public engagement with the data in our archive.

Traditionally the ADS is used by researchers with specific interests in mind. The structure of the ADS into individual archives also means that sometimes interesting material can be buried within the vast quantity of data held by the ADS.

Space is the Place (part I)

“server-racks-clouds_blue_circuit” by Kin Lane. CC BY-SA 2.0

This is the first part of a (much delayed) series of blogs investigating the storage requirements of the ADS. It began way back in late 2016/early 2017, as we started to think about refreshing our off-site storage and I asked myself the very simple question of “how much space do we need?”. As I write, it’s evolving into a much wider study of historic trends in data deposition, and the effects of our current procedures and strategy on the size of our digital holdings. Aware that blogs are supposed to be accessible, I thought I’d break it into smaller and more digestible chunks of commentary, and a lot of time spent at Düsseldorf airport recently for ArchAIDE has meant I’ve been able to finish this piece.

——————-

Here at the ADS we take the long-term integrity and resilience of our data very seriously. Although what most people in archaeology know us for is the website and access to data, it’s the long-term preservation of that data that underpins everything we do. The ADS endeavour to work within a framework conforming to the ISO 14721:2003 specification of a reference model for an Open Archival Information System (OAIS). As you can see in the much-used schematic reproduced below, under the terminologies and concepts of the OAIS model, ‘Archival Storage’ is right at the heart of the operation.

How we achieve this is actually a pretty complicated process, documented in our Preservation Policy; suffice to say it’s far more than simply copying files to a server! However, we shouldn’t discount storage space entirely. Even in the ‘Zettabyte Era’, where cloud-based storage is commonplace and people are used to streaming or downloading files that – 10 years ago – would have been viewed as prohibitive, we still need some sort of space on which to keep our archive.

At the moment we maintain multiple copies of data in order to facilitate disaster recovery – a common and necessary strategy for any organisation that wants to be seen as a Digital Archive rather than simply a place to keep files. Initially, all data is held on the main ADS production server maintained by IT Services at the University of York, which is backed up via daily snapshot; these snapshots are stored for a month, and are furthermore backed up onto tape for 3 months.

In addition to this, all our preservation data is synchronised once a week from the local copy at the University of York to a dedicated off-site store, currently maintained in the machine room of the UK Data Archive at the University of Essex. This repository takes the form of a standalone server behind the University of Essex firewall. In the interests of security, outside access to this server is via an encrypted SSH tunnel from nominated IP addresses. Data is further backed up to tape by the UKDA. Quite simply, if something disastrous happened here in York, our data would still be recoverable.
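The underlying idea of that weekly synchronisation can be sketched in a few lines of Python (a toy stand-in with invented paths, not our production tooling, which runs over the encrypted SSH tunnel described above):

```python
import shutil
from pathlib import Path

def sync(source: Path, mirror: Path) -> None:
    # Copy any file that is missing from the mirror, or newer on the
    # source, preserving directory structure and timestamps. Deletions
    # are deliberately NOT propagated, so an accidental local deletion
    # cannot destroy the off-site copy.
    mirror.mkdir(parents=True, exist_ok=True)
    for src in source.rglob("*"):
        if src.is_file():
            dst = mirror / src.relative_to(source)
            dst.parent.mkdir(parents=True, exist_ok=True)
            if not dst.exists() or dst.stat().st_mtime < src.stat().st_mtime:
                shutil.copy2(src, dst)
```

Comparing timestamps means only changed files cross the wire each week, which matters once the holdings run to terabytes.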

This system has served us well; however, recently a very large archive (laser scanning) was deposited with us. Just in its original form it was just under a quarter of the size of all our other archives combined, and almost filled the available server space at York and Essex. In the short term, getting access to more space is not a problem, as we’re lucky to be working with very helpful colleagues within both organisations. Longer term, however, I think it’s unrealistic simply to keep asking for more space at ad-hoc intervals, and this feeds into a wider debate over the merits of cloud-based solutions (such as Amazon) versus procuring traditional physical storage space (i.e. servers) with a third party. However, I’ll save that dilemma for another blog!

However, regardless of which strategy we use in the future, for business reasons (i.e. any storage with a third party will cost money) it would be good to be able to begin to predict or understand:

  • how much data we may receive in the future;
  • how size varies according to the contents of the deposit;
  • the impact of our collections policy (i.e. how we store the data);
  • the effect of our normalisation and migration strategy.

Thus was the genesis of this blog….

We haven’t always had the capacity to ask these questions. Traditionally we never held information about the files themselves in any kind of database, and any kind of overview was produced via home-brew scripts or command-line tools. In 2008 an abortive attempt to launch an “ADS Big Table”, which held basic details on file type, location and size, was scuppered by the difficulties of importing data by hand (my entry of “Comma Seperated Values” [sic] was a culprit). However, we took a great leap forward with the 3rd iteration of our Collections Management System, which incorporated a schema to record technical file-level metadata for every file we hold, and an application to generate and import this information automatically. As an aside, reaching this point required a great deal of work (thanks Paul!).
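To give a flavour of why this matters (the table and column names below are invented for illustration, not the actual CMS schema), once file-level metadata sits in a database, collection-wide questions become one-line queries:

```python
import sqlite3

# Illustrative sketch only: invented schema, not the actual CMS.
# The point is that file-level metadata in a database turns an
# overview into a query rather than a trawl with scripts.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE file_metadata (
        file_id      INTEGER PRIMARY KEY,
        accession_id INTEGER,
        file_name    TEXT,
        format       TEXT,
        size_bytes   INTEGER
    )
""")
conn.executemany(
    "INSERT INTO file_metadata (accession_id, file_name, format, size_bytes) "
    "VALUES (?, ?, ?, ?)",
    [
        (1, "plan.dxf", "DXF", 48_200),
        (1, "photo.tif", "TIFF", 5_400_000),
        (2, "survey.dxf", "DXF", 71_000),
    ],
)

# e.g. finding every file of a given format across all collections
dxf_files = [row[0] for row in conn.execute(
    "SELECT file_name FROM file_metadata WHERE format = 'DXF' ORDER BY file_name"
)]
```

The same table supports the aggregate queries behind the graphs that follow: counts and total sizes grouped by accession and year.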

As well as aiding management of files (e.g. “where are all our DXF files?”), this means we can run some pretty gnarly queries against the database. For starters, I wanted to see how many deposits of data (Accessions) we received every year, and how big these were:

Number of Accessions (right axis) and combined size in Gb (left axis), ADS 1998-2017

As the graph above shows, over the years we’ve seen an ever-increasing number of Accessions – an Accession being the single act of giving us a batch of files for archiving (note: many collections contain more than one accession). Despite a noticeable dip in 2016, the trend has clearly been for people to give us more stuff, and for the combined size of this to increase. A notable statistic is that we’ve accessioned over 15 Tb in the last 5 years. In total last year (2017), we received just over 3 Terabytes of data, courtesy of over 1400 individual events; compare 2007 (a year after I started work here), when we received c. 700Mb in 176 events. That’s an increase of 364% and 713% respectively over 10 years, and it’s interesting to note the disparity between those two values, which I’ll talk about later. However, at this point the clear message is that we’re working harder than ever in terms of throughput, both in number and size.

Is this to do with the type of Accessions we’re dealing with? Over the years our Collections Policy has changed to reflect a much wider appreciation of data, and community. A breakdown of the Accessions by broad type adds more detail to the picture:

Number of Accessions by broad type, ADS 1998-2017

Aside from showing an interesting (to me at least) historical change in what the ADS takes (the years 1998-2004 are really a few academic research archives and inventory loads for Archsearch), this data also shows how we’ve had to handle the explosion of ‘grey literature’ coming from the OASIS system, and a marked increase in the number of Project Archives since we started taking more development-led work around 2014. The number of Project Archives should, however, come with a caveat, as in recent years these have been inflated by a number of ‘backlog’-type projects that have included a lot of individual accessions under one much larger project, for example:

This isn’t to entirely discount these, just that they could be viewed as exceptional to the main flow of archives coming in through research and development-led work. So without these, the number of archives looks like:

Accessions for Project Archives: all records and with backlog/ALSF/CTRL removed, ADS 1998-2017

So, we can see the ALSF was having an impact in 2006-2011, and that in 2014-2016 Jenny’s work on Ipswich and Exeter, and Ray’s reorganisation of CTRL, were inflating the figures somewhat. What is genuinely startling is that in 2017 this ceases to be the case: we really are taking 400+ ‘live’ Accessions from Project Archives now. How are these getting sent to us? Time for another graph!

Number of Accessions for Project Archives, split by delivery method, ADS 1998-2017

The numbers clearly show that post-2014 we are seeing a lot more small archives being delivered semi-automatically via ADS-easy (limit of 300 files) and OASIS images (currently limited to 150 raster images). When I originally ran this query back in early 2017 it looked like ‘Normal’ deposits (not that there’s anything we could really call normal – a study of that is yet more blogs and graphs!) were dropping off, but 2017 has blown this hypothesis out of the water. What’s behind this? Undoubtedly the influence of Crossrail, which has seen nearly 30 Accessions, but also HLCs, ACCORD, big research projects, and a lot of development-led work sent on physical media or via FTP sites (so perhaps bigger or more complex than could be handled by ADS-easy). Put simply, we really are getting a lot more stuff!

There is one final thing I want to ask myself before signing off: how is this increase in Accessions affecting size? We’ve seen that total size is increasing (3 Tb accessioned in 2017), but is this just a few big archives distorting the picture? Cue the final graphs…

Size (Gb) of Accessions for Journals, Archives and Grey Literature, ADS 1998-2017
Size (Gb) of Accessions from Normal archives, ADS-easy and OASIS images, ADS 1998-2017

I’m somewhat surprised by the first graph, as I hadn’t expected the OASIS Grey Literature to be so high (1.5 Tb), although anecdotes from Jenny and Leontien attest to the size of files increasing as processing packages enable more content to be embedded (another blog to model this?). Aside from this, although the impact of large deposits of Journal scans (uncompressed TIFF) can be seen in most years, particularly 2015, it does seem as though we’re averaging around 1.5 Tb per year for archives. Remember, this is just what we’re being given, before any normalisation for our AIP (what we rely on for migration) and DIP (what we disseminate on the website). And, interestingly enough, the large amount of work we are getting through ADS-easy and OASIS images isn’t having a massive size impact: just under 400Gb combined for the last 3 years of these figures.

—————-

Final thoughts. First off, I’m going to need another blog or two (and more time at airports!) to go deeper into these figures, as I do want to look at average sizes of files according to type, and the impact of our preservation strategy on the size of what we store. However, I’m happy at this stage to reach the following conclusions:

  • Over the last 5 years we’ve Accessioned 15 Tb of data.
  • Even discounting singular backlog/rescue projects and big deposits of journal scans, this does seem to represent a longer-term trend of growth.
  • OASIS reports account for a significant proportion of this amount: over a Tb a year.
  • ADS-easy and OASIS images are having a big impact on how many Accessions we’re getting, but not an equal impact on size.
  • After threatening to fall away, non-automated archives are back! And these account for at least 1.5 Tb per year, even disregarding anomalies.

Right, I’ll finish there. If anyone has read this far, I’m amazed, but thanks!

Tim

ps. Still here? Want to see another graph? I’ve got lots…

Total files Accessioned per year, ADS 1998-2017

Following the closure of Birmingham Archaeology (BUFAU), a project was initiated to identify and secure important born-digital archival material, and latterly to arrange transfer to the ADS. I’ve had the pleasure of archiving this digital material, including images, CAD files, databases and GIS over the last few months. The archives and reports of Birmingham Archaeology can now be accessed from the overview page:  http://archaeologydataservice.ac.uk/archives/view/1959/

A total of 68 BUFAU archives have been released. Below I will highlight some of my favourite archives that I have worked on over the last couple of months.

Connecting Derby

Ahead of the redevelopment of Derby Inner Ring Road, Birmingham Archaeology was commissioned to undertake archaeological fieldwork. This site consisted of several different archaeological investigations, including a watching brief, an evaluation, an excavation and an historic building recording. Stratified archaeological deposits spanned a period from the 11th to the 20th centuries. This archive includes an extensive image gallery, reports, CAD files and GIS.
Continue reading Birmingham Archaeology Digital Archives

New Look Website

The ADS are pleased to announce that the ADS Library will be moving out of its Beta phase and going Live on Tuesday 16th January. Concurrently, the ADS will also be launching a newly designed website. The main aim of the new website design is to make it easier for our users to access our searchable resources. With the launch of the ADS Library, the ADS now provides three main historic environment search tools:

Each of these tools should be used to search for different types of information held by the ADS. Archsearch is for searching metadata records about monuments and historic environment events in the UK. The ADS Archives is the place to search for historic environment research data (such as images, plans, databases) and contains international and UK data. The ADS Library is a bibliographic tool for searching for written records on the historic environment of Britain and Ireland. Where possible, the record will provide a direct link to the original publication or report.

Close up of the drop down menu available on the new website.

In order to make the differences between these search tools clear to users, and to make all three tools easy to find from our main website, we will be introducing a new website menu with drop-down links that enable a user to go straight to each of our search resources. This new drop-down menu can be seen in the image on the right.

Users will also be given the option to access a main search page that will explain the differences between each of the available search options. This page will then allow you to choose which search facility to send your chosen keywords to.


The new ADS search page. Clicking on one of the buttons below the search bar will search the chosen resource.

The ADS has also taken this opportunity to redesign the layout of our website, creating a bold new home page, designed to better highlight our featured collections and news items, while providing links to our new search and deposit pages.

New home page.

Our new Deposit page will also provide clearer links to the different types of data deposit options available to researchers wishing to archive data with the ADS.

New page, highlighting the three different methods of depositing data with the ADS.
New deposit page.


Our new About page provides clear links to our operations policies and details of our governance.

New about page.

The new design will include a Help tab on our menu, with links to frequently asked questions and our contact details, allowing users to troubleshoot problems and get the right help more quickly.

The new design will reduce the number of main tabs in the menu. This means that some of our resources have moved location. For example our Teaching and Learning page will now be found under the Advice tab. However, despite the reduction in the number of main options on the menu, the introduction of the drop-down feature will mean that, in practice, more pages will be directly accessible from the menu than previously. Overall the new design will surface the most important pages of our website better and make our key resources accessible via fewer clicks.

Although the design and structure of the website has changed, and some things may now be found in a different location, very few URLs have changed. Only out-of-date pages have been removed, so bookmarks to specific pages should still work, and Archsearch, the ADS Archives and the ADS Library are still navigated in exactly the same way. If you have any trouble finding resources, please contact help@archaeologydataservice.ac.uk.

We hope you enjoy the new design.


2017 Round-up of Internet Archaeology

It’s been another busy year for Internet Archaeology. One of the reasons I manage to just about stay on top of things is the help of a small number of volunteers who have given up their time to work on a whole range of aspects of journal production, promotion and management. So I gladly namecheck Erica Cooke, Lesley Collett and Hayden Strawbridge.

This lovely infographic was created by Lesley and sums up the 2017 visitors and page views of the journal very nicely. It’s good to know that all that content we work on actually gets read…a lot. And if page loading takes just a few seconds longer on a Tuesday, now you know why!

ADS Goes Live on International Digital Preservation Day

On 30th November 2017 the first ever International Digital Preservation Day will draw together individuals and institutions from across the world to celebrate the collections preserved, the access maintained and the understanding fostered by preserving digital materials.

The aim of the day is to create greater awareness of digital preservation that will translate into a wider understanding which permeates all aspects of society – business, policy making and personal good practice.

To celebrate International Digital Preservation Day ADS staff members will be tweeting about what they are doing, as they do it, for one hour each before passing on to the next staff member. Each staff member will be focusing on a different aspect of our digital preservation work to give as wide an insight into our work as possible. So tune in live with the hashtags #ADSLive and #idpd17 on Twitter or follow our Facebook page for hourly updates. Here is a sneak preview of what to expect and when:

Continue reading ADS Goes Live on International Digital Preservation Day

Meet the #OAFund winner!

To mark the 2017 Open Access Week, we thought it would be a good time to introduce the winner of our first Open Access Archaeology Fund award (see our original announcement here), decided on after much deliberation and consideration by a panel of three independent judges. So…

Meet Chris

Figure 1: Chris with his geophysics equipment. Image credit: C. Whittaker

Chris Whittaker carried out a survey at Breedon on the Hill, a multi-period hilltop site, as part of his undergraduate dissertation at Newcastle University, supervised by Dr Caron Newman. After graduating he worked outside archaeology in the technology sector. However, conscious that his data was potentially at risk, he applied to the fund to help preserve the data and publish his findings. He has since started to study for a research master’s in settlement archaeology at Newcastle University.

The judges felt that Chris’ proposal – Breedon Hill, Leicestershire: an archaeological investigation at the multi-period hilltop site – was “an important site and methodically-collected dataset, which made good use of both Internet Archaeology and ADS, with the data having considerable potential for re-use to inform future fieldwork”.

About Breedon Hill
Breedon Hill, Leicestershire is a scheduled ancient monument. The hilltop was the site of a univallate hillfort present from the Early-Middle Iron Age. From the 7th century AD, a minster church was founded within the hillfort enclosure. Today, approximately two-thirds of the Iron Age rampart, and much of the hillfort interior, have been irretrievably lost due to quarrying (Figure 2). The investigation combined magnetometry and resistivity geophysical surveys, alongside digital terrain models (processed LIDAR data), to contribute to the understanding of the character and development of the hillfort interior and its immediate environment. Very little is known about the different phases of occupation at the hilltop, as previous excavations have primarily focussed on the ramparts, and so Chris’ investigation sought to address this issue.

Figure 2: Breedon Hill Quarry. Taken from http://www.geograph.org.uk/p/4597198 ©Anthony Parkes and licensed for reuse under creativecommons.org/licenses/by-sa/2.0

The results of Chris’ geophysical survey reveal several phases of roundhouses and post-hole built structures, as well as several potential associated enclosures, in the south-eastern part of the hillfort interior. These will be published as part of a future open access article in Internet Archaeology and will link to a related digital archive deposited with the Archaeology Data Service. We are looking forward to working with Chris in the coming months.

The church at Breedon in relation to what remains of the western rampart. Image credit: C. Whittaker

Chris said “The work was undertaken while I was an undergraduate student, firstly as part of an independent summer research programme (processing the LIDAR data), and secondly as part of an undergraduate dissertation (undertaking the geophysical survey). Publisher or institutional paywalls are often barriers for local researchers to study the world around them. And I know from personal experience that projects such as the digitisation of volumes of the Derbyshire Archaeological Journal, preserved with the ADS, are of great benefit to local and school-level research alike. From a research perspective [open access] offers many opportunities for colleagues from different backgrounds to build on and potentially refine the resources preserved.”

And now, we start all over again…
As you know, the Open Access Archaeology Fund is made up of donations, set aside to support the digital archiving and publication costs of those researchers for whom funding is simply not available, despite the quality of their research, and whose digital data is therefore potentially at greater risk.

Thank you to everyone for your support for our #OAFund which is now being used to support the open access dissemination of Chris’ work. Of course, in making the first award, we now need to start all over again to raise sufficient funds for the next round to help more early career and independent researchers like him. So please consider donating today and help to reduce the barriers to open archaeological research and advance knowledge of our shared human past.

https://www.yorkspace.net/giving/donate/archaeology-fund

We want to send out lots more of our little USB trowels just like last year and we have an extra special gift for everyone who sets up a recurring monthly or annual gift!

Open Access Archaeology Fund ready to make its first award!

Nine months ago, we launched our Open Access Archaeology Fund. We have sent our little USB trowels all over the globe by way of a ‘thank you’ and we have been thrilled with everyone’s generosity, not least in such austere times.

So, it makes us even happier to say that sufficient funds have now been accrued and we are in a position to make our first award to cover the costs of an unfunded proposed archive or article. (Full details of eligibility can be found here.)

So if you, or someone you know, has already submitted an article proposal or approached ADS about an archive for which you have no funding, then you can apply to the fund today.

Have you donated yet?
The successful application will likely deplete the fund substantially, but we did not want to delay making the first award – it is infinitely preferable that the benefits of the fund can be fast and tangible. However, we need more donations to do it all again in six months’ time!

Every donation you make helps to ensure that more archaeological research is open and accessible.

Donate today