Category Archives: Digital Humanities

Sonic Visualisation

This sonic visualisation involved a few stages. First, original music was created using an M-Audio Axiom AIR Mini 32 MIDI-controller keyboard, recorded with the Ignite software package and exported as a .wav file. This .wav file was then imported into a downloadable edition of Sonic Visualiser, an audio-analysis tool created by the Centre for Digital Music at Queen Mary, University of London. Sonic Visualiser converted the .wav file into a spectrogram and exported it as a .png file. Information on the nature of spectrograms can be found online.

Below is a copy of the .png file. To view it at its proper scale, right-click the image, save it and then open it in an image viewer such as Paint, where it can be scrolled as a single movable image.

The .png file is a static image without sound. Sonic Visualiser also creates a .sv session file that can be replayed within the software, but this format is not exportable. Creating a video of the playback therefore required filming the screen with a camera, in this case an Olympus SZ-14.

The video below is a slightly cropped version of the visualisation (some bass-frequency notes appear below the horizontal axis). It features green, rather than white, spectrogram effects. As the camera could only record the sound from a distance, there was a significant drop in both the quality and the volume of the audio (it may be best to play it back at a high volume).

The video file below is a .wmv version of the original .mp4, converted using WinFF to reduce its file size sufficiently for uploading.

Spectrograms record the frequency, loudness and duration of every sound. The louder the sound, the darker the colour. Notes sound when the image reaches the left vertical axis.
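For readers curious about how such an image is computed, the transformation can be sketched in a few lines of Python. This is a minimal illustration using NumPy; the window size and hop length are arbitrary choices for the sketch, not the settings Sonic Visualiser actually uses:

```python
import numpy as np

def spectrogram(signal, window_size=256, hop=128):
    """Short-time Fourier analysis: slice the signal into overlapping
    windows and take the magnitude of each window's FFT. Rows are
    frequency bins, columns are moments in time, values are loudness."""
    window = np.hanning(window_size)
    frames = [
        np.abs(np.fft.rfft(signal[start:start + window_size] * window))
        for start in range(0, len(signal) - window_size + 1, hop)
    ]
    # Transpose so that time runs along the horizontal axis,
    # as in the Sonic Visualiser image.
    return np.array(frames).T

# One second of a 440 Hz tone sampled at 8 kHz
sample_rate = 8000
t = np.arange(sample_rate) / sample_rate
tone = np.sin(2 * np.pi * 440 * t)

spec = spectrogram(tone)
# The brightest row sits at bin 14, i.e. 14 * (8000 / 256) = 437.5 Hz,
# the closest frequency bin to the 440 Hz tone.
```

A plotting library or image viewer would then map these magnitudes to colour intensity, exactly as the exported .png does.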


Satellite Mapping and ‘Citizen Science’


The OpenStreetMap Foundation was founded in England in 2006 as a non-profit organisation to generate free and reusable geographical data for geospatial analysis; information that the foundation has described as “raw data” that “can be used to make a map, but it is not [itself] a map”. Modern cartography aspires to be an exact science, yet the development of satellite imagery has highlighted how much of the globe has yet to be accurately mapped. Can the OpenStreetMap initiative serve as a solution to this challenge?

Widespread use of satellite technology (not least through smartphones) has encouraged “crowdsourcing” initiatives whereby the general public can volunteer information via the Internet. One such initiative in the field of cartography is the Humanitarian OpenStreetMap Team (HOSM). This is a voluntary organisation registered under US law that seeks to map areas that have been designated by the Clinton Health Access Initiative as requiring medical facilities but have not yet been properly mapped.

The link between OpenStreetMap (which neither stores nor offers satellite imagery) and HOSM (which is hosted by Imperial College London) is DigitalGlobe, a commercial American company that provides both the satellite imagery and the map-editing software that enables the general public to volunteer information on HOSM. A recent initiative operating on the same principle is GlobalXplorer, an archaeological mapping project (supported by National Geographic and NASA) that uses satellite imagery. However, while the latter project can allow room for error without any potentially serious consequences, is the same true of cartography for humanitarian purposes?

The possibility of error in even highly professional mapping projects is perhaps highlighted by the story on Wikipedia regarding the US Department of Defense’s National Geospatial-Intelligence Agency (NGA). It also works with DigitalGlobe (the provider of images for Google Earth and Google Maps), as well as with Google and Microsoft, but such combined expertise did not prevent a US Navy ship from running aground in 2013 after relying on a digital map that misplaced a reef by eight miles. Whether true or not, this story surely illustrates that inaccurate information is not only counterproductive but also potentially dangerous when it comes to mapping. Is this a potential problem with cartography for humanitarian purposes? As the author is not a smartphone user, the app-based MapSwipe service was not a practical means of examining this question. It was therefore decided to focus exclusively on Humanitarian OpenStreetMap by creating an account and examining the project in closer detail.

Humanitarian OpenStreetMap

The first thing one discovers on signing up for a Humanitarian OpenStreetMap account is that its default map editor (the iD editor) is remarkably easy to use. It offers three tools for editing a map: points, to indicate places; lines, to indicate roads or paths; and areas, to indicate the outline of buildings, fields or any other large objects. As a case study, I took part in the project entitled “Jigawa State Road Network Mapping for Vaccine Delivery Routing”, which is based on satellite images of Nigeria. As in the GlobalXplorer project, these images are subdivided into grids of ‘tiles’ so that users can focus more closely on a particular area, working on one tile at a time. At first, I found this a novel procedure. By using the line-marking facility, I was able to indicate paths that connected various villages, such as in the screenshot below:

On another tile I worked to highlight alternative routes between villages, such as in the example below:

On another tile, I worked to try to link routes. At this stage, however, doubts arose in my mind as to the value of the process. I tagged all the routes that I identified as ‘paths’ in order to indicate that they did not appear to be gravel roads. As one can see from the screenshot below, however, there were surrounding features that I could not decide whether it would be correct to mark as paths. Were these truly safe paths, free from any potential obstacles and on entirely solid ground? And, if not, could I be absolutely sure that those paths I did mark were?
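The data model behind those three editing tools is worth pausing over. The following is a hedged sketch in Python: the node coordinates and ids are invented for illustration, and only the node/way/tag structure mirrors OpenStreetMap's published model:

```python
# OpenStreetMap represents edits with two basic elements:
# nodes (points with coordinates) and ways (ordered lists of node ids).
nodes = {
    1: {"lat": 12.571, "lon": 9.341},
    2: {"lat": 12.574, "lon": 9.347},
    3: {"lat": 12.578, "lon": 9.350},
}

# A 'line' drawn in the iD editor becomes an open way; tagging it
# highway=path records the judgement that it is a path, not a road.
path = {"nodes": [1, 2, 3], "tags": {"highway": "path"}}

# An 'area' (e.g. a building outline) is simply a way whose
# first and last node ids coincide.
building = {"nodes": [1, 2, 3, 1], "tags": {"building": "yes"}}

def is_area(way):
    """A way is treated as an area when it is closed on itself."""
    return len(way["nodes"]) > 2 and way["nodes"][0] == way["nodes"][-1]

print(is_area(path))      # an open way: a path between villages
print(is_area(building))  # a closed way: an outline
```

Notice that the model captures geometry and tags only; nothing in it records whether the ground is solid or the path passable, which is precisely the gap my doubts concerned.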

Inspection of marks created by other users, as well as a re-examination of the instructional video about using the iD editor provided by Humanitarian OpenStreetMap itself, highlighted the frequency of inaccurate markings. Even though all markings are supposed to be peer-reviewed in time, the peer-reviewer who created the onsite video judged that these inaccuracies were not really all that important. This surprised me.

Can I tell enough from a satellite image, such as the one above, about the quality of the land in question to make an informed judgment that a route is safe? The Humanitarian OpenStreetMap site gave absolutely no information or guidelines to answer this question. Such doubts led me to examine the more recent GlobalXplorer project for useful guidance on how best to study satellite imagery. This was no more encouraging. A six-minute video ended with the announcement that potential users, having watched it, were now “half-way through your training as a space archaeologist”. This is despite the fact that the video announced (or perhaps one should say “admitted”) that it is very difficult to tell the difference between piles of dirt and archaeological digs from satellite imagery. The tutorial provided me with only two useful tips. First, perfect linear markings are not generally evident in nature and so can often be used to identify anthropogenic, or man-made, features in a landscape. Second, scale bars can potentially be used to identify the size of objects, although following this advice would seem to require zooming in to a far finer scale than the tiles provided to users allow.

Citizen Science and Participatory Culture

What did I learn from the experience of engaging with Humanitarian OpenStreetMap (HOSM)? The notion of freely available satellite imagery being used for good seems particularly positive. It reflects the idea behind the ‘citizen science’ movement: that technology can help address social challenges more effectively, not least by allowing scientists and the general public to work together, through workshops or other means. Be that as it may, my assessment of the implications of what I contributed to HOSM did not leave me with a comfortable feeling. In particular, I would shudder to think that a map I helped to create could in any way be inadvertently responsible for an ambulance failing to reach its destination. To contribute any more to such a project therefore seemed almost irresponsible. Another unappealing feature was the contributor terms for OpenStreetMap, which one must accept before being allowed to use HOSM. These include strange “miscellaneous” provisions suggesting that the legal parameters of the project are not very clearly defined, which may or may not be a common feature of crowdsourced projects with an intentionally open-ended and international, or “global”, user base. I wonder whether Lawrence Lessig has turned his legal mind to this particular phenomenon?

My main conclusion from this exercise was that there is a significant difference between research projects that use mapping as a data-visualisation tool (a recent example being an attempt by a Brazilian archivist to map all the world’s archival institutions) and projects in which more specialist geospatial skills, such as those of a geographer, are an evident requirement. The creator of GlobalXplorer, Sarah Parcak, has also written a specialist study, Satellite Remote Sensing for Archaeology, on how to use light-spectrum analysis techniques in studying satellite imagery in order to detect what lies both on and beneath the surface of the earth. I therefore find it a little surprising that she can feel confident about the entirely random users of her crowdsourced GlobalXplorer project, which is based on the same simplistic mapping technique as HOSM. The great possibility of inaccurate information being mapped is a reason why I am inclined to believe that cartography must be “left to the professionals”; namely, geographers or those with specialist training in the use of Geographical Information Systems (GIS). My past readings on the use of GIS for historical studies served only to remind me that this is a specialist field that I am certainly not well qualified to pursue without much more extensive study.

The ‘citizen science’ movement has some notable patrons, including University College London and its ExCiteS project, which is associated with Nominet Trust (described as ‘the UK’s only dedicated TechforGood funder’), and its most visible faces in the United States may be the Zooniverse and Open Genomics Engine projects. I was certainly impressed by Sarah Parcak’s award-winning TED (Technology, Entertainment & Design) lecture about how satellite light-spectrum analysis of the earth may greatly expand the possibilities of archaeological discovery in the future. However, it seems to me that the implications of both her research and her methods are likely to remain within her specialised academic field, and that her enthusiasm for the idea that a twenty-first-century citizen-science movement will ‘democratise the process of archaeological discovery’ via smartphone usage essentially rests on an assumption that all big data generated is inherently good data, effectively ignoring the fact that inaccurate data amounts to no real data at all. Perhaps user-generated content for all mapping projects may best originate either with authors on location or with those who have direct knowledge of a location; the latter being a trait of the recent MapLesotho project.

The value of XML and HTML in society today: a short analysis and reflection


XML was originally developed by the World Wide Web Consortium (W3C) to overcome the limitations of HTML, the markup language for web-page content. XML owes its name as an ‘extensible markup language’ to the fact that it can be used for a great variety of purposes because of the much greater freedom it allows, compared to HTML, to design markup terms for different purposes. Its uses are therefore many. It has been described as a basis for the ‘semantic web’, or the creation of a web of ‘linked data’, through the development of varied schemas and customised markup vocabularies (or ‘languages’).(1) It has been applied frequently for the purposes of data exchange and information management. Its usability for the former is so great that XML has become the basis for ‘most electronic commerce applications’, and this has long been its ‘most popular’ usage.(2) However, XML has also been of value in the field of information management and, to some extent, publishing.

In the field of library and information studies, one good example of its usage has been the creation of the Encoded Archival Description (EAD) standard to better link the contents of various marked-up archival catalogues, reflecting the intrinsic value of XML to the development of useful metadata standards.(3) A key feature of XML, as of all good standards, is that it is non-proprietary in nature: its usage is not dependent on particular commercial software. Indeed, an XML document can be created, read and shared offline as well as online. This trait might be said to reflect the fact that ‘a formative influence’ on the creation of XML was the pre-existing Text Encoding Initiative (TEI), which has been called ‘the de facto standard for literary computing’ for the past few decades.(4) Its creation was motivated by the desire to create digital scholarly editions of texts that can be preserved perpetually.

One schema mechanism shared by XML and TEI is the Document Type Definition (DTD), which originated with SGML and defines the structure of particular document types. A distinction should be drawn here: an XML document is ‘well formed’ when it obeys XML’s syntax rules (for instance, every opening tag has a matching closing tag), whereas it is ‘valid’ when it also conforms to a declared schema such as a DTD; well-formed XML and a valid TEI document are therefore distinct entities. The schema adopted by the TEI programme is closer to the International Organisation for Standardisation (ISO) standard RELAX NG than to the XML Schema defined by the W3C, and while the TEI guidelines for the creation of suitable elements in a text encoding are remarkably extensive, they are also very specific.(5) In total, there are 503 defined elements and 210 attributes, organised into 21 modules, within the TEI Guidelines. However, this was simplified during 1995 into a ‘TEI Lite’ edition of the full TEI encoding schema, consisting of only 145 elements. TEI Lite has been judged to ‘meet the needs of 90% of the TEI community 90% of the time’.(6)
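The well-formed/valid distinction can be demonstrated with Python's standard library, which can check well-formedness but not validity. The TEI-flavoured fragment below is illustrative only and is not validated against the real TEI schema:

```python
import xml.etree.ElementTree as ET

# A minimal TEI-style fragment (illustrative, not schema-validated):
doc = """<TEI>
  <teiHeader>
    <fileDesc><titleStmt><title>Sample</title></titleStmt></fileDesc>
  </teiHeader>
  <text><body><p>Hello, <hi rend="italic">world</hi>.</p></body></text>
</TEI>"""

def is_well_formed(xml_string):
    """Well-formedness check only: every open tag must have a matching
    close tag, attributes must be quoted, and so on. Validity against
    the TEI schema (RELAX NG) would require an external validator such
    as Jing; ElementTree cannot check that."""
    try:
        ET.fromstring(xml_string)
        return True
    except ET.ParseError:
        return False

print(is_well_formed(doc))            # the fragment is well formed
print(is_well_formed("<p><hi></p>"))  # mismatched tags: not well formed
```

A document can thus pass this check while still failing TEI validation, which is exactly why the two terms name distinct things.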

Although ‘XML exists because HTML was successful’,(7) the two play different roles. XML is usually used for the back-end processes of data management (making it particularly useful for the maintenance of very large websites, such as online archives and commercial ventures), whereas HTML might be described as the markup language used for the ‘front page’ presentation of information online. Indeed, the very existence of HTML is fundamentally tied to the development of the Internet as a media and communications tool,(8) which has made an impact on society comparable to that made by the development of mass print journalism in the mid-nineteenth century or of television in the mid-twentieth century.(9)

The usage of HTML has transcended two limitations of traditional print media (being bound to a physical format, with its associated costs of production) precisely because an HTML file, or ‘document’, is essentially a computer file that can be viewed remotely using a web browser. The very meaning of the acronym HTML (Hypertext Markup Language) reflects the fact that hypertext is the technology that allows for the creation of links on the web, arguably the most important feature of HTML. It is this process that enables HTML-based projects to present, or link, multimedia content (such as audio and visual content in addition to text) at a single location, or to link the locations of various computer files on different servers by means of Uniform Resource Locators (URLs).(10)
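This linking mechanism is easy to demonstrate with Python's standard library: extracting the URLs from the anchor (`<a>`) tags of an HTML document. The page below is a made-up example:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag: the hypertext links that
    connect one HTML document to resources identified by URLs."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs
                              if name == "href")

page = """<html><body>
  <p>See the <a href="https://www.w3.org/">W3C</a> and
  <a href="https://tei-c.org/">TEI</a> sites.</p>
</body></html>"""

extractor = LinkExtractor()
extractor.feed(page)
print(extractor.links)  # ['https://www.w3.org/', 'https://tei-c.org/']
```

Each extracted URL is itself the address of another document on another server, which is all that the ‘web’ of the World Wide Web amounts to.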

Like many computer files, an HTML file, or ‘document’, is alterable and versatile. It can be combined with other technologies (including Cascading Style Sheets, or CSS) to enhance its text-formatting, or presentation, options. Its functionality can be enhanced by the use of Hypertext Preprocessor (PHP) code, which can turn HTML files, or web pages, into ‘dynamic pages’ processed by means of Relational Database Management Systems (RDBMS), facilitating the creation of ‘big data’ from website content.(11) The possibility of altering HTML files, or ‘web pages’, is what has created the idea of interactive, as opposed to static, websites (a development first nicknamed ‘Web 2.0’). The options that exist for defining their functionality are also what enable them to be ‘responsive’: they can be designed to be presented differently depending on the device on which they are displayed. They can also be made searchable online by embedding ‘meta[data] tags’, or associated keywords, into the documents.



The centrality of XML to commercial transactions in the business world is undoubtedly the best, or most valuable, example of the use of text encoding within society today. Citing specific examples is not practicable because the schemas used in various commercial transaction programmes are necessarily confidential in order to protect their integrity. If this is the reality of the world of data exchange, what can we say in conclusion about text encoding in the world of publishing?

Like any language, the value of a markup language is only as good as the uses to which it is applied. Markup has been defined as ‘any means of making explicit an interpretation of a text’, while a knowledge of markup techniques has been described as ‘a core competence of digital humanities’, so much so that text encoding (including TEI) ‘should be a central plank’ of digital humanities curricula. This is because text encoding creates ‘the foundation for almost any use of computers in the humanities’.(12) Effective practice depends on the existence of effective standards. This is why the creation of schemas and standards for the presentation, processing and preservation of literary documents in digital format through the TEI is undoubtedly important. However, an ability to use HTML for web design and XML for information management is also valuable. As a practice, digital humanities is related to information and library studies and archival science. It is also a scholarly discipline in the sense that it exists to encourage all students of the humanities not only to become literate in the use of text-encoding techniques but also to realise their value in both pursuing research questions and presenting research answers. In so far as this technological reorientation takes place, scholars within the humanities may be said to be following what has already occurred in the worlds of government and business in terms of the effective management and presentation of information (a.k.a. data) so that it can be more readily processed with a specific purpose in mind.


(2) Benoit Marchal, XML by Example (Indianapolis, 2000), 2 (quote), 6-7


(4) Julianne Nyhan, ‘Text encoding and scholarly digital editions’, in C. Warwick, M. Terras and J. Nyhan (eds), Digital Humanities in Practice (London, 2012), 117 (quote)



(7) Benoit Marchal, XML by Example (Indianapolis, 2000), 7 (quote)

(8) Lee M. Cottrell, HTML and XHTML Demystified (New York, 2011), chapter 1


(10) Lee M. Cottrell, HTML and XHTML Demystified (New York, 2011), 4


(12) C. Warwick, M. Terras and J. Nyhan (eds), Digital Humanities in Practice (London, 2012), 121


Electronic Ink and Online “Presence”

The second coming of the cloud of unknowing?

It has often been said that people are rather too fond of using the word “revolution”. Literally, it simply means the turning of a wheel. The degree to which the term is ascribed, and then “un-scribed”, to various social developments may indicate that the wheel metaphor is still the most apt use of the term.

In popular debates about the Internet it is often suggested that a “revolution” is at work. This is sometimes associated with the idea of “connected” or “disconnected” societies (a.k.a. “tribes”). It is also sometimes associated with the phenomenon of cloud storage of data, a process recently documented in an RTÉ programme entitled “Cloud Control: who controls your data?” In academic debates about the Internet it is more common to refer to the idea of “a revolution in print”; namely, the reinvention, or revitalisation, of the printing trade in previously unimagined ways in recent times.

Consider the cost of printing any publication, from a book or magazine you admire to the glossy junk mail that comes through your letterbox. This involves the cost not just of a single copy but of literally thousands, or in some cases millions, of copies that have to be printed, distributed, marketed and “consumed”. Then consider how easy it is to “create a web page” without any printing or distribution costs whatsoever. That is a truly remarkable development and an illustration of how a change in technology can have a knock-on effect on the purely human dynamic of how people communicate. Is this not a revolutionary change?

While the business phenomenon of “big data” compiled from social media has acquired a lot of news coverage of late (the idea of “digital footprints”), less attention has been given to the phenomenon of “electronic ink”. This goes beyond the mere possibility of creating digital texts (e.g. with a word processor). It also involves the question of publishing and the availability of new text-reading technologies such as ebook software (whose usability, or readability, is not dependent on the backlit screen of the traditional computer monitor). The world of “electronic ink” has changed not only the media of publishing but also its possibilities. Its social impact may prove as far-reaching as the development of the trade of journalism through cheaper means of printing and the propagation of newspapers in the nineteenth century. In itself, this development may conjure up a fascinating picture.

In popular fictional movies, a familiar stereotype since the days of Humphrey Bogart is the world-weary private eye, or even “spy”, who witnesses the ugly side of life to the nth degree and yet just might survive the ordeal with his or her wits, or health (mental or physical), intact. This is the shady world of “film noir”. In the age of the arrival of the journalist, over half a century earlier, a similar stereotype existed: the world-weary “pen for hire” who occupies a dubious place between the world of barroom gossip and the courthouse and just might survive long enough to tell the tale. Like the lowly civil-servant heroes in the central and eastern European fictions of Kafka or Dostoyevsky, these pens-for-hire were often very well-educated people burdened by a sense of low pay, lack of opportunities and under-appreciated talents; but, just like the author of a novel, they were prepared to “expose” their inner state of mind in print, even if this did not necessarily occur without a price.

Can we make a similar comparison with individuals today who decide to publish their thoughts, come what may, online? What is the meaning of having an “online presence”? A recent survey found that 40% of twenty-year-old Americans thought that they had “a good life online”. This is a piece of statistical data. Is it factually correct? In the light of the fact that nobody can breathe online and, therefore, nobody can be said to be alive online, one must certainly say that it is literally incorrect. Those youths who answered the survey in the affirmative were evidently testifying to the fact that they are used to communicating online. If this was not done over the phone line (a technology that has been with us since the 1870s), it was evidently done in print (a technology that has been with us for about 3,000 years).

Why is an online form of communication equated in people’s minds with a living presence? Consider any piece of text. If you held a volume of the complete works of William Shakespeare in your hands, you would not possess the thoughts, feelings or actualities of whatever William Shakespeare’s life was some 400 years ago: you would have only a reproduction of those of his momentary thoughts that he decided to put in print, and know nothing more of his life. The same is true if you possess the text of a newspaper article by a journalist or indeed the text of some author’s Internet blog. One cannot breathe through a printed text any more than one can breathe online. You could try, but you would find in a matter of seconds that it is simply not possible. So when people speak of “existing online” they are literally referring to having “printed [text] online”. This may be something as simple as a single “tweet” or a comment on YouTube.

How many writings have been committed to print by authors who decided afterwards that they were not really worth writing in the first place? It is impossible to tell, but it is quite probably a very large number. Initially, journalists wrote anonymously; by-lines, in which the author’s name appears, are a more recent development. It is often suggested that the ultimate online edition of collective writing is Wikipedia, an encyclopaedic development that has been the subject of scholarly analysis in itself (such as “Wikipedia: community or social movement?”, Interface, November 2009). While Wikipedia can contact its contributors, its contents are published anonymously. I wonder, would it attract as many contributors if every article carried a by-line?

When people toy with catchphrases such as “the Internet has made the whole world urban” (by virtue of being linked through the written word) or “everyone is naked online” (both of which, I believe, originated with the New York Times), one must realise that these catchphrases simply reflect the mental impact that the process of writing can have on an individual. Writing may be (and more often than not is) simply the documenting of a momentary thought, but the moment it appears in print it is often interpreted by a reader as a direct insight into another’s state of mind, owing to the potential permanence of print that can be re-read many times. There is a reason why “putting something in writing” is essential to legal practice. It is also a reason why most people instinctively avoid writing (who likes to have their own words thrown back at them at a later date?), and it is partly why those who in the past chose to be writers were often as guarded about the process as any would-be artist, for they knew that what they produced was but one aspect of themselves and should not (yet probably would) be interpreted by others as a reflection of their entire lives.

Having a (written) online presence is not, therefore, essentially any different from being a published author in any age. Many are attracted to online publishing as a venting space, in a similar manner to those who had their own printing press in the past (should they have feared that “All The King’s Men” would come crashing through their doors at any moment to smash up the printing press or toss all their printed flyers or pamphlets into the fireplace?). Writing and reading (literacy) have often been typified as an instrument of indestructible power (“in the beginning was the Word”), and they are arguably essential to the maintenance of effective human communication. So what is occurring through the “electronic ink revolution” is less the creation of new online “living spaces” than a very significant change in the culture of publication. It seems that the academic world was fairly quick to “tune in” to this reality; indeed, interesting papers from a Europe-wide conference on the theme of “the changing cultures of publication” can be found online.

Online Learning

Woman's hand holding a red phone

Does the availability of online learning tools encourage more collaborative approaches to education? Over a decade ago, articles within The Journal of Interactive Online Learning emphasised the value of using online tools to enhance teachers’ productivity and professional satisfaction, enabling them to have “a voice” beyond the classroom. More recently, students have been encouraged to use collaborative writing tools, such as Google Docs, to enable them to reflect, in the light of each other’s experience, on their potential roles as creators of knowledge, or portfolios, from the earliest stage of their studies.

It has been suggested that in creating such portfolios, individuals can organise their learning according to their own “Personal Learning Environment”, based on their personal choice of information-management tools. Furthermore, if an individual student becomes accustomed from the earliest stages of their studies to applying their learning to the idea of creating a portfolio, it can potentially show an employer that they have learned to apply their acquired skills and knowledge in different contexts, as well as to work within teams, far more than mere proof of a qualification could ever do. From this premise, it has been suggested that the use of social-media tools (including customised Elgg tools) may also become a basis for the educational process to become more “progressive”, owing to its receptivity to utilising any or all forums for educational purposes.

The value of collaboration is certainly not a new idea: the saying that “two heads are better than one” is as old as society itself. What is perhaps new within this debate about online learning tools is the attempt to redefine the purpose of a “liberal arts education”. While this has connotations for all forms of educational institutions, it is a debate perhaps most related to the long-term educational question (asked since universities were first created in medieval times) of “what is the idea of a university?”. On the surface, redefining the parameters of this debate in the light of contemporary society’s needs may seem an entirely positive development. A traditional argument for the value of a liberal arts education is that it enables individuals to develop fully their understanding of what it means to be human and, from this basis, to acquire the skills necessary to contribute later, as free individuals, to the development and collective wisdom of adult society. This very notion of collective wisdom may seem to carry connotations of religious precepts, or a preoccupation with the ideas of wisdom and discernment as moral concepts. To many contemporary eyes, however, this can seem too much of an “ivory tower” idea, as if education were a process designed to satisfy only the individual, not the community. And yet what is a community but a collection of free individuals?

What is the relationship between the collation of knowledge and collective wisdom? From a business perspective, the value in the collation of knowledge is the aggregation of data sets to enable more effective, or productive, business analysis tools. The “big data sets” generated from online social media exist to serve this purpose and this can only be a good thing, according to a business logic. But does this data embody human wisdom if it does not take account of the free agent that is the individual? Is the traditional “liberal arts education” idea still not the ultimate guarantor of individuals’ independence of thought, even if it be still conceived partly in the light of the traditional contrast between Greco-Roman (abstract logic) and Judeo-Christian (moral logic and wisdom) world views or traditions of thought?

Just as coexistence is an essential feature of life, there is no reason why different models of education cannot, or should not, also coexist. Online research tools and presentations can illustrate the past worlds of Greco-Roman and Byzantine civilisations just as much as the present, and they should also be able to highlight the common denominators of life in every age. If there is perhaps a naïve sense, or even fear, in some quarters that new forums for the dissemination of information can serve to dissolve meaning, this may be but the result of a sense that it is the business world, rather than the humanitarian intellect, that is setting the agenda for such developments. However, a flip side to this situation is that processes cannot be assessed fully until all their results are produced.

It may be true that the business world has played an ever-growing role in the professionalisation of society ever since the nineteenth century. It may also be true that it is only in the present age of ubiquitous online information that the results of this development are becoming evident to all. The forums for online learning may be considered an illustration that the processes of business analysis of information and of humanist enquiry, also based on information, have essentially always coexisted and are far more mutually beneficial than may be evident at first glance. The old saying that it is by the fruit that we can recognise the value of the endeavour has myriad connotations that can be as liberating for the mind in the present as in any age. The process of mind mapping may simply have changed its outward form.

Is critical discourse different in the “Digital Humanities” than it is in the humanities?

Can history be freed from ideology?

Alan Liu, a Californian professor of English literature, has raised the question of whether there is any real cultural criticism within the digital humanities. He notes that a professional motivation for the advocacy of digital humanities by academics is to compensate for the decline of government funding for the arts and humanities in general. Partly for this reason, he has questioned whether the criteria for critical discourses in the digital humanities and the humanities actually differ. For instance, does hyperlinking really constitute a transcending of traditional narrative structures, or is it simply a new form of footnote? Does the growing popularity of the term “data” in humanities scholarship reflect a methodological shift, or is it a purely linguistic shift in emphasis?

Responding to this debate, historian Fred Gibbs has suggested that a peer-review rubric for digital humanities scholarship could evolve based on the four principles of “transparency, reusability, data, design”. Gibbs’ idea would seem to directly mirror what is taking place in governments’ emphasis on e-government. According to this model, government records should be made “open” (transparent) and “(re)usable” for citizens through being exposed to the information (data) they contain. In this way, the public can have a greater appreciation of what role governments and citizens have, or can play, in society (design). If digital humanities scholarship has added a criterion to traditional humanities scholarship, it may be the result of the debate over why the communications (and, in turn, business) revolution made possible by digital technology has influenced people’s conception of what society and citizenship actually mean. In this sense, the question that the digital humanities debate may be raising right now is not “what does it mean to be human in the digital age?” but “what does it mean to be a citizen in the digital age?”

How education can serve a civic purpose is an essentially political question. That is where the issue of funding for education, or education agendas, arises. Beyond the field of money or politics, however, humanists will continue, regardless, to champion the idea that access to knowledge enhances our sense of humanity, just as they have always done. The very fact that computer scientists, or information-management specialists, are perpetually developing tools to allow better access to information may have drawn the thinking of traditional humanists and computer scientists closer together; or perhaps they were never truly all that far apart?

The impact of technology on humanistic scholarship is a question that has often been raised but rarely answered. For instance, contrary to initial expectations, the development of railways in the early nineteenth century did not totally change people’s sense of what it meant to be human. Instead, it merely changed the business world and people’s capacity to travel and, in turn, to be exposed to a wider section of society. The tool of the internet may change the tools of education, but there is no essential reason to expect that it can, or ever could, fundamentally change its content. That is a process that will be based entirely on human endeavour. In this sense, the digital humanities would seem to be very political in nature, by virtue of the fact that it is highlighting issues of intent and social responsibility, and perhaps interrogating traditional humanists’ priorities in this regard.

It seems to me that the healthiest potential development this might bring is that, by placing more emphasis on the processing of information than on the mere making of an argument, humanities scholarship may be liberated from perhaps its principal bugbear of the previous century: ideology being used as a shorthand for individual prejudice, or as a substitute for critical thinking and empirical scholarship. Pioneers of “digital history”, such as Dan Cohen and Roy Rosenzweig, have suggested in the past that historical thought may be “debugged” of ideology through such a process, although, to date, this is not a debate that has infused the historical community at large, whether within or beyond academia. This may indicate that such “digital humanists” still have a lot to do to get their message across!

Is the open ethos of Digital Humanities something radical?


Miriam Posner of UCLA has recently suggested that digital humanities scholarship has an unrealised potential for radicalism by “critically investigating structures of power”. This is not necessarily a new idea. Almost a decade ago, Clay Shirky and other commentators suggested that a combination of free access to digital information and open-source scholarship was bound to challenge long-established practices of institutions, be they educational or even governmental. This idea may seem far less novel today, given that open-source scholarship now has governmental support in both the United States and the European Union. Nevertheless, Posner suggests that digital humanities scholarship still has the capacity to promote new ideas through the novel interrogation of sources and, in turn, the raising of new debates.

An example of this may be seen in the extensive responses to her own discussion piece. These responses are accessible from her own website and link one forum for debate with another, in the process potentially bringing greater vitality to each debate. For instance, Posner’s observation regarding how “profoundly ideological is the world being constructed around us with data” was linked by Angus Grieve-Smith to his own forum on the subject of “Technology and Language”. This expands upon her point that how we classify information, or even ideas, through language has tremendous power to shape how people think.

Does the digital research community reflect fully on the humanistic connotations of this reality? One might be inclined to answer this question with a simple “no”. For instance, Posner (who comes from a film-studies background) refers to the work of her UCLA colleague Anne Gilliland, a leading figure in Library and Information Studies. Over the past decade, Gilliland has been a frequent contributor to new journals such as Archival Science on the role of Library and Information Studies specialists (such as herself) in redefining the archival profession through the development of new metadata standards for describing both records and collections. However, new metadata standards have not altered the traditional governmental objective of archives, which is to quantify information regarding both individuals and organisations in such a manner as to create records that enable more efficient governance.

It would be no exaggeration to say that this process of record creation has underpinned the concept of governance ever since the days of the Roman Empire and, in turn, shaped the very idea of civilisation itself. The existence of a logical process of recording and processing information has long been typified as the bedrock of civilisation, outside of which one may find only the chaotic world of nature where, in the absence of a logically ordered concept of society, there is only disorder, ignorance and unreformed barbarism. Nobody may like the idea of their personal identity being reduced to statistical information within a governmental record, but without this process political society would not essentially exist.

This raises an interesting idea: if an individual or organisation wishes to champion a particular cause (for instance, a campaign for social justice), then how they categorise that cause may, in itself, be the touchstone of its chances of success. Posner’s interest in feminist critiques of film studies prompts her to focus on ideas of gender and even race. Someone interested in archival science or history might focus more specifically on the concept of citizenship, for few words carry more consequential connotations than that term. If one can view classical civilisation as a root of civilisation, this was not only due to its preoccupation with the exercise of logic but also because it introduced the legal concept of citizenship. More often than not, this was defined against an idea of slavery. “Citizens” were protected by the law, while those who were not citizens had no legal rights at all and were generally classified as “slaves” or “barbarians” (hence the idea that society was a question of championing “civilisation” against “barbarism”). When people conceive of miscarriages of justice, or unwarranted subjugations of people, in our own day, the first concept that generally springs into people’s minds (or, indeed, onto their lips) is still the question of “civic rights”. Is Posner, therefore, essentially focusing on the idea of a “digital citizen”?

The term “digital citizen” has recently been coined to promote the idea of responsible use of the internet. It is based on a moral code of respecting and protecting both oneself and others in the use of the internet, and an essentially legal code based on the idea of respecting intellectual property. But can a “digital citizen” be more than this? Can the advent of the information highway allow non-governmental organisations to play a part in refining or, indeed, improving whatever ideas of citizenship may exist within the societies that they inhabit? Evidence may suggest that this process is already underway, in which case one may argue, from Posner’s perspective, that digital humanities scholars may actually have a key role to play in creating a meaningful concept of “digital citizenship” in the years ahead. In this, they may benefit from participating in debates with archivists regarding the relationship between human rights and recordkeeping. This has been a regular theme of archival conferences in recent years, a debate encouraged not least by the South African archivist Verne Harris. Professor Anne Gilliland of UCLA will be speaking on this broad theme at the Liverpool University Centre for Archive Studies on 28 November 2016.

A Definition of Digital Humanities


Digital Humanities is a term that originated with the phenomenon of Humanities Computing and, in fact, many “Institutes of Humanities Computing” are now known as “Institutes for Digital Humanities”. Why the change?

From the mid-20th century, Humanities Computing was a specialised field that involved the creation of machine-readable texts, as well as the development of computer-based techniques for analysing such material. More recently, however, the development of Open Source learning facilities via the internet has changed the framework for humanities research in general, making what was once a specialised field a much more central activity.

Some see “Digital Humanists” as those who are facilitating this change across all disciplines: in effect, as the facilitators of the growth of digital research skills within all the humanities disciplines.

This is reflected by the fact that institutes for Digital Humanities have not generally developed purely as institutes for digital research, but rather as departments that serve as a bridge between departments for individual disciplines and the digital research community, including computer scientists.

In seeking to act as a bridge between these two fields, institutes of digital humanities are serving to enhance both the traditional research skills and the criteria for meaningful research amongst humanists in general, by making empirical research methodologies more central to all forms of humanities research.

The term “digital humanities” is frequently used interchangeably with “digital arts”. This is because humanist studies carried out in Europe have traditionally been referred to as studies in “the arts”, while similar research is more commonly referred to in North America as “the humanities”.