The Trouble with Tagging

Filed under Experiments.

A team of scholars and technologists at the Emory Libraries led by Rebecca Sutton Koeser and Brian Croxall are developing tools for identifying and marking up names, places, and organizations in Emory’s collection of materials associated with the poets known as the Belfast Group. Tagging these entities will make it possible to examine and present some of these writers’ social and geospatial networks. Because it connects identifiers to the semantic web, the tools created for the Networking the Belfast Group project will give users access to much more data than documents like finding aids regularly provide.

In order to compare the team’s tools with doing the same work by hand, I tagged the people, businesses, and places in the Frank Ormsby finding aid with identifiers. These identifiers connect people to linked data resources, providing more information about “Frank Ormsby” and establishing that we’re always talking about the same entity throughout our finding aids. That might sound easy enough with someone named “Ormsby,” but when you start trying to establish which “Robert Johnson” a particular finding aid is referencing, it becomes much more complicated. Marking up the 14,000 lines of XML took even more time than we expected—about fifty hours. The vast majority of my time was spent determining who people were in the database we are using for our unique identifying numbers, the Virtual International Authority File (VIAF).

The other issues I encountered centered on how language and networks function. The first was, bizarrely enough, a question about parts of speech. If a group is said to be Northern Irish, do I mark that phrase as a geographic place name? If the text had said the group was from Northern Ireland, I would not have even questioned marking it with the identifier for “Northern Ireland.” But when it was an adjective, I hesitated and conferred with the other team members. Is this because adjectives describing groups of people have more wrapped up in them than geography? Is it just a categorization hiccup?

The second kind of question I asked was about relationships. If a poem has Belfast in the title, should this be given a geographical identifier? If a poem title has the name T.S. Eliot in it, does T.S. Eliot get marked as the person? The relationship is quite different from most of the marks, where T.S. Eliot would have authored materials in the collection or where Belfast would have been a site of production or distribution rather than a subject. When something is about, rather than by a person, should it be tagged? And whatever we decide on that question, should the same policy apply to places when they are used either as subjects or sites? Similarly, there were reviews of books where I tagged the authors of both the text being reviewed and of the review itself. The actual collection houses only the reviewer’s work; however, this seemed a different kind of “about” than a poet writing about another person.

I also ran into questions about how to differentiate members of families. When a letter is from the Longleys—a husband and wife—whose identifier do we use? When a Longley is referred to without the first name, how do I decide which Longley is meant? Sometimes it was obvious that Michael Longley was meant, because he is a poet and the collection includes more of his materials; however, I could see how making assumptions about who to identify might reduce the presence, particularly of women, in these networks.

At the center of these questions is how we understand networks. Knowing that two entities are related is quite different from knowing how two entities are related. These questions have implications not only for the functions of the tools created by the team, but also for how researchers will use and understand the networks we produce using them.


The Poetry of Things (in DBpedia)

Filed under Experiments.

Saxifraga cochlearis

In his poem “A Sort of a Song,” William Carlos Williams wrote “no ideas but in things” and “saxifrage is my flower that splits the rocks.” What I’m doing here almost certainly isn’t what he meant; in fact, I may be doing the reverse, in that I am taking a poem and its words and, in a sense, converting it back to, or at least representing it as, its component “things.” Even though it isn’t quite what Williams intended, these lines kept coming to mind as I worked on this post, and they seem related to the things in poetry I’m discussing here.

Early in this project, when we were experimenting with some of the tools and technologies we thought we might use to improve the process of identifying and tagging names in XML text, I noticed some strange output when I ran some of the poetry from the Belfast group sheets against the DBpedia Spotlight annotation service. Because I wasn’t restricting the identified resources to persons, places, or organizations (which is what our tools usually do when we’re trying to identify names to be tagged, e.g. in the NameDropper OxygenXML plugin we’re developing), it was identifying things like “potato”, “rock”, “eye”, “mouth”, “hand”, and “root” in the text. We’re now at the point in the project where we’re starting to shift towards using the tools we’ve been developing to enhance the EAD and TEI XML associated with the Belfast Group, and as I began tagging some of the poetry I was reminded of this and thought it might be worth a little more investigation and thought.

For this experiment, I restricted myself to Seamus Heaney’s poem Digging, as it appears in the draft on one of the Belfast group sheets (there are some slight wording differences from the published version).

Below are the things that DBpedia Spotlight identifies in the poem. I’m using the DBpedia thumbnails (or Wikipedia thumbnails, in the few cases where the DBpedia thumbnail image link was broken) to emphasize the “thingness” of the entities that Spotlight recognizes. Each image links to the corresponding DBpedia resource, and if you hover your mouse over the image you should see a snippet of the poem where the entity was recognized. I’ve sorted them into three groups semi-manually, since I’m still having difficulty filtering on support and similarity scores without losing useful data; in this case, very few of the identified resources had high certainty anyway, which I suspect is due to the poetic language.

First, the things that DBpedia Spotlight recognized accurately, in the order that they occur in the poem.


Finger, Thumb, Pen, Gun, Window, Sound, Spade, Sink, Rhythm, Potato, Drill, Boot, Knee, Root, Brightness, Scattering, Potato, Hardness, Hand, God, Handle (grip), Spade, Grandparent, Cutting, Peat, Bog, Milk, Bottle, Paper, Drink, Sod, Shoulder, Good and evil, Potato, Slapping, Peat, Cutting, Spade, Man, Finger, Thumb, Pen.

It’s sort of an odd way to read a poem, but it’s also kind of intriguing. Among other things, I think this highlights how full of actual physical items, especially body parts, the text is.


Second, a few of the resources that aren’t quite correctly matched up to the text, but are still interesting and semi-relevant.


Squatting, Coarse fishing, Lugger, Shaft mining, Tell, Fell.

I actually found these mis-identifications somewhat thought-provoking. To some degree, they betray the extent to which DBpedia is thing-centric, so that verbs and adjectives are mis-identified as nouns (again, with low confidence or support scores). But I find the notion of the poet’s pen “squatting” between thumb and finger, in the sense of taking up residence in an abandoned space without permission, rather appealing and fascinating. In the case of some of the other mis-identifications, it seems that Spotlight is picking up the context of digging and working outdoors, hence the mountains and archeological entities. And in the case of the lugger ship, the mis-identification actually drove me back to the text: when I looked at “lug” in context, I discovered that I didn’t actually know what it was, and had to go looking to figure out that the lug and shaft are parts of a shovel or spade.


Third, some of the mis-identified things that are humorously, obviously wrong. In this case we have actors, musicians and bands, conceptually unrelated items, and even a video game. I’m including these here partly because they make me laugh, but also to demonstrate that the technology still has limitations and that we need to be careful about how we apply it.


Doris Day, Toner, Vomiting, Turf war (no image), Common cold, Molding (process), The Edge, The Roots, Dig Dug.

For those who are interested, here are some technical notes on how I generated this post.

  1. Got a copy of the TEI XML for the Heaney Belfast group sheets from the current Beck Center Belfast Group site (now available on GitHub!)
  2. Ran the NameDropper lookup-names Python script on the TEI file, restricting it to the poem I was interested in and setting the certainty pretty low, to generate a CSV file.
        lookup-names heaney1.xml --tei-xpath '//t:body[@xml:id="heaney1_1045"]' -c 0.1 \
         --scores --csv /tmp/heaney-digging.csv
      
  3. Wrote a simple Python script to iterate through the CSV file and generate the HTML I wanted for each item, pulling the label and thumbnail from DBpedia and using the context pulled from the poem. (A rough sketch of this step appears after the list.)
  4. Manually sorted the entities I wanted into the three groups, preserving order, and fixed missing thumbnails where I could (some of the DBpedia thumbnail references are invalid; I’m guessing this is because the images have been updated on Wikipedia since the current DBpedia data was last regenerated).
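
For anyone curious what step 3 looks like in practice, here is a minimal sketch of the kind of script I mean. It is not the script I actually used: the CSV column names (“uri” and “context”) are assumptions, so adjust them to whatever lookup-names writes; the lookups simply use the public JSON description of each DBpedia resource.

import csv
import json
import urllib.request

# Hypothetical column names; adjust them to match the CSV produced by lookup-names.
URI_COL, CONTEXT_COL = "uri", "context"
RDFS_LABEL = "http://www.w3.org/2000/01/rdf-schema#label"
DBO_THUMBNAIL = "http://dbpedia.org/ontology/thumbnail"

def dbpedia_data(resource_uri):
    """Fetch the public JSON description of a DBpedia resource, e.g.
    http://dbpedia.org/resource/Spade -> http://dbpedia.org/data/Spade.json"""
    name = resource_uri.rsplit("/", 1)[-1]
    with urllib.request.urlopen("http://dbpedia.org/data/%s.json" % name) as resp:
        return json.load(resp).get(resource_uri, {})

def html_for_row(row):
    data = dbpedia_data(row[URI_COL])
    labels = data.get(RDFS_LABEL, [])
    label = next((v["value"] for v in labels if v.get("lang") == "en"), row[URI_COL])
    thumbs = data.get(DBO_THUMBNAIL, [])
    img = thumbs[0]["value"] if thumbs else ""
    # Thumbnail links to the DBpedia resource; the poem snippet becomes the hover text.
    return '<a href="%s"><img src="%s" alt="%s" title="%s"/></a>' % (
        row[URI_COL], img, label, row.get(CONTEXT_COL, ""))

with open("/tmp/heaney-digging.csv") as csvfile:
    for row in csv.DictReader(csvfile):
        print(html_for_row(row))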


Mapping places in “Around the World in 80 Days”

Filed under Experiments.

The last time I re-read Jules Verne’s Around the World in 80 Days (sometime this spring, when I made a long trip myself), I was surprised to notice that Verne used a lot of very specific place names in the western United States that I wouldn’t necessarily have expected a 19th-century Frenchman to know. Perhaps what caught my eye was the variant spellings of familiar names: amongst such places as Laramie, Salt Lake City, and Omaha, Verne references the “Wahsatch Mountains” and the “Tuilla Valley” (now usually spelled Wasatch and Tooele).

Here is a Google map I’ve created using some of the tools and technologies we’re using for this project (this is a short enough book that it wouldn’t be that difficult to build a detailed map of the trip by hand, but where’s the fun in that?). The points on the map each have links to the corresponding DBpedia record and snippets of context where the places are mentioned in the text. Below, I’ll explain more about how I created it.

View Places in “Around the World in 80 Days” in a larger map

After I finished reading the book, it occurred to me that currently available technologies should make it pretty easy to extract and map place names from the text, and since geographical location is so significant in the work, it seemed like it might be an interesting experiment. I went looking for existing maps of Phileas Fogg’s great trip and was surprised to find very little.

Wikipedia map of Phileas Fogg’s trip in Around the World in 80 Days

The Wikipedia page for Around the World in 80 Days has a map of the trip, but it’s just an image, and fairly high-level. There’s a Map Tales version, a TripLine version, and a couple of Google Maps versions (here and here), but they are still fairly high-level and gloss over a lot of the details, which I think are what make the trip so interesting.

When I started investigating extracting place names and generating a map, my first thought was to try Edina Unlock, which I had heard about but never had the opportunity to work with. However, I wasn’t able to get any results, and it’s not clear to me whether the service is still being maintained or supported. Once we started doing development for this project, I figured out that I could use the Python scripts we’ve created as part of the “Name Dropper” codebase. I grabbed the text from Project Gutenberg, cleaned it up a little bit and split it up by chapter, and then used the lookup-names script from namedropper-py to generate CSV files of the recognized place names for each chapter. The benefit of using DBpedia and semantic web technologies is that, once resources are identified and linked to a DBpedia resource, we have all of the other information associated with those items; in this case, latitude and longitude. Using the CSV data and DBpedia, I wrote some simple Python code to generate a GeoRSS feed that I could import into a Google map. Some of the drawbacks to this approach are that I’m limited to the names that DBpedia Spotlight can identify (and I’m still trying to figure out a good way to filter good answers from bogus ones), and that I’m relying on the geo-coordinates listed in DBpedia (you may notice on the map above that Oregon is pretty clearly in the wrong place).


For those who are interested, here are the nitty-gritty, step-by-step details of how I went from text to map.

  1. Downloaded the plain-text version of the novel from Project Gutenberg.
  2. Manually removed the Project Gutenberg header and footer from the text, as well as the table of contents.

    Note that Around the World in 80 Days is in the Public Domain in the U.S., and according to the Project Gutenberg License, once you have removed the Gutenberg license and any references to Project Gutenberg, what you have left is a public domain ebook, and “you can do anything you want with that.”
  3. Split the text into individual files by chapter using csplit (a command-line utility that splits a file on a pattern):
      csplit -f chapter 80days.txt "/^Chapter/" '{35}'
      
  4. Ran the NameDropper lookup-names Python script on each chapter file to generate a CSV file of Places for each chapter.
    (Note that this is C-shell foreach syntax; if you use a different shell, you’ll have to look up its for-loop syntax.)
      foreach ch ( chapter* )
        echo $ch
        lookup-names --input text $ch -c 0.1 --types Place --csv $ch.csv
        end
      
  5. At this point, I concatenated the individual chapter CSV files into a single CSV file that I could import into Excel, where I spent some time sorting the results by support and similarity scores to try to find some reasonable cut-off values to filter out mis-recognized names without losing too many accurate names that DBpedia Spotlight identified with low certainty. It was helpful to be able to look at the data and get familiar with the results, but I think now I might skip this step.
  6. I wrote some Python code to iterate over the CSV files, aggregate unique DBpedia URIs, and generate a GeoRSS file that could be imported into Google Maps. It’s not a long script, but it’s too long to include in a blog post, so I’ve created a GitHub gist: csv2georss.py. (A stripped-down sketch of the idea also appears after this list.) I experimented with filtering names out based on the DBpedia Spotlight similarity/support scores, but I couldn’t find a setting that omitted bad results without losing a lot of interesting data, and it turned out to be easier to remove places from the final map.
  7. Ran the script to generate the GeoRSS:
        python csv2georss.py > 80days-georss.xml
      
  8. Made a new Google Map and imported the GeoRSS data. (Log in to a Google account at maps.google.com, select ‘create map’, then ‘import’, and choose the GeoRSS file generated above. A couple of times Google only showed the first name; if that happens, I recommend just doing the import again and checking the box to replace everything on the map.)
  9. Went through the map and removed place names that were mis-recognized from common words based on the context snippets included in the descriptions. For example, I ran into things like Isle of Man for man, Winnipeg for win, Metropolitan Museum of Art for met. Because the script aggregates multiple references to the same place, each mis-recognized name only needed to be removed once. When I ran the lookup-names script with -c 0.1 I only had to remove 5 of these; when I ran it with -c 0.01 I had to remove significantly more (over 30).
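
In case it’s useful, here is a stripped-down sketch of the same idea as csv2georss.py (the real script is in the gist linked above and also carries along the context snippets used in the map descriptions). The CSV column name (“uri”) is an assumption; match it to whatever lookup-names actually writes.

import csv
import glob
import json
import urllib.request

URI_COL = "uri"  # hypothetical column name; match it to the lookup-names CSV output
GEO = "http://www.w3.org/2003/01/geo/wgs84_pos#"

def coordinates(resource_uri):
    """Pull geo:lat / geo:long from the public DBpedia JSON description of a resource."""
    name = resource_uri.rsplit("/", 1)[-1]
    with urllib.request.urlopen("http://dbpedia.org/data/%s.json" % name) as resp:
        data = json.load(resp).get(resource_uri, {})
    try:
        return data[GEO + "lat"][0]["value"], data[GEO + "long"][0]["value"]
    except (KeyError, IndexError):
        return None  # not every identified resource has coordinates

# aggregate unique DBpedia URIs across the per-chapter CSV files, preserving order
uris = []
for path in sorted(glob.glob("chapter*.csv")):
    with open(path) as f:
        for row in csv.DictReader(f):
            if row[URI_COL] not in uris:
                uris.append(row[URI_COL])

# write a simple GeoRSS feed that Google Maps can import
print('<?xml version="1.0" encoding="UTF-8"?>')
print('<rss version="2.0" xmlns:georss="http://www.georss.org/georss"><channel>')
print('<title>Places in Around the World in 80 Days</title>')
for uri in uris:
    point = coordinates(uri)
    if point:
        name = uri.rsplit("/", 1)[-1].replace("_", " ")
        print('<item><title>%s</title><link>%s</link>'
              '<georss:point>%s %s</georss:point></item>' % (name, uri, point[0], point[1]))
print('</channel></rss>')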


Using DBpedia to graph writers’ influence

Filed under Experiments.

A technical discussion of the process of generating a network graph of authorial influence using DBpedia, SPARQL, and Gephi.

The image to the right is a network graph of author influence that I generated based on data from DBpedia. I’m sharing it here because I think it is an interesting and cool way to show off some of the power of linked open data, and to start looking at and thinking about the networks of connections between authors.

Most diagrams of the linked-data web that I have seen (like the image below) put DBpedia somewhere at the center, and it’s certainly come into play as we start working on this project. We’re currently working on making use of the DBpedia Spotlight service to identify and annotate named entities in our target content, and several of the data sources we have looked at include references or equivalence to DBpedia resource URIs (to get an idea of what Spotlight does, try out the demo to annotate some text of your own).
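If you’d rather poke at Spotlight from code than through the demo page, a call to its annotation web service is straightforward. The sketch below is only an illustration: the endpoint URL is the public Spotlight instance as commonly documented (it has moved over time, and you can substitute your own installation), and the confidence and support values are arbitrary.

import json
import urllib.parse
import urllib.request

# Assumed endpoint: the hosted Spotlight instance has moved over time,
# so substitute your own server or the current public URL as needed.
SPOTLIGHT_URL = "http://spotlight.dbpedia.org/rest/annotate"

def annotate(text, confidence=0.2, support=20):
    """Ask DBpedia Spotlight to annotate a chunk of text and return the JSON response."""
    params = urllib.parse.urlencode({
        "text": text,
        "confidence": confidence,
        "support": support,
    })
    request = urllib.request.Request("%s?%s" % (SPOTLIGHT_URL, params),
                                     headers={"Accept": "application/json"})
    with urllib.request.urlopen(request) as resp:
        return json.load(resp)

result = annotate("Seamus Heaney and Michael Longley attended a writing workshop in Belfast.")
for resource in result.get("Resources", []):
    print(resource["@surfaceForm"], "->", resource["@URI"])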

This means that I’ve been spending time looking at individual records (like the one for Seamus Heaney or Michael Longley) to see what kind of information is actually available to us.  For instance, we’re using DBpedia Spotlight for discovery, but want to store VIAF (Virtual International Authority File) identifiers where we can, so we need a way to map DBpedia resource URIs to the equivalent VIAF resource.  Some DBpedia records include a viaf property, but not all of them; the viaf property is generated from the authority control link at the bottom of an author’s Wikipedia page, which isn’t always present.

Linking Open Data cloud diagram, by Richard Cyganiak and Anja Jentzsch. http://lod-cloud.net/

In the process of looking at and working with some of the DBpedia records for the Belfast poets, I noticed a couple of interesting properties: influenced and influenced by.  Around the same time I discovered this, I happened across a blog post that discusses using this influence information to graph the history of philosophy, and a related post that extends the approach to graph the entire influence network on wikipedia.  I decided to do something similar, but I’m looking at authors instead of philosophers, and instead of casting a wider net, I decided to see how I might narrow the data to Irish authors.

What follows are some SPARQL queries I used to get authorial influence information.  If you want to get a better idea of what a query returns, I recommend copying it and trying it out in the DBpedia SPARQL query interface; I have found it useful to look at the HTML results format while I’m tinkering with a query, and then switch to CSV when I’m ready to download the data and import it into a tool like Gephi (a scripted version of that download is sketched after the queries below). Here’s a SPARQL query for information about which writers are designated as influencing others:

SELECT ?source ?target
WHERE {
  ?p a dbpedia-owl:Writer .
  ?p dbpedia-owl:influenced ?i .
  ?p rdfs:label ?source .
  ?i rdfs:label ?target
  FILTER langMatches( lang(?source), "EN" ) .
  FILTER langMatches( lang(?target), "EN" )
}

This is very similar to the query used for the philosophers in the post I linked above, but instead of looking for philosophers I’m restricting my results to writers, and I’m actually using the English labels in DBpedia instead of decoding the URIs.  This suggests one immediate value of this type of linked data resource: with a simple change I could export the results of this query with labels in any of the languages that DBpedia supports; this may not be as significant when we’re dealing with names, but it could be pretty useful for other types of resources.  I’m using the “source” and “target” output names here as a convenience, because I’m planning to save the results as CSV and import them into Gephi as “edges” or connections between nodes, and let Gephi automatically generate the nodes based on these connections.

After looking at the author-influence network, I wanted to limit the data to just Irish authors. It took me a couple of tries to find a useful property to filter on; there is a nationality property set to Irish people for some of the authors we’re working with, but strangely it’s not set for everyone, so eventually I settled on the subject Irish poets. Because this is a much smaller dataset, I took some extra effort to write a query that finds all influence relationships where the Irish poet is either the one influencing others or the one being influenced, using both the “influenced” and “influencedBy” properties. It’s an interesting graph, but it also starts to show some of the limits of DBpedia; it’s great as a broad resource, but it’s clearly biased towards the interests of Wikipedia contributors, and if you try to drill down into specifics you may find there isn’t a lot of depth.

For anyone who’s interested, here’s the more complicated query I used to gather the data about Irish poets and influence:

SELECT ?source ?target
WHERE {
  ?p a dbpedia-owl:Writer .
  ?p dcterms:subject category:Irish_poets
  {
    ?p dbpedia-owl:influenced ?o .
    ?p rdfs:label ?source .
    ?o rdfs:label ?target }
  UNION {
    ?o dbpedia-owl:influenced ?p .
    ?o rdfs:label ?source .
    ?p rdfs:label ?target
  }
  UNION {
    ?p dbpedia-owl:influencedBy ?o .
    ?o rdfs:label ?source .
    ?p rdfs:label ?target
  }
  UNION {
    ?o dbpedia-owl:influencedBy ?p .
    ?p rdfs:label ?source .
    ?o rdfs:label ?target
  }
FILTER langMatches( lang(?source), "EN" ) .
FILTER langMatches( lang(?target), "EN" )
}
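
If you’d rather not download results through the web interface each time, the same CSV export can be scripted against the public endpoint. The sketch below is an illustration, not part of our toolset: it reuses the first query from above verbatim (so it relies on the prefixes the DBpedia endpoint predefines, just as the web interface does), and the format=text/csv parameter is a convention of the Virtuoso server behind dbpedia.org/sparql.

import urllib.parse
import urllib.request

SPARQL_ENDPOINT = "http://dbpedia.org/sparql"

def sparql_csv(query):
    """Run a SPARQL query against the public DBpedia endpoint and return the results as CSV text."""
    params = urllib.parse.urlencode({"query": query, "format": "text/csv"})
    with urllib.request.urlopen("%s?%s" % (SPARQL_ENDPOINT, params)) as resp:
        return resp.read().decode("utf-8")

# The first writer-influence query from above, relying on the prefixes
# (dbpedia-owl:, rdfs:) that the DBpedia endpoint predefines.
query = """
SELECT ?source ?target
WHERE {
  ?p a dbpedia-owl:Writer .
  ?p dbpedia-owl:influenced ?i .
  ?p rdfs:label ?source .
  ?i rdfs:label ?target
  FILTER langMatches( lang(?source), "EN" ) .
  FILTER langMatches( lang(?target), "EN" )
}
"""

# Save the source/target pairs so they can be imported into Gephi as an edge list.
with open("influence-edges.csv", "w") as out:
    out.write(sparql_csv(query))

Depending on your Gephi version, the spreadsheet importer may want the header row capitalized as Source,Target before it treats the rows as edges.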

Network graphs generated with Gephi; deep zoom images generated and hosted by zoom.it.


Proposal for DH 2013

Filed under Uncategorized.

What follows is a poster proposal for the 2013 Digital Humanities conference that Brian and I wrote, as project manager and lead scholar/developer, respectively. Our fingers are crossed, and we’ll know more around 1 February 2013.

Networking the Belfast Group through the Automated Semantic Enhancement of Existing Digital Content

There is increasing work on and interest in social networks in the digital humanities community (Meeks 2011). Analysis is frequently done on digital content—including images (Akdag Salah et al. 2012); email (Hangal et al. 2012); and citation networks (Visconti 2012)—because the data lend themselves to aggregation, conversion, and analysis. Yet despite this flurry of activity, the possibility exists for an exponential jump in network analysis. After all, the holdings and catalogs of galleries, libraries, archives, and museums (GLAMs) include traces of vast paper-based networks, but the data are locked away in forms that don’t easily lend themselves to analysis. What if we could open up that content? In this poster, we will report on an attempt to provide tools for archivists to expose the information embedded in the descriptions of their collections as well as a test case for analyzing that data: an examination of the networks of the Irish poets collectively known as “the Belfast Group.”

Our goal is to develop software tools and design a workflow to enhance TEI and EAD—documents that are already commonly created and maintained by archivists and text centers—without radically increasing the time and effort involved. The software tools (http://github.com/emory-libraries-disc/name-dropper) consist of a plugin for the Oxygen XML editor and command line scripts that will, first, make use of DBpedia Spotlight to identify and annotate recognized names and other resources within the text and, second, connect to linked-data systems (starting with the Virtual International Authority File [VIAF]) to provide authoritative, scholarly identifiers. The scripts will allow technical users to inspect and tune the results or to automatically tag high-certainty resources, and the plugin will provide a user-friendly interface to review and accept suggested names while editing a document. The enhanced documents should provide significant benefits to GLAMs, allowing them to connect disparate types of content (e.g., digitized texts or photographs from an archival collection) and to augment it with data from other linked data systems. Furthermore, the enhanced documents will make it possible to expose these data in more machine-readable and research-accessible formats. Our tools and workflow could be applied to resources held by different archives (for a different approach, see Blanke et al. 2012). What’s more, enhancing these documents helps GLAMs provide a means for researchers to do non-consumptive, social network research on the metadata of collections that might otherwise be closed or problematic in other ways (e.g., restricted correspondence from living authors).

Although our tools are not yet complete, we have already begun preliminary visualization and analysis of network relationships using data that mirrors what we will generate automatically by Summer 2013. The difficulties of defining “the Belfast Group” make for a compelling test case for our attempt to understand networks via data that are newly machine readable. The Group is a contentious network since the label has been variously applied to a weekly writing workshop that ran from 1963 to 1972, to the most famous poets who attended that workshop—including Seamus Heaney, Michael Longley, and Paul Muldoon—or, more loosely, to all of the writers who “put Belfast on the literary map” (Clark 6). The significance of the writing workshop is debated by critics and often rejected by the poets themselves, sometimes vehemently. In contrast to a more formalized group, some scholars identify “an informal community” of poets evidenced by their letters, their promotion of each other, and poems dedicated to each other (Drummond 32), connections which are richly documented by archival materials held at Emory University.

Our preliminary data, manually generated from a subset of the correspondence EAD, suggest a wider set of connections in the Group than traditional scholarly approaches do. The latter selectively emphasize the relationships of the most prominent authors and the role of the writing workshop (see fig. 1). Since our data are based on a much larger set of artifacts, as well as their complete metadata, we find that the locus of poetic activity in Belfast is not so oriented around the workshop (see fig. 2). Once we collect the full dataset via our completed tools and workflow, we will compare it with models generated by traditional scholarly methods to identify significant gaps and discrepancies in either model.

In addition to offering this new analysis of the Belfast Group’s network and a report on the development of our tools, our poster presentation at DH 2013 will include a hands-on demonstration of the software tools and interactive visualizations of the network data.


Figure 1. Graph of relationships inferred from Heather Clark’s Ulster Renaissance. Nodes are sized by degree and colored by hub score. The writing workshop is the strongest hub; the trio of large nodes represent Michael Longley, Derek Mahon, and Seamus Heaney.

Figure 2. Relationship graph based on preliminary correspondence data, sized and colored as in figure 1. Based on this data, the writing workshop does not function as a hub at all, and Paul Muldoon becomes the largest node.

References

Akdag Salah, Alkim Almila et al. “Exploring Originality in User-Generated Content with Network and Image Analysis Tools.” Digital Humanities 2012. University of Hamburg. 19 July 2012.
Blanke, Tobias et al. “Information Extraction on Noisy Texts for Historical Research.” Digital Humanities 2012. University of Hamburg. 19 July 2012.
Clark, Heather. The Ulster Renaissance: Poetry in Belfast, 1962-1972. Oxford: Oxford University Press, 2006.
Drummond, Gavin. “The Difficulty of We: The Epistolary Poems of Michael Longley and Derek Mahon.” The Yearbook of English Studies, Vol. 35, Irish Writing since 1950 (2005), pp. 31-42.
Hangal, Sudheendra, et al. “Processing Email Archives in Special Collections.” Digital Humanities 2012. University of Hamburg. 20 July 2012.
Litta Modignani Picozzi, Eleonora, Jamie Norrish, and Jose Miguel Monteiro Vieira. “Complex entity management through EATS: the case of the Gascon Rolls Project.” Digital Humanities 2012. University of Hamburg. 18 July 2012.
Moretti, Franco et al. “Networks, Literature, Culture.” Digital Humanities 2011. Stanford University. 21 June 2011.
Meeks, Elijah. “More Networks in the Humanities or Did books have DNA?” Digital Humanities Specialist. 6 December 2011. Web. 1 November 2012. https://dhs.stanford.edu/visualization/more-networks/.
Mendes, Pablo N. et al. “DBpedia Spotlight: Shedding Light on the Web of Documents.” Proceedings of the 7th International Conference on Semantic Systems (I-Semantics). Graz, Austria. 7–9 September 2011. http://www.wiwiss.fu-berlin.de/en/institute/pwo/bizer/research/publications/Mendes-Jakob-GarciaSilva-Bizer-DBpediaSpotlight-ISEM2011.pdf.
Pitti, Daniel, et al. “The Social Networks and ARchival Context Project.” Digital Humanities 2011. Stanford University. 22 June 2011.
Pitti, Daniel, et al. SNAC: The Social Networks and Archival Context Project. Web. 29 October 2012. http://socialarchive.iath.virginia.edu/.
Visconti, Amanda. View DHQ: Citation Network Visualization for Digital Humanities Quarterly. Web. 1 November 2012. http://digitalliterature.net/viewDHQ/.
