Hi DH fans! I have a question for you — when you think about information on the Internet, what benchmarks might come to mind that would reveal a certain topic as having thorough, accessible information online?
Continue reading “DHC Weekly 5/24: Wikipedia Cross-Lingual Image Analysis”
Hi DH fans! This week on the blog I want to draw your attention to a resource that is less about research and more about everyday safety and security as we navigate the shark-infested waters of “the Internet.”
Continue reading “DHC Weekly 5/17: Terms of Service; Didn’t Read”
Hi DH Superfans!
We’ve been talking about some text analysis tools lately here on the blog, and this week I’d like to turn to a tool that allows for some lightweight and accessible analysis of the vast and unknowable text-based dataset that is twitter.com!
Continue reading “DHC Weekly 5/13: twXploder!”
Continuing with the theme of text analysis, this week I want to go hyper-granular and a little old school to talk about a part-of-speech tagging tool, CLAWS.
Continue reading “DHC Weekly 5/3: Part of Speech Tagger”
Last week, we looked at a set of corpora that, in part, allow one to ask questions about changes in language over time — today I’m going to talk about a similar tool, the Google N-gram Viewer.
Continue reading “DHC Weekly 4/26: Google Ngram…friend or foe?”
Last week on the blog, I wrote about Voyant, a text analysis tool that can be used to discover all sorts of stats about a text or a corpus of texts — what words are used most frequently, in what combinations, in what contexts, and so on. I used The Adventures of Sherlock Holmes as my test corpus, because its consistent tone and easily accessible public domain status make it an ideal example for the sorts of questions textual analysis tools like Voyant can prompt one to ask of a literary text. But what other sorts of corpora are out there, and what sorts of projects does analyzing them lead to? Today, I want to write about a publicly accessible collection of English-language corpora amassed by Mark Davies, a professor of linguistics at Brigham Young University.
Continue reading “DHC Weekly 4/19: Corpora Works of Mercy”
Hello DH fans!
This week we’re leaving mapping behind us and turning to a category of DH tools oft-utilized in the classroom: text analysis! I’m going to be taking a look at one of the most widely used text analysis tools, Voyant! Voyant is so popular because it’s easy to use right out of the box, with no coding necessary. In practice, I have found this to mean that Voyant is a little idiosyncratic and difficult — but I’m going to try to break down its basics for you all this week!
Continue reading “DHC Weekly 4/12: Voyant and Text Analysis”
Hello DH-fans! I know it’s been map city on the blog lately, but I want to talk this week about one more mapping tool, Historypin. Historypin, unlike the other mapping tools I’ve written about, is designed to be created by communities rather than by solo authors. You set up your collection, and then any user can submit a pin, and with it a memory or piece of media.
Continue reading “DHC Weekly 3/23: Historypin”
Hi DH fans! Do you remember a few weeks ago, when I looked at ArcGIS for mapping? And then a few weeks before that, when I looked at TimelineJS for making timelines? Well, what if you could do both…at the same time?
Continue reading “DHC Weekly 3/8: Timemapper”
Hello DH-ers! Do you remember a few weeks ago, when I introduced you to a new tool from JSTOR? In that post, I mentioned that JSTOR’s other tool in beta, the Text Analyzer, hadn’t been working for me. Well, after many emails exchanged with an infinitely patient project manager at JSTOR (thank you Michael!!), the issue has been sorted, and I am so excited to tell you all about the Text Analyzer!
Continue reading “DHC Weekly 2/15: JSTOR Text Analyzer”