Resources

For us, Digital Humanities is a research method for the Humanities that uses digital tools to turn traditional objects of study into data. Once turned into data, we can analyze texts, map historical documents, and create visualizations, and then use humanities skills to interpret those results.

Getting started with DH

Explore our video tutorials on the basics of data collection and data cleaning using Excel; OpenRefine, a quick way to make common edits to a lot of spreadsheet data at once; and RStudio, an environment for the R programming language that you can use to clean and manipulate data.

We strive to use open source programs that are supported by a community of users and are typically free or low cost to use. Check out our full explanation on Network Analysis, Text Analysis, and Basic Data Visualization.

New to DH and want to start small? Check out our teaching resources and learn new methods.

Resources for Grad Students

Video
  • Watch this video to see Joey Stanley’s Brand Yourself workshop.

Take control of your online presence and get ready to hit the job market by creating a unified digital narrative.

Blog
  • Check out Joey Stanley’s blog to see his notes on the workshop.
Presentation
  • Download the Brand Yourself workshop PowerPoint.

Outside Resources

Programming Historian is a directory of different lessons on R, JavaScript, Python, mapping and more. These are all tailored to the humanities.

DH Toychest – curated by Alan Liu, the DH Toychest offers a list of DH tools, sample datasets for practice, tutorials, and a wealth of other information and resources.


Chrome has a plugin called WebScraper that makes it easier to scrape websites without writing any code. Follow the instructions on the website to download the plugin, and consider watching the intro video on the same page; it will give you a feel for the plugin and how it works.

Once the plugin is installed, right-click anywhere on a page (it doesn't have to be the one you want to scrape) and click "Inspect (Element)". There should be a tab at the top of the inspector called WebScraper; this is the plugin. Next, click Create new sitemap > Import Sitemap, then paste in the sitemap the DigiLab has created, or any other sitemap that has already been created here. This is where the DigiLab will upload its sitemaps for public use; they will be named for the website they scrape.

The last step before scraping is to change the URL in the sitemap before you import it. After you paste the sitemap into WebScraper, change what is inside "startUrl":["PUT URL IN HERE BETWEEN QUOTATION MARKS"]. Don't change anything else in the sitemap besides the URL itself. Once the sitemap is imported, click Sitemaps > [name of sitemap] at the top of the inspector, then click Scrape. You will be asked to set request intervals; 2000 ms is plenty of time for both the request interval and the page load delay. This is just to show the site that you are a human and not a robot.
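If you'd rather edit the URL before pasting, a WebScraper sitemap is just a JSON object with a "startUrl" key, so you can update it programmatically. Here is a minimal sketch in Python; the sitemap contents and the target URL are placeholders, and real sitemaps will have more fields and selectors.

```python
import json

# A hypothetical sitemap string (real DigiLab sitemaps will include
# selectors and other fields specific to the site being scraped).
raw_sitemap = '{"_id": "example-sitemap", "startUrl": ["OLD_URL"], "selectors": []}'

sitemap = json.loads(raw_sitemap)

# Replace only the startUrl, leaving the rest of the sitemap untouched,
# just as the instructions above describe.
sitemap["startUrl"] = ["https://example.com/page-to-scrape"]

# This updated string is what you would paste into Import Sitemap.
updated = json.dumps(sitemap)
print(updated)
```

The point of doing it this way is that `json.loads`/`json.dumps` will catch a malformed sitemap (for example, a stray quotation mark) before you paste it into the plugin.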

Let the plugin do its thing; you might have to click the refresh data button once it is done. Then you can export the table as a CSV, and you have your data!