NETWORK ANALYSIS 1-2:15 Shawn
Basic network analysis: nodes (characters, places, words, books) and edges (connections between those nodes: ideas that come up, books mentioned, etc.). Should edges carry weight?
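The nodes/edges/weight distinction above can be sketched in a few lines. This is a minimal illustration, not any particular tool's data model; the names and weights are invented:

```python
# Nodes can be characters, places, words, or books.
nodes = {"Cavendish", "Hobbes", "The Blazing World"}

# Edges connect pairs of nodes; the weight might count how often
# an idea comes up or a book is mentioned. Values are hypothetical.
edges = {
    ("Cavendish", "The Blazing World"): 3,  # mentioned three times
    ("Cavendish", "Hobbes"): 1,             # one recorded connection
}

print(sum(edges.values()))  # total edge weight: 4
```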
Software that can be used to do this:
A plug-in for Gephi, Export to Earth.
Republic of Letters
Google Refine (allows you to clean up big data sets)
Interested in seeing other people's experiences in starting a network analysis project. One participant's digital component is an add-on to a dissertation and uses a text-based approach; it will be evaluated in the computer science department, not in the English department.
Very few texts of women's writing are online. Some are available through EEBO, but the quality is not good. When you don't have a digitized corpus online, you need to do the work yourself: the respondent took high-resolution photos of documents and produced TEI encodings, but it took six months to do one play out of 12 texts.
Question about social network analysis in periods before there were social networks. The discussant is interested in intellectual network analysis: particular theories, letters, areas of travel, etc. Works with Margaret Cavendish.
Issues with current tech:
A well-known tool is Republic of Letters, which can help map letters and show basic connections.
Cordell: think of each letter as a connection; the more letters, the stronger the connection. Who is central to the network, and who is on the periphery?
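Cordell's point can be sketched with a toy edge list. The correspondents and letters below are invented for illustration; this is a bare degree count, not a full centrality measure:

```python
from collections import Counter

# Hypothetical letter metadata: (sender, recipient) pairs.
letters = [
    ("Cavendish", "Hobbes"), ("Cavendish", "Descartes"),
    ("Hobbes", "Cavendish"), ("Glanvill", "Cavendish"),
    ("Hobbes", "Descartes"),
]

# Each letter is a connection; more letters touching a person
# means a higher degree for that node.
degree = Counter()
for sender, recipient in letters:
    degree[sender] += 1
    degree[recipient] += 1

# The highest-degree node is central; low-degree nodes sit on the periphery.
central, _ = degree.most_common(1)[0]
print(central)  # Cavendish
```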
What were the challenges with Gephi?
Response: it visualized people pretty well, so you could see names and locations, but adding in other data was harder and didn't work well. The respondent then used Neatline to map locations and texts, and that didn't necessarily line up with Gephi.
Can you get metadata from EEBO? Negotiate with ProQuest…
A way to use Excel: NodeXL lets you create a network graph the same way you create a pie chart in Excel, and you can tweak the various visualization options.
Plug-ins for Gephi: one, written by Dave Shepard at UCLA, takes a network graph and, if you have a field with XY coordinates, lays out the data so you can export it into Google Earth. It can illustrate why network graphs are different from maps: with regard to geography, a network graph obscures more than it illuminates. The plug-in is called Export to Earth.
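The core idea behind such an export can be sketched as follows. This is not the plug-in's actual code; it just shows how nodes with longitude/latitude fields might be written out as KML placemarks for Google Earth, with hypothetical place names and coordinates:

```python
def nodes_to_kml(nodes):
    """nodes: iterable of (name, lon, lat) tuples -> KML string."""
    placemarks = "".join(
        f"<Placemark><name>{name}</name>"
        f"<Point><coordinates>{lon},{lat},0</coordinates></Point></Placemark>"
        for name, lon, lat in nodes
    )
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2">'
            f"<Document>{placemarks}</Document></kml>")

# Example: two nodes with geographic coordinates (lon, lat).
kml = nodes_to_kml([("London", -0.13, 51.51), ("Paris", 2.35, 48.86)])
print(kml.count("<Placemark>"))  # 2
```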
What is the design method?
Intertextual and paratextual connections between the author and the works being published during the period.
What is that intellectual network?
How to determine edges after nodes are no longer made up?
Perhaps use literary theory to determine what is a node and what are edges.
The respondent notes that he used affect theory, but it got more and more complicated, and the edges turned out to be affects.
It turns out the edges are more interesting than the nodes, yet the nodes are totally privileged. What about developing edge-based mapping?
The ontology of network analysis: what counts as nodes and what counts as edges?
Human centered. Animal centered. Object centered. Etc.
Novels or authors as edges? They would then be connected based on who is talking about those things.
Trying to get this to a point where it can be opened up as a project that people can critique.
Other theories that might include network analysis:
Actor-network theory: distributed agency models… among or across people and commodities. Mapping networks of commodity circulation.
Anderson's Imagined Communities and Warner's counterpublic vs. public (communities outside of direct social interaction that are nonetheless conceived of…)
The conceptual community rather than the actual community…
All data about reprinted texts from newspaper archives.
Nodes are individual publications
The connections between them are shared texts: the more texts two publications share, the heavier the edge between them. If two publications reprinted fifty of the same texts in a decade, that is perhaps a strong connection. This was an age before copyright law. We can speculate about who was reading whom and which publications were in conversation. HOWEVER, the whole graph can be inverted, with texts as the nodes and the publications that reprint them as the connections between them.
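The two graphs described above are projections of the same underlying reprint data, so the inversion is mechanical. A minimal sketch, with invented publication and text names standing in for the newspaper-archive data:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical reprint data: text -> set of publications that reprinted it.
reprints = {
    "poem_a": {"Gazette", "Courier", "Herald"},
    "sketch_b": {"Gazette", "Courier"},
    "tale_c": {"Courier", "Herald"},
}

# Publication graph: nodes are publications; edge weight counts shared texts.
pub_edges = defaultdict(int)
for pubs in reprints.values():
    for a, b in combinations(sorted(pubs), 2):
        pub_edges[(a, b)] += 1

# Inverted graph: nodes are texts; edge weight counts shared publications.
pub_to_texts = defaultdict(set)
for text, pubs in reprints.items():
    for p in pubs:
        pub_to_texts[p].add(text)
text_edges = defaultdict(int)
for texts in pub_to_texts.values():
    for a, b in combinations(sorted(texts), 2):
        text_edges[(a, b)] += 1

print(pub_edges[("Courier", "Gazette")])  # 2 shared texts
```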
What are the best sources for encoding data for network analysis?
How do we determine nodes and edges?
How do we conduct rigorous selection of nodes and edges in network analysis?
How can we collaborate on encoding non-digitized texts / create collaboratives to save time?
Is it ethical to include "ghost entries" (letters that have been burned or lost) in data analysis?