James Spann

MIT Department: Media Arts and Sciences

Undergraduate Institution: Rochester Institute of Technology

Faculty Mentor: Andrew Lippman

Research Supervisor: Hisham Bedri, Travis Rich

Websites: LinkedIn, Personal site


Biography

I am a second-year Computer Science student with an immersion in Mathematics at Rochester Institute of Technology. I enjoy fencing, 35mm film photography, and creative writing. My current research interests lie in cryptocurrency and data science, fields I am focused on exploring and building on for the world ahead. After my bachelor's degree, I plan to continue my academic career, diving deeper to help bridge these disciplines and make them accessible for solving everyday problems.

2017 Poster Presentation

2017 Research Abstract

Detecting bias through user interaction

James Spann, Rochester Institute of Technology, Rochester, NY

Hisham Bedri, Viral Communications Group, MIT Media Lab, Cambridge, MA

Andrew Lippman, Viral Communications Group, MIT Media Lab, Cambridge, MA 

A major task in the field of sentiment analysis is understanding how a person conceptually links diverse concepts or ideas, and finding ways to display those relationships. We describe a gamified web-based tool for exposing user sentiment about news articles based on the user's political affiliation and their sensitivity to perceived bias in those articles. Our challenge is the complexity of engaging the user in this solo act of understanding their own sentiment. A further problem is finding articles for the experiment that express a suitable range of political bias. No existing corpus exists, so we create it on the fly. Users enter the site and state their political leaning. They are then presented with a recent news article accompanied by a sentence from a source with a different political viewpoint. The task is to isolate a sentence in the full article that presents the idea of the test sentence in context. Users input the degree of bias they perceive in the article and can compare it with their peers' assessments. We use previous user responses and a bias-detection algorithm to show how the current user's response compares to those of other members of the same political affiliation. From this we are able to build a dataset comprising series of related sentences that show how users from differing political affiliations understand a news story. In future work, we hope to run a user study and expand the types of information gathered from this dataset.
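The peer-comparison step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the data layout (prior ratings grouped by affiliation), and the 1-to-5 rating scale are all assumptions.

```python
from statistics import mean

def compare_to_peers(rating, affiliation, prior_ratings):
    """Compare a user's perceived-bias rating (hypothetical 1-5 scale)
    with the average rating from peers of the same political affiliation.

    prior_ratings maps affiliation -> list of previous ratings.
    Returns the user's rating alongside the peer average (None if no peers).
    """
    peers = prior_ratings.get(affiliation, [])
    peer_average = mean(peers) if peers else None
    return {"user": rating, "peer_average": peer_average}

# Example: a user rates an article 3; peers of the same affiliation
# previously rated it 2, 3, and 4, averaging 3.
prior = {"left-leaning": [2, 3, 4], "right-leaning": [4, 5]}
result = compare_to_peers(3, "left-leaning", prior)
```

In a full system, the peer average would feed into the feedback screen alongside the bias-detection algorithm's score, letting the user see where they sit relative to their group.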