Khosla, An, and Lim develop deep-learning algorithm for neighborhoods


October 16, 2014

neighborhood algorithm

Human beings have a remarkable ability to make inferences based on their surroundings. Is this area safe? Where might I find a parking spot? Such decisions require us to look beyond our "visual scene" and weigh an exceedingly complex set of understandings and real-time judgments. This raises the question: Can we teach computers to "see" in the same way? And once we teach them, can they do it better than we can?

The answers are "yes" and "sometimes," according to research out of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). Researchers have developed an algorithm that can look at a pair of photos and outperform humans in determining things like which scene has a higher crime rate, or is closer to a McDonald's restaurant.

To create the algorithm, the team — which included PhD students Aditya Khosla, Byoungkwon An, and Joseph Lim, as well as CSAIL principal investigator Antonio Torralba — trained the computer on a set of 8 million Google images from eight major U.S. cities that were embedded with GPS data on crime rates and McDonald's locations. They then used deep-learning techniques to help the program teach itself how different qualities of the photos correlate.

Continue reading on MIT News.
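The article does not include the team's code, but the task it describes — looking at a pair of scenes and deciding which ranks higher on some attribute — can be illustrated with a toy pairwise-ranking sketch. Everything below is hypothetical: the synthetic "features" stand in for learned image representations, and `fit_ranker` is a minimal logistic ranker, not the CSAIL deep-learning pipeline.

```python
# Toy sketch (not the authors' method): learn a scoring function f(x) over
# scene features, then compare f(x_a) vs f(x_b) to predict which of two
# scenes ranks higher on an attribute (e.g., crime rate). All data here
# is synthetic and all names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for image features: 200 scenes, 5 features each,
# with a hidden "true" attribute score we try to recover.
X = rng.normal(size=(200, 5))
true_w = np.array([1.5, -2.0, 0.5, 0.0, 1.0])
scores = X @ true_w

def fit_ranker(X, scores, pairs=2000, lr=0.1, epochs=200):
    """Fit weights w so that sign(w . (x_a - x_b)) predicts which scene
    of a random pair has the higher attribute score (logistic loss on
    pairwise feature differences)."""
    w = np.zeros(X.shape[1])
    n = len(X)
    idx_a = rng.integers(0, n, size=pairs)
    idx_b = rng.integers(0, n, size=pairs)
    y = (scores[idx_a] > scores[idx_b]).astype(float)  # 1 if a ranks higher
    D = X[idx_a] - X[idx_b]                            # feature differences
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(D @ w)))             # P(a ranks higher)
        w -= lr * (D.T @ (p - y)) / pairs              # logistic gradient step
    return w

w = fit_ranker(X, scores)

def compare(x_a, x_b, w):
    """True if scene a is predicted to rank higher than scene b."""
    return float(x_a @ w) > float(x_b @ w)

# Pairwise accuracy on fresh random pairs.
test_a = rng.integers(0, 200, size=500)
test_b = rng.integers(0, 200, size=500)
pred = (X[test_a] - X[test_b]) @ w > 0
truth = scores[test_a] > scores[test_b]
accuracy = (pred == truth).mean()
```

The pairwise framing matters: rather than predicting an absolute crime rate for a scene, the model only has to learn relative orderings, which is the same judgment the human-versus-algorithm comparison in the article asks for.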
