[Asis-l] using deep learning to characterize campaign television ads

kalev leetaru kalev.leetaru5 at gmail.com
Mon Feb 8 14:56:48 EST 2016


Apologies for cross-posting. I thought many of you would find this latest
experiment of interest. It took the 267 campaign ads in the Internet
Archive's TV Political Ad Archive, which aired on monitored television
stations over the last several months, split each ad into a sequence of
images at one frame per second, and ran those frames through Google's
neural-network-based Cloud Vision API to catalog the visual contents of
each frame, including the major objects, activities, and themes it
depicts; extract any recognizable text; estimate the geographic location
it captures; and identify the presence and emotional expression of any
human faces. Coupled with the live airing data compiled by the Archive
(http://politicaladarchive.org/) and the fact that each ad was analyzed
frame by frame at one-second intervals, this enables all kinds of
analyses, from which themes aired most often and where, to trends in how
themes are sequenced within ads.
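
For anyone wanting to try the same basic pipeline on their own video
files, here is a minimal sketch in Python. It assumes ffmpeg is available
for the one-frame-per-second extraction and uses the google-cloud-vision
client library; the post does not say which tooling the experiment itself
used, so the file names, directory, and helper functions below are
illustrative placeholders only.

# Minimal sketch of the frame-splitting-plus-annotation pipeline described
# above. Assumes ffmpeg and the google-cloud-vision Python client are
# installed and API credentials are configured; "ad.mp4", the "frames/"
# directory, and the function names are placeholders, not from the post.
import glob
import subprocess

from google.cloud import vision


def extract_frames(video_path: str, out_dir: str) -> None:
    """Write one JPEG per second of video using ffmpeg's fps filter."""
    subprocess.run(
        ["ffmpeg", "-i", video_path, "-vf", "fps=1",
         f"{out_dir}/frame_%04d.jpg"],
        check=True,
    )


def annotate_frame(client: vision.ImageAnnotatorClient, path: str) -> dict:
    """Ask Cloud Vision for labels, on-screen text, landmarks, and faces."""
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    labels = client.label_detection(image=image).label_annotations
    text = client.text_detection(image=image).text_annotations
    landmarks = client.landmark_detection(image=image).landmark_annotations
    faces = client.face_detection(image=image).face_annotations
    return {
        "frame": path,
        "labels": [(l.description, round(l.score, 3)) for l in labels],
        "text": text[0].description if text else "",
        "landmarks": [lm.description for lm in landmarks],
        "joy": [vision.Likelihood(f.joy_likelihood).name for f in faces],
    }


if __name__ == "__main__":
    extract_frames("ad.mp4", "frames")
    client = vision.ImageAnnotatorClient()
    for frame_path in sorted(glob.glob("frames/frame_*.jpg")):
        print(annotate_frame(client, frame_path))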

A few high-level trends are summarized here:

https://www.washingtonpost.com/news/monkey-cage/wp/2016/02/08/what-does-artificial-intelligence-see-when-it-watches-political-ads/

The full JSON output from the Cloud Vision API for each frame is
available here:

http://blog.gdeltproject.org/computers-watching-ads-deep-learning-meets-campaign-2016/
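
As one illustration of the kind of analysis that output supports, the
short sketch below tallies the most frequent labels across frames. It
assumes each record is a standard Cloud Vision annotate response (one
JSON object per frame with a "labelAnnotations" array); the actual layout
of the posted files may differ, so treat the field and directory names as
assumptions.

# Sketch: count the most common Cloud Vision labels across a set of
# per-frame JSON responses. Assumes one JSON object per frame containing
# a standard "labelAnnotations" array; adjust the field names if the
# posted dataset is structured differently.
import glob
import json
from collections import Counter


def top_labels(json_dir: str, n: int = 20) -> list:
    counts = Counter()
    for path in glob.glob(f"{json_dir}/*.json"):
        with open(path, encoding="utf-8") as f:
            frame = json.load(f)
        for label in frame.get("labelAnnotations", []):
            counts[label["description"]] += 1
    return counts.most_common(n)


if __name__ == "__main__":
    for description, count in top_labels("cloudvision_output"):
        print(f"{count:6d}  {description}")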

You can download the image frames here:

http://blog.gdeltproject.org/image-frames-available-for-political-ad-image-analysis-pilot/

~Kalev
http://www.kalevleetaru.com/
http://blog.gdeltproject.org/