NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 1959
Title: Predicting the Politics of an Image Using Webly Supervised Data

The initial scores for this paper were:
8: Top 50% of accepted NeurIPS papers. A very good submission; a clear accept.
3: A clear reject. I vote and argue for rejecting this submission.
6: Marginally above the acceptance threshold.

R1 thinks the collected dataset is a potentially high-impact contribution and also likes the proposed model and results. R2 likes the overall problem set-up but questions the assumptions made by the proposed model and points to a lack of clarity in the experiments. R3 also thinks the primary contribution is the collected dataset and points to some important missing details.

The authors provide a rebuttal. In the post-rebuttal discussion, R2 increases their score to 6, as some of their clarity concerns are addressed in the rebuttal, but still thinks the model assumptions need further clarification in the paper (see below). R3 increases their score to 7, as many of their concerns were answered in the rebuttal. R1 maintains their positive rating.

Given the final positive recommendations of 8, 6, and 7, the AC recommends acceptance. The AC encourages the authors to further clarify and motivate the model assumptions in the final version of the paper (see also below).

Here are anonymised excerpts from the reviewer discussion:

R2: “The author response has made the experimental details clear and answers all of my questions satisfactorily. I am still not very convinced of the image-only test-time assumption. The authors argue this is a scientific assumption rather than a practical one. I am unconvinced this point is made clear in the paper through the motivation or experimental analysis.”

R3: “R2 makes an interesting point about whether predicting political bias from *only* images is a realistic assumption. I can imagine scenarios in which people would see an image with little to no text (e.g., while scrolling through social media platforms, I see a lot of news articles with a large image and a fairly short headline). When trying to understand how image bias impacts user clicks or something of that nature, a method like the one described in the paper might be useful.”

R1: “I appreciate R2’s point about the limitations of assuming that text is unavailable at test time. While I think this is a fair point, I can also imagine scenarios like the one R3 described. Another potential application for this system is as a resource for people who are interested in doing large-scale image analysis (e.g., computational social scientists who want to explore a large set of images for different visual features of political framing, etc.)”