Student Data Privacy
Facebook Creates Marketing Profiles of Vulnerable Users

Darren Davidson of The Australian reported that Facebook executives outlined "how the social network can target 'moments when young people need a confidence boost' in pinpoint detail" for potential advertisers. Outlining strategies to reach users as young as 14, the 23-page document described how Facebook's sentiment analysis can identify when users feel "'stressed,' 'defeated,' 'overwhelmed,' 'nervous,' 'stupid,' 'silly,' 'useless,' and a 'failure.'" The document further specified the timing of emotional arcs when advertising might be most productive. Davidson quotes the document as saying: "Anticipatory emotions are more likely to be expressed early in the week, while reflective emotions increase on the weekend. Monday-Thursday is about building confidence; the weekend is for broadcasting achievements." Alex Heath, writing for Business Insider, posits: "Even if Facebook hasn't allowed advertisers to target young people based on their emotions, its sharing of related research highlights the kind of data the company collects about its nearly 2 billion users." While Facebook has labeled news reporting of the document "misleading" and chalked it up to an "oversight," the incident raises questions about digital traces, especially for youth, in spaces that may feel safe and private but where the business model may not be transparent to users.
Data & Equity

Is Data Neutral?

In a recent essay, danah boyd argues that there is "nothing about doing data analysis that is neutral." News this week shows how data sources intended for surveillance served to challenge witness testimony and provide a record of bias. In Pennsylvania, surveillance cameras captured criminal fraternity hazing behaviors that resulted in the death of a student; the footage contradicted witness testimony and led to charges of involuntary manslaughter. In Texas, video evidence conflicted with police testimony in the police shooting of a black teen, showing that the boy was not using his car as a weapon and driving toward the police, but instead driving away. In both cases, the surveillance likely occurred without the knowledge or consent of the victims, one of whom was a college student and the other a 15-year-old. These examples highlight the complexity of privacy protection and the benefits and trade-offs of surveillance.

Caroline Sinders, Eyebeam Open Lab Fellow at BuzzFeed News, explores the ways bias is baked into machine learning and urges transparency of algorithms and a shared method of addressing ethical consequences. Depending on which data points are chosen, groups can be unfairly stigmatized. Arguing that "predictive policing systems disproportionately target 'street crime' rather than white collar crime," Brian Clifton, Sam Lavigne, and Francis Tseng provide an interactive visualization of white collar crime risk zones across the U.S., noting places where convictions have occurred. This interactive map challenges the more common practice of focusing on crimes in low-income areas. Algorithms have already been used in judicial decisions in Wisconsin.
Court officials were reportedly "uneasy with using a secret algorithm to send a man to prison," with one justice citing a ProPublica study which found that, under predictive systems like COMPAS, black defendants "were far more likely than white defendants to be incorrectly judged to be at a higher rate of recidivism." The New Inquiry offers a syllabus-style reading list on predictive policing and its disproportionate consequences for already vulnerable groups.

Happenings

The State Department invites public comment on collecting social media usernames (but not passwords) from visa applicants, as part of an emergency review of what has been referred to as its "extreme vetting" processes. Comments are due next Thursday, May 18.

Discussions this week over proposed changes to the Affordable Care Act overlooked impacts on special education programs. The School Superintendents Association, a professional organization representing school administrators, reports that a majority of districts depend on Medicaid disbursements to pay the salaries of health care professionals who work with special education students. School-based health services could be severely diminished under the proposed changes, potentially also reducing or preventing low-income students' access to health services.

Low-income high school students were denied Upward Bound scholarships due to formatting errors in the online application. As school districts struggle under the burden of unpaid lunch debt, many resort to tactics that stigmatize children. The San Diego Union-Tribune reports that children in the Vista school district will be using fingerprint scans to purchase school lunches. As technology allows for more intrusive tracking and targeting of vulnerable populations, actor and activist George Takei advises that Americans must understand and honor the past in order to learn from, and not repeat, it.
Mapping Inequality is a visualization and archive of "credit worthiness" determinations made by the Home Owners' Loan Corporation in Depression-era New Deal America. On its Look Different website, MTV offers a series of documentaries and short films to address bias in the U.S. and provoke discussion. Union Capital Boston is hosting a free webinar on May 11 to discuss the value of civic engagement and its model of community participation.

Digital Literacy

Teaching the "Olds" New Tricks to Empower Youth Digital Learning, Debunk Fake News, and Promote Media Literacy

At a recent conference at the Harvard Graduate School of Education, Under the Trump Administration, What Is Next for Education?, participants unpacked the challenges adults face in creating the meaningful learning environments they want their kids to have. Sandra Cortesi, Director of the Youth & Media team at the Berkman Klein Center, highlighted the positive and powerful role that informal connected learning spaces already play, and have the capacity to play, in youths' educational experiences. As reported in this write-up of the event in the Huffington Post, Cortesi encouraged adult education about these spaces, calling upon those who work in education, technology, or related fields "'[to] mak[e] every effort to familiarize educators and caregivers with digital learning environments, and support those who will go the extra mile to advance this new era of online learning.'"

New guidance from ConnectSafely can help teachers and parents go that extra mile by empowering youth to better understand and debunk "fake news" and master media literacy. In the Parent & Educator Guide to Media Literacy & Fake News, authors Kerry Gallagher and Larry Magid encourage adults to start their efforts in this area by accepting that "it's not just kids who need a lesson in media literacy.
Adults do as well." Gallagher & Magid provide an overview of the societal circumstances that have led to the current media environment, in which having "a vibrant and dynamic array of [digital] information sources . . . has also made it more difficult to know which sources can be trusted." They then offer practical tips both for adult self-knowledge and for insights adults can try to transmit to teens and kids. These suggestions include cultivating emotional intelligence so that "media-triggered emotions" can be recognized and unpacked; then, "[o]nce the emotion has been managed, engage children to think logically about how they can take action that is positive in response to that [triggering] information." Gallagher & Magid also encourage adults to help youth plan how they will respond to false posts by importing the familiar anti-bullying framework: "For decades we have told students to stand up to bullying and teasing and that being a bystander is not acceptable. We need to apply this same standard to fake news and can teach how to stand up to false information without provoking more conflict." This educational goal comports with research findings recently published in PLOS ONE: "explain[ing] the flawed argumentation technique used in the misinformation" can serve as "inoculation" against its effects.

Happenings

A new tool for standing up to fake news: Google is asking adult and youth users alike to help promote information quality and fight fake news by making it "much easier for anyone to give it feedback on its search results . . . 
For everyday users, that means that if you see a result featured on Google's pages that you think is wrong or offensive, then you can actually do something about it." Another recent dispatch from the human user/algorithm interface: a deputy editor at the Chicago Tribune raises questions in this Medium post about how difficult it is, even for content creators, to understand the algorithms Facebook uses to choose which content to highlight. In a recent LA Times op-ed, Professors Matthew A. Baum and David Lazer offer this reminder about the limits of all users' knowledge of digital companies' toolkits: "the public must hold Facebook, Google and other platforms to account for their choices. It is almost impossible to assess how real or effective their anti-fake news efforts are because the platforms control the data necessary for such evaluations. Independent researchers must have access to these data in a way that protects user privacy but helps us all figure out what is or is not working in the fight against misinformation."

Slate looks at how teachers are turning to ed tech and online games to build on students' increased interest in learning about the political realm without classes breaking down along political divides. After-school programs and other less formal learning settings now have access to Google Classroom, according to EdTech Magazine. ProPublica looks at new techniques for effective investigative journalism in the current political and media climate, including being transparent about the stories journalists are pursuing so that readers with relevant information can more easily contribute to an investigation.
EdTech interviews Professor Joseph Qualls about the impact of AI on education; Qualls predicts that "'having large universities and large faculties teaching students is probably going to go away — not in the short-term, but in the long-term.'" A Media Justice convening (taking place just before the International Communication Association's 2017 conference) is coming up later this month to "bring together activists, advocates, and researchers to advance the shared theory and practice of media justice." Boston Civic Media is hosting Civic Imagination: Designing and Building a Better Future on June 3.

What We're Listening to . . .

ConnectSafely's Larry Magid interviews Michelle Ciulla Lipkin, Executive Director of NAMLE, about media literacy teaching strategies for addressing fake news. Zeynep Tufekci gives her first talk on her new book, Twitter and Tear Gas: The Power and Fragility of Networked Protest, at the Berkman Klein Center. Data & Society hosts a discussion with Maurizio Ferraris and Martin Scherzinger, "Post-Truth and New Realities: Algorithms, Alternative Facts, and Digital Ethics." RadioLab brings us Debatable, a podcast episode about a Black queer student who challenges traditional modes of debate. Jose Antonio Vargas hosts an MTV documentary that asks what it means to be white. Constance Steinkuehler gives a thought-provoking keynote at DML 2016 on studying video game use. Have You Heard explores "Truth in Edvertising," a "fast growing and completely unregulated . . . byproduct of an education marketplace."