Big data ethics: When did you consent to Facebook’s self-censorship research?

By Sean Rintel

Are corporate and academic research ethics up to the task of dealing with the “cool but creepy” dilemmas of big data?

A recent paper, Self Censorship on Facebook, shows that Facebook wants to know why users might abort a post at the last minute, and was able to collect data from 3.9 million users over 17 days to find out.

As noted by Ars Technica, this is the social media equivalent of online retailers finding out why people abandon online shopping carts before checkout.

New research possibilities

The methodology expands the possibilities of sociological research. The authors argue that self-censorship is a practice for designing the best possible face for a perceived audience. The study found that 71% of users “self-censor” at the last minute.

[P]osts are censored more frequently than comments, with status updates and posts directed at groups censored most frequently of all sharing use cases investigated.

[P]eople with more boundaries to regulate censor more; males censor more posts than females and censor even more posts with mostly male friends than do females, but censor no more comments than females; people who exercise more control over their audience censor more content; and, users with more politically and age diverse friends censor less, in general.

These findings are in line with the claims made by influential American sociologist Erving Goffman about impression management. He argued that individuals try to guide the impressions that others form of them while others try to infer what they can expect from the individual. In that game, controlling what others don’t know about you is as important as what they do.

The study can’t provide evidence about whether posts are deliberately aborted for reasons of self-presentation or for other reasons such as distraction, accident, or a device crash.

But it does provide a kind of empirical evidence that neither Goffman’s ethnographic methods nor even excellent survey-based research can emulate.

New ethical challenges

Facebook is arguably denying users the right to informational self-determination, both by failing to obtain informed consent to participate in the research and by storing this data longer than needed for immediate technical purposes.

Does this kind of study violate user privacy? The authors claim not.

These analyses were conducted in an anonymous manner, so researchers were not privy to any specific user’s activity … the content of self-censored posts and comments was not sent back to Facebook’s servers.

When I contacted the researchers to clarify what was sent, Facebook communications manager Matt Steinfeld responded that the authors saw only a “binary value”.

“We didn’t track how long the post was … we only looked at instances where at least five characters were entered, to mitigate noise in our data,” he said.

“Nevertheless, self-censored posts of 5 characters or 300 characters were treated the same as part of this research.”

Steinfeld also asserted that Facebook’s Data Use Policy, which is available to users before they join Facebook (and remains available), constituted informed consent.

Facebook’s Data Use Policy on “Information we receive and how it is used” includes a section on data such as that collected for the study:

We receive data about you whenever you use or are running Facebook, such as when you look at another person’s timeline, send or receive a message, search for a friend or a Page, click on, view or otherwise interact with things, use a Facebook mobile app, or make purchases through Facebook.

In the “How we use the information we receive” section the last bullet point includes uses such as:

internal operations, including troubleshooting, data analysis, testing, research and service improvement.

This is at best a form of passive informed consent. The MIT Technology Review proposes that consent should be both active and real time in the age of big data.

That still leaves two questions about the status of user choice and rights.

First, is passive informed consent to collect data on technical interactions sufficient when users have actively chosen not to make content socially available?

Second, to what extent does a proposed right to be forgotten apply to data on technical interactions even if no social data is posted?

Even if users do not yet appreciate the value of this data in the same sense as their actual posts, Facebook and other entities certainly do. This is a form of metadata.

Australian and US governments use and share – sometimes apparently overshare – metadata for security purposes. Governments tend to rather hypocritically claim that metadata is an invaluable resource for surveillance (although that is highly contestable) while also denying that collecting it is a violation of privacy because they believe it is innocuous.

Facebook’s Steinfeld noted that “none of the revelations about government surveillance have extended to the topics covered in this study.”

However, this does not mean requests for such information have not occurred or could not occur in the future under blanket arguments that it is “not content”, “innocuous”, or most concerningly, collected with “informed consent”.

Oversight and informed consent

Facebook’s research is vetted by its own Cross-Functional Review panel as part of an independently assessed privacy program.

This seems to be a reasonable corporate research review process, but the highly intrusive Facebook Beacon made its way through, so corporate interests clearly can win over ethical standards. Users have also often complained that Facebook makes changes first and asks (forgiveness) later.

As VentureBeat reports, the US Council for Big Data, Ethics, and Society will convene for the first time in 2014. Its influence on the big data ethics scene should be positive, but its task is hugely complex.

Big data research requires a total overhaul of the concept of informed consent as well as stronger principles aligned to new rights for the digital age.

Sean Rintel is the current Chair of Electronic Frontiers Australia.

This article was originally published at The Conversation. Read the original article:

Rintel, S. (2014, January 16). When did you consent to Facebook’s self-censorship research? The Conversation.

Related:

The Evolution of Fail Pets Part 2

In November last year UX Magazine published my article on The Evolution of Fail Pets such as Twitter’s Fail Whale.

Fred Wenzel, the Mozilla employee whom I found to have coined the term (and who keeps a gallery of Fail Pets), recently read the piece and picked up an error I had made.

I attributed the cute “sad brick” (below) that appears when Flash crashes in Firefox to Adobe.

Source: crunchyroll.com

However, as Fred says, that was a misattribution:

Attentive readers may also notice that Mozilla’s strategy of (rightly) attributing Adobe Flash’s crashes with Flash itself by putting a “sad brick” in place worked formidably: Rintel (just like most users, I am sure) assumes this message comes from Adobe, not Mozilla.

As this image of an Adobe Flash plugin crash in Chrome shows, browser developers choose how to display errors for plugins. Google has gone with the more traditional puzzle-piece.

Source: crunchyroll.com

I should have noted that although the Firefox error message states that the Adobe Flash plugin has crashed, there is no Adobe logo on the error page, which would likely have been present had it been an Adobe-designed error.

So Mozilla is deliberately attributing the failure to Adobe, but has chosen its own whimsical way of doing so. In my article, I call the “sad brick”, as well as Google’s “sad puzzle piece”, an evolution of the original Fail Pet idea: instead of an attributable brand mascot (such as the Fail Whale), these use a more generic sad face. Beyond Mozilla and Google, many other companies are adopting this low-key, less brand-attributable whimsy: Microsoft’s new BSOD emoticon and Apple’s sad iCloud especially.

Source: uxmagazine

Source: uxmagazine

Read the full original article:

Rintel, S. (2011, November 2). The Evolution of Fail Pets: Strategic Whimsy and Brand Awareness in Error Messages. UX Magazine.

These ideas were also followed up by The Voice Project: Why error messages matter – and why not everyone thinks they are funny.

The Evolution of Fail Pets

UX Magazine just published my article on The Evolution of Fail Pets.

The piece considers error messages as a critical strategic moment in brand awareness and loyalty. Fail pets are of particular interest in terms of branding because they can result in brand recognition through earned media. However, that same recognition carries the danger of highlighting service failure. This article discusses the rise of and changes to the depictions of fail pets, from the initial, highly recognizable fail pets, to markedly more cautious error message imagery in later products.

Read the full article:

Rintel, S. (2011, November 2). The Evolution of Fail Pets: Strategic Whimsy and Brand Awareness in Error Messages. UX Magazine.

These ideas were also followed up by The Voice Project: Why error messages matter – and why not everyone thinks they are funny.