When Bots Socialize for Fame and Money
November 6, 2011 8:39 PM

The Socialbot Network - A UBC study suggests that many Facebook users will friend total strangers. Researchers said they collected 250 gigabytes of information from Facebook users by using socialbots — fake Facebook profiles created and controlled by computer code (sic). The researchers said they got the approval of UBC’s behavioural research ethics board. The data they collected was encrypted, anonymized, and deleted after they completed their data analysis.

The researchers found that even operating the socialbot network (SbN) at a conservative pace, each socialbot could collect on average 175 new chunks of publicly inaccessible data per day.

The fake Facebook profiles, which were set up with names, photos and computer-generated status updates, sent friend requests to about 5,000 random Facebook users. When people accepted those friend requests, the socialbots followed up by putting out friend requests to friends of the initial group.

As a result, it took only eight weeks for researchers to acquire 250 gigabytes of personal information from Facebook users.
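The crawl pattern described above is essentially a breadth-first snowball. A minimal sketch, assuming `accepts_request` and `friends_of` are hypothetical stand-ins for the bot's actions rather than any real Facebook API:

```python
from collections import deque

def snowball(seed_users, accepts_request, friends_of, max_rounds=2):
    """Breadth-first 'snowball' sketch: send requests to a seed set,
    then to the friends of whoever accepted, round by round."""
    infiltrated = set()
    frontier = deque(seed_users)
    for _ in range(max_rounds):
        next_frontier = deque()
        while frontier:
            user = frontier.popleft()
            if user in infiltrated:
                continue
            if accepts_request(user):
                infiltrated.add(user)
                next_frontier.extend(friends_of(user))
        frontier = next_frontier
    return infiltrated
```

Each round widens the net: later requests arrive backed by mutual friends, which is exactly why the acceptance rate climbs.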
posted by KokuRyu (64 comments total) 15 users marked this as a favorite
 
Well, that explains some of the weird stranger friend requests I've gotten.
posted by theredpen at 8:44 PM on November 6, 2011 [4 favorites]


I saw this paper earlier this week, and my first thought was that no US university would ever allow this kind of research. Some of my colleagues mentioned that a similar paper (not by this group) had similar results but was rejected by a conference because of concerns over the ethics of the research.

At the same time, I also have to say that this paper provides a really useful data point as to how many people blindly accept all friend requests.
posted by jasonhong at 8:45 PM on November 6, 2011 [2 favorites]


You cannot, sir, take from me anything that I will more willingly part withal — except my life — except my life — accept my life.
posted by nevercalm at 8:52 PM on November 6, 2011 [4 favorites]


Lately I've been getting friend requests from people who claim to have gone to my high school, within a year or so of me. My high school is small enough that I'm pretty sure I'd recognize those names, and I don't, plus these people don't have any friends in common with me, so understandably I was quite puzzled.

I was probably going to end up asking about this on the green, but now I don't have to.
posted by madcaptenor at 8:53 PM on November 6, 2011 [1 favorite]


I only friend names that seem obviously fake anymore.
posted by telstar at 8:55 PM on November 6, 2011 [1 favorite]


They could have accomplished the same thing by hiking down to Wreck Beach and handing out chocolate magic mushrooms.
posted by mannequito at 8:55 PM on November 6, 2011 [8 favorites]


I've seen topics like this before, but they always make me wonder - why bother? It seems to me that the best way to get private user information from facebook is to just buy it from them.
posted by rebent at 8:56 PM on November 6, 2011 [2 favorites]


Joke's on them. Their socialbot is friends with 18 of my socialbots.
posted by twoleftfeet at 8:58 PM on November 6, 2011 [12 favorites]


I don't think that this would pass my board; it probably involves a large number of minors (a protected population), and the deception aspect is questionable. I didn't see them address either of these issues.

8.07 Deception in Research (APA)
(a) Psychologists do not conduct a study involving deception unless they have determined that the use of deceptive techniques is justified by the study's significant prospective scientific, educational, or applied value and that effective nondeceptive alternative procedures are not feasible.

(b) Psychologists do not deceive prospective participants about research that is reasonably expected to cause physical pain or severe emotional distress.

(c) Psychologists explain any deception that is an integral feature of the design and conduct of an experiment to participants as early as is feasible, preferably at the conclusion of their participation, but no later than at the conclusion of the data collection, and permit participants to withdraw their data. (See also Standard 8.08, Debriefing.)
posted by cgk at 8:59 PM on November 6, 2011 [3 favorites]


I'd love to see what the bot's Facebook profile looks like. I am assuming there are lots of photos that feature bikinis.
posted by KokuRyu at 8:59 PM on November 6, 2011 [2 favorites]


Metafilter: Home About FAQ Archives Tags Popular Random

Sorry about that. I should know better than to let an untrained bot loose here.
posted by twoleftfeet at 9:07 PM on November 6, 2011 [3 favorites]


I would suppose that the anonymization and deletion of the data helped them pass the ethics board? If it's impossible to connect the data with a real person, that seems reasonably ok to me.
posted by kavasa at 9:09 PM on November 6, 2011 [2 favorites]


I recently accepted a stranger's friend request because I saw the stranger had 12 or so friends in common with me - I figured I just didn't remember her or something.

Then I saw her wall - filled with random spammy stuff - and noticed the 12 friends she had in common with me were all over the map, as in a few from high school, a few from college, a few across the country this way, a few that way. I then deleted her as a friend.

I think maybe a few people get fooled, then it snowballs into "oh hey this person is friends with my friends, I guess I know her."
posted by 3FLryan at 9:10 PM on November 6, 2011 [1 favorite]


my first thought was that no US university would ever allow this kind of research

I seem to recall that Indiana University approved a phishing study a few years ago that was basically actual phishing, but "victims" were notified what they'd fallen for after the fact. But the main research question was something along the lines of, "How susceptible are people to phishing?"
posted by aaronetc at 9:13 PM on November 6, 2011


I think people do not view Facebook profiles as private information, so what is the harm in accepting a friend request from someone who might only be tangentially connected to you? Someone has a few connections in common? Might as well accept.
posted by Ad hominem at 9:13 PM on November 6, 2011 [1 favorite]


They need to quantify how many pre-existing Facebook profiles are already bots, e.g. created by software like this. I would assume other bot accounts try to accept as many friend requests as possible to look legit.
posted by benzenedream at 9:15 PM on November 6, 2011


I'm friends with Haskell Wexler and John Sayles and yet don't know them irl. (OK, met them but that hardly counts.) I think most people friend anyone with a reasonable number of mutual friends, and of course, anyone who's friends with Skip E. Lowe.
posted by Ideefixe at 9:17 PM on November 6, 2011


Incidentally, I've never had a facebook account, but was down in the ghetto grocery store below my apt. yesterday and they had a handwritten sign that said 'Friend Us On Facebook!'.

Who the fuck becomes friends with their local cheap market?
posted by mannequito at 9:19 PM on November 6, 2011


I have over 10,000 Facebook friends. But about 800 of those are really more like acquaintances. 600 or so didn't send me an Xmas card last year, so the total is probably closer to 9,400.
posted by twoleftfeet at 9:30 PM on November 6, 2011 [3 favorites]


Who the fuck becomes friends with their local cheap market?

I dunno; maybe they announce specials? Job openings?
posted by dhartung at 9:32 PM on November 6, 2011 [1 favorite]


Summary of the results of this study: people are dumb.

I don't still associate with people who have admitted that they're friend-whoring on facebook; it hasn't even been an issue for the last.. few... five? years? I will admit that I don't use Facebook at all and have been contemplating asking for my profile to be deleted, but is friend-whoring still so rampant and indiscriminate? I recently got an email that a complicated-ex/crush tried to friend me on FB but that wasn't enough for me to log on again.
posted by porpoise at 9:33 PM on November 6, 2011 [2 favorites]


People are stoopid.



.
posted by zombieApoc at 9:35 PM on November 6, 2011 [2 favorites]


There's a wide range of folks on Facebook, ranging from horny middle-aged guys (like me), to kids who lack basic common sense (my son), to aging boomers unfamiliar with this new-fangled technology (my parents). You can't just dismiss these user groups as stoopid (well, maybe horny middle-aged guys).
posted by KokuRyu at 9:44 PM on November 6, 2011 [2 favorites]


"The triadic closure, interestingly, also operated from the users side; the socialbots received a total of 331 friendship requests from their extended neighborhoods."

So... they became popular enough to have people come to them.
posted by CrystalDave at 9:45 PM on November 6, 2011 [2 favorites]


I saw this paper earlier this week, and my first thought was that no US university would ever allow this kind of research.

It is my understanding that a lot of Sociology, CS, and/or InfoSci departments already have large amounts of quasi-anonymous data from Facebook, Twitter, and other social networks. These companies are pretty much happy enough to have academics do free research for them after all. The datasets might have names stripped out, but experience has taught us that anonymized data is rarely anonymous at all (see also the AOL Search debacle). If we can de-anonymize search query streams or movie reviews, we can sure as hell de-anonymize social network data with relative ease. Former students who wind up working at these companies tend to send datasets back to their old professors on a fairly regular basis. How else do papers like, say, Center of Attention: How Facebook Users Allocate Attention across Friends (PDF) get their Facebook data, or Redrawing the Map of Great Britain from a Network of Human Interactions get anonymous call data for 12 billion phone calls in Great Britain?

The socialbots are interesting to find out more about how people will friend strangers, but there isn't very much need to crawl Facebook like this when academics already have the data in their hands. University IRBs don't tend to get involved when you're not collecting any data yourself.

And while that kind of data is at least not publicly available, there's really no shortage of large datasets for free download (including a huge Epinions dataset).
posted by zachlipton at 9:56 PM on November 6, 2011 [3 favorites]


"The triadic closure, interestingly, also operated from the users side; the socialbots received a total of 331 friendship requests from their extended neighborhoods."

Well, the first thing we have to ask is how many of those requests came from other socialbots. Even so, that's a fascinating result. For those not familiar, triadic closure is the term for a powerful force in social networks: if A-B share a strong tie and A-C share a strong tie, B-C are likely to form at least a weak tie. The extent of triadic closure varies depending on the dataset (and this can be interesting to study from a sociological perspective), but the concept holds true in a wide array of cases.
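For the curious, the closure rule can be demonstrated in a few lines; this is an illustrative sketch, not code from the paper:

```python
from collections import defaultdict
from itertools import combinations

def open_triads(edges):
    """Return (b, c) pairs that share a mutual friend but are not yet
    connected -- the pairs triadic closure predicts will form a tie."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    candidates = set()
    for a in list(adj):
        for b, c in combinations(sorted(adj[a]), 2):
            if c not in adj[b]:
                candidates.add((b, c))
    return candidates

# A knows B and C, so triadic closure predicts a B-C tie:
print(open_triads([("A", "B"), ("A", "C")]))
```

This is the mechanism the socialbots exploited: once a bot was wired into a neighborhood, it sat at the apex of many open triads, and some users closed them unprompted.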
posted by zachlipton at 10:01 PM on November 6, 2011 [2 favorites]


if A-B share a strong tie and A-C share a strong tie, B-C are likely to form at least a weak tie.

But it's only a weak tie. That's why I've been pitching my new idea for an anti-social network, to be called DefaceBook. Instead of Friends, you indicate your Enemies. Remember, a friend of my enemy is my enemy. The network effect will be tremendous.

MeFi mail me with venture capital.
posted by twoleftfeet at 10:06 PM on November 6, 2011 [16 favorites]


I'm sure most of the random friend requests people have been seeing were spammers, rather than actual researchers.

Also, they could have gotten a lot more data by writing a crappy FB game and requesting a ton of permissions for it to work.
posted by delmoi at 10:13 PM on November 6, 2011 [3 favorites]


I think that the difference, zachlipton, is that unlike mining an anonymous dataset, they were involving human subjects in the research by instigating and monitoring their behavior through deception. The authors of the paper mention that they took steps to minimize risk, but their footnote 7 is a reference to their own conference proceedings. I have seen students come up with much more creative ways to try to convince the board that everything is really harmless.

I think that as man-machine interaction blurs this will raise big issues for research ethics. I serve on an IRB and to me this looks like a CS department interloping into social science research without an appreciation of basic field-dependent ethical principles. I would deny this because there is no informed consent, no disclosure of participation, and no ability for a participant to retract their data from the study after learning of the deception. The line was crossed from passive observation to systematic large-scale deception, and their lack of understanding of that would cause me to reject it with contempt. (I can't really do that, but I would be thinking it.)
posted by cgk at 10:20 PM on November 6, 2011


250 gigabytes of personal information

This means nothing. Does this mean 250 gigabytes of photos that most of their owners don't consider particularly personal? When you think of someone having your "personal information" does it really include things like this picture I took at a surfing contest? That picture is posted to my Facebook page and only visible to my friends, but it's hardly something I'd call "personal information".
posted by tylerkaraszewski at 10:31 PM on November 6, 2011


I recently accepted a stranger's friend request because I saw the stranger had 12 or so friends in common with me - I figured I just didn't remember her or something.

This is the problem with Google+. If you force people to use an arbitrary standard of identity, you actually increase the likelihood of confusion.
posted by ChurchHatesTucker at 10:37 PM on November 6, 2011


Yeah, we have trouble getting simple surveys through our IRB sometimes. This study involves tricking people to be a subject in your research. They should at least message the people after the study was over and ask for post-hoc consent.

Also, they ripped photos off of hotornot.com to put in the fake profiles for the bots. That is certainly problematic.
posted by demiurge at 10:40 PM on November 6, 2011 [1 favorite]


BBC reports,

But he questioned how ethical such research was.

"Facebook's security team is unlikely to look kindly on people who conduct experiments such as that done by the university researchers, and users are reminded that under Facebook's terms of service you are not allowed to create fake profiles, should use your real name, and should only collect information from other users with their consent," he said.

posted by infini at 10:46 PM on November 6, 2011


For those who don't feel like digging it out, the interesting bit:
We kept the SbN running for another 6 weeks. During this time, the socialbots added 3,517 more user profiles from their extended neighborhoods, out of which 2,079 profiles were successfully infiltrated. This resulted in an average acceptance rate of 59.1%, which, interestingly, depends on how many mutual friends the socialbots had with the infiltrated users, and can increase up to 80% ...
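The rate quoted is just accepted over requested; a quick check of the arithmetic:

```python
def acceptance_rate(accepted, requested):
    """Fraction of friend requests that were accepted."""
    return accepted / requested

# Figures quoted above, from the SbN's second phase:
print(f"{acceptance_rate(2079, 3517):.1%}")  # → 59.1%
```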
posted by Tell Me No Lies at 10:52 PM on November 6, 2011 [1 favorite]


only add people on facebook that you have actually met, and if you meet them every day, such as work buddies or drinking buddies, then there is no reason to friend them on facebook.
posted by kanemano at 11:15 PM on November 6, 2011


infini: "You collected reams of data from our users. But that doesn't matter, what matters is that Facebook didn't get a cut. Facebook gets very upset when people start thinking they can data mine on Facebook's turf without giving Facebook its cut, capiche?"
posted by Grimgrin at 11:25 PM on November 6, 2011 [2 favorites]


Oh no, strangers are scary! If I add them as a friend they might get access to not my social security number, not my credit card numbers, not my phone number, not my address, but a list of quotes I like and a bunch of statuses complaining about my cubicle job.
posted by drjimmy11 at 11:55 PM on November 6, 2011 [3 favorites]


Companies doing this for profit are also getting marked-as-private data via facebook apps.
posted by finite at 11:57 PM on November 6, 2011


I like it. Facebook should use their own bots as a warning to people.
This is the Facebook security team. "Tootsie McTuringtest" is not a real person and certainly not your real friend. You have just been fooled into friending a robot account from Facebook. We use such accounts to make public service announcements and demonstrate security vulnerabilities.

If this had been a real stranger's account or a robot account created by a stranger, that stranger would now have access to the following information: your email address, your home address, your telephone number, personal pictures of your friends and family, [...]
Then give that user some sort of consolation prize for being such a prize dork.
posted by pracowity at 12:48 AM on November 7, 2011 [5 favorites]


I can't imagine something facebook would be less likely to do, pracowity, but I agree that in terms of privacy awareness that would be an awesome thing to do.
posted by litleozy at 12:58 AM on November 7, 2011


"Oh no, strangers are scary! If I add them as a friend they might get access to not my social security number, not my credit card numbers, not my phone number, not my address, but a list of quotes I like and a bunch of statuses complaining about my cubicle job."
Well, if you're anything like many internet users, those complaints about your cubicle job and favorite quotes of yours are identifiers when combined with your physical description, hometown, high school, birthday, first couple of jobs, maybe your dog's name, and some likes/dislikes.

Because now I have a profile of you, you see. So I double check that you don't have your chat name, or a phone number, or your address, or that you share a common last name with any of your friends (aka family).

With a bit of hacking, I can connect these common information tidbits to other accounts. Maybe you complained about your job on this site? Maybe on others. Maybe those favorite quotes, in combination with your birthday, appeared on another site. Maybe the easiest thing would be to get your e-mail from your Facebook account, and then use what I know about you to figure out possible passwords (like what most people use).

Maybe I can connect your birthday, to your name or your father's Auntie Kim (because I'm mining family research sites about you too), to that old job at Kroger's, to your e-mail, and get some credit card information off of a favorite shopping site. It could be Amazon. Or a porn site. Or a site you don't even remember making a purchase on.

So when people warn you about giving your information away... it's no small thing. I didn't even paraphrase it that well. I probably made it sound way more complicated than it actually is, but that's just so you can see all of the connections someone can make.

Consider how many people use similar usernames. The same passwords. The same profile pictures (thank you Google image search?). Mind you, I like all of these websites, and ease of access, but I fully understand that it's scary as shit.
posted by DisreputableDog at 1:31 AM on November 7, 2011 [4 favorites]


I think people do not view Facebook profiles as private information...

Facebook knows more about you than you've told it. Are you sure they haven't harvested something you'd consider private? Purchasing histories? Browsing habits? Maybe some medical information?
posted by DU at 1:35 AM on November 7, 2011


I made the mistake of using the Facebook default security settings and found myself waking up in a bathtub full of ice with a strange ache in my side and a burning sensation in my rectum. Yes, that's right, Facebook harvested one of my kidneys and probed me rectally.
posted by pracowity at 1:43 AM on November 7, 2011 [1 favorite]


Be thankful. They could have probed your kidneys and harvested your only rectum.
posted by hat_eater at 2:01 AM on November 7, 2011 [1 favorite]


I knew it was a socialbot because she was cute and liked me and wanted to show me her pictures.

Why must such a thing be "too good to be true"? WHY?!
posted by -harlequin- at 2:49 AM on November 7, 2011 [1 favorite]


I imagine that there is a "huge pile of data" issue here: if enough people are willing to friend anyone, and some subset of that group of indiscriminate frienders is willing to post useful personal information, then you will eventually be able to harvest the kind of information you are looking for.

The answer, to deal with researchers, facebook, and advertisers, is to lie about everything on your profile all the time. That will teach them to collect data!
posted by GenjiandProust at 2:57 AM on November 7, 2011


I've had a lot more weird applications for "friendship" or whatever on LinkedIn. Lots of wholly random people.
posted by miss tea at 3:47 AM on November 7, 2011 [1 favorite]


I'm way more skeeved by what google could figure out about me than by what I willingly share on Facebook.
posted by sevenyearlurk at 4:33 AM on November 7, 2011 [2 favorites]


Great. Now I know I am so unpopular I don't even get friend requests from bots.
posted by srboisvert at 4:51 AM on November 7, 2011 [1 favorite]


The researchers said they got the approval of UBC’s behavioural research ethics board.
Surely this is against Facebook's terms of service, isn't it? Not representing a real person? Maybe I'm confusing it with Twitter or something else, but can't you "report" incoming friend requests as that? For example, obviously fake porn spam accounts?

Do ethics boards not care about violating the terms of service of products that they use? In case it's not clear, I'm not snarking; I'm genuinely asking.
posted by Flunkie at 4:55 AM on November 7, 2011 [1 favorite]


This is how the singularity begins.

Skynet isn't smart enough, yet, to make friendly conversation. So it will get its friends to do it.
posted by LogicalDash at 4:56 AM on November 7, 2011


We need more SockBots roaming around here.
posted by pracowity at 5:18 AM on November 7, 2011


I just went to the store and gave my credit card information to an 18-year old that I have never met before. I do this a few times a day.
posted by solmyjuice at 7:39 AM on November 7, 2011 [3 favorites]


Privacy doesn't mean what people think it means anymore. Just like how something is not secret once you tell three people, something is not private once you tell Facebook. Something is also not private if it is discovered by Facebook, or Google, or Klout, or... We keep pretending like we have privacy in the age of databases, sticking our fingers in the leaks growing in the dam, but it's futile.

The fascinating thing about Facebook is it's proven that the veneer of privacy is enough. Facebook feels private when you're there; you only see stuff from your friends (and socialbots). And so people are comfortable posting things they'd otherwise think were private, like pictures of their kids. But the reality is your "private" data on Facebook can easily leak out, either by malicious intent or commercial transaction or well-meaning accident.

It's going to take two generations until our society catches up to the new reality of privacy in the informatics age.
posted by Nelson at 7:42 AM on November 7, 2011


Tons of researchers in this thread pontificating about how this wouldn't pass their ethics-board musters... which means this study reveals data that is completely unknowable by their standards. Not saying that it is moral, or not; just pointing out that you have severely handicapped your research in ways that seem a bit... unnecessarily extreme to me, at least in this instance.

--

The other data point (openly godwinning, except that I am not using it to shut down debate, so it isn't a godwin) is the Nazi records of torture on POWs, which provided NASA (unintentionally, of course) with highly useful information on the limits of human endurance in planning space expeditions.

Or the highly realistic medical anatomies replicated in medical textbooks worldwide, provided by Nazi doctors, which can still not be replicated today in equivalent detail... since the subjects were alive during the original drawings.

So, good byproducts can come from evil, which is not the same as good byproducts making the evil justifiable.

--

In this case, however, I don't think anyone is arguing the study was evil, so much as "not meeting the guidelines we agreed to in my group/school/professional organization."

But in my mind, this is comparable to someone discovering a dangerous security fault in software*, and publicizing it so that corrections can be made as quickly as possible. (*Technically, they're discovering a dangerous security fault in meatware.) Hacking a system, for the express purpose of strengthening the overall benefit of the system.
posted by IAmBroom at 8:03 AM on November 7, 2011


only add people on facebook that you have actually met, and if you meet them every day, such as work buddies or drinking buddies, then there is no reason for me to friend them on facebook, but of course others may use Facebook differently than I do.

FTFY, kanemano.
posted by IAmBroom at 8:04 AM on November 7, 2011 [1 favorite]


IAmBroom, there are lots of kinds of research that are effectively off-limits due to ethical guidelines. For example, my colleagues tell me that in behavioral economics, no kind of deception is allowed at all, and journals will reject any submission that uses deception. This actually caused some issues with collaborative work, because one potential manipulation could have been to tell a mild lie (e.g. "you're below average in this regard") to see if that could have incentivized people to do better.

There is also good precedent for holding research to a high standard. Just the other week, I was discussing in my class why IRBs were founded in the first place, essentially as a reaction to the horrific research done by the Nazis, which actually did have some potential for scientific value, as well as some really awful studies done in the United States (see, for example, the Tuskegee syphilis study).

Now, there is clearly a wide gap between the studies I listed above and the work done by SocialBot. However, it still seems somewhat questionable to me in terms of benefits to participants, what kinds of protections were in place (which do seem reasonable), how the data was collected, whether deception was the only way to get useful scientific data, and the proportional benefit in using deception to harm that may be caused. There is also good cause to be concerned too, given very serious past concerns about social science research, privacy, and Facebook.

As an aside, there is also a recent push by social science researchers to revise the IRB requirements, which have been fairly burdensome to do fairly trivial things. For example, in my field of human-computer interaction, we have to get an IRB to test the design of a user interface, which has extremely low risk. In other disciplines, a student might need to get an IRB to do an oral history with a relative. These rules may be changed in the near future, offering separate tiers of research standards for different kinds of research based on risk. For example, why put user studies of user interfaces in the same category as studies done by medical doctors?
posted by jasonhong at 8:21 AM on November 7, 2011 [1 favorite]


As a result of reading this study, I went to Facebook to try to weed out some contacts (generally people I have no interaction with - I did a big purge several years ago and have friended few new people since) and it's striking just how difficult and time-consuming it is to defriend people on Facebook.
posted by KokuRyu at 8:50 AM on November 7, 2011 [2 favorites]


Are you sure they haven't harvested something you'd consider private? Purchasing histories? Browsing habits? Maybe some medical information?

So when I'm looking at non-adblocked Facebook at work and see ads for gout treatments, does that mean Facebook knows something my doctor hasn't told me?
posted by marxchivist at 9:43 AM on November 7, 2011


Wow, for once an argument did not turn to an analogy for Nazis, but is actually about the exact same practices. Involuntary participation in experimentation, plain and simple. This is something that the HCI people (I did that in grad school too BTW) might not get.

1. Using deceptive practices, the researchers contacted and deceived non-consenting persons into participation in their study, including providing access to personal information that would not otherwise be publicly available.

2. Having practiced this deception, private information from additional third parties was collected by the researchers without the informed consent of the "friends" of the deceived research subjects.

3. Use of a "bot" in this research obscures the fact that it was a team of human researchers that were using a false identity to negotiate access to private information. They were flat out lying to their human subjects. The human researchers did not disclose their intentions either before, during, or after their data collection.

4. This is not a computer security issue, again that is obfuscation. These are human researchers with names and faces using computer mediated means to deceive unwilling participants.

5. Subpart D of 45cfr46 expressly limits the participation of children as a special protected population.
(a) Children are persons who have not attained the legal age for consent to treatments or procedures involved in the research, under the applicable law of the jurisdiction in which the research will be conducted.

So let me put that last one into plain English: a team of adult human researchers may have deceptively contacted my 13 year old daughter using false identities and gained access to her otherwise non-public information -- including photographs as well as information provided in a non-public manner by her "friends" who are also defined as children in most jurisdictions. "Adult researchers perform non-consensual data collection of private information on minors" reads the headline.

In non-technical language, it is "crap" like this that makes it more difficult for legitimate research (HCI or otherwise) to be conducted, but it serves as evidence that there is a need for conscientious monitoring of such work and vigilance by an informed IRB. I don't mean to derail here, but learning that People Are Stupid and Gullible (lol) is no excuse for bad methods. There are a lot of people who need a lot more preparation so they can learn to do legitimate human subjects research.
posted by cgk at 10:25 AM on November 7, 2011 [1 favorite]


cgk: a large exception to privacy intrusion is how possible it is. The fact that people already do it is a huge exception to the "is it ethical to do" rules.

I see it as similar to walking through a parking lot and writing down what people have in their cars. Cars provide a veneer of privacy, as previously noted, but seriously anyone could walk through and see what you have, intentionally or unintentionally. Just because people don't want anyone to look into their car through the window and see what's inside doesn't mean that there's any actual security preventing them from doing so, social norms notwithstanding.

Also, any time someone friends someone on facebook, they give consent for that person to view their profile. There is no guarantee that the person friended isn't going to scrape their data or isn't a researcher. It's merely a veneer of security.

Finally, I find it rather preposterous that people would complain about bots stealing profile information when, in truth, any time anyone uses an "app" on facebook, that app scrapes the entire profile of everyone on that person's friends list for personal information. Complaining about the bots is like complaining about the ice cubes on the deck of the Titanic.
posted by rebent at 10:49 AM on November 7, 2011


it's striking just how difficult and time-consuming it is to defriend people on Facebook

Wait, wait, wait. I just "unfriended" two people with 6 total clicks (click to unfriend, click to confirm, click "okay" on confirmation screen).

It's SIMPLE to unfriend people. That seems like a red herring here.

I, too, am curious about the Terms of Service violations. Is that a factor in ethical research?
posted by mrgrimm at 1:01 PM on November 7, 2011


If I add them as a friend they might get access to not my social security number...

On the contrary: social security numbers can be predicted fairly accurately using data from Facebook and other social networking sites. The social networking data can be triangulated with other publicly available information to get social security numbers.

See Acquisti, A., & Gross, R. (2009). Predicting Social Security numbers from public data. Proceedings of the National Academy of Sciences, 106(27), 10975 -10980. doi:10.1073/pnas.0904891106. [open access]
posted by k8lin at 1:57 PM on November 7, 2011 [1 favorite]


> it's striking just how difficult and time-consuming it is to defriend people on Facebook

Wait, wait, wait. I just "unfriended" two people with 6 total clicks (click to unfriend, click to confirm, click "okay" on confirmation screen).

It's SIMPLE to unfriend people. That seems like a red herring here.

Oh, geez, I stand corrected.

Seriously, though, I think you're wrong. I want to unfriend someone with one click - three clicks is too many, and it takes several seconds to do so.

Of course, I just want to go through and defriend multiple FB friends in the same session, so maybe the 3-clicks metric works in individual cases.
posted by KokuRyu at 5:51 PM on November 7, 2011




This thread has been archived and is closed to new comments