RE: CISPA vs. SOPA post

As CISPA moves to the House floor, the Center for Democracy and Technology (CDT), a major internet freedom advocacy group, has withdrawn its opposition to the bill.

CDT issued the following statement on CISPA:

In sum, good progress has been made. The Committee listened to our concerns and has made important privacy improvements and we applaud the Committee for doing so. However, the bill falls short because of the remaining concerns – the flow of internet data directly to the NSA and the use of information for purposes unrelated to cybersecurity. We support amendments to address these concerns. Recognizing the importance of the cybersecurity issue, in deference to the good faith efforts made by Chairman Rogers and Ranking Member Ruppersberger, and on the understanding that amendments will be considered by the House to address our concerns, we will not oppose the process moving forward in the House. We will focus on the amendments and subsequently on the Senate.

CISPA vs. SOPA: a summary and analysis of issues in online privacy legislation

This year witnessed SOPA/PIPA, and now we have CISPA, the Cyber Intelligence Sharing and Protection Act. The issue of privacy and information gathering online is a hot topic, not only for privacy activists but also for governments and corporations. The government feels it does not have enough access to online information to maintain national security. Corporations worry about the liability of policing the Internet and about limits on their ability to grow and profit. And citizens are split between being terrified that their privacy will be destroyed irreparably and not caring one way or the other.

The worries on all sides make it clear that some kind of bill is needed. I won't argue whether CISPA is good or bad; the arguments on both sides are fairly solid. But let's be clear: CISPA is not the new SOPA. Here is why.

What is CISPA?

CISPA, the Cyber Intelligence Sharing and Protection Act, was written by Rep. Mike Rogers (R-MI) and Rep. Dutch Ruppersberger (D-MD). The main objective of the bill is to facilitate the sharing of information between companies and the federal government in order to prevent cyber attacks. The Electronic Frontier Foundation (EFF) argues that the bill is written too broadly: it allows the monitoring of citizens' private communications and allows companies "to hand over large swaths of personal information to the government with no judicial oversight," which would undermine existing privacy laws.

CISPA vs. SOPA

The issues that the EFF and other privacy advocates have with CISPA are similar to those with SOPA: breach of privacy, allocation of authority, lack of oversight, vague language open to abuse, and so on. However, the campaign against CISPA will be very different from the one against SOPA and is less likely to succeed. The biggest reason is that the major companies that acted against SOPA actually support CISPA.

There are, in fact, good reasons for this. A major concern for companies (and, while not pertinent to this bill, for institutions like colleges and universities) is liability. To avoid the risk of lawsuits, companies and institutions must spend large amounts of money monitoring social media activity within their networks. SOPA put the responsibility of policing the Internet and monitoring 'dangerous' activity in the hands of the companies. For example, this meant that if Facebook failed to recognize a threat, it would be at fault.

CISPA not only removes this duty from companies' purview but also shields them from culpability should an incident occur. This is fantastic news for these companies, and they are unlikely to risk jeopardizing it.

What does CISPA mean for online privacy?

It's difficult to say what the risk would be if CISPA were to pass. The EFF argues that the bill's text is too vague; however, after reading it, I found the bill to be fairly explicit. Mark Burnett, a security consultant with xato.net, argues that perhaps the language in CISPA isn't as bad as the EFF and other privacy advocates think. He directly addresses several claims made by the EFF.

The EFF states that CISPA will give "companies a free pass to monitor and collect communications, including huge amounts of personal data like your text messages and emails, and share that data with the government and anyone else." The bill, however, expressly states that for cybersecurity purposes an organization may "identify and obtain information about threats to their own rights and property," writes Burnett.

(A) CYBERSECURITY PROVIDERS- Notwithstanding any other provision of law, a cybersecurity provider, with the express consent of a protected entity for which such cybersecurity provider is providing goods or services for cybersecurity purposes, may, for cybersecurity purposes
(i) use cybersecurity systems to identify and obtain cyber threat information to protect the rights and property of such protected entity; and
(ii) share such cyber threat information with any other entity designated by such protected entity, including, if specifically designated, the Federal Government. (HR 3523 RH)

Burnett continues by addressing the EFF's fear that the term 'cybersecurity threat' is too vaguely defined. "Worst of all," the EFF continues in its statement, "the stated definition of 'cybersecurity' is so broad, it leaves the door open to censor any speech that a company believes would 'degrade the network.'" Burnett argues that the bill in fact defines the term clearly.

(4) CYBERSECURITY PURPOSE- The term 'cybersecurity purpose' means the purpose of ensuring the integrity, confidentiality, or availability of, or safeguarding, a system or network, including protecting a system or network from–
(A) efforts to degrade, disrupt, or destroy such system or network; or
(B) theft or misappropriation of private or government information, intellectual property, or personally identifiable information. (HR 3523 RH)

This definition of 'cybersecurity purpose' seems fairly explicit and not nearly as open to loopholes as the EFF worries. Burnett argues, and I agree, that it would be a big leap for a company to claim that speech would 'degrade the network.' Burnett also addresses other issues raised by the EFF regarding intellectual property, monitoring and censoring, and civil and criminal immunity.

It comes down to balancing consumer privacy, commercial rights, and national security. Communication in the US and around the world is increasingly online, which has led to an increased risk of cyber attacks. According to Representative Greg Walden (R-OR), in his CommLaw Conspectus article "Rethinking Communications Law in a Converged, 21st Century Marketplace," the commercial sector has taken steps to protect the nation's networks, and any legislation in this area should "seek to capitalize on commercial sector expertise and existing cybersecurity organizations and infrastructure." Walden puts forward several key questions to ask when considering bills like CISPA: "What has been the role of federal agencies in securing cyberspace? In what ways can federal agencies better partner with private enterprise to improve the cybersecurity defenses of our communications networks?"

He also addresses the issue of privacy. Most US privacy laws were written when electronic communications were still coming into existence. "As consumers are using increasingly diverse means to communicate, the divergent protections for consumer privacy have become more and more apparent," he writes. Walden accurately identifies the problem that American consumer privacy laws are inadequate in a converging market and that the protections in place depend on many variables: the means used to communicate, the carrier, the device, the application on the device, and so on. However, he fails to address the issue of privacy protections from the government.

This whole issue falls into a vast gray area. There is no baseline for legislators, companies, or citizens to build from. In February, the White House unveiled a plan for an online privacy bill of rights.

The Consumer Privacy Bill of Rights provides a baseline of clear protections for consumers and greater certainty for businesses. The rights are:

  • Individual Control: Consumers have a right to exercise control over what personal data organizations collect from them and how they use it.
  • Transparency: Consumers have a right to easily understandable information about privacy and security practices.
  • Respect for Context: Consumers have a right to expect that organizations will collect, use, and disclose personal data in ways that are consistent with the context in which consumers provide the data.
  • Security: Consumers have a right to secure and responsible handling of personal data.
  • Access and Accuracy: Consumers have a right to access and correct personal data in usable formats, in a manner that is appropriate to the sensitivity of the data and the risk of adverse consequences to consumers if the data are inaccurate.
  • Focused Collection: Consumers have a right to reasonable limits on the personal data that companies collect and retain.
  • Accountability: Consumers have a right to have personal data handled by companies with appropriate measures in place to assure they adhere to the Consumer Privacy Bill of Rights.

Office of the Press Secretary

To focus the discussion that has led to bills like SOPA and CISPA, a standard needs to be established that protects citizens' privacy rights and brings policy into the digital age. The proposed privacy bill of rights would accomplish this, rooting the discussion of government acquisition of information, and of authority in the digital realm, in American law.

Walden, Greg. "Rethinking Communications Law in a Converged, 21st Century Marketplace." 20 CommLaw Conspectus i (2011-2012). Web. Accessed 20 Apr. 2012.

The Nuances of Privacy

My previous posts have examined specific actions that the US government and corporations have taken regarding citizens' privacy and the use of their information. The Internet has given these two bodies unprecedented access to our personal information and the tools to collect, store, and analyze it. However, this is not a black-and-white issue, and labeling citizens as simply victims oversimplifies it.

We often focus solely on how government and businesses are tracking us. I'd like to step away from that for a moment to examine some of the complexities involved in privacy and surveillance. We keep digital records of ourselves day in and day out: search histories, photo albums on Flickr, Photobucket, Picasa, and Facebook. Instant message programs like Skype log our chats and store them for our later perusal.

Lee Humphreys, in the paper "Who's Watching Whom? A Study of Interactive Technology and Surveillance," describes Oscar Gandy's theory on the relationship between surveillance and technology. Gandy draws from Jeremy Bentham's idea of the Panopticon.

The Panopticon is a conceptual building designed by Jeremy Bentham in 1791. It would confine each individual to a small cell, but every cell would be viewable from one central room, allowing for absolute control and surveillance. Bentham hoped the design could be used for factories, schools, barracks, hospitals, madhouses, and jails.

Gandy posits that the growth of databases incentivizes a culture of surveillance and monitoring. He claims that information technology "facilitates the surveillance by an unseen corporate and bureaucratic observer." The information gathered is then used to define and control access to goods and services, which is implicit in a modern capitalist economy. Gandy made this claim in 1993, long before Facebook, Amazon, Google, and the many other companies and technologies that now record our online activities. He argues that with this information the population can be categorized based on inferred economic and political value.

To better understand privacy, we must realize that it is not a simple issue. Humphreys defines four forms of surveillance but places only three in the list below. The first form is the general concept of a non-transparent body, like a state or corporation, monitoring the populace with impunity.

I. Voluntary Surveillance: This occurs when people consent to being monitored. It is exemplified by consumer society's willing participation in the monitoring of its own consumption habits.
II. Lateral Surveillance: This is the nontransparent monitoring of citizens by each other. This occurs every time you decide to ‘stalk’ someone on Facebook.
III. Self-surveillance: This is the act of recording your own data so that it can be replayed or viewed later. Self-surveillance allows us to reexamine events that took place in the past and replace our interpretation of them with a new one that is informed by the second viewing. This means that the participants can examine their own behaviors in a way that was not possible before.

The issue of privacy is very often oversimplified in common dialogue. Luckily for everyone, scholars exist to examine all the little nuances that are ignored in our daily routine. Humphreys performed a yearlong study of Dodgeball, a precursor to Foursquare and Google Latitude. This case study is a great example of the three forms of surveillance and of how the average user generally does not realize how skewed their idea of privacy is. Dodgeball was a Google-owned service that recorded and distributed users' location-based information.

Humphreys found that users were, in general, not concerned with their privacy because they "felt they had control over their information and to whom it was sent and because they were experienced and savvy internet users." During the interview process, some users said they were unconcerned about their privacy because they could control when they shared their location and with whom. This ignored the fact that Google and the Dodgeball service were monitoring them, making their understanding of their privacy naively narrow. For the most part, Humphreys found that users' information did not go further than their selected friends. However, users had no control over the dissemination of their information to corporate entities.

But what could corporations possibly do with our feed of location check-ins? This information can be linked with other databases, giving marketers and corporations a more holistic view of the consumer. This is an example of voluntary surveillance.
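To make the data-linking idea concrete, here is a minimal, hypothetical sketch of how a marketer might join check-in records with a separate purchase dataset on a shared user identifier. The field names and records are invented for illustration; they are not drawn from Dodgeball or any real service.

```python
from collections import defaultdict

# Invented check-in records from a location-sharing service (illustrative only)
checkins = [
    {"user": "irwin", "venue": "coffee_shop", "time": "2012-04-20T09:15"},
    {"user": "irwin", "venue": "gym", "time": "2012-04-20T18:30"},
]

# Invented loyalty-card purchase records held by a marketer (illustrative only)
purchases = [
    {"user": "irwin", "item": "espresso", "time": "2012-04-20T09:20"},
]

# Link the two datasets on the shared user identifier to build a richer profile
profiles = defaultdict(lambda: {"venues": set(), "items": set()})
for record in checkins:
    profiles[record["user"]]["venues"].add(record["venue"])
for record in purchases:
    profiles[record["user"]]["items"].add(record["item"])

print(dict(profiles))
# {'irwin': {'venues': {'coffee_shop', 'gym'}, 'items': {'espresso'}}}
```

The point is simply that two datasets that look innocuous on their own become far more revealing once they share a common key.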

The Dodgeball case study also examined lateral surveillance. In order to function, the service required lateral monitoring. The issues this raises were more apparent to users. Some users never posted their own location and merely eavesdropped on other users' locations. Humphreys points out that lateral surveillance could also enable stalking, but found that, again, users were unconcerned. Here is the response from one interviewee:

I just figure there are some people who are just more into checking in all the time and some people are lazier. So if you have someone who’s lazy and never checks in, are they surveying me? Not really, I’m willingly checking in. There’s no surveillance aspect to it at all because it requires that a user input a check in. There is no kind of surveillance. I never feel like it’s a stalking or whatever. You know, and in my circle, it’s at least 50% female. Although you know that’s probably not reflective of Dodgeball in general because I’m sure girls are afraid of the stalking aspect. You know, when you sign up for Dodgeball they explain all the ground rules. They do go way out of their way to try to calm people’s concerns. They go, ‘‘Look, we’ve prevented all these ways so that no one can really stalk you.’’ And you know that’s good enough for me. But then again I’ve never really been stalked so I can’t say it’s high on my priority of what I’m afraid of. I’d probably be more complimented. ‘‘Oh you’re stalking me? That’s so cute!’’
(Irwin, Los Angeles)

Finally, we've arrived at self-surveillance. Dodgeball provided a recording service so users could go back, look at their behavior, and see patterns. "The self-surveillance that Dodgeball facilitated allowed users like Irwin to see where he had been in ways that were previously much more difficult to do. Not only did Dodgeball keep an itemized list of his social outings, but also it created a visual representation of outings on the Google Map. By combining his map with the maps of his friends, he could visually compare and contrast social outings," writes Humphreys. This allows users to develop a connection between the data and their physical reality.

Perhaps the specificity of this study limits how broadly it applies. However, it was clear from the study that mobile social networks encourage all three forms of surveillance. It doesn't matter whether you use a service like Dodgeball or Foursquare; allowing Twitter or Facebook to track your geographic location on your phone creates the same environment.

I believe that the lack of concern Humphreys found in the study reflects the unsophisticated views many online users hold about privacy. To have a real discussion about privacy rights, the general public needs a more comprehensive understanding of the nature of their privacy and how it has been compromised.

Humphreys, Lee. "Who's Watching Whom? A Study of Interactive Technology and Surveillance." Journal of Communication 61.4 (Aug. 2011): 575.

 

I’m 12 years old (and a cyborg) wat is this?

During my (artificial) REM cycle sometime between 23 hundred hours and 7 hundred hours I processed some code. This code was similar to a lesson I had downloaded the previous morning. The code was CA 1-4-5-14. The code was analyzed in the 1800’s. The code once entered isolated a singularity, and protected the singularity from others.

The United States Constitution does not directly provide for privacy protection; however, since the 1800s, a right to privacy has been interpreted by justices from multiple constitutional amendments, most notably the First, Fourth, Fifth, and Fourteenth. The issue lies in the fact that the privacy right applied to data protection is statutory. Despite efforts from the U.S. Federal Trade Commission, there is still no direct legislation governing digital privacy.

I 'wake' up fully charged. I unplug myself from the wall and begin my morning reboot. I log into the morning activities of cleaning myself and processing the tasks for the day. I log onto my companion. We begin our interface. I check my vitals: my mail, my stories, my contacts, my pictures of cats. I enter my zone and inform others where I am and my intents for the day.

It has been said that the world of cyberspace is 'seductive and manipulative for children,' mostly because young people's capacity for self-regulation is limited. In the digital sphere there are many areas where children and students are targeted because of this lack of regulation. Many young people will share a multiplicity of private information through online registration, web profiles, internet-based quizzes, entry forms, and coupon downloads. All of these actions allow information to be collected about this younger generation.

Within five minutes three others ‘like’ what I have planned. I know that I need to download some carbohydrates into my processor soon, but I don’t want to leave. Today I have decided to start a new interface. My web profile is not enough for me to be connected… I am missing out. I need a fast status updating program to inform my followers of my life. They care. They listen. They’ll follow.

One issue with this overexposure of younger generations' personal lives is that the sheer volume of shared detail can bring negatives to light. Put reductively, this could hinder a child's future: a mistake in the past could become a very big issue later on.

Warning. Warning. Upload in progress, refresh for information. What is this? Images of me. I did not put these here. Tag. Upload. Tag. Upload. No… it is happening too fast. These images will ruin me. A night I cannot remember… did I delete the memory… Error Warning: Future in Jeopardy. Course of action, delete posts, take down images. Was I too late? Time to move. Miss me. Follow me. Love me.

In addition to the expansive array of information that younger generations put on the web, technologies have been developed to protect children or prevent harm. These technologies have the means to track people. There are positives and negatives to this kind of tracking, and more communities are now experimenting with it.

I transfer my avatar to the Dining Download center. Check in. DigiMe is at Dining Hall. I download my fill. ‘Totally full! I love pancakes!!’ Two minutes until three likes pop up. I go to my software downloading centers. Two new lessons today. ‘GUH homework… meet me in the Library.. it’s going down!’ DigiMe is at the Library. Buzz Buzz. Interface with DigiHer/Him. –SUP?- DigiMe@ Dorm getting it getting it. Interface. DigiMe+DigiHer/Him are in a relationship <3. DigiMe is at the movies with DigiHer/Him. Follow me. Find me. Know me.

A country like the United States has many groups set up to monitor this kind of techno-tracking.

Quis custodiet ipsos custodes? Instant translate: Who watches the watchmen? Similar searches: Alan Moore.

The American Civil Liberties Union and the Electronic Frontier Foundation state that this type of surveillance treats children like inventory. The safety and security of young adults has also been considered. An issue here is the Supreme Court's position that children cannot be deemed equal to adults, given their vulnerability, their inability to make critical decisions in an informed manner, and the importance of the parental role.

DigiMe download back to dorm room. Internet calls+texting+web profile chat+ digital downloaded television. ‘Busy night’ Comment/Like. Follow me. Like this like that. Know me. Know me. Now a member of DigiThem. Follow them. Like this. Buy me. Share. Share. Not enough. USB direct connection to the digital sanguine stream. Heart beats at 54Mbps. Download and consume.

The real issue lies in how deeply technology has entered homes and the workplace. Social problems like racism and sexism have grown so pervasively in the digital realm that they call into question the legitimacy of the information found on the internet. As digital technology becomes the norm in our society, we are faced with economic opportunities along with threats to our personal security.

Cat photos. Like. Beyonce music video. Watch. Advertisment Pop up: Beyonce with cats. Perfect. Click. Fill out survey… … … wait. Location. Social Security Number. Let us find you. Let us help you. Plug in. Jack in.  Follow. Consume. Find. Data. Computer used locate… Location identified. Please verify. You are the 10,000,000th visitorvictim. Process halt. CtrlAltDelete.

Within a digitally literate society there is a divide between what counts as information and what can be classified as knowledge. As accessible data grows in such a society, the younger generation gains the ability to consume and manipulate communication technologies. Engaging heavily in the digi-sphere is dangerous because even the most digitally literate are often still ignorant of the long-term effects of communicating and interfacing with the digital society.

I rip the USB out of my wrist. My focal screens go black. Reboot in progress. Reboot in progress. Open your eyes to start. I open my eyes. I look around as I hear birds chirping in the background. It is near dusk and the colors of the sunset pour in through my windows. I smell the cool spring air with a touch of daffodil on its wing. I see a glowing box on my wooden desk, it is all black besides the blue radiating from the screen. A message appears: Come back. We miss you. Interface once again. I look at the fading light. A picture of a cat with a female singer pops onto the screen with a message: they always come back. We know where you are. Not tonight I think to myself as I remove my clothes until I have on nothing but my underwear. With no wires in my veins I can finally run free. I sprint from my dorm into the woods. There is no service there, for now they cannot find me. 

Lawmakers and Facebook Act Against Employers Demanding Access to Facebook Profiles

According to the Associated Press, “it has become common for managers to review publically available Facebook profiles, Twitter accounts and other sites to learn more about job candidates.” Companies that don’t ask for passwords ask applicants “to friend human resource managers or to log into [Facebook] on a company computer during an interview. Once employed, some workers have been required to sign non-disparagement agreements to ban them from talking negatively about an employer on social media.”

While stories about companies, colleges, and government agencies asking job applicants for access to their Facebook profiles have grown in recent months, the issue gained public attention when Bob Sullivan, of MSNBC's Red Tape Chronicles blog, posted an article titled "Govt. agencies, colleges demand applicants' Facebook passwords." According to Sullivan, the Maryland Department of Corrections has been asking job applicants to log into their Facebook accounts and allow the interviewer to look over their shoulder as they click through their profile. This is nothing new; in the past, applicants have been asked to provide the department with their username and password. While these Facebook reviews are voluntary, most applicants agree to them on the assumption that they are necessary to do well in the interview.

However horrifying the FBI's development of social media monitoring applications may appear (a story I've written about previously), such monitoring applications already exist and are in use. Colleges, for example, have been taking advantage of social media monitoring companies, particularly to monitor college athletes. Sullivan noted this in his article, writing about instances where colleges require their athletes to friend "a coach or compliance officer… and [provide] access to their 'friends only' posts."

On Friday, Erin Egan, Facebook's Chief Privacy Officer, issued a statement regarding this problem:

“As a user, you shouldn’t be forced to share your private information and communications just to get a job. And as the friend of a user, you shouldn’t have to worry that your private information or communications will be revealed to someone you don’t know and didn’t intend to share with just because that user is looking for a job. That’s why we’ve made it a violation of Facebook’s Statement of Rights and Responsibilities to share or solicit a Facebook password.”

This approach, taken by both Facebook and some lawmakers, focuses less on the issue of privacy and more on liability. Egan provides an example as a warning about overstepping: "if an employer sees on Facebook that someone is a member of a protected group (e.g. over a certain age, etc.), that employer may open themselves up to claims of discrimination if they don't hire that person."

Sullivan added to Egan's point, writing that employers might not have the policies and training needed to protect their company when handling applicants' private information. "The employer may assume liability for the protection of the information they have seen or for knowing what responsibilities may arise based on different types of information (e.g. if the information suggests the commission of a crime)."

Hints about potential liability have already surfaced. One of the most recent warnings came on March 14 of this year, when a jury awarded $4 million to each of two families affected by the 2007 shooting at Virginia Tech, after the school was found negligent for failing to warn students. Bradley Shear, a D.C.-based lawyer who practices cyber and social media law, suggested that the monetary damages could have been much higher had the school been using a social media monitoring company.

“Now that two $4 million dollar jury verdicts have been returned against an academic institution for a delay in properly warning its students about a killer being on the loose on campus, imagine if a school follows the above advice by Varsity Monitor [@tombuchheim It is still best practice for the athletic dept to continue to monitor social media for brand and athlete protection & edu] and a tragedy occurs that social media monitoring should have warned against but did not? Instead of multiple $4 million dollar jury verdicts would it be multiple $25 million or $50 million or $100 million dollar negligent social media monitoring jury verdicts?”

While little legislation has been put forward to protect students against monitoring, legislators in several states, including Connecticut, California, Washington, Illinois, and Maryland, have begun introducing bills that would prohibit companies from asking employees or potential employees for Facebook passwords.

“Employers can’t ask in the course of an interview your sexual orientation, your age, and yet social media accounts may have that information,” a California state senator said.


Social Media Monitoring Programs: a very real reality

FBI attempts to determine the feasibility of building a social media monitoring application

Over the past few years, there has been a massive global public outcry about how online users are tracked and their information collected. Both Facebook and Google have been criticized heavily for their lack of transparency on the issue and for what sometimes seems their disingenuous interest in improving their policies. Curiously, this popular anger so far seems to extend only to corporations. Concerns or fears about government tracking are less vocalized, which is not to say that they do not exist, simply that they are less a part of the popular narrative.

These concerns and complaints border on hypocrisy. We willingly give up our information to these companies by signing up for profiles, posting our photos, and sharing on friends' walls. We say we expect them to maintain our privacy, but all the while we know they plan to use the information to 'improve' our online experience.

This issue (actually a complex set of issues: What is public? What is private? How can privacy be enforced?) is not one that can be solved solely between online users and individual companies. Companies are not in power; governments are. When the conversation is about governments where the power structure is decidedly in their favor, like the People's Republic of China, the issue seems easier to understand and discuss, and solutions seem more achievable, because there is less of a grey area. The issue is not as simple for the U.S. government, which does not exert the degree of force and control over information that China does, despite making some dubious and controversial decisions that affect privacy, like the Patriot Act. The Federal Bureau of Investigation (FBI), however, appears to have adopted the view that what we publish online, particularly on social media sites, is public and open.

In mid-January 2012, the Federal Bureau of Investigation released a document outlining the possibility of developing an application that would monitor data published online, most specifically on social media outlets, to benefit the FBI's Strategic Information and Operations Center (SIOC). According to the FBI, the government does not currently have any social media or news media collection processes or services in use by SIOC. While the FBI does not yet have the capability, the intelligence community's pursuit of this type of information collection is nothing new. In 2006, New Scientist published an article describing how the Pentagon's National Security Agency funded research into the "mass harvesting of the information people post about themselves on social networks." (Free full version of the article here)

The possibilities for building a massive database about the public, given the scale of this method of information gathering, are astounding. The FBI's hopes for the proposed application's capabilities, which New Scientist aptly summarized as a "wish list," are equally astounding: the bureau wants the application to search, vet, alert, select, and map, to produce spot reports, and to monitor Twitter and other social networks, along with providing analytical capabilities. Mashable.com published a summary of the proposed application's capabilities (a minimal, hypothetical sketch of the keyword-alerting idea follows the list), stating that it would:

• Provide an automated search and scrape capability of both social networking sites and open source news sites for breaking events, crisis, and threats that meet the search parameters/keywords defined by FBI SIOC.

• Ability for user to create, define, and select parameters/key word requirements. Automated search of national news, local news, and social media networks. Examples include but are not limited to Fox News, CNN, MSNBC, Twitter, Facebook, etc.

• Provide instant notifications of breaking events, incidents, and emerging threats that have been vetted and meet the defined search parameters.

• Ability to immediately access geospatial maps with coding in addition to providing critical infrastructural layers. Preferred maps include but are not limited to Google Maps, Google 3D maps, ESRI, and Yahoo Maps.

• Ability to instantly search and monitor key words and strings in all “publicly available” tweets across the Twitter Site and any other “publicly available” social networking sites/forums (i.e. Facebook, MySpace, etc.)
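To make the "search and scrape" and keyword-alerting items concrete, here is a minimal, hypothetical sketch in Python of that one capability: scanning a feed of public posts for analyst-defined keywords and flagging matches. The keywords, posts, and field names are invented for illustration; a real system of the kind the FBI describes would pull from live feeds and add vetting, geospatial mapping, and analytics on top.

```python
import re

# Analyst-defined search terms (invented examples)
keywords = ["outage", "explosion", "protest"]
pattern = re.compile("|".join(map(re.escape, keywords)), re.IGNORECASE)

# Stand-in for a live stream of publicly available posts (invented examples)
public_posts = [
    {"source": "twitter", "text": "Huge power outage downtown right now"},
    {"source": "news", "text": "City council meets to discuss the budget"},
]

def alerts(posts):
    """Yield any post whose text matches a monitored keyword."""
    for post in posts:
        if pattern.search(post["text"]):
            yield post

for hit in alerts(public_posts):
    print(f"ALERT [{hit['source']}]: {hit['text']}")
```

Even this toy version makes the scale question obvious: the same dozen lines work just as well against millions of posts per hour, which is precisely what makes the proposal both powerful and unsettling.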

While the FBI's proposed application seems alarming, it is important to remember that one of the FBI's roles, and in particular the SIOC's role, is precisely to search, vet, alert, select, map, and so on.

Whether or not we as citizens trust the US government with our privacy, this level of access to information, and the ability to track and use it, is unprecedented and quite possibly a breach of our privacy. Either way, the most important question is ultimately this:

How private is our information when we are willing to share it with multiple corporations and, if not everyone on the Internet, then everyone in our friend networks?

When a user posts a status or a tweet, is that information public or private? Is your online presence on Facebook like a room in your home, where you assume absolute privacy with those you have allowed in, or like a booth in a restaurant where you merely hope for it? Or is it equivalent to the open street where anyone might listen in? This is the core of the issue.

While we expect the information and opinions we publish to stay private, we also expect that information to be available to those we want to see it. Oddly, we make those decisions without realizing that once the information we are sharing is no longer in our heads, it is by definition no longer private. We insist we want privacy, yet we publish our thoughts and share our information in open, publicly available forums, which, no matter how earnest their stated intentions, have limited privacy options and maintain a visual and digital record of all the information they collect. Given these facts, the decision to share personal information online while simultaneously voicing concerns about privacy seems bizarre and illogical.