Monday, May 24, 2010
Boucher Bill Continues to Evoke Comment
Since Rep. Rick Boucher (D-VA) released his proposed privacy bill for public comment in early May, privacy advocates and interested industry representatives alike have been quick to criticize it as either overreaching or insufficiently protective. This mixed reaction suggests that he may actually have struck a middle ground.

A recent post on the Workplace Privacy Counsel blog criticizes the bill as too burdensome for employers, and argues that despite its exclusion of businesses covering 5,000 or fewer individuals, it will affect most employers, since employers routinely collect "sensitive information" about their employees. Employers would actually have to disclose to employees how they intend to use that sensitive information. The author worries that employers will be faced with preparing a complex privacy notice, since different types of information call for different uses and retention periods. Allusions to such complexity, and to employers' unwillingness to be open and forthright, are exactly what lead privacy advocates to worry about how sensitive personal information is used, transferred, and retained. Consumer groups, for their part, have criticized the bill as not comprehensive enough, and for foreclosing stronger state laws and individual rights of action.

We know from press releases that Rep. Boucher has been studying this issue for quite some time, and is sensitive to overreaching and squelching innovation. Yet he has also heard the concerns of consumer privacy advocates and recognizes that, left unchecked, privacy rights will be trampled. Rep. Boucher is to be applauded for putting his bill out for comment and starting a discussion that needs to be aired, hopefully in formal Congressional hearings sooner rather than later.
Sunday, May 23, 2010
What did Google do?
On the heels of its Buzz debacle, Google is facing another class action suit, this time for collecting data from WiFi networks as it took pictures as part of its Street View project (which has, of course, raised privacy concerns of its own). The complaint (available here) asserts that Google's WiFi information collection violated 18 USC 2511 (the wiretap act). This could be a problem for Google. When news of Google collecting information off wireless networks first came out, the company stated that the information was essentially nothing more than identifying data (e.g., machine addresses and network IDs). However, Google subsequently admitted that it collected not only identifying information for machines and networks, but also the actual traffic (i.e., payloads) running across those networks.
The distinction is important. 18 USC 2511 prohibits intercepting any electronic communication, and 18 USC 2510 defines "intercept" as
the aural or other acquisition of the contents of any wire, electronic, or oral communication through the use of any electronic, mechanical, or other device. (emphasis added)
It also includes an explicit definition of "contents":
“contents”, when used with respect to any wire, oral, or electronic communication, includes any information concerning the substance, purport, or meaning of that communication.
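As a technical aside, the line the statute draws between identifying data and "contents" maps loosely onto the structure of a captured 802.11 frame: the MAC header carries the addresses of the machines involved, while everything after it is the payload actually transmitted. A minimal sketch (using a fabricated frame, and assuming a plain three-address 802.11 data frame with a 24-byte MAC header):

```python
def split_frame(frame: bytes):
    """Split a three-address 802.11 data frame into identifying fields and payload.

    Assumes a plain 24-byte MAC header: frame control (2 bytes), duration (2),
    three 6-byte MAC addresses, and a 2-byte sequence control field.
    """
    if len(frame) < 24:
        raise ValueError("frame too short for an 802.11 MAC header")
    header, payload = frame[:24], frame[24:]

    def fmt(addr: bytes) -> str:
        return ":".join(f"{b:02x}" for b in addr)

    identifying = {
        "receiver": fmt(header[4:10]),      # addr1
        "transmitter": fmt(header[10:16]),  # addr2
        "bssid": fmt(header[16:22]),        # addr3
    }
    return identifying, payload

# A fabricated frame: 24 placeholder header bytes followed by a payload.
fake = bytes(4) + b"\xaa" * 6 + b"\xbb" * 6 + b"\xcc" * 6 + bytes(2) + b"GET / HTTP/1.1"
ids, payload = split_frame(fake)
print(ids["transmitter"])  # bb:bb:bb:bb:bb:bb -- the sort of "identifying data" Google first admitted to
print(payload)             # b'GET / HTTP/1.1' -- the "contents" of the communication
```

Capturing only the header fields looks much more like recording identifying data; capturing the bytes after the header is acquiring the substance of the communication itself.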
Given those definitions, if all Google had been acquiring was the identifying information of the machines communicating on a wireless network, they would have a good argument that what they did didn't count as "intercepting" as prohibited by the wiretap act. However, if Google was actually acquiring the communications passing across the networks, that argument loses a lot of its force. Even worse, in the complaint, the plaintiffs assert that
a GSV [Google Street View] vehicle has collected, and defendant has stored, and decoded/decrypted Van Valin's wireless data on at least one occasion.
While the complaint is written a bit strangely, at least on the face of it, it appears as though the plaintiff's attorney has reason to believe that Google intercepted and decrypted encrypted communications on at least one occasion. If true, it's hard to imagine a more blatant violation of wireless privacy, and it's also hard to imagine a way that Google could escape liability.
So what will happen? Stay tuned. Assuming Google was served on the 17th (the day the complaint was filed), its answer is due June 7 (see FRCP 12).
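The deadline arithmetic is simple enough to check (assuming service on May 17, 2010, and the 21-day answer period of FRCP 12(a)(1)(A)(i)):

```python
from datetime import date, timedelta

served = date(2010, 5, 17)                # complaint filed and (assumed) served
answer_due = served + timedelta(days=21)  # FRCP 12(a)(1)(A)(i): 21 days after service
print(answer_due)  # 2010-06-07
```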
Labels:
class actions,
Google Streetview,
wiretap act
Wednesday, May 19, 2010
Privacy can hurt
While this blog is generally all about privacy and how to protect it, it's important to keep in mind that privacy can be a double-edged sword. Take the case of Ward v. Cisco Systems. It all started with a 2007 post by an anonymous blogger about a patent infringement suit against Cisco in the Eastern District of Texas (see this article for background information). In it, the blogger, who claimed to be "just a lawyer, interested in patent cases, but not interested in publicity," made some rather acerbic comments about the lawyer suing Cisco, as well as about the Eastern District of Texas.
As it happened, the anonymous blogger wasn't "just a lawyer," he was Rick Frenkel, intellectual property counsel for Cisco. In the subsequent defamation suit filed (where else) in the Eastern District of Texas, the plaintiff's strategy highlighted the anonymity of the Troll Tracker, painting his actions as part of a sinister conspiracy by Cisco. As a result, Cisco changed its blogging policy to specify that:
If you comment on any aspect of the company’s business or any policy issue the company is involved in where you have responsibility for Cisco’s engagement, you must clearly identify yourself as a Cisco employee in your postings or blog site(s) and include a disclaimer that the views are your own and not those of Cisco. In addition, Cisco employees should not circulate postings that they know are written by other employees without informing the recipient that the source was within Cisco.
(emphasis added)
In short, while privacy per se isn't a bad thing, it can be dangerous, and that danger is something that businesses need to be aware of as they go about their business.
Sunday, May 9, 2010
More on Email Privacy
I've been writing about email privacy in the context of City of Ontario v. Quon and Stengart v. Loving Care; now, how about an encore from New York: People v. Klapper. Factually, People v. Klapper is pretty straightforward. The defendant, Andrew Klapper, was a dentist who installed keystroke loggers on his office computers. As a result, when one of Mr. Klapper's employees accessed a personal email account from a work computer, Mr. Klapper learned the employee's email password, which he later used to access the employee's personal email himself. Mr. Klapper was charged with unauthorized use of a computer, which appears to be a New York state law analog of the Computer Fraud and Abuse Act.
Now, from an intuitive standpoint, what Mr. Klapper did seems wrong, and I would like to think that the law provides some disincentives for behavior like that engaged in by Mr. Klapper. However, that's a relatively minor point, as there's lots of behavior that people may find objectionable that the law doesn't prohibit, or even frown upon. Indeed, from the decision in this case, it appears that Mr. Klapper's activities fall into that broad class of behavior, as the judge dismissed the charges against him as facially insufficient. What isn't a minor point is the reason given for dismissing the charges. According to Judge Whiten
In this day of wide dissemination of thoughts and messages through transmissions which are vulnerable to interception and readable by unintended parties, armed with software, spyware, viruses and cookies spreading capacity; the concept of internet privacy is a fallacy upon which no one should rely.
It is today's reality that a reasonable expectation of internet privacy is lost, upon your affirmative keystroke. Compound that reality with an employee's use of his or her employer's computer for the transmittal of non-business related messages, and the technological reality meets the legal roadway, which equals the exit of any reasonable expectation of, or right to, privacy in such communications.
I don't like the end result of the case, but the reasoning behind it is an abomination which should be stricken from the face of history. If anything you type into a computer is considered not to be private (i.e., "a reasonable expectation of internet privacy is lost, upon your affirmative keystroke"), then everything I do, including work done for clients that I have asserted is covered by attorney-client privilege, is potentially public and could be considered fair game for anyone who wants to request it in litigation. This would be a complete surprise to me and, I'm guessing, to every other practicing lawyer in the country.
In any case, I expect that the reasoning behind People v. Klapper is unlikely to be considered persuasive in many cases going forward. However, the fact that it appeared in even one case serves as a reminder that, when it comes to information privacy law, relying on even the most basic principles can be a dicey proposition.
Sunday, May 2, 2010
Limiting Information Sharing Based on Context
In this article, Computer World describes an argument made by Microsoft researcher Danah Boyd that social networks should consider the context in which information is provided, and not re-use that information outside of that context. The argument, to the extent it can be distilled into one paragraph, is as follows:
"You're out joking around with friends and all of a sudden you're being used to advertise something that had nothing to do with what you were joking about with your friends," Boyd said. People don't hold conversations on Facebook for marketing purposes, she said, so it would be incorrect for marketing efforts to capitalize on these conversations.
In the article, this concept was described as "relatively new." I'm not sure that that's correct. After all, Article 6 of the EU Data Privacy Directive provides that
1. Member States shall provide that personal data must be:
(a) processed fairly and lawfully;
(b) collected for specified, explicit and legitimate purposes and not further processed in a way incompatible with those purposes. Further processing of data for historical, statistical or scientific purposes shall not be considered as incompatible provided that Member States provide appropriate safeguards;
(c) adequate, relevant and not excessive in relation to the purposes for which they are collected and/or further processed;
(d) accurate and, where necessary, kept up to date; every reasonable step must be taken to ensure that data which are inaccurate or incomplete, having regard to the purposes for which they were collected or for which they are further processed, are erased or rectified;
(e) kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the data were collected or for which they are further processed. Member States shall lay down appropriate safeguards for personal data stored for longer periods for historical, statistical or scientific use.
which appears to be analogous to the concept of recognizing the context in which data is provided when deciding how that data should be used.
Of course, the question of whether an idea is a new one is entirely different from the question of whether the idea is a good one. However, recognizing the similarity between the proposed context limitations on social networks and the EU's data privacy directive can certainly be beneficial in evaluating the merits of the new idea. Specifically, the criticisms of the EU directive (e.g., here) can be examined to see if they also apply to the specific context based limitations, and if context based limitations can somehow be implemented in a way that addresses those criticisms.