Tuesday, November 23, 2010

I got an anonymous comment on my last post on the TSA's new security procedures saying that there has to be something we can do, rather than just submitting to whatever is advanced in the name of security. As it happens, there are several things people can do to react to the TSA's new procedures.
The most well publicized protest is probably National Opt Out Day (warning - the page includes a naked picture taken with the TSA's new scanners), wherein people will opt for being groped by a TSA agent to slow down the processing of fliers on November 24 - the busiest flying day of the year. If that's your cup of tea, then it's certainly your right to opt out of the scanning (which you might want to do anyway, for both health and privacy reasons). As for me, though, I'm not at all interested in being groped by the TSA, even for the noble purpose of protest.
If you're more interested in an ineffectual protest with a touch of humor, you can try radiation shielding undergarments, or a bill of rights luggage tag (all of which are described in this article). My guess is that the bill of rights tag would just be ignored (much like the actual bill of rights), and that the metal undergarments would result in a referral for one of the TSA's special enhanced pat downs. Still, if you want to make a statement, those are other ways to do it.
As a lawyer, my first thought was a declaratory judgment action seeking to preliminarily and permanently enjoin the TSA from implementing the new security measures. My next thought was that that was so obvious that someone must have already done it. However, a quick Google search didn't turn up much more than this thread, so maybe that's still available. The problem with this approach is that these types of DJ actions are really hard to win, and you may get bumped on procedural grounds before the judge ever reaches the merits of the case.
In the end though, my guess is that what will be necessary to reverse these new procedures is people (finally) taking a stand for privacy, and bringing enough bad press to the TSA, and enough pressure on their elected representatives, that the TSA's current policies become radioactive. I'm not thrilled that we've reached that point, but it is a free country, and if our elected representatives pass enough intrusive laws, sometimes the only way to respond is by replacing them with people who aren't so keen to invade people's privacy.
Sunday, November 14, 2010
Fighting the TSA
The Internet is currently burning up with a story about a man who would rather not fly than submit to the TSA's intrusive screening procedures, and how the TSA reacted to him. To make a long story short, once he decided to leave the security area and ask for a ticket refund, a TSA agent told him he had to return to the security area or he would be subject to a civil fine of up to $10,000. A normal person's reaction to reading this story might be outrage at this sort of petty tyranny. As a lawyer, my first reaction was to question whether the threat was real. That is, is this a case of abuse of power by a misguided TSA employee acting outside his authority, or is it a case of abuse of power by a misguided TSA employee enforcing an egregiously bad law?
After about an hour of searching, I strongly suspect that this is a case of abuse of power by a misguided TSA employee acting outside his authority, though I have not been able to convince myself of that fact, and so the normal disclaimers about nothing on this blog being legal advice should go at least double for this post.
The reason I strongly suspect that this is a case of abuse of power by a misguided TSA employee acting outside his authority is that the regulations on penalties and prohibitions mostly focus on making sure that you can't get certain things into secure areas. For example, 49 C.F.R. 1540.107 says that no one can enter the sterile area or board an aircraft without going through a screening. However, in this case, the putative flyer wasn't trying to get into the sterile area or an aircraft without going through a screening - he made a conscious decision to avoid a screening by not entering the sterile area or boarding an aircraft. Similarly, 49 C.F.R. 1540.109 prohibits threatening, interfering with, assaulting or intimidating screening personnel. However, in this case, the putative flyer wasn't interfering at all. Indeed, the screening personnel could have done their jobs more easily if they had simply let him leave the airport. Because there is no evidence that leaving the airport had any adverse effect on security, or on the ability of the screening personnel to screen other passengers, it seems to fall outside of the general scope of the regulations, and so I suspect that the threat of a $10,000 civil penalty was not supported by law.
However, the reason I haven't been able to convince myself that a civil penalty couldn't have been imposed is that the relevant law is more than a little difficult to wade through, and the regs have previously been applied in ways that seem patently unjust. In terms of the difficulty of wading through the regs, I will give one example: 49 U.S.C. 46301(a)(5):
(A) An individual (except an airman serving as an airman) or small business concern is liable to the Government for a civil penalty of not more than $10,000 for violating—
(i) chapter 401 (except sections 40103 (a) and (d), 40105, 40106 (b), 40116, and 40117), section 44502 (b) or (c), chapter 447 (except sections 44717–44723), or chapter 449 (except sections 44902, 44903 (d), 44904, and 44907–44909) of this title; or
(ii) a regulation prescribed or order issued under any provision to which clause (i) applies.
And that's just one example. As a lawyer, I can wade through that, cross-checking sections, examining applicability to a given situation, etc. However, as a human being, I don't do that sort of thing for fun, and no one is paying me to write this blog. In terms of unjust application of the regs in the past, I refer readers to Rendon v. TSA, an unhappy case where a civil fine imposed for asking some rather profane (but not unreasonable) questions about security procedures was upheld under the prohibition on interfering with screening personnel. While I think imposing a fine for trying to leave an airport is even worse than the situation in Rendon, given the result in Rendon, it wouldn't surprise me terribly if a fine, in fact, were imposed.
So what will happen in this particular case? Probably nothing. I doubt the TSA will seek penalties, given that the whole incident was videotaped, and a trial would only lead to bad press and the possibility of their powers being curtailed. In the end, my guess is the whole thing will blow over, the TSA will keep their current security policies in place, and most people (e.g., me) who can't afford to skip flights just because we might not want to be molested by the TSA will end up being subjected to whatever form of invasive screening the TSA thinks is warranted without any realistic avenue for recourse.
Friday, October 1, 2010
I know I've written this post before
Here's the Wired headline: Scribd Facebook Instant Personalization Is a Privacy Nightmare. The article is about what you'd expect. There are complaints about automatically generated spam emails to your automatically created friends and confusing or non-existent opportunities to opt out. There's a Scribd PR person explaining how privacy is really very important to the company. There's the author suggesting that one way to fix the problem is to delete your Scribd profile, but characterizing that as extreme. I'm not 100% sure why I read the article. True, I don't use Scribd, and have never run across this particular feature. However, just seeing Facebook in the title gave me a pretty good idea what to expect. Someone in marketing wants to take advantage of the tremendous amount of data on Facebook (and get in on the whole "social media" bandwagon) and so they make it really easy to share data, and relatively difficult not to do so.
So what should people do instead of this? Well, there's always the possibility of not integrating with Facebook. Frankly, regardless of what they've been forced to do by public pressure, I will always distrust a company whose CEO famously doesn't believe in privacy. In the event that you must integrate with Facebook, you could always try little things like opt-in rather than opt-out participation, not automatically spamming Facebook friends, and making sure it's clear how to opt out if someone decides they don't like the program. There are also guidelines for interactive and behavioral advertising put out by organizations like the FTC and the IAB (though I consider those to be a bit outside the scope of this post). Whatever you do though, if you're going to move into the world of social media, you need to do it with your eyes open, or your company is likely to be integrated with Facebook in a headline that also includes unpleasant words like "nightmare" or "disaster."
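Since "opt-in rather than opt-out" is really just a statement about defaults, here's a minimal sketch (in Python, with hypothetical names - this is not Scribd's or Facebook's actual API) of what opt-in participation looks like in code:

from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class SharingPreferences:
    # Opt-in: every sharing flag defaults to False, so nothing is
    # shared until the user takes an affirmative step.
    share_activity_with_partners: bool = False
    email_friends_about_activity: bool = False
    consent_recorded_at: Optional[datetime] = None

    def opt_in(self) -> None:
        # Only ever called from an explicit, user-initiated action.
        self.share_activity_with_partners = True
        self.email_friends_about_activity = True
        self.consent_recorded_at = datetime.utcnow()

    def opt_out(self) -> None:
        # Must be as easy to find and invoke as opt_in.
        self.share_activity_with_partners = False
        self.email_friends_about_activity = False

def may_notify_friends(prefs: SharingPreferences) -> bool:
    # The check is against an affirmative grant, never against
    # "the user hasn't objected yet."
    return prefs.email_friends_about_activity

prefs = SharingPreferences()
assert not may_notify_friends(prefs)  # silence is not consent

The whole difference between the two models is in those default values: under opt-out, the flags start as True and the burden is on the user to find the switch.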
Monday, August 23, 2010
July/August Privacy Catch Up
So...the blog has been uncharacteristically quiet for the last month or so. This is not because nothing privacy related has happened in the legal world. For example, the FBI and federal prosecutors announced that they will not be filing criminal charges related to the Lower Merion Spy Cam Scandal (link here), something I wrote about here as possibly being the creepiest privacy violation of 2009. Also, it turns out that the millimeter wave scanners used to see through clothes to catch those ever-elusive terrorists can store and transmit images, despite assurances from the TSA that that was not the case (link). In more positive news, the appeals court for the District of Columbia circuit has rejected a claim by the government that round-the-clock warrantless GPS surveillance is ok (article here). There was also some legislative action, as internet advertisers warned that a new privacy bill, the "Best Practices Act," "would turn the Internet from a fast-moving information highway to a slow-moving toll-road." Also, speaking of slow-moving toll-roads, Google and Verizon came together to formally announce that net neutrality (i.e., the concept that all traffic on the internet should be treated equally) is a rather quaint notion that shouldn't apply to wireless networks. All in all, it's been a relatively busy month or so.
So why no posts? Well, in addition to all of these privacy events, we also got a huge non-privacy decision - Bilski v. Kappos - which basically upended a decade's worth of precedent on whether you can get patents on novel software or business methods. Since software and business method patents are a big part of my practice, a good deal of the time that I would have spent on privacy was spent on patent stuff instead. To make matters worse, at least time-wise, I also got a copy of Starcraft II, which turned out to be a huge time suck. Happily, rather than releasing a full game, with three playable races and campaigns for each (the approach taken with the original), Blizzard decided to release only a human campaign, which turned out to be approximately a third of a game's worth of play for a full game's price. As a result, I not only get to get back to blogging sooner, I also know to avoid new releases from Blizzard in the future, which I guess means that everyone wins.
Sunday, July 11, 2010
Why Do People Keep Thinking This is a Good Idea?
Earlier this month, Blizzard Entertainment (makers of World of Warcraft, among other successful computer games) decided that they would change their game forums from anonymous forums (i.e., you can't tell the identity of someone posting to the forums unless they tell you) to forums where comments are connected with a person's real name. After a firestorm of criticism (e.g., here) Blizzard spiked the program, at least for now. And the reason for going down this path, with its utterly predictable and embarrassing trajectory? Two words: Facebook Integration. Actually (as explained here) it's slightly more complicated than that, but what it boils down to is that Blizzard wanted to get in on some of that social networking magic, and giving everyone a single ID that was consistent across all of Blizzard's forums (and Facebook) seemed to be a good way to do it.
This is an old story, and one that often ends in class action lawsuits (e.g., Google Buzz, Facebook Beacon). Why do people keep doing this? My guess is because they see their existing user data as an asset, and they hate letting an asset go unexploited. However, that's the wrong mindset. The safest way to think of user data is as something that actually belongs to users, which they have allowed you to temporarily safeguard. The point of the user data isn't to exploit it, it's to allow a business to maintain its relationship with its users. If you want to integrate with Facebook - fine. However, the way to do so is prospectively: collecting new data (with a clear explanation of what you're collecting the data for), and without degrading or changing the services provided to old users. True, at the outset, this seems much harder than leveraging an existing user base. On the other hand, many existing user bases don't like being leveraged, and going about things the hard way can take that into account, and avoid turning an existing base into a historical user base.
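In engineering terms, "going forward" just means a per-account enrollment flag instead of a global switch. A toy sketch (Python, with hypothetical names and dates - not Blizzard's actual system) of the difference:

from dataclasses import dataclass
from datetime import date

@dataclass
class Account:
    created: date
    enrolled_in_integration: bool = False  # explicit, per-account flag

    def enroll(self) -> None:
        # Only ever set from an explicit user action, e.g., a consent
        # screen shown at signup or when the user links accounts.
        self.enrolled_in_integration = True

def show_real_name_on_forums(account: Account) -> bool:
    # Legacy accounts keep the service they signed up for; the new
    # behavior applies only to users who affirmatively joined it.
    return account.enrolled_in_integration

old_user = Account(created=date(2008, 3, 15))
new_user = Account(created=date(2010, 8, 1))
new_user.enroll()  # new users see the new terms and choose

assert not show_real_name_on_forums(old_user)
assert show_real_name_on_forums(new_user)

The global-switch version of this code has no flag at all, which is exactly why it ends up in headlines and class action complaints.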
Monday, June 28, 2010
Tech Apologies of 2010
Wired put up an article on the biggest tech apologies so far this year (link). The list is:
- Google: Sorry about Buzz, Street View Privacy Issues (providing information to unwelcome Buzz "followers" and recording WiFi data while making Street View maps)
- Adobe Apologizes For Old Flash Bug (failing to patch bug for 16 months)
- McAfee’s Antivirus Snafu (releasing update that shut down computers running XP)
- AT&T Begs Pardon for iPad E-mail Breach (allowed hackers to identify email addresses of iPad customers through a flaw in an authentication web site)
- Facebook Apologizes for Privacy Shortcomings (Sort Of) (Mark Zuckerberg issues non-apology for constantly changing Facebook privacy policies)
- Ellen DeGeneres Didn’t Mean To Hurt Apple’s Feelings (Apparently, a comedian made fun of Apple...and this made the list why?)
- Apple: Sorry We Couldn’t Keep Up With iPhone 4 Orders (The description says it all)
Not separately counting the two separate Google apologies squished into the top bullet, that makes 3/7 apologies for privacy gaffes. The moral of the story - privacy mistakes are the gift that keeps on giving, at least in terms of bad publicity.
Sunday, June 20, 2010
Ontario v. Quon Decided
As described in this article from Computer World, the Supreme Court has issued its decision in City of Ontario v. Quon. A quick recap of the facts: the city of Ontario, California issued Jeff Quon (a SWAT team member) a pager. Quon exceeds his text message allotment on the pager and is audited. The audit reveals that Quon has overwhelmingly used the pager for personal text messages. Quon is subsequently disciplined.
The decision was totally unsurprising - the police department was allowed to audit messages sent during work hours on the pager it provided. What was surprising, or at least something of a relief, was that the Court reached the expected result in a way that leaves a nascent right to employee privacy in electronic communications basically unscathed. Indeed, the Court seemed to go out of its way to avoid upsetting precedent like Stengart v. Loving Care, which had found that employees have at least some expectation of privacy in personal emails, even if sent on company computers. For example, on page 14 of its decision, the Supreme Court specifically distinguished personal emails such as were at issue in Stengart:
OPD’s audit of messages on Quon’s employer-provided pager was not nearly as intrusive as a search of his personal e-mail account or pager, or a wiretap on his home phone line, would have been.
All in all, I think Ontario v. Quon was a good decision. Indeed, given the issues involved, and the potential for damage, it was probably the best that the Court could have done.
Sunday, June 13, 2010
Movement in the Streetview cases
Via this article from Wired's threat level blog, we learn that Google has begun its defense in the Streetview litigation by moving to have all the various lawsuits that have been filed against it consolidated in the Northern District of California (Google's motion can be found here). We also learned what is likely to be Google's defense (at least in the United States). According to the motion:
Google will likely argue that even if plaintiff's allegations are true, Google did not violate the federal Wiretap Act (and similar state statutes) for a number of reasons, including the fact that open WiFi transmissions are "readily accessible" to the general public under 18 U.S.C. 2511(2)(g)(i).
(from page 18 of the pdf)
Actually, maybe "learned" is too strong a word, since it was generally expected (see, e.g., here) that Google would defend using the public accessibility exception to the wiretap act. However, it is nice to actually see it in writing from someone who has authority to speak for Google, rather than relying on second-hand prognostications from commentators with no particular relation to the case.
Sunday, June 6, 2010
Is Wireless Data Picked up by Google Publicly Accessible?
Some new developments in the Google Streetview WiFi monitoring controversy.
First, according to this article, one of the lawyers suing Google is alleging that a Google patent application for increasing the accuracy of location based services by intercepting data communications indicates that the Google Streetview monitoring was intentional. I find this unconvincing. Unlike many other countries, the United States doesn't have a requirement that a company exploit patented technology. Absent some other evidence of intentionality, the patent application proves nothing (and, of course, if there was other evidence of intentionality, the patent application wouldn't be necessary).
Second, and more interestingly, some observers (e.g., here) have stated that the lawsuits against Google may have no merit because the electronic communications privacy act has a safe harbor for intercepting communications which are publicly accessible. It's an interesting argument, but I don't know that it's a show stopper. The relevant statutory provision is 18 USC 2511(2)(g)(i):
(g) It shall not be unlawful under this chapter or chapter 121 of this title for any person—
(i) to intercept or access an electronic communication made through an electronic communication system that is configured so that such electronic communication is readily accessible to the general public;
"readily accessible to the general public" is then defined in 18 USC 2510(16):
(16) “readily accessible to the general public” means, with respect to a radio communication, that such communication is not—
(A) scrambled or encrypted;
...
That definition is the reason I don't think the publicly accessible argument is a show stopper. As I noted here, at least one of the parties bringing suit against Google has alleged that Google engaged in decrypting the communications it intercepted. I don't know what evidence they have to back that allegation. However, at this point, it doesn't matter, since at this stage in the litigation a court is bound to accept the allegations in the complaint as true.
Whether they have enough to get through discovery is another question entirely, but one which won't be raised until Google files its answer and moves for summary judgment.
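As an aside, the "scrambled or encrypted" line in 2510(16) maps onto something a WiFi capture rig can actually test. Here's a rough sketch using the scapy packet library, assuming a monitor-mode capture saved to a hypothetical capture.pcap file (this illustrates the statutory distinction; it is not a claim about how Google's equipment actually worked):

from scapy.all import sniff, Dot11Beacon, Dot11Elt

def classify(pkt):
    if not pkt.haslayer(Dot11Beacon):
        return
    # A beacon's capability field advertises the "privacy" bit: WEP/WPA
    # networks set it, open networks don't. That's roughly the 2510(16)
    # line between "scrambled or encrypted" transmissions and the rest.
    caps = pkt.sprintf("%Dot11Beacon.cap%")
    ssid = "<unknown>"
    if pkt.haslayer(Dot11Elt):
        ssid = pkt[Dot11Elt].info.decode(errors="replace") or "<hidden>"
    if "privacy" in caps:
        print(f"{ssid}: encrypted - not 'readily accessible'")
    else:
        print(f"{ssid}: open - arguably 'readily accessible'")

sniff(offline="capture.pcap", prn=classify, store=False)

The point of the sketch is that the beacon itself announces whether a network is encrypted, so a capture program has what it needs to skip encrypted traffic; whether a given program actually did so is exactly the kind of fact discovery would establish.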
Monday, May 24, 2010
Boucher Bill Continues to Evoke Comment
Since Rep. Rick Boucher (D-VA) released his proposed privacy bill for public comment in early May, privacy advocates as well as interested industry representatives have been quick to criticize it as too overreaching or not sufficiently protective. This mixed reaction suggests that maybe he has actually struck a middle ground. A recent blog post on the Workplace Privacy Counsel blog criticizes the bill as too burdensome for employers, and argues that despite its exclusion from coverage of businesses with 5,000 or fewer individuals, it will impact most employers, since employers often collect "sensitive information" on their employees. Employers would actually have to disclose to the employees how they intend to use that sensitive information. The author expresses concern that employers will be faced with preparing a complex privacy notice, since different types of information require different uses and retention periods. Allusions to such complexity, and the unwillingness of employers to be open and forthright, are what cause privacy advocates to express concern about how sensitive personal information is being used, transferred, and retained. Yet consumer groups have criticized the bill as not being comprehensive enough, and for preempting stronger state laws or individual rights of action.

We know from press releases that Rep. Boucher has been studying this issue for quite some time, and is sensitive to overreaching and squelching innovation. Yet he has heard the concerns of consumer privacy advocates and recognizes that, left unchecked, privacy rights will be trampled. Rep. Boucher is to be applauded for reaching out by proposing his bill for comments, and starting a discussion that needs to be aired, hopefully in formal Congressional hearings sooner rather than later.
Sunday, May 23, 2010
What did Google do?
Fresh off the heels of its Buzz debacle, Google is facing another class action suit, this time for collecting data from WiFi networks as it took pictures as part of its Street View project (which has, of course, raised privacy concerns on its own). The complaint (available here) asserts that Google's WiFi information collection violated 18 USC 2511 (the wiretap act). This could be a problem for Google. When news of Google collecting information off wireless networks first came out, the company stated that the information was essentially nothing more than identifying data (e.g., machine addresses and network IDs). However, Google subsequently admitted that, not only did it collect identifying information for machines and networks, it also collected the actual traffic (i.e., payloads) running across the networks.
The distinction is important. 18 USC 2511 prohibits intercepting any electronic communication. 18 USC 2510 defines "intercept" as
the aural or other acquisition of the contents of any wire, electronic, or oral communication through the use of any electronic, mechanical, or other device.

(emphasis added)

It also includes an explicit definition of "contents":
“contents”, when used with respect to any wire, oral, or electronic communication, includes any information concerning the substance, purport, or meaning of that communication.
Given those definitions, if all Google had been acquiring was the identifying information of the machines communicating on a wireless network, they would have a good argument that what they did didn't count as "intercepting" as prohibited by the wiretap act. However, if Google was actually acquiring the communications passing across the networks, that argument loses a lot of its force. Even worse, in the complaint, the plaintiffs assert that
a GSV [Google Street View] vehicle has collected, and defendant has stored, and decoded/decrypted Van Valin's wireless data on at least one occasion.
While the complaint is written a bit strangely, at least on the face of it, it appears as though the plaintiff's attorney has reason to believe that Google intercepted and decrypted encrypted communications on at least one occasion. If true, it's hard to imagine a more blatant violation of wireless privacy, and it's also hard to imagine a way that Google could escape liability.
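For readers who want to see the header/payload distinction concretely, here's a short sketch, again using scapy against a hypothetical monitor-mode capture file (illustrative only - not a description of Google's actual software):

from scapy.all import sniff, Dot11, Raw

def inspect(pkt):
    if not pkt.haslayer(Dot11):
        return
    # 802.11 header fields: device (MAC) addresses. Roughly the
    # "identifying data" Google originally said it was collecting.
    src, dst = pkt[Dot11].addr2, pkt[Dot11].addr1
    # Frame body: the actual traffic. Capturing this is what moves the
    # conduct toward acquiring the "contents" of a communication.
    payload_len = len(pkt[Raw].load) if pkt.haslayer(Raw) else 0
    if payload_len:
        print(f"{src} -> {dst}: {payload_len} bytes of payload captured")
    else:
        print(f"{src} -> {dst}: header information only")

sniff(offline="capture.pcap", prn=inspect, store=False)

As the sketch suggests, recording only the header fields and recording the frame body are separate, deliberate choices in code, which is why the admission that payloads were collected matters so much.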
So what will happen? Stay tuned. Assuming Google was served on the 17th (the day the complaint was filed), their answer is due June 7 (see FRCP 12).
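The June 7 date is just the 21-day answer period of FRCP 12(a)(1)(A) counted from an assumed May 17 service date. A quick sanity check in Python:

from datetime import date, timedelta

# FRCP 12(a)(1)(A)(i): a defendant must answer within 21 days of being
# served (a longer period applies if service is waived - not assumed here).
served = date(2010, 5, 17)          # assumed service date
answer_due = served + timedelta(days=21)
print(answer_due)                   # 2010-06-07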
Wednesday, May 19, 2010
Privacy can hurt
While this blog is generally all about privacy and how to protect it, it's important to keep in mind that privacy can be a double-edged sword. Take the case of Ward v. Cisco Systems. It all started with a 2007 post by an anonymous blogger about a patent infringement suit against Cisco in the Eastern District of Texas (see this article for background information). In it, the blogger, who claimed to be "just a lawyer, interested in patent cases, but not interested in publicity," made some rather acerbic comments about the lawyer suing Cisco, as well as about the Eastern District of Texas.
As it happened, the anonymous blogger wasn't "just a lawyer," he was Rick Frenkel, intellectual property counsel for Cisco. In the subsequent defamation suit filed (where else) in the Eastern District of Texas, the plaintiff's strategy highlighted the anonymity of the Troll Tracker, painting his actions as part of a sinister conspiracy by Cisco. As a result, Cisco changed its blogging policy to specify that:
If you comment on any aspect of the company’s business or any policy issue the company is involved in where you have responsibility for Cisco’s engagement, you must clearly identify yourself as a Cisco employee in your postings or blog site(s) and include a disclaimer that the views are your own and not those of Cisco. In addition, Cisco employees should not circulate postings that they know are written by other employees without informing the recipient that the source was within Cisco.
(emphasis added)
In short, while privacy per-se isn't a bad thing, it can be dangerous, and that danger is something that businesses need to be aware of as they go about their business.
Sunday, May 9, 2010
More on Email Privacy
I've been writing about email privacy in connection with City of Ontario v. Quon and Stengart v. Loving Care, so how about an encore from New York: People v. Klapper. Factually, People v. Klapper is pretty straightforward. The defendant, Andrew Klapper, was a dentist who installed a keystroke logger on his office computers. As a result, when one of Mr. Klapper's employees accessed a personal email account from a work computer, Mr. Klapper learned the employee's email password, which Mr. Klapper later used to access the employee's personal email himself. Mr. Klapper was then charged with Unauthorized use of a Computer, which appears to be a New York state law analog of the Computer Fraud and Abuse Act.
Now, from an intuitive standpoint, what Mr. Klapper did seems wrong, and I would like to think that the law provides some disincentives for behavior like that engaged in by Mr. Klapper. However, that's a relatively minor point, as there's lots of behavior that people may find objectionable that the law doesn't prohibit, or even frown upon. Indeed, from the decision in this case, it appears that Mr. Klapper's activities fall into that broad class of behavior, as the judge dismissed the charges against him as facially insufficient. What isn't a minor point is the reason given for dismissing the charges. According to Judge Whiten
In this day of wide dissemination of thoughts and messages through transmissions which are vulnerable to interception and readable by unintended parties, armed with software, spyware, viruses and cookies spreading capacity; the concept of internet privacy is a fallacy upon which no one should rely.
It is today's reality that a reasonable expectation of internet privacy is lost, upon your affirmative keystroke. Compound that reality with an employee's use of his or her employer's computer for the transmittal of non-business related messages, and the technological reality meets the legal roadway, which equals the exit of any reasonable expectation of, or right to, privacy in such communications.
I don't like the end result of the case, but the reasoning behind it is an abomination which should be stricken from the face of history. If anything that you type into a computer is considered not to be private (i.e., "a reasonable expectation of internet privacy is lost, upon your affirmative keystroke"), then everything I do, including work done for clients that I have asserted is covered by attorney-client privilege, is potentially public and could be considered fair game for anyone who wants to request it in litigation. This would be a complete surprise for me and, I'm guessing, every other practicing lawyer in the country.
In any case, I expect that the reasoning behind People v. Klapper is unlikely to be considered persuasive in many cases going forward. However, the fact that it appeared in even one case serves as a reminder that, when it comes to information privacy law, relying on even the most basic principles can be a dicey proposition.
Sunday, May 2, 2010
Limiting Information Sharing Based on Context
In this article, Computer World describes an argument made by Microsoft researcher Danah Boyd that social networks should consider the context in which information is provided, and not re-use the information outside of that context. The argument, to the extent it can be distilled down to one paragraph, is as follows:
"You're out joking around with friends and all of a sudden you're being used to advertise something that had nothing to do with what you were joking about with your friends," Boyd said. People don't hold conversations on Facebook for marketing purposes, she said, so it would be incorrect for marketing efforts to capitalize on these conversations.
In the article, this concept was described as "relatively new." I'm not sure that that's correct. After all, article 6 of the EU Data Privacy Directive provides that
1. Member States shall provide that personal data must be:
(a) processed fairly and lawfully;
(b) collected for specified, explicit and legitimate purposes and not further processed in a way incompatible with those purposes. Further processing of data for historical, statistical or scientific purposes shall not be considered as incompatible provided that Member States provide appropriate safeguards;
(c) adequate, relevant and not excessive in relation to the purposes for which they are collected and/or further processed;
(d) accurate and, where necessary, kept up to date; every reasonable step must be taken to ensure that data which are inaccurate or incomplete, having regard to the purposes for which they were collected or for which they are further processed, are erased or rectified;
(e) kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the data were collected or for which they are further processed. Member States shall lay down appropriate safeguards for personal data stored for longer periods for historical, statistical or scientific use.
which appears to be analogous to the concept of recognizing the context in which data is provided when deciding how that data should be used.
Of course, the question of whether an idea is a new one is entirely different from the question of whether the idea is a good one. However, recognizing the similarity between the proposed context limitations on social networks and the EU's data privacy directive can certainly be beneficial in evaluating the merits of the new idea. Specifically, the criticisms of the EU directive (e.g., here) can be examined to see if they also apply to the specific context based limitations, and if context based limitations can somehow be implemented in a way that addresses those criticisms.
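As a thought experiment, the directive's purpose limitation principle (and Boyd's context argument) can be encoded directly in a data model. A toy sketch in Python, with hypothetical names:

from dataclasses import dataclass
from enum import Enum, auto
from typing import FrozenSet

class Purpose(Enum):
    CONVERSATION = auto()   # the context the user actually provided data in
    MARKETING = auto()
    RESEARCH = auto()

@dataclass(frozen=True)
class DataRecord:
    subject: str
    content: str
    collected_for: FrozenSet[Purpose]  # purposes disclosed at collection

def may_use(record: DataRecord, purpose: Purpose) -> bool:
    # An article 6(1)(b)-style check: a use is allowed only if it is
    # compatible with the purpose the data was collected for.
    return purpose in record.collected_for

chat = DataRecord("alice", "joking around with friends",
                  collected_for=frozenset({Purpose.CONVERSATION}))

assert may_use(chat, Purpose.CONVERSATION)
assert not may_use(chat, Purpose.MARKETING)  # Boyd's objection, encoded

Tagging data with its collection purpose at write time, rather than deciding permissible uses ad hoc at read time, is the design choice that makes either the directive's rule or Boyd's context rule enforceable at all.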
Thursday, April 29, 2010
FTC to Create Guidelines for Internet Privacy
After over a year of silence by the FTC concerning Internet privacy, the Commission has responded to the increasingly loud outcry by privacy advocates and legislators. Earlier this week, the FTC announced that it plans to create guidelines on Internet privacy. A spokeswoman for the FTC stated that the FTC is “examining how social networks collect and share data as part of a project to develop a comprehensive framework governing privacy going forward.” The guidelines will provide a framework for how social networks and others collect, use and share personal data.
The catalyst for this step appeared to be a letter sent by Senator Charles Schumer (D-N.Y.), along with fellow Democratic senators Franken (Minn.), Bennet (Colo.), and Begich (Alaska), to the CEO of Facebook, Mark Zuckerberg, in response to Facebook’s announcement that it would make data from its users available to third parties unless Facebook users opted out. Schumer’s letter asked Zuckerberg to reverse the policy and expressed concern that the federal government had not stepped up to protect the consumer from misuse of personal information. It called for the FTC to adopt consumer enforcement rules, and to step up consumer protection enforcement. See this Washington Post article.
Specifically, the senators asked Facebook to use an “opt-in” method, as opposed to the “opt-out” method announced by Facebook. Facebook has been pushing the envelope on sharing the personal data of its users for months now, and it was simply a matter of time before it reached the tipping point. With each new step taken by Facebook, privacy advocates denounced the moves more strongly, and criticized the FTC for failing to respond to complaints over Facebook’s changes, as well as the mishap by Google when it launched its own social networking site, Buzz. One thing is certain – this battle will continue to be waged aggressively on both sides. For Facebook, there are millions of dollars in revenue at stake. For the privacy advocates, Facebook is aiming to make itself the center of the internet, without regard to users’ privacy rights or their ability to control their personal data. The FTC has been under increasing pressure to impose a European-style “opt-in” standard in connection with the use of personal data by social networking sites (see the CDD FTC complaint). If past experience is any indication, however, it will be months before we know definitively whether the FTC will choose to move in that direction.
(Posted on behalf of Jane Shea)
Sunday, April 25, 2010
Distinguishing Quon and Stengart
A few weeks ago, I posted about Stengart v. Loving Care Agency, a case where the New Jersey Supreme Court held that employees can send emails to their attorneys on company computers without waiving attorney-client privilege. About a week later, the Supreme Court of the United States heard oral arguments in City of Ontario v. Quon, a case where, from the oral arguments, it looks like the Supreme Court will hold that an employer can read messages sent to an employee on a company pager. The question is, will any meaningful part of the employee protections from Stengart survive the probable employer friendly ruling of Quon?
After re-reading the decision in Stengart, and the oral arguments in Quon, I think that, when the ruling in Quon is handed down, it will likely be distinguishable from Stengart, leaving the employee protections in that case fully intact. The critical question for whether Quon will undermine Stengart is whether Quon will state that employers can abrogate an employee's reasonable expectation of privacy with a policy stating that all communications made using company equipment are non-confidential, and will be monitored. Stengart, as I mentioned in my last post, stated that, even if such a policy did exist, it would be unenforceable (at least with respect to emails which would otherwise be covered by the attorney-client privilege). By contrast, the oral arguments in Quon indicated that the US Supreme Court was at least open to the possibility that employers would use a "no-privacy policy" to eliminate whatever privacy expectations their employees would otherwise have. If the Supreme Court does decide Quon on the theory that such a "no-privacy policy" could eliminate the employee's expectation of privacy, it would cut the heart out of the Stengart decision.
However, while I still think it is likely that the Supreme Court will issue an employer friendly ruling in Quon, it doesn't necessarily have to do so based on the theory that a "no-privacy policy" can eliminate an expectation of privacy. As mentioned by Justice Kennedy (see page 12 of the transcript), the city had two arguments it could prevail on:
One, that it's -- there is no reasonable expectation of privacy [this would be the no-privacy policy argument]; [two]even if there were, that this was a reasonable search [meaning that the no-privacy policy wouldn't have to be effective for the city to win].
Further, Justice Scalia seemed to indicate that the second of those rationales would be an easier way for the Court to find in favor of the city (see page 24 of the transcript). As a result, when the decision in Quon does come out, I think there is a good chance that it will be possible to distinguish that decision from Stengart by pointing out that Quon (once the hypothetical decision comes out) was decided based on the reasonableness of the employer's actions, rather than based on the effectiveness of the employer's no-privacy policy.
Of course, it's also possible that the Supreme Court will hold that the no-privacy policy in Quon eliminated the employee's reasonable expectation of privacy. If that happens, there are still a number of grounds on which the two cases can likely be distinguished. For example, Stengart was decided based on New Jersey common law, while Quon was a fourth amendment case. However, I find that distinction analytically unsatisfying, since Stengart made clear that the analysis under the common law was similar to that under the fourth amendment, and didn't turn on any distinction between them. It's also possible that the cases could be distinguished based on the fact that the communications in Quon were personal messages, while those in Stengart were messages from an attorney about a case. While this is slightly more satisfying, since courts have traditionally been highly protective of the privilege, it seems a bit odd that a reasonable expectation of privacy would turn on the content of a message.
In any case, it's possible that all this prognostication is beside the point. The Supreme Court hasn't ruled in City of Ontario v. Quon, and, until it does, there's no real way to know what impact it will have on Stengart. However, given the above, even once it does, I think there's a good chance that it'll leave the employee protections of Stengart mostly intact.
Tuesday, April 20, 2010
City of Ontario v. Quon
Yesterday, the Supreme Court heard oral arguments in City of Ontario v. Quon (transcript here), a case which addressed the ability of government employers to read personal text messages sent using government pagers. The background: Jeff Quon was a SWAT Sergeant who used a department-issued pager to exchange text messages with his wife and girlfriend. After Quon repeatedly exceeded the department's 25,000 character/month limit, an audit was conducted which revealed Quon's personal text messages. Quon sued, claiming that he had a reasonable expectation of privacy in his personal text messages, and that reading the messages as part of the audit was an unreasonable search. The district court disagreed, the Ninth Circuit court of appeals reversed, and the Supreme Court granted cert.
There were a couple of factual issues in the case, such as whether the police department's policy regarding personal communications covered text messages, and whether that policy had been modified by a later staff meeting where a Lieutenant had said that he wouldn't audit the messages as long as the individual employees paid for any overages. However, as described in the Scotuswiki (which did a pretty good job of summarizing the case and arguments), at oral argument, the Supreme Court seemed to be minimizing those factual issues, and coming down pretty squarely against Sergeant Quon. The Scotuswiki cited Justice Ginsburg as indicative of the court's apparent leanings. My preference would have been Justice Scalia, for this characteristically blunt exchange:
(emphasis added)
JUSTICE SCALIA: I guess we don't decide our -- our Fourth Amendment privacy cases on the basis of whether there -- there was an absolute guarantee of privacy from everybody. I think -- I think those cases say that if you think it can be made public by anybody, you don't -- you don't really have a right of privacy. So when the -- when the filthy-minded police chief listens in, it's a very bad thing, but it's not offending your right of privacy. You expected somebody else could listen in, if not him.
MR. RICHLAND [representing the City of Ontario]: I think that's correct, Justice Scalia.
JUSTICE SCALIA: I think it is.
Of course, whether you focus on Scalia, or Ginsburg, or one of the other Justices, the result looks the same - the Supreme Court is likely to decide that, at least for SWAT personnel using government issued pagers, employers are allowed to audit text messages by reading them, even if some of those text messages are personal.
Sunday, April 18, 2010
Yahoo Fights for Privacy; Ultimate Result Inconclusive
Via this story from Wired.com, Yahoo has "prevailed" in its efforts to resist a court order to turn over emails based on an assertion that the emails were "relevant and material to an ongoing criminal investigation," rather than on a warrant. Technically, at least in the legal sense, Yahoo did prevail: federal prosecutors, who had requested the emails as part of their investigation into a sealed criminal case, dropped their request, so Yahoo never had to turn over the particular emails at issue. However, in a broader sense, Yahoo's "victory" is an empty one, and could arguably be treated as worse than a clear loss. The reason is that the heart of Yahoo's dispute with the prosecutors was the interpretation of the stored communications act. As I mentioned previously (see here), this law has been the subject of substantial controversy, and a definitive ruling could have helped clarify the situation. As it is though, the cloud of uncertainty remains, leaving future litigants in the same situation of potentially having to defy a court order when prosecutors request emails that are arguably material, but which can't be obtained with a warrant.
Wednesday, April 14, 2010
Personal Emails on Company Computers
In December of 2007, Marina Stengart was employed as the Executive Director for Nursing at Loving Care Agency Inc., a company which provides home-care nursing and health services. Sadly, Ms. Stengart's relationship with Loving Care soured, and she left Loving Care and sued for, among other things, harassment based on gender, religion and national origin. However, before she left, Ms. Stengart used a laptop computer provided by the company to exchange emails with her attorney. When she left, she returned the laptop to Loving Care, and they were able to retrieve and read those emails by examining her computer's cache.
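The Stengart opinion doesn't detail the forensic process, but the underlying mechanism is mundane: webmail pages, like any other web pages, can be written to the browser's disk cache, where they persist after logout. Here is a minimal sketch, in Python, of how a reviewer might sweep a cache directory for email-related content - the paths and search terms are hypothetical, not anything from the actual case:

```python
import os
import re

# Hypothetical cache location and search terms -- the opinion doesn't
# describe the actual forensic process Loving Care's experts used.
CACHE_DIR = r"C:\Users\mstengart\AppData\Local\BrowserCache"
PATTERN = re.compile(rb"attorney|privileged|counsel", re.IGNORECASE)

def scan_cache(cache_dir):
    """Walk the browser cache and flag files containing email-like content."""
    hits = []
    for root, _dirs, files in os.walk(cache_dir):
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(path, "rb") as f:
                    if PATTERN.search(f.read()):
                        hits.append(path)
            except OSError:
                continue  # locked or unreadable files are common in caches
    return hits

if __name__ == "__main__":
    for path in scan_cache(CACHE_DIR):
        print(path)
```

The point, for present purposes, is that nothing exotic is required - which is why the court's focus was on what the reviewing lawyers should have done once they realized what they had, rather than on how they got it.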
Not surprisingly, her lawyer went berserk (which, when a lawyer does it, is called applying for an order to show cause) and said that Loving Care's attorney should have treated the emails as privileged and returned them once they were discovered. Loving Care's attorney disagreed, and, on March 30, the New Jersey Supreme Court issued a comprehensive opinion (which can be found here) stating that Loving Care's attorney should have treated the emails as privileged and remanding to the trial court to determine an appropriate sanction.
Some interesting points from the opinion:
1) The Court said that Loving Care's policy regarding personal emails received on company machines was not entirely clear. However
Because of the important policy concerns underlying the attorney-client privilege, even a more clearly written company manual -- that is, a policy that banned all personal computer use and provided unambiguous notice that an employer could retrieve and read an employee's attorney-client communications, if accessed on a personal, password protected e-mail account using the company's computer system -- would not be enforceable.
2) The fact that Ms. Stengart was technically unsophisticated and didn't know that her computer automatically cached documents contributed to her having a reasonable subjective expectation of privacy in the emails. If she had been more technically savvy, the Court may not have decided the emails were protected (though, given the policy considerations surrounding the privilege, I wouldn't bet on it).
3) Even though it wasn't searching for privileged materials, once it found that it had emails that were potentially privileged, Loving Care's law firm had a duty not to read them, and to report them to Stengart's lawyer. Because Loving Care's firm didn't do that, they could be disqualified and/or forced to pay Stengart's costs (or face whatever other sanctions the trial court deems appropriate).
An interesting case, and a result I'm sure was an unpleasant surprise to Loving Care.
via this article from Computer World.
Internet Giants’ Online Advertising Practices Challenged
Just as one might wonder whether the FTC had decided to choose its battles and allow the online behavioral marketing dog to continue its nap, the dog has been awakened with a loud boom. Targeted behavioral advertising practices have been in the crosshairs of privacy advocates for several years, and the privacy advocates have finally pulled the trigger. The Center for Digital Democracy (CDD) and two other public interest groups filed a complaint with the Federal Trade Commission last week challenging the tracking and profiling practices used by Internet companies such as Google, Yahoo and Microsoft. Specifically, the complainants ask the Internet companies to acknowledge that the software “cookies” they embed in a Web browser collect data about a person’s online movements that should be considered personally identifiable information, even though the cookies don’t have a person’s name attached to them.
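To make the mechanism concrete, here is a minimal sketch, in Python (standard library only), of the kind of tracking the complaint describes: a server hands each new browser a unique ID cookie, then accumulates a browsing history against that ID. No name is ever attached - which is exactly the complainants' point that the profile can be identifying anyway:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from http.cookies import SimpleCookie
import uuid

# In-memory profile store keyed by cookie ID -- a toy stand-in for the
# ad networks' real (and far more elaborate) tracking infrastructure.
profiles = {}

class TrackingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        cookie = SimpleCookie(self.headers.get("Cookie", ""))
        if "uid" in cookie:
            uid = cookie["uid"].value          # returning browser
        else:
            uid = uuid.uuid4().hex             # new browser: mint an anonymous ID
        # Every request adds the visited path to this browser's profile.
        profiles.setdefault(uid, []).append(self.path)
        self.send_response(200)
        self.send_header("Set-Cookie", f"uid={uid}; Max-Age=31536000")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(f"pages seen for {uid}: {profiles[uid]}".encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), TrackingHandler).serve_forever()
```

Whether that accumulated history "should be considered personally identifiable information" is, of course, the legal question the complaint puts to the FTC.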
The privacy groups claim they are not calling for an outright ban of behavioral advertising. Instead they seek a balance between what they term the “Wild West” of data collection in the world of online advertising, and privacy controls such as notice and consent. Specifically, CDD, U.S. PIRG and World Privacy Forum called on the FTC to investigate the internet companies using its Section 5 authority for conduct that constitutes unfair and deceptive practices, and to issue an injunction against the unfettered use of what they claim is personal information collected by the companies. A full copy of the complaint can be found here.
The use of targeted behavioral advertising has been a controversial practice for several years, with privacy advocates sounding the alarms, and advertisers pushing for self-regulation. Following the release by the FTC of the FTC Staff Report: Self Regulatory Principles for Online Behavioral Advertising in February, 2009, various industry associations released the Self-Regulatory Principles for Online Behavioral Advertising in July, 2009. In the Conclusion to its Report, the FTC stated that it would continue to evaluate the industry’s efforts at self-regulation, monitor the marketplace and conduct investigations to determine whether there have been violations of Section 5, and meet with industry representatives and consumer protection groups to keep pace with changes. There has been no official word from the FTC in response to the industry’s publication of its Self-Regulatory Principles.
One can only surmise that the consumer protection groups simply got tired of waiting. How the FTC proceeds in response to the complaint will reveal how forcefully the FTC intends to address the online behavioral marketing phenomenon going forward.
Sunday, April 11, 2010
Microsoft v. Waledac
This is a site that all lawyers working in the area of computer security should be aware of and visit. It's a page which contains all the pleadings from Microsoft's current case against John Does 1-27 (aka the "Waledac" botnet). This page is important for two reasons. First, Microsoft's efforts against the botnet are on the cutting edge of legal efforts to shut down hacking operations, and so should be seen as examples of legal theories that can be used in that area. Second, it has some interesting (and probably useful) examples of rhetoric and explanations which can be used to sway a (presumably) technologically unsavvy judge to your side. For example, on pages 3-9 of the PDF of Microsoft's motion for a temporary restraining order against the botnet, there is a non-technical tutorial on what a botnet is, and how issuing the TRO would shut it down, complete with pictures. Similarly, in making the arguments in support of the TRO, Microsoft repeatedly seeks to establish the harm the botnet is causing by explaining how it harms Microsoft's customers. E.g.:
Once customers' computers are infected and become part of the botnet, they are unaware of that fact and may not have the technical resources to solve the problem, allowing their computers to be misused indefinitely. Thus, extrajudicial, technical attempts to remedy the problem alone are insufficient and the injury caused to customers continues.
While this might not be the most relevant argument legally (after all, one is generally not allowed to bring suit based on injuries to third parties), from an emotional standpoint it almost certainly made the judge more likely to grant Microsoft's requested relief.*
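For readers who want the technical intuition behind the TRO: bots typically poll a hard-coded list of command-and-control domains for instructions, so a court order that takes those domains away from the botnet's operators leaves the infected machines with no one to take orders from. A minimal sketch of that polling loop follows - the domains are hypothetical, and the real Waledac infrastructure was considerably more elaborate:

```python
import time
import urllib.request

# Hypothetical command-and-control domains -- the real list that the TRO
# targeted is in the pleadings, not reproduced here.
CONTROL_DOMAINS = ["update-check.example.com", "stats-sync.example.net"]

def fetch_command():
    """Poll each control domain in turn; return the first command received."""
    for domain in CONTROL_DOMAINS:
        try:
            with urllib.request.urlopen(f"http://{domain}/task", timeout=5) as resp:
                return resp.read().decode()
        except OSError:
            continue  # domain seized or unreachable -- try the next one
    return None  # every domain dead: the bot is orphaned

if __name__ == "__main__":
    while True:
        command = fetch_command()
        if command:
            pass  # a real bot would act on the command (send spam, etc.)
        time.sleep(3600)  # check in again in an hour
```

Seize every name in CONTROL_DOMAINS and fetch_command() returns None forever - which is the practical effect Microsoft asked the court to order.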
In any case, there's too much there to succinctly summarize here. Further, there's no reason to want to read a summary. The information is valuable enough to be worth the time to read in the original.
*Yes, I am aware that harm to third parties can be used to establish that issuing an injunction is in the public interest. However, Microsoft invoked its customers' interests essentially everywhere, not only when arguing that the public interest would be served by granting a TRO.
Sunday, April 4, 2010
Cloud Computing: Good for Privacy?
In general, cloud computing is not good for privacy. For documents stored on the cloud, not only is there the same risk of hacking that is present for all electronic documents, but there's also a risk that the cloud service provider will accidentally share your data with other clients or users who don't have your permission to see it (see, e.g., Google Privacy Blunder Shares Your Docs Without Permission). Now, however, a group of technology companies is coming together to try to address some of the concerns related to cloud computing with a positive change in the law. As described in this article, the group, calling itself the Digital Due Process Initiative, is pressing for the law regarding access to electronically stored information to be clarified, and the protections for that information to be strengthened.
To my mind, this is a positive development. The law on what protections are afforded to electronic communications is not at all clear, as there is currently a split between the First Circuit's decision in U.S. v. Councilman and the Ninth Circuit's decision in Konop v. Hawaiian Airlines on the question of when (and if) the protections of the wiretap act apply to email (see here). While clarifying that (and preferably strengthening existing law) won't eliminate problems that could be caused by cloud service providers accidentally sharing data, if the coalition succeeds, it would change cloud computing from a phenomenon which is almost wholly destructive of privacy, to one which could have beneficial effects, at least in terms of lobbying and raising people's awareness of the issues.
Sunday, March 21, 2010
Punishing Cybercrime
Is chasing cybercrooks worth it?
That's the headline to this article from CNN. I was a bit shocked to see it. The triggering event for that article was the arrest of three men who appear to have operated the 13 million computer "Mariposa" botnet. I would have expected that taking down such a significant* botnet would be followed by multiple rounds of self-congratulation, rather than questions about the value of the whole enterprise. However, according to the article
the whole get-the-bad-guys effort, while it makes for good drama, is a futile way to secure the Internet, some computer security experts say.
"The virus writers and the Trojan [horse] writers, they're still out there," said Tom Karygiannis, a computer scientist and senior researcher at the National Institute of Standards and Technology. "So I don't think they've deterred anyone by prosecuting these people."
...
It would be smarter, Karygiannis said, to develop new anti-virus technologies and to teach people how to protect themselves from Internet crime.
To my mind, the sentiment reflected in the above quote is simply wrong.
First, Karygiannis' proposed alternatives are, at best, highly imperfect solutions. With respect to user education, I suspect Karygiannis has underestimated how difficult user education actually is, though, given that it's common knowledge that people still fall for Nigerian email scams (see, e.g., here), I don't know why he would. Further, even if user education were perfect, it's not at all clear how it would protect against malware which spreads by exploiting vulnerabilities in legitimate software. Indeed, Mariposa itself has been observed to spread through vulnerabilities in Internet Explorer 6 (among other vectors, described here), so even the specific botnet addressed in the article provides a counterexample to the proposition that user education is some kind of panacea.
With respect to better anti-virus technologies, technical protection mechanisms are certainly helpful, but they too aren't a panacea. Better anti-virus protection is nice, but the people writing malware aren't dummies, and they constantly improve their products to address advances in security technology. A great example of how this works is Conficker, a malware program whose "unknown authors are ... believed to be tracking anti-malware efforts from network operators and law enforcement and have regularly released new variants to close the worm's own vulnerabilities" (via Wikipedia).
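To see why signature updates are a perpetual race, consider a toy signature scanner: it recognizes only exact fingerprints of samples it has already seen, so even a trivially modified variant sails through until the database catches up. A minimal sketch (toy data; production engines are obviously far more sophisticated):

```python
import hashlib

# A toy signature database: hashes of known-bad samples. Real engines use
# richer signatures and heuristics, but the arms-race dynamic is the same.
KNOWN_BAD = {hashlib.sha256(b"malware payload v1").hexdigest()}

def is_flagged(sample: bytes) -> bool:
    """Flag a sample only if it exactly matches a known signature."""
    return hashlib.sha256(sample).hexdigest() in KNOWN_BAD

print(is_flagged(b"malware payload v1"))  # True: the known variant is caught
print(is_flagged(b"malware payload v2"))  # False: a one-byte change evades it
```

That asymmetry - defenders must update, attackers need only mutate - is exactly the dynamic the Conficker authors exploited.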
Second, with respect to Karygiannis' comment that "I don't think they've deterred anyone by prosecuting these people," to the extent that comment is meant literally - that cybercriminals, as a class, are immune to the deterrent effect of criminal prosecution - it seems unbelievable. That's especially true since the arrests related to the Mariposa botnet are only part of a series of well publicized law enforcement actions against cybercriminals (for example, the recommended 25 year sentence for computer hacker Albert Gonzalez, described in this article). Further, even if it were true that prosecution of cybercriminals had no deterrent effect whatsoever, it would still have the effect of preventing the particular cybercriminals who had been prosecuted from committing further crimes. This effect, referred to as incapacitation, is something that has been well studied and documented with respect to other types of crimes (e.g., here), and there is no reason why it shouldn't apply to cybercrime as well.
The bottom line is that punishment of cybercriminals is a necessary part of our collective defense against cybercrime. To simply focus on user education and technical protection mechanisms, while those are important tools, would do nothing to address the source of these crimes.
*Determining the actual size of botnets is, to put it mildly, an inexact science. For example, this article about the size of the "Kraken" botnet pointed out that the controversy regarding Kraken's size was not limited to how many machines it controlled, but also reached more basic questions, such as whether Kraken was really separate from the older "Bobax" botnet. However, regardless of how botnet size is counted, Mariposa is undeniably huge (by comparison, Kraken was estimated at 400,000 machines - several orders of magnitude smaller than Mariposa).
Sunday, March 14, 2010
Netflix Fails Data Anonymization
According to this story from the wired threat level blog, Netflix has shut down the sequel to its original $1,000,000 Netflix prize as a result of a privacy lawsuit. The problem for Netflix was that there is a specific law which prevents disclosure of a person's video rentals, and Netflix provided enough information about individual users in their supposedly anonymized training data that at least some of that data could be de-anonymized.
So, was Netflix wrong to give out the data it included in the second contest? Well, the second contest indicated what movies people had watched, and what ratings they had given them. The people weren't identified by name, but their ZIP codes, ages, and genders were provided. As it happens, there is an 87% chance that, if you have someone's birth date, zip code, and gender, you can uniquely identify that person (as related in this article, also from threat level). Netflix's data set included ages rather than full birth dates, but the combination it did release still narrows the field considerably. Does that mean Netflix's second contest ran afoul of the law? Well, it was settled, so we don't know what a court will say. However, it was certainly a significant enough risk that Netflix decided to cancel the well-publicized sequel to its earlier successful efforts, which probably means that Netflix made a bit too much public.
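The underlying attack is nothing more than a database join. A toy sketch, in Python with invented data, of linking an "anonymized" release to a public, identified dataset on the shared quasi-identifiers:

```python
# Toy "anonymized" release: no names, but quasi-identifiers remain.
anonymized = [
    {"zip": "45202", "age": 34, "gender": "F", "movies": ["Brokeback Mountain"]},
    {"zip": "45202", "age": 61, "gender": "M", "movies": ["Toy Story"]},
]

# A public, identified dataset (voter rolls, etc.) sharing the same fields.
public = [{"name": "Jane Doe", "zip": "45202", "age": 34, "gender": "F"}]

QUASI = ("zip", "age", "gender")

def reidentify(anon_rows, public_rows):
    """Link anonymized rows to uniquely matching identified records."""
    for row in anon_rows:
        matches = [p for p in public_rows
                   if all(p[k] == row[k] for k in QUASI)]
        if len(matches) == 1:  # a unique match de-anonymizes the row
            yield matches[0]["name"], row["movies"]

for name, movies in reidentify(anonymized, public):
    print(f"{name} watched {movies}")  # Jane Doe watched ['Brokeback Mountain']
```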
Now that it's all over, given the benefit of 20/20 hindsight, what should Netflix have done with the second contest? Well, from a conservative standpoint, it could probably have avoided the type of privacy complaints that came up if, instead of just removing names, it had followed the anonymization guidelines provided for medical research on human subjects (a good summary of which can be found here). That has the benefit of being the gold standard for data anonymization, and also including specific items to exclude, including the zip codes included in Netflix's data set.
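For a sense of what following those guidelines looks like in practice, here is a minimal sketch of a "safe harbor" style scrub: drop direct identifiers, truncate ZIP codes to their first three digits, and bucket extreme ages. (The actual guidelines enumerate eighteen categories of identifiers; this shows only a few, for illustration.)

```python
def scrub(record: dict) -> dict:
    """Apply a few safe-harbor-style transformations to one record."""
    out = dict(record)
    out.pop("name", None)            # direct identifiers are removed entirely
    if "zip" in out:
        out["zip"] = out["zip"][:3]  # keep only the first three digits
    if out.get("age", 0) > 89:
        out["age"] = 90              # ages over 89 are pooled into one bucket
    return out

print(scrub({"name": "Jane Doe", "zip": "45202", "age": 94, "movies": ["Toy Story"]}))
# {'zip': '452', 'age': 90, 'movies': ['Toy Story']}
```

Had the Netflix data been released in that form, the ZIP-plus-age-plus-gender join described above would have been far less likely to produce unique matches.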
Sunday, March 7, 2010
HIPAA Enforcement
Is HIPAA meaningful? For a long time, the answer to that question was arguably no. The date for compliance with the privacy rules was April 14, 2003, and the date for compliance with the security rule was two years later (the HIPAA Wikipedia entry has a good summary of this history). Nevertheless, it wasn't until 2007 that the first HIPAA audit took place (see here), and the lack of enforcement led many to believe that HIPAA was basically toothless (see, e.g., here).
Now though, that may be changing. One of the notable features of the HITECH act was that it gave state attorneys general the right to file suit on behalf of state residents who have been harmed by a HIPAA violation (the text of the act can be found here). Since then, the attorney general of Connecticut has taken advantage of that new authority, and filed suit against Health Net Connecticut, Inc. for HIPAA violations (among other things). The press release is here, and the complaint can be found here. Does this herald a new era of aggressive HIPAA enforcement? I tend to think not. The HITECH act limits the amount of damages recoverable by attorneys general to $25,000 per calendar year for violations of any individual requirement or prohibition, so HIPAA enforcement isn't going to be a panacea for states which already have limited enforcement budgets. On the other hand, there has already been one suit, and if an attorney general is already thinking about bringing an action (e.g., under some applicable state law), the extra HIPAA recovery could make the difference in whether a suit is brought. Either way though, with the Connecticut attorney general's action, the era of absent HIPAA enforcement is officially closed.
Sunday, February 28, 2010
Creepiest Privacy Violation of 2009?
Imagine your child's school offered him or her a free laptop to do homework. That'd be pretty cool, right? Now, imagine that the school administrators used a built-in web cam to surreptitiously take pictures of your children. According to the complaint filed in Robins v. Lower Merion School District, that's exactly what happened in one Pennsylvania school district (actually, it's even creepier than that, if the allegations set forth here are true). The complaint alleges violations of (among other things) the electronic communications privacy act, the stored communications act, the computer fraud and abuse act, and the fourth amendment (since the school administrators were acting on behalf of the state when they were allegedly violating the students' privacy rights).
Of course, the school officials are denying any wrongdoing, and claim they have been unfairly portrayed (see here). That could be true. After all, there's a reason we have trials, and it makes sense not to rush to judgment until after both sides have been able to have their proverbial day in court. However, while I don't want to rush to judgment, I can make a few comments at least on the legal theories in the case. First, while I understand the plaintiff's argument, that taking surreptitious web cam pictures violated the stored communications act and electronic communications privacy acts, I still don't know how good a fit those acts are for this particular (alleged) crime. After all, while the hypothetical communications (i.e., web cam images) were illicit, they weren't intercepted or accessed by anyone other than their intended recipients. Instead, I think the computer fraud and abuse act arguments seem a bit more natural. For the computer fraud and abuse act, I can't imagine how taking surreptitious pictures over a web cam wouldn't constitute unauthorized access to (or at least exceed authorized access to) a protected computer. I think the fourth amendment claim is also a good fit. While students have a lessened right to privacy in the school, there must still be a reasonable suspicion of illegal activity before school authorities can perform a search (a more detailed, and better, explanation of the relevant precedent can be found here). Further, the alleged monitoring wasn't limited to school hours, but also caught students while they were at home and, according to the complaint, "in various stages of dress or undress."
Again, all of the allegations in the complaint are just that - allegations. Until the defendants have a chance to answer, and the case is actually tried, they are presumed innocent (or, in this case, not liable). However, at least from the face of the complaint, it appears as though there could have been some serious privacy violations (potentially supporting claims under at least the fourth amendment and the computer fraud and abuse act).
(via Bruce Schneier)
Sunday, February 21, 2010
Google Buzz Lawsuit
In a completely unsurprising development, a class action lawsuit has been filed on behalf of all Gmail users who were linked to Google Buzz (story here). The complaint alleges that Google unlawfully shared users' personal data without their permission, and cites the electronic communications privacy act, the computer fraud and abuse act, the stored communications act, as well as California statutory and common law.
At this point, Google hasn't answered (or even been served with) the complaint, so we don't know how they'll defend against the suit. However, the complaint is available online (e.g., here). From my brief perusal, there are a couple of points about it that look a bit odd. For example:
The lawsuit alleges (paragraph 17) that
Google Buzz "posted" to Buzz any information that was previously posted to certain other Google websites, including but not limited to Picasa, Google Reader, and Twitter.Why Twitter is considered a Google website it something of a mystery, especially since Buzz is seen (e.g., here) as an attempt to compete with (among others) Twitter.
The lawsuit was filed in the 9th circuit (specifically, California), which has adopted an interpretation of the electronic communications privacy act which makes it relatively difficult to apply that act to email communications (see, e.g., here).
The lawsuit alleges violation of the computer fraud and abuse act, which is a little odd because that act is generally focused on unauthorized access to protected computers, rather than on unauthorized access to third party data.
Anyway, I suspect that, oddities in the complaint notwithstanding, the Buzz lawsuit will go the way of the Beacon lawsuit before it. That is, it will be settled with the individual class members getting nothing but whatever warm feeling comes from having been part of a lawsuit.* However, while it lasts, the lawsuit could be interesting (especially if Google fights at all), and might provide an incentive for Google to pay a bit more attention to privacy going forward.
*Of course, the settlement hasn't been finalized yet. The terms of the settlement, as well as other information on the Beacon case, can be found here.
Sunday, February 14, 2010
Google Buzz
On the 13th, Lawyers, Guns and Money, a blog I read regularly, posted the following complaint (originally posted at Fugitivus, a blog which is not open to the public) regarding Google Buzz:
I use my private Gmail account to email my boyfriend and my mother. There’s a BIG drop-off between them and my other “most frequent” contacts. You know who my third most frequent contact is? My abusive ex-husband.
Which is why it’s SO EXCITING, Google, that you AUTOMATICALLY allowed all my most frequent contacts access to my Reader, including all the comments I’ve made on Reader items, usually shared with my boyfriend, who I had NO REASON to hide my current location or workplace from, and never did.
My other most frequent contacts? Other friends of Flint’s.
Oh, also, people who email my ANONYMOUS blog account, which gets forwarded to my personal account. They are frequent contacts as well. Most of them, they are nice people. Some of them are probably nice but a little unbalanced and scary. A minority of them — but the minority that emails me the most, thus becoming FREQUENT — are psychotic men who think I deserve to be raped because I keep a blog about how I do not deserve to be raped, and this apparently causes the Hulk rage.
F--- you, Google. My privacy concerns are not trite. They are linked to my actual physical safety, and I will now have to spend the next few days maintaining that safety by continually knocking down followers as they pop up. A few days is how long I expect it will take before you either knock this shit off, or I delete every Google account I have ever had and use Bing out of f---ing spite.
F--- you, Google. You have destroyed over ten years of my goodwill and adoration, just so you could try and out-MySpace MySpace.
As a note, while the concerns expressed in the above complaint are personal to the author, they are by no means limited to that one individual. Depending on the study, either one in five or one in four women are victims of a completed or attempted rape (see here) at some point in their lives, and 70 percent of the perpetrators are "intimates, other relatives, friends or acquaintances" (source) who might show up as being a contact for the victim.
Of course, the problems with Google Buzz aren't limited to rape victims (see, e.g., Google Buzz: Privacy Nightmare). Instead, they're just one more example of how, when communication is commoditized, it will eventually be made publicly available.
Monday, February 1, 2010
How to Discuss Open WiFi
As reported in this article from C|NET, Cathy Paradiso, a technical recruiter who works out of her home near Pueblo, Colo., was recently threatened with having her internet access discontinued based on allegations of copyright infringement that ultimately proved unfounded. According to the article, Ms. Paradiso had an unsecured wireless network, and someone took advantage of her connection to download various television shows and movies.
Anyway, on its own, this isn't that big a deal. Certainly, it isn't that big a deal in the ongoing story of copyright infringement accusations and open WiFi (my thought is that this story about an Ohio county which had its free WiFi shut down over a copyright infringement complaint is much more noteworthy). However, something about the reporting on Ms. Paradiso's predicament rubbed me the wrong way. After noting that cutting off internet for someone who works from home is essentially the same as destroying that person's business, the article asked
is it right to penalize someone for not being tech-savvy enough to properly secure a wireless network?
To me, that's entirely the wrong question. Whether someone has open WiFi isn't just a matter of tech savvy. After all, even Bruce Schneier, who is probably the web's best known expert on computer security, has advocated for open WiFi, saying that people who maintain open WiFi make the world a better place, by making a valuable resource more easily available to more people. While Mr. Schneier's analysis of the costs and benefits of leaving WiFi open might not convince everyone that open WiFi is the way to go, it certainly disproves the idea that leaving WiFi open is something that only the technically unsavvy would do, and that policies should be built around the idea that leaving WiFi open is somehow a less legitimate choice than the alternative.
So, how would I like to have seen the article deal with the open WiFi issue? I think treating it as a real issue, with real policy consequences would have been a better way to go. For example, instead of assuming open WiFi is bad, it could have explained why the problems with open WiFi (e.g., making it harder to police copyright violations) outweigh the benefits (e.g., broader access to valuable resources). Or, in the alternative, it could have explained that open WiFi is valuable, and then discussed policies which would help foster it (for example, stripping ISPs who go after people with open WiFi of their protections under section 512 of the DMCA, under the theory that those providers are no longer acting as passive conduits, and so shouldn't be protected as if they were). Either way, it would have been a great deal more informative and interesting than simply treating open WiFi as something that happens only by mistake.
Data Security Deadline Looms
The following legal update is posted on behalf of my colleague Jane Shea.
Despite the temporary relief provided by the six-month extension to June 1, 2010 of the Identity Theft Red Flags regulations deadline, businesses that are located in Massachusetts, or who have customers or employees that are domiciled in Massachusetts, find that they must maintain their focus on data security for another reason – the Massachusetts data privacy regulations compliance deadline is March 1, 2010.
Like the Red Flags regulations, the Massachusetts law deadline has been extended multiple times since its first deadline of January 1, 2009. In addition, the implementing regulations were twice revised in response to feedback received from affected businesses concerning the strict encryption requirements and the "one size fits all" mandate for the written security program that the original regulations imposed.
The Massachusetts Data Security Law (MGL Chapter 93H) and its implementing Regulations (201 CMR 17.00) (the "Massachusetts Regulations") apply to anyone engaged in commerce, and specifically, those who "store" personal information, in addition to those who receive, maintain, process, or otherwise have access to such information. The Massachusetts Regulations apply to the personal information of Massachusetts residents, whether they are customers or employees. Thus, the reach of the Massachusetts Regulations is not limited to businesses located or operating in Massachusetts. There are no exceptions or exemptions, so that both for-profit and non-profit organizations located inside and outside of Massachusetts must comply.
"Personal information" is defined as a Massachusetts resident's first name and last name, or first initial and last name, combined with one or more of "(a) Social Security Number, (b) drivers license or state-issued identification number, or (c) financial account or credit or debit card number, with or without any required security code, access code, personal identification number or password, that would permit access to a resident's financial account." Publicly available information is not included provided it has been lawfully obtained.
The requirements of the Massachusetts Regulations are comparable to the FTC's Safeguards Rule. This Rule requires financial institutions subject to the federal Gramm-Leach-Bliley Act to maintain the security of their customers' personal financial information by evaluating security risks and adopting a written security program, and to oversee service providers' practices with respect to such personal information. Similarly, the Massachusetts Regulations impose a duty on every person that owns or licenses personal information to develop, implement, and maintain a written comprehensive information security program (WISP). The recent revisions permit the business to take a risk-based approach to information security, much like the federal Safeguards Rule's approach. The WISP must address the administrative, technical, and physical safeguards utilized. However, the size and scope of the business, as well as its resources, and the nature and quantity of data collected or stored, may be taken into account in developing the WISP.
The original version of the Massachusetts Regulations imposed specific technical computer security elements. The revised version retained the specific listing of these elements as guidance only, by adding a standard of technical feasibility, so that the requirements are technology neutral.
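By way of illustration, one of the originally specified technical elements was encryption of personal information stored on laptops and other portable devices. The sketch below shows record-level encryption using the third-party Python "cryptography" package; it is an illustration of the kind of control contemplated, not a statement of what the Regulations require, and it waves away key management, which is the genuinely hard part:

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, keys live outside the data store
f = Fernet(key)

record = b"Jane Doe,123-45-6789"   # a record containing personal information
token = f.encrypt(record)          # ciphertext safe to keep on a portable device
assert f.decrypt(token) == record  # recoverable only by a holder of the key
print(token.decode())
```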
Finally, the Massachusetts Regulations require businesses to oversee service providers, with the requirements revised to be consistent with federal law. Thus, a business is required to perform reasonable due diligence in selecting a service provider to determine that it uses appropriate security measures to protect personal information, and to contractually require such measures of their service providers.
As noted above, the deadline for compliance is March 1, 2010. The law is enforced by the Massachusetts Attorney General. Businesses with customers or employees in Massachusetts need to prepare and finalize a WISP, after reviewing and evaluating their information security operations and procedures. The suggested elements of a WISP are included in the Massachusetts Regulations, but as the revisions to the Regulations make clear, these are not intended to be a rigid template. The Regulations now recognize that the nature and operations of the businesses that are subject to the law vary considerably, and like the Identity Theft Red Flag Program requirements, each WISP will be unique based upon the particular business. Additionally, businesses subject to the Massachusetts Regulations need to review their outsourcing contracts that affect personal information to determine compliance with the Regulations by their service providers. The deadline for updating service provider contracts is March 1, 2012.
Despite the temporary relief provided by the six-month extension to June 1, 2010 of the Identity Theft Red Flags regulations deadline, businesses that are located in Massachusetts, or who have customers or employees that are domiciled in Massachusetts, find that they must maintain their focus on data security for another reason – the Massachusetts data privacy regulations compliance deadline is March 1, 2010.
Like the Red Flags regulations, the Massachusetts law deadline has been extended multiple times since its first deadline of January 1, 2009. In addition, the implementing regulations were twice revised in response to feedback received from affected businesses concerning the strict encryption requirements and the "one size fits all" mandate for the written security program that the original regulations imposed.
The Massachusetts Data Security Law (MGL Chapter 93H) and its implementing Regulations (201 CMR 17.00) (the "Massachusetts Regulations") apply to anyone engaged in commerce, and specifically, those who "store" personal information, in addition to those who receive, maintain, process, or otherwise have access to such information. The Massachusetts Regulations apply to the personal information of Massachusetts residents, whether they are customers or employees. Thus, the reach of the Massachusetts Regulations is not limited to businesses located or operating in Massachusetts. There are no exceptions or exemptions, so that both for-profit and non-profit organizations located inside and outside of Massachusetts must comply.
"Personal information" is defined as a Massachusetts resident's first name and last name, or first initial and last name, combined with one or more of "(a) Social Security Number, (b) drivers license or state-issued identification number, or (c) financial account or credit or debit card number, with or without any required security code, access code, personal identification number or password, that would permit access to a resident's financial account." Publicly available information is not included provided it has been lawfully obtained.
The requirements of the Massachusetts Regulations are comparable to the FTC's Safeguards Rule, which requires financial institutions subject to the federal Gramm-Leach-Bliley Act to maintain the security of their customers' personal financial information by evaluating security risks and adopting a written security program, and to oversee service providers' practices with respect to such information. Similarly, the Massachusetts Regulations impose a duty on every person that owns or licenses personal information to develop, implement, and maintain a written comprehensive information security program (WISP). The recent revisions permit a business to take a risk-based approach to information security, much like the approach of the federal Safeguards Rule. The WISP must address the administrative, technical, and physical safeguards utilized, but the size, scope, and resources of the business, as well as the nature and quantity of data collected or stored, may be taken into account in developing it.
The original version of the Massachusetts Regulations imposed specific technical computer security elements. The revised version retains the listing of those elements, but as guidance only: by adding a standard of technical feasibility, the revisions make the requirements technology neutral.
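To make one of those elements concrete: the listed elements include encrypting personal information stored on laptops and other portable devices, and encrypting records containing personal information transmitted across public networks, to the extent technically feasible. The snippet below is a minimal sketch of encryption at rest using the third-party Python cryptography library; it is one illustration among many, not a tool the Regulations require or endorse.

```python
# Illustrative only: encrypting a record at rest with the third-party
# "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, keep the key in a secure key store
cipher = Fernet(key)

record = b"Jane Q. Public, SSN 000-00-0000"   # fabricated example data
token = cipher.encrypt(record)                # ciphertext safe to write to disk
assert cipher.decrypt(token) == record
```

The technology-neutral point is that the Regulations care about the safeguard (unreadable data if the laptop walks away), not about any particular library or algorithm used to achieve it.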
Finally, the Massachusetts Regulations require businesses to oversee their service providers, with the requirements revised to be consistent with federal law. Thus, a business is required to perform reasonable due diligence in selecting a service provider, to determine that the provider uses appropriate security measures to protect personal information, and to contractually require such measures of its service providers.
As noted above, the deadline for compliance is March 1, 2010, and the law is enforced by the Massachusetts Attorney General. Businesses with customers or employees in Massachusetts need to prepare and finalize a WISP after reviewing and evaluating their information security operations and procedures. The suggested elements of a WISP are included in the Massachusetts Regulations, but as the revisions to the Regulations make clear, these are not intended to be a rigid template. The Regulations now recognize that the nature and operations of the businesses subject to the law vary considerably, and, as with the Identity Theft Red Flag Program requirements, each WISP will be unique to the particular business. Additionally, businesses subject to the Massachusetts Regulations need to review their outsourcing contracts that involve personal information to confirm that their service providers comply with the Regulations. The deadline for updating service provider contracts is March 1, 2012.
Wednesday, January 27, 2010
Microsoft Disaster Response
Was I the only person who saw the headline A view from Microsoft's disaster central and immediately thought that the article would be about Microsoft's efforts to contain the damage from the Internet Explorer weakness that was exploited in the Google hack?
Probably. I guess it's an occupational hazard that comes from being a lawyer who focuses on computer software.
And speaking of software, I wanted to mention that, in my hiatus from Ephemerallaw, I started a new blog, Developer Diary, which is devoted to my ongoing programming efforts. I also set up a page, By Hand Games, where you can download some of the games I've written.
Of course, the above has nothing to do with information security or data privacy. Then again, I'm not exclusively devoted to information security and data privacy, and I see no particular reason why Ephemerallaw should be either.
Sunday, January 24, 2010
Will Microsoft be sued for the vulnerability used in the Google hack?
Quick answer: I don't know, but it's less likely than it might initially appear.
Earlier this month several sources, including Wired, reported that over 30 large companies, including Google and Adobe, had been victims of a sophisticated hack, which Microsoft admits was made possible by a weakness in Internet Explorer 6. Microsoft also admits that it learned of the flaw in September, and that it was holding back a patch so that it could be released in a cumulative update that was due out next month. Given the above, and the notoriously litigious nature of the American public, it would seem that Microsoft is almost guaranteed to be hit by a lawsuit seeking damages based on the failure to release the patch earlier. Certainly, when I read that Microsoft had learned about the flaw and withheld the patch, my first thought was that this was something that would keep their lawyers busy in court for months (if not years) to come.
However, the more I think about the situation, the less I think Microsoft is guaranteed to go to court. If this had happened 3-4 years ago, I'd expect Microsoft would already have been hit by a class action lawsuit filed on behalf of consumers who used IE6. However, since that time, courts have been pretty uniformly unreceptive to claims that consumers are damaged by increased risks caused by unauthorized access to data by third parties (e.g., here). A consumer wanting to sue Microsoft for vulnerabilities in IE6 would be even less likely to succeed, since (unlike the unsuccessful plaintiffs in the security breach cases) the hypothetical consumer suing Microsoft wouldn't even be able to show that an unauthorized third party had accessed their system, only that they were at an increased risk of such access due to using IE6. Looking at that history, the chances of a consumer class action against Microsoft seem pretty slim.*
So if consumers aren't likely to sue Microsoft, what about the businesses that were victimized because of the flaw? While they'd have an easier time proving damages (after all, it is known that they were hacked, and at least some of what the hackers did), there are also forces that could keep them out of court. For one thing, most businesses try to work things out before involving the judiciary. In this case, I assume that Google, Adobe, et al. have contacted Microsoft about helping them clean up the damage. Microsoft has a significant interest in making sure those out-of-court efforts are successful, since a drawn-out court battle could only hurt Microsoft's brand in the already competitive browser market. Similarly, the companies that were hacked would probably like to avoid going to court as well, since any lawsuit would invariably call their own security into question, even if they could convince the public that their systems weren't secure because they were using unsafe products, rather than because their own internal practices were deficient.
Of course, strong incentives to avoid a court battle don't necessarily mean there won't be one. If the damage caused by the hackers is expensive enough, Microsoft might be willing to fight rather than pay for it, and the injured companies might be willing to fight to get paid. At this point it's impossible to say how likely that is. However, given the incentives on all sides to avoid litigation, I think the likelihood of a lawsuit against Microsoft over this is much lower than it would initially appear.
*Obviously, the chances aren't zero. If there were going to be a suit against Microsoft, I would expect it in a state that has allowed suits for increased risk of health problems as a result of a chemical spill. The analogy isn't perfect, but it would make it somewhat easier to prove damages.