Sunday, December 28, 2008


In case anyone was wondering:

No, I'm not dead.
Yes, I do intend to continue to post.
No, I don't intend to do so before the first Monday of 2009.

So happy new year to all, and I'll be back in about a week.

-William Morriss

Sunday, December 14, 2008

Self-Inflicted Wounds

Massive data security breaches get lots of headlines, which makes sense, since big numbers (e.g., 94 million records stolen) are an easy way to capture attention. Similarly, security breaches come with a built-in and easily understandable storyline - hackers from somewhere breached the (usually poorly implemented or obsolete) defenses of some large company, exposing large numbers of innocent consumers to an increased risk of losing money due to various forms of fraud. However, while security breaches generate easy headlines and narratives, it's important to remember that, totally independent of hackers, companies can get in trouble for improperly collecting or exploiting user data.

The newest object lesson on this point is Sony, which has agreed to pay a $1,000,000 penalty to settle charges that it violated the Children's Online Privacy Protection Act and section 5 of the FTC Act (FTC press release here, via this story from Computer World). The upshot of the complaint filed by the FTC was that Sony knowingly obtained personal information from at least 30,000 children without their parents' consent (alleged COPPA violation) and falsely stated that it restricted children under the age of 13 from participating in Sony's online activities (alleged FTC Act violation). Thus, it was Sony's websites functioning for their intended purpose, not hackers, that hurt Sony in this case.

So how can companies avoid finding themselves paying seven-figure settlements? My recommendation is to talk to a lawyer in the area who knows what he/she is doing, and to have that lawyer stay in contact with the marketing people who are responsible for the design and operation of a website. The staying-in-contact part can be particularly important. For example, as shown in the Gateway Learning case, even if a company is acting properly when an information collection program is first launched, changes made later on (e.g., starting to sell consumer data in violation of a privacy policy that said consumer data would not be sold) can expose a company to liability. My guess is that something similar happened with Sony, where the lawyers were probably consulted early in the process, but, later on, changes were made which weren't run by the lawyers first. Hopefully, settlements like Sony's will provide an incentive for other companies not to follow that same path.

Monday, December 8, 2008

Too Much Protection for Computer Security

Generally, I find that my posts advocate additional protections for data privacy, and argue that people don't pay enough attention to security. This post is the exception, where I unequivocally state that people should not be criminally liable for violating a website's terms of service, even if such a violation may technically be prohibited by the Computer Fraud and Abuse Act. As is admirably laid out in this post in the Wired Threat Level blog, the consequences of attaching criminal liability to a terms of service violation would be severe. However, while that post argues that a criminal conviction based on a terms of service violation is likely to be overturned, I'm not so sure. The Computer Fraud and Abuse Act can be analogized, roughly, to a criminal trespass statute. While I doubt that Congress intended to make random terms of service violations criminal acts when it passed the CFAA, in the real world criminal trespass can be based on entry onto the land of another in violation of restrictions placed on entry by the owner (see ORC 2911.21(A)(2)). Thus, it wouldn't be such a stretch to imagine that the application of the CFAA to a terms of service violation will be upheld. True, I think it would be a bad result, but it would not be outside the realm of the possible.

Sunday, November 30, 2008

Giving up Email

How long could you live without email? What would it cost in terms of lost productivity and increased difficulty and expense of communication?

I know that I could live without email. I suspect that doing so would significantly decrease my productivity (a suspicion supported by this study of the impact of email on productivity in a white collar environment). There would unquestionably be a period of adjustment when I would be most unhappy to lose what is probably my primary means of communication with friends and clients.

Now, Barack Obama is facing the prospect of losing his ability to use email (see article here). The short version of why is that there are concerns that email isn't secure enough for presidential communications, and the White House doesn't want the president to create an email paper trail which could potentially be subpoenaed. To me, this is crazy. Other secrecy-sensitive professions, such as lawyers (who have to protect client confidences), have managed to make peace with the limitations of email and embraced it as a useful tool (see, e.g., this opinion regarding usage of cell phones and email by lawyers). Now, it's true that the president has information (e.g., plans for the conduct of war) which is substantially more important than the confidential information lawyers have access to. However, there's no reason for the president to be completely cut off from email.

So, given that most people are not, and will never be, president, what significance does this have for the day to day lives of ordinary individuals? Only this: I don't think Obama will do it. Even back in 2000, George W. Bush lamented having to give up his email. Since 2000, people's usage of email has increased dramatically (compare this article from 2000 which predicted email usage of about 9 megs/day/person in 2001, with this white paper which puts email usage at 19.3 megs/day/person in 2008) and Obama is a famously wired individual. I predict (though I realize that there is a note of wishful thinking in this prediction) that Obama will rebel against the prohibition on email, and will use his position as the most powerful person in the world to do something about it. Maybe he'll request that technology be put in place that will make his emails more secure, and that technology will eventually become available to the public at large. Maybe he'll propose tougher laws or regulations on network service providers so that email becomes a more secure medium of communication. Whatever the case, if Obama takes action to make being a wired professional more consistent with the heightened security requirements of being president, it can't help but have positive security implications for the country as a whole.

Sunday, November 23, 2008

New Blogs (Update)

Back in June, I put up a post about the (then-new) blog Identity Theft and Business, highlighting it as a resource for news and information on identity theft. In the comments to that post, several bloggers put up links to their own blogs, which I wanted to repost here since, as I said in the June post, the run-of-the-mill stories about the latest thousand, or million, or ten million records being exposed get old fast, so new sources of informed comment can be good to have.
Anyway, without further ado, I'd like to highlight the Identity Theft Daily, and Identity (featuring Sarah Smith).

Also, from the random rhetorical question file: will the fact that Barack Obama's cell phone records were breached lead to broad support for privacy-protective legislation, since it shows that people on all parts of the political spectrum are vulnerable, or will it simply be another quickly forgotten blip in today's 24 hour news cycle? My cynical guess is the latter, but I suppose one can always hope...

Monday, November 17, 2008

Encryption and the Law

Encryption technology is so commonplace that one might think it would be required by basically all information security laws and regulations. However, as discussed in the comments to yesterday's post, encryption isn't even required by HIPAA, one of the most well known information security laws on the books. Well, as was the case with data breach notification laws, states are stepping up to fill the void left by the Federal Government. For example, as discussed in this post at The Email Admin, Massachusetts is set to implement legislation requiring encryption of personal data for its residents (rule here). It is this kind of law (+ private rights of action) that I was referring to when I said if people want legal protection they should work to get new laws passed. The Federal Government is slow, and generally lags far behind. If consumers really want to make a change, the place to do it is at the state, not the federal, level.
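The core requirement of a law like the Massachusetts rule - personal data should not sit around in plaintext - can be illustrated with a toy sketch. Everything below is for illustration only: the construction (an HMAC-derived keystream plus an integrity tag) is hand-rolled just to keep the example self-contained, and a real system should use a vetted cryptographic library rather than anything like this.

```python
# Toy illustration of storing a record encrypted "at rest." For illustration
# only -- do not hand-roll encryption in production; use a vetted library.
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudorandom keystream by hashing key + nonce + counter.
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)  # unique per record
    body = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + body, hashlib.sha256).digest()  # integrity check
    return nonce + tag + body

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, tag, body = blob[:16], blob[16:48], blob[48:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + body, hashlib.sha256).digest()):
        raise ValueError("record tampered with or wrong key")
    return bytes(a ^ b for a, b in zip(body, _keystream(key, nonce, len(body))))

key = os.urandom(32)
record = b"SSN: 000-00-0000"  # hypothetical personal data
stored = encrypt(key, record)
assert b"000-00-0000" not in stored  # what's on disk doesn't expose the data
assert decrypt(key, stored) == record
```

The point of the sketch is the shape of the obligation, not the cipher: whatever lands on the server is opaque without the key, so a stolen drive or copied database file doesn't automatically become a breach of readable personal data.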

Sunday, November 16, 2008

333,000 Unencrypted Records Exposed a Month Ago

In the "wow, that sounds bad" category, the University of Florida announced on November 12 that on October 3, they discovered that 333,000 unencrypted records for patients at the college of dentistry had been potentially accessed by unauthorized individuals. To make matters worse, the breach itself was caused when malware was remotely installed on the University's system. To make matters even worse, the malware was only discovered during a server upgrade (rather than, say, because the University's system detected and prevented installation of the malware). So, to recap, the facts (as set forth in this article from Computer World) are: (1) more than a quarter million records exposed; (2) notification takes more than a month after discovery; (3) records were patient records; (4) that were kept unencrypted; (5) on a system which was vulnerable to remote installation of malware; and (6) no automated security systems detected the remotely installed software.

Now, as it happens, I've presented the facts in such a way as to accentuate the negative, and I've done so to make a point: you aren't as protected as you think. While I don't know all the facts about this breach, simply from the facts I do know, it's not clear that any laws were broken either before or after the breach took place (other than the remote installation of the malware, of course). The HIPAA security standard regarding encryption (45 CFR 164.312(a)(2)(iv)) states that encryption of data is an addressable standard, not a required one. Similarly, Florida's security breach notification act gives a 45 day period for when notice can take place, so the month+ delay in this case could be (and, according to a spokesman, actually is) within Florida's law. Of course, even if there had been flagrant violations of both HIPAA and Florida's notification law, that wouldn't make much difference to the individuals whose information was exposed. Neither HIPAA nor Florida's law provides for a private right of action.

The bottom line? Laws relating to privacy and information security aren't as comprehensive or as effective as consumers may think. If people really want legal protection for their personal information, they should work to get new laws passed, not simply rely on the laws on the books. Otherwise, they could be in for a sad surprise when and if they try to go to court for redress when their own information is exposed.

Sunday, November 9, 2008

Really valuable information

Before the election, I noted that private information of Samuel "Joe the Plumber" Wurzelbacher had been stolen, and it had been stolen in such a way (no way to know who had logged into the system, test account open for years, multiple individuals using the same log on information) that it seemed that someone had really dropped the ball on security. However, lest I give the impression that people's information is only menaced by insecure government (or large corporate) systems, I would like to present the example of the Intel Itanium Processor. The design for the Itanium processor, like Joe the Plumber's personal information, was stolen. This is true even though the Itanium processor was undoubtedly protected by the most sophisticated security available.

The moral of the story - if it has value, it is at risk of being stolen. Whether your personal information is stored on a government server with minimal security, or on a corporate server with encryption and limited access, there is no such thing as complete safety.

Monday, November 3, 2008

Election eve privacy post

As you contemplate tomorrow's election, keep a place in your thoughts for Samuel Joseph Wurzelbacher, aka "Joe the Plumber." Of course, everyone knows the world's most famous plumber from John McCain's decision to repeatedly invoke him during his October 15 debate with Barack Obama. However, Joe the Plumber is more than a symbol of the economic everyman. He's also an example of the risks created by lax security at many government databases. As described in this article, Joe the Plumber's data was accessed using a test account created when Ohio's Law Enforcement Information Sharing Network was built - over four years ago. Apparently, the test account was shared with several unidentified contractors during construction of the system, and was still available to whoever (currently no charges have been filed) accessed the Plumber's data.

It's a little surprising that this type of screw-up would have happened. I count at least three glaring errors that contributed, none of which should have taken place. First, a test account was left open for 4 years after the deployment of the system. Second, multiple contractors were using the same account - in general, you should have a 1:1 user:account ratio. Third, the controls weren't good enough to tell who was actually in the account. Any system storing sensitive information should have logs which can be used to determine who accessed what and when. All in all, it sounds like whoever was in charge of security really dropped the ball.
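The third error - no way to tell who was in the account - is the easiest one to sketch in code. Here is a minimal, hypothetical audit log (all names and fields are invented for illustration): each lookup is recorded against an individual account, so investigators can later answer "who accessed this record, and when?"

```python
# Minimal sketch of per-user access logging; names and fields are hypothetical.
import datetime

class AuditLog:
    def __init__(self):
        self._entries = []

    def record(self, user: str, record_id: str) -> None:
        # One account per person, never shared -- otherwise the log is useless.
        self._entries.append({
            "user": user,
            "record": record_id,
            "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })

    def who_accessed(self, record_id: str) -> list:
        # The question investigators in the Joe the Plumber case couldn't answer.
        return [e["user"] for e in self._entries if e["record"] == record_id]

log = AuditLog()
log.record("contractor_a", "wurzelbacher_sj")
log.record("clerk_b", "wurzelbacher_sj")
log.record("clerk_b", "some_other_record")
assert log.who_accessed("wurzelbacher_sj") == ["contractor_a", "clerk_b"]
```

Of course, a real system would write the log somewhere tamper-resistant and tie "user" to authenticated credentials; the sketch just shows why a shared test account defeats the whole exercise - every entry would say the same thing.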

Of course, that's why symbols like Joe the Plumber are valuable. His data security incident reflects the risks that face us all, and serves as a potent reminder that none of us are truly safe from having our private data compromised.

And, on that happy note, I hope everyone (in the U.S.) has a great election day, and takes the time to vote.

Thursday, October 30, 2008

The Most Anticipated Patent Case Ever

Last year, Microsoft was hit with a $1,500,000,000 verdict in a patent infringement suit related to MP3 technology (see here, later thrown out). In 2006, RIM agreed to pay over $600,000,000 to settle litigation related to the ubiquitous BlackBerry (see here). Last year Vonage agreed to a $100,000,000+ settlement with Verizon over patents for VOIP technology (see here). The bottom line is that patents for software are big money, which was why In re Bilski, a decision the Federal Circuit issued today, was so anticipated. You see, many people had thought that Bilski might put an end to software patents, or at least curtail patent protection for business methods.

My take on the subject was somewhat different. As I explained in this guest post at Patent Baristas, I felt that it was unlikely Bilski would have much effect, and that even if the Federal Circuit wanted to, it couldn't eliminate software patents. The reason was that the Supreme Court's decision in Diamond v. Diehr said a patent couldn't be invalidated on the basis that it included software, as long as the claimed invention as a whole performs a function the patent laws were designed to protect (e.g., transforming or reducing an article to a different state or thing). As I wrote in that guest post,
"I can easily tie almost any process I write claims for to a computer, and it would be a trivial task to require that the computers make a physical change in an article (e.g., printing an invoice)," which meant that, based on Diamond v. Diehr, software patents were safe.

So, what did the Federal Circuit do in Bilski? Well, everyone who had anticipated the death of software patents was undoubtedly disappointed. The Federal Circuit specifically addressed and smashed that hope: "we decline to adopt a broad exclusion over software or any other such category of subject matter beyond the exclusion of claims drawn to fundamental principles set forth by the Supreme Court." Bilski, FN 23. It also adopted a "machine-or-transformation" test for patent eligibility (from page 10 of the opinion): "A claimed process is surely patent-eligible under § 101 if: (1) it is tied to a particular machine or apparatus, or (2) it transforms a particular article into a different state or thing" - exactly the approach I had recommended in my guest post for obtaining patent protection for software inventions. The Federal Circuit's reasoning was also strikingly similar to my guest post, including an extended discussion of Diamond v. Diehr (see pages 7-9 of the opinion) and used that case to answer potential objections based on arguably contrary Supreme Court precedent (see FN 8: "To the extent it may be argued that Flook did not explicitly follow the machine-or-transformation test first articulated in Benson, we note that the more recent decision in Diehr reaffirmed the machine-or-transformation test. See Diehr, 450 U.S. at 191-92. Moreover, the Diehr Court explained that Flook 'presented a similar situation' to Benson and considered it consistent with the holdings of Diehr and Benson. Diehr at 186-87, 189, 191-92. We thus follow the Diehr Court's understanding of Flook.").

The bottom line is that Bilski reaffirmed the patentability of computer software, and did so in a manner which was strikingly similar to what I had predicted some 7 months previously (the guest post went up on March 6, while the actual decision came down October 30). For the future, this can be a lesson: if there's a billion dollar patent law question, you can either wait for the court to decide it, or you can ask me, and I'll tell you the answer.

NOTE: While I'm aware that this blog primarily focuses on the law related to information security and data privacy, when I read Bilski I had an almost irresistible urge to crow about my previous analysis being validated. Thus, given that blogs are basically tailor made platforms for self promotion, I felt that this would be as good a platform as any to engage in a bit of self-congratulation.

Tuesday, October 28, 2008

Red Flag Rules Delayed

Happy news for all organizations which would have been affected by the FTC's red flag rules: the deadline for enforcement of the rules has been pushed back six months from its original date of November 1, 2008. The rule requires that creditors and financial institutions implement identity theft prevention programs, but the FTC found that many companies needed more time to come into compliance. The new enforcement deadline is May 1, 2009. In its statement, the FTC said that the extension does "not affect other federal agencies' enforcement of the original November 1, 2008 deadline for institutions subject to their oversight to be in compliance."

We (and by we, I mean my colleague Jane Shea) previously wrote about the red flag rules here and here.

Monday, October 20, 2008

Consumer Self-Protection

Yesterday I posted about weaknesses in systems deployed by the IRS. In that post, I used the weaknesses as an example of the limits of government regulation, given that they showed that even the government itself couldn't keep its house in order. However, something I didn't explicitly address in that post is that the weaknesses in the IRS' systems also demonstrate that there are serious limits on what consumers can do to prevent their information from being compromised. After all, you can't avoid paying taxes, and, by definition, the information held by the IRS is highly sensitive financial data. The result is that, simply by virtue of being an American and following the law, your information is at risk.*

So what can ordinary consumers do to protect themselves? In the case of information security, for individuals, I'd say that an ounce of cure is worth a pound of prevention. That is, rather than worrying about protecting your data (which should be the responsibility of the merchants/government entities your data is entrusted to), individual consumers should worry about how they'll find out about, and deal with, a compromise of their data. Easy steps like credit monitoring, promptly disputing unauthorized charges, and maintaining backup accounts/lines of credit in case one gets frozen as a result of fraud can make recovering from a data compromise - which is extremely hard to prevent - a substantially less miserable experience.

*As a note, I don't mean to single the IRS out as an exceptionally bad actor. Indeed, if you compare the IRS' security practices with security practices at TJX before their big breach, I think the IRS comes out way ahead.

Sunday, October 19, 2008

Weaknesses in Government Systems

According to this report (via), the IRS deployed two major software systems, its Customer Account Data Engine (CADE), and its Account Management Services (AMS) system, despite the existence of "known security vulnerabilities relating to the protection of sensitive data, system access, monitoring of system access, and disaster recovery." Obviously, this is a problem. Indeed, given some of the vulnerabilities noted in the Computer World article summarizing the report (e.g., failure to encrypt data either in storage or transit), the IRS systems wouldn't even pass the private sector PCI Data Security Standard, let alone government imposed standards such as those in HIPAA.

The interesting part of the report, though, is not that the IRS deployed systems with flaws. Frankly, while that part may be depressing, similar mistakes take place in both the public and private spheres frequently enough that the existence of one more flawed system doesn't really raise my attention. What interests me about the report is that it shows the limits on what you can do with regulation. The IRS has specific guidelines and requirements for handling data that, in theory, should have prevented the deployment of systems with known vulnerabilities. Moreover, as the report noted, the IRS had implemented development policies which "require security and privacy safeguards to be planned for and designed in the early phases of a system’s development life" - something that many private sector businesses would benefit from doing. The problem was that the IRS' cybersecurity organization knew about the vulnerabilities and accepted them anyway - in other words, it decided to save money by skimping on security for taxpayer information. With that kind of culture (which I find a bit surprising in government) it's not likely that an organization will have good security, regardless of how heavily regulated it is.

So how do you create a security conscious culture? The easy answer is feedback. Make sure that there are rewards for doing things right, penalties for doing things wrong, and that the rewards and penalties (as well as what counts as right and wrong) are well known. Unfortunately, that easy answer is only easy in theory. In practice it's really hard to implement, and involves things like keeping open lines of communication, making sure decision makers pay attention to security even though it doesn't contribute directly to the bottom line, and educating people about what resources are available in an organization to provide decision support on security issues. While it seems that there is a slow change underway from a culture where consumer data is treated only as something to be valued, to a culture where it's viewed as something to be protected, that change is very slow indeed. Before the change is complete, I think there will be many more reports revealing that large entities (both public and private) have undervalued securing consumer data.

Sunday, October 12, 2008

Can Privacy Come Back?

In this interview at Computer World, private investigator Steve Rambam argues that "Privacy is dead. Get over it. You can't put the genie back in the bottle." His argument seems to be based in large part on his own database, which supposedly contains "pretty much every American's name, address, date of birth, Social Security number, telephone number, personal relationships, businesses, motor vehicles, driver's licenses, bankruptcies, liens, judgments [etc...]".

He uses that database, as well as advances in computer technology and changes in government policy to make the case that more and more information is becoming available about people, and that privacy is a thing of the (rapidly receding) past.

My belief is that Rambam is wrong. I'm willing to concede that the state of individual privacy right now is pretty grim (though I don't think it's dead). However, there is a substantial disconnect between observing that things are bad now, and concluding that they'll never get better in the future. Indeed, as my own contribution to putting Rambam's genie back in the bottle, I would like to present the following things people can do to use the law to help privacy:
1) Remember the FTC. While people generally have little success in suits alleging damages based on exposure of their personal data, the FTC has broad enforcement authority to combat unfair and deceptive trade practices. That means that if a company isn't following its privacy policy, or if it's saying it values privacy while actually selling your personal information to the highest bidder, a complaint to the FTC could be a way to deal with it.
2) Watch the EULAs. As I have written before (e.g., here) contract law in general, and abusive end user license agreements in particular present a serious threat to privacy. Thus, when someone asks you to click before continuing, read what it is that you're being asked to agree to and, if it's abusive, don't agree. In fact, not only should you refuse to agree, you should also complain. While generally consumer complaints are of questionable effectiveness, if a company is interested in its image, it can lead to changes in behavior (e.g., Google Chrome).
3) Know your rights. For example, the Fair and Accurate Credit Transactions Act prohibits printing complete credit or debit card numbers on receipts. By being aware of their rights, consumers can know how to protect themselves and their privacy, either by enforcing their rights themselves (e.g., through a private suit) or through others (e.g., by bringing an FTC complaint).
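The FACTA receipt rule mentioned in point 3 is concrete enough to sketch. The function name and masking style below are my own invention for illustration; the underlying requirement (15 U.S.C. § 1681c(g)) is that an electronically printed receipt show no more than the last five digits of the card number, and most merchants show only the last four:

```python
# Hedged sketch of FACTA-style receipt truncation; the function is hypothetical.
def mask_for_receipt(card_number: str) -> str:
    digits = [c for c in card_number if c.isdigit()]
    visible = digits[-4:]  # keep only the last four digits
    return "*" * (len(digits) - 4) + "".join(visible)

assert mask_for_receipt("4111 1111 1111 1234") == "************1234"
```

If a receipt shows the full card number (or the expiration date, which the statute also restricts), that's exactly the kind of violation a consumer armed with knowledge of the law can spot and act on.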

Wednesday, October 8, 2008

Ohio Lemon Law: What’s Covered and What Isn’t

Today, Sergei Lemberg, a lemon law attorney who normally blogs at LemonJustice, discusses what you need to know about new car lemons.

With all of the cars, SUVs, trucks, motorcycles, and RVs being manufactured in the U.S. and abroad, it’s reasonable to expect that some will have defects. After all, vehicles are incredibly complex pieces of machinery and a lot of things can go wrong. In the best-case scenario, any defects that weren’t caught by quality assurance are quickly repaired by the dealer. In the worst-case scenario, you have a vehicle with pronounced defects that make it run poorly, that constitute a safety hazard, or that reduce its value – and the dealer or manufacturer refuses to buy it back or replace it.

When that happens, Ohio lemon law can come to the rescue. Ohio lemon law covers new passenger vehicles, SUVs, vans, trucks, and motorcycles that are purchased or leased in Ohio. The motorized portions of RVs are also covered, as are used cars that are purchased within one year or 18,000 miles of delivery to the original owner.

Although it doesn’t cover minor defects (like a non-working stereo system), the lemon law does force the manufacturer to stand by its product. In order for the lemon law to apply to new vehicles, the defects have to occur during the first year from the delivery date or the first 12,000 miles on the odometer – whichever comes first. In addition, the vehicle must have been taken in one time for a problem that could cause serious injury or death, or eight times for different problems. Alternately, the vehicle can have been out of service for a cumulative total of 30 calendar days. In addition, you have to notify the manufacturer in writing of the defect within one year from the delivery date or the first 18,000 miles (whichever comes first).

If you think you have a lemon, you have to take part in the manufacturer’s dispute resolution process (if one exists) before going to court. Before you begin, though, you should have a lemon law lawyer by your side. After all, you can be sure that the manufacturer’s team of legal eagles will be there to fight your claim every step of the way. The good news is that, if your claim is successful, the manufacturer has to pay your attorney fees. Often, with the help of a lawyer, you can get a refund, replacement vehicle, or cash settlement without having to go through the entire lemon law process – and get your attorney’s fees covered in the process.

Whenever you buy a new or used vehicle, it’s important to know your rights. And, if you think your vehicle is a lemon, it pays to persevere to make the manufacturer stand by its product.

Sunday, September 28, 2008

Why So Apathetic?

Every so often, I see expressions of frustration from identity theft professionals, or people who care about data privacy in general, that people are so inexplicably apathetic. For example, in the comments to a previous post, Jason Dickens at Prosperity Protection opined that "The general public just doesn’t take this stuff seriously." Similarly, my friend Jack Dunning temporarily shuttered his blog because of what he saw as public apathy (see here).

As I have noted before, while consumers are, in fact, appallingly apathetic about their privacy, they are highly concerned about identity theft. In my previous post, I recommended that, if you want someone to care about privacy, you should try to explain that lack of privacy leads to a greater risk of identity theft. However, it occurs to me that there's more to it than just drawing the connection between privacy and identity theft. Consumers also need to know that what appears to be a common approach to trying to protect against identity theft - curtailing online shopping - isn't appropriate. A good example of this approach, and its ineffectiveness, is provided by this article, which stated that, as a result of (then) recent data security breaches, some consumers were refusing to make credit or debit card purchases with online merchants they didn't know. Of course, even ceasing to do business over the internet entirely would do absolutely nothing to protect against something like the TJX breach, where thieves exploited vulnerabilities in network security at TJX's brick and mortar stores.

Once consumers have a more realistic understanding of the ways that identity theft actually takes place (and yes, obviously internet use is a part of it, as the continued popularity of phishing scams shows), I would think it would be substantially easier to convince them that they'd be better off paying attention to their privacy than they would be retreating from the internet.

Monday, September 22, 2008

Self-Regulation by Advertisers

According to this article from Media Post, the Interactive Advertising Bureau is pushing for the creation of an industry body to create non-governmental rules to protect consumer privacy online. The goal of this self-regulation, as is the case with most self-regulation, is to prevent actual regulations from being imposed by Congress. While consumers generally appear apathetic about their privacy online, it appears that advertisers might have reason to worry. Specifically, Eileen Harrington, deputy director of the FTC's Bureau of Consumer Protection, has said that online privacy is a hot issue in Washington right now, and compared the situation of online advertisers to that of telemarketers before the government established the national Do-Not-Call List. Given that kind of comparison, it makes sense that advertisers are thinking about regulating themselves, so they can convince Congress that regulation by government isn't necessary.

Of course, the elephant in this particular room is that it's too late - section 5 of the FTC act, which prohibits unfair or deceptive trade practices, already covers online advertisers. Moreover, the FTC already uses its authority under section 5 to prosecute online advertisers. For example, currently on the FTC's privacy site there's a link to an article about a 2.9 million dollar settlement which was wrung out of online advertiser ValueClick (link here so it isn't lost when the FTC's site is updated). While I can understand the IAB's desire to forestall more regulation, if their goal was to avoid any regulation, they're about 70 years too late.

Bonus non-legal observation: when you're making a comparison, do not say the following: "It's the same issue. What's really changed, really, is everything." It completely undermines whatever point you were trying to make by the comparison, and makes your reader/listener wonder why you drew the comparison between such dissimilar things in the first place.

Thursday, September 18, 2008

And Now for Something Completely Different (and totally surreal)

Question: What happens when a criminal forum is taken down?
Answer: The criminals who used said forum launch into an orgy of mewling self pity so miserable that even an attention whoring toddler whining about being sent to bed without dinner would consider it undignified.

A little background: the forum DarkMarket, which was used by criminals to (among other things) swap stolen identities and tools for stealing more, was shut down. For most people this is, of course, a happy event, though one which I think will likely have minimal long-term significance in the overall world of identity theft. While this is clearly a setback for the criminals who used the forum, my expectation would have been that they'd slink away, perhaps to start up another forum to replace the one which had been closed. However, after reading this article about the closing of the site, it's clear that my expectation would have been wrong. Instead of slinking away, the criminals who used the forum started posting self-pitying screeds about how they were downtrodden victims, and lamenting the unfairness of it all. To me it's just nuts. What kind of a warped individual would respond to the closing of a criminal board by stating that "There must be another solution to the problem. Do we just let them win?"

Oh well, I suppose that's why I went into law, rather than turning to a life of crime.

Sunday, September 7, 2008

Perception of Privacy Policies

Here's some shocking news I learned via Bruce Schneier. Apparently:

California consumers overvalue the mere fact that a website has a privacy policy, and assume that websites carrying the label have strong, default rules to protect personal data. In a way, consumers interpret "privacy policy" as a quality seal that denotes adherence to some set of standards.

(Bruce's blog post here).

The above quotation was taken from a paper entitled "What Californians Understand about Privacy Online." Because of the understanding which consumers (at least in California) have regarding the meaning of a "privacy policy," the authors conclude that "its use should be limited to contexts where businesses provide a set of protections that meet consumers' expectations." The vehicle for that limitation could be section 5 of the FTC act, which prohibits unfair or deceptive trade practices, the argument being that, if consumers believe that "privacy policy" has a certain meaning, then it is deceptive/unfair for a web site to say that it has a privacy policy if that policy doesn't conform to consumers' preconceptions.

My opinion is that, while the impulse to prevent people from being deceived by the label "privacy policy" is certainly understandable, limiting the use of the term "privacy policy" to situations which conform to consumers' preconceptions isn't a workable solution. The biggest problem is that consumers' ideas of a "privacy policy" aren't necessarily uniform. The paper is based on a survey of California consumers, but California is known for being at the forefront of privacy protection in the United States. What should the FTC do about differences in consumer understanding between California and the rest of the country? Since the FTC act is nationwide, it would seem most logical to have a nationwide standard. However, if that nationwide standard is lower than the standard expected by consumers in California, wouldn't those consumers still be deceived by the label "privacy policy"? To me it seems that a better idea would be to allow businesses flexibility to define their own policies. Businesses which wanted consumers to be aware of specific privacy protective practices (e.g., not selling to third parties, not storing personally identifiable data, etc.) could advertise them, while businesses which didn't care could put their policies behind a "privacy policy" link. While that might not protect consumers who don't take the time to read a web site's privacy policy, it would allow privacy policies to be tailored as appropriate to particular situations (e.g., banks might have more stringent policies than search engines), and it wouldn't put the FTC in the untenable position of trying to find a standard which is both applicable and appropriate nationwide.

Tuesday, August 26, 2008

More stuff I wish I could blog about

In another installment of the disturbingly frequent series of posts which only advert to things I would write about at more length if I had more time, I present for your approval this extremely interesting article from Bruce Schneier. In the article, Bruce looks at the differing reactions of U.S. and European courts to potential disclosures of security flaws. In short, the U.S. courts, though ostensibly bound by the first amendment, prohibited disclosure of the flaws, while the European courts supported the free speech rights of the researchers who found the flaws. While Bruce didn't really explore the rich history of prior restraints in U.S. law, or discuss how antithetical such prior restraints (supposedly) are to our system, he did a very good job of explaining why suppressing free dissemination of information about security flaws is a bad idea from a practical standpoint, rather than just a legal one.

In any case, as I said at the beginning of the post, I'd love to blog about this further. However, given my current time situation, I'll have to be content with linking to the article, and identifying it as just one more example of why civil liberties (in this case freedom of speech), even when they appear to be detrimental to security interests, shouldn't be thrown aside lightly.

Monday, August 25, 2008

To the Extent Vice Presidential Candidates Matter

To the extent vice presidential candidates matter, Obama's pick of Joe Biden doesn't seem to augur well for privacy. According to this article from C|NET, Biden has a nasty habit of strongly supporting privacy unfriendly measures, usually under the guise of specious claims of law enforcement necessity. While I don't know anyone who is voting based on privacy concerns in November (including me), it would be nice to have a VP candidate who was a little bit more privacy friendly.

Sunday, August 24, 2008

Only the Guilty Have Something to Hide

A mayor shuts down a stand where little girls sold excess produce from their family's garden (link).

TSA employees ground plane by using critical instruments as handholds (link).

A pilot is placed on the no-fly list, destroying his ability to do his job (link).

On their face, these incidents aren't obviously about data privacy and information security - the nominal topics of this blog. However, it's incidents like these that come to mind when I hear that privacy doesn't matter because only the guilty have something to hide. To me, the incidents above show that government action, even when the government is faithfully enforcing regulations or laws, can be unpredictable, and even people who never knowingly commit a crime could very well be "guilty" in the sense of incurring adverse government actions. Thus, to say that only the "guilty" have any reason to care about privacy shows a dangerous lack of awareness of how easy it is to violate some law or regulation and thereby become "guilty" yourself. Even worse, when the government goes about collecting enormous amounts of data without having to justify itself and without any oversight, there will inevitably be false positives which have the potential to literally ruin someone's life (e.g., a pilot who can't do his job because he gets added to a no-fly list).

For this post I intentionally avoided cases where individual privacy is violated as a result of government lawbreaking (e.g., here, which describes an IRS employee who decided to peruse celebrity tax filings). The reason is that, while rogue employees are a problem, the attitude that only the guilty have any reason to value privacy is a problem even when the government is functioning as it is supposed to.

Tuesday, August 19, 2008

Data Storage

As a general rule, one of the easiest ways to make sure data isn't stolen is to not have it. Unfortunately, as mentioned in this paper from GFI Software, there are often legal requirements that prevent a company from purging its data. As the paper mentions, there are a variety of securities regulations that require companies to keep records. While true, that's only part of the story. For example, electronic discovery rules can prohibit a company from purging its records. What's (potentially) worse, even if a company doesn't purge its records, it can still be sanctioned under the electronic discovery rules if its records aren't in a reasonably accessible form.

The moral of the story? You need to know not just how to protect data, but what data to keep, and how to keep it in a form where you can get it back.

Thursday, August 14, 2008

There was a time when...

There was a time when privacy violations were considered a serious matter. During colonial times (yes, it's been that long) the British would issue general warrants (discussed here) which essentially gave the people executing the warrant broad power to search for contraband or make arrests, without specifying what contraband was being searched for (or why) or the reason for an arrest. To do away with this generally detested practice, the fourth amendment was written to require that:

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

Truly, it appears that the late 18th century was a heady time for privacy. By contrast, today, government seems to take the same approach to information gathering as some people do with climbing Everest - they don't need a good reason, they just do it because it's there. The stated reason for most intrusions is to prevent terrorism, but this is largely bunk. Take this plan to photograph and store the license numbers of every vehicle that enters Manhattan. If I were a terrorist who wanted to bring a bomb into Manhattan, this plan would be no deterrent whatsoever, as I would simply rent a car. This would have the advantages (from the terrorist point of view) of both being anonymous, and probably being large enough to carry more explosives than I can fit into my actual car. So why is there a plan to gather this data? My guess is that someone in government thought it would be cool, and some vendor wanted to sell a new toy, and no one even considered that broad scale, suspicionless data collection is not something that government should be involved in. *sigh*

On the bright side, during the late 18th century I would have had to worry about things like yellow fever and malaria, so I suppose it all evens out in the end.

Thursday, August 7, 2008

Drawing the Wrong Lessons from a Breach

The other day, I was listening to the radio, and a commentator said that the most significant harm that could come from a major breach like the TJX breach was not identity theft, but was actually people losing faith in doing business over the internet. Frankly, I'm not sure he was right, given that identity theft is a major problem for consumers. However, while it might not be the biggest harm from a breach, losing faith in doing business over the internet would be an inappropriate response to a breach like that at TJX for the simple reason that the internet had nothing to do with that breach. Instead, the hackers found stores which had insecure wireless connections, used them to install malicious software on the TJX corporate network, then used the software to harvest credit cards from TJX's systems. The internet didn't come into play until after the cards were stolen and the thieves needed to sell them. While avoiding doing business over the internet might avoid some types of risks (particularly phishing scams), it would have no effect whatsoever on a consumer's risk of being affected by a breach such as the one that took place at TJX.

Wednesday, August 6, 2008

Hackers Caught

As described in this article, the Justice Department has issued 11 indictments for stealing more than 40 million credit and debit card numbers. Unsurprisingly, given the nature of the crime, the suspects are from all over the world - three from the U.S., three from Estonia, two from Ukraine, two from China, and one from Belarus. The arrests are the result of years of investigation, showing both the difficulty of making arrests in cases of international card fraud, and the potential of dedicated police work.

One question raised by the article is how many more people were involved. The article says that "[t]he 41 million credit and debit numbers were used internationally," and also says that the suspects are accused of hacking into the TJX network. There's something of a disconnect between the numbers and the crime. As I mentioned here, depending on whose numbers you go by, the TJX breach involved either 94 or 45 million records. Thus, if the indicted suspects really were behind the breach, and actually did steal only 41 million numbers, it implies that they aren't the only ones who were taking numbers from TJX. Still, aside from that small detail, the indictments appear to be happy news. Hopefully the police got the right people, and will continue to do so in the future.

Tuesday, July 29, 2008

Stuff I Want to Blog About

Unfortunately, I'm about to leave on vacation, and the effort of trying to get my various work related projects in order before I leave has resulted in my not being able to write any kind of substantive blog post this week (and not much of a post last week either). Anyway, in lieu of a substantive post, I'll have to provide this: things I would blog about if I had time.

First, did you know that a major bug in the domain name system (it's the thing that actually makes the internet work) had been found? Did you know that the bug could be used by phishers to redirect people from trusted sites to data gathering or malware distribution sites without their knowledge? What kind of liability might attach to that situation? Products liability for DNS vendors? Negligence for sysadmins who don't patch? If I had time, I'd be blogging on those questions. However, as it is, I'll have to leave them hanging.
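For readers wondering how a DNS bug can silently redirect users, here is a deliberately oversimplified toy model - a plain dictionary standing in for a resolver's cache. This is not the actual DNS protocol or the 2008 attack (which exploited weaknesses in how resolvers accept responses); it only illustrates why a poisoned cache is invisible to the user:

```python
# Toy model of DNS cache poisoning. A resolver cache is modeled as a
# plain dict; the real attack is far more involved than this sketch.
# Hostnames and addresses below are made up (documentation IP ranges).

cache = {"bank.example.com": "203.0.113.10"}  # legitimate address

def resolve(hostname):
    """Return the cached address for a hostname, as a browser would."""
    return cache.get(hostname)

# Before poisoning, users reach the real site.
assert resolve("bank.example.com") == "203.0.113.10"

# A successful poisoning attack replaces the cached address with one
# controlled by the attacker. The hostname the user types is unchanged,
# so the redirection is invisible to them.
cache["bank.example.com"] = "198.51.100.66"  # attacker's server
print(resolve("bank.example.com"))  # now points at the attacker
```

The point for the liability questions above: the user did everything right and typed the correct name, which is part of what makes the allocation of fault so interesting.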

Also, Ecora actually has an interesting post on counterproductive effects of regulation. Normally, when people complain about regulation, it's something along the lines of whining about the cost of being forced to do things they should be doing anyway. However, Ecora's post discusses something a good deal more realistic - the cost of having to store data that you otherwise wouldn't. Normally, I'd like to address their argument (for example, would companies really purge their data if not for regulations like Sarbox?). However, as it is, I'll just link and leave the addressing for another day (assuming nothing happens while I'm on vacation, of course).

And now even this post is taking up more time than I realistically have. Oh well...I suppose I'm not that good at the non-substantive blogging thing. In any case, I'll be back the second week in August. While I might put something up between now and then, I wouldn't bet on it. Until then...

Monday, July 21, 2008

Fighting words

Disturbing cartoon about a dystopian surveillance state which we, happily, don't live in (yet).

Wednesday, July 16, 2008


Unless otherwise limited by court order, the scope of discovery is as follows: Parties may obtain discovery regarding any nonprivileged matter that is relevant to any party's claim or defense — including the existence, description, nature, custody, condition, and location of any documents or other tangible things and the identity and location of persons who know of any discoverable matter.

That's the text of the first sentence of rule 26(b)(1) of the Federal Rules of Civil Procedure. For the non-lawyers out there, I'll unpack it a bit. The first part, about obtaining discovery of any nonprivileged matter, means that, unless information falls into certain narrowly defined categories (e.g., attorney-client, doctor-patient, etc.), it is subject to discovery. The next part, about relevance to any party's claim or defense, means (generally) that it has to have some bearing on the subject matter of the litigation. In practice, this means that during pre-trial discovery, litigants can request essentially any records maintained by a business, its principals, and their agents (e.g., vendors). The bottom line is that, if a lawsuit takes place, the parties can request virtually any information, and that information has to be provided to them unless it falls within the narrowly defined (privileged) categories.

While massive security incidents like the TJX breach generate more headlines, these pretrial discovery rules could represent an even bigger threat to consumer privacy. Two instructive cases in this respect are Viacom v. Google and MPAA v. Bunnell. In the Viacom case, Viacom requested, and the judge ordered Google to produce, records showing who watches videos on YouTube and what videos they watch (see article here). This release of data has the potential to be even more damaging to the affected users (including me, since I use YouTube regularly) than the release of information such as social security and credit card numbers, because YouTube viewing records can be used to make out a case for copyright infringement - a charge that can bankrupt all but the super-wealthy (for example, in the case described here the defendant was found liable for almost a quarter million dollars in damages for infringing copyrights on only 24 songs). In the MPAA case, the judge also ordered that user records be turned over - in that case, the records showed what users had searched for using the popular BitTorrent software. However, there, rather than take an action which it saw as betraying its users' privacy expectations, the defendant blocked access to his web site from the U.S. - a radical solution, but the only way the defendant saw to protect his users' privacy.

The cases above showcase a trend which is, to me, highly disturbing. Instead of relying on black hat hackers, businesses can use litigation to obtain consumer information. In the cases above, that resulted in the exposure of (likely) millions of records from Google, and the complete shutdown of TorrentSpy in the U.S. Those are serious consequences, and they should be considered whenever people think of possible threats to their privacy.

Wednesday, July 9, 2008

FTC Clarifies CAN-SPAM Act

The Federal Trade Commission (“FTC”) has issued a Final Rule that adds four new provisions and provides clarification of some of the CAN-SPAM Act’s requirements. This Final Rule, effective July 7, 2008, is the culmination of work that was begun three years ago with a proposed FTC rule, and takes into account comment letters from 150 individuals, businesses, and organizations.

The CAN-SPAM Act (Controlling the Assault of Non-Solicited Pornography and Marketing Act of 2003) regulates the sending of unsolicited commercial emails, and became effective January 1, 2004. Although “spam” is generally defined as unsolicited commercial e-mail sent to a large number of addresses, the Act makes no distinction between solicited and unsolicited commercial e-mail. It defines commercial e-mail as "any electronic mail message the primary purpose of which is the commercial advertisement or promotion of a commercial product or service (including content on an Internet website operated for a commercial purpose)." Transactional or relationship messages are not subject to or regulated by the Act.

The CAN-SPAM Act outlaws certain commercial acts and practices with respect to commercial email, and imposes requirements on senders of commercial emails:

The transmission of any email that contains false or misleading header or “from” line information is prohibited.
The transmission of emails with false or misleading “subject” line information is prohibited.
The Act requires that a commercial email message contain a functioning return email address or similar Internet-based mechanism for recipients to use to “opt out” of receiving future commercial email messages.
The sender, or others acting on the sender’s behalf, is prohibited from initiating a commercial email to a recipient more than ten business days after the recipient has opted out.
A commercial email may not be sent without including three disclosures – a clear and conspicuous indication that the email is an advertisement or solicitation, a message and mechanism for the recipient to opt out of future solicitations, and a postal address for the sender.

Four specific practices are cited by the CAN-SPAM Act as “aggravated violations” which, when alleged and proven in combinations with certain other violations of the Act, will increase the statutory damages imposed upon the sender. These practices are: address harvesting; dictionary attacks; automated creation of multiple email accounts; and relaying or retransmitting through unauthorized access to a protected computer or network.
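The requirements above lend themselves to a pre-send checklist. As a rough sketch in Python - the field names and boolean rules here are my own invented simplification for illustration, not a legal test, and the statute's actual requirements are more nuanced:

```python
# Toy CAN-SPAM pre-send checklist. The attribute names below are
# invented for this sketch; this is an illustration, not legal advice.

def can_spam_issues(email):
    """Return a list of problems found with a commercial email,
    represented as a dict of its relevant attributes."""
    issues = []
    if email.get("header_is_misleading"):
        issues.append("false or misleading header/'from' line")
    if email.get("subject_is_misleading"):
        issues.append("false or misleading subject line")
    if not email.get("opt_out_mechanism"):
        issues.append("no functioning opt-out mechanism")
    if not email.get("identified_as_ad"):
        issues.append("not identified as an advertisement")
    if not email.get("postal_address"):
        issues.append("no valid physical postal address")
    return issues

msg = {
    "header_is_misleading": False,
    "subject_is_misleading": False,
    "opt_out_mechanism": True,
    "identified_as_ad": True,
    "postal_address": "P.O. Box 123, Anytown, OH",
}
print(can_spam_issues(msg))  # -> []
```

A real compliance review would also cover the ten-business-day opt-out deadline and the aggravated violations, which don't reduce to simple per-message booleans.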

Changes to Definitions

The FTC made several changes to the definitions found in the Act:

It modified the definition of “sender” to clarify that for single emails promoting the products, services or Internet website of multiple persons, each of the persons whose products or services are promoted will be deemed to be a “sender” of the email, except that such emails will be considered to have only one sender if: (1) one person is within the definition of “sender” under the Act, (2) that person is identified in the “from” line as the sole sender of the email, and (3) that person complies with certain provisions of the Act that are applicable to initiators of emails.

This change provides a more flexible approach for email marketers, and is more logical from a consumer perspective since the consumer is likely to focus on the "from" line to identify the sender. It is this sender that must honor "opt out" requests, and is responsible for the email's compliance with the CAN-SPAM Act requirements. It is important to realize, however, that liability for compliance with the Act does not shift exclusively to the sender, since certain other requirements and prohibitions imposed by the Act upon "initiators" of emails will continue to apply to all persons identified in the commercial email.

It added the new definition of “person” to mean any individual, group, unincorporated association, limited or general partnership, corporation, or other business entity. Despite strident calls by commentators to exempt non-profit entities, the FTC refused to do so, stating that consumers were deserving of the protections provided by the Act against all forms of spam, no matter the nature of the sender’s enterprise.

The Act requires senders to include a “valid physical postal address” in any commercial email. The FTC broadened the definition of this term to allow senders to use post office boxes that have been accurately registered with the U.S. Postal Service, or a private mailbox accurately registered with a commercial mail receiving agency operating according to the U.S. Postal Service regulations.

Transactional or Relationship Messages

The FTC considered whether to change the statutory definition of "transactional or relationship messages," to address various types of messages such as legally mandated notices, debt collection email communications, and employment-related messages. It ultimately declined to make any changes to the statutory definition, since none of the types of messages put forth in the Notice of Proposed Rulemaking met the statutory standard for modifying the definition. Some of the issues raised by the commentators with respect to a particular type of message could be resolved using the "primary purpose test," as in the case of legally mandated messages, messages concerning copyright infringement, or email messages for the purpose of conducting market research. In the case of others, such as messages from debt collectors, including third party agents, or in the case of most employment-related email messages, the overwhelming majority of such messages will likely fall within the existing definition of "transactional or relationship messages."

However, the FTC did provide guidance on the interpretation of some particular forms of communication:

Email messages to effectuate or complete a negotiation will be considered "transactional or relationship messages" if issued in connection with a commercial transaction. However, where an unsolicited email delivers an offer to purchase goods or services, and attempts to launch a negotiation as part of the message, it would not fall within the definition of "transactional or relationship messages."

Email messages facilitating, completing or confirming registration with a “free” internet service where there is no exchange of consideration are likely to be “transactional or relationship messages,” but the FTC was not willing to preclude the possibility that such a message may be commercial even if there is no exchange of consideration.

Where a recipient subscribes to a newsletter or other periodical to be delivered by email, or to which the recipient is entitled as a result of a prior transaction, the FTC would consider such an email to be a “transactional or relationship message,” as opposed to an unsolicited newsletter or periodical to which the recipient has not subscribed, which would likely be considered a commercial message.

Forward-to-a-“Friend” Messages

The FTC was persuaded by the commentators to modify its earlier position on forward-to-a-"friend" messages. This type of message could arise under two different scenarios - where the content of the email message encouraged the recipient to forward the message to others, and where the seller's web site encouraged visitors to supply others' email addresses. Rather than attempt to refine the definition based upon the nature and method of forwarding, the FTC established a bright line test that turns on the presence or absence of consideration for the act of forwarding. A seller would not have liability under the Act for the forwarding of these types of email messages so long as the seller did not offer consideration for the forwarding. No matter what the nature (coupons, discounts, rewards) or amount of consideration - even an offer of de minimis consideration - an offer of consideration will be sufficient to cause the seller to be an "initiator" of the forwarded message, and subject the seller to liability under the Act.

No Fee for Opting Out

The FTC adopted a rule prohibiting a sender of commercial emails from imposing a fee upon a recipient for opting out of future unsolicited emails, or from requiring the recipient to provide any information other than a recipient’s email address and opt out preferences.


Enforcement
The CAN-SPAM Act gives the FTC enforcement authority for the Act. In addition, the Act gives the state attorneys general the authority to bring an enforcement action in federal court after giving advance notice to the FTC where possible. Finally, internet service providers may bring a federal court action to enforce certain of the Act’s prohibitions. The enforcement authority given to the FTC is the same as that afforded the FTC under its trade regulation rule authority, meaning that each violation is subject to fines of $11,000 per day, with additional penalties where “aggravated violations” are proven.

Sunday, July 6, 2008

The Other Side of Consumer Data Collection

While I generally consider myself an advocate of strong consumer privacy protection, even I have to admit that there are generally two sides to every invasion of consumer privacy. For example, shopper loyalty programs are criticized for raising consumers' fraud risk, and for leading to a proliferation of annoying telemarketer and junk mail contacts (e.g., here). However, sometimes, the information gathered by grocery stores is used in ways which are unarguably beneficial to consumers. Case in point: product recalls. Before my fourth of July barbecue, I got a call from Kroger's. Apparently, the ground beef I'd purchased earlier in the week had been recalled, and should be thrown away rather than eaten. Of course, they knew who I was and what I'd purchased, because I used my Kroger card to buy the meat, which meant they were tracking my purchases and storing the data.

The bottom line is that the same type of data collection which leads to annoying circulars and telemarketer calls led to Kroger being able to provide me with information that I really needed. Of course, consumer data collection isn't an unalloyed good, but it isn't an unalloyed evil either. The trick is to find ways to deal with (or regulate) the data collection that maximize the good while minimizing the harm.

Monday, June 30, 2008

Observation on Legal Blogging

While looking at Hack-igations I noticed a fun little statement at the bottom of his post:
[Again, all my blog comments are just public discussion and not legal advice for any particular situation.]

He had one on the previous post as well:
[Again, nothing I say on this blog is legal or other professional advice. It is just general public discussion. If you need expert help, you should not rely on this blog. You should go get help.]

This (at least for someone with my sense of humor) is one of the funny side effects of being part of a profession that basically sells words - when we give words away for free (e.g., on a blog) we have to make very sure that no one confuses the public comments on our blogs with the legal advice that we sell professionally. Of course, I have a similar disclaimer here (it's at the bottom of the page above the link to Patent Baristas), but mine's a permanent part of the setup. I thought it was funny that Ben at hack-igations seems to write a new disclaimer for every single post he puts up.

Sunday, June 29, 2008

Protecting Privacy by Contract

I have long been on record as believing that modern contract law will essentially be the death of individual privacy - the basic argument being that people want their toys, so they'll click on abusive clickthroughs and EULAs that essentially sign away their personal data (see, e.g., this post on Privacy and Contract). However, recently Ben Wright has proposed that these contracts could be harnessed on behalf of privacy - essentially, that consumers could put up their own websites with terms of use that require businesses to respect their personal information (see, here). Ben even points to a case where a website's terms of use were enforced against a consumer who made a contract over the phone, to demonstrate how the mere existence of the terms of use can be used in litigation.

I think Ben's argument is appealing, and I'd like to agree with it...unfortunately, there are a couple of problems that prevent me from endorsing it. First, as a practical matter, it would be difficult to show that a company which sells an individual's personal data ever visited the website where the privacy protective terms of use were posted. In the case Ben cited to show that terms of use could be enforced even against a consumer who made a contract over the telephone, it was easy to prove that the consumer visited the website which hosted the terms of use, because the consumer was trying to enforce the website's privacy policy. However, in most cases, I think it would be hard to prove in court that a company which sells consumer data actually visited the websites of the consumers whose data is being sold. Second, even if it were possible to show that a company which sells consumer data visited the consumer's website, there is no reason to believe that a court would enforce the website's privacy protective terms of use. For example, in the case of In re Northwest Airlines Litigation, the court refused to allow consumers to sue Northwest Airlines for a violation of its privacy policy. Given that, I see no reason to believe that a court would be any more solicitous of privacy protective terms of use that a consumer might put on his or her website.

The bottom line is I like Ben's idea, and I would love to see the approach to abusive terms of service turned against businesses that don't respect privacy. However, I think the practical obstacles to implementing the idea are such that Ben's idea isn't something that most people can rely on.

Sunday, June 22, 2008

Measuring the Effect of Security Breach Notification Laws

How do you measure the effectiveness of security breach notification laws? One way is to take data on how many consumers report that they were victims of an ID theft due to a security breach, break the data down by state, and compare the states which do have security breach notification laws with those that don't. If the states that have notification laws have a lower rate of identity theft due to security breach (after controlling for various confounding variables) then you would conclude that the notification laws are effective in reducing identity theft.
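The cross-state comparison described above can be sketched in a few lines of code. This is purely illustrative: the state labels and theft rates below are invented numbers, not figures from any study, and a real analysis would also have to control for the confounding variables mentioned above.

```python
# Illustrative sketch of a cross-state comparison of ID theft rates.
# All numbers are invented for demonstration purposes; a real study
# would control for confounders (population, income, urbanization, etc.).

# Reported ID thefts per 100,000 residents attributed to security
# breaches, with a flag for whether the state has a notification law.
states = {
    "A": {"has_law": True,  "theft_rate": 11.2},
    "B": {"has_law": True,  "theft_rate": 9.8},
    "C": {"has_law": False, "theft_rate": 13.5},
    "D": {"has_law": False, "theft_rate": 12.9},
}

def mean_rate(data, has_law):
    """Average theft rate across states with the given law status."""
    rates = [s["theft_rate"] for s in data.values() if s["has_law"] == has_law]
    return sum(rates) / len(rates)

with_law = mean_rate(states, True)
without_law = mean_rate(states, False)
print(f"With law: {with_law:.2f}, without law: {without_law:.2f}, "
      f"difference: {without_law - with_law:.2f}")
```

If the difference is positive (and survives the controls), the method would count that as evidence the laws reduce identity theft - which is exactly the inference the next paragraph argues is unsafe here.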

The cross-state comparison described above was essentially the approach taken in this paper by Romanosky et al., which attempted to measure whether data breach disclosure laws reduce identity theft. Unfortunately, while measuring the effect of data breach disclosure laws is a laudable goal, I don't think the paper's approach was likely to result in any meaningful conclusion. The biggest problem with the paper's approach is that it didn't appear to adequately take into account the effect of interstate commerce in extending the coverage of existing security breach notification acts to states where those acts haven't been enacted. That isn't to say that the paper ignored this effect. However, its efforts to account for it seemed to focus on interstate movement by people (e.g., students attending an out of state university), when interstate movement of data is almost certainly a much bigger effect (largely because there is a well developed interstate market for data, while such an interstate market for people is prohibited by the 13th amendment). Most security breach notification laws are triggered not only by security breaches at in-state companies, but also by security breaches at out of state companies which expose the data of state residents. This results in a duty to disclose data traveling from the point where the data was collected to anywhere in the country. Similarly, if the data for a resident of a state which doesn't have a security breach notification act is transferred to a state where such an act does exist, the individuals whose data was transferred will benefit from the out-of-state notification law, even if they have never left their local jurisdiction.
Thus, since the effects of security breach notification acts bleed so freely across state lines, trying to measure the effectiveness of those acts by comparing jurisdictions with security breach notification acts to jurisdictions without security breach notification acts is unlikely to yield any meaningful results.

So what would be a better approach to measuring the effect of security breach notification laws? One way would be to compare jurisdictions where transfer of data is either nonexistent or severely limited. Unfortunately, it seems likely that there would be so many other differences between such jurisdictions that meaningful comparisons would simply be impossible. For example, if you were comparing between the U.S. and E.U., how would you control for the effect of the E.U. data privacy directive? Another approach would be to compare relative rates of identity theft caused by security breaches with rates of ID theft caused by something that isn't influenced by security breach notification acts (e.g., dumpster diving). The problem with that, though, is that the absolute most common cause of identity theft is "unknown." Thus, it could be that security breach notification laws would actually increase the reported incidence of ID theft due to security breaches, because some ID thefts caused by breaches would move from the "unknown" column to the security breach column. Further, when making that kind of fine grained comparison, it's necessary to have a larger data set than is necessary to simply look at overall rates of ID theft, and such a data set might not be available. The bottom line is that measuring the effectiveness of security breach notification acts is hard, and if there is a good way to do so, it isn't clear what it is.

Wednesday, June 18, 2008

New Identity Theft Blog

One of the most difficult things about running a blog is finding good material. True, it seems there's a new data security breach every few days, but reporting that another million, or thousand, or ten million records have been compromised gets old fast. Thus, I was happy to discover (discover in the sense of following a link left in a comment) a new ID theft blog: ID Theft and Business. I look forward to using it as a source for informed comment on the subject, and (hopefully) picking up a few ideas there to use for my own posts.

Tuesday, June 17, 2008

Always Go With the Original

Via The Dunning Letter, I learned about this paper which (according to Jack's post) says that data security breach notification laws don't actually work. When I first read the post discussing the paper, I was somewhat unnerved, since that would mean that one of the primary vehicles that governments have used to try and address the vulnerability of consumer data is ineffective. Happily, when I read the paper I found that this was one time that the normally astute Dunning Letter was simply wrong. What the paper actually found was that, using its data set (which, as I will discuss in a later post, was not the proper data to evaluate security breach notification laws), the authors did not detect a statistically significant effect of security breach notification laws on identity theft. However, that is different from saying that there is no effect. Indeed, the paper explicitly recommends increasing disclosure requirements to help address the lack of data: "[other authors argue that] current information is not sufficient and that banks and other organizations should be required to release identity theft data to the public for proper research. We certainly agree with this view."

So what can be gained from this? First, the paper itself is quite interesting, and I plan on addressing it in more detail in future posts. For now though, the lesson I draw from this is that you should always go to the original source when blogging. When discussing the paper, the Dunning Letter also linked to a TechWorld article with the bold headline that "Researchers say notification laws in US not lowering ID theft." My guess is that Jack probably read the TechWorld article but not the original paper. While that might be a nice shortcut, it can also (as demonstrated here) lead to perpetuating falsehoods just because they make nice screaming headlines.

Thursday, June 12, 2008

Ephemeral Law Named to Top 100

Happy news for me today. Ephemeral Law has been named as one of the top 100 civil liberties advocacy blogs by the criminal justice degrees guide. Now, of course, one could point out that Ephemeral Law's ranking, plus $3.25, would get me a coffee at Starbucks, but whatever. It certainly isn't bad news, and I'm happy with all the not-bad blog related news I can get.

Wednesday, June 11, 2008

Value of Security Breach Notification Laws

This article from Computer World advances a position which I find truly bizarre: that security breach notification laws don't help people. The article's reasoning (and I use the term loosely) seems to be that notification laws only require action after a breach takes place, so they really don't prevent identity theft. It would be better for consumers, according to the article, if the money companies now spend on complying with security breach notification laws were instead spent on security that might prevent identity theft. In any case, the article points out, more identity theft takes place due to telephone scams, lost wallets, or consumers who don't properly protect their computers. Basically, the article minimizes the harm caused by security breaches, and tries to argue that the money spent notifying consumers of the breaches would be better spent elsewhere.

Frankly, it's hard to know where to begin criticizing the article. My immediate instinct is to slam the prose. The author has a terrible habit (epidemic in lawyers, I'm sad to say) of asking rhetorical questions and making mealy-mouthed equivocations rather than just taking a position. For example, the author points out that "Enforcement of these laws may not help consumers, either." So there's a possibility that consumers may not be helped by enforcing laws. Similarly, it's possible that the sun may not rise in the east tomorrow. If the author really feels that security breach notification laws don't help people, he should say so, rather than couching his arguments in insubstantial speculation and rhetorical questions.

However, while my instinct is to slam the prose, I think it's more important to recognize that the logic underlying the prose is really, really bad. The primary mistake the author makes (and it's a doozy) is to assume that the only benefit which can come from security breach notification acts is to prevent identity theft. That's simply nuts. The primary benefit of the notification acts is that, because of them, people are notified when there's a problem. Without notification laws, businesses would never go public about security breaches, and what is indisputably a major public policy issue would simply be swept under the rug. Perhaps the author of the article thinks ignorance is bliss, but I prefer that problems be widely acknowledged so that they can be addressed. A secondary mistake the author makes is that he assumes that the more money businesses spend complying with notification laws, the less money they'll spend on security. This doesn't make sense. If businesses could sweep security breaches under the proverbial rug, they would spend even less on security. The high cost of security breach notifications (in terms of both money and bad PR) will cause companies to spend more on security, not less.

I could go on almost indefinitely about what's wrong with the author's position, but I won't. Instead, I can illustrate with a simple analogy: if the author were arguing that statutes requiring businesses to notify consumers when there was a toxic waste spill were ill-conceived because they diverted money which would otherwise be used preventing spills, he would be treated as a laughingstock. While drinking toxic waste is clearly a more direct threat to health than a data security breach, it's no more logical to allow the release of personal data to be swept under the rug than it is to allow the release of toxic waste to be covered up.

Sunday, June 1, 2008

Facebook Accused of Violating Canadian Law

According to this article from Computer World, a complaint has been filed against Facebook for violating Canada's Personal Information Protection and Electronic Documents Act (PIPEDA). If that law, and its rather unwieldy acronym, seem familiar, it could be because there were concerns last year that Google's Street View product might violate it (see, e.g., here). In the case of Street View, the concerns were raised over the broad scope and indefinite retention of the data which was collected. In the case of Facebook, there are several possible violations. First, Facebook (allegedly) does not fully inform users how broadly their information can be shared with strangers for social networking. Second, Facebook (again, allegedly) fails to notify users of how their information will be used for advertising, and shared with third parties.

Without commenting on the merits of the complaint, I will note that the Computer World article points out that

Jeffrey Chester, founder and executive director of the Center for Digital Democracy in the U.S., said the Canadian organization "has lifted the veil that covers Facebook's extensive personal data collection apparatus." [and said that]...It's a giant privacy wake-up call about Facebook from our friends up north."

My own view is a bit different. I don't think this is a wake-up call at all. American consumers already know that there are some serious privacy issues surrounding Facebook. In fact, there is already a lawsuit in U.S. court based on Facebook's Beacon program (see, e.g., here). The problem is that U.S. consumers don't really have much they can do about privacy. The lawsuit about Beacon is only possible because of a very narrow provision of federal law which covers video tape rentals and sales records, but that kind of sui generis protection doesn't really translate into decent coverage for personal information. Thus, my view is that the Canadian complaint, to the extent it's a wake-up call at all, is a wake-up call about the state of U.S. privacy laws, not a wake-up call about the threats to privacy.

Sunday, May 25, 2008

Well, he was asking for it...

Normally, someone getting their identity stolen isn't news. It's annoying for the victim, but not of great enough consequence for the rest of the world to bear reporting. However, in this case, the person whose ID was stolen was Todd Davis. While that name might not be immediately familiar, it's a good bet you've seen Mr. Davis in the near-ubiquitous online ads for Lifelock, where he poses with his social security card to show just how confident he is in Lifelock's services. Thus, for him to have his identity stolen is not just news, it's also the trigger for a lawsuit by Lifelock customers saying that Davis's identity theft shows that he knew his product didn't work, even as he promoted it nationwide.
Of course, the filing of a lawsuit, and a decision by a court that Lifelock is liable for damages are two totally different things. Indeed, I'm not sure that the existence of one identity theft incident shows that Davis knew his service didn't work. Davis has been flashing his complete social security number all over the internet for years. The fact that he was only victimized once in that time seems (to me at least) to show that Lifelock's services really do work to mitigate the threat of identity theft, though they can't eliminate it entirely.

Tuesday, May 13, 2008

More Potential Legal Troubles for Google Streetview

Ever since its introduction, Google Streetview has raised concerns about privacy (see, e.g., here). Now, Streetview is being prepared for Europe, and apparently French law is presenting a problem. According to this article from Computer World, under French law, you are not permitted to publish images of people going about their business without their permission. The article says that that's a problem for Streetview because it could require Google to employ "an army of clipboard-wielding legal assistants asking bystanders to sign release forms as they sip their coffee."

My initial take on it is that something about the article doesn't make sense. While I'm not familiar with French law, it seems unbelievable to me that any country would have regulations that prevent the publication of pictures taken in public. After all, if French law really did include that requirement, it would seem completely incompatible with newspapers publishing pictures of crowds, such as might appear at political rallies and sporting events. In any case though, if the article's portrayal of French law really is correct, then it's an example of where I think giving individuals control over some aspect of their persona (in this case their image) goes too far. The loss of privacy from allowing pictures to be published without permission is slight (if it shows up on Google Streetview it was, by hypothesis, visible to the public). By contrast, the cost is real - loss of a popular product which could spin off potentially interesting follow-on technologies. Thus, in this case, assuming the choice is real, I'd have to come down on the side of Google, rather than on the side of individual control of information.

Sunday, May 11, 2008

Pricing Personal Privacy

One perennial problem plaguing plaintiffs pursuing privacy protective pleadings is the difficulty in showing damages. When people have gone to court to try and obtain compensation from companies who exposed their personal data in a security breach incident (e.g., DSW Shoe, TJX, etc...) they have consistently lost because the courts say that they can't show damage, and therefore can't be compensated. One approach to this has been to try and argue that expenditures for dealing with the exposure of personal information (e.g., money spent on credit monitoring) should be compensated. However, courts have by and large rejected that approach, concluding that money spent on credit monitoring is intended to prevent future loss, and therefore isn't damage which the court can compensate.

However, according to this article from C|NET, criminal identity thieves have no problem valuing stolen data which has not yet been used for identity theft. Indeed, there was even a price list found on a server containing stolen business and personal data which said exactly what various accounts were worth (e.g., bank account with $16,040 had an asking price of 700 Euros; bank account with $14,400 had an asking price of 600 Euros, etc...). Now, do I think that courts should start using the price lists of criminal identity thieves to determine how to compensate victims in security breaches? No. I think a much better measure of damages would be quantifiable damages, such as the cost of replacing compromised credit cards (something I discussed here). However, even if the prices given for stolen accounts shouldn't be used as a measure of damages, they should at least be considered evidence that personal data, even if not used in identity theft, has value, and that that value should be recognized, either in current law (where it often isn't) or in future regulatory changes (where it might be).
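For what it's worth, the two prices quoted in the article imply a fairly consistent criminal "market rate" for stolen accounts, which a bit of arithmetic makes visible. (I'm deliberately ignoring the euro-to-dollar conversion; the ratios below are simply euros asked per dollar of account balance.)

```python
# Price list entries quoted in the C|NET article:
# (account balance in USD, asking price in euros).
price_list = [(16040, 700), (14400, 600)]

ratios = []
for balance, price in price_list:
    # Euros asked per dollar of balance - a rough "market rate" for
    # stolen account data.
    ratio = price / balance
    ratios.append(ratio)
    print(f"${balance} account for {price} EUR -> {ratio:.4f} EUR per USD")
```

Both accounts price out at a bit over four euro cents per dollar of balance, which suggests the thieves were valuing the data systematically rather than picking numbers at random.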

Sunday, May 4, 2008

Private Information in Court Documents

As described in a pair of articles (here and here) from Computer World, privacy advocate Betty "BJ" Ostergren has been campaigning to have personal data removed from California court websites. BJ claims that she has turned up "complete tax filings, medical reports pertaining to cases handled by the court, and images of checks complete with signatures as well as account and bank-routing numbers" on the court's website. Further, she says that it's possible to retrieve similar documents by entering popular last names at random. The response to this from the court's personnel - that they have tens of millions of documents and finding personal information among them is like looking for a needle in a haystack - is not encouraging. Essentially, everyone who comes into contact with the court's system is protected only by "security through obscurity," and there's nothing the court can do about it.

The question then, is whether the posting of thousands, perhaps tens or hundreds of thousands, of documents containing personal information to the court's website is a problem. As it happens, in my opinion it isn't. I think it is a huge benefit to society for courts to make filings publicly available. Indeed, full access to court records gives people the option of finding out how courts have handled various types of scenarios so that they can plan their actions accordingly. This ability to know (and therefore follow) the law is an indispensable aspect of any system where rule of law is taken seriously. If a court makes tens of millions of documents available, I'm not at all surprised that some small percentage of them include information which shouldn't be made publicly available. Certainly that's regrettable, but I think it's a small price to pay for making courts and the law available to all.

Does that mean I think the status quo is optimal? No. I think the response from the court is totally inappropriate. The correct response would have been to redact the personal information from the identified documents. Even then, the system wouldn't be perfect, since there's no guarantee that personal information would be discovered by privacy advocates who report it to court personnel rather than by criminals who would use it in identity theft. However, it doesn't make sense to expect any system to be perfect, and shutting down something so clearly positive as public access to court filings because they don't perfectly protect privacy would be a terrible mistake.

Monday, April 28, 2008

Hundreds of Thousands of Pages Hacked...Legal Implications Unclear

Over the past week, there has been a rash of sites which have been compromised to distribute malware. The basic idea of the attack is nothing new - legitimate sites are compromised so that when users visit them they download malicious software. What's new is the scope of the current wave (hundreds of thousands of pages compromised) and the highly trusted nature of the sites compromised (including pages run by the UN). Something else that's noteworthy about this latest rash of attacks is that it's not clear where to assign responsibility. Early reports (e.g., here) blamed a vulnerability in the Microsoft Internet Information Services server software. However, later reports (e.g., here and here) have said that the fault doesn't lie with Microsoft, but instead can be assigned to lax programming practices and more sophisticated bad guys.

So what, from a legal standpoint, happens now? The initial reports seemed to indicate a class action lawsuit in Microsoft's future. However, if the blame can't be pinned on Microsoft, what recourse do businesses who, through no fault of their own, end up having their web pages compromised have? While it still isn't clear what's going on, it could be that the answer is that those businesses have no recourse at all. Realistically, they won't be able to find the hackers, and, even if they do, the hackers are most likely judgment proof. They can't sue Microsoft if that company is blameless, and they can't go after their own employees for poor programming practices (if that's what's to blame). The bottom line is that the losses in this case might just be eaten by the entities who have already been victimized by hackers. It's a reminder of the limits of the legal system to shift risk, and a good example of why relying on the legal system to protect a business from losses due to criminal behavior isn't a particularly good idea.

Tuesday, April 22, 2008

Good News and Bad News For Individual Privacy

There's good news and bad news from the courts today on whether information on a computer system is treated as private for the purpose of government investigations. First the good news: as described here, the New Jersey Supreme Court has held that, under the New Jersey constitution, people have an expectation of privacy when they are online. The practical effect of this is that a grand jury warrant would be necessary for police in New Jersey to obtain access to that information. One important point in the decision is that it relied on the New Jersey, rather than the U.S. Constitution. This is important, because it means that the U.S. Supreme Court can't overturn the decision on appeal. Thus, until there's an amendment to the New Jersey Constitution, or until the New Jersey Supreme Court reverses itself, an online expectation of privacy will be recognized in that state.

And on the subject of Federal Courts reversing privacy friendly decisions, that brings us to our bad news: the 9th Circuit has reversed a lower court ruling which stated that digital devices are too personal for police at the border to be allowed to search them without cause. In its ruling, the 9th circuit focused on the "border exception" to the fourth amendment (NOTE: don't you love a 4th amendment with exceptions?) and said that no reason whatsoever is necessary for border agents to search digital devices.

via BoingBoing.

Tuesday, April 15, 2008

A "New" Data Security Threat, and Why That's a Good Thing

This article from Computer World describes a "new" type of attack hackers have been using to get at credit card data: interception of unencrypted data while in transit. Now, as the article points out, the tools being used by hackers to intercept data in transit aren't novel technology, so the description of a "new" threat is, in one sense, not accurate. However, obtaining unencrypted information in transit marks a significant shift from the traditional hacker tactic of stealing information from databases (see, e.g., TJX and CardSystems, the two biggest data security incidents on record). Of course, to a consumer, it doesn't matter much how their credit card numbers were stolen. However, to me, the fact that hackers are switching tactics is not only a big deal, it's also good news for at least three reasons.
First, it's harder for a hacker to steal huge amounts of data by intercepting it in transit than it is for a hacker to steal huge amounts of data by stealing it from a database. For example, it takes at least a month for a hacker to steal a month's worth of credit card numbers if they're being captured while in transit during a transaction. By contrast, a month's worth of credit card numbers can be stolen from a database in seconds. Thus, hackers focusing on data in transit rather than data at rest should decrease the overall amount of data stolen.
Second, as described in the article, one reason that hackers are switching to catching information in transit rather than focusing on databases is that companies have hardened their databases in order to comply with the PCI DSS. This shows that compliance with the DSS, while admittedly not universal, has been widespread enough to change criminal behavior, something that is clearly a positive development for data security.
Third, the fact that hackers have switched from high value targets (databases) to relatively lower value targets (data transmissions) based on the behavior of their targets shows that regulation, when it properly motivates its targets, can address and alleviate serious problems (in this case, the problem of easily compromised databases). Of course, at this point, the switch from targeting databases to targeting transmissions means that some tinkering with the PCI DSS is probably in order. However, there is no reason why the same framework which resulted in the increases in database security that led to the shift can't also be used to address threats to transmissions. Thus, while the new tactics being used to steal credit cards represent new challenges, they also show that some progress has been made in the ongoing battle to increase the security of individual consumer data.

PostScript: On a quasi-related note for everyone who says that private initiatives are always superior to government action, the HIPAA security regulations actually address protecting information in transit and at rest, so they already address the "new" threat described in the article.