Over the past week, there has been a rash of sites which have been compromised to distribute malware. The basic idea of the attack is nothing new - legitimate sites are compromised so that when users visit them they download malicious software. What's new is the scope of the current wave (hundreds of thousands of pages compromised) and the highly trusted nature of the sites involved (including pages run by the UN). Something else that's noteworthy about this latest rash of attacks is that it's not clear where to assign responsibility. Early reports (e.g., here) blamed a vulnerability in Microsoft's Internet Information Services server software. However, later reports (e.g., here and here) have said that the fault doesn't lie with Microsoft, but can instead be assigned to lax programming practices and more sophisticated bad guys.
So what, from a legal standpoint, happens now? The initial reports seemed to indicate a class action lawsuit in Microsoft's future. However, if the blame can't be pinned on Microsoft, what recourse do businesses have when, through no fault of their own, their web pages end up compromised? While it still isn't clear what's going on, it could be that the answer is no recourse at all. Realistically, they won't be able to find the hackers, and, even if they do, the hackers are most likely judgment proof. They can't sue Microsoft if that company is blameless, and they can't go after their own employees for poor programming practices (if that's what's to blame). The bottom line is that the losses in this case might just be eaten by the entities who have already been victimized by hackers. It's a reminder of the limits of the legal system to shift risk, and a good example of why relying on the legal system to protect a business from losses due to criminal behavior isn't a particularly good idea.
Monday, April 28, 2008
Tuesday, April 22, 2008
Good News and Bad News For Individual Privacy
There's good news and bad news from the courts today on whether information on a computer system is treated as private for the purpose of government investigations. First the good news: as described here, the New Jersey Supreme Court has held that, under the New Jersey constitution, people have an expectation of privacy when they are online. The practical effect of this is that a grand jury warrant would be necessary for police in New Jersey to obtain access to that information. One important point in the decision is that it relied on the New Jersey Constitution, rather than the U.S. Constitution. This is important, because it means that the U.S. Supreme Court can't overturn the decision on appeal. Thus, until there's an amendment to the New Jersey Constitution, or until the New Jersey Supreme Court reverses itself, an online expectation of privacy will be recognized in that state.
And on the subject of federal courts reversing privacy-friendly decisions, that brings us to our bad news: the 9th Circuit has reversed a lower court ruling which stated that digital devices are too personal for police at the border to be allowed to search them without cause. In its ruling, the 9th Circuit focused on the "border exception" to the Fourth Amendment (NOTE: don't you love a Fourth Amendment with exceptions?) and said that no reason whatsoever is necessary for border agents to search digital devices.
via BoingBoing.
Tuesday, April 15, 2008
A "New" Data Security Threat, and Why That's a Good Thing
This article from Computer World describes a "new" type of attack hackers have been using to get at credit card data: interception of unencrypted data while in transit. Now, as the article points out, the tools being used by hackers to intercept data in transit aren't novel technology, so the description of a "new" threat is, in one sense, not accurate. However, obtaining unencrypted information in transit marks a significant shift from the traditional hacker tactic of stealing information from databases (see, e.g., TJX and CardSystems, the two biggest data security incidents on record). Of course, to a consumer, it doesn't matter much how their credit card numbers were stolen. However, to me, the fact that hackers are switching tactics is not only a big deal, it's also good news, for at least three reasons.
First, it's harder for a hacker to steal huge amounts of data by intercepting it in transit than it is to steal huge amounts of data from a database. For example, it takes at least a month for a hacker to steal a month's worth of credit card numbers if they're being captured while in transit during a transaction. By contrast, a month's worth of credit card numbers can be stolen from a database in seconds. Thus, a shift in hacker focus from data at rest to data in transit should decrease the overall amount of data stolen.
Second, as described in the article, one reason that hackers are switching to catching information in transit rather than focusing on databases is that companies have hardened their databases in order to comply with the PCI DSS. This shows that compliance with the DSS, while admittedly not universal, has been widespread enough to change criminal behavior, something that is clearly a positive development for data security.
Third, the fact that hackers have switched from high value targets (databases) to relatively lower value targets (data transmissions) based on the behavior of their targets shows that, when properly motivated, regulation can address and alleviate serious problems (in this case, the problem of easily compromised databases). Of course, at this point, the switch from targeting databases to targeting transmissions means that some tinkering with the PCI DSS is probably in order. However, there is no reason why the same framework which resulted in the increases in database security that led to the shift can't also be used to address threats to transmissions. Thus, while the new tactics being used to steal credit cards represent new challenges, they also show that some progress has been made in the ongoing battle to increase the security of individual consumer data.
PostScript: On a quasi-related note for everyone who says that private initiatives are always superior to government action, the HIPAA security regulations actually address protecting information in transit and at rest, so they already address the "new" threat described in the article.
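To make the in-transit distinction concrete, here's a toy sketch in Python. The XOR "cipher" is a deliberately insecure stand-in for real transport encryption like TLS, and the card number is a standard test value, not real data; the point is only to illustrate why a wire sniffer can harvest unencrypted transactions but gets nothing useful from encrypted ones:

```python
def xor_stream(data: bytes, key: bytes) -> bytes:
    # Toy stream cipher: XOR each byte against a repeating key.
    # Stands in for real transport encryption (TLS/AES) purely for
    # illustration -- do NOT use XOR like this in production.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

card_number = b"4111111111111111"  # a standard test PAN, not a real account
plaintext_frame = b"AUTH " + card_number  # what an unencrypted transaction exposes

key = b"\x5a" * 16  # fixed toy key so the example is deterministic
encrypted_frame = xor_stream(plaintext_frame, key)

# A sniffer on the wire finds the card number in the clear frame...
assert card_number in plaintext_frame
# ...but only noise in the encrypted one, while the intended
# receiver (who holds the key) can still recover the transaction.
assert card_number not in encrypted_frame
assert xor_stream(encrypted_frame, key) == plaintext_frame
```

The same asymmetry is why hardening transmissions (as the PCI DSS already requires for databases at rest) takes the easy pickings off the table for an eavesdropper.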
Sunday, April 13, 2008
Jail Time for CEOs?
According to this article from Computer World,
[a] growing number of security pros believe that the way to stop data breaches from happening is as simple as it is stark -- send the CEOs or board members deemed responsible to jail.
To me this seems like a terrible idea. CEOs and corporate board members are not expected to be intimately involved with their company's IT. Indeed, in a well-functioning company, a CEO will be aware of the company at a much higher level, and so won't know the facts "on the ground" which lead to a data security incident. Imagine how that would change if CEOs went to prison for data breaches. Instead of being generalists, they'd become micro-managers - and the companies they're responsible for would suffer as a result.
Obviously, my thought is that CEOs should not be sent to prison for information security breaches. Prison, at least in the context of the business world, is an extreme punishment, and it should be reserved for extreme situations such as actual fraud, or wrongdoing leading to loss of life. For the simple negligence (or even bad luck to be the victim of a determined hacker) behind most information security incidents, prison not only has the potential to create perverse incentives to micro-manage, but is also wildly disproportionate to the "wrongdoing" of the CEOs who would be put away.
Labels:
data security breach,
imprisonment,
responsibility
Saturday, April 12, 2008
Utility of Regulations
Does computer security regulation actually improve security? No, says this article from Computer World. Instead, the article says that regulations which specify behavior for companies risk "actually weakening a business by enforcement actions that drive companies to spend unnecessarily on perceived but not genuine security risks." The article says that, instead of specifying behavior, regulation should be outcome based. The example of good outcome-based regulation given in the article is California's SB 1386, which requires companies to notify consumers of unauthorized access to their personal data. However, after praising SB 1386, the article says that that legislation is also a problem, and advocates superseding it with federal legislation which would preempt state security breach notification laws. The reason the article gives for needing this preemptive federal legislation: "to create a national baseline standard for protecting sensitive data."
As it happens, I disagree with almost everything this article said. First, I think the article's fear that regulation will cause companies to waste money on needless security measures is completely misplaced. I'm actually a little curious: what part of the current regulatory climate does the author think is forcing businesses to spend money unnecessarily? Is it (for example) HIPAA's encryption requirements? Its unique user identifications? It's easy to complain about regulatory burden. However, I'm not sure that I'd want a business to have my health care records (or other personal data) if it didn't even bother to know who was on its systems, or to properly encrypt my information.

Second, I think the article's focus on SB 1386 as the type of regulation we need is completely wrong. Yes, that law is useful, in that it makes sure companies can't just sweep security breach incidents under the rug. However, it does nothing to prevent the breaches in the first place. To me, it makes sense to have regulations which prevent bad events (e.g., security breaches) from happening in the first place, rather than simply trying to clean up after something goes wrong.

Finally, the article advocates preemptive federal data security breach laws. I think this is dead wrong. Why not let individual states try to find their own balances between the costs of notification and individual privacy? Having multiple state laws doesn't make compliance more difficult; it just means that businesses need to know whose data they're storing, then comply with the most stringent standards which apply to that data. If we had some kind of federal amalgam of our current state legislation, the result would be that states could no longer make innovative laws like California's SB 1386, and that would make people's information less, not more, secure.
Thus, while I think it's generally a good thing to discuss the appropriate level of regulation to protect information security, Computer World's article arguing that regulation is unhelpful can safely be skipped, as there isn't much there worth considering.
Labels:
federal legislation,
regulation,
state legislation
Sunday, April 6, 2008
Hannaford Data Exposure Suit
In a development that can be expected to surprise no one, yet another merchant has announced that a security breach has resulted in the exposure of consumer data. The breach is described in this article from Computer World, as well as this article from the E-Commerce Times. The basic outline of the story is that Hannaford Bros. Co., a Maine-based supermarket chain, had their servers compromised by malware, which ended up leading to the exposure of somewhere north of 4 million debit and credit card accounts. Now that the breach has surfaced, the inevitable class action suits have been filed in federal court in Maine. While I don't have the facts necessary to comment on the merits, there are a few aspects of this case that could set it apart from the run-of-the-mill data exposure suit. First, according to the E-Commerce Times article, nearly 2000 cases of fraud have been traced to the breach. This, obviously, could be helpful to the plaintiffs, as it will help show actual damages, which have often been a stumbling block in similar cases. The second interesting aspect of this case is that Hannaford, rather than being a poster child for bad security practices a la TJX, was apparently in compliance with the PCI standards when the breach took place. This, obviously, could be helpful to the defendants, who could use their compliance with the PCI standards to rebut charges of negligence.
In any case, as I mentioned previously, I don't have the facts necessary to comment on the merits of the case. However, it seems that there are things to be said for both parties, which could make this an interesting case which could help provide guidance for both plaintiffs and defendants in future data exposure cases.