Tuesday, February 26, 2008
Study: Data Exposure Less Expensive than Previously Estimated
There's a new data point for organizations struggling to figure out what impact data exposure has on a business's bottom line. According to a study from the Ponemon Institute, described in this article, the average cost of an information security breach in the UK is about 47 pounds/record (about 103 dollars/record at current exchange rates). This is much less than the 197 dollars/record figure from a different Ponemon study from last year, which I described in this post. Why the discrepancy? My immediate thought is that the underlying data sets differ: the more recent study is focused on the UK, while the previous study was not. On the other hand, it's also possible that the previous study simply overestimated the costs of a breach. At this point, my guess is that the discrepancy is based on a combination of the two factors, but that the actual cost is closer to the lower number, a conclusion I draw in part because of the relatively low per-record cost associated with massive breaches, something I wrote about here in the context of the TJX case.
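For the arithmetic-minded, here's a quick back-of-the-envelope sketch (in Python) of how far apart the two estimates put the total cost of a breach. The breach sizes are hypothetical, and the exchange rate is simply the one implied by the figures above:

```python
# Back-of-the-envelope comparison of the two Ponemon per-record estimates.
# Per-record figures come from the studies discussed above; the exchange
# rate is the one implied by the conversion above (103 USD / 47 GBP), and
# the breach sizes are hypothetical.

gbp_per_record = 47
usd_per_gbp = 103 / 47                            # implied by the figures above
uk_usd_per_record = gbp_per_record * usd_per_gbp  # about 103 USD/record
us_usd_per_record = 197                           # earlier Ponemon figure

for records in (10_000, 100_000, 1_000_000):
    low = records * uk_usd_per_record
    high = records * us_usd_per_record
    print(f"{records:>9,} records: ${low:>13,.0f} (UK est.) vs ${high:>13,.0f} (earlier est.)")
```

At a million records, the two estimates are nearly a hundred million dollars apart, which is why it matters which figure is closer to reality.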
Sunday, February 24, 2008
It's Hard to Escape HIPAA
Computer World has an article up about online personal health record (PHR) systems, and a report which claims that they pose risks to consumer privacy. The premise of the report (and the article) is intriguing: online PHR systems could be a new type of business model which might undermine the privacy and security rules governing traditional health care providers (particularly HIPAA). Happily (from a privacy standpoint), both the article and the report it is based on fail to make the case that PHR systems represent a new, serious hole in the privacy regime created by HIPAA. The biggest problems I had are that the report simply assumed that PHR systems are outside of HIPAA, and that it assumed that PHR systems which fall outside of HIPAA aren't required to comply with HIPAA's regulations. First, I'm not sure how many PHR systems actually fall outside of HIPAA. HIPAA doesn't just cover health care providers; it also covers health plans and health care clearinghouses. Before I become excited about PHR systems evading regulations covering health care providers, I would want to know whether the systems in question are part of one of the other categories of covered entities under HIPAA. Second, even if a PHR system isn't a covered entity under HIPAA, it might be required to comply with HIPAA due to its contracts with entities which are covered entities. Indeed, the HIPAA rules specifically require that covered entities enter into such contracts with certain of their business associates (e.g., 45 C.F.R. 164.314(a)(1), "Business Associate Contracts and Other Arrangements"). Again, the report simply didn't consider to what extent the hypothetical hole in HIPAA might be closed by the business associate portions of the regulations.
As a note, none of the above is meant to say that there aren't privacy risks involved in online PHR systems. For example, as the report notes, PHR systems represent one more repository of data which is subject to security breaches. Further, when you transmit data to such systems, it might be captured, either via snooping on a network or via software like a keystroke logger installed on a local computer. However, neither of those risks has anything to do with HIPAA, and both would exist even if PHR systems were undeniably covered. Thus, while there are privacy concerns with PHR systems, the article didn't make the case that a HIPAA loophole is one of them.
Sunday, February 17, 2008
An Interesting Application of Privacy Laws
Here's an interesting article about a consumer who has used a data exposure notification law for a novel purpose: punishing an electronics retailer for bad customer service. What happened was as follows:
1) Raelyn Campbell brings her laptop into Best Buy for service.
2) Best Buy loses laptop.
3) Raelyn contacts Best Buy asking when her laptop will be ready.
4) Best Buy gives Raelyn the runaround.
5) Steps 3-4 repeated for four months.
6) Raelyn sues Best Buy...for $54,000,000.
So where's the privacy angle in all this? It turns out that the sequence of events described above should have included a step 2a) Tell Raelyn that her laptop, which contained her tax returns, was lost. The reason (other than the fact that it's the right thing to do from a moral and customer relations standpoint) is that such notification seems to be required by the District of Columbia's security breach notification act. That law, which it appears Best Buy did not comply with, is the basis for Raelyn's $54,000,000 suit.
The next question, of course, is how much this is going to end up costing Best Buy. The short answer is: not $54,000,000. The relevant statute does authorize individuals to file suit against entities who have not complied with the statute's requirements (see here). However, it allows a recovery of actual damages plus costs (including attorney's fees), and Raelyn admits that the $54,000,000 figure was pulled out of the air for the purpose of making a statement. Still, Best Buy isn't going to get off cheap either. When Raelyn originally learned that the laptop had been lost, she offered to settle with Best Buy for $2,100. As an attorney, I can safely say that Best Buy will spend more (likely much more) in legal fees just dealing with the case. Moreover, there's the cost of actually paying Raelyn's damages (the cost of the laptop and any data stored on it, time wasted trying to get the laptop back, attorney's fees, and court costs). All in all, my guess is that before the end of this suit, Best Buy will institute a policy of simply replacing lost laptops for customers in states with security breach notification laws that allow for individual suits.
If that prediction turns out to be correct, it would be a powerful example of how allowing consumers to protect the security of their own data can have beneficial effects beyond consumer privacy (in this case, improving customer service). That's something to consider when people ask why they should care about the privacy of information.
Monday, February 11, 2008
Who Cares About Privacy?
Wired.com has an article up about a startup called Credentica, which uses multi-party computation to allow people to verify information about themselves (e.g., age) without sharing that information, thereby eliminating data leaks which can lead to identity theft.
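To be clear about what "verify without sharing" means: the sketch below is not Credentica's actual protocol, just a toy illustration in Python of the general idea, namely that a trusted issuer can attest to a derived claim like "over 18" so that the verifier never sees the birthdate itself. A shared-secret signature stands in for real credential cryptography, and all names and keys are hypothetical:

```python
import hmac, hashlib, json
from datetime import date

# Toy model of attribute-hiding verification, NOT Credentica's protocol:
# the issuer signs only the derived claim ("over 18"), so the verifier
# never learns the underlying birthdate. A real system would use
# asymmetric credentials and anti-replay measures.

ISSUER_KEY = b"demo-issuer-secret"  # shared with verifiers in this toy model

def issue_age_token(birthdate: date, today: date) -> dict:
    """Issuer-side: derive the claim from the sensitive attribute, then sign it."""
    age = (today - birthdate).days // 365       # rough age calculation
    claim = {"over_18": age >= 18}              # birthdate itself is never included
    sig = hmac.new(ISSUER_KEY, json.dumps(claim, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_age_token(token: dict) -> bool:
    """Verifier-side: check the signature and the claim, learning nothing else."""
    expected = hmac.new(ISSUER_KEY, json.dumps(token["claim"], sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"]) and token["claim"]["over_18"]

token = issue_age_token(birthdate=date(1985, 6, 1), today=date(2008, 2, 11))
print(verify_age_token(token))  # True, without the verifier ever seeing the birthdate
```

The point of the design is that there's simply no birthdate in the token to leak, which is what makes this approach attractive for reducing identity theft.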
While the technology of the new service is neat, the headline of the article asks what I believe is an important question: Does Anyone Care? As discussed here, studies show that people consistently place greater value on even small amounts of money than they do on the privacy of their personal information, meaning that even a startup with a great new privacy protection product is likely to have a tough time in the market. In my opinion, that's too bad. To my mind, failing to protect your personal information is like playing poker against the world with all your cards exposed. Perhaps my profession as an attorney makes me more vigilant about privacy than other people; after all, lawyers have affirmative duties to protect information. One great example of those duties is this opinion, which states in part that:
The employees of the public defender's office must be the only individuals who have access to client information, including that which is stored on the computer system. If any component of the computer system is linked or somehow shared by other county offices, the public defender must take whatever reasonable and necessary precautions there are to ensure that this information cannot be accessed by the other offices. This is the public defender’s primary responsibility in this scenario. If the public defender is not satisfied that client confidentiality can be secured, then the ethical alternative is to either maintain a separate computer system from the other county offices or discontinue storing client information on the shared system. (emphasis added)
In other words, for lawyers, if a system isn't secure, it shouldn't be used to store client information. If all individuals had this same type of regard for data privacy, companies like Credentica would be almost sure to succeed. However, my experience, backed up by empirical data, is that most individuals have little to no regard for their personal data, which means that companies like Credentica, even if they have a great product, will likely have a difficult time in the market.
Friday, February 8, 2008
"Red Flag" Identity Theft Rules Apply to all Creditors
New federal rules require all creditors - financial institutions, retailers, utilities, car dealers, and other organizations that extend consumer credit or hold consumer accounts - to develop and implement a proactive Identity Theft Prevention Program. An identity theft prevention policy and program must be adopted and operating no later than November 1, 2008.
Federal regulators were required by the Fair and Accurate Credit Transactions Act (FACT Act) of 2003 to issue regulations implementing Section 114 of the Act. This section amended the Fair Credit Reporting Act to require financial institutions and other creditors which maintain consumer accounts to adopt and maintain a written Identity Theft Prevention Program. The Program's purpose is to detect, prevent, and mitigate identity theft in connection with the opening of accounts maintained for personal, family, or household purposes, so long as the accounts permit multiple payments or transactions. Examples include credit card accounts, mortgage loans, automobile loans, margin accounts, cell phone accounts, utility accounts, checking accounts, and savings accounts.
The applicability of the new regulations is not limited to consumer accounts. They also apply to any other account that is offered or maintained by a creditor where there is a reasonably foreseeable risk of identity theft, such as business accounts held by sole proprietors that can be opened or accessed remotely.
The new regulations provide financial institutions and creditors with flexibility in developing their programs according to their relative organizational size and complexity. However, the Program must include reasonable policies and procedures that do each of the following (a simplified sketch appears after this list):
" identify relevant Red Flags, and then incorporate those Red Flags into the Program;
" detect such Red Flags;
" respond appropriately to any Red Flags to prevent and mitigate identity theft; and
" ensure that the Program is updated periodically to reflect changes in risks to customers
What are these "Red Flags"? The regulations define them as a "pattern, practice, or specific activity that indicates the possible existence of identity theft." However, the concept is fleshed out considerably in the supplementary materials to the regulations. The federal regulatory agencies have adopted Interagency Guidelines on Identity Theft Detection, Prevention, and Mitigation. These Guidelines provide policies and procedures that can be used, where appropriate, to satisfy the regulatory requirements of the rules.
Once the Program has been established, each financial institution and creditor must administer it. This involves having the board of directors (or an appropriate committee of the board) approve the initial written Program, and making the board, an appropriate board committee, or a designated member of senior management responsible for the oversight, development, implementation, and administration of the Program. Additionally, training of relevant staff and effective oversight of third-party service providers with respect to the Program are also required.
Financial institutions covered by the Red Flag Identity Theft Rules are subject to oversight by the appropriate federal banking regulators, and for those creditors that are not federally regulated financial institutions, the Federal Trade Commission provides oversight. Besides regulatory enforcement actions, violations of the FACT Act can subject the financial institution or creditor to civil actions for damages. The type and amount of damages available will depend on whether the violations are determined to be "negligent" or "willful."
These new regulations will require formalizing practices that one hopes most reputable businesses already follow with respect to being alert to potential fraud. The requirements are largely common sense, and they address the need for both sides of the consumer credit equation to be alert in protecting consumers' identities.
Sunday, February 3, 2008
My Own Personal Data Exposure
Well, last week I got a message from the Georgetown Information Security Office. Apparently, a hard drive was stolen which contained information on students enrolled between 1998 and 2006, as well as some faculty and staff. The message said that no credit card information or other financial data was exposed, but that personally identifiable information of some students (and faculty and staff) was stored on the hard drive. We were reassured that there was no evidence that any of the information had been misused, but were cautioned to place a fraud alert on our credit files just in case.
For me, the advice to place a fraud alert was a bit late, since I've had credit monitoring ever since someone (not me) opened a bank account in my name almost a year ago. Of course, I could do more (like place a freeze on my account), but, frankly, my risks are low enough that I don't see any need to do it. Still, there are two things I'm curious about:
1) Why was the data stored on a hard drive that could be easily stolen (I'm guessing on a laptop)?
2) Why wasn't it encrypted? (Of course, the message didn't say the data was unencrypted, but if it had been encrypted, you can bet that would have been mentioned.)
You'd think that in this day and age, an organization the size of Georgetown wouldn't store sensitive data on easily stealable hard drives, and would keep it encrypted as a matter of course.
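For what it's worth, the kind of at-rest encryption that would have limited the damage here isn't exotic. Here's a minimal sketch using the Python cryptography library (a modern package, not something available in this form in 2008); the record data and filename are made up, and key management, which is the genuinely hard part, is elided:

```python
from cryptography.fernet import Fernet

# Minimal sketch of encrypting sensitive records before they land on a
# portable drive. The data and filename are hypothetical; in practice the
# key must be stored somewhere OTHER than the drive itself.

key = Fernet.generate_key()       # keep this separate from the drive
fernet = Fernet(key)

records = b"name,id\nJane Doe,123-45-6789\n"   # hypothetical student data
ciphertext = fernet.encrypt(records)

with open("student_records.enc", "wb") as f:   # what the drive actually holds
    f.write(ciphertext)

# A thief who takes the drive gets only ciphertext; recovery requires the key.
assert fernet.decrypt(ciphertext) == records
```

With a scheme like this in place, a stolen drive is an inconvenience rather than a notification-triggering breach, which is precisely why it's surprising when large institutions skip it.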