Thursday, October 30, 2008
Last year, Microsoft was hit with a $1,500,000,000 verdict in a patent infringement suit related to MP3 technology (see here; the verdict was later thrown out). In 2006, RIM agreed to pay over $600,000,000 to settle litigation related to the ubiquitous BlackBerry (see here). Last year, Vonage agreed to a $100,000,000+ settlement with Verizon over patents for VoIP technology (see here). The bottom line is that patents for software are big money, which is why In re Bilski, a decision the Federal Circuit issued today, was so eagerly anticipated. You see, many people had thought that Bilski might put an end to software patents, or at least curtail patent protection for business methods.
My take on the subject was somewhat different. As I explained in this guest post at Patent Baristas, I felt that it was unlikely Bilski would have much effect, and that even if the Federal Circuit wanted to, it couldn't eliminate software patents. The reason was that the Supreme Court's decision in Diamond v. Diehr said a patent couldn't be invalidated simply because it included software, as long as the claimed invention as a whole performs a function the patent laws were designed to protect (e.g., transforming or reducing an article to a different state or thing). As I wrote in that guest post,
"I can easily tie almost any process I write claims for to a computer, and it would be a trivial task to require that the computers make a physical change in an article (e.g., printing an invoice)," which meant that, based on Diamond v. Diehr, software patents were safe.
So, what did the Federal Circuit do in Bilski? Well, everyone who had anticipated the death of software patents was undoubtedly disappointed. The Federal Circuit specifically addressed and smashed that hope: "we decline to adopt a broad exclusion over software or any other such category of subject matter beyond the exclusion of claims drawn to fundamental principles set forth by the Supreme Court." Bilski, FN 23. It also adopted a "machine-or-transformation" test for patent eligibility (from page 10 of the opinion): "A claimed process is surely patent-eligible under § 101 if: (1) it is tied to a particular machine or apparatus, or (2) it transforms a particular article into a different state or thing" - exactly the approach I had recommended in my guest post for obtaining patent protection for software inventions. The Federal Circuit's reasoning was also strikingly similar to my guest post: it included an extended discussion of Diamond v. Diehr (see pages 7-9 of the opinion) and used that case to answer potential objections based on arguably contrary Supreme Court precedent (see FN 8: "To the extent it may be argued that Flook did not explicitly follow the machine-or-transformation test first articulated in Benson, we note that the more recent decision in Diehr reaffirmed the machine-or-transformation test. See Diehr, 450 U.S. at 191-92. Moreover, the Diehr Court explained that Flook 'presented a similar situation' to Benson and considered it consistent with the holdings of Diehr and Benson. Diehr at 186-87, 189, 191-92. We thus follow the Diehr Court's understanding of Flook.").
The bottom line is that Bilski reaffirmed the patentability of computer software, and did so in a manner strikingly similar to what I had predicted some seven months earlier (the guest post went up on March 6, while the actual decision came down October 30). For the future, let this be a lesson: if there's a billion-dollar patent law question, you can either wait for the court to decide it, or you can ask me, and I'll tell you the answer.
NOTE: While I'm aware that this blog primarily focuses on the law related to information security and data privacy, when I read Bilski I had an almost irresistible urge to crow about my previous analysis being validated. Thus, given that blogs are basically tailor-made platforms for self-promotion, I felt that this would be as good a platform as any to engage in a bit of self-congratulation.
Tuesday, October 28, 2008
Red Flag Rules Delayed
Happy news for all organizations that would have been affected by the FTC's red flag rules: the deadline for enforcement of the rules has been pushed back six months from its original date of November 1, 2008. The rules require creditors and financial institutions to implement identity theft prevention programs, but the FTC found that many companies needed more time to come into compliance. The new enforcement deadline is May 1, 2009. In its statement, the FTC said that the extension does "not affect other federal agencies' enforcement of the original November 1, 2008 deadline for institutions subject to their oversight to be in compliance."
We (and by we, I mean my colleague Jane Shea) previously wrote about the red flag rules here and here.
Monday, October 20, 2008
Consumer Self-Protection
Yesterday I posted about weaknesses in systems deployed by the IRS. In that post, I used the weaknesses as an example of the limits of government regulation, given that they showed that even the government itself couldn't keep its house in order. However, something I didn't explicitly address in that post is that the weaknesses in the IRS' systems also demonstrate that there are serious limits on what consumers can do to prevent their information from being compromised. After all, you can't avoid paying taxes, and, by definition, the information held by the IRS is highly sensitive financial data. The result is that, simply by virtue of being an American and following the law, your information is at risk.*
So what can ordinary consumers do to protect themselves? In the case of information security, for individuals, I'd say that an ounce of cure is worth a pound of prevention. That is, rather than worrying about protecting your data (which should be the responsibility of the merchants and government entities your data is entrusted to), individual consumers should worry about how they'll find out about a compromise and deal with it when it happens. Easy steps like credit monitoring, promptly disputing unauthorized charges, and maintaining backup accounts and lines of credit in case one gets frozen as a result of fraud can make recovering from a data compromise - which is extremely hard to prevent - a substantially less miserable experience.
*As a note, I don't mean to single the IRS out as an exceptionally bad actor. Indeed, if you compare the IRS' security practices with security practices at TJX before their big breach, I think the IRS comes out way ahead.
Sunday, October 19, 2008
Weaknesses in Government Systems
According to this report (via), the IRS deployed two major software systems - its Customer Account Data Engine (CADE) and its Account Management Services (AMS) system - despite the existence of "known security vulnerabilities relating to the protection of sensitive data, system access, monitoring of system access, and disaster recovery." Obviously, this is a problem. Indeed, given some of the vulnerabilities noted in the Computerworld article summarizing the report (e.g., failure to encrypt data either in storage or in transit), the IRS systems wouldn't even pass the private-sector PCI Data Security Standard, let alone government-imposed standards such as those in HIPAA.
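To make the encryption-at-rest point concrete, here is a minimal sketch in Python of the kind of application-level safeguard the report says was missing. It uses the third-party cryptography package, and the record contents and key handling are my own illustrative assumptions, not anything the IRS systems (or the report) actually describe.

```python
# Minimal sketch of application-level encryption at rest.
# Requires the third-party "cryptography" package (pip install cryptography).
# The record below is hypothetical sample data, not real taxpayer information.
from cryptography.fernet import Fernet

# In a real system the key would come from a key-management service,
# never be generated and held in application code like this.
key = Fernet.generate_key()
cipher = Fernet(key)

taxpayer_record = b"SSN=123-45-6789;AGI=54321"     # hypothetical sensitive fields
ciphertext = cipher.encrypt(taxpayer_record)        # this is what gets written to disk
plaintext = cipher.decrypt(ciphertext)              # readable only with the key

assert plaintext == taxpayer_record
print("stored form starts with:", ciphertext[:16])
```

The point of the sketch is simply that the data written to storage is unreadable without the key, which is the property the audit found lacking.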
The interesting part of the report, though, is not that the IRS deployed systems with flaws. Frankly, while that part may be depressing, similar mistakes take place in both the public and private spheres frequently enough that the existence of one more flawed system doesn't really catch my attention. What interests me about the report is that it shows the limits of what you can do with regulation. The IRS has specific guidelines and requirements for handling data that, in theory, should have prevented the deployment of systems with known vulnerabilities. Moreover, as the report noted, the IRS had implemented development policies which "require security and privacy safeguards to be planned for and designed in the early phases of a system’s development life" - something that many private sector businesses would benefit from doing. The problem was that the IRS' cybersecurity organization knew about the vulnerabilities and accepted them anyway - in other words, it decided to save money by skimping on security for taxpayer information. With that kind of culture (which I find a bit surprising in government), it's not likely that an organization will have good security, regardless of how heavily regulated it is.
So how do you create a security-conscious culture? The easy answer is feedback. Make sure that there are rewards for doing things right, penalties for doing things wrong, and that the rewards and penalties (as well as what counts as right and wrong) are well known. Unfortunately, that easy answer is only easy in theory. In practice it's really hard to implement, and involves things like keeping open lines of communication, making sure decision makers pay attention to security even though it doesn't contribute directly to the bottom line, and educating people about what resources are available in an organization to provide decision support on security issues. While it seems that there is a slow shift underway from a culture where consumer data is treated only as something to be valued to a culture where it's viewed as something to be protected, that change is very slow indeed. Before the change is complete, I think there will be many more reports revealing that large entities (both public and private) have undervalued securing consumer data.
Sunday, October 12, 2008
Can Privacy Come Back?
In this interview at Computerworld, private investigator Steve Rambam argues that "Privacy is dead. Get over it. You can't put the genie back in the bottle." His argument seems to be based in large part on his own database, which supposedly contains
pretty much every American's name, address, date of birth, Social Security number, telephone number, personal relationships, businesses, motor vehicles, driver's licenses, bankruptcies, liens, judgments [etc...]
He uses that database, as well as advances in computer technology and changes in government policy, to make the case that more and more information is becoming available about people, and that privacy is a thing of the (rapidly receding) past.
My belief is that Rambam is wrong. I'm willing to concede that the state of individual privacy right now is pretty grim (though I don't think it's dead). However, there is a substantial disconnect between observing that things are bad now and concluding that they'll never get better in the future. Indeed, as my own contribution to putting Rambam's genie back in the bottle, I would like to present the following things people can do to use the law to protect their privacy:
1) Remember the FTC. While people generally have little success in suits alleging damages based on exposure of their personal data, the FTC has broad enforcement authority to combat unfair and deceptive trade practices. That means that if a company isn't following its privacy policy, or if it says it values privacy while it actually sells your personal information to the highest bidder, a complaint to the FTC could be a way to deal with it.
2) Watch the EULAs. As I have written before (e.g., here), contract law in general, and abusive end user license agreements in particular, present a serious threat to privacy. Thus, when someone asks you to click before continuing, read what it is that you're being asked to agree to and, if it's abusive, don't agree. In fact, not only should you refuse to agree, you should also complain. While consumer complaints are generally of questionable effectiveness, if a company is interested in its image, complaints can lead to changes in behavior (e.g., Google Chrome).
3) Know your rights. For example, the Fair and Accurate Credit Transactions Act prohibits printing complete credit or debit card numbers on receipts (a short sketch of that truncation rule follows this list). By being aware of their rights, consumers can know how to protect themselves and their privacy, either by enforcing their rights themselves (e.g., through a private suit) or through others (e.g., by bringing an FTC complaint).
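As a concrete illustration of the receipt-truncation requirement mentioned in item 3, here is a minimal Python sketch that masks a card number down to its last four digits before it is printed on a receipt. The function name and receipt format are my own hypothetical assumptions, not any particular point-of-sale system's API.

```python
# Hypothetical illustration of receipt truncation under FACTA: a printed
# receipt should show no more than the trailing digits of the card number
# (and no expiration date). Not a real point-of-sale API.
def truncate_pan(pan: str, visible: int = 4) -> str:
    """Mask a card number so only the trailing `visible` digits appear."""
    digits = [c for c in pan if c.isdigit()]
    masked = ["*"] * (len(digits) - visible) + digits[-visible:]
    # Re-insert the original separators (spaces/dashes) around the masked digits.
    it = iter(masked)
    return "".join(next(it) if c.isdigit() else c for c in pan)

print(truncate_pan("4111 1111 1111 1234"))  # -> **** **** **** 1234
```

A receipt printed this way is still useful for matching a charge to a card, but it no longer exposes the full account number if it is lost or thrown away.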
Wednesday, October 8, 2008
Ohio Lemon Law: What’s Covered and What Isn’t
Today, Sergei Lemberg, a lemon law attorney who normally blogs at LemonJustice, discusses what you need to know about new car lemons.
With all of the cars, SUVs, trucks, motorcycles, and RVs being manufactured in the U.S. and abroad, it’s reasonable to expect that some will have defects. After all, vehicles are incredibly complex pieces of machinery, and a lot of things can go wrong. In the best-case scenario, any defects that weren’t caught by quality assurance are quickly repaired by the dealer. In the worst-case scenario, you have a vehicle with pronounced defects that make it run poorly, that constitute a safety hazard, or that reduce its value – and the dealer or manufacturer refuses to buy it back or replace it.
When that happens, Ohio lemon law can come to the rescue. Ohio lemon law covers new passenger vehicles, SUVs, vans, trucks, and motorcycles that are purchased or leased in Ohio. The motorized portions of RVs are also covered, as are used cars that are purchased within one year or 18,000 miles of delivery to the original owner.
Although it doesn’t cover minor defects (like a non-working stereo system), the lemon law does force the manufacturer to stand by its product. In order for the lemon law to apply to new vehicles, the defects have to occur during the first year from the delivery date or the first 18,000 miles on the odometer – whichever comes first. In addition, the vehicle must have been taken in at least once for a problem that could cause serious injury or death, or eight times for different problems. Alternatively, the vehicle can have been out of service for a cumulative total of 30 calendar days. Finally, you have to notify the manufacturer in writing of the defect within one year from the delivery date or the first 18,000 miles (whichever comes first).
If you think you have a lemon, you have to take part in the manufacturer’s dispute resolution process (if one exists) before going to court. Before you begin, though, you should have a lemon law lawyer by your side. After all, you can be sure that the manufacturer’s team of legal eagles will be there to fight your claim every step of the way. The good news is that, if your claim is successful, the manufacturer has to pay your attorney fees. Often, with the help of a lawyer, you can get a refund, replacement vehicle, or cash settlement without having to go through the entire lemon law process – and get your attorney’s fees covered as well.
Whenever you buy a new or used vehicle, it’s important to know your rights. And, if you think your vehicle is a lemon, it pays to persevere to make the manufacturer stand by its product.