Hardly a week goes by without some new Internet security snafu being reported. And with web usage exploding, expect to hear about a lot more. According to a new analysis from Forrester Research, the number of Internet users is forecast to grow 45% globally over the next four years, reaching 2.2 billion by 2013. More people online, more data to hack — it’s a cybercriminal’s paradise.

Many people don’t yet fully understand the enormity of the threat — to individuals, their families and the companies that they work for, warns Andrea M. Matwyshyn, professor of legal studies and business ethics at Wharton. A frequent public commentator on the topic, Matwyshyn is the editor of a forthcoming book titled Harboring Data: Information Security, Law and the Corporation.

In an interview with Knowledge at Wharton, Matwyshyn is joined by two of the book’s contributors: Diana Slaughter-Defoe, professor of urban education at the University of Pennsylvania, and Cem Paya, a data security expert at Google. The three discuss the major risk-management gaps that leave valuable data assets unprotected, not only in the office but also at home, and share a number of measures that everyone — from parents to CEOs — can take to avoid Internet security disasters.

An edited transcript of the conversation follows.

Knowledge at Wharton: Your forthcoming book says that otherwise sophisticated business entities regularly fail to secure key information assets and that many companies are struggling to incorporate information security practices into their operations. Why is that the case?

Andrea M. Matwyshyn: It’s not apparent to me exactly why this is. But there seems to be a process-based failure under way. It’s in companies’ interests, internally and externally, to secure their information assets. Internally, when a company experiences a data breach, it is potentially compromising trade-secret protection on key intangible assets. Externally, it is going to get bad publicity and trust will diminish among customers, business partners and even its own employees. So securing information assets is a win/win.

[Our speculation] about what may be driving the failure to secure assets [is] partially based on historical … facts. Information security [has been] generally viewed as the province of IT departments, and at one point that may have made sense. But at this point, IT security needs to have a process approach, [coming] from the top layers of a company, with a culture of security [that] filters down through the company’s lower layers.

Security breaches can happen not only in a company’s servers, but also as a result of an employee inserting a CD, [as was] the case with the Sony rootkit problem that arose a few years ago [when its CDs automatically downloaded digital rights management tools on to computers]. [Similarly,] an employee can insert a CD into a PC at work to listen to some music, and the vulnerability that arises because of, for example, some digital rights management software on that CD can lead to an employer’s network being compromised. Employee education and [a] top-down [approach that makes] securing information assets an organizational priority are [essential], which is something that hasn’t necessarily permeated corporate culture.

Knowledge at Wharton: It sounds like a company pays a steep price when it fails to do all the things that you suggest. Could you give any examples of companies that have faced problems as a result of not having secured their information assets?

Matwyshyn: The recent example that comes to mind is The TJX Companies. TJX had an extensive database of consumer information because it’s a retailer. [In 2005] a hacker sitting in a car in the parking lot of a Minnesota store, with relatively primitive tools, accessed its network, compromised it and stole millions of records, subsequently resulting in banks needing to reissue compromised credit cards. There may be incidents of identity theft associated with that activity as well. TJX paid a high price in the press, and the banks filed a class-action lawsuit against it.

The costs imposed on other entities because of security breaches [at a company] are starting to result in court cases, [as] entities that are forced to reissue cards and absorb the costs [are] finding it unacceptable to pay the price for other companies’ security practices.

Part of this stems from the nature of information assets. When a company possesses sensitive information, each subsequent sharing of that information creates another dependency, another point of risk. A compromise anywhere in the chain of possession … is the equivalent of a compromise along every point. So the banks in the TJX case were not pleased, the customers who had data compromised were not pleased and TJX had regulatory action [launched] against it because of that breach.

Knowledge at Wharton: Are there any causes at the macro or social level that have led to information security failures?

Matwyshyn: There are some technological causes, structural causes and legal deficiencies that exacerbate the problem. Information security has become more prominent in part because broadband access is so prevalent. People are using the Internet more, which is a good thing. But such information sharing is leading to additional points of vulnerability. Twenty years ago, there weren’t databases full of such rich consumer information as we have today. The ease of sharing information through the Internet generates targets for information criminals. At this point, the identity-theft economy is on par with or surpassing the [illegal] drug economy.

So when you have a financial incentive driving criminals, dissuading them [from perpetrating a breach] is very difficult and they’re going to innovate to stay one step ahead of information-security experts….

[As for those of us who] think about the legal issues, we haven’t resolved the fundamental holes in our legal structures, which might stop some of this from arising. For example, with extradition treaties, we might expect that if we [in the U.S.] wanted to prosecute an individual cyber criminal in an Eastern European country who hacks into a U.S. database, we would simply work with the other country to execute the extradition. Alas, it’s not that straightforward, in part because to get the extradition, the act that was committed must also be illegal in the other country. In many countries where cyber criminals live, the acts that they’re engaging in aren’t illegal, and their governments are not going to extradite these individuals…. On top of that is the lack of a reciprocal regime for recognizing judgments in other countries, which predates the Internet…. We just never resolved the convention on jurisdiction and judgments to allow us to have the judgments of our courts efficiently enforced in other countries.

Now with the rise of international information crime, these problems are highlighted yet again and we need to take a step back legally and work through some of the gaps….

Knowledge at Wharton: What’s the solution? Do we need more international coordination among legal entities?

Matwyshyn: Absolutely. We need to get some harmonization in cyber crime [law] and [a consensus in] the international community as to what is acceptable computer conduct. In an economic downturn in particular, this problem reaches a new level, because with the ease of information crime and the lack of … job opportunities, it is expected to get even worse.

Knowledge at Wharton: The fascinating thing about your book is the examples of the techniques that cybercriminals use, such as phishing and zombies. Could you describe some of those techniques?

Matwyshyn: Phishing takes the form of an email arriving in an unsuspecting user’s inbox. The user sees an email from what [appears to be] a trusted service provider. The email contains a link. The individual follows the link and is asked to provide information, maybe a login, a password or the last four digits of a Social Security number. The information is used by the criminal, sometimes in connection with other information the criminal has purchased online on the black market or even from a legitimate source, which may not have been careful in vetting [who is buying] the information….
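To make the mechanics concrete: a common tell in phishing email is a link whose visible text names one site while the underlying address points somewhere else. Below is a minimal sketch of that single heuristic (not from the book, and far simpler than any real mail filter).

```python
# Minimal sketch: flag a link whose visible text claims one domain
# while the underlying href actually resolves to another.
from urllib.parse import urlparse

def _domain(url: str) -> str:
    """Lowercased host of a URL, with any leading 'www.' removed."""
    return urlparse(url).netloc.lower().removeprefix("www.")

def looks_like_phishing(display_text: str, href: str) -> bool:
    shown = display_text.strip().lower()
    if shown.startswith(("http://", "https://")):
        # The display text is itself a URL: the two domains should agree.
        return _domain(shown) != _domain(href)
    # Display text is plain words; this simple check has nothing to compare.
    return False

# The link reads like the bank's site but resolves elsewhere:
print(looks_like_phishing("https://www.mybank.com/login",
                          "http://203.0.113.9/steal"))    # True
print(looks_like_phishing("https://www.mybank.com/login",
                          "https://mybank.com/login"))    # False
```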

The other possibility of a phishing attack is that by following an unsafe link, a person’s computer becomes part of a zombie “botnet,” meaning that someone remotely takes control of the machine … [which is then] used to attack targets, generate spam or engage in other types of [unwanted] activities. We’ve had instances of power grids being threatened by zombie botnets. And there’s speculation that zombie botnets are being used by some countries as a form of cyber war against countries they don’t [wish to see] prosper.

Knowledge at Wharton: What happened at the job website Monster.com?

Matwyshyn: There was a particular incident at Monster that I mentioned in the book where some individuals posed as employers and, by using Monster resources, mined information about job seekers and then sent them emails containing malicious code, which [the job seekers] downloaded, compromising their security…. The case [involved] individuals with legitimate credentials. Where they got those credentials we’re not sure. It may have been through a different attack before their interactions with Monster. A series of compromised firms may have led to the attacks [on] the Monster database of consumers who had posted their resumes [online]. Of course, with unemployment rates skyrocketing, targets such as Monster will only become more attractive to information thieves. And considering the amount of information that an individual puts on his or her resume, a lot of very sensitive, personally identifiable information [is exposed] that can let someone pose as you very efficiently.

One of the legal controversies that arises in this type of situation involves data breach notification legislation [requiring companies to tell customers if their information has been put at risk]. Data breach notification legislation now exists in [45 U.S. states], the District of Columbia, Puerto Rico [and the Virgin Islands], and there will probably be a few more [states] by the end of the year. There’s talk of harmonization, but we’re uncertain when that’s coming. For example, although there’s no evidence that Monster violated the timeframes stated in the legislation, some critics have asserted that the company didn’t notify its customers as promptly as it could have. If you looked at the website of Symantec, an information security service provider, you [found out about] the information security problem involving Monster sooner than you did from viewing Monster’s own website, which led to some criticism that an elite group of people knew about the compromise [first], rather than the individuals who may have been most affected, the users of Monster.

Knowledge at Wharton: People tend to disclose all kinds of things about themselves in [social] networks like Facebook and LinkedIn as well. Does that affect information security?

Matwyshyn: Very much so. First, as you mentioned … individuals voluntarily disclose a significant amount of information. But if you ask them, they’ll say they’re very concerned about their privacy. [Such] contradictory behavior, [combined] with the difficulty of using privacy settings on websites, [means people] sometimes don’t realize how much information is readily available to the public.

There was an incident last year [involving] consumers’ purchases on other websites being linked to … profiles on Facebook, because Facebook had a piece of code, [Beacon], that would post information about consumers’ purchases on other websites [to their Facebook profiles]. Although this was within the [usage] terms [that consumers] had agreed to when they signed up for Facebook, there was an … instinctive reaction of shock on the part of many users that [such] information was pushed [out]…. That was perceived by many users to be too invasive…. Facebook recognized that the Beacon plan was a little [too] aggressive for consumer tastes … and it consequently made the privacy settings easier to maneuver. But there is a bit of a contradiction between users’ behavior and users’ stated preferences on privacy.

A corollary concern for information security, as a result of social networks such as Facebook, is that platforms on those networks enable developers to generate interesting, fun, new applications for users to interact [with]. There’s really no information security vetting of those applications by the central platform provider, Facebook in this case. The applications request information on a user’s entire portfolio of friends, and then the application provider possesses data on all of those people. What the application provider is using that data for and the extent of secure storage that [it] uses are unknown [to users], and Facebook or another social networking site is not going to [publicize] it. It’s not in their interest to do so because they’d rather not be associated with that relationship. They just want to provide the platform. But most users don’t realize or don’t analyze the extent of information sharing that happens through, for example, the applications….

Knowledge at Wharton: As serious as it is in the case of adults and companies like Monster, these problems become far more serious when children are involved. Diana, tell us about what happens when children are involved in information security failures.

Diana Slaughter-Defoe: A pattern was set up in television broadcasting that applies here. But … a new set of problems [has arisen]…. [In the early days of television] when a TV first arrived at a home, everybody watched it, because there was only money and space for one. Now that kids have their own rooms, they have their own TVs and so forth and so on. That also applies to computers because … that way of approaching the media has been extended to the computing realm….

But it’s a different situation [with computers], so it seems. From all the indications I’ve heard at the kind of conferences that Andrea has presented at and [from hearing people talking generally], you have a very interactive situation with computers. A TV is a passive instrument….

[But with children accessing computers in] the privacy of their rooms or a nook or cranny at home, a parent can’t see who is or is not talking with [a child]. The situation is compounded by a child’s [desire for] privacy, [which arises precisely] when they need a knowledgeable adult to assist them with what’s being directed at them.

Knowledge at Wharton: The book has interesting data about how much the use of computers by children has grown. Could you take us through the growth of the Internet … and computer usage?

Slaughter-Defoe: One of my graduate assistants looked into this, and a figure we use … is that there’s been a 71% increase since 2001 [in the number] of teens on the Internet. If you are 16 years old now and you’re not on Facebook or MySpace, you’re nowhere as far as youth are concerned. That’s their world. They fully expect, for example, something that I could never expect — that they’ll have life-long relationships with their elementary school and high school buddies … as a result of the Internet.

So unlike [with] television, where you’re reacting passively to stereotypes and roles and so forth, the Internet is a very active medium, and it will likely have, if we get to research this more, the same … [if perhaps not more] impact on children … [as] parents themselves.

Knowledge at Wharton: What do you regard as the biggest threats to online privacy for children? What are you most worried about when it comes to individuals or companies exploiting children’s privacy?

Slaughter-Defoe: We drew on the analogy of the drug trade earlier. We might use that again here. People who sell drugs [illegally] don’t care who they sell them to. They are simply interested in making money. There’s no such thing as you being too young for x, y or z, whatever that is. The issue is to market [a product], sell [it] and make [a] profit. So the judgment [being exercised] is probably [of] their [making, not the child’s]. If someone can make [a group of children] feel as if they know what they’re doing, then maybe [that person can become] even more important to [those children] than their parents. We have lots of cases … of kids agreeing to meet someone … and not telling the parents anything about it…. It makes [children] very vulnerable.

I want to add that when the Internet is used well, it’s a wonderful tool. You can have all kinds of wonderful educational projects with kids, engaging them and involving them with people around the world in ways that would not have been possible at an earlier time. But we’re … focusing here on situations that make them more vulnerable and victims.

Matwyshyn: One of the details that you highlight nicely in your chapter is that children’s judgment about what information to disclose or hold back is something to be worried about. Information [that’s disclosed when a child is] 13 can follow that child for the rest of his or her life. And when that child is 26 and trying to get a job, perhaps that unfortunate disclosure at age 13 will come up in a Google search. And it may cost the adult an opportunity that he or she might not even realize has been lost.

Cem Paya: The Internet is a cruel historian.

Knowledge at Wharton: That’s right. Diana, the other thing you mentioned is that since 1998, there’s been the Children’s Online Privacy Protection Act, or COPPA, which is meant to safeguard against these kinds of problems. How effective has it been?

Slaughter-Defoe: There seems to be a consensus that it has not been very effective, maybe for two reasons. [First,] parents are not empowered by it. The act apparently asks the child to make sure that a parent’s permission has been [granted] in order for them to proceed with whatever it is that they’re doing on the Internet…. But it appears that none of this legislation has really [involved much] research and evaluation. When you put [legislation] in place presumably to protect a child, you have to tie it much more carefully to research and evaluation of the long-term … outcomes than maybe you would with [legislation involving] adults, because you don’t really know whether it’s [working] immediately.

In this case, from all indications, parents know less about the Internet at this point than their children. It’s very difficult for them to guide their children and to see that whatever [safeguards] on the Internet are supposedly [protecting the child] are actually doing the job.

The second problem is that the kids on the Internet are as young as two years old…. But there’s a [big] difference between age five or six and 11 or 12, and the situation is different again at age two or three. None of the legislation currently takes into account anything about a child’s developmental level.

Matwyshyn: You also refer in your chapter to the fact that the act requires verifiable parental consent for any data collection and storage from a child under the age of 13. As Diana mentioned, many children, particularly at age 13, are far more technologically adept than their parents. And what is verifiable parental consent? Originally [legislators] wanted faxed transmissions to demonstrate a parent’s approval. But they created an email exception. Well, as any smart 13-year-old will tell you, you can forge your parent’s email about as easily as you can forge a note that you were just at the doctor’s to get out of study hall. Consequently, the ability to verify the identity of the person giving consent is something that has been circumvented by children who [are determined] to get access to a Web service or website.

Slaughter-Defoe: It’s similar to foreign language issues. [For example, think about some] kids who have moved to the U.S. from Colombia. By the time they have been in the States ten years, they might have lost the ability to communicate with their parents because … [the kids’] English is now much better…. And if the parents are not one step ahead of a child, relative to this new language, then they’re really way behind. And, of course, predators and others … are way, way ahead of the kids. I don’t think [many people are] thinking about this right now though.

Knowledge at Wharton: The most serious thing to think about is this: What needs to be done to improve how children’s information is secured online?

Matwyshyn: What Diana argues in her chapter is that one approach [is to look] at ways to empower parents to understand what their children are doing online and to help guide children’s development online in the same ways as they try to do offline. Diana recommends having a computer that’s in a shared space to enable a parent to watch the interactions a child has online. And Diana argues that parents, apart from legal regimes that give them the right to control their children’s data, perhaps need help becoming educated about the Internet and the types of threats to their children that exist online, and learning to protect themselves and their financial information better.

The parents who need to think about protecting their children are the same adults who need to think about protecting their credit card data and defend against phishing and other types of attacks. And they need to know which type of router to buy and what the latest version of encryption is that is most likely to stop an intruder from entering their networks and stealing information.

Knowledge at Wharton: That actually is a perfect segue to Cem’s chapter about financial information. What challenges does that pose?

Paya: A consumer’s financial information poses a unique risk in that it’s accessed by and available to many commercial entities. Yet the more people attempt to use it, the less valuable and the less reliable it becomes as a secret. There is, I think, an old saying from Benjamin Franklin that three can keep a secret if two of them are dead. The problem with credit card, Social Security and bank account numbers is that there are not two or three, but hundreds of businesses, small and large, across different states and different countries that are processing this information.

We’ve distributed the information to a great extent, yet we still expect it to have the same level of confidentiality that [it once had]. [Just the] knowledge of a credit card number and its expiration date allows a criminal [for all intents and purposes to] print money at the expense of that [credit card holder]. We are essentially doing something that’s not entirely consistent with the nature of the information.

Knowledge at Wharton: When it comes to financial data, do security breaches have a different root cause than other kinds of information?

Paya: [All] security breaches are ultimately caused by a failure in a process or implementation of a security policy. But the damage [from financial information breaches] does have a unique, unusual root cause [in] that financial information cannot at once both be distributed to thousands of entities and be so valuable that mere knowledge or access to it is enough to cause monetary losses. We can’t have it both ways.

It’s not so much that the breaches are surprising. It’s that when a breach occurs, the fact that damage control and containment are impossible is a function of how we … use financial information.

Knowledge at Wharton: So financial information is unique in the sense that it is both confidential and widely disseminated?

Paya: Exactly, that’s the paradox.

Knowledge at Wharton: How can a balance be maintained to allow online commerce to proceed? Clearly online commerce is growing, but we need to figure out a way to balance those two things.

Paya: Since we are not going to put the genie back in the bottle, the only option is to reduce the secrecy requirement and ask, “What happens if my financial information is no longer that secret? What if my credit card number is known by other people? Is that a situation we can deal with?” And surprisingly, for that particular type of information, the answer [to the latter] turns out to be, “Yes.” The credit card networks realize that they can absorb the cost of fraud entirely. They can still say to customers, “Continue to shop freely; you can disclose your credit card number to anybody you like. Continue typing in that number. If there’s any fraud, the system will absorb the losses and you don’t have to worry about it.” And they have found that this risk management actually works, that the profits made by the credit card networks more than outweigh the fraud losses they absorb.

Unfortunately that’s not the case for other things. Social Security numbers, which have become essentially financial information … because of their use in credit reporting, aren’t at that stage yet. But for credit cards, we have [achieved a balance]….

Knowledge at Wharton: The paradox that you talk about also applies to financial information generally. For example, a company about to merge with another … [will keep] information about that event confidential [during negotiations]…. But once the announcement is made it is, of course, expected to be widely and publicly disclosed. Are there any lessons from the offline world about how you manage this paradox of confidentiality versus the public nature of financial information that can be applicable to this space?

Paya: In the example you’ve mentioned, the shelf life of the secret is limited. If the merger talks are going on for three months, all you have to do is keep it secret for three months…. Best practice is to make sure that your secrets have short shelf lives and can be frequently renewed. That’s not something generally followed with consumer data. Credit cards have multi-year expiration periods and Social Security numbers are indefinite [since] you have one for life.

The lesson from the offline world is … acknowledging the fact that the longer a secret exists, the greater the probability of a breach of confidentiality. So try to limit that window of time. That’s a lesson that hasn’t quite carried over to consumer financial data, because much of [it] has a very long shelf life.
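To make the shelf-life idea concrete, here is a minimal sketch (hypothetical names, not any real payment API) of a secret designed to expire: a random token honored only for a short window, so a leaked copy quickly loses its value.

```python
# Minimal sketch: a short-lived token in place of a long-lived secret.
import secrets
import time

TOKEN_TTL_SECONDS = 15 * 60          # hypothetical 15-minute shelf life
_issued: dict[str, float] = {}       # token -> expiry timestamp

def issue_token() -> str:
    """Hand out a fresh random token that expires after TOKEN_TTL_SECONDS."""
    token = secrets.token_urlsafe(32)
    _issued[token] = time.time() + TOKEN_TTL_SECONDS
    return token

def is_valid(token: str) -> bool:
    """A token is honored only while its shelf life lasts."""
    expiry = _issued.get(token)
    return expiry is not None and time.time() < expiry

t = issue_token()
print(is_valid(t))                    # True while fresh
_issued[t] = time.time() - 1          # simulate the shelf life passing
print(is_valid(t))                    # False: a stolen copy is now worthless
```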

Knowledge at Wharton: What is the legacy design problem you refer to in the book and how does it affect financial information security?

Paya: The legacy design problem … is the assumption built into many systems and processes we have today that the way transactions will be carried out is by a disclosure of secrets. In other words, to buy something on the Web, I must disclose my credit card number to the merchant. To obtain credit, I must disclose my Social Security number…. To sign up for a cell phone service, I have to disclose my Social Security number.

[What] if we were to say, “Let’s stop doing that and come up with a better way for consumers to, for example, authorize payment or run background checks?” And then say, “Here’s this brand new, far more secure, better designed system.” We’re still stuck with all the processes … that only understand credit card or Social Security numbers. Even if magically … we could deploy something better that gave consumers more control over their data and wouldn’t require them to disclose secrets as part of everyday transactions, there would be a huge and slow migration effort to make a dent in the problem. We’re not starting from scratch, but from the assumption that it’s okay to disclose secrets and that’s how many transactions work.
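One illustration of what such a redesign could look like, sketched below with invented function names and a simplified shared-key setup rather than any deployed payment protocol, is challenge-response authorization: the consumer's device proves it holds a key by signing a fresh, per-transaction challenge, so the secret itself is never disclosed in the transaction.

```python
# Minimal sketch: authorize a payment without ever transmitting the secret.
import hmac, hashlib, secrets

# For brevity the issuer and the device share one symmetric key here;
# a real design would use per-device or asymmetric keys.
device_key = secrets.token_bytes(32)

def issue_challenge() -> bytes:
    """Issuer generates a fresh challenge for each transaction."""
    return secrets.token_bytes(16)

def device_sign(challenge: bytes, amount_cents: int) -> bytes:
    """Device signs the challenge plus the amount; the key never leaves it."""
    msg = challenge + amount_cents.to_bytes(8, "big")
    return hmac.new(device_key, msg, hashlib.sha256).digest()

def issuer_verify(challenge: bytes, amount_cents: int, sig: bytes) -> bool:
    msg = challenge + amount_cents.to_bytes(8, "big")
    expected = hmac.new(device_key, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)

ch = issue_challenge()
sig = device_sign(ch, 4999)
print(issuer_verify(ch, 4999, sig))   # True: this exact payment is approved
print(issuer_verify(ch, 9999, sig))   # False: the signature covers nothing else
```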

Knowledge at Wharton: Could you discuss some of the biggest mistakes companies make while trying to protect the privacy of their financial information?

Paya: The biggest mistake … is not having a clear handle on where the information lives. The design of large systems calls for a lot of redundancy. Data is copied, duplicated, backed up, sometimes sent to different partners and data warehouses, shipped off site in case some catastrophic event destroys your data center. So data has a tendency to replicate itself. And one of the big challenges is when companies lose track of where the information is. It’s very hard to point to a particular computer or a particular rack and say, “This is where all the credit cards live.” …. The problem is that the more spread out they are, the more points of failure you have to worry about…. The first challenge [arises from] not having an inventory of what you’re collecting and, even if you know where you collect it, not knowing where exactly you put it.
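A crude illustration of what taking that inventory can mean in practice: the sketch below (deliberately simple, nothing like a real data-discovery product) walks a directory tree and flags files containing digit strings that pass the Luhn checksum real card numbers use, one way to start answering the question of where the credit cards live.

```python
# Minimal sketch: scan a directory tree for card-number-like strings.
import os, re

CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits: str) -> bool:
    """Luhn checksum: the check that real card numbers must pass."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:          # double every second digit from the right
            d = d * 2 - 9 if d > 4 else d * 2
        total += d
    return total % 10 == 0

def scan(root: str):
    """Yield paths of files that appear to contain card numbers."""
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as fh:
                    text = fh.read()
            except OSError:
                continue
            for match in CANDIDATE.finditer(text):
                digits = re.sub(r"\D", "", match.group())
                if luhn_ok(digits):
                    yield path
                    break

for hit in scan("."):
    print("possible card data in", hit)
```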

Matwyshyn: [Cem’s] commentary is borne out by PricewaterhouseCoopers, which did a survey of chief information officers, chief security officers [and other] high-ranking … decision makers…. One of the startling [findings] is that a large number of respondents, approximately 30%, could not say where all of their information assets were stored, and this is self-reported. A significant number, similarly, could not identify the major threats that the company faced in terms of information security. And many of the individuals stated their organizations did not have a comprehensive information security policy.

There’s a broader lack of planning in many enterprises. In their defense, this field is relatively new. However, the downside of not securing information assets is so severe that it’s important that companies start to focus on process-based, top-down initiatives to incorporate information security at every level of their enterprise. Really, the neglect is reaching the point that … an argument could be made that the lack of planning that’s prevalent in U.S. companies may give rise to [claims of] breach of fiduciary duty. That’s serious. We’ve reached a turning point. This is when it really needs to be addressed aggressively in a process-based approach throughout enterprises.

Slaughter-Defoe: A lot of the people [running companies] are parents, and if this is how they’re functioning at their workplace, you can imagine what they must not be doing at home.

Knowledge at Wharton: If the CEO and the CIO of a company were in this room with us right now, had heard everything you said, and wanted each of you to give one piece of advice on how they could do a better job of protecting their information assets, what would that advice be?

Matwyshyn: The first piece would be to set up a top-down process and a culture of security. Have every employee go through mandatory information security training regularly. Have every employee know what to do in the case of an information security breach. One of the key mistakes that many companies make, and I talk about this a little bit in my chapter, is that people from the outside [of a company] will report a security breach and employees simply won’t know what to do with the information. They won’t know who to contact internally to stop the bleeding. Each individual in an organization needs to recognize the importance of the team effort in keeping information secure. And the tone really needs to come from the top.

Slaughter-Defoe: This problem, based on what I’ve heard today and at other conferences, has reached a point where it needs to be called to the attention of the nation’s Department of Homeland Security. They need to get this book. They need to look this over…. They need to think about this in terms of future directions of the nation. There was one comment [today at the conference] from a gentleman about how his state … [is] at least ensuring that there is appropriate communication between people who were engaged in rescue operations. In a manner of speaking, if you project out over the next 20 years or the next generation, that’s what we’re talking about here. We’re talking about [coordinating] resources at the state level to protect families and the places where people work, now that the genie is out of the bottle. I don’t think anybody, say, 20 or 30 years ago thought this was a serious issue that they would have to address. But it’s very much with us. And it puts us in a new era.

Knowledge at Wharton: Cem, what do you think?

Paya: I would echo Andrea about instituting a security policy, but phrase it slightly differently. I would suggest to the CIO, CTO or CEO to build a culture of risk management around information security. Because in my opinion risk management is the right way to look at this, not risk elimination, not zero vulnerability.

Slaughter-Defoe: Unmanageable.

Paya: Unrealistic along those lines. But [build] a culture of risk management in the same way that you would hedge against foreign exchange risks and against losing a customer or some investments going bad. Information security [needs] the same perspective. This is at the strategic level.

At the tactical level, my advice in terms of risk management is to do more with less. The less data you have, the less your risk is. If you don’t need the data, don’t collect it. If you don’t need it anymore, erase it, delete it, shred it. Make sure that fewer systems have access to the data, and fewer people have access to the data. The more you can design a system so that you can do more with the same amount of data, the better you’ll manage your risks.
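To make that tactical advice concrete, here is a minimal sketch of data minimization applied to card numbers, with illustrative field names: persist only what the business purpose requires, say the last four digits for customer support plus a keyed tag for recognizing repeat cards, and never store the full number.

```python
# Minimal sketch: keep less data, carry less risk.
import hmac, hashlib, os

# In practice this key would live in protected key storage; card numbers
# have little entropy, so an unkeyed hash could be brute-forced.
MATCH_KEY = os.urandom(32)

def minimize_card_record(pan: str) -> dict:
    """Persist the last four digits and a keyed tag; discard the full PAN."""
    tag = hmac.new(MATCH_KEY, pan.encode(), hashlib.sha256).hexdigest()
    return {"last4": pan[-4:], "match_tag": tag}

record = minimize_card_record("4111111111111111")
print(record)   # the full card number appears nowhere in what we store
```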