John Markoff of the New York Times wrote an interesting article this weekend. The central theme was based on a single question in the title: “Do we need a new Internet?” The short answer is no, but to understand that you have to look at the article and see Mr. Markoff’s reasoning for suggesting it.
To start, the article overall comes off as FUD. This is the type of doom and gloom you would expect from a security vendor, not from the university where many of the technologies used on the Internet originated. Stanford, where Mr. Markoff teaches, is cited in the article with a quote that reads as apocalyptic in context. Adding to that is a second quote that harks back to 1941-style fear.
“Unless we’re willing to rethink today’s Internet,” says Nick McKeown, a Stanford engineer involved in building a new Internet, “we’re just waiting for a series of public catastrophes.”
“If you’re looking for a digital Pearl Harbor, we now have the Japanese ships streaming toward us on the horizon,” said Rick Wesson, CEO of computer consulting company Support Intelligence.
Mr. Markoff says in his article that the IT security industry is expected to hit $79 billion USD next year, with no mention of the source of that figure, so he could be correct or flat-out wrong. Yet it is what he says afterwards that does the damage.
“Despite a thriving global computer security industry that is projected to reach $79 billion in revenues next year, and the fact that in 2002 Microsoft itself began an intense corporatewide effort to improve the security of its software, Internet security has continued to deteriorate globally.”
Has it really? The driving example in Mr. Markoff's article is Conficker, the Worm that has recently reached near infamy due to its infection rate, its unknown origins, and the design specs used in its code. Even Microsoft has placed a $250,000 USD bounty on information leading to the arrest and conviction of the Worm's designer(s).
However, does Conficker prove that we need a new Internet? No, it does not. Conficker proves more care is needed in how networks are managed. Is it any wonder most of the largest victims of the Conficker Worm are businesses? Look at how the Worm spreads: weak network shares, weak system policy with regard to USB or external devices, and a lack of security patching.
How does any of this prove that the Internet's infrastructure is in need of an overhaul? It doesn't. Sure, new infrastructure upgrades are needed, but only so the Internet can adjust to the explosion of new technologies and the growth of the overall user base. This is one of the reasons behind the push to IPv6, which has stalled.
Other infrastructure improvements have been suggested as well, which would improve security on the Internet. One of the biggest would be DNSSEC, which combines DNS with PKI to create a more secure naming infrastructure. However, DNSSEC has its own problems, like leaving zone data exposed to enumeration. In addition, ISC has released security upgrades for DNSSEC, proving that even DNSSEC needed additional hardening.
Designing a new Internet to deal with security problems looks like a silver-bullet approach. The reality is that by introducing new technologies, you only introduce new security problems to deal with. Security, even now, is still reactive. You can argue that developing a new Internet to address security is a proactive approach, but, as Mr. Markoff points out, a new Internet might look like a gated community "where users would give up their anonymity and certain freedoms in return for safety."
The idea of security as a tradeoff is well established. You see this in both network security and personal security. However, if you only feel more secure, there is no real security to be had. This is because of the same cat and mouse game that has existed for decades. The good guys design a better mousetrap to secure networks and users; the criminals strive to get past the mousetrap and exploit people and systems.
We do not need a new Internet. The old one is just fine. Using the Conficker Worm as justification for a new infrastructure is FUD. What we need is accountability. The IT market is full of applications, hardware, software, and security experts. So hold them accountable.
Since Conficker is cited in the article as one of the reasons for a new Internet, let's break down the layers that allowed it to spread and see if a new Internet would have helped prevent it.
Patching – Businesses that insist security patches and other system updates must be vetted before deployment see only half of the picture. True, in a large network environment you need to make sure what you deploy will not break something and cause the business to suffer. This applies to security patching as well.
However, the lack of integration where IT security and business productivity are concerned is a serious flaw. IT within a business is only now starting to grow to where it is considered part of the overall business plan. In most cases, businesses adopt a “just make it work” stance when it comes to IT. This stance, plus the lack of patches, allowed Conficker to enter business networks and spread. Nothing related to the technology used for the Internet infrastructure would have stopped this.
Policy – Another layer of Conficker's infection method is its ability to place itself on USB drives and other external storage media. When a user places an infected USB drive into a clean system, Conficker takes advantage of AutoRun and executes. By removing the AutoRun option, preventing employees from using external devices, or enforcing a rule that employees use only approved external devices, Conficker could have been stopped dead in its tracks, at least for this one method of infection and mass reproduction.
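For reference, one widely documented mitigation from that era was disabling AutoRun for every drive class through Windows policy. A minimal sketch as a registry fragment (in practice this would be pushed through Group Policy rather than edited by hand):

```
Windows Registry Editor Version 5.00

; Disable AutoRun on all drive types; the bitmask 0xFF covers every drive class,
; including removable USB media -- the vector Conficker abused.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"NoDriveTypeAutoRun"=dword:000000ff
```

Set machine-wide, this removes the AutoRun prompt entirely, so an infected drive cannot execute its payload simply by being plugged in.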
How hard is it, when rolling out new laptops or desktops, to use a secured system image that follows best practices, business continuity planning, and other planning that merges IT with the company's business model? When you buy new hardware from IBM, for example, you can supply your own image for the systems.
There are network technologies capable of checking drives as they are accessed, which would prevent various malicious activities from taking place -- some NAC appliances come to mind here. Another policy issue, and another of Conficker's infection methods, is password usage. Weak passwords on network or admin shares allow the brute-force cracking Conficker uses to work.
Again, how hard is it to develop a password policy and enforce its usage? If there are complaints about password management, such as passwords being too hard to remember, there is training for end users on proper password creation, and there are technologies that offer all sorts of management and access options for passwords.
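The passwords Conficker brute-forced were exactly the dictionary-style strings a basic policy would reject. A minimal sketch of such a policy check (the length threshold, word list, and character-class rule here are illustrative, not any particular standard):

```python
import re

# Illustrative policy parameters; real organizations tune these.
MIN_LENGTH = 12
COMMON_PASSWORDS = {"password", "admin", "123456", "letmein", "qwerty"}

def password_ok(candidate: str) -> bool:
    """Return True if the candidate passes this sketch of a password policy."""
    if len(candidate) < MIN_LENGTH:
        return False
    # Reject the dictionary words worms like Conficker try first.
    if candidate.lower() in COMMON_PASSWORDS:
        return False
    # Require at least three of four character classes to resist
    # straightforward dictionary and brute-force guessing.
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
    return sum(bool(re.search(c, candidate)) for c in classes) >= 3

print(password_ok("admin"))              # prints False: short and common
print(password_ok("Blue-Horse-42-Sky"))  # prints True: long, mixed classes
```

A check like this, wired into account creation and periodic audits of admin shares, closes the specific door Conficker walked through.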
Accountability – Software, hardware, and experts: three things IT has with regard to security that are supposed to prevent things like Conficker and other attacks. If the business plans call for a device, a piece of software, or a person to protect the network, and it or they fail at that job, why is there no accountability?
Hold the experts to their expert opinion. Hold the vendors to their promises. While nothing provides absolute security, if there is a flaw in deployment, or complacency when it comes to checking and testing security, someone needs to address it. Those responsible for deployment need to be asked why and how.
Again, business models and IT models within the same company often never end up on the same level of operation. They are two different sides of the same coin instead of being a singular aspect of an overall operation. Until this is fixed, security will always end up one step behind.
Technology – In each of the areas listed above, technologies to address the various problems are referenced. So the argument can be made that incorporating them into a brand-new Internet is the solution, but that assumes reinventing the wheel is a normal response to dealing with problems.
Why would a new Internet be any better than the one we have now? It wouldn't, and as an example you need only look at the Trustworthy Computing Security Development Lifecycle (SDL) that Microsoft adopted years ago. Microsoft built a process for developing software with security in mind from the very start of the development lifecycle. This is something discussed all the time, where the logic is that if things were developed more securely (Web applications, software, etc.), then most of the problems would simply fade away.
Yet, has the SDL really helped protect Microsoft products from harm? Yes, it has, as long as the harm comes from an attack based on methods from 1998 or so. When the SDL prevented the old attacks from working, criminals moved to new attacks that take advantage of new technologies.
So no, a new Internet will not protect you. All a new Internet would do is open a new era of security problems to solve, and introduce a new breed of Internet criminal.
Lastly, the idea of a new Internet fails to address one of the largest factors in Internet or network-based security, the end user. All the technology on the planet will not stop the power of an end user when it comes to clicking and installing anything they want, security implementations be damned.
We would be better off improving what we already have by anticipating the next threat, instead of reacting to the current one.
[Note: I respect John Markoff. I think he is a great reporter, and he knows what he is talking about when he writes or teaches. However, I completely disagree with the notion that we need an entirely new Internet to tackle the security problem. This article is my opinion alone, and does not necessarily reflect the opinions of the staff at The Tech Herald or Monsters & Critics.]