Secunia issued their annual report this week, covering the “sheer numbers of vulnerabilities” discovered in 2008. One of the sections, dealing with browser vulnerabilities, offers some interesting metrics. A surefire way to pick a fight in some circles is to declare that “browser x” is more secure than “browser y”. The actual browser will vary, but no one will see eye-to-eye on this issue.
The data that Secunia offers up is only a small glimpse into the world of browser vulnerability research. According to the report, more than 200 vulnerabilities were reported in 2008 across the major browsers. Opera had the fewest reported vulnerabilities with 30, followed by Internet Explorer with 31 and Safari with 32, while Firefox led the pack with 115.
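As a quick sanity check on the report's "over 200" figure, the four per-browser counts above can be tallied and ranked. This is just a sketch over the numbers quoted in this article, nothing more:

```python
# Reported 2008 browser vulnerability counts, as cited in the
# Secunia annual report discussed above.
counts = {
    "Opera": 30,
    "Internet Explorer": 31,
    "Safari": 32,
    "Firefox": 115,
}

# The four counts together account for the "over 200" total.
total = sum(counts.values())
print(f"Total reported: {total}")  # prints "Total reported: 208"

# Ranked from fewest to most reported vulnerabilities.
for browser, n in sorted(counts.items(), key=lambda item: item[1]):
    print(f"{browser}: {n}")
```

Firefox's 115 accounts for more than half the combined total, which is exactly the kind of headline number the out-of-context arguments below lean on.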
Statistics like these are what fuel the debate over which browser is the most secure. Taken out of context, the numbers suggest that Mozilla’s Firefox has the most security problems; by the same logic, Opera is clearly the most secure browser out there. However, these arguments are flawed, not just because the numbers are out of context, but because security is about more than just vulnerabilities.
Browsers, like any software, are vulnerable by default. There will always be a glitch, bug, or gaping hole that a criminal will find a way to exploit. This happens more often than many will admit, and if you look at the low numbers reported by Secunia, you can see that something is off.
It’s not that Secunia’s data or collection methods are flawed; it’s that they can only work with known vulnerabilities: those disclosed by the person who discovered them (full and responsible disclosure methods are given equal listing in the report), or those reported in security advisories from the vendor.
Taking the vulnerability counts in the Secunia report and placing them in context adds depth to the numbers. Take browser plug-ins, for example, which are the root cause of most browser-based vulnerabilities. ActiveX had 366 reported vulnerabilities in 2008, Java had 54, Flash 19, QuickTime 30, and Opera Widget had none. There is a listing for a single Firefox Extension, but to be fair, Flash and Java issues also affect Mozilla’s browser in some cases.
Looking at the list of plug-in vulnerabilities and relating them to a browser, you can see how the numbers can be slanted out of context. In this case, considering that ActiveX is the plug-in with the most issues, you can now argue Firefox is the better alternative to Internet Explorer. You would be incorrect if you did, but again this is how the numbers are used. Most of the arguments forget that Flash, QuickTime, and Java can be used against each of the four major browsers.
Another aspect of the Secunia report, and another metric used in arguments online, is the time it takes to patch vulnerabilities within a browser. Alongside the previous measurements of plug-ins and total reported vulnerabilities, the time it takes to patch these flaws is important, and is often used to prove that one browser is more secure than another.
In their report, Secunia only listed metrics for the “Window of exposure,” meaning how long a user was exposed to risk due to an unpatched flaw. The metrics in this section are critical, but need to be understood in context. They only count the total days a user was exposed to risk by vulnerabilities that were reported without notice to the vendor; in other words, the vulnerabilities listed in the report were disclosed to the public and the vendor at the same time.
Based on the Secunia report, only Internet Explorer and Firefox are listed. Internet Explorer has six vulnerabilities listed, and Firefox has three. Of the listed vulnerabilities for Internet Explorer, three are still unpatched. However, of these three, two are less critical, and one of them is listed as not critical.
The three patched Internet Explorer issues took an average of 99 days to patch; they are rated high, moderate, and less critical respectively. In comparison, Firefox took 44 days on average to patch its three listed issues, two of which are rated not critical, with the last rated less critical.
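The averages here are just total days of exposure divided by the number of patched issues. The per-vulnerability day counts below are made-up placeholders chosen only so the arithmetic reproduces the report's 99- and 44-day averages; Secunia's report is the source for the averages, not for these individual figures:

```python
# Hypothetical per-vulnerability exposure windows, in days.
# Only the resulting averages (99 and 44) come from the Secunia
# report; the individual day counts are illustrative placeholders.
ie_days = [150, 90, 57]       # three patched Internet Explorer issues
firefox_days = [60, 40, 32]   # three patched Firefox issues

ie_avg = sum(ie_days) / len(ie_days)            # 297 / 3 = 99.0
firefox_avg = sum(firefox_days) / len(firefox_days)  # 132 / 3 = 44.0

print(f"Internet Explorer: {ie_avg:.0f} days on average")
print(f"Firefox: {firefox_avg:.0f} days on average")
```

Note how little the average tells you on its own: a single long-unpatched, low-severity flaw can dominate it, which is part of why this metric is a weak basis for the "more secure" argument.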
Given the imbalance of the metrics in the Window of exposure section, there is little information to be gained that proves or disproves browser security. Reports like this are great when doing research, but using them as supporting facts when arguing that one browser is more secure than another is ultimately pointless.
As mentioned, browsers will always be insecure. Somewhere, somehow, a bug or flaw in code will be exploited. Moreover, because most browser-based security issues actually originate with plug-ins, with attackers targeting the end user via the browser through those means, time-to-patch metrics and reported vulnerability stats are just numbers on a bit of paper.
To judge browser security on vulnerability reports or time-to-patch metrics isn’t fair. As browsers get more complex, so does the code behind them, and the more code you add to something, the more risk you assume. Criminals are smart: they know they can’t attack Internet Explorer directly, but attacking it via ActiveX to target the user works in most cases. Firefox is solid and secure, but targeting Firefox users by exploiting Flash issues does work.
So which browser is the most secure? None of them; no single browser is the most secure when compared to another. They are all filled with bugs, some of which will lead to security problems directly, while other minor bugs will be exploited to create a security problem. The trick to browser security is to keep on top of patches, update plug-ins when new versions are released, and use caution and common sense when surfing the Web.