It seems that software security vulnerabilities are rarely out of the news, from bugs which can compromise SCADA systems to flaws in game design which allow fraudsters to cash in at the expense of unwitting users.
Defects broadly come in two flavours: bugs and flaws. Knowing the difference between them is at the heart of securing our software. It’s an important distinction as the activities we need to undertake to find and fix bugs are totally different from what we do to find and fix flaws.
Attackers do not care about bugs versus flaws. Any defect at all is enough to accomplish their goals. For us, though, our software security activities in our lifecycle must cover both. It is also essential that organisations understand the difference between bugs and flaws so they can get a handle on the cost to fix them and the residual risk from not fixing them.
The simplest defect is a bug. It tends to be localised to a specific region in the code (a function, a method, a configuration file, etc.). This could be something as simple as a regular expression that allows unacceptable characters as input, or it might be subtle, like leaking resources in an unusual corner case. The important qualities that mark a bug are: it can be found in the code and it can be fixed in the code with little or no disruption to other parts of the system.
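To make the regular-expression example concrete, here is a minimal, hypothetical sketch of such a bug and its one-function fix. The validator names and the "alphanumeric usernames only" policy are assumptions for illustration, not taken from any real system:

```python
import re

# Hypothetical validator: the intent is to allow only alphanumeric
# usernames, but the unanchored pattern matches *anywhere* in the
# string, so unacceptable characters slip through.
def is_valid_username_buggy(name: str) -> bool:
    return re.search(r"[A-Za-z0-9]+", name) is not None

# The fix is local to this one function: anchor the pattern so the
# whole string must consist of allowed characters.
def is_valid_username_fixed(name: str) -> bool:
    return re.fullmatch(r"[A-Za-z0-9]+", name) is not None

print(is_valid_username_buggy("ab;rm -rf /"))   # True  (the bug)
print(is_valid_username_fixed("ab;rm -rf /"))   # False (the fix)
```

Note that the fix touches a single function and disrupts nothing else in the system, which is exactly what marks this defect as a bug.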
Many injection attacks (e.g., SQL injection, cross-site scripting, log injection) boil down to a bug. Likewise, direct object reference, failure to check authorisation and misconfiguration problems also tend to be bugs; there are, of course, exceptions to this rule.
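The SQL injection case illustrates why these count as bugs: the fix is a one-line, local change. A minimal sketch, using a throwaway in-memory SQLite database and hypothetical table and function names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Vulnerable: attacker-controlled input is spliced into the SQL text.
def find_user_buggy(name: str):
    return conn.execute(
        "SELECT role FROM users WHERE name = '%s'" % name).fetchall()

# The fix is local: a parameterised query, nothing else changes.
def find_user_fixed(name: str):
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
print(find_user_buggy(payload))  # [('admin',)] -- leaks every row
print(find_user_fixed(payload))  # []           -- returns nothing
```

No business story changes, no new error states for the user: the application behaves identically for legitimate input.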
The way we recognise bugs is by examining what is wrong and describing what we do to fix them. If you can describe the fix without altering any business stories for your application, it is probably a bug. If adding a few lines of pattern matching on input, or adding an isolated authorisation check, makes the problem go away, then we have probably found and fixed a bug. Also, if the problem and the fix are localised to a few files, again, we have probably fixed a bug.
A flaw is a much more serious problem to fix because it represents a mistake in the design or architecture. When software has a flaw, it will exhibit the problem even if all the code is written exactly as designed. The common flaws we see include client-side trust, misuse of cryptography and failure to enforce workflow.
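Misuse of cryptography shows the pattern well. In this hypothetical sketch (the function names and parameters are assumptions for illustration), passwords were designed to be stored as unsalted MD5 digests; the replacement function is easy to write, but the flaw is architectural, because every stored credential must be migrated and the login, registration and reset flows all change with it:

```python
import hashlib
import os

# The flawed design: a fast, unsalted digest that is cheap to crack
# and identical for every user with the same password.
def hash_password_flawed(password: str) -> str:
    return hashlib.md5(password.encode()).hexdigest()

# The replacement: salted, slow key derivation (PBKDF2). The code
# change is small; the ripple through the stored data and the
# surrounding workflows is what makes this a flaw, not a bug.
def hash_password_fixed(password: str, salt=None):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest
```

Because the old digests cannot be reversed, migrating them typically means rehashing at each user's next login, which is a design change, not a patch.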
What makes client-side trust a flaw rather than a bug is the breadth of the changes required to fix the defect. When we fix client-side trust issues, the client will receive error messages and enter error states that were not possible before. The server will actually reject bad requests, and the user may see new and different errors. Business stories have to be written to cover the changed user experience, and that is usually the hallmark of a flaw.
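A minimal sketch of the shape of such a fix, using a hypothetical checkout handler (the catalogue, field names and prices are invented for illustration):

```python
# Server-side prices; in the flawed design the server never consults them.
CATALOGUE = {"song-123": 0.99, "album-7": 9.99}

def checkout_flawed(order: dict) -> dict:
    # Trusts the client-supplied total: a tampered client pays whatever
    # it likes, and the server has no error path to reject it.
    return {"status": "charged", "amount": order["client_total"]}

def checkout_fixed(order: dict) -> dict:
    # The server recomputes the total from its own catalogue and rejects
    # any request that disagrees -- a new error state the client, and the
    # business stories, must now account for.
    server_total = sum(CATALOGUE[item] for item in order["items"])
    if round(order["client_total"], 2) != round(server_total, 2):
        return {"status": "rejected", "error": "price mismatch"}
    return {"status": "charged", "amount": server_total}

tampered = {"items": ["album-7"], "client_total": 0.01}
print(checkout_flawed(tampered))  # {'status': 'charged', 'amount': 0.01}
print(checkout_fixed(tampered))   # {'status': 'rejected', 'error': 'price mismatch'}
```

The handler itself is short, but the fix is not contained in it: every client must now handle rejection, which is why this defect is a flaw.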
We recognise flaws, then, by a few characteristics. If we change the design or the architecture, it is probably a flaw. If we modify the fundamental interaction between modules to fix the problem, it is probably a flaw. If concerns like user experience or backwards compatibility come up, we are probably considering a flaw.
The 50/50 problem
In Cigital’s 20 years of experience, we find security defects falling roughly 50/50 into the bugs and flaws categories. Therefore, if we want to protect our software and build security in, we must have activities in our lifecycle that cover both.
We find bugs by analysing source code either manually or with a tool and we triage and fix them through standard defect management practices. A bug is a bug and fixing it has costs in time and money, but these are typically tractable and we know how to handle these sorts of defects.
Flaws are more complex and the trade-offs for fixing them are usually significant; we must find these through threat modelling, architecture risk analysis, design review, and similar activities. Fixing them requires changes to the architecture or the design, and those might ripple across the software and on into future software designs. We have to think about the future of our software, the ecosystem in which we operate, and what may be important to us later on.
Most importantly, business requirements often constrain our ability to change our architecture. A bug is a failure to build according to the design. An incorrect design, however, might prevent customers from using the software. It might cause the software to do less than it did before. We might remove behaviours that users had come to rely on, so even when security flaws are found, fixing them may involve very delicate and complex trade-offs.
Ultimately, we can fail if attackers find a single defect (bug or flaw). We must have a way of finding and fixing both kinds of defects; if we make our architecture more secure in a way that damages our business, however, we also fail.
Spotify – no simple fix
One recent example of this challenge was the Spotify vulnerability which allowed free downloads for users of the service. Downloadify allowed users of Google’s Chrome browser simply to save the music they were streaming from Spotify. The music was stored as unencrypted MP3 files, one of the most common and convenient formats available.
The fundamental challenge here is that security, while important, is in tension with the goals of Spotify, which are to play on many devices and to deliver a good experience for the listener. A fix would require Spotify to encrypt the music just before sending it to the browser, with decryption happening at the user’s end of the connection.
If Spotify were to encrypt or do anything to the audio at the server side, the device would have to decrypt data before it could decode the MP3 audio. Whilst a PC or laptop could do this quickly enough for a good listening experience, a tablet or phone couldn’t. The mobile user experience, as well as battery life, would be a disaster. Requiring decryption on the client would provoke the ire of handset manufacturers, electronics manufacturers, and mobile users.
What this example shows us is a trade-off in the architecture. We can call the lack of encryption a flaw, as fixing it would require new design and architecture. Yet if Spotify had chosen to encrypt music files, they would be aggravating rightsholders, compromising the user experience, and ultimately damaging the profitability of their business.
Organisations and developers face this choice every day, and there is no simple answer or silver bullet. What we must do as an industry is consider security at the beginning of the development lifecycle, make a calculated and conscious assessment from the start, and make sure that whatever we build can support growth and development in the future. Understand the strengths and weaknesses of your software, but most importantly, get the core design solid to minimise the chance of refactoring a major flaw somewhere down the line.