Why Developers Build Insecure Apps


Insecure code continues to be a major cause of data breaches today. So why are so many talented developers writing "bad code"? In fact, software vulnerabilities have little to do with your talent as a developer - they come down to your security training and experience, and your team's ability to apply industry best practices when building an app.

Software vulnerabilities have been a serious problem for years. Consider these statistics:

  • HP's 2012 Cyber Risk Report found that 77% of mobile apps were susceptible to information leakage vulnerabilities - and 44% of web and mobile apps were vulnerable to cross-site scripting attacks.
  • In 2013, Veracode's State of Software Security Report found that 70% of web and mobile applications failed to comply with enterprise security policies.
  • Cenzic's 2014 Application Vulnerability Trends Report found that 96% of all web/mobile apps had at least one serious vulnerability (down from 99% the year before).

A further reminder of the shortcomings in secure development practice is the OWASP Top 10 - a ranking of the most critical flaws found in web applications, updated every three years. Many of these vulnerabilities - injection, broken authentication, cross-site scripting, insecure direct object references and cross-site request forgery - have appeared in the OWASP Top 10 since 2007, meaning developers keep making the same mistakes.
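Injection has sat near the top of that list for years, and it's easy to see why the mistake recurs: the vulnerable and safe versions of a query look almost identical. Here is a minimal sketch in Python using the standard sqlite3 module (the users table and function names are hypothetical, purely for illustration):

```python
import sqlite3

def find_user_unsafe(conn, username):
    # VULNERABLE: user input is concatenated into the SQL string, so an
    # attacker-controlled value can change the meaning of the query.
    query = "SELECT id, name FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # SAFE: the ? placeholder makes the driver treat the input strictly as data.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

# Demo against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 2 - the injected clause matches every row
print(len(find_user_safe(conn, payload)))    # 0 - no user is literally named that
```

The discipline itself - never building queries by string concatenation - closes the door on the whole injection category, which is exactly the kind of rule worth writing down before coding begins.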

So why do developers keep running into the same application security problems? The causes are varied: the business culture of startups and enterprises, demands on the development team, lack of security experience, bad habits and the view that security is a cost rather than a benefit. With the growing risk of data breaches to companies and consumers, however, it's important for developers to address these underlying concerns and build secure software. Skeptics will say "nothing is 100% safe," but it is in fact possible to design highly secure apps - when security is built into the software from the very beginning of the development process, using the right tools and set of requirements.

Here are five reasons why developers build insecure apps:

  1. Security Isn't a Priority - A 2012 comScore study found that half of all US developers don't consider security a priority. This is partly due to the fact that shipping a product late or with fewer features has more tangible consequences than software security. But it's also a problem of perception - developers (and executives) often take the attitude that insecure software doesn't pose that much of a risk to begin with, assuming that flaws can simply be patched later, as they're discovered. Tip: Software security needs to be part of the conversation from the very beginning of the process. Every development team managing sensitive data should have a security lead who is directly responsible for overseeing and streamlining security issues during the build process. A single vulnerability can cost $5,000 to detect and fix, and many apps have dozens of flaws when they first roll out.
  2. Code Reuse - Developers continue to write code in C/C++ because it's easy to reuse and extend, and because apps written in these languages often outperform those written in most other languages. But these unmanaged languages carry a number of security issues, including susceptibility to low-level memory flaws that are hard to eradicate. Building a real-world application free of known, preventable security defects is substantially harder in C/C++ than in a managed-language equivalent. Tip: Developers should avoid writing applications in C/C++ and instead use managed languages, such as C#, Java, Ruby or Python, or newer memory-safe languages such as Rust.
  3. Inadequate Security Requirements - Outside a few regulated industries, such as banking and healthcare, nothing requires developers to integrate security beyond superficial compliance with a generic list like the OWASP Top 10. As a result, development teams often run incomplete security checks that leave apps exposed. The problem with relying on the OWASP Top 10 or the OWASP Top 10 Mobile Risks is that they list broad categories of threats rather than specific, actionable flaws, so they aren't easy to remediate unless the developer knows how to analyze each vulnerability's relevance to her specific application. The Top 10 may also omit threats that are relevant to your app (for example, a Rails application may be susceptible to a mass assignment vulnerability, but you won't find that directly in the Top 10). Tip: While the OWASP Top 10 is a good start, developers should build an exhaustive set of security requirements based on the security weaknesses cataloged in the Common Weakness Enumeration.
  4. Over-Reliance on Vulnerability Scanners - There are a few problems with relying on scanners to gauge your app's level of security. First, the scanner can miss some important vulnerabilities like privilege escalation or insufficient authentication controls. Second, if you rely on a scanner, you're invariably going to be waiting until the end of the build process to incorporate security into the app. Both of these issues can leave your app vulnerable when it goes live. Tip: Don't wait until the end to check your app for security flaws. Instead, this should be part of the development process from the very beginning. Use scanning only to verify the work you've already done.
  5. Lack of Experience - Developers often make the mistake of thinking they know security because they're good at coding - and that security mistakes are only made by bad developers. In fact, programming competency isn't always sufficient to prevent security defects. If it were, then why would so many talented companies - Facebook, Google, Apple, Microsoft, etc. - produce vulnerable code? Tip: Software security requires highly specialized training. Developers should begin by reading the OWASP Developer Guide to better understand software security and regularly participate in industry training courses.
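The mass assignment risk mentioned in point 3 shows how a threat can be absent from the OWASP Top 10 yet trivial to guard against once it's on your requirements list. A minimal sketch of an allowlist guard in Python, assuming a simple dict-based update handler (all field names here are hypothetical):

```python
# Fields a user is explicitly allowed to set on their own profile.
ALLOWED_FIELDS = {"display_name", "email", "bio"}

def apply_profile_update(user, submitted):
    # Copy only allowlisted fields and silently drop everything else, so a
    # crafted request can't flip privileged attributes like "is_admin".
    updates = {k: v for k, v in submitted.items() if k in ALLOWED_FIELDS}
    return {**user, **updates}

user = {"id": 7, "display_name": "alice", "is_admin": False}
attack = {"display_name": "mallory", "is_admin": True}
print(apply_profile_update(user, attack))
# "display_name" changes, but "is_admin" stays False - it was never allowlisted
```

The key design choice is allowlisting (name what may change) rather than blocklisting (name what may not): a blocklist silently fails the moment a new privileged field is added to the model.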
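The tip in point 4 - build security checks into development rather than scanning at the end - can be as simple as exercising authorization logic with ordinary unit-style assertions, since missing access controls are exactly the kind of flaw scanners tend to miss. A hedged sketch in Python (the function and field names are hypothetical):

```python
def delete_report(report, current_user):
    # Enforce an ownership check before any destructive action; a scanner
    # probing from outside may never notice if this check is missing.
    if report["owner_id"] != current_user["id"]:
        raise PermissionError("not the owner")
    report["deleted"] = True
    return report

report = {"id": 1, "owner_id": 42, "deleted": False}
intruder = {"id": 99}
owner = {"id": 42}

# A unit-style security check that runs on every build, not at release time:
try:
    delete_report(report, intruder)
    print("FAIL: intruder was allowed to delete")
except PermissionError:
    print("ok: unauthorized delete blocked")

print(delete_report(report, owner)["deleted"])  # True
```

Checks like this cost minutes to write during development; the same flaw found by a post-release scan (or an attacker) costs far more to trace and fix.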