Pushing Left, Like a Boss: Part 2 — Security Requirements
As previously published on my blog, SheHacksPurple.
In the previous article in this series we discussed why ensuring the security of software is an elusive task; application security is hard to achieve given how the InfoSec and software development industries, and the education system(s) that feed them, currently work. We talked about the importance of starting security activities early in the SDLC and about formalizing them as part of your process. But what ARE these activities? How do they work, and when do we do what? That, dear reader, is what this article is about.
As you recall from the previous article, the system development life cycle generally looks like the image below:
Whether you are doing Agile or Waterfall, or have a DevOps culture at your office, you always need to know what you are building (requirements), you need a plan (design), you need to code it (the fun part), testing is obviously a must, and then you release it out into the wild (hopefully you also maintain and monitor it, which is all part of the “release” phase). Each one of these phases should involve security activities. Let’s look a little deeper, shall we?
When writing requirements there will always be security questions, such as: does the application contain sensitive or personally identifiable information (PII)? Where and how is the data being stored? Will this application be available to the public (Internet) or internally only (intranet)? Does this application perform sensitive or important tasks (such as transferring money, unlocking doors, or delivering medicine)? Does this application perform any risky software activities (such as allowing users to upload files or other data)? What level of availability do you need: 99.999% uptime? These and many more are the questions that security professionals should be asking when assisting with requirements gathering and analysis.
Here is a list of default security requirements that I would suggest for most software development projects:
- Encrypt all data at rest (while in the database)
- Encrypt all data in transit (on its way to and from the user, the database, an API, etc)
- Trust no one: validate and sanitize all data, even from your own database
- Encode (and escape if need be) all output (see the encoding sketch after this list)
- Scan all libraries and third-party components for vulnerable components before use, and regularly after use (new vulnerabilities and versions are released all the time). To do this you can use any one of the following tools: OWASP Dependency Check, Snyk, Black Duck, etc.
- Use all appropriate security headers (see the headers sketch after this list)
- Hash and salt all passwords. Make the salt at least 28 characters (see the hashing sketch after this list).
- Only allow your site to be accessible via HTTPS. Redirect from HTTP to HTTPS (the headers sketch after this list shows one way to do this).
- Ensure you are using the latest version of TLS for encryption (currently 1.2)
- Never hardcode anything. Ever.
- Never put sensitive information in comments, ever. This includes connection strings.
- Use all the security features within your framework, for instance session management features or input sanitization functions; never write your own
- Use only the latest version of your framework of choice, and keep it up to date
- If performing a file upload, ensure you are following the advice from OWASP for this highly risky activity. This includes scanning all uploaded files with a scanner such as AssemblyLine, available for free from the Communications Security Establishment of Canada (CSE).
- Ensure all errors are logged (but not any sensitive information), and if any security errors happen, trigger an alert
- All sanitization must be performed server-side, using a whitelist (not blacklist) approach (see the allow-list sketch after this list)
- Security testing must be performed on your application before it is publicly released
- Threat modelling must be performed on this application
- Code review (specifically of security functions) must be performed on this application
- If the application errors, it must catch all errors and fail safe or closed (never fail into an unknown state; see the fail-closed sketch after this list)
- Specifics on role-based authorization
- Specifics on what authentication methods will be used. Will you use Active Directory? ASP.NET Core Identity? There are many options, and it’s a good idea to ensure whatever you choose works with how you are managing identity for your enterprise and/or other apps
- Use only parameterized queries (see the query sketch after this list)
- Forbid passing variables of any importance in the URL. For example, you can pass which language to display (“en”, “fr”, “es”), but never a user ID, bank account number, or anything else where a tampered value would matter to your application.
- Ensure your application enforces least privilege, especially in regard to accessing the database or APIs.
- Allow users to cut and paste into the password field, but disable password autocomplete features
- Disable caching on pages that contain sensitive information
- Ensure passwords for your application’s users are long, but not necessarily complex. The longer the better; encourage the use of passphrases. Do not make users change their passwords after a certain amount of time, unless a breach is suspected. Verify that new users’ passwords have not previously appeared in a breach by comparing SHA-1 hashes against the HaveIBeenPwned API service (see the breach-check sketch after this list).
- All connection strings, certificates, passwords and secrets must be kept in a secret store such as Azure Key Vault, or something similar.
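To make a few of these requirements concrete, the sketches below show one possible implementation of each. They are all in Python; the language, the libraries, and every name in them are my own illustrative assumptions, so adapt them to your stack rather than copying them as-is. First, output encoding: the standard library’s html module converts markup characters into entities so that user-supplied input is displayed as text instead of being interpreted by the browser.

```python
import html

def render_comment(user_comment: str) -> str:
    """Encode user-supplied text before writing it into an HTML page."""
    # html.escape turns <, >, &, and quotes into HTML entities so the
    # browser renders them as text instead of interpreting them as markup.
    return "<p>" + html.escape(user_comment, quote=True) + "</p>"

# A script-injection attempt comes out harmless:
print(render_comment('<script>alert("xss")</script>'))
# -> <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```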
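Next, the HTTP-to-HTTPS redirect and security headers, sketched here with Flask (Flask is just an assumption for illustration; every major framework has equivalent request and response hooks). The header values shown are common starting points, not a definitive set; check the OWASP Secure Headers project for what fits your application.

```python
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    # Redirect any plain-HTTP request to its HTTPS equivalent.
    if not request.is_secure:
        return redirect(request.url.replace("http://", "https://", 1), code=301)

@app.after_request
def add_security_headers(response):
    # Illustrative defaults; tune each header to your application's needs.
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    response.headers["Content-Security-Policy"] = "default-src 'self'"
    response.headers["X-Content-Type-Options"] = "nosniff"
    response.headers["X-Frame-Options"] = "DENY"
    response.headers["Referrer-Policy"] = "no-referrer"
    return response

@app.route("/")
def index():
    return "Hello, secure world!"
```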
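For hashing and salting passwords, a minimal sketch using only the standard library. PBKDF2 is shown here for portability; a dedicated password-hashing algorithm such as bcrypt or Argon2 (via a vetted library) is an equally good or better choice.

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # a high iteration count slows brute-force attempts

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A 32-byte random salt comfortably exceeds the 28-character minimum.
    salt = os.urandom(32)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest  # store both alongside the user record

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, stored)
```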
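Allow-list (whitelist) validation means defining exactly what good input looks like server-side and rejecting everything else, rather than trying to enumerate every bad input. The field names and patterns below are made up for illustration.

```python
import re

# Define exactly what good input looks like; reject everything else.
USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,30}")
ALLOWED_LANGUAGES = {"en", "fr", "es"}

def validate_username(value: str) -> str:
    if not USERNAME_RE.fullmatch(value):
        raise ValueError("invalid username")
    return value

def validate_language(value: str) -> str:
    # An allow-list of known-good values, not a blacklist of bad ones.
    if value not in ALLOWED_LANGUAGES:
        raise ValueError("unsupported language")
    return value
```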
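Failing closed means any unexpected error produces the safest outcome, in this case denying access, while the security error is logged so it can trigger an alert. Here `lookup_permissions` is a hypothetical helper standing in for your real authorization logic.

```python
import logging

logger = logging.getLogger(__name__)

def lookup_permissions(user_id: str, resource: str) -> bool:
    # Hypothetical stand-in for your real authorization logic.
    raise NotImplementedError

def is_authorized(user_id: str, resource: str) -> bool:
    try:
        return lookup_permissions(user_id, resource)
    except Exception:
        # Log the security error (no sensitive details) so it can feed
        # your alerting, then deny access: fail closed, never open.
        logger.exception("authorization check failed for resource %s", resource)
        return False
```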
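Parameterized queries keep data separate from the SQL text, so user input can never change the structure of the query. sqlite3 is used below only because it ships with Python; the same placeholder pattern exists in every database driver and ORM.

```python
import sqlite3

def find_user(conn: sqlite3.Connection, email: str):
    # The ? placeholder sends the value separately from the SQL text,
    # so attacker-controlled input can never alter the query structure.
    row = conn.execute(
        "SELECT id, name FROM users WHERE email = ?", (email,)
    ).fetchone()
    return row

# Never build queries with string concatenation or f-strings:
# conn.execute(f"... WHERE email = '{email}'")  # vulnerable to injection
```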
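Finally, the breach check. The HaveIBeenPwned range API uses k-anonymity: you send only the first five characters of the password’s SHA-1 hash and compare the returned suffixes locally, so the password itself never leaves your server (the endpoint and response format shown match the service’s public documentation at the time of writing).

```python
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # Only the five-character prefix is sent; the full hash stays local.
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode()
    # Each response line is "HASH-SUFFIX:COUNT".
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if breach_count("P@ssw0rd") > 0:
    print("This password has appeared in a breach; please pick another.")
```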
Depending upon what your application does, you may want to add more requirements, or remove some. The point of this article is to get you thinking about security while you are writing up your requirements. If developers know from the beginning that they need to adhere to the above requirements, you are already on your way to creating more secure software.
Up next in part 3 we will discuss secure design.