Accelerate Your SDLC With DevSecOps – DevOps.com
DevOps has been the answer to rising software development complexity, but the granularity and multiplicity of actors, technologies and environments bring added security requirements. Moving to DevSecOps will not only help with these requirements but also accelerate the software development life cycle (SDLC).
As development projects mature, more developers get involved, the code base grows and architecture becomes more complex; consequently, the SDLC slows down. This is why Agile, microservices and DevOps have emerged—to solve this specific problem by separating big teams into smaller ones and monoliths into more nimble components.
But this granularity and multiplicity of teams, work environments, technologies, services and repositories makes integrating security even more complex. In fact, developers now face multiple SDLCs, and often the gains achieved in velocity can be nullified by security issues. You want a process that works, and preferably, a process that works fast.
This is what DevSecOps attempts to solve; adding security into DevOps actually accelerates the SDLC. Here’s why and how.
The planning phase is key, as it answers essential questions at the time when architectural decisions are made. Some of these decisions relate to access control, network infrastructure and data security. For each, adding automation—and therefore, security—allows fast replication.
For access control, you can leverage infrastructure-as-code (IaC) tools (Terraform, for example) to easily define different groups, roles and permissions with code. By adding various plugins, you can even use your IaC tool to manage users on other platforms, as in GitHub. When you have all the groups, roles and permissions defined as code, you are only one commit away from access control updates.
If you need to onboard a new team member not only to, say, a specific AWS group but also to a GitHub group, instead of doing it manually in two places, you can do it “as-code” to both with minimal changes.
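The "as-code" onboarding flow above can be sketched as a diff between desired and current state. This is an illustrative Python stand-in, not Terraform itself; the group names and data shapes are hypothetical, and in practice an IaC tool would compute and apply this plan for you.

```python
# Sketch: derive membership changes from a declarative "as-code" config.
# Group names and data shapes here are hypothetical, for illustration only.

def plan_membership_changes(desired: dict, current: dict) -> dict:
    """Compare desired group membership (a config file kept in Git)
    against current membership and return the changes to apply."""
    plan = {}
    for group in set(desired) | set(current):
        want = set(desired.get(group, []))
        have = set(current.get(group, []))
        add, remove = sorted(want - have), sorted(have - want)
        if add or remove:
            plan[group] = {"add": add, "remove": remove}
    return plan

# One commit to the config updates both platforms in a single plan:
desired = {
    "aws:developers": ["alice", "bob", "carol"],
    "github:backend-team": ["alice", "carol"],
}
current = {
    "aws:developers": ["alice", "bob"],
    "github:backend-team": ["alice", "bob"],
}
print(plan_membership_changes(desired, current))
```

The point of the design is that the desired state lives in one reviewed file: adding "carol" there produces both the AWS and the GitHub change in the same plan.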
The complexity of organizations and the need for better security imply the need for more role-based access control, but automation tools exist to manage it, and they are less error-prone than manual processes.
When it comes to network infrastructure, you can also use IaC, but you will need to add a security layer on top of it to mitigate any errors in the code. As an example, you can use tfsec and terrascan on Terraform. These will scan your code statically and, based on predefined rules, generate alerts and give you advice on improving your security. If you prefer CloudFormation, use cfn-lint, which even allows you to define your own security rules.
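To make the idea of rule-based static scanning concrete, here is a minimal sketch of the kind of rule a tool like tfsec applies: flagging security group ingress rules open to the whole internet. The resource layout below loosely mimics Terraform's JSON representation but is simplified and hypothetical.

```python
# Sketch of a single static-analysis rule over parsed IaC resources.
# Real scanners ship hundreds of such rules; this one flags ingress
# rules whose CIDR block is open to the world.

def check_open_ingress(resources: list[dict]) -> list[str]:
    """Return a warning for every ingress rule open to 0.0.0.0/0."""
    warnings = []
    for res in resources:
        for rule in res.get("ingress", []):
            if "0.0.0.0/0" in rule.get("cidr_blocks", []):
                warnings.append(
                    f"{res['name']}: port {rule['from_port']} open to the world"
                )
    return warnings

resources = [
    {"name": "web_sg", "ingress": [{"from_port": 443, "cidr_blocks": ["0.0.0.0/0"]}]},
    {"name": "db_sg", "ingress": [{"from_port": 5432, "cidr_blocks": ["10.0.0.0/16"]}]},
]
print(check_open_ingress(resources))  # → ['web_sg: port 443 open to the world']
```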
When you have your infrastructure ready, the next big thing to think about is your data. You need to put data encryption in place to make sure data is securely stored at rest. You also need to add transport layer security (TLS) for the databases to secure the traffic in transit. Besides using static code analysis tools and ready-to-use services, you can even automate your own pipeline. This is very useful when you have multiple agile teams.
Let’s take S3 buckets as an example. You could build a generic solution where an S3 event triggers a serverless function that checks the bucket’s permissions. If a policy is violated, it can be fixed right away and the owner notified automatically.
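The core check such a serverless function could run might look like this sketch: inspecting a bucket policy for statements that grant access to everyone. The policy shape follows the standard AWS policy document format; the event wiring and the remediation calls (which would use boto3 in a real Lambda) are deliberately left out.

```python
import json

# Sketch of the policy check only. Flags Allow statements whose
# Principal is "*" (or {"AWS": "*"}), i.e., open to any caller.

def find_public_statements(policy_json: str) -> list[str]:
    """Return the Sids of Allow statements granting access to any principal."""
    policy = json.loads(policy_json)
    violations = []
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        if stmt.get("Effect") == "Allow" and principal in ("*", {"AWS": "*"}):
            violations.append(stmt.get("Sid", "<no sid>"))
    return violations

policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "PublicRead", "Effect": "Allow", "Principal": "*",
         "Action": "s3:GetObject", "Resource": "arn:aws:s3:::my-bucket/*"},
    ],
})
print(find_public_statements(policy))  # → ['PublicRead']
```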
During the development phase, you need to address three major security areas: Code repository security, continuous integration (CI) security and container image security.
Code repository security needs to address questions around repository access control—applying the principle of least privilege—while also checking that no repository is accidentally made public and that none contains sensitive elements, such as secrets.
With a multi-team, agile and microservices setup, you need automation to skip time-consuming and error-prone tasks but also to avoid slowing your SDLC to a crawl. We already mentioned git user management automation, but there are also a number of tools for static code analysis and secrets detection. You can even include this detection in your CI pipeline and integrate it with your alerting workflow to eliminate the risk from the origin.
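As a sketch of what secrets detection does under the hood, the snippet below matches a couple of well-known token patterns against file content. Production scanners such as gitleaks ship hundreds of curated rules; these two patterns are illustrative only.

```python
import re

# Minimal sketch of CI secrets detection: AWS access key IDs and
# generic hardcoded "password = ..." assignments.

PATTERNS = {
    "aws-access-key-id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "hardcoded-password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.I),
}

def scan_text(text: str) -> list[str]:
    """Return the names of every rule that matches the given file content."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

sample = 'aws_key = "AKIAABCDEFGHIJKLMNOP"\npassword = "hunter2"\n'
print(scan_text(sample))  # → ['aws-access-key-id', 'hardcoded-password']
```

Run as a CI step, a non-empty result would fail the pipeline and trigger the alerting workflow mentioned above.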
CI security is often overlooked, but it is a very important element; if your CI is not set up correctly, it can leak data. Access to your CI needs to be controlled to grant the proper permissions and access rights. Doing this manually will definitely impact your SDLC velocity so, again, there’s a need to automate. You can integrate your CI with your Git or company single sign-on (SSO) solution for authentication. For authorization, you may rely on the group information provided by those SSO services.
Secrets are also critical for CI. Modern CI pipelines allow you to store secrets at the project level. Once configured, you can use them directly in the build without putting the sensitive data in clear text. Even if you try to print a variable containing sensitive data, its value will be masked. For instance, to use secrets directly from a Jenkins pipeline, you can store your secrets in a Kubernetes cluster and use the Kubernetes Credentials Provider or the HashiCorp Vault plugin.
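The masking behavior described above can be sketched in a few lines: before a line reaches the build log, every known secret value is replaced. This is an illustrative stand-in for what CI systems do internally, not any specific CI's API.

```python
# Sketch of CI log masking: replace known secret values before logging.

def mask(line: str, secrets: list[str]) -> str:
    """Replace each secret value with asterisks before the line is logged."""
    for secret in secrets:
        if secret:
            line = line.replace(secret, "****")
    return line

secrets = ["s3cr3t-token"]
print(mask("Authorization: Bearer s3cr3t-token", secrets))
# → Authorization: Bearer ****
```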
Now, let’s talk about containers. While containers have simplified the deployment, scaling and failover capabilities within the SDLC, they also introduce some challenges. If the container image has some known vulnerabilities, your containers could be exploited, and the integrity of the whole machine—or even the entire system—can be compromised. Automation is, again, the go-to solution to ensure the images you build don’t contain vulnerabilities. Some CLI-based tools like Trivy can be easily integrated into your CI pipelines.
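A typical way to wire such a scanner into CI is to parse its report and fail the build on severe findings. The sketch below gates on a report whose shape loosely follows Trivy's JSON output ("Results" → "Vulnerabilities" → "Severity"), simplified for illustration; a real pipeline would run the scanner and feed its actual output in.

```python
# Sketch of a CI gate over a container-image scan report: fail the
# build when any HIGH or CRITICAL vulnerability is present.

def should_fail_build(report: dict, blocking=frozenset({"HIGH", "CRITICAL"})) -> bool:
    findings = [
        vuln["Severity"]
        for result in report.get("Results", [])
        for vuln in result.get("Vulnerabilities", []) or []
    ]
    return any(sev in blocking for sev in findings)

report = {"Results": [{"Vulnerabilities": [
    {"VulnerabilityID": "CVE-2023-0001", "Severity": "LOW"},
    {"VulnerabilityID": "CVE-2023-0002", "Severity": "CRITICAL"},
]}]}
print(should_fail_build(report))  # → True
```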
Automation will significantly reduce your operational overhead. No more managing machines (physical or virtual), no more configuration management, no more patching. If everything is automated, that means less toil—and less toil means your SDLC is not only more secure but also faster.
Kubernetes (container orchestration) is the current solution to streamline integration and deployment in our agile world. But Kubernetes is a complex system and comes with its own security challenges, the most important of which is secrets management.
The worst thing you could do is define secrets in YAML files and store them in Git repos. You could use Kubernetes’ secrets as the single source of truth, but what about secrets created by people who don’t need access to Kubernetes? This is where a secrets manager is a good choice: It acts as a single source of truth for your sensitive data. You get an automated process that not only eliminates manual toil but also improves security and speed.
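A thin client over that single source of truth might look like this sketch: all reads go through one fetch function and are cached briefly so that rotated secrets still propagate. The backend here is a stand-in dictionary; in practice the fetch callable would talk to Vault or AWS Secrets Manager.

```python
import time

# Sketch of a secrets-manager client with a short-lived cache. The
# backend dict below is a hypothetical stand-in for a real secrets store.

class SecretsClient:
    def __init__(self, fetch, ttl_seconds=60):
        self._fetch = fetch          # callable: name -> secret value
        self._ttl = ttl_seconds
        self._cache = {}             # name -> (value, fetched_at)

    def get(self, name: str) -> str:
        cached = self._cache.get(name)
        if cached and time.monotonic() - cached[1] < self._ttl:
            return cached[0]         # fresh enough: serve from cache
        value = self._fetch(name)    # otherwise hit the source of truth
        self._cache[name] = (value, time.monotonic())
        return value

backend = {"db/password": "p@ss"}    # stand-in for the real secrets store
client = SecretsClient(backend.__getitem__)
print(client.get("db/password"))  # → p@ss
```

Because nothing reads secrets except through the client, there is exactly one place to audit, rotate and revoke.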
Incident response playbooks are a crucial component of DevSecOps. Playbooks are a set of generalized and summarized processes that provide a consistent way of handling issues for quicker response and resolution. Learning from every incident is also part of the playbook, whether it’s a security issue or a new vulnerability.
The content can include everything from runbooks and checklists to templates, training exercises, security attack scenarios and simulation drills. The goal is simple: A set of policies, processes and practices for quickly responding to and resolving unplanned outages, thus helping teams fix online issues more quickly to reduce the size of the blast radius.
We have only scratched the surface of security automation capabilities and their potential impact on the SDLC. But it’s clear that adding (automated) security into DevOps has a positive impact on the overall velocity of the SDLC and, of course, on its security posture.
© 2022 · Techstrong Group, Inc. All rights reserved.