Three months into the remote work shift, most organizations have figured out the basics: VPNs, video calls, cloud-based collaboration tools. But there’s a less visible change that concerns me more — the rapid expansion and loosening of CI/CD pipelines to accommodate distributed teams, often without corresponding security review.
I’ve been consulting with several teams over the past few weeks, and a pattern keeps emerging: pipelines that were designed for a small team working from an office network are now exposed to a much broader set of access patterns, credentials, and integration points. The attack surface has grown, and the security posture hasn’t kept up.
The Credential Sprawl Problem
The most immediate risk is credential management. A typical CI/CD pipeline has access to an alarming number of secrets: cloud provider credentials, container registry tokens, database passwords, API keys for third-party services, SSH keys for deployment targets. These credentials are often stored in the CI system’s secret management — GitHub Secrets, GitLab CI variables, Jenkins credentials store — but the scope of who and what can trigger pipeline runs has expanded.
Before March, many teams had implicit security controls: code was pushed from known IP ranges, builds ran on on-premises Jenkins servers, deployment was gated by a few senior engineers who were physically present. Now, code is pushed from home networks, self-hosted runners are accessed over VPN, and the pressure to ship quickly has loosened change approval processes.
I reviewed one team’s GitHub Actions configuration last week and found production AWS credentials with AdministratorAccess being injected into every pull request build — including PRs from forks. That’s not a theoretical vulnerability; it’s an open door. Anyone who submits a PR can exfiltrate those credentials by adding a `run: echo $AWS_SECRET_ACCESS_KEY` step. (Don’t count on log masking to save you: encoding the value first, say by piping it through base64, defeats the masking.)
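The fix is to keep untrusted PR builds and credentialed deploys in separate workflows. A minimal sketch of that split (secret names, environment name, and the deploy script are illustrative; on GitHub, the `pull_request` event does not expose repository secrets to fork PRs, while `pull_request_target` does, so avoid the latter for untrusted code):

```yaml
# ci.yml: runs untrusted PR code with NO secrets available.
name: pr-build
on: pull_request
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm ci && npm test   # no cloud credentials in the environment

# deploy.yml (separate file): runs only on trusted refs, with scoped secrets.
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production       # gated by environment protection rules
    steps:
      - uses: actions/checkout@v2
      - run: ./deploy.sh
        env:
          # A narrowly scoped deploy identity, not AdministratorAccess
          AWS_ACCESS_KEY_ID: ${{ secrets.DEPLOY_AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.DEPLOY_AWS_SECRET_ACCESS_KEY }}
```

The point of the split is structural: even a malicious PR that rewrites `ci.yml` has no secrets to steal, because the secrets only exist in the deploy workflow, which never runs on untrusted refs.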
Supply Chain Attacks via Build Dependencies
The second vector that worries me is dependency resolution during builds. Most modern build pipelines pull dependencies from public registries — npm, PyPI, Maven Central, Docker Hub — as part of every build. In a secure network, you might have a proxy or artifact cache that provides some control. In the rush to enable remote builds, several teams I’ve talked to bypassed these caches because they were only accessible from the office network.
This means builds are now pulling directly from public registries, which exposes them to dependency confusion attacks, typosquatting, and compromised packages. Alex Birsan’s research formalizing dependency confusion hasn’t been published yet, but the underlying vulnerability — that private package names can be shadowed by public packages with higher version numbers — has been known for a while.
The mitigation is straightforward in principle: use a package proxy like Artifactory, Nexus, or even a simple npm/PyPI mirror, and configure your builds to pull exclusively from it. Pin your dependency versions. Use lock files. Verify checksums. Most teams know this but haven’t implemented it consistently across all build environments.
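As a concrete npm example (the proxy URL is a placeholder; substitute your Artifactory, Nexus, or Verdaccio endpoint), a build step that routes every install through the internal proxy and refuses anything the lock file doesn’t pin might look like:

```yaml
# Fragment of a build job. `npm ci` fails if package-lock.json is
# missing or out of sync, and verifies the integrity hashes it records.
steps:
  - uses: actions/checkout@v2
  - name: Install dependencies via internal proxy only
    run: npm ci --ignore-scripts
    env:
      # npm reads npm_config_* environment variables as config overrides;
      # this forces all resolution through the internal proxy for this step.
      npm_config_registry: https://artifacts.internal.example.com/npm/
```

The `--ignore-scripts` flag is a worthwhile extra: it stops install-time lifecycle scripts, which are a common payload delivery mechanism in compromised packages, from executing during dependency resolution.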
Self-Hosted Runners: The Forgotten Perimeter
If you’re running self-hosted CI runners — whether Jenkins agents, GitLab runners, or GitHub Actions self-hosted runners — you’ve effectively created compute resources that execute arbitrary code triggered by repository events. In an on-premises environment with proper network segmentation, the blast radius of a compromised runner is limited.
In a remote-work setup, runners are often provisioned in cloud environments with broader network access. They might have credentials to reach production infrastructure, internal APIs, or databases. A malicious or compromised pipeline step can use the runner as a pivot point into your infrastructure.
GitHub has documented the risks of self-hosted runners with public repositories, but the same principles apply to private repos if you don’t trust all contributors. The recommendation is to run builds in ephemeral, isolated environments — containers or VMs that are destroyed after each build. But implementing this properly requires investment in infrastructure that many teams haven’t made.
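A low-effort approximation, assuming GitHub Actions on a self-hosted runner with Docker available: run the job’s steps inside a throwaway container, so the build gets a fresh filesystem that is discarded when the job ends. This limits the blast radius but doesn’t eliminate it, since the runner process itself still needs network isolation:

```yaml
jobs:
  build:
    runs-on: [self-hosted, linux]
    # Steps execute inside this container, created per job and destroyed
    # afterwards; build artifacts and caches don't persist on the host.
    container:
      image: node:14-buster
    steps:
      - uses: actions/checkout@v2
      - run: npm ci && npm test
```

This is a stepping stone, not the destination: the full recommendation above is ephemeral runner hosts, where the entire VM is destroyed after each build, not just the container.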
Hardening Recommendations
Based on what I’ve seen across multiple teams, here’s a practical checklist for tightening your CI/CD security:
Credential scoping: Never inject production credentials into PR builds. Use environment-based secret scoping (GitHub’s environment protection rules, GitLab’s protected variables). Rotate credentials regularly — if you haven’t rotated since the office closed, do it now.
Build isolation: Run builds in ephemeral containers. Don’t reuse build environments across projects. If you’re using self-hosted runners, ensure they don’t have network access to production systems.
Dependency pinning: Pin all dependencies to specific versions with lock files. Use a private artifact proxy. Consider tools like Dependabot or Renovate for automated dependency updates with review.
Pipeline-as-code review: Treat your CI configuration files (`.github/workflows/*.yml`, `.gitlab-ci.yml`, `Jenkinsfile`) as security-critical code. Require review for changes. Use CODEOWNERS to restrict who can modify pipeline definitions.
Audit logging: Enable and monitor audit logs for your CI system. Know who triggered what build, what credentials were accessed, and what artifacts were produced. Most CI systems provide this; few teams actually look at the logs.
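For the pipeline-as-code item, the CODEOWNERS entry is a two-line change (org and team names here are placeholders; this only bites if branch protection requires code-owner review):

```
# .github/CODEOWNERS: changes under these paths require approval
# from the platform team before merge.
/.github/workflows/  @example-org/platform-team
Jenkinsfile          @example-org/platform-team
```

This turns "who can change the pipeline" from an unstated assumption into an enforced, auditable policy.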
My Take
CI/CD pipelines have become the circulatory system of modern software delivery. We’ve invested heavily in making them fast, reliable, and automated. We haven’t invested nearly enough in making them secure.
The remote work shift didn’t create these vulnerabilities — they were always there. But it removed several layers of implicit security (network perimeter, physical presence, slow pace of change) that were masking the underlying risks. The teams that had already adopted zero-trust principles for their build infrastructure are fine. The teams that relied on “well, it’s on the internal network” are now scrambling.
My advice: treat your CI/CD pipeline with the same security rigor you apply to production systems. Because in practice, it is a production system — one that has access to all your other production systems. The fact that it runs in the background and “just works” makes it easy to overlook. Don’t.
If you only do one thing this week, audit the credentials available to your PR builds. You might not like what you find.
