Deployment

How to Set Up CI/CD Pipeline from Scratch

Written by Jack Williams. Reviewed by George Brown. Updated on 4 March 2026.

Overview and goals of a CI/CD pipeline

A CI/CD pipeline is a set of automated steps that take code from a developer’s machine to running software in production. The goal is to make deliveries fast, repeatable, and low-risk. Continuous Integration (CI) focuses on building and testing code frequently. Continuous Delivery and Continuous Deployment (CD) focus on delivering and deploying releases safely.

A clear pipeline reduces human mistakes, speeds feedback, and increases confidence in releases. It also makes it easier to track what changed, who changed it, and how to undo it if needed.

Prerequisites and local environment setup

Before building a pipeline, set up a predictable local environment. This reduces “works on my machine” problems.

Install and configure:

  • Git for source control.
  • A code editor and language runtime (Node, Python, Java, etc.).
  • Docker to standardize builds and test environments.
  • A package manager (npm, pip, Maven) for dependencies.
  • A CI runner or local emulator for pipeline steps (for example, GitLab Runner, or act for running GitHub Actions workflows locally).

Best practices for local setup:

  • Use the same runtime versions as CI. Version managers (nvm, pyenv, sdkman) help.
  • Containerize local dev with Docker Compose when services are needed.
  • Store environment variables in .env.example and never commit secrets.
  • Provide a Makefile or npm scripts for common tasks so all devs run the same commands.
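A minimal Makefile sketch of such common-task entry points (targets and commands assume a Node project and are illustrative; recipe lines must start with a tab):

```make
# Illustrative Makefile; target and command names are assumptions.
.PHONY: install lint test build up

install:
	npm ci

lint:
	npm run lint

test:
	npm test

build:
	npm run build

# Run supporting services in containers for parity with CI.
up:
	docker compose up -d
```

With this in place, "make test" means the same thing on every laptop and in CI.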

Choosing tools and technology stack

Pick tools that match your team’s skills and your deployment targets. No single tool fits every case.

Consider:

  • CI server: GitHub Actions, GitLab CI, Jenkins, CircleCI, Azure Pipelines.
  • Artifact storage: Nexus, JFrog Artifactory, GitHub Packages.
  • Container registry: Docker Hub, Amazon ECR, Google Artifact Registry (successor to Container Registry).
  • Infrastructure as Code: Terraform, CloudFormation, Pulumi.
  • Configuration management: Ansible, Chef, Salt.
  • Monitoring and logging: Prometheus, Grafana, ELK (Elasticsearch, Logstash, Kibana), Loki.

Choose based on these criteria:

  • Integration with your version control and cloud provider.
  • Community and plugin ecosystem.
  • Ease of maintenance and cost.
  • Security features (secrets, audit logs, RBAC).

Version control and branching strategy

Version control is the backbone of CI/CD. Use a clear branching model so everyone knows how and when code lands.

Common strategies:

  • Trunk-based development: short-lived feature branches, frequent merges to main/trunk. Works well with fast CI/CD.
  • GitFlow: feature, develop, release, and hotfix branches. Useful for strict release cycles.
  • Feature branching with pull requests: combine code review with automated checks before merging.

Best practices:

  • Protect main branches with required passing CI checks and reviews.
  • Use descriptive commit messages and small, focused pull requests.
  • Add automated checks in PRs: linting, unit tests, security scans.
  • Tag releases with semantic versioning (vMAJOR.MINOR.PATCH).
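For example, cutting a semantic-version tag is a two-command affair. The snippet below creates a throwaway repo just to demonstrate; the repo name, identity, and version are illustrative:

```shell
# Sketch: cut a release tag with semantic versioning.
set -eu
tag="v1.4.2"                          # vMAJOR.MINOR.PATCH
git init -q demo-repo && cd demo-repo
git config user.email ci@example.com  # throwaway identity for the demo
git config user.name "CI Bot"
git commit -q --allow-empty -m "release: $tag"
git tag -a "$tag" -m "Release $tag"   # annotated tags record tagger and date
git tag --list                        # prints: v1.4.2
```

Annotated tags (the -a flag) are preferable to lightweight tags for releases because they carry the tagger and date, which helps auditing.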

Build automation and dependency management

Automate builds so everyone gets the same output from the same inputs.

Key points:

  • Use build tools (Gradle, Maven, npm, Make) to produce reproducible artifacts.
  • Lock dependency versions with lock files (package-lock.json, Pipfile.lock).
  • Cache dependencies in CI to speed builds.
  • Build inside containers for environment consistency.

Practical steps:

  • Create a build script that runs in CI and locally (npm run build, mvn package).
  • Fail fast on build errors.
  • Store build metadata: commit SHA, build number, and time inside the artifact for traceability.
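Embedding build metadata can be a small script run at the end of the build; the file name, JSON shape, and environment variables below are assumptions:

```shell
# Sketch: write build metadata into the artifact for traceability.
# GIT_COMMIT and BUILD_NUMBER would normally come from the CI server.
set -eu
commit_sha="${GIT_COMMIT:-$(git rev-parse HEAD 2>/dev/null || echo unknown)}"
build_number="${BUILD_NUMBER:-0}"
built_at="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
cat > build-info.json <<EOF
{
  "commit": "$commit_sha",
  "build": "$build_number",
  "builtAt": "$built_at"
}
EOF
cat build-info.json
```

Ship build-info.json inside the artifact (or as an image label) so any running deployment can be traced back to an exact commit.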

Continuous Integration: configuring the CI server

CI should automatically build and test every change. Configure pipelines to give fast feedback.

Essentials:

  • Run CI on pull requests and commits to main branches.
  • Keep pipeline stages simple and fast: install -> build -> test -> static analysis.
  • Parallelize tasks where possible: run tests in parallel or split by test suite.
  • Use pipeline caching and artifacts to avoid re-downloading dependencies.

Example pipeline flow:

  • Checkout code
  • Install dependencies
  • Lint and static analysis
  • Unit tests
  • Build artifact and push to artifact store (if tests pass)

Tip: Start with unit tests only in the quick path. Run heavier tests in nightly or gated stages.
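As a sketch, assuming a Node.js project hosted on GitHub, the quick-path flow above maps onto a GitHub Actions workflow like the following (script names follow npm conventions and are assumptions):

```yaml
# .github/workflows/ci.yml — illustrative quick path: install -> lint -> test -> build.
name: ci
on:
  pull_request:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm          # dependency caching to speed repeat builds
      - run: npm ci
      - run: npm run lint
      - run: npm test
      - run: npm run build
```

Heavier suites (integration, E2E) would live in a separate workflow triggered nightly or on release branches.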

Automated testing: unit, integration, and end-to-end

Testing is the safety net that makes automation trustworthy. Use a test pyramid: many unit tests, fewer integration tests, and even fewer end-to-end (E2E) tests.

Unit tests:

  • Fast and isolated. Run on each commit.
  • Mock external services.
  • Validate logic and edge cases.

Integration tests:

  • Test interactions between modules or with real databases/services.
  • Run in CI but accept that they may be slower.
  • Use test databases or ephemeral containers.

End-to-end tests:

  • Simulate real user flows in a production-like environment.
  • Run less frequently (nightly, pre-release), or run a small subset on each PR.
  • Use test automation tools like Cypress, Playwright, Selenium.

Test data and environment tips:

  • Seed databases with known data.
  • Run tests in containers or separate ephemeral environments.
  • Clean up resources after tests to avoid noise and cost.
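As one illustration of the ephemeral-environment advice above, a throwaway Postgres for integration tests might be declared with Docker Compose (image, names, and port are assumptions):

```yaml
# docker-compose.test.yml — illustrative ephemeral database for integration tests.
services:
  test-db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: test     # throwaway credentials, never production ones
      POSTGRES_DB: app_test
    ports:
      - "5433:5432"
    tmpfs:
      - /var/lib/postgresql/data  # data lives in memory and vanishes with the container
```

Bring it up with docker compose -f docker-compose.test.yml up -d, run the integration suite, then docker compose -f docker-compose.test.yml down -v to clean up.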

Artifact storage, versioning, and promotion

Store build artifacts to avoid rebuilding and to enable rollbacks.

What to store:

  • Binaries, container images, and release manifests.
  • Metadata: version, commit SHA, build number, dependencies.

Artifact repository practices:

  • Use a single source of truth for artifacts.
  • Enforce immutability: once an artifact is published for a release, do not overwrite it.
  • Use semantic versioning and include metadata labels for traceability.

Promotion workflow:

  • Promote artifacts through stages: snapshot -> staging -> production.
  • Tests and approvals should gate promotion.
  • Keep a clear mapping from deployed artifact to version control commit.
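One way to make the artifact-to-commit mapping concrete is to bake the version and short commit SHA into the tag itself. A minimal sketch (registry, app, and version values are placeholders):

```shell
# Sketch: derive an immutable, traceable image tag from version + commit SHA.
set -eu
REGISTRY="registry.example.com"
APP="myapp"
VERSION="1.4.2"
COMMIT_SHA="9fceb02d0ae598e95dc970b74767f19372d61af8"
SHORT_SHA="$(printf '%.7s' "$COMMIT_SHA")"      # first 7 characters
IMAGE="${REGISTRY}/${APP}:${VERSION}-${SHORT_SHA}"
echo "$IMAGE"    # prints: registry.example.com/myapp:1.4.2-9fceb02
# Publishing would then be (placeholders, not run here):
# docker build -t "$IMAGE" .
# docker push "$IMAGE"
```

Because the tag encodes both the release version and the commit, promoting the same tag from staging to production guarantees the bits never change between stages.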

Continuous Delivery and Deployment strategies

Continuous Delivery means artifacts are always ready to deploy. Continuous Deployment automates deployment to production after passing checks.

Deployment strategies:

  • Rolling updates: update instances gradually, reducing downtime.
  • Blue-green deployment: switch traffic between two identical environments.
  • Canary releases: send a small share of traffic to the new version first.
  • Feature flags: deploy code safely by toggling features on/off without redeploying.

Choosing a strategy:

  • Use blue-green for minimal downtime and quick rollback.
  • Use canaries for gradual risk exposure and monitoring.
  • Use feature flags to decouple deploy from release decisions.

Rollback procedures:

  • Automate rollbacks by redeploying the last known-good artifact.
  • Keep database migrations backward-compatible when possible.
  • Document manual rollback steps as runbooks in case automation fails.
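The automated-rollback idea can be sketched as a health gate: probe the service after deploy, and redeploy the last known-good artifact if it reports a server error. Everything here (function name, deploy script) is illustrative:

```shell
# Sketch: decide whether to roll back based on a health probe's HTTP status.
set -eu
check_and_rollback() {
  status="$1"  # in a real pipeline: curl -s -o /dev/null -w '%{http_code}' "$HEALTH_URL"
  if [ "$status" -ge 500 ]; then
    echo "unhealthy ($status): redeploying last known-good artifact"
    # ./deploy.sh "$LAST_GOOD_VERSION"   # placeholder for your real deploy tooling
    return 1
  fi
  echo "healthy ($status)"
}
check_and_rollback 200   # prints: healthy (200)
```

In practice this check would run several times over a soak window, not just once, before declaring the deployment healthy.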

Infrastructure as Code and environment provisioning

Treat infrastructure as code to make environments reproducible and versioned.

Tools and practices:

  • Use Terraform, CloudFormation, or Pulumi to define infrastructure.
  • Keep IaC in the same repository or a parallel repo with versioning.
  • Use modules or reusable templates to avoid duplication.
  • Store state securely (remote state backends with locking, like Terraform Cloud or S3 with DynamoDB locks).
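A minimal Terraform sketch of the remote-state setup described above (bucket, table, region, and module names are assumptions):

```hcl
# Illustrative Terraform: remote state in S3 with DynamoDB locking.
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"
    key            = "app/prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"   # prevents concurrent state mutation
  }
}

module "web" {
  source      = "./modules/web"
  environment = var.environment   # "dev", "staging", or "production"
}
```

Keeping state remote and locked means two engineers (or two CI jobs) cannot corrupt the state by applying at the same time.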

Environment management:

  • Create separate configs for dev, staging, and production.
  • Use variables and secrets management, not hard-coded values.
  • Test IaC changes in a non-production environment first.

Database and stateful services:

  • Use careful migration strategies: backward-compatible migrations, feature toggles, and migration batching (apply changes in small, reversible steps).
  • Backup and test restore procedures frequently.
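Backward-compatible migrations usually follow an expand/contract pattern: add the new schema alongside the old, backfill, and only later remove the old path. A hedged SQL sketch (table and column names are invented):

```sql
-- Illustrative expand/contract migration.
-- Step 1 (expand): add the new column as nullable so old code keeps working.
ALTER TABLE users ADD COLUMN email_verified BOOLEAN;

-- Step 2 (backfill): run separately, in batches, outside the deploy path:
-- UPDATE users SET email_verified = FALSE WHERE email_verified IS NULL;

-- Step 3 (contract): ship only after every deployed version reads the new column:
-- ALTER TABLE users ALTER COLUMN email_verified SET NOT NULL;
```

Because each step is compatible with the code running before and after it, any step can be rolled back without breaking the application.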

Monitoring, logging, and rollback procedures

After deployment, you need to know if the system works and be ready to act.

Monitoring:

  • Track key metrics: error rates, request latency, throughput, resource usage.
  • Define SLOs (service level objectives) and alert thresholds.
  • Use tools like Prometheus and Grafana for metrics and dashboards.
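As an example of an alert threshold, a Prometheus alerting rule on 5xx error rate might look like this (metric name, threshold, and labels are assumptions):

```yaml
# Illustrative Prometheus alerting rule file.
groups:
  - name: availability
    rules:
      - alert: HighErrorRate
        expr: sum(rate(http_requests_total{status=~"5.."}[5m]))
              / sum(rate(http_requests_total[5m])) > 0.05
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "5xx error rate above 5% for 10 minutes"
```

The "for: 10m" clause is what keeps a brief blip from paging anyone; tune it against your SLO.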

Logging:

  • Centralize logs (ELK, Loki) and standardize log formats with request IDs.
  • Correlate logs, traces, and metrics for fast troubleshooting.

Tracing:

  • Use distributed tracing (Jaeger, Zipkin, OpenTelemetry) to follow requests across services.

Alerts and runbooks:

  • Tune alerts to avoid noise. Use severity levels and paging only for critical incidents.
  • Provide runbooks with step-by-step actions for common failures.
  • Practice incident drills and post-incident reviews.

Rollback and remediation:

  • Have automated rollback triggers when critical health checks fail.
  • Prefer automated canary rollbacks based on defined metrics.
  • Keep a human-reviewed escalation path for complex failures.

Security, compliance, and maintenance best practices

Security must be part of the pipeline from day one.

Secrets and credentials:

  • Never store secrets in code. Use secrets managers: HashiCorp Vault, AWS Secrets Manager, Azure Key Vault.
  • Grant least privilege and rotate credentials regularly.

Dependency and image scanning:

  • Run dependency vulnerability scans (Snyk, Dependabot, Mend, formerly WhiteSource).
  • Scan container images for vulnerabilities before pushing to registry.
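For GitHub-hosted repositories, dependency scanning can start as simply as enabling Dependabot with a small config (ecosystem and schedule are illustrative):

```yaml
# .github/dependabot.yml — illustrative weekly dependency update checks.
version: 2
updates:
  - package-ecosystem: npm
    directory: "/"
    schedule:
      interval: weekly
```

Dependabot then opens pull requests for vulnerable or outdated dependencies, which flow through the same CI checks as any other change.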

Access control and auditing:

  • Use RBAC for CI, cloud, and artifact repositories.
  • Keep audit logs to track who deployed what and when.

Compliance and policies:

  • Automate policy checks for infrastructure and code (OPA, policy-as-code).
  • Keep evidence of tests and scans for audits.

Maintenance habits:

  • Update base images and dependencies regularly.
  • Keep CI runners and plugins up to date.
  • Review pipeline performance and failures monthly.

Final checklist

Before calling a pipeline production-ready, verify:

  • Version control protections and branching rules are in place.
  • CI builds and unit tests run on every PR.
  • Integration and E2E tests run regularly with proper cleanup.
  • Artifacts are stored and immutable, with promotion gates.
  • Deployment strategy supports safe rollback.
  • Infrastructure is defined as code and tested.
  • Monitoring, alerts, and runbooks exist for production.
  • Secrets, scanning, and RBAC are enforced.

Closing note: Start small and iterate. Deliver incremental improvements to the pipeline so teams gain confidence and deploy more often with less risk.

About Jack Williams

Jack Williams is a WordPress and server management specialist at Moss.sh, where he helps developers automate their WordPress deployments and streamline server administration for crypto platforms and traditional web projects. With a focus on practical DevOps solutions, he writes guides on zero-downtime deployments, security automation, WordPress performance optimization, and cryptocurrency platform reviews for freelancers, agencies, and startups in the blockchain and fintech space.