Continuous delivery - Top 10 anti-patterns


Top 10 Continuous Delivery Anti-Patterns (2025)

1. “Pipeline Theater”
Automated steps that look like CD but don’t enable safe, fast releases.
e.g., “We have a pipeline!” but:
• Manual gate between every stage
• Tests take 4+ hours
• No rollback capability
Creates false confidence. Teams think they’re doing CD while still blocked, slow, and high-risk.
CD is about outcomes (lead time, reliability), not tooling.
2. Monorepo + Microservices = Mega-Pipeline
One giant pipeline for 50+ services — changes to service-a trigger a full rebuild of all of them.
Wastes resources, causes queueing, discourages commits → kills trunk-based flow.
Optimize for small batches & fast feedback.
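
A minimal sketch of the alternative, as a hypothetical GitHub Actions workflow (the repository layout and build command are assumptions): each service gets its own workflow, triggered only when files under that service’s directory change, so a commit to service-a never queues builds for the other services.

# Hypothetical per-service workflow: only changes under services/service-a/ build service-a.
name: service-a-ci
on:
  push:
    branches: [main]
    paths:
      - "services/service-a/**"
  pull_request:
    paths:
      - "services/service-a/**"
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # assumed build entry point for the service
      - run: make -C services/service-a build test
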
3. GitOps ≠ CD
Using Argo CD/Flux to sync manifests — but builds, tests, and approvals still happen outside the pipeline.
e.g., “We GitOps!” but:
• JAR built on a dev laptop
• No automated quality gates
• Image tag = latest
Confuses deployment sync with delivery. You’re doing continuous deployment of unvalidated artifacts.
Build once, validate thoroughly, promote immutably.
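
As a minimal sketch of “promote immutably” (registry and names are illustrative, digest truncated): the GitOps repo references the image by digest, and the digest is written by the pipeline only after the artifact has passed its quality gates.

# Hypothetical Deployment in the GitOps repo: the digest below is updated by CI
# after build + tests pass, so Argo CD/Flux only ever syncs validated artifacts.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          # never :latest; digest truncated for readability
          image: registry.example.com/myapp@sha256:abc123
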
4. “Environment-as-a-Phase”
Dev → QA → Staging → Prod as mandatory sequential gates, with:
• Manual testing required in each
• No parallelism
• No skipping, even for hotfixes
Creates bottlenecks and delays, and prevents release on demand — the core goal of CD.
Environments are deployment targets, not quality gates.
5. Test Ice Cream Cone
Heavy reliance on slow, flaky UI/E2E tests (top), few unit/integration tests (bottom).
e.g., 80% UI tests, 5% unit tests
Feedback takes hours; tests fail for infra reasons; devs stop trusting tests → bypass the pipeline.
Fast, reliable feedback requires pyramid-shaped testing.
6. Secrets in Git / Config in Code
DB passwords, API keys, or env-specific config baked into source or the Docker image.
Breaks immutability, enables env drift, violates security/compliance (SOC 2, ISO 27001).
Configuration externalized; same artifact everywhere.
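
A minimal sketch of the externalized alternative (resource names are hypothetical): the container section of the Deployment stays identical in every environment, and per-environment values are injected from a ConfigMap and a Secret at deploy time rather than baked into the image or the repo.

# Hypothetical container fragment: no credentials or per-environment settings
# live in the image or in source control.
      containers:
        - name: myapp
          image: registry.example.com/myapp@sha256:abc123
          envFrom:
            - configMapRef:
                name: myapp-config    # non-sensitive, environment-specific settings
            - secretRef:
                name: myapp-secrets   # populated from a secret manager, never committed to Git
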
7. “Works in Prod” Testing
Relying on production traffic/errors (e.g., canaries without metrics, no feature flags) as primary QA.
Shifts risk to users; reputational damage; not continuous delivery — it’s continuous exposure.
Validate before production — monitor in production.
8. No Ownership of Pipeline
Pipeline maintained by “Platform Team”; app teams can’t modify it.
→ “My service needs a custom test — but I can’t change the pipeline.”
Creates friction, encourages workarounds, reduces accountability.
Teams own their full delivery lifecycle.
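
One way to split the responsibility, sketched as a hypothetical GitHub Actions setup (the reusable workflow path and its inputs are assumptions): the platform team publishes the shared stages as a reusable workflow, and each app team owns a thin workflow that calls it and adds its own service-specific checks.

# Hypothetical app-team workflow: shared platform stages are reused,
# but the team can extend its pipeline without forking it.
name: myapp-ci
on: [push]
jobs:
  platform-stages:
    # reusable workflow maintained by the platform team (assumed path and input)
    uses: example-org/platform-pipelines/.github/workflows/build-test-deploy.yml@main
    with:
      service: myapp
  custom-checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # the service-specific test the app team owns (assumed make target)
      - run: make custom-integration-test
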
9. Immutable Artifact ≠ Immutable Config
Same Docker image promoted — but Helm values/ConfigMaps differ wildly per env.
You’re not testing what you deploy; config bugs dominate prod incidents.
Infrastructure & config as code, with parity.
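
A minimal sketch, assuming a Helm-based setup with hypothetical file names: behaviour-affecting settings live in the shared values.yaml, and per-environment overrides are limited to wiring (endpoints, scale), so every environment exercises the configuration you actually tested.

# values.yaml (shared by all environments) -- behaviour lives here
featureFlags:
  newCheckout: true
logLevel: info

# values-prod.yaml (per-environment override) -- only wiring differs
database:
  host: prod-db.internal.example.com
replicaCount: 6

A diff between any two environment overrides should read like a list of endpoints and replica counts, not a list of behaviour changes.
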
10. Metrics Theater
Tracking “# of pipelines” or “build success %” — but not the DORA metrics:
• Lead time for changes
• Deployment frequency
• Change failure rate
• Time to restore service (MTTR)
Optimizes for activity, not outcomes. Hides systemic dysfunction.
Measure what matters: flow, stability, recovery.

🔍 Where These Come From

• Humble & Farley (2010): #4, #6, #9 (core ideas)
• DORA / Accelerate (2016–2023): #1, #5, #10 (validated via 30k+ teams)
• CNCF / GitOps surveys: #3, #8 (platform-team tension)
• Real post-mortems (Netflix, AWS, Azure): #2, #7 (scale-induced failures)

🛠 Bonus: Quick Fixes for Top 3

• Pipeline Theater: add a “Can we release now?” button to your pipeline UI — if it’s not green, you’re not doing CD.
• GitOps ≠ CD: require image: myapp@sha256:abc123 (not :latest) in Git — enforce via PR policy, as sketched below.
• Test Ice Cream Cone: delete 50% of UI tests; replace with contract/API tests. Measure feedback time — target <10 mins.
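
One way to enforce the second fix, sketched as a hypothetical GitHub Actions check (the workflow name and the deploy/ manifest path are assumptions): the PR fails whenever a manifest uses :latest or references an image without a pinned digest.

# .github/workflows/image-policy.yml -- hypothetical PR policy check
name: image-policy
on:
  pull_request:
    paths:
      - "deploy/**/*.yaml"    # assumed location of the GitOps manifests
jobs:
  forbid-mutable-tags:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Reject :latest and digest-less image references
        run: |
          if grep -RInE 'image:[[:space:]]*[^[:space:]]+:latest' deploy/; then
            echo "Found :latest image references"; exit 1
          fi
          if grep -RInE '^[[:space:]]*image:' deploy/ | grep -v '@sha256:'; then
            echo "Found image references without a pinned digest"; exit 1
          fi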

📊 DORA Health Check (2025 Benchmarks)

Each metric lists the elite benchmark first, then the high-performer benchmark:
• Deployment Frequency: multiple per day / ≥ 1 per day
• Lead Time for Changes: < 1 hour / < 1 day
• Change Failure Rate: < 15% / < 30%
• Time to Restore Service (MTTR): < 1 hour / < 1 day



Anti-Patterns 11–20

11. “Golden Path” Illusion
Platform team provides “one approved pipeline template” — but 80% of teams fork/ignore it due to real-world constraints (e.g., legacy DB, compliance).
Platform adoption stays low; shadow pipelines proliferate; compliance gaps widen.
12. Chaos-Driven Delivery
“We practice CD!” → but deployments fail 40% of the time, rollbacks take hours, and postmortems repeat the same root causes.
Normalizes failure; erodes trust; confuses frequency with reliability.
13. Dependency Hell in Pipelines
Service A deploys → breaks Service B → blocks B’s pipeline → blocks C → cascading deployment freeze.
Turns microservices into a distributed monolith — defeats the purpose of decoupling.
14. Compliance Theater
Manual approval gates “for audit” (e.g., “Click ‘Approve’ in Jira”) — but no automated policy checks (e.g., SBOM, CVE, config validation).
Creates bottlenecks without real security/risk reduction. Auditors see “approved”, but risk remains.
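
A minimal sketch of replacing the click-to-approve gate with a machine-checked one, assuming the Trivy CLI is available on the runner (the image reference and job layout are hypothetical): the pipeline itself fails on known HIGH/CRITICAL CVEs, which is evidence an auditor can actually rely on.

# Hypothetical CI job fragment: the approval is a policy check, not a Jira click.
  policy-gate:
    runs-on: ubuntu-latest
    steps:
      - name: Fail on known HIGH/CRITICAL vulnerabilities
        run: |
          # exits non-zero when findings at these severities exist
          trivy image --exit-code 1 --severity HIGH,CRITICAL \
            registry.example.com/myapp@sha256:abc123
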
15. Trunk-Based Development… in Name Only
Team claims TBD — but uses 3-day feature branches, merges only after QA sign-off, and disables mainline builds on failure.
Retains integration risk; feedback >24 hrs; “main” is never green.
16. Observability Gap at Deploy Time
No canary analysis, no SLO-based rollout, no auto-rollback — just “hope it works”.
Deployments become high-stress events; failures detected by users, not systems.
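
A minimal sketch of closing that gap, assuming Argo Rollouts with Prometheus (the metric name, query, threshold, and the app label myapp are illustrative): an AnalysisTemplate encodes the SLO so a degraded canary is detected by the system, not by users.

# Hypothetical AnalysisTemplate: the SLO is the gate, not a human watching dashboards.
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate
spec:
  metrics:
    - name: success-rate
      interval: 1m
      failureLimit: 1
      successCondition: result[0] >= 0.99
      provider:
        prometheus:
          address: http://prometheus.monitoring:9090
          query: |
            sum(rate(http_requests_total{app="myapp",status!~"5.."}[5m]))
            / sum(rate(http_requests_total{app="myapp"}[5m]))

The Rollout’s canary strategy then references this template in an analysis step (for example: setWeight: 10, a short pause, then the analysis), and a failed run aborts and rolls back the rollout automatically.
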
17. “Shift-Left” Without “Shift-Safe”
Security scans are pushed to devs — but the tools are slow, flaky, and lack fix guidance (e.g., “CVE-2024-1234 in transitive dep”).
Devs disable checks; security becomes a blocker, not an enabler.
18. Environment Sprawl Without Governance
Every squad spins up dev-<team>-<feature> envs — but no lifecycle mgmt, cost tracking, or cleanup.
Cloud costs balloon; security surface explodes; “Which env is prod-like?” becomes unanswerable.
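
A minimal sketch of lifecycle enforcement, as a hypothetical GitHub Actions workflow (the preview-<PR number> namespace convention and cluster credentials are assumptions): every preview environment is keyed to its PR and is deleted automatically the moment that PR closes.

# Hypothetical cleanup workflow: every preview env has an owner (the PR)
# and a guaranteed end of life (PR close or merge).
name: cleanup-preview-env
on:
  pull_request:
    types: [closed]
jobs:
  delete-preview:
    runs-on: ubuntu-latest
    steps:
      - name: Delete this PR's preview namespace
        # assumes the deploy job created namespace "preview-<PR number>"
        # and that cluster credentials are configured for the runner
        run: |
          kubectl delete namespace "preview-${{ github.event.pull_request.number }}" --ignore-not-found
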
19. Pipeline as a Black Box
Devs see “Pipeline failed at Stage 7” — but the logs are 10k lines, with no structured errors and no debug access.
MTTR skyrockets; devs work around the pipeline (e.g., “I’ll deploy from my laptop”).
20. CD Without CI
“We deploy daily!” → but builds are infrequent, tests are skipped on main, and PRs are huge.
You’re doing continuous deployment of untested code — the most dangerous form of CD.
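
A minimal sketch of the missing CI half, as a hypothetical GitHub Actions workflow (the make target is an assumption): every PR and every push to main gets a build plus a fast test run, with a hard timeout so slow feedback is treated as a defect.

# Hypothetical CI workflow: small PRs, fast feedback, main stays green.
name: ci
on:
  pull_request:
  push:
    branches: [main]
jobs:
  build-and-fast-tests:
    runs-on: ubuntu-latest
    timeout-minutes: 10       # feedback slower than this is the next problem to fix
    steps:
      - uses: actions/checkout@v4
      # assumed target: unit + contract tests only, no slow E2E
      - run: make test-fast

Mark this job as a required status check so merges to main are blocked when it fails.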

🔍 Real-World Origins

• #11 Golden Path Illusion: platform engineering teams at Fortune 500 (per CNCF Platform Survey 2024)
• #13 Dependency Hell: microservice outages at AWS, Azure (e.g., cascading config failures)
• #14 Compliance Theater: financial/healthcare audits (SOC 2, HIPAA findings)
• #17 Shift-Left Without Shift-Safe: DevEx surveys (68% of devs disable security scans due to poor UX; GitGuardian, 2024)
• #20 CD Without CI: startups scaling fast — “move fast and break things” → production instability

🛠 Detection & Quick Fixes

#13 Dependency Hell
• How to spot it: track deployment success rate by service after upstream deploys
• 1-week fix: introduce consumer-driven contract tests (e.g., Pact)
#16 Observability Gap
• How to spot it: % of rollbacks triggered manually vs. automatically
• 1-week fix: add automated canary analysis (e.g., Kayenta, Prometheus + SLO)
#18 Environment Sprawl
• How to spot it: count of non-prod envs per team; cost per env
• 1-week fix: enforce ephemeral envs (e.g., preview-<PR#>, auto-delete after merge)
#20 CD Without CI
• How to spot it: PR size >500 LOC; mainline build broken >10% of the day
• 1-week fix: enforce PR build + fast tests; block merge on failure

📊 The “CD Maturity” Spectrum (2025)

Level 0 (Manual): deployments on Fridays, runbooks, hero culture. Goal → automate build/deploy.
Level 1 (Pipeline Theater): green pipelines, but can’t release on demand. Goal → enable on-demand promotion.
Level 2 (Fragile CD): frequent deploys, but high failure rate. Goal → improve quality gates + rollback.
Level 3 (Resilient CD): safe, fast, boring releases. Goal → optimize for business outcomes.

Most teams stall at Level 1 — mistaking tooling for capability.



