A practical guide for platform engineers navigating the future of Kubernetes traffic management
The Wake-Up Call
If you’re reading this, you’ve probably already heard the news that sent ripples through the Kubernetes community: Ingress NGINX, the battle-tested controller that’s been routing traffic for countless production clusters since the early days of Kubernetes, is being retired. After March 2026, there will be no more security patches, no bug fixes, and no updates.
For many of us, Ingress NGINX has been that reliable workhorse we rarely thought about. It just worked. But as with all technology, evolution is inevitable, and sometimes that evolution means saying goodbye to old friends.
The good news? This retirement isn’t just an ending—it’s an opportunity to modernize your infrastructure with Gateway API, a more powerful and flexible approach to traffic management that represents the future of Kubernetes networking.
Why Ingress NGINX Had to Go
Before we dive into the migration path, it’s worth understanding why this beloved project is being sunset. The story is a familiar one in open source: a project becomes incredibly popular, but maintainer resources don’t scale with adoption.
Ingress NGINX was originally created as a reference implementation, a proof of concept to demonstrate how the Ingress API could work. Nobody expected it to become the de facto standard for Kubernetes traffic routing. Its flexibility—the ability to inject arbitrary NGINX configuration through annotations and snippets—made it powerful but also created a maintenance nightmare.
What was once considered a feature became a security liability. The project ran on the dedication of one or two maintainers working nights and weekends. Despite its massive user base, the community never rallied with enough contributor support to make it sustainable. Even the planned replacement, InGate, failed to gain traction.
This is a sobering reminder: if you depend on open source software, contribute back when you can. Maintainer burnout is real, and projects don’t maintain themselves.
Gateway API: More Than Just a Replacement
Gateway API isn’t simply a new version of Ingress—it’s a fundamental rethinking of how we manage traffic in Kubernetes. If Ingress was a Swiss Army knife, Gateway API is a full professional toolkit.
What Makes Gateway API Different?
Role-Oriented Design: Gateway API recognizes that different people manage different aspects of infrastructure. Cluster operators manage Gateways, while application developers manage Routes. This separation of concerns reduces conflicts and improves security.
Expressiveness Without Chaos: Instead of relying on vendor-specific annotations that varied wildly between implementations, Gateway API provides rich, standardized resources. Need header-based routing? Traffic splitting? Timeouts? They’re all first-class citizens in the API.
Portable by Design: With Ingress, switching providers often meant rewriting all your annotations. Gateway API implementations share a common core, making migrations between providers far less painful.
Advanced Capabilities Built-In: Traffic weighting for canary deployments, cross-namespace routing with explicit grants, and request mirroring aren’t afterthoughts—they’re native features.
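As a taste of what "native" means here, the traffic weighting mentioned above is expressed directly on an HTTPRoute rather than through annotations. A sketch of a 90/10 canary split (the route name, service names, and Gateway reference are illustrative):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: checkout-canary        # illustrative name
  namespace: production
spec:
  parentRefs:
  - name: prod-gateway         # assumes a Gateway named prod-gateway exists
    namespace: gateway-system
  hostnames:
  - shop.example.com
  rules:
  - backendRefs:
    - name: checkout-v1        # stable version receives 90% of requests
      port: 8080
      weight: 90
    - name: checkout-v2        # canary version receives 10%
      port: 8080
      weight: 10
```

Shifting the split is just an edit to the two `weight` fields, with no controller-specific annotations involved.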
Reading the Map: Understanding Gateway API Resources
Before you start migrating, you need to understand the new landscape. Gateway API introduces several key resources:
GatewayClass
Think of this as the infrastructure template. It defines what type of load balancer or proxy you’re using—whether that’s NGINX, Envoy, HAProxy, or a cloud provider’s offering. Cluster administrators typically manage this.
Gateway
This is your actual load balancer instance. It defines listeners (ports and protocols), TLS configuration, and which routes can attach to it. One Gateway can handle multiple domains and applications.
HTTPRoute (and friends)
These define how traffic gets routed to your services. HTTPRoute handles HTTP/HTTPS traffic, while TCPRoute, TLSRoute, and UDPRoute handle other protocols. Application teams typically manage these.
ReferenceGrant
Security-conscious and often overlooked, ReferenceGrants explicitly allow cross-namespace references. If your route in namespace A wants to send traffic to a service in namespace B, you need a ReferenceGrant in namespace B permitting it.
Choosing Your Gateway: The Implementation Landscape
One of the first decisions you’ll face is which Gateway API implementation to use. Unlike Ingress NGINX, where there was essentially one dominant choice, the Gateway API ecosystem offers several mature options.
NGINX Gateway Fabric
The spiritual successor to Ingress NGINX, maintained by NGINX (now part of F5). If you want the least disruptive migration and are comfortable with NGINX’s architecture, this is your natural path. It’s designed with migration from Ingress NGINX in mind.
Best for: Teams already invested in NGINX, those wanting familiar architecture, organizations prioritizing migration simplicity.
Envoy Gateway
A CNCF project built on the Envoy proxy, which powers Istio, Contour, and Ambassador. It’s production-ready, actively developed, and benefits from Envoy’s proven performance in demanding environments.
Best for: Teams wanting cutting-edge features, those already using Envoy elsewhere, organizations prioritizing open governance.
Istio
If you’re already running a service mesh or plan to, Istio’s Gateway API implementation leverages the same infrastructure. You get traffic management plus observability, security, and resilience features.
Best for: Organizations needing service mesh capabilities, teams with complex microservices architectures, those prioritizing observability.
Cilium Gateway API
Built on eBPF technology, Cilium offers incredible performance and integrates tightly with Cilium’s network policies. If you’re using Cilium for CNI, this is a natural fit.
Best for: Performance-critical workloads, teams already using Cilium, organizations prioritizing network security.
Tigera Calico Gateway
Leveraging Envoy under the hood, Calico’s Gateway API implementation integrates seamlessly with Calico’s network policies and security features. If you’re already using Calico for network policy enforcement, this provides unified management of both networking and security.
Best for: Organizations using Calico CNI, teams prioritizing zero-trust security models, those wanting tight integration between network policy and ingress, enterprises needing advanced compliance features.
Kong Gateway
Offers both open-source and enterprise options with API management features built in. If you need more than just routing—rate limiting, authentication, analytics—Kong provides a comprehensive platform.
Best for: API-first organizations, teams needing enterprise support, those wanting integrated API management.
The Migration Game Plan
Now for the practical part: actually moving your workloads. Here’s a battle-tested approach that minimizes risk.
Phase 1: Discovery and Planning
Start by understanding what you’re migrating. Create an inventory:
```shell
# Find all Ingress resources and back them up
kubectl get ingress --all-namespaces -o yaml > ingress-backup.yaml

# List Ingresses using NGINX-specific annotations
kubectl get ingress --all-namespaces -o json | \
  jq -r '.items[]
         | select((.metadata.annotations // {}) | keys | any(contains("nginx")))
         | "\(.metadata.namespace)/\(.metadata.name)"'

# Identify snippet usage (high migration effort)
kubectl get ingress --all-namespaces -o json | \
  jq -r '.items[]
         | select(.metadata.annotations."nginx.ingress.kubernetes.io/configuration-snippet"
                  or .metadata.annotations."nginx.ingress.kubernetes.io/server-snippet")
         | "\(.metadata.namespace)/\(.metadata.name)"'
```
Document everything:
- How many Ingress resources do you have?
- Which annotations are you using?
- Do you have custom NGINX snippets?
- What’s your TLS certificate strategy?
- Are there any mission-critical apps that need special attention?
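To prioritize the work, it also helps to tally which NGINX annotations appear most often across the inventory. A sketch using `jq` (the inlined sample stands in for your real `kubectl get ingress --all-namespaces -o json` export, and the `/tmp` path is illustrative):

```shell
# Sample inventory inlined for illustration; in practice, use your real export:
#   kubectl get ingress --all-namespaces -o json > /tmp/ingress-backup.json
cat > /tmp/ingress-backup.json <<'EOF'
{"items":[
  {"metadata":{"namespace":"production","name":"shop-app",
   "annotations":{"nginx.ingress.kubernetes.io/rewrite-target":"/$2",
                  "nginx.ingress.kubernetes.io/ssl-redirect":"true"}}},
  {"metadata":{"namespace":"staging","name":"blog",
   "annotations":{"nginx.ingress.kubernetes.io/ssl-redirect":"true"}}}
]}
EOF

# Count each nginx.ingress.kubernetes.io/* annotation across all Ingresses,
# most common first -- these are the features needing Gateway API equivalents.
jq -r '[.items[].metadata.annotations // {} | keys[]
        | select(startswith("nginx.ingress.kubernetes.io/"))]
       | group_by(.) | map({key: .[0], count: length})
       | sort_by(-.count)[]
       | "\(.count)\t\(.key)"' /tmp/ingress-backup.json
# Prints, most common first:
# 2	nginx.ingress.kubernetes.io/ssl-redirect
# 1	nginx.ingress.kubernetes.io/rewrite-target
```

The annotations at the top of the list deserve the earliest feasibility checks against your chosen Gateway implementation.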
Phase 2: Set Up Your Test Environment
Install Gateway API CRDs:
```shell
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.0/standard-install.yaml
```
Install your chosen Gateway implementation. For NGINX Gateway Fabric as an example:
```shell
kubectl apply -f https://github.com/nginxinc/nginx-gateway-fabric/releases/download/v1.4.0/crds.yaml
kubectl apply -f https://github.com/nginxinc/nginx-gateway-fabric/releases/download/v1.4.0/nginx-gateway-fabric.yaml
```
Create a test Gateway:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: prod-gateway
  namespace: gateway-system
spec:
  gatewayClassName: nginx
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All
  - name: https
    protocol: HTTPS
    port: 443
    allowedRoutes:
      namespaces:
        from: All
    tls:
      mode: Terminate
      certificateRefs:
      - kind: Secret
        name: wildcard-tls-cert
```
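Before attaching routes, it is worth confirming the controller has accepted and programmed the Gateway. A quick status check, assuming the resource names from the example above:

```shell
# Show the Gateway's status conditions (look for Accepted=True and Programmed=True)
kubectl -n gateway-system get gateway prod-gateway \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'

# The assigned external address appears here once provisioning completes
kubectl -n gateway-system get gateway prod-gateway \
  -o jsonpath='{.status.addresses[*].value}{"\n"}'
```

If `Programmed` stays `False`, the controller logs usually explain why (commonly a missing TLS Secret or an unsupported listener combination).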
Phase 3: Convert Your First Application
Choose a non-critical application for your first migration. Here’s how a typical conversion looks:
Original Ingress:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shop-app
  namespace: production
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/limit-rps: "100"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - shop.example.com
    secretName: shop-tls
  rules:
  - host: shop.example.com
    http:
      paths:
      - path: /api(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: shop-api
            port:
              number: 8080
      - path: /
        pathType: Prefix
        backend:
          service:
            name: shop-frontend
            port:
              number: 3000
```
Gateway API Equivalent:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: shop-app
  namespace: production
spec:
  parentRefs:
  - name: prod-gateway
    namespace: gateway-system
  hostnames:
  - shop.example.com
  rules:
  # API route with path rewriting
  - matches:
    - path:
        type: PathPrefix
        value: /api
    filters:
    - type: URLRewrite
      urlRewrite:
        path:
          type: ReplacePrefixMatch
          replacePrefixMatch: /
    backendRefs:
    - name: shop-api
      port: 8080
  # Frontend route
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: shop-frontend
      port: 3000
```
Notice what happened here:
- The `rewrite-target` annotation became a `URLRewrite` filter
- SSL redirect is handled at the Gateway level (HTTPS listener only)
- Rate limiting requires implementation-specific policies (varies by controller)
- Path matching is more explicit and predictable
Phase 4: Handle the Tricky Parts
TLS Management
Gateway API handles TLS at the Gateway level, not per route. This is actually cleaner but requires adjustment:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: prod-gateway
spec:
  gatewayClassName: nginx
  listeners:
  - name: https-shop
    protocol: HTTPS
    port: 443
    hostname: shop.example.com
    tls:
      mode: Terminate
      certificateRefs:
      - name: shop-tls
  - name: https-blog
    protocol: HTTPS
    port: 443
    hostname: blog.ramasankarmolleti.com
    tls:
      mode: Terminate
      certificateRefs:
      - name: blog-tls
```
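The Secrets referenced by `certificateRefs` can be provisioned automatically if you use cert-manager. A sketch, assuming cert-manager is installed and a ClusterIssuer named `letsencrypt-prod` exists (both names are illustrative):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: shop-tls
  namespace: gateway-system    # must live in the namespace where the Gateway reads certificateRefs
spec:
  secretName: shop-tls         # produces the Secret referenced by the listener
  dnsNames:
  - shop.example.com
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-prod     # illustrative issuer name
```

Renewal then happens out of band, and the Gateway picks up the rotated Secret without any route changes.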
Cross-Namespace Routing
If you need to route traffic from a Gateway in one namespace to services in another, you need ReferenceGrants:
```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-gateway-routes
  namespace: production
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: gateway-system
  to:
  - group: ""
    kind: Service
```
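The grant above pairs with a route like the following: the HTTPRoute lives in `gateway-system`, while its backend Service lives in `production` (the route and service names are illustrative):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: shared-route           # illustrative name
  namespace: gateway-system    # matches spec.from.namespace in the grant
spec:
  parentRefs:
  - name: prod-gateway
  rules:
  - backendRefs:
    - name: shop-api           # Service in the granting namespace
      namespace: production    # allowed only because the ReferenceGrant exists
      port: 8080
```

Without the ReferenceGrant, the route's status reports the backend reference as not permitted and traffic is not forwarded.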
Advanced Features
Common NGINX annotations and their Gateway API equivalents:
- Timeouts: the HTTPRoute `timeouts` field (where supported) or implementation-specific policies
- Retries: implementation-specific (standardized retry support is still maturing)
- CORS: HTTPRoute filters or implementation-specific policies
- Authentication: typically handled via implementation-specific policies
- Rate limiting: implementation-specific policies
- Redirects: the built-in `RequestRedirect` filter on HTTPRoute
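Since redirects are native, the old `ssl-redirect` annotation becomes a plain filter. A sketch of an HTTP-to-HTTPS redirect attached only to the port-80 listener (the Gateway and listener names are illustrative and assume a listener named `http`):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: https-redirect
  namespace: gateway-system
spec:
  parentRefs:
  - name: prod-gateway
    sectionName: http          # attach only to the HTTP listener, not HTTPS
  rules:
  - filters:
    - type: RequestRedirect
      requestRedirect:
        scheme: https
        statusCode: 301
```

Because the filter is scoped via `sectionName`, HTTPS traffic on the same Gateway is unaffected.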
Phase 5: Parallel Running
Don’t just rip out Ingress NGINX. Run both controllers simultaneously:
- Keep Ingress NGINX handling production traffic
- Set up Gateway API resources for the same routes behind a separate load balancer address
- Use external testing to validate Gateway API routes
- Monitor performance and behavior differences
This gives you a safety net and time to ensure feature parity.
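One way to validate the new stack before touching DNS is to pin the hostname to each load balancer address and diff the responses. A sketch (the IPs and path are placeholders for your old and new entry points):

```shell
HOST=shop.example.com
OLD_IP=203.0.113.10   # Ingress NGINX load balancer (placeholder)
NEW_IP=203.0.113.20   # Gateway load balancer (placeholder)

# Fetch the same path through both stacks without changing DNS
curl -s --resolve "$HOST:443:$OLD_IP" "https://$HOST/api/health" -o old.json
curl -s --resolve "$HOST:443:$NEW_IP" "https://$HOST/api/health" -o new.json

# Any difference is a migration gap to investigate
diff old.json new.json && echo "responses match"
```

Repeating this across your highest-traffic paths (and comparing status codes and key headers, not just bodies) catches most behavioral drift before cutover.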
Phase 6: Gradual Cutover
Move applications incrementally:
- Start with development/staging environments
- Move low-risk production apps
- Progress to higher-risk workloads
- Keep mission-critical apps for last
For each migration:
- Update DNS or load balancer to point to new ingress
- Monitor closely for 24-48 hours
- Keep rollback plan ready
- Document any issues and solutions
Phase 7: Decommission
Once all applications are migrated and stable:
- Remove Ingress resources
- Scale down Ingress NGINX deployment
- After a safe waiting period, uninstall Ingress NGINX
- Update documentation and runbooks
Common Migration Challenges and Solutions
Challenge 1: Complex Rewrite Rules
Problem: You have intricate NGINX rewrite rules in snippets.
Solution: Some rewrite patterns map cleanly to URLRewrite filters. For complex cases, consider:
- Implementing rewrites in your application
- Using more sophisticated Gateway implementations that support advanced routing
- Creating custom policies if your implementation supports them
Challenge 2: Custom NGINX Configuration
Problem: You rely heavily on configuration snippets for custom behavior.
Solution:
- Evaluate if the functionality is actually necessary
- Check if your Gateway implementation offers equivalent features
- Consider moving logic to your application layer
- Use sidecar proxies for truly custom requirements
Challenge 3: Rate Limiting and WAF
Problem: You use NGINX’s rate limiting and ModSecurity.
Solution: Different Gateway implementations offer varying support:
- NGINX Gateway Fabric: Use NGINX policies
- Envoy Gateway: use its rate-limiting policy attachment (BackendTrafficPolicy)
- Istio: use Envoy-based rate limiting; RequestAuthentication and AuthorizationPolicy cover access control
- Consider dedicated WAF solutions like Cloudflare, AWS WAF, or open-source alternatives
Challenge 4: Observability Gaps
Problem: You have custom metrics and dashboards for Ingress NGINX.
Solution:
- Gateway API implementations expose metrics differently
- Most support Prometheus-compatible metrics
- Plan to rebuild dashboards for your new implementation
- Take this opportunity to improve your observability strategy
Real-World Lessons from Early Adopters
Lesson 1: Don’t Rush
Teams that tried to migrate everything at once regretted it. Those who took a methodical, app-by-app approach had smoother experiences.
Lesson 2: Test Everything Twice
Gateway API is more explicit than Ingress, which is good, but it means subtle configuration differences can have big impacts. What worked in staging might behave differently in production due to traffic patterns.
Lesson 3: Invest in Automation
If you have hundreds of Ingress resources, manually converting them is error-prone. Write scripts or use tools like ingress2gateway to automate conversion, then manually review and test.
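For bulk conversion, the kubernetes-sigs ingress2gateway tool can emit Gateway API equivalents of your existing resources; always review its output before applying anything (the flags shown follow the project's README and may change between releases):

```shell
# Convert live Ingress resources from the current kubectl context
ingress2gateway print --providers=ingress-nginx > converted.yaml

# Or convert from the backup taken during discovery
ingress2gateway print --providers=ingress-nginx \
  --input-file=ingress-backup.yaml > converted.yaml
```

Snippet annotations in particular will not convert automatically, so treat the output as a starting point, not a finished migration.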
Lesson 4: Documentation Is Your Friend
Gateway API implementations are still maturing. Read the docs thoroughly, especially around implementation-specific features and limitations.
The Silver Lining
While forced migrations are never fun, Gateway API genuinely offers improvements:
Better Security Posture: Role-based resource separation means developers can’t accidentally (or intentionally) inject arbitrary configuration that could compromise cluster security.
Improved Developer Experience: Once set up, HTTPRoutes are more intuitive than Ingress with its maze of annotations. Developers spend less time debugging cryptic NGINX behavior.
Future-Proof Architecture: Gateway API is actively developed by the Kubernetes community and is designed to evolve. New features like session affinity, request mirroring, and advanced traffic management are being added regularly.
Vendor Flexibility: Don’t like your current Gateway implementation? Switching to another is significantly easier than migrating between different Ingress controllers used to be.
Your Action Plan for Tomorrow
If you’re running Ingress NGINX in production, here’s what to do immediately:
- Audit your infrastructure – Run the discovery commands to understand your current state
- Set a deadline – Aim to complete migration by Q4 2025, well before the March 2026 cutoff
- Choose an implementation – Research Gateway implementations and pick one that fits your needs
- Allocate resources – This isn’t a “spare time” project. Dedicate engineering time and treat it as a priority
- Start learning – Read Gateway API documentation, experiment in a test cluster
- Communicate – Inform stakeholders, plan for application team training if needed
Closing Thoughts
The retirement of Ingress NGINX marks the end of an era, but it’s not a crisis—it’s an evolution. Gateway API represents years of lessons learned from Ingress’s limitations, and it provides a more robust foundation for the next decade of Kubernetes networking.
Yes, migrations are work. Yes, there will be challenges. But the Kubernetes ecosystem is healthier when we sunset projects that have become unmaintainable and embrace better alternatives.
Take this opportunity to not just migrate, but to improve. Review your traffic patterns, simplify your routing rules, enhance your security posture, and build a more maintainable infrastructure.
The deadline is March 2026, but don’t wait that long. Start planning now, migrate thoughtfully, and you’ll emerge with a better, more future-proof system.
Your future self—and your infrastructure—will thank you.
Hope you enjoyed the post.
Cheers
Ramasankar Molleti
