This week I want to tackle deploying IBM Sterling B2Bi on Azure using containers. After helping several organizations navigate their cloud migrations, I’ve learned that choosing the right deployment pattern can make or break your project.
Today I’m sharing the five main architecture patterns for Sterling B2Bi on Azure and what actually works in production.
Single cluster or multiple? Separate VNETs or shared? Namespace isolation or subnet separation? These aren’t just infrastructure decisions—they’re security, cost, and operational decisions that will impact your deployment for years. Get it right, and you have a scalable, secure platform. Get it wrong, and you’re either overpaying for unnecessary complexity or failing security reviews.
After working through multiple deployments, I’ve seen these five patterns emerge. Each has its place depending on your security requirements, team expertise, and budget.
Pattern 1: Dual Clusters, Dual VNETs
Completely separate Kubernetes clusters in different Azure VNETs: one for the DMZ layer (SSP, CM), another for the application layer (ASI, AC, API). The networks are fully separate, so cross-tier traffic requires VNET peering or encrypted traffic over the public internet.
The Good: Highest isolation possible. DMZ and application layers separated at cluster and network level. Works for financial services and healthcare where regulations mandate this separation.
The Reality Check: You’re paying for and managing two complete clusters: double infrastructure costs, double monitoring overhead, plus VNET peering complexity. Control planes, logging, and monitoring are all duplicated.
When it makes sense: Compliance genuinely mandates network-level separation, or security teams won’t sign off on anything less.

Pattern 2: Dual Clusters, Single VNET
Separate clusters for the DMZ and application layers within one Azure VNET. AKS deployments typically place each cluster’s node pools in their own subnets anyway, so you’re simply organizing them intelligently within one network boundary.
The Good: Solid security isolation through separate clusters without VNET peering headaches. Lower network latency and data transfer costs. Sweet spot between security and operational sanity.
The Reality Check: Still running two complete clusters—double management overhead and higher infrastructure costs than single-cluster patterns.
When it makes sense: My default recommendation for most enterprise production deployments. Solid security without excessive complexity.
Pattern 3: Single Cluster, Namespace Isolation
One Kubernetes cluster with DMZ and application components separated by namespaces. Physical nodes span both environments, but namespace-level network policies control traffic.
The Good: Most resource-efficient. No duplicate clusters, and most managed Kubernetes services include network policy support.
The Reality Check: Lower isolation—single cluster compromise affects everything. Requires real expertise with Kubernetes network policies. May need Calico or another CNI for advanced network policy support.
When it makes sense: Development and staging environments. Production works if security requirements are reasonable and your team really understands Kubernetes networking.
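To make the namespace model concrete, here is a minimal sketch of Kubernetes network policies, assuming hypothetical `dmz` and `app` namespaces and a CNI (such as Calico) that enforces NetworkPolicy: a default-deny on the application tier, plus an explicit allow from the DMZ.

```yaml
# Default-deny all ingress to the application tier (namespace names are hypothetical).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: app
spec:
  podSelector: {}
  policyTypes:
    - Ingress
---
# Allow only traffic originating in the dmz namespace (e.g., SSP forwarding to ASI).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-dmz
  namespace: app
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: dmz
```

The default-deny policy is the part teams most often forget: without it, the allow rule is meaningless because all traffic is permitted anyway.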
Pattern 4: Single Cluster, Subnet Separation
Single Kubernetes cluster with DMZ and application pods on physically separate nodes in different subnets. Pods can’t migrate across environment boundaries because nodes are subnet-locked.
The Good: Subnet-level network isolation without deep Kubernetes networking expertise. Physical node separation satisfies some security requirements without dual-cluster overhead.
The Reality Check: Less efficient than namespace model—dedicated nodes per environment. Still a single cluster, which bothers some security teams.
When it makes sense: Network-level isolation needed, but team lacks deep Kubernetes networking expertise.
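One way to sketch the subnet-locked model: create a dedicated node pool in the DMZ subnet, label and taint it, and pin DMZ workloads to it. The `tier=dmz` label and taint below are my own placeholder names, not anything Sterling mandates.

```yaml
# Hypothetical pod-spec fragment pinning a DMZ workload (e.g., SSP) to a
# node pool that was created in the DMZ subnet with a tier=dmz label and taint.
spec:
  nodeSelector:
    tier: dmz              # label applied when creating the DMZ node pool
  tolerations:
    - key: tier
      value: dmz
      effect: NoSchedule   # the matching taint keeps application pods off DMZ nodes
```

The taint matters as much as the selector: the nodeSelector keeps DMZ pods on DMZ nodes, while the taint keeps everything else off them.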

Pattern 5: Single Cluster, Single Subnet
Everything in one cluster, one subnet. DMZ and application components share infrastructure, with pod-level network policies or Azure Network Security Groups for traffic control.
The Good: Minimum infrastructure and management overhead. Dead simple to understand and maintain.
The Reality Check: Least secure option. Relies entirely on network policies—one mistake and pods are talking when they shouldn’t be.
When it makes sense: Development, test environments, or production workloads with genuinely low security requirements.
Azure has some quirks you need to understand before deploying Sterling containers.
Node Selection: Use Standard node types (not Burstable—CPU credits tank performance). Deploy across 3 Availability Zones with N+1 capacity planning.
Networking bandwidth matters: ensure at least 1 Gbps for production, and more for I/O-heavy workloads like SFG or Connect:Direct. Note that B2Bi containers don’t support ARM architecture.
For HA, span deployments across 3 Availability Zones with spare capacity. If you need 6 nodes for load, deploy 9 (3 per AZ) to handle complete AZ failures.
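The 9-nodes-for-6 rule only helps if replicas actually land evenly across zones, which you can enforce with a topology spread constraint. A minimal sketch, assuming a hypothetical `app: asi` pod label:

```yaml
# Hypothetical deployment fragment spreading ASI replicas evenly across AZs.
spec:
  replicas: 9              # 3 per AZ; losing a full zone still leaves 6 running
  template:
    spec:
      topologySpreadConstraints:
        - maxSkew: 1       # no zone may run more than one extra replica
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: asi     # assumed pod label for the ASI deployment
```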
Storage: Azure Files Premium with SSD backing is your only supported option. NFS 4.1 with Zone Redundant Storage handles cross-AZ failover. Watch transaction costs on I/O-heavy workloads.
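A minimal StorageClass sketch for this setup, assuming the Azure Files CSI driver that ships with AKS; the class name is hypothetical:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sterling-files-nfs   # hypothetical name
provisioner: file.csi.azure.com
parameters:
  skuName: Premium_ZRS       # zone-redundant Premium (SSD-backed) file shares
  protocol: nfs              # NFS 4.1
reclaimPolicy: Retain        # keep Sterling data if a PVC is deleted
allowVolumeExpansion: true
```

Premium_ZRS is what gives you the cross-AZ failover mentioned above; plain Premium_LRS keeps the share in a single zone.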
Object Storage: Azure Blob Storage works for two use cases: document storage (offload large BPs from the database) and SFG adapter storage (direct file transfer). Connect:Direct also supports Blob Storage as of v6.2.0.
Azure managed databases (Azure SQL, Azure PostgreSQL) aren’t officially supported due to connection stability issues during maintenance windows. IBM is working with Microsoft on this, but for now, deploy self-managed databases in Azure VMs or containers.
Use Azure’s Standard Load Balancer (not Basic) for production—it has the AZ support and monitoring you need.
For firewalls, there’s no tight support statement from Sterling, so use what fits your requirements: Network Security Groups (NSGs) for basic filtering, Azure Firewall for advanced capabilities, and Sterling Secure Proxy (SSP) for Sterling-specific protection. I recommend layering all three.
Note: Azure Managed HSM isn’t supported by Sterling currently. Azure PrivateLink is worth investigating to connect VNETs to Azure services without public internet traffic.
(I’ll cover Azure HA/DR strategies, backup approaches, and detailed cost optimization in another Issue—this is a deep topic that deserves its own newsletter.)

Start with Pattern 2, Adjust Later: Most organizations should begin with Pattern 2 (Dual Clusters, Single VNET) for solid security without excessive complexity. You can simplify to Pattern 3 if it’s overkill, or strengthen to Pattern 1 if compliance requires it. Don’t get clever with the most efficient pattern without the Kubernetes expertise to back it up.
GitOps from Day One: Store all Helm values, network policies, and configurations in Git. Use ArgoCD or Flux for automated deployment. The payoff is massive—recreate entire environments in minutes instead of days, plus you get a complete audit trail.
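As a sketch of what day-one GitOps might look like with ArgoCD, assuming a hypothetical Git repo holding your Helm values (the repo URL, path, and namespaces below are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sterling-b2bi            # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/sterling-config.git  # assumed repo
    targetRevision: main
    path: helm/b2bi              # assumed path to the Helm chart/values
    helm:
      valueFiles:
        - values-prod.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: app               # assumed target namespace
  syncPolicy:
    automated:
      prune: true                # delete resources removed from Git
      selfHeal: true             # revert manual drift back to Git state
```

With `selfHeal` on, Git is the single source of truth: anyone who hand-edits the cluster gets their change reverted, which is exactly the audit-trail property you want.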
Monitoring Across Three Layers: Deploy Azure Monitor (infrastructure), Kubernetes monitoring (cluster health), and Sterling Control Center (application visibility). Set up alerts that trigger automatic scaling, not just wake your ops team at 3am.
Autoscaling Reality: Configure Horizontal Pod Autoscalers for AC (adapter containers), ASI nodes, and REST APIs. Critical caveat: SSP and Connect:Direct cannot horizontally autoscale—plan capacity upfront.
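A minimal HPA sketch for the adapter-container deployment; the deployment name, replica bounds, and CPU target are assumptions to tune for your workload:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: b2bi-ac            # hypothetical adapter-container deployment
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: b2bi-ac
  minReplicas: 2
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```

No equivalent exists for SSP or Connect:Direct, which is why their capacity has to be sized upfront.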
Cost Reality: Containers can cut compute by 25%+ with proper right-sizing and autoscaling. Use Azure Reserved Instances for production (30-50% savings) and spot instances for dev/test (up to 90% off).
The beauty of containers is that your infrastructure can finally match the dynamic nature of B2B integration workloads. No more paying for idle capacity or scrambling during peak periods—if you choose the right pattern from the start.
Which deployment pattern are you using? And more importantly—what’s been your biggest pain point with Sterling B2Bi on Azure?
As always, if you have questions about your specific environment or need guidance on which deployment pattern makes sense for your organization, feel free to reach out.
Coliance’s name embodies its mission: Collaboration + Alliance.
By working hand-in-hand with IBM and industry partners, Coliance helps organisations integrate complex systems, visualise data, and automate decision-making for faster, smarter operations.
Through initiatives like the IBM Amplify program, Coliance is guiding enterprises toward modular, cloud-native integration architectures infused with AI and automation – replacing outdated tools and enhancing real-time insight.
Coliance’s flagship solutions leverage IBM’s Watsonx AI and Sterling technologies to deliver visibility and performance across hybrid ecosystems.
These solutions empower IT, operations, and supply-chain teams with the data intelligence, speed, and confidence required to succeed in today’s connected world.
With more than 23 years of experience across integration and middleware technologies, including IBM Sterling, App Connect, and Cloud Pak for Integration, Coliance provides end-to-end services – from architecture and implementation to optimisation and 24/7 support.
This depth of expertise positions Coliance as a trusted IBM Gold Partner and a strategic force in shaping the future of hybrid iPaaS, MFT modernisation, and AI-driven business integration.
Looking Ahead: Smarter, Faster, Stronger
As supply chains grow more complex and responsiveness becomes critical, Coliance is empowering enterprises to connect smarter, move faster, and grow stronger through visibility, automation, and intelligence.
By combining innovation, partnership, and foresight, Coliance continues to define what’s next for AI-powered integration – helping businesses transform complexity into clarity in today’s hybrid, data-driven world.