VMware vSAN and NSX-T Compatibility
There are a lot of discussions about VMware NSX and VMware vSAN, most of them around compatibility.
vSAN and NSX are compatible with each other; however, vSAN traffic is not supported on the NSX overlay network. Just as VDS port groups can be used to configure vSAN vmkernel adapters, NSX-T VLAN-backed logical switches can also be used for that purpose. In addition, NSX-T logical routers can serve as gateways to route vSAN traffic, provided the underlying networks are NSX-T VLAN-backed logical switches.
In this blog post, I cover how NSX-T can be used to set up the networking for a vSAN stretched cluster.
Deep Dive of vSAN Stretched Cluster Using an NSX-T Backed L3 Network
One way to configure a vSAN stretched cluster is with L3 networking between the data nodes and the witness host. In such a deployment, the data nodes and the witness host may reside in different networks, so the vSAN vmkernel adapters need to point to their respective gateways in order to reach each other. The following is a high-level network view of this topology, where the hosts use VDS port groups to configure the vmkernel adapters.
Note: The configuration illustrated below uses 2 hosts in the Primary Site, 2 in the Secondary Site, and 1 witness host. However, an ideal vSAN stretched cluster configuration has at least 3 hosts in the Primary Site, 3 in the Secondary Site, and 1 witness host.
Fig. 1: All hosts have their networking backed by VDS port groups (high-level topology for representation; it may not reflect an exact deployment)
NSX-T Network Backing for Configuring vSAN Stretched Cluster
NSX-T logical routers (Tier-0) enable communication between the data nodes and the witness host, and NSX-T logical switches serve as the network backing for the vSAN vmkernel adapters.
Fig. 2: NSX-T providing host networking and enabling communication between the data nodes and the witness host
As shown in Fig. 2, this topology uses NSX-T logical switches and logical routers for vSAN traffic. The vmkernel adapters of the data nodes and the witness host point to their Tier-0 (logical router) interfaces (HA VIPs in this case). As depicted in the diagram, Host1 and Host2 in the Primary Site point to their T0 router at 10.10.10.1, Host3 and Host4 in the Secondary Site point to their T0 router at 20.20.20.1, and the witness host points to its T0 router at 30.30.30.1. BGP needs to be set up between the T0 router of the data nodes and that of the witness host (Tier0-Site and Tier0-Witness in the diagram above) so that each advertises its respective networks.
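Because vSAN traffic uses the default TCP/IP stack, each host must either have its vSAN vmkernel gateway overridden to the local HA VIP or carry static routes to the remote vSAN networks. Below is a minimal sketch of the static-route approach on a Primary Site data node; the vmkernel name (vmk1) and the witness vmkernel IP are assumptions for illustration, while the gateway and network addresses come from the topology above.

```
# Static routes on a Primary Site data node so that vSAN traffic to the
# Secondary Site and the witness network goes via the local Tier-0 HA VIP.
esxcli network ip route ipv4 add -n 20.20.20.0/24 -g 10.10.10.1
esxcli network ip route ipv4 add -n 30.30.30.0/24 -g 10.10.10.1

# Check reachability to the witness host's vSAN vmkernel (IP assumed)
# from the vSAN vmkernel adapter.
vmkping -I vmk1 30.30.30.11
```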
Tier0 Configuration That Serves as Gateway to Hosts in Primary and Secondary Site
Fig. 3: HA VIP configuration on the T0 router for the Primary and Secondary Sites, and BGP connectivity to the witness host
As shown in Fig. 3 above, three HA VIP interfaces are set up on the T0 router: one for connectivity to the Primary Site, one for the Secondary Site, and one for the BGP link to the witness host.
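For reference, an HA VIP can also be defined through the NSX-T Policy API on the Tier-0's locale services rather than the UI. The sketch below is illustrative only; the manager address, object IDs, and interface paths are assumptions, and the exact schema should be validated against your NSX-T version.

```
# Minimal sketch (assumed IDs/paths): attach an HA VIP of 10.10.10.1/24 to two
# existing external interfaces of the Tier0-Site router.
curl -k -u admin -X PATCH \
  "https://<nsx-manager>/policy/api/v1/infra/tier-0s/Tier0-Site/locale-services/default" \
  -H "Content-Type: application/json" \
  -d '{
        "ha_vip_configs": [{
          "enabled": true,
          "external_interface_paths": [
            "/infra/tier-0s/Tier0-Site/locale-services/default/interfaces/primary-uplink-1",
            "/infra/tier-0s/Tier0-Site/locale-services/default/interfaces/primary-uplink-2"
          ],
          "vip_subnets": [{ "ip_addresses": ["10.10.10.1"], "prefix_len": 24 }]
        }]
      }'
```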
Tier0 Configuration That Serves as Gateway to Witness Host
Fig. 4: HA VIP configuration on the T0 router for the witness host interfaces and BGP connectivity to the data nodes
As shown in Fig. 4, two HA VIP interfaces are configured: one for the witness network and one for BGP connectivity to the vSAN data nodes.
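Once both Tier-0 routers are configured, the BGP session and route exchange can be verified from the edge node CLI. The snippet below is a sketch; the VRF ID reported by `get logical-routers` will differ in every deployment.

```
# On an edge node hosting the Tier-0 service router (names/IDs will vary):
get logical-routers          # note the VRF ID of the Tier-0 SR
vrf 2                        # enter that VRF context (ID shown here is an example)
get bgp neighbor summary     # the peer Tier-0 should show an Established session
get route                    # the remote vSAN networks should appear as BGP-learned routes
```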
Setting Up the vmkernel Adapters on ESXi Hosts for vSAN
The following is the configuration of the host vmkernel adapters in the Primary Site (Host1 and Host2).
The following is the configuration of the host vmkernel adapters in the Secondary Site (Host3 and Host4).
The following is the configuration of the witness host’s vmkernel adapters.
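As a text companion to the screenshots above, here is a minimal esxcli sketch of the per-site addressing, assuming the vSAN vmkernel is vmk1 on every host and using illustrative host IPs; only the gateway addresses 10.10.10.1, 20.20.20.1, and 30.30.30.1 come from the topology above. The per-vmkernel gateway override requires vSphere 6.5 or later.

```
# Primary Site data node (e.g. Host1) - vSAN vmk on the 10.10.10.0/24 segment
esxcli network ip interface ipv4 set -i vmk1 --ipv4=10.10.10.11 --netmask=255.255.255.0 --type=static --gateway=10.10.10.1
esxcli vsan network ip add -i vmk1    # tag the adapter for vSAN traffic

# Secondary Site data node (e.g. Host3) - 20.20.20.0/24, gateway 20.20.20.1
esxcli network ip interface ipv4 set -i vmk1 --ipv4=20.20.20.11 --netmask=255.255.255.0 --type=static --gateway=20.20.20.1
esxcli vsan network ip add -i vmk1

# Witness host - 30.30.30.0/24, gateway 30.30.30.1
esxcli network ip interface ipv4 set -i vmk1 --ipv4=30.30.30.11 --netmask=255.255.255.0 --type=static --gateway=30.30.30.1
esxcli vsan network ip add -i vmk1
```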
Edge Deployment Options
The placement of the edge nodes is crucial: if they become inaccessible, the vSAN cluster can partition, affecting the availability of the applications deployed on it. The following are two options an end user can choose from.
Edges Deployed at a Remote Site
As shown in the capture above, vsan-edge1, vsan-edge2, vsan-witness-edge1, and vsan-witness-edge2 are the edge nodes that belong to two separate edge clusters, on which the separate T0 routers for the data nodes and the witness are deployed. To configure a vSAN stretched cluster for the site “vcenter (Cluster Name: Cluster)”, it is recommended to deploy these edges at the site “vcenter2 (Cluster Name: Cluster2)”.
As depicted above, be sure to select the proper Compute Manager, Cluster, and Datastore for this deployment. In this case, since the edges are being deployed at the vcenter2 site, select the relevant cluster and datastore configuration.
Edges in Local Site with Affinity Rules
If the vSAN stretched cluster is configured at the local site (a single vCenter deployment), it is recommended to deploy one edge node in the Primary Site and another in the Secondary Site. This ensures that if the Primary Site goes completely down, the edge of the edge cluster in the Secondary Site is still up and running, keeping vSAN healthy.
Select the Compute Manager, Cluster, and Datastore configuration for the data node and witness edges.
As shown above, vsan-edge1 and vsan-edge2 have host affinity rules configured such that vsan-edge1 is tied to Host1 (in the Primary Site) and vsan-edge2 is tied to Host4 (in the Secondary Site). Be sure to configure similar host affinity rules for vsan-witness-edge1 and vsan-witness-edge2, so that one is tied to a host in the Primary Site and the other to a host in the Secondary Site.
vSAN Stretched Cluster Health Check
Ensure that the vSAN health is “Green” after setting up the stretched cluster with NSX-T backing.
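For a quick command-line sanity check alongside the vSAN health UI, the following can be run on any data node; this is a sketch, and the `esxcli vsan health` namespace is only available on recent ESXi builds.

```
# Confirm the host sees all cluster members (a partition shows a reduced member count).
esxcli vsan cluster get

# Summarize the vSAN health checks from the host's perspective.
esxcli vsan health cluster list
```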
vSAN and NSX Resources
- Visit our product pages for more information on vSAN and NSX
- VMware YouTube Channels for vSAN or NSX for more on features and capabilities
- Getting Started with vSAN Hands-on Lab
- Getting Started with NSX Hands-on Lab