A stretch cluster is a high-availability architecture that divides a storage volume into two fault domains and adds a witness node. Its primary function is to ensure that even if a host in one fault domain fails, the other fault domain can continue to operate normally, providing active-active access to business data. This architecture effectively enhances the system's fault tolerance and ensures business continuity.
Precautions:
- The two data centers (Data Center 1 and Data Center 2) must be directly connected through a Layer 2 network. Aggregated 10G bare fiber links are recommended, and the network latency between the two data centers should be no more than 1 ms. The switches in each data center should be stacked or configured with M-LAG, and three separate VLANs should be created on the stacked/M-LAG switches to carry the management, VXLAN, and storage network traffic between the two data centers.
- The link between the witness node and the cluster should have a latency of no more than 5 ms, with a recommended bandwidth of at least 100 Mbps (a latency-check sketch covering both thresholds follows this list).
- Licensing Requirements: Using stretch clusters requires the appropriate aSC license.
- At least 4 hosts (2 in each data center) and 1 witness node are required.
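Before deployment, you can roughly verify both latency requirements with a ping-based probe run from a host in one data center. The following Python sketch is an illustration only, not part of the HCI tooling: the IP addresses are placeholders, and it assumes a Linux host whose `ping` prints the standard `rtt min/avg/max/mdev` summary line.

```python
# Hypothetical latency probe; replace the placeholder IPs with a host in the
# peer data center and with the witness node before running.
import re
import subprocess

THRESHOLDS_MS = {
    "10.0.1.10": 1.0,  # placeholder: host in the peer data center (<= 1 ms)
    "10.0.3.10": 5.0,  # placeholder: witness node (<= 5 ms)
}

def avg_rtt_ms(host: str, count: int = 20) -> float:
    """Ping the host and parse the average RTT from the summary line."""
    out = subprocess.run(
        ["ping", "-c", str(count), "-i", "0.2", host],
        capture_output=True, text=True, check=True,
    ).stdout
    # Summary looks like: rtt min/avg/max/mdev = 0.045/0.051/0.062/0.007 ms
    match = re.search(r"= [\d.]+/([\d.]+)/", out)
    if not match:
        raise RuntimeError(f"could not parse ping output for {host}")
    return float(match.group(1))

for host, limit in THRESHOLDS_MS.items():
    rtt = avg_rtt_ms(host)
    status = "OK" if rtt <= limit else "TOO HIGH"
    print(f"{host}: avg RTT {rtt:.3f} ms (limit {limit} ms) -> {status}")
```

A probe like this only measures ICMP round-trip time; for the 100 Mbps witness bandwidth recommendation, use a dedicated throughput tool.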
Features of Stretch Clusters:
Fault Domains: Stretch clusters achieve high availability by dividing the cluster into fault domains. Even if one fault domain fails, the other fault domain can still provide services.
Witness Node: The witness node holds a replica used to store data addresses. It can be deployed on a virtual machine or a physical machine, and it must be deployed separately from the two fault domains.
Deploy Witness Node
The witness node can be deployed on a physical server or in a VMware virtualization environment. Refer to the following table for hardware planning based on the cluster size.
The witness (arbitration) disk does not support HDDs. It must be an enterprise SSD that is on the compatibility list; otherwise, it cannot pass the installation check. An IOPS verification sketch follows the sizing table below.
Witness node sizing:
| Cluster size | Minimum hardware requirements | Description |
| --- | --- | --- |
| Small deployment (4 to 6 HCI nodes, 2 to 3 per data center) | CPU: 6 cores; Memory: 32 GB; System disk: capacity ≥ 128 GB; Witness disk: enterprise SSD with capacity > 100 GB (virtualization deployment requires no less than 1000 IOPS) | Supports both VMware virtualization deployment and physical machine deployment. |
| Midsize deployment (8 to 16 HCI nodes, 4 to 8 per data center) | CPU: 8 cores; Memory: 32 GB; System disk: capacity ≥ 128 GB; Witness disk: two 128 GB or 248 GB enterprise SSDs configured as RAID 1 | Physical machine deployment is recommended. |
| Large deployment (18 to 24 HCI nodes, 9 to 12 per data center) | CPU: 16 cores; Memory: 32 GB; System disk: capacity ≥ 128 GB; Witness disk: two 128 GB or 248 GB enterprise SSDs configured as RAID 1 | Physical machine deployment is required. |
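If you want a rough check of the 1000 IOPS floor before running the installer, a short benchmark with the open-source fio tool can help. The sketch below is illustrative only and does not replace the installer's own compatibility check: it assumes fio is installed on a Linux host, `/dev/sdX` is a placeholder for the candidate witness disk, and a random-read workload is used so the disk contents are left untouched.

```python
# Hypothetical pre-check: measure random-read IOPS on the candidate witness
# disk with fio and compare against the 1000 IOPS floor from the table above.
import json
import subprocess

DEVICE = "/dev/sdX"   # placeholder: candidate witness disk
REQUIRED_IOPS = 1000  # minimum for virtualization deployment

result = subprocess.run(
    [
        "fio", "--name=witness-check", f"--filename={DEVICE}",
        "--rw=randread", "--bs=4k", "--iodepth=32", "--direct=1",
        "--runtime=30", "--time_based", "--ioengine=libaio",
        "--output-format=json",
    ],
    capture_output=True, text=True, check=True,
)

iops = json.loads(result.stdout)["jobs"][0]["read"]["iops"]
verdict = "meets" if iops >= REQUIRED_IOPS else "does NOT meet"
print(f"{DEVICE}: {iops:.0f} random-read IOPS -> {verdict} the floor")
```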
2.4.2.2.1.1 Deploy Witness Node by ISO File
To install the witness node on a physical server, first download the ISO package. The download package is named HCI6.11.1 Witness Node Installation ISO (For both Stretched cluster and 2+1 environment). Choose the appropriate installation package based on your HCI version.
Before deploying on a physical server, use a tool such as UltraISO to write the ISO file to a USB drive and create a bootable drive.
Below are the detailed installation steps.
Step 1: Adjust the boot method of the physical server. Modify the BIOS settings to set the server to boot from the USB drive, then insert the USB drive and restart the server.
Step 2: Select the language. The following installation steps will be displayed based on the language selected in this step.
Step 3: Select the installation version. Since the Chinese and international versions share the same installation package, you need to choose the version at the start of the installation. If the HCI cluster runs the international version, select the international version here; if it runs the Chinese version, select the Chinese version.
Step 4: Select Install Sangfor HCI on this machine and press Enter.
Step 5: Select Sangfor HCI Installer and press Enter.
Step 6: After a short wait, a window pops up showing the terms of use. Read the terms carefully and, if you have no objections, select Agree to proceed.
Step 7: Select the installation mode. Here, choose Stretch Volume Arbiter Node. The other option, Two Host Volume Arbiter Node, is used for a two-node cluster.
Step 8: A window pops up showing an alert message for the witness node; click Yes to continue.
Step 9: Select the disk on which to install the Sangfor HCI software, then click OK.
Step 10: Then, you will be prompted that this operation will format the disk. Enter "format" in the input box and select OK.
Step 11: Disk speed test. If you do not need the disk speed test, select No in the pop-up dialog box. We recommend selecting Yes to perform a disk read/write speed test so you can verify whether the disk meets the requirements. A window then pops up showing the result; if the disk shows good read/write performance, click Yes to continue.
Step 12: Confirm the disk boot order. Ensure that the currently selected disk is set as the first boot device in the server's BIOS settings, then click OK to continue.
Step 13: Configure the aggregate interface. If you do not need an aggregate interface, click Yes to continue; otherwise, select No to configure the aggregate interface first (the steps below assume no aggregate interface is selected).
Step 14: Select the NIC as prompted and select OK. Then press Enter to go to the settings page.
Step 15: Configure the IP address, subnet mask, and gateway, and then click OK. Make sure that the address can communicate properly with the target HCI cluster.
Step 16: Then, select Yes to go back to the NIC selection page and set other NICs, or select No to complete the installation.
Step 17: Remove the USB flash drive or ISO image file and select Reboot to complete the installation.
Step 18: After the restart, you can log in to the witness node console by browsing to https://<configured IP address>.
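If the console page does not load, a quick TCP check from your workstation can separate a network problem from a browser issue. This is a minimal sketch with a placeholder witness IP; it assumes the console listens on the standard HTTPS port 443.

```python
# Hypothetical reachability check for the witness node console.
import socket

WITNESS_IP = "10.0.3.10"  # placeholder: the IP configured in Step 15
try:
    with socket.create_connection((WITNESS_IP, 443), timeout=5):
        print(f"https://{WITNESS_IP} is reachable")
except OSError as exc:
    print(f"cannot reach {WITNESS_IP}:443 -> {exc}")
```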
2.4.2.2.1.2 Deploy Witness Node by OVF File
To deploy the witness node on VMware, first download the OVF package, named HCI6.11.1 Witness Node OVF Deployment Package. Choose the appropriate installation package based on your HCI version.
The following steps use vCenter 7.0.3 as an example.
Steps:
Step 1: Select a cluster, then go to Actions > Deploy OVF Template.
Step 2: Select Local File and click Browse to select the OVF and VMDK files of the witness node.
Step 3: Choose the appropriate location and virtual machine name.
Step 4: Select the appropriate compute resource.
Step 5: Select an option based on the cluster scale:
- Large Witness Node: supports 18 to 24 hosts (9 to 12 per data center); CPU: 12 cores; Memory: 32 GB
- Medium Witness Node: supports 8 to 16 hosts (4 to 8 per data center); CPU: 8 cores; Memory: 24 GB
- Tiny Witness Node: supports 4 to 6 hosts (2 to 3 per data center); CPU: 6 cores; Memory: 16 GB
Step 6: Select the appropriate storage and network to complete the deployment of the witness node.
Step 7: After the VM is imported, modify the settings.
Step 8: Click VM Options and expand Advanced. Under Configuration Parameters, click Edit Configuration.
Step 9: Enter "disk.enableuuid" for Name and "true" for Value, and then click OK.
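If you prefer to script this parameter instead of editing it in the vSphere Client, the same change can be made through the vSphere API. The sketch below uses the open-source pyVmomi library and is an illustration only; the vCenter address, credentials, and the VM name `witness-node` are placeholders.

```python
# Hypothetical pyVmomi sketch: set disk.enableuuid = true on the witness VM.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host="vcenter.example.com",           # placeholder
                  user="administrator@vsphere.local",   # placeholder
                  pwd="password", sslContext=ctx)       # placeholder

# Locate the witness VM by name (placeholder name).
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "witness-node")

# Apply the advanced configuration parameter from Step 9.
spec = vim.vm.ConfigSpec(
    extraConfig=[vim.option.OptionValue(key="disk.enableuuid", value="true")]
)
task = vm.ReconfigVM_Task(spec=spec)  # poll task.info.state for completion
Disconnect(si)
```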
Step 10: Select the management interface. After the witness VM is created successfully, open the VM console, enter the default password admin, and click OK. Select Network Settings and press Enter. Select Set as management interface and press Enter, then select the interface you want to use as the management interface.
Step 11: Configure the IP address. Select Configure interface and press Enter, then fill in the IP address, netmask, gateway, MTU, and VLAN ID. Finally, click OK to complete the process.
Step 12: After completing all the above steps, you can access the witness node at https://<IP address>.
Virtual Storage Pool Creation
The following is the initialization process of virtual storage under a stretch cluster.
Step 1: Configure the datastore type. Navigate to Storage > Virtual Storage and click New, then select Stretched datastore as the datastore type.
Step 2: Select the node. Select the nodes to be added to the stretched datastore.
Step 3: Specify fault domains. Add the required nodes to the corresponding fault domains. For example, if there are four nodes in the cluster, two nodes are added to the Primary Fault Domain and the other two are added to the Secondary Fault Domain.
Step 4: Add the witness node. After naming the Primary Fault Domain and the Secondary Fault Domain, configure the witness node IP to match the witness node installed earlier. Follow the wizard and enter the password to confirm the witness node configuration.
Note: You need to enable the SSH port of the witness node under System > Port Management first.
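Before continuing the wizard, you can confirm from a cluster node that the witness node's SSH port is actually reachable. This is a minimal sketch with a placeholder witness IP, assuming SSH on the standard port 22.

```python
# Hypothetical pre-check: verify the witness node's SSH port is open.
import socket

WITNESS_IP = "10.0.3.10"  # placeholder: witness node management IP
try:
    with socket.create_connection((WITNESS_IP, 22), timeout=5):
        print(f"SSH port 22 on {WITNESS_IP} is open")
except OSError as exc:
    print(f"SSH port 22 on {WITNESS_IP} is not reachable -> {exc}")
```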
Step 5: Confirm the configuration of the fault domains. Note that the fault domain a node belongs to cannot be changed after the datastore is created.
Step 6: Configure disk usage. Next, plan the use of disks, including data disks, cache disks, and spare disks. Generally, SSDs are used as cache disks to improve the I/O performance of virtual storage. The system automatically recommends a role for each disk based on its type; you can follow the system's default recommendations.
Step 7: Confirm the configuration. Finally, the page displays the configuration summary of the virtual datastore, including the final storage capacity, the number of replicas, and the number of disks. After confirming that the configuration is correct, enter the administrator password and click OK to start initializing the virtual datastore.