Hyper Converged Infrastructure (HCI/aSV)

Sangfor HCI and aSV provide a unified infrastructure combining compute, storage, networking, and built-in security to simplify deployment, operations, and services.
Version: 6.11.1R1

Preventing Split-Brain in Dual-Host Clusters

Update Time: 2026-01-05

Hardware Configuration Requirements (HCI Witness Device)

A witness device is an external physical node: a thin client preloaded with the witness mechanism and the witness node operating system. It is designed to prevent split-brain in dual-node clusters. A witness device looks like an aDesk thin client. To deploy and start a witness device, you only need to connect the power cable and the network cable linking it to the servers, and then press the power button.

To prevent split-brain, you need to deploy a witness node, which can run either on the witness device or as a VM in a virtualization environment. For detailed directions, see Witness Node System Installation (Optional).
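The witness node's role can be understood as a tie-breaking vote. Sangfor does not document the internal arbitration protocol here, so the following Python sketch is purely illustrative (all names are hypothetical): a host keeps serving storage only while it can see a majority of the three voters.

```python
# Toy majority-vote check for a two-host cluster plus a witness.
# Purely illustrative; all names are hypothetical, not Sangfor's API.

def may_continue_serving(peer_reachable: bool, witness_reachable: bool) -> bool:
    """Return True if this host holds a majority of the 3 votes
    (itself, its peer, the witness) and may keep serving storage."""
    votes = 1  # a host always votes for itself
    if peer_reachable:
        votes += 1
    if witness_reachable:
        votes += 1
    return votes >= 2  # majority of 3

# Network partition between the two hosts: only the side that still
# reaches the witness keeps writing, so the replicas cannot diverge.
print(may_continue_serving(peer_reachable=False, witness_reachable=True))   # True
print(may_continue_serving(peer_reachable=False, witness_reachable=False))  # False
```

Without a third voter, each isolated host would see exactly one of two votes and no majority rule could tell them apart; that is the split-brain this section addresses.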

The witness device ships in a fixed hardware configuration. Its disk is built from onboard flash memory chips and cannot be replaced; if the disk is damaged, the entire witness device must be replaced with a new one.

Table 1: Witness Device Specifications

Model: aServer-J-100Z

Hardware Configuration
CPU model: 2.9GHz
Number of CPUs: 1
Cores per CPU: 2
Threads per CPU: 2
Memory: 16 GB (onboard)
Disk volume: 64 GB (onboard)

Details
Maximum size (length x width x height, mm): 178 x 110 x 28
Weight: 0.93 kg

Working Environment
Temperature: 0-40°C
Humidity: 5-95% RH

Power
Power supply: adapter
Actual draw: 17 W
Maximum draw: 60 W

Network Ports
Copper bypass port: -
100M copper port: -
1G copper port: 1
1G fibre port (SFP): -
10G fibre port (SFP): -
Serial (RJ45): -
Interface type: HDMI + HDMI + DP, 1000M network card x 1, two-in-one audio jack
USB: 4 x USB 2.0 + 4 x USB 3.0

Remarks: Only for the HCI6.7.0_R3 2+1 new VS witness node scenario.

VM Configuration Requirements

You can deploy a witness node on a VM to prevent split-brain.

Below are the required configurations:

4 CPU cores, 16 GB memory

Installation on HCI or VMware clusters

64 GB disk x 1

Network interface x 1

1. VMware image template: https://download.sangfor.com/Download/Product/HCI/HCI6.10.0R2/HCI_Witness6.10.0_R2_X86_64(20240911).zip

2. Required disk configuration: enterprise SSDs with a capacity of at least 64 GB and 1,000 input/output operations per second (IOPS) for virtualization.

Dual-host scenarios are subject to the following storage restrictions.

Each feature below is described and then marked as supported or not for the following scenarios: Tri-Host (x86), Dual-Host (x86), and 2+1 Split-Brain Prevention (6.9.0/6.7.0 R3), plus Stretched Cluster where listed.

Dynamic disk provisioning

1. With pre-allocation, storage space is allocated in advance; with thin provisioning, storage space is allocated on demand; with dynamic provisioning, metadata storage space is allocated in advance, which reduces the overhead of allocating address space and improves performance.

2. Dynamic provisioning offers the disk utilization of thin provisioning with about 90% of the performance of pre-allocation.

3. In virtual storage with three or more hosts, dynamic provisioning is used by default and can be switched to pre-allocation. In virtual storage with one or two hosts, thin provisioning is used by default and can be switched to pre-allocation only (dynamic provisioning is grayed out).

Tri-Host (x86): Supported | Dual-Host (x86): Not supported | 2+1 Split-Brain Prevention: Supported
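As a rough analogy for the provisioning modes above, thin provisioning behaves like a sparse file: the full size is visible immediately, but blocks are allocated only when written. This illustrates the concept, not aSAN's on-disk format:

```python
import os
import tempfile

# Create a 10 MiB "thin" virtual disk: the apparent size is set up
# front, but only the 4 KiB we actually write consumes real blocks
# (on filesystems with sparse-file support).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.truncate(10 * 1024 * 1024)  # apparent size: 10 MiB
    f.write(b"x" * 4096)          # allocate on demand: 4 KiB of data
    path = f.name

st = os.stat(path)
print("apparent size:", st.st_size)            # 10485760
print("allocated bytes:", st.st_blocks * 512)  # far smaller where sparse files are supported
os.remove(path)
```

Pre-allocation would write out all 10 MiB up front instead, trading space for predictable performance.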

Adaptive striping

In data striping, the number of strips is automatically adjusted based on the number of HDDs in the cluster hosts to make the most of HDD concurrency.

Tri-Host (x86): Supported | Dual-Host (x86): Not supported | 2+1 Split-Brain Prevention: Supported (the number of strips is one by default and can be adjusted on the page)
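The idea behind striping can be sketched in a few lines. `stripe` below is a hypothetical helper that deals data across disks round-robin in fixed-size strips; it is not Sangfor's implementation, which also handles replicas and adjusts the strip count automatically:

```python
def stripe(data: bytes, num_disks: int, strip_size: int = 4):
    """Split data into strip_size chunks and deal them round-robin
    across num_disks, so the disks can serve the I/O in parallel."""
    disks = [bytearray() for _ in range(num_disks)]
    for i in range(0, len(data), strip_size):
        disks[(i // strip_size) % num_disks] += data[i:i + strip_size]
    return disks

# Three disks each receive one strip of the 12-byte write:
print(stripe(b"ABCDEFGHIJKL", 3))
# [bytearray(b'ABCD'), bytearray(b'EFGH'), bytearray(b'IJKL')]
```

With more HDDs available, more strips can be read or written concurrently, which is the concurrency this feature exploits.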

Multi-datastore physical pool

Virtual datastores are created from the physical hosts in a cluster and are isolated from each other, so each has independent performance and capacity. VMs can be created and run on different virtual datastores.

Tri-Host (x86): Supported | Dual-Host (x86): Not supported | 2+1 Split-Brain Prevention: Not supported

Rebuilding upon a failure

If a failure occurs, the system automatically selects HDDs with redundant space and rebuilds the data automatically. Data on multiple HDDs is rebuilt concurrently to multiple HDDs, at a rate of up to 1 TB per 30 minutes (in the case of six hosts).

Tri-Host (x86): Supported | Dual-Host (x86): Not supported | 2+1 Split-Brain Prevention: Supported
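The quoted rate is easy to put in perspective. Assuming decimal terabytes, 1 TB in 30 minutes corresponds to roughly 0.56 GB/s of aggregate rebuild throughput across the cluster:

```python
# Aggregate throughput implied by "1 TB per 30 minutes" (decimal units).
terabyte = 10**12              # bytes
rate = terabyte / (30 * 60)    # bytes per second
print(round(rate / 10**9, 2))  # 0.56 (GB/s)
```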

Data balancing

1. Scheduled: data is migrated according to the data balancing schedule, keeping the capacity of each host balanced.

2. Automatic: if the capacity difference between hosts is large, or the capacity used on a single host is much greater than on the others, the system automatically migrates data for balancing.

Tri-Host (x86): Supported | Dual-Host (x86): Not supported | 2+1 Split-Brain Prevention: Supported
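A toy version of capacity balancing: repeatedly move data from the fullest host to the emptiest until usage is within a threshold of the mean. All names are hypothetical; a real balancer would also weigh I/O load and migration cost:

```python
def rebalance(used: dict, threshold: float = 0.1):
    """Move capacity from the fullest to the emptiest host until the
    spread is within threshold * mean usage. Returns the migrations."""
    moves = []
    limit = threshold * sum(used.values()) / len(used)
    while max(used.values()) - min(used.values()) > limit:
        src = max(used, key=used.get)
        dst = min(used, key=used.get)
        amount = (used[src] - used[dst]) / 2  # equalize the pair
        used[src] -= amount
        used[dst] += amount
        moves.append((src, dst, amount))
    return moves

usage = {"h1": 900.0, "h2": 100.0, "h3": 500.0}  # GB used per host
print(rebalance(usage))  # [('h1', 'h2', 400.0)]
print(usage)             # {'h1': 500.0, 'h2': 500.0, 'h3': 500.0}
```

Each move equalizes the most and least loaded hosts, so the spread shrinks monotonically and the loop terminates.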

Shared virtual disk

A shared virtual disk is created from virtual storage and can be attached to multiple VMs at the same time (for example, for Oracle RAC).

Tri-Host (x86): Supported | Dual-Host (x86): Not supported | 2+1 Split-Brain Prevention: Not supported

Subhealthy disk isolation and read/write source switching

If a disk lags, slows down, or becomes congested, overall performance may be affected. In this case, the subhealthy disk can be isolated and read/write sources switched to recover performance.

Tri-Host (x86): Supported | Dual-Host (x86): Not supported | 2+1 Split-Brain Prevention: Supported

Internet Small Computer Systems Interface (iSCSI) virtual disk

iSCSI virtual disks support high availability (HA): if a host fails, the iSCSI connection is automatically switched to a healthy host.

Tri-Host (x86): Supported | Dual-Host (x86): Not supported | 2+1 Split-Brain Prevention: Supported
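From the initiator's point of view, this HA behavior amounts to retrying the same target through an ordered list of portal addresses. A minimal sketch (hypothetical helper, no real iSCSI traffic):

```python
def connect(portals, is_alive):
    """Return the first healthy portal, mimicking an initiator that
    fails over to the next address when the active host is down."""
    for portal in portals:
        if is_alive(portal):
            return portal
    raise ConnectionError("no healthy portal for this target")

alive = {"192.168.1.10": False, "192.168.1.11": True}  # host .10 failed
print(connect(["192.168.1.10", "192.168.1.11"], lambda p: alive[p]))
# 192.168.1.11
```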

Storage-based snapshot

Storage-based snapshots have less impact on business than compute-layer snapshots. Snapshots are usually taken of VMs before software or operating system upgrades to ensure quick rollback in case of failure. Storage-based snapshots also protect against data loss caused by virus infection or misoperation, and are quicker than backup.

Tri-Host (x86): Supported | Dual-Host (x86): Not supported | 2+1 Split-Brain Prevention: Supported (not supported by witness nodes)

Consistency group snapshot

In business scenarios where multiple VMs are tightly coupled, snapshots must be taken at the same point in time to ensure consistency. For example, consistency group snapshots suit an Oracle RAC database consisting of two or more VMs, a distributed application spanning multiple VMs, or a typical "application VM + middleware VM + database VM" business.

Tri-Host (x86): Supported | Dual-Host (x86): Not supported | 2+1 Split-Brain Prevention: Partially supported (not supported for applications that require shared disks, such as Oracle RAC or other clustered databases)

Linked clone

Linked clones are implemented on top of VM snapshots. The implementation principles are as follows:

1. A storage-based snapshot is taken of the source VM, and the source image is set to read-only.

2. While a linked clone runs, unchanged data is read from the source image, and new or modified data is written to the snapshot space of the linked clone.

Tri-Host (x86): Supported | Dual-Host (x86): Not supported | 2+1 Split-Brain Prevention: Supported
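The two principles above describe a copy-on-write scheme, which can be sketched in a few lines of Python. `LinkedClone` is a hypothetical class with disk blocks simplified to dictionary entries; it is not HCI's actual data path:

```python
class LinkedClone:
    """Copy-on-write reads and writes over a read-only source image."""

    def __init__(self, source_image: dict):
        self.source = source_image  # the read-only snapshot (step 1)
        self.delta = {}             # this clone's private writes (step 2)

    def write(self, block: int, data: bytes):
        self.delta[block] = data    # never modifies the source image

    def read(self, block: int) -> bytes:
        # New or modified data comes from the delta; old data falls
        # through to the shared source image.
        return self.delta.get(block, self.source.get(block, b""))

base = {0: b"os", 1: b"app"}         # snapshot shared by all clones
clone = LinkedClone(base)
clone.write(1, b"patched")
print(clone.read(0), clone.read(1))  # b'os' b'patched'
print(base)                          # {0: b'os', 1: b'app'} -- unchanged
```

Because every clone shares the same read-only base, clones are cheap to create; the cost is that reads of unmodified data always depend on the source image.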

Instant full clone

An instant full clone combines the advantages of the full clone and the linked clone.

Because it starts as a linked clone, the cloned VM can be booted within seconds at the early stage of the clone.

During the clone, the HCI backend runs a full clone task that continuously replicates all data from the source image to the target image.

After the clone completes, the VM is detached from and fully independent of the source VM.

Tri-Host (x86): Supported | Dual-Host (x86): Not supported | 2+1 Split-Brain Prevention: Supported

Disk-level partitioning

aSAN supports disk-level partitioning when there are at least three hosts. This makes datastores more flexible, meeting different performance requirements at lower cost.

Tri-Host (x86): Supported | Dual-Host (x86): Not supported | 2+1 Split-Brain Prevention: Not supported

Information technology application innovation (ITAI) version

Tri-Host (x86): Supported | Dual-Host (x86): Not supported | 2+1 Split-Brain Prevention: Not supported

Storage area network using RDMA

The storage area network can use RDMA for internal communication, which reduces latency and improves storage performance.

Tri-Host (x86): Supported | Dual-Host (x86): Not supported | 2+1 Split-Brain Prevention: Not supported | Stretched Cluster: Not supported