| Feature | Description | Tri-Host (x86) | Dual-Host (x86) | Stretched Cluster with 2+1 Split-Brain Prevention (6.9.0/6.7.0 R3) |
| --- | --- | --- | --- | --- |
| Dynamic disk provisioning | 1. In pre-allocation, storage space is allocated in advance; in thin provisioning, storage space is allocated on demand; in dynamic provisioning, metadata space is allocated in advance, reducing the overhead of address-space allocation and improving performance. 2. Dynamic provisioning combines the disk utilization of thin provisioning with about 90% of the performance of pre-allocation. 3. In virtual storage with three or more hosts, dynamic provisioning is the default and can be switched to pre-allocation. With one or two hosts, thin provisioning is the default and can be switched only to pre-allocation (dynamic provisioning is grayed out). (See the provisioning sketch after the table.) | Supported | Not supported | Supported |
| Adaptive striping | In data striping, the number of strips is adjusted automatically based on the number of HDDs in the cluster hosts, making the most of HDD concurrency. (See the striping sketch after the table.) | Supported | Not supported | Supported (the number of strips defaults to one and can be adjusted on the page) |
| Multi-datastore physical pool | Virtual datastores are created on the physical hosts in a cluster and are isolated from each other, giving each independent performance and capacity. VMs can be created and run on different virtual datastores. | Supported | Not supported | Not supported |
| Rebuilding upon a failure | If a failure occurs, the system automatically selects HDDs with redundant space and rebuilds the data. Data on multiple HDDs is rebuilt concurrently to multiple HDDs, at a rate of up to 1 TB per 30 minutes (with six hosts). (See the rebuild-rate arithmetic after the table.) | Supported | Not supported | Supported |
| Data balancing (scheduled) | Data is migrated according to the data balancing schedule, keeping the capacity usage of each host balanced. | Supported | Not supported | Supported |
| Data balancing (automatic) | If the capacity difference between hosts is large, or the usage of a single host is much higher than that of the others, the system automatically migrates data for balancing. (See the balancing check after the table.) | Supported | Not supported | Supported |
| Shared virtual disk | A shared virtual disk is created from a virtual storage disk and shared by multiple VMs (for example, an Oracle RAC cluster). | Supported | Not supported | Not supported |
| Subhealthy disk isolation and read/write source switching | A disk that lags, slows down, or becomes congested can drag down overall performance. The subhealthy disk can be isolated and the read/write sources switched to restore performance. (See the isolation sketch after the table.) | Supported | Not supported | Supported |
| Internet Small Computer Systems Interface (iSCSI) virtual disk | iSCSI virtual disks support high availability (HA): if a host fails, the iSCSI connection automatically switches to a healthy host. | Supported | Not supported | Supported |
| Storage-based snapshot | Storage-based snapshots affect services less than compute-layer snapshots. Snapshots are usually taken of VMs before software or operating system upgrades to allow quick rollback in the case of a failure; they also guard against data loss caused by virus infection or misoperation, and are faster than backups. | Supported | Not supported | Supported (not supported by witness nodes) |
| Consistency group snapshot | In scenarios where multiple VMs are tightly coupled, they must be snapshotted at the same point in time to ensure consistency. Examples include an Oracle RAC database consisting of two or more VMs, a distributed application spanning multiple VMs, and a typical "application VM + middleware VM + database VM" service. (See the group-snapshot sketch after the table.) | Supported | Not supported | Partially supported (not supported for applications that require shared disks, such as Oracle RAC or other clustered databases) |
| Linked clone | Implemented on top of VM snapshots: 1. A storage-based snapshot is taken of the source VM, and the source image is set to read-only. 2. While the linked clone runs, unmodified data is read from the source image, and new or modified data is written to the clone's snapshot space. (See the clone sketch after the table.) | Supported | Not supported | Supported |
| Instant full clone | Combines the advantages of the full clone and the linked clone. Because it is built on the linked clone, the cloned VM can start within seconds early in the clone process; meanwhile the HCI backend runs a full-clone task that continuously replicates all data from the source image to the target image. Once the clone completes, the VM is detached from and independent of the source VM. (See the full-clone sketch after the table.) | Supported | Not supported | Supported |
| Disk-level partitioning | aSAN supports disk-level partitioning when there are at least three hosts, making datastores more flexible and meeting different performance requirements at lower cost. | Supported | Not supported | Not supported |
| Information technology application innovation (ITAI) version | | Supported | Not supported | Not supported |
| Storage area network using RDMA | The storage area network can use remote direct memory access (RDMA) for internal communication, which reduces latency and improves storage performance. | Supported | Not supported | Not supported |
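Several mechanisms in the table are easiest to see as code. First, the three provisioning modes differ mainly in *when* address metadata and physical space are allocated. The toy model below is a minimal sketch of that difference (the class and rules are illustrative assumptions, not aSAN's actual on-disk logic):

```python
# Toy model of the three provisioning modes; illustrative only,
# not Sangfor aSAN's actual on-disk logic.

class VirtualDisk:
    def __init__(self, size_blocks: int, mode: str):
        self.mapping = {}             # virtual block -> physical block
        self.metadata_ready = set()   # blocks whose address map exists
        if mode == "pre-allocated":
            # Space and mappings are all reserved at creation time.
            self.mapping = {b: b for b in range(size_blocks)}
        elif mode == "dynamic":
            # Only the address-mapping metadata is built in advance;
            # physical space is still claimed lazily.
            self.metadata_ready = set(range(size_blocks))

    def write(self, block: int) -> str:
        if block in self.mapping:
            return "write in place (space already allocated)"
        if block in self.metadata_ready:
            # Dynamic: metadata exists, so only space is claimed.
            self.mapping[block] = block
            return "claim space only (metadata pre-built)"
        # Thin: the first write must allocate metadata *and* space.
        self.mapping[block] = block
        return "allocate metadata + space, then write"

for mode in ("pre-allocated", "thin", "dynamic"):
    print(f"{mode:>13}: {VirtualDisk(8, mode).write(3)}")
```

This is why dynamic provisioning can approach pre-allocation performance while keeping thin provisioning's utilization: the write path skips the metadata-allocation step but never reserves unused space.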
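Adaptive striping boils down to choosing a strip count from the number of HDDs available so that a full stripe keeps as many spindles busy as possible. A minimal sketch, where the one-strip-per-HDD rule and the cap of 8 are assumptions for illustration:

```python
def strip_count(hdds_on_host: int, max_strips: int = 8) -> int:
    """Hypothetical rule: one strip per HDD, capped, never below 1."""
    return max(1, min(hdds_on_host, max_strips))

print(strip_count(4))    # 4 strips: a full stripe touches all 4 HDDs
print(strip_count(12))   # capped at 8 strips
```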
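The quoted rebuild rate is easy to sanity-check: 1 TB in 30 minutes is roughly 0.56 GB/s of aggregate bandwidth, which a single HDD cannot sustain, hence the concurrent multi-disk rebuild the table describes:

```python
# Sanity check of the quoted rebuild rate: 1 TB in 30 minutes.
rate_gb_s = 10**12 / (30 * 60) / 10**9
print(f"{rate_gb_s:.2f} GB/s aggregate")   # ~0.56 GB/s
# A single HDD streams on the order of 0.15-0.25 GB/s, so this rate
# is only reachable by rebuilding to and from many HDDs concurrently.
```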
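The automatic data-balancing trigger is essentially a capacity-imbalance check across hosts. A minimal sketch with an assumed 10-point usage-spread threshold (aSAN's actual trigger conditions are not specified here):

```python
IMBALANCE_THRESHOLD = 0.10   # assumed trigger: >10-point usage spread

def needs_balancing(used_fraction: dict) -> bool:
    """True when per-host capacity usage diverges enough to migrate data."""
    spread = max(used_fraction.values()) - min(used_fraction.values())
    return spread > IMBALANCE_THRESHOLD

print(needs_balancing({"host1": 0.82, "host2": 0.55, "host3": 0.60}))  # True
```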
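Subhealthy-disk handling is typically threshold-based: a disk whose I/O latency stays abnormal is fenced off, and reads switch to a healthy replica. A minimal sketch under an assumed 200 ms threshold (not aSAN's real subhealth criteria):

```python
import statistics

LATENCY_LIMIT_MS = 200.0   # assumed subhealth threshold, for illustration

def pick_read_source(replica_latencies: dict) -> str:
    """Return a healthy replica to read from; subhealthy disks are skipped.

    replica_latencies maps disk id -> recent I/O latencies in ms.
    """
    healthy = {
        disk: statistics.median(samples)
        for disk, samples in replica_latencies.items()
        if statistics.median(samples) < LATENCY_LIMIT_MS
    }
    if not healthy:
        raise RuntimeError("no healthy replica; cannot switch read source")
    return min(healthy, key=healthy.get)

# Disk A is lagging, so reads switch to the replica on disk B.
print(pick_read_source({"A": [950.0, 1200.0, 880.0], "B": [4.0, 6.0, 5.0]}))
```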
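A consistency group snapshot must capture every member VM at one point in time, which means quiescing the whole group before the first snapshot is taken. A minimal sketch of that ordering (the VM methods are hypothetical):

```python
class VM:
    def __init__(self, name: str):
        self.name = name
    def freeze_io(self): print(f"{self.name}: I/O frozen")
    def snapshot(self):  print(f"{self.name}: snapshot taken")
    def thaw_io(self):   print(f"{self.name}: I/O resumed")

def consistency_group_snapshot(vms):
    """Freeze every member before the first snapshot, then thaw all."""
    frozen = []
    try:
        for vm in vms:       # 1. quiesce the whole group first
            vm.freeze_io()
            frozen.append(vm)
        for vm in vms:       # 2. snapshot while all I/O is paused,
            vm.snapshot()    #    so the images share one point in time
    finally:
        for vm in frozen:    # 3. always resume I/O, even on failure
            vm.thaw_io()

consistency_group_snapshot([VM("app"), VM("middleware"), VM("db")])
```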
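The linked-clone read/write path described in the table is a redirect-on-write scheme: reads fall through to the read-only source image unless the clone owns the block, and all writes land in the clone's snapshot space. A minimal sketch (hypothetical in-memory structure, not aSAN's image format):

```python
class LinkedClone:
    """Redirect-on-write clone over a read-only source image."""

    def __init__(self, source_image: dict):
        self.source = source_image    # read-only after the snapshot
        self.snapshot_space = {}      # blocks owned by this clone

    def read(self, block: int) -> bytes:
        # The clone's own data wins; otherwise fall through to the source.
        if block in self.snapshot_space:
            return self.snapshot_space[block]
        return self.source[block]

    def write(self, block: int, data: bytes) -> None:
        # Writes never touch the read-only source image.
        self.snapshot_space[block] = data

source = {0: b"base0", 1: b"base1"}
clone = LinkedClone(source)
clone.write(1, b"new1")
print(clone.read(0), clone.read(1))   # b'base0' b'new1'
```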
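An instant full clone starts out behaving like the linked clone above while a backend task copies the remaining source blocks across; once every block is local, the clone detaches from the source. A sketch extending the LinkedClone class from the previous example (again hypothetical):

```python
class InstantFullClone(LinkedClone):   # reuses LinkedClone from above
    def background_copy_step(self, block: int) -> None:
        # Full-clone task: pull over source blocks the clone does not
        # own yet; blocks already written by the clone are kept.
        if block not in self.snapshot_space:
            self.snapshot_space[block] = self.source[block]

    def detach_if_complete(self) -> bool:
        # Once every block is local, cut the link to the source image.
        if set(self.source) <= set(self.snapshot_space):
            self.source = {}
            return True
        return False

full = InstantFullClone({0: b"base0", 1: b"base1"})
full.write(1, b"new1")             # usable immediately, like a linked clone
for b in (0, 1):
    full.background_copy_step(b)   # backend replicates remaining blocks
print(full.detach_if_complete())   # True: now independent of the source VM
```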