Storage-Based Snapshot
aSAN provides efficient, space-optimized snapshots at the storage layer using a Redirect-on-Write mechanism. When data is modified after a snapshot is taken, the original block is preserved in place and the new write is redirected to a fresh location. Because the original block is never copied, this approach minimizes write amplification compared to Copy-on-Write, making it well suited to creating rapid recovery points for data protection or application testing without impacting production performance.
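The Redirect-on-Write idea can be sketched as a block map that is frozen at snapshot time, while new writes always land in fresh locations. This is a minimal illustration; the class and field names (RoWVolume, block_map, and so on) are hypothetical and not aSAN internals.

```python
class RoWVolume:
    """Toy Redirect-on-Write volume: logical blocks map to physical slots."""

    def __init__(self):
        self.store = {}        # physical location -> block contents
        self.block_map = {}    # logical block -> physical location
        self.next_phys = 0     # next free physical location
        self.snapshots = []    # each snapshot is a frozen copy of the map

    def write(self, lba, data):
        # The write is *redirected* to a fresh location; the old block is
        # left untouched, so no extra copy is made (unlike Copy-on-Write).
        phys, self.next_phys = self.next_phys, self.next_phys + 1
        self.store[phys] = data
        self.block_map[lba] = phys

    def take_snapshot(self):
        # Metadata-only operation: copy the map, not the data blocks.
        self.snapshots.append(dict(self.block_map))
        return len(self.snapshots) - 1

    def read(self, lba, snapshot=None):
        m = self.block_map if snapshot is None else self.snapshots[snapshot]
        return self.store[m[lba]]


vol = RoWVolume()
vol.write(0, "v1")
snap = vol.take_snapshot()
vol.write(0, "v2")              # redirected; the "v1" block is untouched
```

After the second write, `vol.read(0)` returns the live data while `vol.read(0, snapshot=snap)` still returns the original block, with no data copied at snapshot time.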
Clone
The platform supports multiple cloning methodologies. A Full Clone creates an independent, complete copy of a source volume, ideal for creating permanent test environments. A Linked Clone shares disk blocks with its parent, making it extremely fast to deploy and space-efficient for VDI or development scenarios. An Instant Full Clone presents as a full clone immediately but performs background synchronization, offering a balance between deployment speed and long-term independence.
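The space efficiency of a Linked Clone comes from block sharing, which can be sketched as an overlay over the parent's blocks. This is an illustrative model only; the names are invented, not the platform's API.

```python
class LinkedClone:
    """Toy linked clone: unmodified blocks are read from the parent;
    only blocks written after cloning consume new space."""

    def __init__(self, parent_blocks):
        self.parent = parent_blocks   # shared with the parent, never written
        self.delta = {}               # this clone's divergent blocks only

    def write(self, lba, data):
        self.delta[lba] = data        # per-block divergence, parent untouched

    def read(self, lba):
        # Prefer the clone's own block, fall back to the shared parent.
        return self.delta[lba] if lba in self.delta else self.parent[lba]


parent = {0: "a", 1: "b"}
clone = LinkedClone(parent)
clone.write(1, "B")                   # only this block consumes new space
```

An Instant Full Clone behaves like this overlay at first, then copies the remaining parent blocks into the clone in the background until it is fully independent.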
Multi-Replica Data
Data protection is achieved by maintaining multiple identical copies of each data shard across different physical nodes. The system enforces strong consistency, meaning a write operation is only acknowledged after all replicas have been successfully persisted. This architecture provides fault tolerance, allowing the cluster to withstand the failure of multiple nodes or disks (depending on the configured replica count) without data loss or service interruption.
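The strong-consistency rule described above can be reduced to one condition: no acknowledgement until every replica has persisted the write. A minimal sketch, with an invented in-memory Replica standing in for a real networked node:

```python
class Replica:
    """Toy replica that persists blocks to an in-memory dict."""

    def __init__(self, healthy=True):
        self.blocks = {}
        self.healthy = healthy

    def persist(self, lba, data):
        if not self.healthy:
            return False              # simulate a failed persist
        self.blocks[lba] = data
        return True


def replicated_write(replicas, lba, data):
    # Strong consistency: acknowledge only after *every* replica has
    # persisted the block; any single failure means no acknowledgement.
    return all(r.persist(lba, data) for r in replicas)


pair = [Replica(), Replica()]
acked = replicated_write(pair, 0, "shard-data")
```

A real implementation would issue the persists in parallel over the network and handle timeouts; the invariant shown, acknowledge only when all replicas are durable, is what the paragraph describes.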
Quorum Mechanism
To prevent "split-brain" scenarios during network partitions, aSAN employs a quorum mechanism that typically relies on a lightweight Witness Replica. If cluster nodes lose communication with each other, the partition that can still reach the witness retains write permissions, while the isolated partition is blocked from making changes that could cause data inconsistency.
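The witness mechanism is essentially a majority vote. The sketch below assumes a simple one-vote-per-node rule with the witness contributing one extra vote; the exact aSAN policy may differ.

```python
def can_accept_writes(nodes_in_partition, sees_witness, total_votes):
    """Majority-vote sketch of witness-based quorum (illustrative rule,
    not necessarily aSAN's exact policy). Each node holds one vote and
    the witness one more; only a partition with a strict majority keeps
    write access, so at most one side of a split can ever write."""
    votes = nodes_in_partition + (1 if sees_witness else 0)
    return 2 * votes > total_votes


# Two-node cluster plus witness: 3 votes total. After a split, the node
# that still reaches the witness holds 2 of 3 votes and keeps writing;
# the isolated node holds 1 of 3 and is fenced off.
winner = can_accept_writes(1, True, 3)
loser = can_accept_writes(1, False, 3)
```

Because a strict majority can exist on at most one side of any partition, the two sides can never both accept writes, which is exactly the split-brain guarantee.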
Silent Error Detection
This feature proactively guards against undetected data corruption. Checksums are generated and stored with data blocks upon write. During subsequent read operations, the checksum is recalculated and verified. If a mismatch is detected, aSAN automatically recovers the correct data from a healthy replica, ensuring data integrity without administrator intervention.
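The verify-on-read loop can be sketched as follows. `zlib.crc32` stands in for whatever checksum the product actually uses, and the dict-based stores are invented for illustration.

```python
import zlib

def read_with_verify(primary, replica, lba):
    """On every read, recompute the checksum and compare it with the one
    stored at write time; on a mismatch, recover the block from a healthy
    replica and silently repair the corrupted copy (sketch only)."""
    data, stored_crc = primary[lba]
    if zlib.crc32(data) != stored_crc:          # silent corruption detected
        good_data, good_crc = replica[lba]
        assert zlib.crc32(good_data) == good_crc  # replica verified healthy
        primary[lba] = (good_data, good_crc)      # self-repair, no admin action
        return good_data
    return data


good = b"block-contents"
crc = zlib.crc32(good)
primary = {0: (b"block-contentX", crc)}   # bit rot: data changed, checksum stale
replica = {0: (good, crc)}
result = read_with_verify(primary, replica, 0)
```

The read still returns correct data, and the corrupted copy on the primary has been rewritten from the healthy replica as a side effect.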
Data Rebuilding
When a disk or node fails, aSAN automatically initiates a data rebuilding process to restore redundancy. The system reconstructs the lost replicas from the remaining healthy copies and distributes them across the surviving nodes in the cluster. This process is designed to be non-disruptive, running in the background with configurable resource limits to minimize impact on active workloads while returning the cluster to a fully protected state.
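The two ideas in the rebuild process, spreading restored replicas across surviving nodes and throttling the background work, can be sketched as a planning step. The round-robin placement and `batch_size` parameter are illustrative assumptions, not the actual scheduler.

```python
def plan_rebuild(lost_shards, surviving_nodes, batch_size=2):
    """Sketch of rebuild planning: each lost replica is re-created from
    its surviving copies and assigned round-robin across surviving nodes,
    in small batches so the background rebuild can be throttled
    (batch_size models the configurable resource limit)."""
    plan = {shard: surviving_nodes[i % len(surviving_nodes)]
            for i, shard in enumerate(lost_shards)}
    batches = [lost_shards[i:i + batch_size]
               for i in range(0, len(lost_shards), batch_size)]
    return plan, batches


plan, batches = plan_rebuild(["s1", "s2", "s3"], ["nodeA", "nodeB"])
```

With three lost shards and two surviving nodes, the plan spreads the copies `{"s1": "nodeA", "s2": "nodeB", "s3": "nodeA"}` and schedules them in two batches, so no single node or I/O window absorbs the whole rebuild.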
Sub-Healthy Disk Detection and Handling
aSAN proactively monitors disk health to identify potential failures before they occur. It can detect and repair bad sectors using data from healthy replicas, and if a disk is deemed high-risk, it can automatically migrate all data off that disk to healthier ones in the cluster. This proactive approach helps to avoid unexpected data loss and service disruption.
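A sub-healthy classification can be thought of as thresholding health signals. The signals and thresholds below are invented purely for illustration; the product's actual model is not documented here.

```python
def classify_disk(media_errors, avg_read_latency_ms,
                  error_limit=10, latency_limit_ms=50):
    """Toy sub-healthy classifier (signals and thresholds are assumptions).
    A disk flagged sub-healthy would have its data proactively migrated
    to healthier disks before it fails outright."""
    if media_errors >= error_limit or avg_read_latency_ms >= latency_limit_ms:
        return "sub-healthy"
    return "healthy"


verdict = classify_disk(media_errors=12, avg_read_latency_ms=8)
```

The point of the sketch is the workflow, not the thresholds: classification happens continuously in the background, and crossing a risk threshold triggers migration rather than waiting for a hard failure.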
High Reliability of Storage Links
Network redundancy is critical for distributed storage. aSAN supports link aggregation and multipathing for the storage network. If a primary network path fails, storage traffic is automatically and seamlessly rerouted to a secondary path, ensuring continuous data availability and replication between nodes.
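Path failover can be sketched as trying transports in priority order, so a dead primary link is absorbed instead of surfacing as an I/O error. The callable-path model here is a simplification, not the actual multipath driver.

```python
def send_io(paths, payload):
    """Multipath sketch: paths are tried in priority order; a failed
    primary is handled by failing over to the next path, so the caller
    never sees a single-link outage (path objects are hypothetical)."""
    for path in paths:
        try:
            return path(payload)          # any callable transport
        except ConnectionError:
            continue                      # seamless reroute to next path
    raise ConnectionError("all storage paths failed")


def primary_link(payload):
    raise ConnectionError("link down")    # simulate a failed primary path

def secondary_link(payload):
    return f"delivered:{payload}"

result = send_io([primary_link, secondary_link], "replica-traffic")
```

A real setup would combine this with link aggregation below the storage stack; the sketch only shows the failover ordering the paragraph describes.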
Spare Disks for Data Protection
Administrators can designate global hot-spare disks within the cluster. Upon a disk failure, aSAN immediately and automatically begins using a spare disk for the data rebuilding process. This automation significantly reduces the time to restore full data protection compared to manual disk replacement.