Sangfor aSV constitutes the compute virtualization foundation of the Hyper-Converged Infrastructure, built upon a hardened Linux kernel integrated with the Kernel-based Virtual Machine (KVM) hypervisor. It delivers enterprise-grade virtualization by abstracting physical compute resources—CPUs, memory, and I/O devices—into shared pools that can be dynamically allocated to multiple isolated virtual machines. The architecture is designed for high performance, security, and integration with other HCI components like aSAN and aNET, forming a cohesive software-defined data center platform.
Hypervisor Architecture Implementation
aSV adopts a bare-metal virtualization model: the hypervisor runs directly on the server hardware, with the hardened Linux kernel itself serving as the hypervisor rather than sitting atop a separate host operating system. This implementation leverages the Linux KVM module, which turns the kernel into a hypervisor: KVM handles core virtualization functions such as CPU scheduling and memory management, while a modified QEMU process provides device emulation and userspace management for each VM. The combination relies on hardware virtualization extensions (Intel VT-x or AMD-V) to run guest VMs in a deprivileged CPU mode known as non-root mode, so most guest instructions execute directly on the hardware while sensitive operations are trapped and emulated to preserve isolation and resource control. The result is a highly efficient virtualization layer with minimal performance overhead.
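The trap-and-emulate cycle described above can be modeled with a short sketch. This is purely illustrative: the instruction names and the `SENSITIVE_OPS` set are hypothetical stand-ins, not the real KVM API or any ISA's actual trap rules.

```python
# Conceptual model of hardware-assisted trap-and-emulate.
# SENSITIVE_OPS and the instruction names are illustrative only.

SENSITIVE_OPS = {"cpuid", "io_out", "hlt"}  # operations that trigger a VM exit

def run_guest(instruction_stream):
    """Return the list of VM exits taken while 'executing' the stream.

    Non-sensitive instructions run directly on the CPU in non-root mode;
    sensitive ones trap to the hypervisor (a VM exit) for emulation.
    """
    vm_exits = []
    for insn in instruction_stream:
        if insn in SENSITIVE_OPS:
            vm_exits.append(insn)  # VM exit: control returns to KVM
            # ...the hypervisor emulates the effect, then re-enters the guest
    return vm_exits

# Only the sensitive instructions cause exits; the rest run at native speed.
exits = run_guest(["mov", "add", "cpuid", "mov", "hlt"])
# exits == ["cpuid", "hlt"]
```

The efficiency claim in the text follows from this split: the common-case instructions never enter the `if` branch, i.e. never leave non-root mode.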
Core Virtualization Mechanisms
• vCPU Mechanism: Each virtual CPU (vCPU) in a VM is implemented as a thread of the VM's QEMU process, scheduled by the host's Linux kernel. The KVM module maintains a dedicated control structure for each vCPU (the VMCS on Intel, the VMCB on AMD), allowing the hypervisor to multiplex vCPUs onto physical CPUs. When a vCPU is scheduled, KVM performs a VM entry that transitions the physical CPU into non-root mode, letting guest code run directly on the processor. Sensitive instructions executed by the guest are trapped by the hardware, causing a VM exit that returns control to KVM for emulation or handling.
• Memory Virtualization Mechanism: aSV utilizes hardware-assisted memory virtualization through Intel Extended Page Tables (EPT) or AMD Rapid Virtualization Indexing (RVI). This approach uses two stages of address translation to map guest virtual addresses to host physical addresses: the guest OS manages the first stage, from guest virtual to guest physical addresses, using its own page tables, while the EPT/RVI hardware performs the second stage, from guest physical to host physical addresses. This eliminates the software overhead of shadow page tables, delivering near-native memory performance for virtualized workloads.
• I/O Device Virtualization: aSV employs multiple I/O virtualization techniques optimized for different scenarios. For high-performance storage and network devices, it uses paravirtualization through the VirtIO framework, in which front-end drivers in the guest OS exchange requests with simplified back-end drivers in the host over shared-memory queues. For legacy compatibility, full device emulation provides a complete software simulation of hardware devices. For maximum performance, direct I/O passthrough using Intel VT-d or AMD-Vi gives a VM exclusive access to a physical device, removing the hypervisor from the I/O data path.
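The two-stage translation performed by EPT/RVI (second bullet above) can be illustrated with a toy model. Page tables are plain dictionaries here; real hardware walks multi-level tables, and the page numbers below are arbitrary examples.

```python
PAGE = 4096  # 4 KiB pages

# Stage 1: guest page table, maintained by the guest OS (GVA page -> GPA page)
guest_pt = {0x0: 0x10, 0x1: 0x11}
# Stage 2: EPT/RVI table, maintained by the hypervisor (GPA page -> HPA page)
ept = {0x10: 0x200, 0x11: 0x3F0}

def translate(gva):
    """Translate a guest virtual address to a host physical address."""
    page, offset = divmod(gva, PAGE)
    gpa_page = guest_pt[page]  # stage 1: GVA -> GPA (guest's own page tables)
    hpa_page = ept[gpa_page]   # stage 2: GPA -> HPA (EPT/RVI, hypervisor-owned)
    return hpa_page * PAGE + offset

# 0x1004 sits on guest-virtual page 1 at offset 4: stage 1 maps page 1 to
# GPA page 0x11, and stage 2 maps 0x11 to HPA page 0x3F0.
assert translate(0x1004) == 0x3F0 * PAGE + 4
```

The key design point the text makes is ownership: the guest freely rewrites `guest_pt`, but only the hypervisor can touch `ept`, which is what makes shadow page tables unnecessary.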
SFFS and Reliability
The Sangfor File System (SFFS) is a cluster file system designed specifically for virtualization environments, enabling multiple hosts to concurrently access shared external storage. SFFS operates at the kernel level and employs a distributed lock manager to coordinate access to shared virtual disk files, ensuring data consistency across the cluster. To enhance reliability, SFFS implements an Atomic Test and Set (ATS) locking mechanism that uses storage-level atomic operations to prevent split-brain scenarios. The system also supports space reclamation through the SCSI UNMAP command, allowing thin-provisioned storage to efficiently reclaim space when files are deleted within VMs.
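An ATS-style lock is essentially an atomic compare-and-swap on an on-disk lock record. The sketch below is a simplified single-process model, not SFFS's implementation: on real shared storage the test-and-set would be a single atomic command executed by the array (such as SCSI COMPARE AND WRITE), and a `threading.Lock` stands in for that storage-level atomicity here.

```python
import threading

class AtomicTestAndSet:
    """Toy model of an ATS-style on-disk lock record (illustrative only)."""
    FREE = "FREE"

    def __init__(self):
        self._atomic = threading.Lock()  # stand-in for storage-level atomicity
        self.owner = self.FREE           # the on-disk lock record

    def try_acquire(self, host_id):
        with self._atomic:
            if self.owner == self.FREE:  # test...
                self.owner = host_id     # ...and set, as one atomic operation
                return True
            return False                 # another host already holds the lock

    def release(self, host_id):
        with self._atomic:
            if self.owner == host_id:
                self.owner = self.FREE

lock = AtomicTestAndSet()
assert lock.try_acquire("host-A") is True   # host A wins the disk lock
assert lock.try_acquire("host-B") is False  # host B's ATS fails: no split brain
lock.release("host-A")
assert lock.try_acquire("host-B") is True   # lock transfers cleanly
```

Because the test and the set happen as one indivisible operation on the shared medium, two hosts can never both conclude they own the same virtual disk file, which is precisely the split-brain scenario the text describes.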
Security and Isolation
aSV implements comprehensive isolation mechanisms at multiple levels. Kernel hardening techniques minimize the attack surface by disabling unnecessary services, applying strict file permissions, and configuring security-enhanced kernel parameters. Between VMs, isolation is enforced through hardware memory protection (EPT/RVI), ensuring each VM can access only its allocated memory regions. For hypervisor security, aSV maintains strict separation between the host kernel space and VM processes, preventing guest-to-host escapes. Additional security measures include mandatory access controls, secure boot verification, and encrypted communication channels for management traffic.
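The memory-isolation property follows from the hypervisor owning the second translation stage: absent deliberate sharing (such as KSM-style page deduplication), no host-physical page is mapped into two different VMs. A hypothetical sanity check over toy per-VM tables, with all names and page numbers invented for illustration:

```python
def ept_mappings_disjoint(ept_tables):
    """Return True if no host-physical page is reachable from two VMs.

    ept_tables: {vm_name: {gpa_page: hpa_page}} -- a toy stand-in for the
    per-VM second-stage tables the hypervisor maintains.
    """
    seen = {}
    for vm, table in ept_tables.items():
        for hpa in table.values():
            if hpa in seen and seen[hpa] != vm:
                return False  # same host page mapped into two VMs
            seen[hpa] = vm
    return True

tables = {
    "vm1": {0x10: 0x200, 0x11: 0x201},
    "vm2": {0x10: 0x300, 0x11: 0x301},  # same guest-physical pages, distinct host pages
}
assert ept_mappings_disjoint(tables)

tables["vm2"][0x12] = 0x200  # an overlap like this would break isolation
assert not ept_mappings_disjoint(tables)
```

Note that both VMs use the same guest-physical page numbers without conflict; isolation depends only on the host-physical side of the mapping, which guests cannot modify.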