6.11.3

Resource Change

Update Time: 2025-12-18

Introduction

Resource change refers to modifying the hardware resources of a VM. It covers core resources including CPU, memory, disk, NIC, and PCIe devices (GPUs and encryption cards). The feature aims to meet the dynamic hardware demands of business growth while maximizing service continuity. Through resource change, operations such as CPU clock speed adjustment, disk capacity expansion, NIC addition, and GPU configuration can be performed. It is applicable to different business scenarios (Example: expanding memory for database services, adding NICs for web services), ensuring stable service operation.

Constraints and Restrictions

  1.  The no-resource-overcommitment rule must be met during resource change: the total number of vCPU cores cannot exceed the total number of logical CPU cores on the node, the memory size cannot exceed the physical memory of a single node, and the disk capacity cannot exceed the free space of the target datastore.
  2.  Resource change for a running VM depends on hardware type and platform version: only VirtIO disk capacity can be expanded while the VM is running, while IDE disk capacity expansion requires the VM to be powered off. CPU or memory hot add requires HCI 5.8.8 or later, an enterprise edition license, and vmTools.
  3.  Cross-architecture resource change is not supported: for example, an x86-based VM cannot be migrated to an ARM-based cluster. The GPU and encryption card must match the node architecture (Example: an ARM-based node supports only domestic GPUs such as Cambricon MLU370 S4).
  4.  Change restrictions for special resources: shared virtual disk addition or deletion requires all associated VMs to be powered off. Dynamic provisioning and space reclamation are not supported after physical disk mapping. Cross-cluster migration requires that the network interfaces between the source and destination clusters are connected and that the two clusters are of the same architecture.
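The no-overcommitment rule in item 1 can be expressed as a simple pre-check before submitting a change. The sketch below is illustrative only (the parameter names are not an SCP API):

```python
def validate_resource_change(vcpu_cores, memory_gb, disk_gb,
                             node_logical_cpus, node_memory_gb,
                             datastore_free_gb):
    """Pre-check a resource change against the no-overcommitment rule:
    vCPU cores, memory, and disk must each fit within the target node
    and datastore. Returns a list of violations (empty = change allowed)."""
    violations = []
    if vcpu_cores > node_logical_cpus:
        violations.append("vCPU cores exceed the node's logical CPU cores")
    if memory_gb > node_memory_gb:
        violations.append("memory exceeds the physical memory of a single node")
    if disk_gb > datastore_free_gb:
        violations.append("disk exceeds the free space of the target datastore")
    return violations
```

For example, an 8-core, 32 GB, 500 GB request against a node with 16 logical cores, 64 GB memory, and 1000 GB free datastore space passes with no violations.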

Description of Running VM Change Taking Effect Immediately

CPU Clock Speed Limit
Prerequisites: 1. Managed HCI version: 6.7.0 or later. 2. Limit value range: 100 MHz-1000 GHz.
Description: 1. After a VM has been running for one week, the system displays the average CPU clock speed over the last month (excluding 0) as a recommended value. 2. Setting the CPU clock speed limit too low may prevent the VM from starting properly. Configure it with caution.

Disk IO Limit
Prerequisites: 1. Managed HCI version: 6.7.0 or later. 2. Disk type: VirtIO disk.
Description: 1. Configurable value ranges: 128 KB/s-102400 MB/s for read speed, 128 KB/s-102400 MB/s for write speed, 16-2147483647 for read IOPS, and 16-2147483647 for write IOPS. 2. After the VM has been running for one week, the system displays recommended limit values, which you can choose to autofill.

NIC Traffic Limit
Prerequisites: 1. vmTools must be installed on the VM. 2. Traffic limit of a single NIC: 1000 Kbps-20000 Mbps.
Description: 1. Outbound traffic and inbound traffic must be limited separately. 2. Under high business load, use this feature to limit the bandwidth of non-core VMs, ensuring network resources for core services.

Hot Disk Capacity Expansion (Existing Disks)
Prerequisites: 1. Managed HCI version: 5.8.8 or later. 2. Disk type: VirtIO disk. 3. vmTools must be installed on the VM.
Description: 1. Hot capacity expansion is not supported for IDE disks. 2. After disk capacity expansion, the partition must be extended within the operating system (through the disk management feature on Windows or the LVM logical volume feature on Linux). For details, see section 3 of the Operating System Disk Initialization Operation Manual.

NIC Hot Add
Prerequisites: 1. No more than 10 NICs can be added per VM. 2. Supported NIC models: VirtIO, Intel E1000, and Realtek RTL8139.
Description: 1. After a NIC is added, you can immediately configure its IP address, MAC address, and Jumbo Frame feature. 2. After Jumbo Frame is enabled, non-TCP jumbo frame packets must be fragmented. Disable the feature for IPSec devices.

USB Device Mapping/Disassociation
Prerequisites: Cross-node mapping (in the same cluster) must be supported.
Description: 1. The USB device is automatically re-mapped after recovery from a device error (Example: network disconnection). 2. The feature is supported only for HCI VMs. For VDI VMs, USB devices must be used through endpoint mapping.

VF/PF Dynamic Provisioning of PCIe Device
Prerequisites: 1. The IOMMU/SMMU feature must be enabled on the node. 2. The device must support SR-IOV (Example: Intel X710 NIC, SYD encryption card).
Description: 1. A NIC can be divided into up to 16 VFs or up to 8 PFs (only the SFC X2522 NIC is supported). 2. An encryption card in SR-IOV mode can be associated with multiple VMs; association is not supported for an encryption card in passthrough mode.
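The hot disk expansion prerequisites above (VirtIO disk, managed HCI 5.8.8 or later, vmTools installed; IDE disks require a power-off) reduce to a small decision function. A sketch under those assumptions; the dotted-numeric version comparison is an illustrative simplification:

```python
def can_hot_expand_disk(disk_type, hci_version, vmtools_installed):
    """Return True if disk capacity can be expanded while the VM is running:
    requires a VirtIO disk, managed HCI 5.8.8 or later, and vmTools.
    IDE disks always require the VM to be powered off first."""
    if disk_type.lower() != "virtio":
        return False
    if not vmtools_installed:
        return False
    # Compare dotted versions numerically, e.g. "6.7.0" -> (6, 7, 0)
    return tuple(map(int, hci_version.split("."))) >= (5, 8, 8)
```

Remember that even when the hot expansion succeeds, the partition still has to be extended inside the guest operating system afterwards.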

Description of Running VM Change Requiring Restart to Take Effect

Enable/Disable CPU Hot Add
Prerequisites: 1. An enterprise edition license is required. 2. vmTools must be installed on the VM. 3. Supported operating systems only: Windows Server 2012 and later, CentOS 7 and later.
Description: 1. If the feature is enabled while the VM is running, the VM must be restarted for the change to take effect. After it takes effect, CPUs can be added while the VM is running. Note that the number of sockets and CPU cores must be greater than those in the original configuration when editing. 2. CPU hot add is not supported on the ARM architecture.

Enable/Disable Memory Hot Add
Prerequisites: 1. An enterprise edition license is required. 2. vmTools must be installed on the VM. 3. The adjusted memory size cannot exceed the physical memory size of a single node.
Description: 1. If the feature is enabled while the VM is running, the VM must be restarted for the change to take effect. After it takes effect, memory can be added while the VM is running. Note that the memory size must be greater than that in the original configuration when editing. 2. Enabling the feature does not affect existing memory usage.

Enable/Disable NUMA Scheduler
Prerequisites: 1. The total number of CPU cores on the VM must be greater than 8. 2. vmTools must be installed on the VM.
Description: 1. If the feature is edited while the VM is running, the VM must be restarted for the change to take effect. After it takes effect, vCPUs are bound to physical CPUs and local memory, reducing cross-NUMA node access overhead (memory access performance improves by about 20%). 2. If the feature is disabled, vCPUs are scheduled randomly, and cross-node memory access may occur.

Enable/Disable Host CPU
Prerequisites: 1. The CPU models of the destination nodes for migration on Intel platforms must be consistent. 2. The feature is not supported for Windows-based VMs on the c86 architecture.
Description: 1. If the feature is edited while the VM is running, the VM must be restarted for the change to take effect. Enabling the feature avoids emulation overhead for the virtualization instruction set, improving performance of CPU-bound services (Example: big data analysis). 2. Host CPU is enabled by default on the ARM architecture.

Enable/Disable Huge-Page Memory
Prerequisites: 1. Memory overcommitment must be disabled. 2. Memory must be reserved in full pages (2 MB for x86/c86, 512 MB for ARM).
Description: 1. If the feature is edited while the VM is running, the VM must be restarted for the change to take effect. Enabling the feature disables memory reclaiming, prioritizing performance for memory-bound services (Example: Oracle database). 2. If memory overcommitment is enabled, enabling the feature may prevent the VM from powering on.

Add/Replace Graphics Card
Prerequisites: 1. The graphics card model must be in the Sangfor compatibility list (Example: NVIDIA T4, AMD V620). 2. The corresponding graphics card driver must be installed.
Description: 1. If the feature is edited while the VM is running, the VM must be restarted for the change to take effect. NVIDIA licensing (vWS/vCS) is required for vGPU. 2. Up to 8 graphics cards (Models: Tesla T4 and GeForce RTX 2080Ti) are supported in passthrough mode.

Associate/Disassociate Encryption Card
Prerequisites: 1. The encryption card model must be supported (Example: SYD1308-G, SANSEC SJK1727). 2. The encryption card to be associated must be the same model as the primary card.
Description: 1. If the feature is edited while the VM is running, the VM must be restarted for the change to take effect. Multiple encryption cards can be associated in SR-IOV mode to improve performance, but encryption card association is not supported in passthrough mode. 2. If encryption cards are associated, VM live migration and the snapshot feature are not supported.
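The huge-page prerequisite above (memory reserved in full pages: 2 MB on x86/c86, 512 MB on ARM) implies the VM memory size must be a whole multiple of the page size. A worked arithmetic sketch:

```python
# Full-page sizes per architecture, as stated in the prerequisites above.
HUGE_PAGE_MB = {"x86": 2, "c86": 2, "arm": 512}

def huge_pages_needed(memory_mb, arch):
    """Number of huge pages to reserve for a VM of the given memory size.
    Raises ValueError if the size is not a whole multiple of the page size,
    since memory must be reserved in full pages."""
    page_mb = HUGE_PAGE_MB[arch.lower()]
    if memory_mb % page_mb:
        raise ValueError(f"memory must be a multiple of {page_mb} MB on {arch}")
    return memory_mb // page_mb
```

For example, an 8 GB (8192 MB) VM needs 4096 huge pages on x86/c86 but only 16 on ARM, because ARM's full page is 512 MB.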

Description of Change Taking Effect for VMs in Off Status

Total Number of vCPU Cores (Not Hot Add)
Prerequisites: 1. The total number of vCPU cores after the change cannot exceed the total number of logical CPUs on the node. 2. The number of CPUs cannot exceed the maximum (the overcommitment ratio for the x86 and c86 architectures cannot exceed 200%).
Description: 1. The VM must be powered off first. The total number of cores can be increased or decreased during editing (Total cores = number of virtual sockets × cores per socket). 2. Before reducing the number of cores, confirm the minimum CPU cores required by the service to prevent service anomalies.

Memory Size (Not Hot Add)
Prerequisites: 1. The memory size after the change cannot exceed the physical memory size of a single node. 2. The memory size on an ARM-based VM cannot exceed 255 GB.
Description: 1. The VM must be powered off first. The memory size can be expanded or shrunk during editing. 2. If huge-page memory is enabled, the reserved memory size must be adjusted accordingly to prevent memory fragmentation.

Disk Type (Change from VirtIO to IDE)
Prerequisites: 1. IDE disks are required only for old operating systems (Example: Windows 2000). 2. Make sure no disk-based snapshot or backup tasks exist.
Description: 1. The VM must be powered off first. After the change is complete, the corresponding disk driver must be reinstalled. 2. Changing the disk type is not recommended unless necessary, because VirtIO disks provide higher performance than IDE disks (higher IOPS and lower latency).

Storage Location (Across Datastores)
Prerequisites: 1. The target datastore must have sufficient storage space. 2. The VM cannot be configured with disk encryption or CDP.
Description: 1. The VM must be powered off first. The migration process consumes storage IO resources, so it is recommended to perform the operation during off-peak hours (Example: 2:00 AM-4:00 AM). 2. For stretched datastore change, the target datastore must reside in the same fault domain as the source datastore.

Map/Disassociate Physical Disk
Prerequisites: 1. The physical disk must be configured in NON-RAID or JBOD mode. 2. Live migration and snapshot configuration must be disabled.
Description: 1. The VM must be powered off first. After mapping is complete, the physical disk is directly mounted to the VM, and dynamic provisioning and space reclamation are not supported. 2. Before disassociation, make sure the disk has been unmounted from the VM to prevent data loss.

Cross-Cluster Migration (Datastore and Run Locations)
Prerequisites: 1. The source cluster can properly access the interface used for migration to the destination cluster. 2. The architectures of the source and destination clusters must match (Example: both x86 or both ARM).
Description: 1. VMs in version 6.8.0 and later with SP installed support cross-version live migration, while VMs without SP installed must be powered off before migration. 2. After migration is complete, the VM in the source cluster is automatically powered off and moved to the Recycle Bin.

Add/Remove Shared Virtual Disk
Prerequisites: 1. The feature is applicable to apps requiring a shared datastore, such as Oracle RAC and MySQL cluster. 2. Sharing across up to 128 VMs is supported.
Description: 1. All associated VMs must be powered off first. The added shared virtual disk does not support the snapshot or live migration features. 2. Before removal, make sure the shared virtual disk has been unmounted from all associated VMs.
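The cold vCPU change rules above combine two checks: total cores = virtual sockets × cores per socket, capped by the node's logical CPUs, plus the 200% overcommitment cap on x86/c86. How that ratio is measured is an assumption in this sketch (total allocated vCPUs on the node against its logical cores); the parameter names are illustrative, not an SCP API:

```python
def validate_vcpu_change(sockets, cores_per_socket,
                         node_logical_cpus, node_allocated_vcpus=0,
                         arch="x86"):
    """Check a cold vCPU change. Total cores = virtual sockets x cores per
    socket; the total cannot exceed the node's logical CPU cores. On x86/c86,
    total allocated vCPUs (assumed basis of the 200% overcommitment cap)
    cannot exceed twice the node's logical cores. Returns (ok, total)."""
    total = sockets * cores_per_socket
    if total > node_logical_cpus:
        return False, total
    if arch.lower() in ("x86", "c86"):
        if node_allocated_vcpus + total > 2 * node_logical_cpus:
            return False, total
    return True, total
```

For example, 2 sockets × 4 cores on a 32-core node passes, while 4 sockets × 16 cores fails because 64 vCPUs exceed the node's 32 logical cores.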

Steps

Step 1. Log in to SCP and go to Resource Center > VMs.

Step 2. Locate the VM you want to edit, click More in the Operation column, and click Edit to go to the Edit Virtual Machine page.

Step 3. On the Edit Virtual Machine page, click the tab corresponding to the type of resource you want to change to go to its leaf page, and configure its features:

For detailed descriptions and configuration suggestions for each parameter, see xx[17].

On the Configuration leaf page, you can change the following resource features: in the Compute section, enable or disable CPU Clock Speed Limit, NUMA Scheduler, and Host CPU; in the Storage section, enable or disable Disk IO Limit and Space Reclamation, and configure Disk Size to expand disk capacity; in the Networking section, configure Traffic Limit and IP for the NIC, and enable or disable Jumbo Frame.

Step 4. Select or enter the configuration parameters for the feature you want to change as needed (Example: disk capacity after expansion, vGPU type, and NIC traffic limit value). After confirming the parameters are correct, click OK at the bottom of the page to submit the change task.

Step 5. Complete subsequent operations based on the rules for when features take effect:

For features that are edited when the VM is running and take effect immediately (Example: CPU clock speed limit, NIC traffic limit), changes will become effective immediately after the task is submitted.

For features that are edited when the VM is running and require VM restart to take effect (Example: enable huge-page memory, add graphics card), return to the VMs page, and click More > Restart for the changes to take effect.

For features that are edited when the VM is powered off and take effect when it is powered on (Example: changing the total number of vCPU cores, migration across datastores), return to the VMs page, locate the target VM, click More in the Operation column, and click Power Off. After the status of the VM becomes Off, perform steps 3 and 4. After the change is complete, click More > Power On for the VM to resume service.
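The three effect rules in Step 5 can be summarized as an ordered operation plan per change category. This is purely an illustrative sketch of the workflow logic, not an SCP API; the category names are hypothetical labels:

```python
def plan_change(effect):
    """Ordered operations implied by Step 5 for one resource change.
    'immediate': the edit takes effect as soon as the task is submitted.
    'restart':   edit while the VM is running, then restart it.
    'power_off': power off first, edit, then power on to resume service."""
    plans = {
        "immediate": ["edit"],
        "restart": ["edit", "restart"],
        "power_off": ["power_off", "edit", "power_on"],
    }
    return plans[effect]
```

For example, a NIC traffic limit change maps to `plan_change("immediate")`, enabling huge-page memory to `plan_change("restart")`, and changing the total number of vCPU cores to `plan_change("power_off")`.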