Distributed Resource Scheduler (DRS)
VMware Distributed Resource Scheduler (DRS) is a feature in VMware vSphere. You use it if your organization runs a logical cluster of hypervisor hosts (it does not apply to single-node ESX instances). DRS improves overall cluster performance by balancing the use of physical resources across the cluster. Using vMotion, DRS intelligently moves images from node to node to achieve consistent and even performance across the entire cluster. If a particular image is consuming excessive resources on one node, DRS might move that image to another node to spread the load. Conversely, if a node is under-utilized, DRS might move images from other nodes onto the under-utilized node.
Because the Virtual Appliance requires a consistent allocation of memory, DRS has a significant impact on the performance of the Gateway.
DRS in Load/Test Environments
Disable DRS in test environments, or whenever the Gateway is subjected to high levels of load during performance testing. If DRS is enabled, it might move the Virtual Appliance image between hosts, resulting in poor or inconsistent test results.
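If you manage the cluster programmatically, DRS can also be toggled through the vSphere API. The following is a minimal sketch using the pyVmomi Python SDK; the vCenter hostname, credentials, and the cluster name "GatewayCluster" are placeholder assumptions, not values from this guide.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_cluster(content, name):
    """Walk the inventory for a ClusterComputeResource with the given name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    try:
        return next((c for c in view.view if c.name == name), None)
    finally:
        view.Destroy()

# Lab use only: skip certificate verification. Verify certificates in production.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    cluster = find_cluster(si.RetrieveContent(), "GatewayCluster")
    # Turn DRS off for the whole cluster before the load-test run.
    spec = vim.cluster.ConfigSpecEx(
        drsConfig=vim.cluster.DrsConfigInfo(enabled=False))
    cluster.ReconfigureComputeResource_Task(spec, modify=True)
    # A real script would wait on the returned task before starting the test.
finally:
    Disconnect(si)
```

Re-enable DRS after the test run by reapplying the same spec with enabled=True.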
DRS in Production Environments
DRS in production environments is common for enterprises that deploy hypervisor clusters, since it is an integral part of properly maintaining a cluster of ESX servers. Be aware that using DRS can cause inconsistent performance of the Virtual Appliance.
DRS Affinity Rules
One way to mitigate inconsistent performance that is caused by DRS is to use DRS affinity rules. You can create affinity pools and then assign images to these pools. There are two affinity rules:
- Node affinity: Particular images should always reside on a particular host and should never be moved by DRS.
- Node anti-affinity: Particular images should never reside on the same host.
Node Affinity
Setting node affinity for a particular image ensures that the image always stays on a particular hypervisor host. DRS can still move other images off that host, or onto it, but it never moves the Gateway image itself.
This is beneficial to the Virtual Appliance, as it ensures that the image is never moved from its host, even when it is consuming a significant amount of resources. In this model, the Virtual Appliance retains access to the resources it requires while other images are moved around it, which also helps maintain a consistent performance level.
Consider using node affinity if consistent performance of the Virtual Appliance is important in a production environment. If node affinity is used, ensure that the Virtual Appliance is configured with a reasonable resource allocation. Otherwise, your Virtual Appliance might consume all the resources of a single host and interfere with normal DRS functionality.
If node affinity is not employed, there could be temporary performance implications as DRS moves the Gateway image between hosts.
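As an illustration, the following pyVmomi sketch pins the Virtual Appliance to one host using a VM-Host affinity rule. It assumes the connection and cluster lookup from the earlier sketch, and that gateway_vm and pinned_host have already been located in the inventory; all group and rule names are illustrative.

```python
from pyVmomi import vim

# gateway_vm (vim.VirtualMachine) and pinned_host (vim.HostSystem) are assumed
# to have been located via inventory lookups like find_cluster() above.
vm_group = vim.cluster.VmGroup(name="gateway-vms", vm=[gateway_vm])
host_group = vim.cluster.HostGroup(name="gateway-hosts", host=[pinned_host])
rule = vim.cluster.VmHostRuleInfo(
    name="gateway-node-affinity",
    enabled=True,
    mandatory=True,                      # "must run": DRS never moves the VM
    vmGroupName="gateway-vms",
    affineHostGroupName="gateway-hosts")
spec = vim.cluster.ConfigSpecEx(
    groupSpec=[vim.cluster.GroupSpec(operation="add", info=vm_group),
               vim.cluster.GroupSpec(operation="add", info=host_group)],
    rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
cluster.ReconfigureComputeResource_Task(spec, modify=True)
```

A mandatory rule corresponds to a "must run on" rule in the vSphere client; set mandatory=False for a softer "should run on" preference that DRS may override under resource pressure.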
Node Anti-Affinity
Anti-affinity rules can be applied to one or more images. Such a rule dictates that a particular image must never be moved to a particular host, or that two or more images must never reside on the same host.
Anti-affinity rules provide several advantages for the Gateway, both in terms of performance and clustering. The Gateway is a high-performance application that can consume a great deal of CPU, memory, and network bandwidth. If you deploy more than one Virtual Appliance in an environment, anti-affinity rules can ensure that no two Gateway nodes ever reside on the same host, so that each node has adequate access to resources to perform well.
As a best practice, always enable anti-affinity rules when multiple Virtual Appliances are deployed in the same logical hypervisor cluster. This prevents a single hypervisor node from being overloaded with too many Virtual Appliances.
For Gateway logical clusters, use anti-affinity rules to prevent the database nodes of the Gateway cluster from residing on the same ESX host. This ensures that at least one database node is intact if an ESX node goes down, guaranteeing that the Gateway cluster remains operational.
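As a sketch of what such a rule looks like through the API (again via pyVmomi, assuming an existing connection, the cluster object from the first sketch, and two database-node VM objects gw_db1 and gw_db2 located via inventory lookup; the rule name is illustrative):

```python
from pyVmomi import vim

# gw_db1 and gw_db2 are the Gateway database-node VMs, located via an
# inventory lookup; "cluster" is the ClusterComputeResource from above.
rule = vim.cluster.AntiAffinityRuleSpec(
    name="gateway-db-anti-affinity",
    enabled=True,
    mandatory=True,        # never place both database nodes on one host
    vm=[gw_db1, gw_db2])
spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
cluster.ReconfigureComputeResource_Task(spec, modify=True)
```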