OS and VM tuning for UnboundID Servers

UnboundID PhilipP

Scope of This Article

Hardware/VM sizing is a fairly complex topic, which ultimately comes down to being able to characterize the data, the access patterns, and the load profile. This is better done via a POC or development-environment deployment with accurately reproduced data and expected load.

 

However, there is a set of OS and VM (if used) tuning options which are relatively constant across all products in deployments of all sizes. This article addresses these tuning options.

 

Operating System Tuning

Linux tuning is addressed here, since this is by far the most widely used platform for deployment of UnboundID servers. The principles can be extended to any Unix-like operating system.

 

These tuning suggestions are most important on servers with potentially very large Java heap spaces (the Data Store, and the Proxy when employing entry balancing), but they are strongly suggested for all other products too.

 

Increase file descriptor limits

The default (1024) is much too low for anything but very small test deployments.

Recommended setting is 65535.

 

This is set in /etc/sysctl.conf:

 

fs.file-max = 65535

 

Also change limits in /etc/security/limits.conf to match:

 

* soft nofile 65535
* hard nofile 65535 
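
To verify that the new limits are in effect (the nofile limit applies per login session, so log in again and check from a fresh shell):

sysctl fs.file-max     # system-wide limit; should report 65535
ulimit -n              # per-process soft limit in the current shell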

 

Increase number of per-user processes

The default is 4k, which seems reasonable, but Linux counts every thread as a process in this context. The server(s) are highly multi-threaded, and can easily hit this limit.

 

This is set in /etc/security/limits.d/NN-nproc.conf

The number, NN, differs between releases. Note that setting this in the /etc/security/limits.conf file, as was done in earlier releases, no longer works if nproc is defined in any of the files under /etc/security/limits.d, as those files override any settings in limits.conf.

 

Change the default 4096 to 100000.
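
For example, in /etc/security/limits.d/NN-nproc.conf (the exact file name varies by release; on CentOS 7 it is typically 20-nproc.conf), the entries mirror the nofile syntax shown earlier:

* soft nproc 100000
* hard nproc 100000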

 

Set swappiness to zero

Linux has an out-of-the-box tendency to move any memory pages that have not been recently referenced to swap. This is undesirable with large Java processes, since heap pages (e.g. the database cache) may then have to be fetched from disk rather than memory.

 

There is a kernel variable that controls this tendency: vm.swappiness

We strongly recommend setting this to zero. This will cause the kernel to leave pages allocated in memory.

 

Set in /etc/sysctl.conf:

 

vm.swappiness = 0

 

Reboot after making the above changes to have them take effect.
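
If an immediate reboot is not practical, the sysctl settings can also be applied at runtime; the limits.conf and limits.d changes still only take effect for new login sessions:

sysctl -p                      # re-read /etc/sysctl.conf
cat /proc/sys/vm/swappiness    # verify; should print 0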

Background Flush

Out of the box, Linux tends not to flush dirty pages until there is pressure to do so, which usually occurs only when little free memory is available.

 

On systems with small to medium memory this works well. On systems with larger memory, particularly those running in a VM, it can lead to problems. Because of the large memory size, the volume of dirty pages that accumulates can be significant (tens of GB). When the kernel attempts to write this data, especially to the relatively slow storage often found on VMs, progress may be very slow, or the kernel may find that it simply cannot catch up. In that case it will suspend all user-level processing until it has flushed all the data. This can take a considerable time (seconds), during which the server appears unresponsive.

 

The solution is to lower the threshold at which background flushing begins, so that dirty pages are flushed more often, at lower volumes (/etc/sysctl.conf):

 

vm.dirty_background_ratio = 5
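
The current thresholds and the volume of dirty pages outstanding can be inspected at runtime, for example:

sysctl vm.dirty_background_ratio vm.dirty_ratio    # current flush thresholds (percent of memory)
grep Dirty /proc/meminfo                           # volume of dirty pages right now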

 

Tuned

Tuned is a system service on RedHat/CentOS 6/7 systems which automatically tunes the system to adapt to varying usage patterns.

 

In its out-of-the-box configuration it has one effect that is undesirable from our point of view: it overrides the vm.swappiness setting.

 

There are two methods of dealing with this: change its profile settings so that it no longer sets vm.swappiness, or simply disable the service. Disabling may be the better solution, since we don't really expect the usage profile to vary.

 

systemctl disable tuned
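
Note that disable only prevents tuned from starting at the next boot; the running instance should be stopped as well. On RedHat/CentOS 6, which uses SysV init rather than systemd, the equivalent commands are shown below:

systemctl stop tuned       # CentOS 7: also stop the currently running service

service tuned stop         # CentOS 6 equivalents
chkconfig tuned off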

 

Automatic Huge Pages

With Java 8, turning off transparent (automatic) huge pages can avoid frequent full garbage collections. Oracle recommends turning these off, particularly with large JVM heaps.

 

See this page on the Red Hat website for details on how to do this:

 

https://access.redhat.com/solutions/46111
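
The linked article requires a Red Hat subscription. As a minimal sketch, on most RedHat/CentOS 6/7 systems transparent huge pages can be checked and disabled at runtime as follows; to persist across reboots, add transparent_hugepage=never to the kernel boot parameters:

cat /sys/kernel/mm/transparent_hugepage/enabled      # the active setting is shown in brackets
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag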

 

VM Tuning

Resource allocation

Ensure that no resources are over-committed, i.e. that the VM host never promises its guests more CPU or memory than it physically has. Never. EVER.

 

Time Synchronization

Time synchronization is required for smooth running of time-sensitive protocols, such as replication. The time synchronization services supplied by most VM hosts have issues, in particular time jumps, including negative jumps (time goes backwards).

 

Using NTP rather than the VM time sync services is recommended. NTP deals with time differences by speeding up or slowing down the clock (slewing) to achieve sync, avoiding backwards jumps wherever possible and keeping adjustments within reasonable limits.

 

Remember to disable the VM time sync service when using NTP.
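
As a minimal sketch, assuming the ntp package on a systemd-based system (chronyd is the default on newer RedHat releases and works similarly):

systemctl enable ntpd      # start NTP at boot
systemctl start ntpd
ntpq -p                    # list peers; an asterisk marks the selected time source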

 

Linux scheduler

Better performance is often seen using the Linux deadline I/O scheduler rather than the default. The deadline scheduler imposes a deadline on each I/O request and services requests before their deadlines expire, preventing starvation and keeping the server more responsive.
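
As an illustration, assuming the data volume is on /dev/sda (substitute the actual device), the active scheduler can be inspected and changed per block device at runtime; the elevator=deadline kernel boot parameter makes the change persistent:

cat /sys/block/sda/queue/scheduler          # the active scheduler is shown in brackets
echo deadline > /sys/block/sda/queue/scheduler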

 

Reserve resources

Reserve CPU, memory, I/O bandwidth, etc. for this VM.

 

Disable VM swap

The VM's memory should never be swapped out by the hypervisor.

 

 

Enable vNUMA

Modern machine architectures typically employ NUMA (Non-Uniform Memory Access), where memory is not monolithic and shared between all CPUs, but partitioned, with sections allocated to specific CPUs. This gives much faster access to the memory directly associated with a given CPU and, under ideal conditions, no contention for that memory between CPUs. It can make a significant difference in performance.

 

This principle is extended to most VM hosting systems. With VMware, vNUMA is enabled by default when a VM has eight or more CPUs allocated. There is usually an advantage to enabling it when fewer CPUs are allocated.

 

However, this assumes that all VM hosts to which any VM may be migrated have the same architecture and hardware configuration. If this is not the case, it is likely that the NUMA tables migrated with the VM will not match the underlying hardware. This will result in noticeably reduced performance.

 

Note that VM NUMA tables are only re-calculated when the hosted OS is rebooted.
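
From within the guest, the NUMA topology that the OS actually sees can be checked with numactl (from the numactl package); a layout that does not match the physical host suggests stale tables:

numactl --hardware     # nodes, the CPUs assigned to each, and per-node memory sizes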