
Directory Benchmarking and Number of Clients/Threads

UnboundID PhilipP

When running benchmarks, it is not unusual for those testing the server to configure a large number of client threads and then send requests nose-to-tail in an attempt to estimate maxed-out performance.

 

Generally, this does not give the best characterization of directory performance. It gives an indication of overload performance, which is not a condition that should be aimed for in a real-world deployment, and it usually measures the performance of some bottleneck in the system rather than that of the server itself.

 

As an example, what follows is a set of basic benchmarks run against a commodity-class server (a blade with 2x Intel Xeon E5540 4-core CPUs @ 2.53 GHz, 24 GB of memory, a single internal disk, and one million entries). No tuning was done other than raising the per-process file descriptor and per-user process limits.
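
These limits are typically raised along the following lines; the values shown here are purely illustrative and should be sized for your own deployment:

# Illustrative values only -- size these for your own deployment.
ulimit -n 65535     # per-process file descriptor limit
ulimit -u 16384     # per-user process (and thread) limit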

 

The load is provided by authrate, which performs a random search across all one million entries and then binds as the returned DN.
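
As a sketch, an unthrottled run might be invoked along the following lines. The option names are those of the authrate tool shipped with the UnboundID LDAP SDK and may differ between versions (check authrate --help); the hostname, base DN, filter pattern, and credentials here are placeholders:

# Sketch of an unthrottled run with 50 client threads; all values are placeholders.
# Each thread picks a random uid in 1..1000000, searches for it, then binds as the returned DN.
authrate --hostname ds.example.com --port 389 \
         --baseDN "dc=example,dc=com" \
         --filter "(uid=user.[1-1000000])" \
         --credentials password \
         --numThreads 50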

 

Authentication (search + bind) rate vs. number of connections:

 

[Figure: auth-1.png]

 

As can be seen, the maximum throughput on this particular instance is reached at around 50 connections. Increasing the number of connections beyond that point actually degrades performance slightly.

 

It is instructive to look at the time taken for the authentication sequence to complete (as seen by authrate):

 

[Figure: resp-1.png]

 

Again, it is fairly clear that above 50 connections requests are beginning to queue up, both within the LDAP server (waiting for worker threads) and on output (in the TCP stack). As more connections are added, no extra work gets done, but there is added overhead in managing the additional connections.

 

Looking at the actual request processing time for the last few requests, even at 600 connections it is around 0.1 ms:

 

etime=0.110

etime=0.107

etime=0.098

etime=0.123

etime=0.108

etime=0.102

etime=0.113

etime=0.108

etime=0.105

etime=0.106

etime=0.102

etime=0.110

 

Note that these are a mixture of search and bind times, since authrate searches for the entry before binding to it.
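
These etime values come from the server access log. As a quick sanity check, the recent values can be averaged with something along these lines (the log path is hypothetical; adjust it for your installation):

# Hypothetical log location -- adjust for your installation.
tail -n 1000 /path/to/ds/logs/access \
  | grep -o 'etime=[0-9.]*' \
  | awk -F= '{ sum += $2; n++ } END { if (n > 0) printf "average etime: %.3f ms over %d operations\n", sum / n, n }'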

 

If we limit the authentication rate to 5,000 per second, which from the above is well within the capability of the server and much more typical of a per-instance rate in a production environment, the curves look significantly different.
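
The throttled runs rely on the tool's client-side rate limiting. Assuming the --ratePerSecond option present in recent versions of authrate (verify against authrate --help for your version), the earlier sketch simply gains a cap:

# Same load shape as before, but capped at 5,000 authentications per second in total.
authrate --hostname ds.example.com --port 389 \
         --baseDN "dc=example,dc=com" \
         --filter "(uid=user.[1-1000000])" \
         --credentials password \
         --numThreads 50 \
         --ratePerSecond 5000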

 

Authentication rate vs. number of connections:

 

[Figure: auth-rate-1.png]

 

It is clear that more than one connection is required to achieve that rate; beyond that point, however, the rate is independent of the number of connections.

 

Looking at the response time vs. number of connections (as seen by authrate):

 

[Figure: conn-1.png]

 

The response time curve is very flat, with sub-millisecond times throughout.

 

Looking at the last few times:

 

etime=0.099

etime=0.082

etime=0.076

etime=0.114

etime=0.113

etime=0.098

etime=0.099

etime=0.089

etime=0.098

 

As before, these are around the 0.1 ms mark.

 

MODIFY operations exhibit a similar characteristic with respect to the number of connections carrying nose-to-tail traffic.

 

 

MOD. request rate vs. number of connections:

 

[Figure: mod-1.png]

 

and request time vs. number of connections:

 

[Figure: mod-time-1.png]
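
A companion tool such as modrate (from the same tool suite as authrate) can generate this kind of modification load. A representative sketch follows, with placeholder values and option names that should be verified against modrate --help for your version:

# Sketch: random single-attribute modifications against the same one million entries.
modrate --hostname ds.example.com --port 389 \
        --bindDN "cn=Directory Manager" --bindPassword password \
        --entryDN "uid=user.[1-1000000],ou=People,dc=example,dc=com" \
        --attribute description \
        --valueLength 12 \
        --numThreads 50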

 

To characterize performance more fully, some added dimensions might be interesting, such as running similar tests with varying numbers of CPUs, a faster filesystem, or faster network connections (although, for normal use, 1 Gbps networking should be sufficient).

 

The take-away: be aware of what you are actually testing, and try to keep all of the parameters within reasonable limits, preferably something like the maximum expected production load plus some percentage for growth headroom.