We do our best to document the process of working with indexes in the admin guide, so please refer to our official documentation for basic index-related issues.
The chapter entitled "Working With Indexes" is the primary resource, with the following subsections available:
Overview of Indexes
General Tips on Indexes
To View the System Indexes
Index Types
Managing Local DB Indexes
To View the List of Local DB Indexes
To View a Property for All Local DB Indexes
To View the Configuration Parameters for a Local DB Index
To Modify the Configuration of a Local DB Index
To Create a New Local DB Index
To Delete a Local DB Index
Working with Local DB VLV Indexes
To Create a New Local DB VLV Index
To Modify a VLV Index’s Configuration
To Delete a VLV Index
Working with Filtered Indexes
To Create a Filtered Index
About the Exploded Index Format
About the Index Summary Statistics Table
About the dbtest Index Status Table
Configuring the Index Properties
The "Troubleshooting the Server" chapter also has some sections that can help users with exceeded keys
Index Key Entry Limit
Identifying unindexed searches
The first step in improving search speed and throughput is to reduce the number of unindexed searches in an environment. You may also want to improve the speed of commonly used searches, such as those defined in the memberURL values of group entries.
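One way to find unindexed searches is to scan the server access log. This is a minimal sketch, assuming a log format in which search result lines carry an `unindexed=true` field and a `filter="..."` field; the exact fields vary by server version and logging configuration, so adjust the patterns to match your logs.

```python
import re

# Hypothetical access-log fields: adjust to your server's actual log format.
UNINDEXED = re.compile(r'unindexed=true')
FILTER = re.compile(r'filter="([^"]+)"')

def find_unindexed_filters(log_lines):
    """Return the search filters from log lines flagged as unindexed."""
    filters = []
    for line in log_lines:
        if UNINDEXED.search(line):
            m = FILTER.search(line)
            filters.append(m.group(1) if m else "<unknown filter>")
    return filters

sample = [
    'SEARCH RESULT conn=12 op=3 resultCode=0 filter="(cn=admin)" unindexed=false',
    'SEARCH RESULT conn=14 op=7 resultCode=0 filter="(description=*temp*)" unindexed=true',
]
print(find_unindexed_filters(sample))
```

Filters that show up repeatedly in this output are good candidates for new indexes.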
The matching entry count could not be determined because the search criteria is unindexed
Matching Entry Count Debug Messages:
* portkun.unboundid.lab:1389 - Beginning index evaluation for (objectClass=person)
* portkun.unboundid.lab:1389 - Index processing for equality filter (objectClass=person) yielded IDList[LIMIT-EXCEEDED]
* portkun.unboundid.lab:1389 - Virtual attribute processing of filter (objectClass=person) resulted in IDList[NOT-INDEXED]
* portkun.unboundid.lab:1389 - Unable to pare the result set down any further using id2subtree (still IDList[NOT-INDEXED]). Cannot guarantee that all candidates are in scope.
* portkun.unboundid.lab:1389 - No matching entry count information is available for an unindexed search.
* portkun.unboundid.lab:1389 - Constructing an UNKNOWN response
This is the same information that is returned when the debugsearchindex attribute is requested in a search.
Based on the result analysis above, you should be able to identify any attributes that are not indexed, as well as the indexes containing keys that have exceeded the index entry limit.
Identifying indexes with exceeded keys
The verify-index tool can be used to identify indexes that have keys matching too many entries, exceeding the index entry limit:
bin/verify-index --baseDN dc=example,dc=com
In 18.104.22.168 and above, you can use the verify-index tool to list all of the exceeded keys for those indexes.
This table shows that this index has a limit of 4,000 matches for any key, and that there are 4 keys that exceed that limit. The table also indicates the range of matches per key: 2 keys match between one and nine entries, and 4 keys match between 10,000 and 100,000 entries.
Understanding index entry limits
A large value for the index-entry-limit can have a significant impact on write performance and database growth on disk, since for each change to an index key, the IDs for all entries matching the key must be rewritten to the database. For example, if the index-entry-limit is set to 100,000 and the number of entries matching a given key is 50,000, then each change to that key will require writing 200KB to the database, since each entry ID is 4 bytes. Add and delete operations are especially impacted, since they must update all indexes for an entry. If many indexes have a large index-entry-limit, this can lead to very low throughput and can cause the database to temporarily grow very large on disk.
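The write amplification described above comes down to simple arithmetic. A small sketch, using the 4-bytes-per-entry-ID figure from the example:

```python
ENTRY_ID_BYTES = 4  # each entry ID stored in an index key's ID list

def rewrite_cost_bytes(matching_entries):
    """Bytes rewritten when a key's ID list changes: the whole list is rewritten."""
    return matching_entries * ENTRY_ID_BYTES

# A key matched by 50,000 entries forces a 200,000-byte (200KB) rewrite
# on every change to that key.
print(rewrite_cost_bytes(50_000))
```

Multiply this by the number of indexes touched by an add or delete to see why large limits across many indexes hurt throughput.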
For old versions (prior to 22.214.171.124), it is recommended to rarely set this value above 10,000 and never set it above 50,000.
Exploded indexes were added in 126.96.36.199 and allow for better performance with larger index entry limits. You can now set index-entry-limit to much higher values; however, you should not raise the exploded-index-entry-limit, or you will face the same issues described above. If the index-entry-limit is hit for an exploded index, then the index will be deleted. Deleting an exploded index is fairly expensive, so it is done in the background, which can cause some contention when it happens. Try to avoid this by using an index entry limit high enough to accommodate all matching entries for a key.
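As a rough illustration of that last recommendation, here is a hypothetical sketch: given per-key match counts (for example, gathered from verify-index output), flag the keys that would exceed a candidate limit, and size the limit with some headroom above the largest key. The function names and headroom factor are illustrative, not part of any server tooling.

```python
def keys_over_limit(key_counts, index_entry_limit):
    """Return the keys whose matching-entry count exceeds the configured limit."""
    return {k: n for k, n in key_counts.items() if n > index_entry_limit}

def suggested_limit(key_counts, headroom=1.25):
    """Suggest a limit with some headroom above the largest key's match count."""
    return int(max(key_counts.values()) * headroom)

# Illustrative per-key match counts for an objectClass equality index.
counts = {"objectClass=person": 85_000, "objectClass=groupOfNames": 1_200}
print(keys_over_limit(counts, 4_000))  # the person key exceeds a 4,000 limit
print(suggested_limit(counts))
```

Keeping the limit above the largest realistic key avoids the background index deletion and contention described above, at the cost of the disk and write overhead discussed earlier.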