PingDataSync admin limit exceeded exception

PingDataSync
New Member
0 Kudos


When I'm trying to pull data from CA Directory to OpenDJ, I'm getting the following error:

 

Performing a bulk dump of entries from source server failed: LDAPSearchException(resultCode=11 (admin limit exceeded), numEntries=10000, numReferences=0, errorMessage='admin limit exceeded') (id=1893990469 ResourceOperationFailedException.java:137 6.0.1.0 rev 25036)

 

Also, can we use a paged search in the DataSync tool?

3 REPLIES
UnboundID FredricT
UnboundID
0 Kudos

Re: PingDataSync admin limit exceeded exception

LDAP searches can specify a size limit, and the server you send the query to
can also have resource limits imposed on clients. Ask the CA admin to raise
the size limit on your "sync account" entry, and probably the look-through
limit as well. These limits guard against things like bad code and DoS events.
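On the paged-search question: RFC 2696 simple paged results is the usual way to stay under a server-side size limit, by fetching the result set one page at a time and passing an opaque cookie back on each request. Here is a minimal sketch of that loop; the `search_page` function below simulates a directory server and is not a real PingDataSync or LDAP client API, and the DN values are placeholders:

```python
# Sketch of an RFC 2696-style simple paged results loop.
# NOTE: search_page() simulates the server side; a real client would
# attach the paged-results control to each search request and pass
# the cookie returned with the previous page.

ENTRIES = [f"uid=user{i},ou=people,dc=example,dc=com" for i in range(25_000)]
PAGE_SIZE = 1000  # keep each page well under the 10,000-entry admin limit


def search_page(cookie):
    """Simulated server: return one page of results plus the next cookie.
    A cookie of None means there are no more pages."""
    page = ENTRIES[cookie:cookie + PAGE_SIZE]
    end = cookie + PAGE_SIZE
    next_cookie = end if end < len(ENTRIES) else None
    return page, next_cookie


def paged_search():
    """Fetch the whole result set, one bounded page at a time."""
    results, cookie = [], 0
    while cookie is not None:
        page, cookie = search_page(cookie)
        results.extend(page)
    return results


all_entries = paged_search()
print(len(all_entries))  # all 25,000 entries, never more than 1,000 per response
```

The point of the sketch: no single response ever exceeds the page size, so the server's per-search size limit is never tripped even though the full result set is far larger.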
PingDataSync
New Member
0 Kudos

Re: PingDataSync admin limit exceeded exception

There are more than 2 million entries in CA Directory. We asked the CA admin to update the entry limits, but were told it might crash CA Directory because of the continuous ldapsearch. So we have been asked to do a paged search to sync the data from CA Directory to OpenDJ.

How can we do this?

UnboundID FredricT
UnboundID
0 Kudos

Re: PingDataSync admin limit exceeded exception

If you are using 'resync' to do the initial load, there are a few things you can do:

1) Do not use resync for this, given the source's limits. Request an export of the data from the source, then import it; if required, filtering can be applied during the import. Use resync afterwards to reconcile the changes. Perhaps they can also provide access to a replica with the constraints removed.

2) Break the request up into chunks of 10k or less. If the source data has some characteristic to divide on, whatever it is, it should have an index; something with a numeric range, like an employee number, is ideal. If not, dump all of the DNs to a file and break it into chunks of 9,999, assuming the limit is 10,000.

3) Throttle resync by specifying fewer threads and/or a lower search rate per second.
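Option 2 above (dump the DNs, then batch them) can be scripted along these lines. This is a minimal sketch, not a PingDataSync feature: the DN values are placeholders, and 200,000 entries stand in for the ~2 million real ones:

```python
# Sketch: split a dumped list of DNs into batches of at most 9,999,
# assuming the server's size limit is 10,000 (as in the error above).
# Each batch would then be fed to a separate bounded search/sync pass.

CHUNK_SIZE = 9999


def chunk_dns(dns, size=CHUNK_SIZE):
    """Yield successive batches of DNs, each safely under the size limit."""
    for start in range(0, len(dns), size):
        yield dns[start:start + size]


# Placeholder DN dump; 200,000 entries stand in for the ~2M real ones.
dns = [f"uid=user{i},ou=people,dc=example,dc=com" for i in range(200_000)]

batches = list(chunk_dns(dns))
print(len(batches), len(batches[-1]))  # 21 batches; the last holds the remainder
```

Nothing here is specific to LDAP; it is just the batching arithmetic, with the batch size kept one below the limit so a batch can never trip it.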

Know that resync works by fetching and comparing, which is extra work that may not be needed for a first run. If you need to newly create 2,000,000 entries per day/hour, then there are other approaches.