I am encountering a collection where the crawler is stopping at exactly 10,000 pages.
What kinds of settings could prevent the crawler from continuing? I believe the licence covers 25k documents.
The only setting I can think of is:
crawler.overall_crawl_timeout
This is currently set to 240 minutes, but I wouldn't expect a timeout to consistently cut the crawl phase off at a clean 10,000 pages.
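A stop at exactly 10k smells more like a document-count cap than a timeout, so I've been dumping any limit-looking keys from the collection's config to rule that out. Here's a rough sketch of what I'm doing, assuming the settings live in a flat key=value config file; the path and keyword list are placeholders, not anything official:

```python
import re
from pathlib import Path

# Placeholder path -- substitute the real location of your collection's config.
CONFIG_PATH = Path("/opt/search/conf/my-collection/collection.cfg")

# Keywords that tend to show up in settings that cap a crawl.
LIMIT_KEYWORDS = re.compile(r"max|limit|timeout|stored", re.IGNORECASE)

def find_limit_settings(path: Path) -> list[tuple[str, str]]:
    """Return (key, value) pairs whose key looks like a crawl limit."""
    hits = []
    for line in path.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        if LIMIT_KEYWORDS.search(key):
            hits.append((key.strip(), value.strip()))
    return hits

if __name__ == "__main__":
    for key, value in find_limit_settings(CONFIG_PATH):
        print(f"{key} = {value}")
```

Nothing obvious jumped out beyond the timeout above, which is why I'm asking what other settings could produce this behaviour.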