Huge SmartCrawl performance issue

Our website runs on 2 servers: 1 web server (32 cores, 128 GB RAM, SSD drive) and 1 MySQL server (12 cores, 32 GB RAM, SSD drive). We are a news website with 15k articles (1.5 million unique visitors/month).
Unfortunately, for the past 2 weeks we have been experiencing huge performance issues. Yesterday I installed New Relic and realized that each time our SQL server falls over (load average: 132), it is due to SmartCrawl.
Can you help us with that issue please?

  • Kasia Swiderska

    Hello Emmanuel,

Is it possible that you have scheduled SEO scans enabled in SmartCrawl? You'll find this under SEO Checkup -> CHECKUP. If it is enabled, could you disable those scans and see if the usage spikes drop?

If that's not the case, would you mind enabling support access so we can take a closer look at your site's configuration?
To enable support access, you can follow this guide here:

Kind regards,

  • Adam Czajczyk

    Hello Emmanuel,

    I hope you're well today!

I have accessed your site to review its settings (I made no changes whatsoever), and I think we should start with some simple "tweaks".

1. Your site is quite a complex one. Apart from the 15k articles you mentioned and heavy traffic, there are also 40 active plugins (and some of them, such as Visual Composer, are quite "resource heavy"). Yet, while the server allows up to 256M of memory for PHP scripts, your WP install is limited to the default 40M. I suggest increasing that limit significantly. To do this, please add the following line to your site's "wp-config.php" file, right above the "/* That's all, stop editing! */" line:

    define('WP_MEMORY_LIMIT', '256M');

2. Since a URL crawler is enabled, on a site this big it would also be advisable to increase the maximum script execution time for PHP. Currently the "max_execution_time" PHP option is set to 30 seconds, which means any script (including background scripts) is forcibly terminated after 30 seconds even if it hasn't finished its job. For crawling/testing-related scripts on such a big site, that can be a real problem. I'd therefore suggest increasing it to at least 150-180 seconds, though 300 shouldn't be a problem either.

3. Another small change, in addition to those major ones, would be to double-check your site analytics to find the time of day when the site receives the least traffic. What I mean is: the sitemap crawler in SmartCrawl is currently set to run daily at 1 AM. That setting follows your local WP time zone and may not necessarily match the time when the site actually gets the least traffic. So, if it doesn't line up with what you find in your traffic stats, adjust it accordingly. If it does line up, of course, ignore this step.
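Taken together, the PHP-side changes from steps 1 and 2 can also be applied at the server level. A minimal sketch, assuming you can edit php.ini (or your site's PHP pool config) and using the values suggested above, not universal settings:

```ini
; php.ini -- suggested values from steps 1 and 2 above
memory_limit = 256M          ; matches the WP_MEMORY_LIMIT raised in wp-config.php
max_execution_time = 300     ; gives long-running crawler scripts room to finish
```

Note that WP_MEMORY_LIMIT only takes effect up to what the server-level memory_limit allows, so the two values should be kept in sync.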

    Once that's done, keep an eye on those performance issues and let's see about results.

    Best regards,

  • Emmanuel

Although everything was OK after I changed wp-config.php to use the 256M limit, I have been experiencing huge performance issues (mainly database-related) for the past 2-3 days.
According to New Relic it's still due to wpmu-dev-seo, and it happens at the same hour (I think it's related to wp-cron.php generating the sitemap):

    [image pos="0"]

This problem appeared when I installed WooCommerce: I just noticed that every product variation was included in the sitemap. I've just disabled that, so maybe it will solve the issue.

I also disabled the scheduled crawl (it was supposed to run at 1 AM) to avoid any performance issues.

However, this is not a long-term solution, as pretty much all SEO options are now disabled: no title and meta optimization, no social, no SEO checkup, no page or readability analysis, etc.

Our website runs on 2 servers, one for the HTML and one for the DB. We use WP Rocket, Cloudflare and Amazon S3 for the images.

Until now our website was pretty much read-only, but we activated ecommerce recently and are planning to activate bbPress, so we are quite worried about performance.

    Do you have ideas on how to optimize website performance?

  • Predrag Dubajic

    Hi Emmanuel,

If I understand correctly, the memory increase did the trick until you added ecommerce, right?

So what happens now if you exclude the ecommerce content from the sitemap and leave the other options running?
Are you still having performance issues with that setup?

As for other performance improvements, I see that you have Hummingbird installed on that site. Have you configured its optimizations, such as GZIP compression, caching and minification?
I also see that you have quite a few resources (mainly scripts) loaded from external locations; these can cause issues too, since you have no control over them or over how the external server performs.
Perhaps check which plugins load those and, if you don't need them, remove them?
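For the GZIP part, Hummingbird can usually write the server rules for you. If it can't, and assuming the site runs on Apache (nginx uses its gzip directives instead), a typical mod_deflate fragment looks like this, as an illustrative sketch rather than your exact config:

```apacheconf
# .htaccess -- compress text-based responses with mod_deflate
<IfModule mod_deflate.c>
  AddOutputFilterByType DEFLATE text/html text/css application/javascript
  AddOutputFilterByType DEFLATE application/json image/svg+xml
</IfModule>
```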

    Best regards,

  • Predrag Dubajic

    Hi Emmanuel,

This is most likely related to the sitemap: since you have a lot of posts, and now products as well, it takes a long time to generate, and if you add new posts often, you're firing the process multiple times a day.

I would suggest going to SmartCrawl > Sitemap > Advanced tab and disabling Automatic Sitemap Updates there.
This should significantly reduce the usage, and you can re-create your sitemap manually.
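Since the spikes line up with wp-cron.php (which normally fires on visitor page loads), another common WordPress mitigation, sketched here under the assumption that you have shell and crontab access, is to disable the traffic-triggered cron and run it from a system cron instead:

```php
// wp-config.php -- stop WP-Cron from piggybacking on page loads
define('DISABLE_WP_CRON', true);
```

Then schedule it at the server level (replace example.com with your own domain; 4 AM is just an example quiet hour, and many sites run this every 10-15 minutes so other scheduled tasks aren't delayed):

```shell
# crontab -e -- trigger WP-Cron once a day at 4 AM
0 4 * * * wget -q -O - "https://example.com/wp-cron.php?doing_wp_cron" >/dev/null 2>&1
```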

Working on this ticket actually sparked an idea that I forwarded to our developers: adding an option in a future version to set a specific daily/weekly time when sitemaps are regenerated, instead of regenerating on each new post or manually.

    Best regards,
