Sitecore artifact table patch config

I’ve patched EventQueues several times over the years, so I saved this Gist to make it easier the next time the need arises. There’s not much new to share in terms of introducing why one does this — refer to this blog about Sitecore artifact tables (or the old reliable Sitecore CMS Tuning Guide). You also have to take care around the order of configuration file processing, so zzzzzArtifactTableRetention.config this sucker if you really must 🙂

Here’s the Gist with the expanded XML – https://gist.github.com/grant-killian/ffa1e84770b10a90e2454e241986b911
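For orientation, the general shape of such a patch looks something like this. It’s just a sketch, not a copy of the Gist: it assumes the stock Sitecore.Tasks cleanup agents (CleanupEventQueue, CleanupPublishQueue, CleanupHistory) and their DaysToKeep retention property, so verify the agent names and properties against the Gist and your Sitecore version:

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <scheduling>
      <!-- run the EventQueue cleanup more often than the stock interval -->
      <agent type="Sitecore.Tasks.CleanupEventQueue, Sitecore.Kernel" method="Run">
        <patch:attribute name="interval">01:00:00</patch:attribute>
        <!-- DaysToKeep is the retention knob for this agent; adjust it if your version exposes it -->
      </agent>
      <!-- the PublishQueue and History cleanup agents can be patched the same way -->
      <agent type="Sitecore.Tasks.CleanupPublishQueue, Sitecore.Kernel" method="Run">
        <patch:attribute name="interval">01:00:00</patch:attribute>
      </agent>
    </scheduling>
  </sitecore>
</configuration>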


Walkthrough of Solr Query Analysis for Sitecore

Anton Tishchenko wrote a good quick piece on “stemming” for Solr, and I wanted to build on what he shared.

Solr is generally way underutilized by the Sitecore implementations I’ve seen. It makes for plenty of territory for blogging, though, so maybe bit-by-bit the word will get out about what a powerful distributed system Solr really is.

Let me setup a specific example using Sitecore 9 update-2 with Commerce. I’ll use the CatalogItemsScope Solr core, but you could review most any Solr core with the standard Sitecore schema.

Let me pause here to explain where the default Sitecore Commerce Solr Configuration defines this search index, because it took some digging. In the . . . SIF\Configuration\Commerce\Solr\sitecore-commerce-solr.json file you’ll find:

// The names of the cores to create
"Customers.Name": "[concat(parameter('CorePrefix'), 'CustomersScope')]",
"Orders.Name": "[concat(parameter('CorePrefix'), 'OrdersScope')]",
"CatalogItems.Name": "[concat(parameter('CorePrefix'), 'CatalogItemsScope')]",

Using a fresh IaaS install of Sitecore 9 with Commerce, I’ll go to the Solr admin interface. Select the CatalogItemsScope Solr core from the dropdown list on the left side navigation. Choose the “Analysis” option to access this powerful way of evaluating two key operations one does with Solr: indexing and querying. Solr defines different sets of analyzers for these operations, sort of like how Sitecore exposes the httpRequestBegin pipeline or other extensibility points that projects are always customizing. For this exercise, I’ll focus on the Query operation. The documentation on this Analysis screen has a lot more information on this, if you’re interested.

Enter the search phrase “an AWESOME television!” in the textbox for the Field Value (Query), and then specify text_general as the Fieldname to Analyse: step1

Solr is going to run “an AWESOME television!” through the analysis process and show the results as they progress through each step of that analysis. For this write-up, I’ll uncheck the “Verbose Output” checkbox — but definitely play around with that feature as it shows the offsets and ordinal positions as Solr works its magic through each transformation.

After clicking the blue “Analyse Values” button you’ll see output like the following:

step2

Each row of output starts with an abbreviation (ST, SF, etc). That abbreviation identifies which component of the “analyzer” has run; the pipe-separated list to the right of the abbreviation shows the search phrase as it progresses through Solr’s transformation logic.

  1. ST is the StandardTokenizerFactory. It removes the “!” punctuation mark, but otherwise doesn’t make any changes to the search query.
  2. SF is the StopFilterFactory. This would remove words listed in the CatalogItemsScope\conf\stopwords.txt file. In this case, with an out-of-the-box Sitecore 9 Commerce installation, there are no stopwords specified. This doesn’t change our search query in any way.
  3. SGF is the SynonymGraphFilterFactory. This component reads a list of synonyms from CatalogItemsScope\conf\synonyms.txt . . . and “television” is one of the samples they provide in that file, so now our query includes those synonyms:
    • synonyms.txt includes this entry for example purposes:
      • Television, Televisions, TV, TVs
  4. The final step, LCF, is to run through LowerCaseFilterFactory, which takes “AWESOME” to “awesome” to ensure case isn’t evaluated in the query. (A sketch of how this query analyzer chain is typically defined in the Solr schema follows this list.)
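For reference, here’s roughly how that query-side analyzer chain is defined behind a field type like text_general. This is a sketch based on Solr’s stock configsets rather than the Sitecore-generated schema, so the exact filters and file names in your CatalogItemsScope managed-schema may differ:

<fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="query">
    <!-- ST: splits on whitespace and punctuation -->
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- SF: drops any words listed in stopwords.txt -->
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
    <!-- SGF: expands terms using synonyms.txt -->
    <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
    <!-- LCF: lower-cases everything -->
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>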

To drive home the point of this example, change the Fieldname to “text_en” from “text_general” and click the blue “Analyse Values” button. The results are different because the text_general and text_en Solr fields have a different set of components defined for the Query operation. Specifically, text_en adds the following (a sketch of a typical text_en query analyzer follows the list):

  • EPF (EnglishPossessiveFilterFactory)
  • SKF (KeywordMarkerFilterFactory)
  • PSF (PorterStemFilterFactory)
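Again as a sketch from Solr’s stock configsets (confirm against your own managed-schema), a typical text_en query analyzer layers those extra filters onto the same chain:

<fieldType name="text_en" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- strips trailing possessives ("television's" becomes "television") -->
    <filter class="solr.EnglishPossessiveFilterFactory"/>
    <!-- words listed in protwords.txt are protected from stemming -->
    <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
    <!-- the Porter stemmer itself -->
    <filter class="solr.PorterStemFilterFactory"/>
  </analyzer>
</fieldType>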

Here’s what the Solr Analysis screen looks like:

step3

There is a lot that could be said about all this, and I’ll probably build on it in a future blog post . . . but I want to return to Anton’s original point about the Porter stemmer and highlight just how powerful it can be. Solr’s text_general is significantly different from text_en, and hopefully I’ve shed light on precisely how they differ on the query side. The docs at http://snowball.tartarus.org/algorithms/porter/stemmer.html review this in some detail. I’ve also used this online Porter stemmer to quickly see how words are decomposed into their key fragments for search relevancy. You don’t really need that online stemming tool, however, since you now see how to use the Solr administrative UI with the standard “text_en” field to have your own Porter stemming sandbox.

Quick note on onPublishEndAsyncSingleInstance vs onPublishEndAsync

This is more a note for my benefit — for search index update strategies, onPublishEndAsyncSingleInstance effectively makes onPublishEndAsync a deprecated option.
The legacy onPublishEndAsync remains to ensure backwards compatibility, but from Sitecore 9.0 onward onPublishEndAsyncSingleInstance is the default index update strategy used by Sitecore.
With that said, it appears in Sitecore 9.0 update-2 there’s a major defect with OnPublishEndAsynchronousSingleInstanceStrategy. The ContentSearch.ParallelIndexing.MaxThreadLimit setting is ignored by the onPublishEndAsyncSingleInstance strategy — so incorrect thread limits can be used (slow perf!). Sitecore’s patch reference # 285903 can be requested through Sitecore Support to address this.
I suppose it’s a consequence of the new onPublishEndAsyncSingleInstance not having a mature and well-tested codebase surrounding it (onPublishEndAsync has been around for ages!).
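For reference, this is roughly how the strategy is wired into the shipped Sitecore 9 Solr index definitions — a trimmed sketch, so check your own Sitecore.ContentSearch.Solr.Index.*.config files for the exact markup:

<index id="sitecore_web_index" type="Sitecore.ContentSearch.SolrProvider.SolrSearchIndex, Sitecore.ContentSearch.SolrProvider">
  <!-- param, configuration and locations elements trimmed for brevity -->
  <strategies hint="list:AddStrategy">
    <!-- the 9.x default; only fall back to onPublishEndAsync if you have a specific reason -->
    <strategy ref="contentSearch/indexConfigurations/indexUpdateStrategies/onPublishEndAsyncSingleInstance" />
  </strategies>
</index>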

 

The Game Is Afoot . . . Solr Shenanigans for Sitecore (Part 2)

Following on from Part 1 where I introduced what I’m up to here, let me jump right in to the other 5 Shenanigans for Solr + Sitecore:

Case 4 – The case of the default query crippler

  • In this scenario, a customer’s Solr was straining to the breaking point and we tracked it down to a set of circumstances where Sitecore was using the default value for ContentSearch.SearchMaxResults (an empty string, which is treated as int.MaxValue, i.e. 2,147,483,647) and flooding Solr with essentially unbounded queries. That default is downright dangerous (a patch to cap it follows this list). The query logs showed the queries using a rows value of int.MaxValue in rapid succession:
  • INFO Solr Query – ?q=associated_docs:("\*A6C71A21-47B5-156E-FBD1-B0E5EFED4D33\*")&rows=2147483647&fq=_indexname:(domain_index_web)
    INFO Solr Query – ?q=((_fullpath:(\/sitecore/content/Branches/XYZ/*) AND _templates:(d0321826b8cd4f57ac05816471ba3ebc)))&rows=2147483647&fq=_indexname:(domain_index_web)
  • Solr will set aside memory for the 2,147,483,647 results even if the dataset isn’t that large. I discussed a scenario like this in detail in this earlier post from 2018.
  • This write-up on Solr and “Large Number of Rows” speaks exactly to this scenario: https://risdenk.github.io/2018/10/21/apache-solr-out-of-memory-symptoms-and-solutions.html
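A simple settings patch caps that default; this is a sketch, and the actual ceiling should be whatever upper bound genuinely makes sense for your queries:

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/" xmlns:set="http://www.sitecore.net/xmlconfig/set/">
  <sitecore>
    <settings>
      <!-- an empty value is treated as int.MaxValue; cap it at a sane number of rows -->
      <setting name="ContentSearch.SearchMaxResults" set:value="500" />
    </settings>
  </sitecore>
</configuration>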

Case 5 – The case of the bandwidth blowout

  • Network bandwidth usage was off the charts for the customer we considered in this scenario. It took some digging, but we discovered it was due to a 23 GB Solr core being replicated across data centers. If one viewed the replication panel in the Solr UI, one could see the slow creep of the replication progress bar and it would never reach 100% complete before starting over.

shen5

    • There was additional supporting material such as Solr WARN messages etc:

shen6

    • The network latency was too much for Solr master/slave replication to complete its work, but Solr kept on trying to move that 23 GB Solr core across the planet . . . and since this was the sitecore_analytics_index it was kept very busy by Sitecore. It all made for a feedback loop of frequent updates to the analytics index that couldn’t properly synchronize between data centers.
    • For this particular scenario, we determined that there wasn’t a need to replicate the sitecore_analytics_index (it was consumed only by the CM environment, which didn’t require the geographical scaling through Solr replication). We disabled the master/slave replication for that specific Solr core (see the sketch below) and the tidal wave of network traffic stopped. Case closed!
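For context, the cross-data-center sync in a setup like this is driven by the standard Solr master/slave replication handler in the core’s solrconfig.xml; a sketch of that section follows (host names and property values here are placeholders). Disabling the slave side — or the whole handler — for the analytics core is what stops the traffic:

<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="enable">${enable.master:false}</str>
    <str name="replicateAfter">commit</str>
  </lst>
  <lst name="slave">
    <!-- setting enable to false stops this core from polling the remote master -->
    <str name="enable">${enable.slave:false}</str>
    <str name="masterUrl">http://solr-master.example.com:8983/solr/sitecore_analytics_index</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>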

Case 6 – The case of the misguided, well-intentioned, administrator

  • This scenario introduces a server administrator into the equation — one who actually caused more harm than good. The “optimize now” button in the Solr UI lured this administrator into clicking it without understanding the consequences:

OptimizeNow.JPG

  • I posted about this in detail last year, so I won’t dig too far into it here, but the gist of this scenario is how Solr organizes its index files internally and how questionable the UI treatment of that Optimize Now button is. It makes optimization look like an easy way to improve Solr performance when — in reality — clicking Optimize Now can be quite expensive in terms of perf, especially for a volatile Solr core.

Case 7 – The case of the AppPool recycle-fest

  • This scenario is one from a couple years ago, but it’s still relevant as a cautionary tale. For a long time, if Sitecore lost an active connection to Solr, the only option was to recycle the Sitecore AppPool. For a Solr server restart, or service restart, or even a transient network failure . . . Sitecore would need to run through the application initialization logic to reacquire Solr connectivity. In this specific case, there was a recurring network issue that interfered with Sitecore’s connectivity to Solr, so the customer scheduled IIS AppPool recycles every 15 minutes to ensure a fresh connection to Solr was available. This AppPool recycle-fest has terrible consequences for website performance as the site is constantly spending time on recycles and the related pipeline of events.
  • This case highlights why there are now more elegant ways of handling this; I recently blogged about the IsSolrAliveAgent designed to solve this exact problem. There’s periodic logic to reconnect Sitecore with Solr now, and it’s important to appreciate why it’s there and — probably — why you may want to tune the default setting of every 10 minutes for your production environment.

That’s the 7 Shenanigans related to Sitecore and Solr from my talk earlier in October. It’s a fun paradigm for learning more about the overlaps of Sitecore and Solr and I hope it helps others to get more from their Sitecore + Solr technology stack!

The Game Is Afoot . . . Solr Shenanigans for Sitecore (Part 1)

I took the challenge of presenting at the Manchester, New Hampshire Sitecore User Group a few days ahead of the 2018 Sitecore Symposium. I say challenge because

  1. Delivering content that isn’t superseded by the Sitecore Symposium agenda can be difficult (all the good Commerce or Azure or DevOps material would be saved for Symposium week)
  2. I would be following Michael West and his showcase of the new Sitecore PowerShell Extensions version 5.0 module (curious that https://doc.sitecorepowershell.com/releases doesn’t list the 5.0 yet, but I’m sure it’s coming there too).

As I finalized my topic, I surveyed the work I’ve been up to recently and figured I could take my talk in one of two directions that would be generally absent from the Sitecore Symposium agenda: Sitecore and Solr, or The Heresy of Sitecore on AWS. While we are doing some interesting things around AWS with RDS and ElastiCache, that’s officially unsupported territory with Sitecore and not fully baked enough for me to present as anything approaching a best practice — but check back with me in 6 months. So, I elected to give a talk on Solr and explore some of the lessons learned from years in the trenches making Sitecore successful with Solr; the topic was finalized as Solr, Sitecore 9, 7 Shenanigans:

shenanigans
The title slide from the talk

I covered some of the history and underpinnings of Solr with regards to Sitecore, the dependence on Solr.Net (which is NOT a port of Solr to .Net the way Lucene.Net IS a port of Java’s Lucene — and why we should care), and common architecture patterns for Sitecore integrations based on Solr master/slave and Solr Cloud. I guess I’ve blogged a lot about Solr over the years; for instance, here and here are a couple sample areas I delved into.

I think the most fun part of the talk was the Shenanigans, however, as I went with a Sherlock Holmes theme to frame the conversation. I reviewed 7 cases and we had fun digging into some of the diagnostic bits.

shenanigans2

Here’s a quick run down of the first 3 Shenanigans:

  1. The case of the disappearing Java
    • Where we started with Solr that wouldn’t start for a set of Production servers . . .

shen1.JPG

    • . . . and eventually solved the case by determining that the development team had installed a Java SDK with auto-update enabled, and the updater had removed the Java version identified in the ClassPath. This is a brutal one for a Production implementation!
  2. The case of the underachieving mega-server
    • These OutOfMemoryErrors are not fun and can be due to a variety of issues:
      • shen2.JPG
    • In this case, we determined this 32 GB server was running Solr with the default 512 MB of memory set aside for Solr . . . a pretty fundamental issue (a 512 MB default on a 32 GB server).
    • We tuned the Solr start settings to use more of the server capacity for Java and Solr, in this case I think we used 10 GB as a starting point, and solved this specific case. This isn’t the last we’ll hear about OutOfMemory errors in our Shenanigans, however, as there can be many causes (see this great summary published just yesterday, for example).
  3. The case of the disappearing content
    • This scenario had a public-facing website’s content disappearing periodically after content publishes . . . sound familiar to anyone? It was due to the threshold that triggers a full rebuild of the search indexes after a Sitecore publish being set low enough to fire regularly.
      • For the record, the setting is ContentSearch.FullRebuildItemCountThreshold and the default is 100,000 (0x186a0): Shen3

    • This customer didn’t have SwitchOnRebuild implemented for these key public-facing search indexes, so the first step of the index rebuild logic was to remove all documents from the Solr collection, then add the documents back in as the indexing process ran its course. To the site visitor this created missing or inconsistent search results while the rebuilding took place, and for a large set of items rebuilding can take 60 minutes or longer.
    • The solution is to use the SwitchOnRebuild implementation for their Sitecore search indexes (a sketch follows this list) – https://doc.sitecore.net/sitecore_experience_platform/setting_up_and_maintaining/search_and_indexing/indexing/switch_solr_indexes and related documentation from Sitecore covers this process.
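As a sketch of what the SwitchOnRebuild change looks like (param names here follow Sitecore’s documentation for the non-SolrCloud provider — verify them against the docs linked above for your version), the index definition swaps to the SwitchOnRebuildSolrSearchIndex type and gains a second core for rebuilds:

<index id="sitecore_web_index" type="Sitecore.ContentSearch.SolrProvider.SwitchOnRebuildSolrSearchIndex, Sitecore.ContentSearch.SolrProvider">
  <param desc="name">$(id)</param>
  <param desc="core">$(id)</param>
  <!-- the second Solr core the rebuild writes to before Sitecore switches over -->
  <param desc="rebuildcore">$(id)_rebuild</param>
  <param desc="propertyStore" ref="contentSearch/indexConfigurations/databasePropertyStore" param1="$(id)" />
  <!-- configuration, strategies and locations elements as in the standard index definition -->
</index>

While you’re in there, it’s also worth checking that ContentSearch.FullRebuildItemCountThreshold isn’t set so low that routine publishes trigger full rebuilds more often than intended.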

I will cover the remaining four Solr + Sitecore Shenanigans in my next post; here’s a teaser for the topics:

  • Case #4 – The case of the default query crippler
  • Case #5 – The case of the bandwidth blowout
  • Case #6 – The case of the misguided, well-intentioned, administrator
  • Case #7 – The case of the AppPool recycle-fest

As Sherlock Holmes would say: “The game is afoot!”

The tale of the IsSolrAliveAgent for Sitecore

I had the pleasure of assessing a Sitecore 7 implementation the other day and it made me a bit nostalgic for the good old days — OMS had grown into DMS, all nicely contained in a tidy SQL Server database; barely a whisper of Solr; ItemBuckets were the new kid on the block.

It got me thinking about how some features can evolve over time until the obscure becomes mainstream. One example of this feature evolution, that for some reason has relevance to a variety of customers we’re working with right now, is in how Sitecore maintains connections with Solr (or should I say how Sitecore doesn’t maintain connections?).

Many of us have learned hard lessons that if you have a Solr server and it reboots or experiences an interruption in service, it can mean downtime for your Sitecore site unless you take special measures to guard against it. Sitecore’s initialization process connects to Solr and then holds that connection for the lifetime of the IIS AppPool. If that connection fails for any reason, there wasn’t logic in the Sitecore Solr Provider for a graceful reconnect. At least, that was the case for several iterations of Sitecore’s standard Solr search integration.

A few years ago, a basic approach evolved through the Sitecore developer community to correct the above limitation with Sitecore; it may have come from Sitecore support, but it wasn’t publicized in any way. One could explicitly repeat Sitecore’s initialization logic for Solr in a custom agent, scheduled to run periodically. This was a custom solution and I saw a few iterations of it — mostly a brute force approach. But it worked.

In Sitecore 8.2 update-1, buried in the Release Notes, is this fragment relevant to the story:

Solr

The Sitecore.ContentSearch.SolrProvider, from that point onward, contained a new agent defined in Sitecore.ContentSearch.Solr.DefaultIndexConfiguration.config named IsSolrAliveAgent that would serve as a retryer for Solr if the connectivity was lost.  It was configured to run every 10 minutes for a default implementation.

By the way, 10 minutes is probably too long an interval in my experience — even 1 minute can be too long for a production environment to wait before trying to reconnect a key component such as Solr. Also, if you set the agent to 1 minute, but have a /sitecore/scheduling/frequency value defined as something like 10 minutes, you need to change the frequency value to ensure the IsSolrAliveAgent is executed on the schedule you expect.

What had been a little-known approach to keeping Sitecore connected to Solr across interruptions had made the big time: it was now part of the official Sitecore search provider code!

This first implementation included in the Sitecore code base wasn’t perfect, though, and it was improved upon in subsequent releases of Sitecore. Improvements include better logging . . . more efficient iteration through the search indexes . . . but the general approach remains the same.

At this point, there’s an official patch for the IsSolrAliveAgent that Sitecore makes available at https://github.com/SitecoreSupport/Sitecore.Support.163850.171950/releases/tag/8.2.6.0 and please note the compatibility caveats on that repo. We do have a Sitecore 8.2 update-5 customer making use of the patch without issue, even though it’s not officially listed as compatible for that version — but that could be exceptional in our case, so always perform your own evaluations, tests, etc. This patch addresses a known issue where, if Solr is unavailable during Sitecore initialization, the SwitchOnRebuildSolrCloudSearchIndex indexes are not properly initialized.

It’s interesting — at least to a Sitecore/Solr nerd like me — to decompile and compare all the changes over time to this component; there’s logging and tests and generally better code present in the newest iteration of this IsSolrAliveAgent vs the earlier implementations. Some of the changes are subtle, but the evolution of this IsSolrAliveAgent from the days of “hey, we’ve got this homegrown ten lines of code that reconnects Sitecore to Solr if Solr goes down” is remarkable.

In some ways, it parallels the progress of Sitecore as an entire platform. The CMS became a “personalization platform” which now is adding a Commerce ecosystem. OMS became DMS which became xDB. On we go.

At this point, adding a patch .config file such as the following to your Sitecore project is the state of the art in Sitecore and Solr connectivity . . . but it surely will be improved upon over time:

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/" xmlns:set="http://www.sitecore.net/xmlconfig/set/">
  <sitecore>
    <scheduling>
      <agent type="Sitecore.ContentSearch.SolrProvider.Agents.IsSolrAliveAgent, Sitecore.ContentSearch.SolrProvider">
        <patch:attribute name="type">Sitecore.Support.ContentSearch.SolrProvider.Agents.IsSolrAliveAgent, Sitecore.Support.163850.171950</patch:attribute>
        <patch:attribute name="interval">00:01:00</patch:attribute>
      </agent>
    </scheduling>
  </sitecore>
</configuration>

 

Curious case of the LocationsDictionaryCache

We have some cache monitoring in place for some enterprise Sitecore customers and that system has found one particular cache being evicted roughly 40 times per hour for one particular customer. It’s curious, as this cache isn’t covered in the standard documentation for Sitecore’s caches. Sitecore support had to point me in the right direction on this . . . and it’s an interesting case.

The LocationsDictionaryCache cache is defined in Sitecore.Analytics.Data.Dictionaries.LocationsDictionary and there’s a hard-coded 600 second expiration in that object definition:

Expires

There’s also a maxCacheSize set to 0xf4240 (1,000,000 bytes, or 976 KB). You can’t alter these settings through configuration; you’d have to compile a new DLL to alter these values.

It’s not clear to me that the quick eviction/turn-over of this cache is a perf issue to worry about . . . I think, at this point, it’s working as expected and indicative of a busy site with lots of Sitecore Analytics and Tracking behaviour. Reviewing the decompiled Sitecore code that uses this class (like in Sitecore.Analytics.Tracking.CurrentVisitContext or Sitecore.Analytics.Pipelines.CommitSession.ProcessSubscriptions), it appears that this cache serves as a short-term lookup along the same lines as devices, user-agents, etc. Why this particular cache is active in ways UserAgentDictionaryCache and others is not, besides the obvious 600 second life span, is something we need to dig further into — but I don’t know that it’s a perf bottleneck for our given scenario.