Walkthrough of Solr Query Analysis for Sitecore

Anton Tishchenko wrote a good quick piece on “stemming” for Solr, and I wanted to build on what he shared.

Solr is generally way underutilized by the Sitecore implementations I’ve seen. It makes for plenty of territory for blogging, though, so maybe bit-by-bit the word will get out about what a powerful distributed system Solr really is.

Let me set up a specific example using Sitecore 9 update-2 with Commerce. I’ll use the CatalogItemsScope Solr core, but you could review most any Solr core with the standard Sitecore schema.

Let me pause here to explain where the default Sitecore Commerce Solr Configuration defines this search index, because it took some digging. In the  . . . SIF\Configuration\Commerce\Solr\sitecore-commerce-solr.json file you’ll find:

// The names of the cores to create
"Customers.Name": "[concat(parameter('CorePrefix'), 'CustomersScope')]",
"Orders.Name": "[concat(parameter('CorePrefix'), 'OrdersScope')]",
"CatalogItems.Name": "[concat(parameter('CorePrefix'), 'CatalogItemsScope')]",

Using a fresh IaaS install of Sitecore 9 with Commerce, I’ll go to the Solr admin interface. Select the CatalogItemsScope Solr core from the dropdown list on the left side navigation. Choose the “Analysis” option to access this powerful way of evaluating two key operations one does with Solr: indexing and querying.  Solr defines different sets of analyzers for these operations, sort of like how Sitecore exposes the httpRequestBegin pipeline or other extensibility points that projects are always customizing. For this exercise, I’ll focus on the Query operation. The documentation on this Analysis screen has a lot more information on this, if you’re interested.

Enter the search phrase “an AWESOME television!” in the textbox for the Field Value (Query), and then specify text_general as the Fieldname to Analyse:

[screenshot: step1]

Solr is going to run “an AWESOME television!” through the analysis process and show the results as they progress through each step of that analysis. For this write-up, I’ll uncheck the “Verbose Output” checkbox — but definitely play around with that feature as it shows the offsets and ordinal positions as Solr works its magic through each transformation.

After clicking the blue “Analyse Values” button you’ll see output like the following:

[screenshot: step2]

Each row of output starts with an abbreviation (ST, SF, etc). That abbreviation identifies which component of the analyzer has run; the pipe-separated list to the right of the abbreviation shows the search phrase as it progresses through Solr’s transformation logic.

  1. ST is the StandardTokenizerFactory. It removes the “!” punctuation mark, but otherwise doesn’t make any changes to the search query.
  2. SF is the StopFilterFactory. This would remove words listed in the CatalogItemsScope\conf\stopwords.txt file. In this case, with an out-of-the-box Sitecore 9 Commerce installation, there are no stopwords specified. This doesn’t change our search query in any way.
  3. SGF is the SynonymGraphFilterFactory. This component reads a list of synonyms from CatalogItemsScope\conf\synonyms.txt . . . and “television” is one of the samples they provide in that file, so now our query includes those synonyms
    • synonyms.txt includes this entry for example purposes:
      • Television, Televisions, TV, TVs
  4. The final step, LCF, runs through the LowerCaseFilterFactory, which takes “AWESOME” to “awesome” so that case isn’t evaluated in the query.

To drive home the point of this example, change the Fieldname to “text_en” from “text_general” and click the blue “Analyse Values” button. The results are different because the text_general and text_en Solr field types have different sets of components defined for the Query operation. Specifically, text_en adds:

  • EPF (EnglishPossessiveFilterFactory)
  • SKF (KeywordMarkerFilterFactory)
  • PSF (PorterStemFilterFactory)

Here’s what the Solr Analysis screen looks like:

[screenshot: step3]

There is a lot that could be said about all this, and I’ll probably build on this in a future blog post . . . but I want to return to Anton’s original point about the Porter stemmer and highlight how powerful the Porter stemmer can be. Solr’s text_general is significantly different than text_en, and hopefully I’ve shed light on precisely how they differ on the query side. The docs at http://snowball.tartarus.org/algorithms/porter/stemmer.html review this in some detail. I’ve also used this online Porter stemmer to quickly see how words are decomposed into their key fragments for search relevancy. You don’t really need that online stemming tool, however, since you now see how to use the Solr administrative UI with the standard “text_en” field to have your own Porter stemming sandbox.
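If you’d rather script this kind of check than click through the admin UI, the Analysis screen is backed by Solr’s /analysis/field request handler, so you can call it directly. Here’s a minimal C# sketch, assuming that default handler is available; the host and core name are placeholders for your own environment:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class SolrAnalysisSandbox
{
    static async Task Main()
    {
        // Hypothetical host and core -- point this at your own CatalogItemsScope core.
        var baseUrl = "http://localhost:8983/solr/CatalogItemsScope";

        // The admin Analysis screen uses the /analysis/field handler under the hood;
        // analysis.query runs the value through the query-time analyzer chain.
        var url = baseUrl + "/analysis/field" +
                  "?analysis.fieldtype=text_en" +
                  "&analysis.query=" + Uri.EscapeDataString("an AWESOME television!") +
                  "&wt=json";

        using (var client = new HttpClient())
        {
            var json = await client.GetStringAsync(url);
            // The response lists each tokenizer/filter stage (ST, SF, LCF, PSF, ...)
            // and the tokens it produced -- the same data the UI renders.
            Console.WriteLine(json);
        }
    }
}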

Quick note on onPublishEndAsyncSingleInstance vs onPublishEndAsync

This is more a note for my benefit — for search index update strategies, onPublishEndAsyncSingleInstance makes onPublishEndAsync a deprecated option.
The legacy onPublishEndAsync remains to ensure backwards compatibility, but from Sitecore 9.0 onward onPublishEndAsyncSingleInstance is the default index update strategy used by Sitecore.
With that said, it appears in Sitecore 9.0 update-2 there’s a major defect with OnPublishEndAsynchronousSingleInstanceStrategy: the ContentSearch.ParallelIndexing.MaxThreadLimit setting is ignored by the onPublishEndAsyncSingleInstance strategy, so incorrect thread limits can be used (slow perf!). Sitecore’s patch reference #285903 can be requested through Sitecore Support to address this.
I suppose it’s a consequence of the new onPublishEndAsyncSingleInstance not having a mature and well-tested codebase surrounding it (onPublishEndAsync has been around for ages!).

 

The Game Is Afoot . . . Solr Shenanigans for Sitecore (Part 2)

Following on from Part 1 where I introduced what I’m up to here, let me jump right into the remaining four Shenanigans for Solr + Sitecore:

Case 4 – The case of the default query crippler

  • In this scenario, a customer’s Solr was straining to the breaking point and we tracked it down to a set of circumstances where Sitecore was using the default value for ContentSearch.SearchMaxResults (it defaults to “”, which resolves to int.MaxValue, or 2,147,483,647) and flooding Solr with essentially unbounded queries. That default is downright dangerous. The query logs showed the queries using a rows value of int.MaxValue in rapid succession:
  • INFO Solr Query – ?q=associated_docs:(“\*A6C71A21-47B5-156E-FBD1-B0E5EFED4D33\*”)&rows=2147483647&fq=_indexname:(domain_index_web)
    INFO Solr Query – ?q=((_fullpath:(\/sitecore/content/Branches/XYZ/*) AND _templates:(d0321826b8cd4f57ac05816471ba3ebc)))&rows=2147483647&fq=_indexname:(domain_index_web)
  • Solr will set aside some memory for the 2,147,483,647 results even if the dataset isn’t that large. I discussed a scenario like this in detail in this earlier post from 2018
  • This write-up on Solr and “Large Number of Rows” speaks exactly to this scenario: https://risdenk.github.io/2018/10/21/apache-solr-out-of-memory-symptoms-and-solutions.html

Case 5 – The case of the bandwidth blowout

  • Network bandwidth usage was off the charts for the customer we considered in this scenario. It took some digging, but we discovered it was due to a 23 GB Solr core being replicated across data centers. If one viewed the replication panel in the Solr UI, one could see the slow creep of the replication progress bar and it would never reach 100% complete before starting over.

[screenshot: shen5]

    • There was additional supporting material such as Solr WARN messages etc:

[screenshot: shen6]

    • The network latency was too much for Solr master/slave replication to complete its work, but Solr kept on trying to move that 23 GB Solr core across the planet . . . and since this was the sitecore_analytics_index it was kept very busy by Sitecore. It all made for a feedback loop of frequent updates to the analytics index that couldn’t properly synchronize between data centers.
    • For this particular scenario, we determined that there wasn’t a need to replicate the sitecore_analytics_index (it was consumed only by the CM environment which didn’t require the geographical scaling through Solr replication). We disabled the master/slave replication for that specific Solr core and the tidal wave of network traffic stopped. Case closed!

Case 6 – The case of the misguided, well-intentioned, administrator

  • This scenario introduces a server administrator into the equation who actually caused more harm than good. The “optimize now” button in the Solr UI lured this administrator into clicking it without understanding the consequences:

[screenshot: the Optimize Now button in the Solr admin UI]

  • I posted about this in detail last year, so I won’t dig too far into it here, but the gist of this scenario illustrates how Solr internally organizes files and how the questionable UI presentation of that Optimize Now button makes it look like an easy way to improve Solr performance when — in reality — clicking Optimize Now can be pretty expensive in terms of perf, especially for a volatile Solr core.

Case 7 – The case of the AppPool recycle-fest

  • This scenario is one from a couple years ago, but it’s still relevant as a cautionary tale. For a long time, if Sitecore lost an active connection to Solr, the only option was to recycle the Sitecore AppPool. For a Solr server restart, or service restart, or even a transient network failure . . . Sitecore would need to run through the application initialization logic to reacquire Solr connectivity. In this specific case, there was a recurring network issue that interfered with Sitecore’s connectivity to Solr, so the customer scheduled IIS AppPool recycles every 15 minutes to ensure a fresh connection to Solr was available. This AppPool recycle-fest has terrible consequences for website performance as the site is constantly spending time on recycles and the related pipeline of events.
  • This case highlights why there are now more elegant ways of handling this; I recently blogged about the IsSolrAliveAgent designed to solve this exact problem. There’s periodic logic to reconnect Sitecore with Solr now, and it’s important to appreciate why it’s there and — probably — why you may want to tune the default setting of every 10 minutes for your production environment.

That’s the 7 Shenanigans related to Sitecore and Solr from my talk earlier in October. It’s a fun paradigm for learning more about the overlaps of Sitecore and Solr and I hope it helps others to get more from their Sitecore + Solr technology stack!

The Game Is Afoot . . . Solr Shenanigans for Sitecore (Part 1)

I took the challenge of presenting at the Manchester, New Hampshire Sitecore User Group a few days ahead of the 2018 Sitecore Symposium. I say challenge because

  1. Delivering content that isn’t superseded by the Sitecore Symposium agenda can be difficult (all the good Commerce or Azure or DevOps material would be saved for Symposium week)
  2. I would be following Michael West and his showcase of the new Sitecore PowerShell Extensions version 5.0 module (curious that https://doc.sitecorepowershell.com/releases doesn’t list the 5.0 yet, but I’m sure it’s coming there too).

As I finalized my topic, I surveyed the work I’ve been up to recently and figured I could take my talk in one of two directions that would be generally absent from the Sitecore Symposium agenda: Sitecore and Solr, or The Heresy of Sitecore on AWS. While we are doing some interesting things around AWS with RDS and ElastiCache, that’s officially unsupported territory with Sitecore and not fully baked enough for me to present it as anything approaching a best practice — but check back with me in 6 months. So, I elected to give a talk on Solr and explore some of the lessons learned from years in the trenches making Sitecore successful with Solr; the topic was finalized as Solr, Sitecore 9, 7 Shenanigans:

[image: shenanigans]
The title slide from the talk

I covered some of the history and underpinnings of Solr with regards to Sitecore, the dependence on Solr.Net (which is NOT a port of Solr to .Net the way Lucene.Net IS a port of Java’s Lucene — and why we should care), and common architecture patterns for Sitecore integrations based on Solr master/slave and Solr Cloud. I guess I’ve blogged a lot about Solr over the years; for instance, here and here are a couple sample areas I delved into.

I think the most fun part of the talk was the Shenanigans, however, as I went with a Sherlock Holmes theme to frame the conversation. I reviewed 7 cases and we had fun digging into some of the diagnostic bits.

[image: shenanigans2]

Here’s a quick rundown of the first 3 Shenanigans:

  1. The case of the disappearing Java
    • Where we started with Solr that wouldn’t start for a set of Production servers  . . .

[screenshot: shen1.JPG]

    • . . . and eventually solved the case by determining that the development team had installed a Java SDK with auto-update enabled and the system had removed the Java identified in the ClassPath. This is a brutal one for a Production implementation!
  2. The case of the underachieving mega-server
    • These OutOfMemoryErrors are not fun and can be due to a variety of issues:
      • [screenshot: shen2.JPG]
    • In this case, we determined this 32 GB server was running Solr with the default 512 MB of memory set aside for Solr . . . a pretty fundamental issue:
      • [image: 512 MB default of 32 GB]
    • We tuned the Solr start settings to use more of the server capacity for Java and Solr, in this case I think we used 10 GB as a starting point, and solved this specific case. This isn’t the last we’ll hear about OutOfMemory errors in our Shenanigans, however, as there can be many causes (see this great summary published just yesterday, for example).
  3. The case of the disappearing content
    • This scenario had a public-facing website’s content disappear periodically after content publishes . . . sound familiar to anyone? It was due to the threshold for fully rebuilding the search indexes after a Sitecore publish being set low enough to trigger regularly.
      • For the record, the setting is ContentSearch.FullRebuildItemCountThreshold and the default is 100,000 (0x186a0):
      • [screenshot: Shen3]

    • This customer didn’t have SwitchOnRebuild implemented for these key public-facing search indexes, so the first step of the index rebuild logic was to remove all documents from the Solr collection, then add the documents back in as the indexing process ran its course.  To the site visitor it created missing or inconsistent search results while the rebuilding took place, and for a large set of items rebuilding can take 60 minutes or longer.
    • The solution is to use the SwitchOnRebuild implementation for their Sitecore search indexes – https://doc.sitecore.net/sitecore_experience_platform/setting_up_and_maintaining/search_and_indexing/indexing/switch_solr_indexes and related documentation from Sitecore covers this process.

I will cover the remaining four Solr + Sitecore Shenanigans in my next post; here’s a teaser for the topics:

  • Case #4 – The case of the default query crippler
  • Case #5 – The case of the bandwidth blowout
  • Case #6 – The case of the misguided, well-intentioned, administrator
  • Case #7 – The case of the AppPool recycle-fest

As Sherlock Holmes would say: “The game is afoot!”

The tale of the IsSolrAliveAgent for Sitecore

I had the pleasure of assessing a Sitecore 7 implementation the other day and it made me a bit nostalgic for the good old days — OMS had grown into DMS, all nicely contained in a tidy SQL Server database; barely a whisper of Solr; ItemBuckets were the new kid on the block.

It got me thinking about how some features can evolve over time until the obscure becomes mainstream. One example of this feature evolution, that for some reason has relevance to a variety of customers we’re working with right now, is in how Sitecore maintains connections with Solr (or should I say how Sitecore doesn’t maintain connections?).

Many of us have learned hard lessons that if you have a Solr server and it reboots or experiences an interruption in service, it can mean downtime for your Sitecore site unless you take special measures to guard against it. Sitecore’s initialization process connects to Solr and then holds that connection for the lifetime of the IIS AppPool. If that connection failed for any reason, there was no logic in the Sitecore Solr Provider for a graceful reconnect. At least, that was the case for several iterations of Sitecore’s standard Solr search integration.

A few years ago, a basic approach evolved through the Sitecore developer community to correct the above limitation with Sitecore; it may have come from Sitecore support, but it wasn’t publicized in any way. One could explicitly repeat Sitecore’s initialization logic for Solr in a custom agent, scheduled to run periodically. This was a custom solution and I saw a few iterations of it — mostly a brute force approach. But it worked.

In Sitecore 8.2 update-1,  buried in the Release Notes is this fragment relevant to the story:

[release notes excerpt: Solr]

The Sitecore.ContentSearch.SolrProvider, from that point onward, contained a new agent defined in Sitecore.ContentSearch.Solr.DefaultIndexConfiguration.config named IsSolrAliveAgent that would serve as a retry mechanism if connectivity to Solr was lost.  It was configured to run every 10 minutes in a default implementation.

By the way, 10 minutes is probably too long an interval in my experience — even 1 minute can be too long for a production environment to wait before trying to reconnect a key component such as Solr. Also, if you set the agent to 1 minute, but have a /sitecore/scheduling/frequency value defined as something like 10 minutes, you need to change the frequency value to ensure the IsSolrAliveAgent is executed on the schedule you expect.

What had been a little-known approach to keeping Sitecore connected to Solr across interruptions had made the big time: it was now part of the official Sitecore search provider code!

This first implementation included in the Sitecore code base wasn’t perfect, though, and it was improved upon in subsequent releases of Sitecore. Improvements include better logging . . . more efficient iteration through the search indexes . . . but the general approach remains the same.

At this point, there’s an official patch for the IsSolrAliveAgent that Sitecore makes available at https://github.com/SitecoreSupport/Sitecore.Support.163850.171950/releases/tag/8.2.6.0 and please note the compatibility caveats on that repo. We do have a Sitecore 8.2 update-5 customer making use of the patch without issue, even though it’s not officially listed as compatible for that version — but that could be exceptional in our case, so always perform your own evaluations, tests, etc. This patch addresses a known issue where, if Solr is unavailable during Sitecore initialization, the SwitchOnRebuildSolrCloudSearchIndex indexes are not properly initialized.

It’s interesting — at least to a Sitecore/Solr nerd like me — to decompile and compare all the changes over time to this component; there’s logging and tests and generally better code present in the newest iteration of this IsSolrAliveAgent vs the earlier implementations. Some of the changes are subtle, but the evolution of this IsSolrAliveAgent from the days of “hey, we’ve got this homegrown ten lines of code that reconnects Sitecore to Solr if Solr goes down” is remarkable.

In some ways, it parallels the progress of Sitecore as an entire platform. The CMS became a “personalization platform” which now is adding a Commerce ecosystem. OMS became DMS which became xDB. On we go.

At this point, adding a patch .config file such as the following to your Sitecore project is the state of the art in Sitecore and Solr connectivity . . . but it surely will be improved upon over time:

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/" xmlns:set="http://www.sitecore.net/xmlconfig/set/">
  <sitecore>
    <scheduling>
      <agent type="Sitecore.ContentSearch.SolrProvider.Agents.IsSolrAliveAgent, Sitecore.ContentSearch.SolrProvider">
        <patch:attribute name="type">Sitecore.Support.ContentSearch.SolrProvider.Agents.IsSolrAliveAgent, Sitecore.Support.163850.171950</patch:attribute>
        <patch:attribute name="interval">00:01:00</patch:attribute>
      </agent>
    </scheduling>
  </sitecore>
</configuration>

 

Which Solr Node Is Responding to My Sitecore Query?

In working with Sitecore + Solr, eventually you may need to determine which Solr server responded to a specific query in order to validate Solr replication or to compare Solr responses across machines. If you can’t imagine a reason why you’d want to do this, you’re probably fortunate and should maybe go buy a lottery ticket instead of reading this blog post 🙂

Brute Force Method with Solr Master/Slave

If you’re working with Solr master/slave behind a load-balancer, or with multiple slaves behind a load-balancer, I haven’t found a reliable way of determining which Solr server responded to a particular query besides the brute force method of comparing Sitecore and Solr logs. Specifically, from the Sitecore logs in Data\logs\Search.Logs.[Timestamp].txt you should see something like the following for each query:

Sitecore’s Query Log

15:11:48 INFO Serialized Query – ?q=(_template:(f613d8a8d9324b5f84516424f49c9102) AND (-_name:("__Standard Values") AND _language:(en-US)))&start=0&rows=1&fl=*,score&fq=_indexname:(sitecore_web_index)&sort=searchdate_tdt desc

Solr’s Query Log

You can cross-reference this Sitecore log data with Solr logs (like S:\solr-4.10.4\sitecore\logs\solr.log):

INFO – 2018-08-01 15:11:48.514; org.apache.solr.core.SolrCore; [sitecore_web_index] webapp=/solr path=/select params={q=(_template:(f613d8a8d9324b5f84516424f49c9102)+AND+(-_name:("__Standard+Values")+AND+_language:(en-US)))&fl=*,score&start=0&fq=_indexname:(sitecore_web_index)&sort=searchdate_tdt+desc&rows=1&version=2.2} hits=2958 status=0 QTime=16

It’s tedious to match up the exact query and time in these logs, but the Solr node with the matching record will reveal which one serviced the request.

Now, one can craft some PowerShell to scrounge the Sitecore and Solr logs and determine where we have matches — crazy as it may sound — but I’m not interested in sharing all that here. It’s academic to read the two logs and look for matches, anyway, so I’ll move on to the Solr Cloud scenario that is more interesting and forward-looking since Sitecore is steadily progressing towards full Solr Cloud support across the board.

Solr Cloud Debug Query Command

Solr Cloud supports a debug command where you append debug=true to the URL and Solr includes diagnostic output in the results. For example, a RESTful query to Solr like http://10.215.118.28:8983/solr/sitecore_web_index_shard1_replica2/select?q=_name%3ANEWS&wt=xml&indent=true&debug=true. Using the XML formatting, debug=true adds something like this to the response from Solr:

[screenshot: the debug sections added to the Solr response]

There can be interesting tidbits in each of those debug sections, but I’m going to focus on the track node that shares information about the different phases of the distributed request Solr is making. Under the “EXECUTE_QUERY” item is a “name” attribute that will specify which Solr nodes, shards, and replicas were involved in responding to the query, for example:

<lst name="http://10.215.140.12:8983/solr/sitecore_web_index_shard2_replica1/|http://10.215.140.13:8983/solr/sitecore_web_index_shard2_replica2/">

I’ve also found the “shard.url” value of the Response (nested under the EXECUTE_QUERY data) to share the same information. It’s possible that’s more reliable across Solr versions etc, but something to keep an eye on. Here’s a fragment of the XML response for the debug information:

[screenshot: the EXECUTE_QUERY fragment of the debug XML]

A careful reader might point out that the “rid” value includes the IP address of the server responding to the request, but this is designed to be the “request ID” that traces the query through Solr’s various moving parts — I wouldn’t rely on the “rid” to tell you the source of the response, though, as it could change across versions.
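If you find yourself doing this check repeatedly, you can pull the shard information out of the debug response programmatically rather than eyeballing the XML. Here’s a rough C# sketch; the host, core name, and query are placeholders, and the exact nesting of the track section can shift between Solr versions, so treat it as a starting point:

using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;
using System.Xml.Linq;

class SolrDebugShards
{
    static async Task Main()
    {
        // Hypothetical Solr Cloud node and collection -- use whichever node you normally query.
        var url = "http://10.215.118.28:8983/solr/sitecore_web_index/select" +
                  "?q=_name:NEWS&wt=xml&debug=true";

        using (var client = new HttpClient())
        {
            var doc = XDocument.Parse(await client.GetStringAsync(url));

            // The debug "track" section nests per-shard entries under EXECUTE_QUERY;
            // each child <lst> is named with the shard/replica URL(s) that handled that phase.
            var shards = doc.Descendants("lst")
                .Where(l => (string)l.Attribute("name") == "EXECUTE_QUERY")
                .Elements("lst")
                .Select(l => (string)l.Attribute("name"));

            foreach (var shard in shards)
                Console.WriteLine(shard);
        }
    }
}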

Here’s a quick run-through of the other diagnostic data in that EXECUTE_QUERY section that I know about:

  1. QTime: the number of milliseconds it took Solr to execute a search with no regard for time spent sending a response across the network etc
  2. ElapsedTime: the number of milliseconds from when the query was sent to Solr until the time the client gets a response. This includes QTime, assembling the response, transmission time, etc.
  3. NumFound: the count of results

There is a ton to all this and we’re only scratching the surface, but as Sitecore gets more serious about scalable search with Solr, we’re all going to be learning a lot more about this in the months and years to come!

Sitecore and SearchMaxResults for Solr

I’ve consulted with a number of Sitecore implementations in the last month or two that had a variety of challenges with their Solr integration. Solr is a powerful distributed system with a variety of angles of interest to Sitecore. I wanted to take this chance to comment on a Sitecore setting that can have a significant impact on how Sitecore search functions, but is easily overlooked. The setting is defined in Sitecore.ContentSearch.config and it’s called ContentSearch.SearchMaxResults. The XML comment for this setting is straightforward; here’s how it’s presented in that file:

[screenshot: the ContentSearch.SearchMaxResults comment in Sitecore.ContentSearch.config]

There’s a lot to digest in that xml comment above. One could read it and assume “this can be set but it is best kept as the default” means this shouldn’t be altered, but in my experience that can be problematic.

The .Net int.MaxValue constant is 2,147,483,647. If you leave this setting at the default (so “”), you’re telling Solr to return up to 2,147,483,647 results in a single response to a query, which we’ve observed in some cases to cause significant performance problems (Solr will fetch the large volume of records from disk and write them to the Solr response, causing IO pressure etc.). It’s not always a problem, since it really comes down to the number of documents one is querying from Solr, but this sets up the potential for a virtually unbounded Solr query.

It’s interesting to trace this setting through Sitecore and into Solr, and it sheds light on how these two complex systems work together. Fun, right!? I cooked up the diagram below that shows an overview of how Sitecore and Solr work together in completing search queries:

[diagram: overview of how Sitecore and Solr work together to complete search queries]

Each application has its own logging, which will help trace activity between the systems.

The Sitecore ContentSearch Provider for Solr relies on Solr.Net for connectivity to Solr. It’s common for .Net libraries to copy their open source equivalents from the Java world (like Log4J has a .Net port for logging named Log4net, Lucene has a .Net port for search called Lucene.Net, etc). Solr.Net, however, is not a port of the Solr Java application to .Net. Instead, Solr.Net is a wrapper for the main Solr API elements that can be easily called by .Net applications. When it comes to Sitecore’s ContentSearch Provider for Solr, Solr.Net is Sitecore’s bridge for getting data to and from the Solr application.

Just an aside: some projects do creative things with Solr search and Sitecore, and for certain scenarios it’s necessary to bypass Solr.Net and use the REST API directly from Sitecore. This write-up focuses on a conventional Sitecore -> Solr.Net -> Solr pattern, but I wanted to acknowledge that it’s not the only pattern.

Tracking ContentSearch.SearchMaxResults in Sitecore

On the Sitecore side, one can see the ContentSearch.SearchMaxResults setting in the Sitecore logs when you turn up the diagnostics to get more granular data; this isn’t a configuration that’s recommended beyond a discrete troubleshooting session, as the amount of data it can generate can be significant . . . but here’s how you dial up the diagnostic data Sitecore reports about queries:

[screenshot: turning up Sitecore’s query diagnostics]

If you run a few queries that exercise your Sitecore implementation code that queries Solr, you can review the contents of the Search log in the Sitecore /data directory and find entries such as:

INFO Solr Query – ?q=associated_docs:(“\*A5C71A21-47B5-156E-FBD1-B0E5EFED4D33\*”)&rows=2147483647&fq=_indexname:(domain_index_web)

or

INFO  Solr Query – ?q=((_fullpath:(\/sitecore/content/Branches/ABC/*) AND _templates:(d0351826b1cd4f57ac05816471ba3ebc)))&rows=2147483647&fq=_indexname:(domain_index_web)

The .Net int.MaxValue 2147483647 is what Sitecore, through Solr.Net, is specifying as the number of rows to return from this query. For Solr cores with only a few hundred results matching this query, it’s not that big a deal because the query has a fairly small universe to process and retrieve. If you have 100,000 documents, however, that’s a very heavy query for Solr to respond to and it will probably impact the performance of your Sitecore implementation.
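For context on where a rows=2147483647 query comes from in implementation code, here is a hedged sketch of the typical culprit: a ContentSearch LINQ query that calls GetResults() with no Take() or Page() applied. The index name and template ID below are illustrative only (the ID is borrowed from the log line above):

using Sitecore.ContentSearch;
using Sitecore.ContentSearch.Linq;
using Sitecore.ContentSearch.SearchTypes;
using Sitecore.Data;
using System.Linq;

public class UnboundedQueryExample
{
    public void Run()
    {
        var index = ContentSearchManager.GetIndex("domain_index_web");
        using (var context = index.CreateSearchContext())
        {
            // No Take() or Page() here, so the provider falls back to
            // ContentSearch.SearchMaxResults for the Solr rows parameter;
            // with the "" default that means rows=2147483647.
            var results = context.GetQueryable<SearchResultItem>()
                .Where(i => i.TemplateId == ID.Parse("{D0351826-B1CD-4F57-AC05-816471BA3EBC}"))
                .GetResults();
        }
    }
}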

Tracking ContentSearch.SearchMaxResults in Solr

Solr has its own logging system, and this 2147483647 value can be seen in those logs once Solr has completed the API call. In a default Solr installation, the logs will be located at server/logs (check your log4j.properties file if you don’t know for sure where your logs are being stored); you can open up the log and scan for our ContentSearch.SearchMaxResults setting value. You’ll see entries such as:

INFO  – 2018-03-26 21:20:19.624; org.apache.solr.core.SolrCore; [domain_index_web] webapp=/solr path=/select params={q=(_template:(a6f3979d03df4441904309e4d281c11b)+AND+_path:(1f6ce22fa51943f5b6c20be96502e6a7))&fl=*,score&fq=_indexname:(domain_index_web)&rows=2147483647&version=2.2} hits=2681 status=0 QTime=88

  • The above Solr query returned 2,681 results (hits) and the QTime (time elapsed between the arrival of the query request to Solr and the completion of the request handler) was 88 milliseconds. This is probably no big deal as it relates to the ContentSearch.SearchMaxResults, but you don’t know if this data will increase over time…

INFO  – 2018-03-26 21:20:19.703; org.apache.solr.core.SolrCore; [domain_index_web] webapp=/solr path=/select params={q=((((include_nav_xml_b:(True)+AND+_path:(00ada316e3e4498797916f411bc283cf)))+AND+url_s:[*+TO+*])+AND+(_language:(no-NO)+OR+_language:(en)))&fl=*,score&fq=_indexname:( domain_index_web)&rows=2147483647&version=2.2} hits=9 status=0 QTime=16

  • The above Solr query returned 9 results (hits) and the QTime was 16 milliseconds. This is unlikely a problem when it comes to ContentSearch.SearchMaxResults.

 INFO  – 2018-03-26 21:20:19.812; org.apache.solr.core.SolrCore; [domain_index_web] webapp=/solr path=/select params={q=(_template:(8ed95d51A5f64ae1927622f3209a661f)+AND+regionids_sm:(33ada316e3e4498799916f411bc283cf))&fl=*,score&fq=_indexname:(domain_index_web)&rows=2147483647&version=2.2} hits=89372 status=0 QTime=1600

  • The above Solr query returned 89,372 results (hits) and the QTime was 1600 milliseconds. Look out. This is the type of query that could easily cause problems based on the Sitecore ContentSearch.SearchMaxResults setting as the volume of data Solr is working with is getting large. That query time is already climbing high and that’s a lot of results for Sitecore to require in a single request.

The impact of retrieving so many documents from Solr can cause a cascade of difficulties besides just the handling of the query. Solr caches the query results in memory, and if you request 1,000,000 documents you could also be caching 1,000,000 documents. Too much of this activity and it can stress Solr to the breaking point.

Best Practices

There is no magic value to set for ContentSearch.SearchMaxResults other than not “”. General best practice when retrieving lots of data from most any system is to use paging. It’s recommended to do that for Sitecore ContentSearch queries, too. A general recommendation would be to set a specific value for the ContentSearch.SearchMaxResults setting, such as “500” or “1000”. This should be thoroughly tested, however, as limiting the search results for an implementation that isn’t properly using search result paging can lead to inconsistent behavior across the site. Areas such as site maps, general site search, and other areas with implementation logic that could assume all the search results are available in a single request deserve special attention when tuning this setting.
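To make the paging recommendation concrete, here is a minimal sketch using the ContentSearch LINQ API; the index name, filter, and page size are placeholders to adapt to your implementation. Each page becomes a bounded start/rows request to Solr instead of one giant fetch:

using Sitecore.ContentSearch;
using Sitecore.ContentSearch.Linq;
using Sitecore.ContentSearch.SearchTypes;
using System.Linq;

public class PagedQueryExample
{
    public void Run()
    {
        var index = ContentSearchManager.GetIndex("domain_index_web");
        using (var context = index.CreateSearchContext())
        {
            const int pageSize = 500;   // align with whatever you settle on for SearchMaxResults
            var pageIndex = 0;
            SearchResults<SearchResultItem> page;

            do
            {
                // Page() translates to start/rows on the Solr query, so each
                // round trip stays bounded no matter how large the index grows.
                page = context.GetQueryable<SearchResultItem>()
                    .Where(i => i.TemplateName == "Product Page")
                    .Page(pageIndex, pageSize)
                    .GetResults();

                foreach (var hit in page.Hits)
                {
                    // process hit.Document here
                }

                pageIndex++;
            }
            while (page.Hits.Count() == pageSize);
        }
    }
}

Site maps and general site search are the usual places where implementation logic quietly assumes everything comes back in one request, so those are the first areas to retrofit with a loop like this before tightening the setting.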

What About Noisy Solr Neighbors?

I’ve worked on some implementations where Solr was a resource shared between a variety of Sitecore implementations. One project, in this example, might set ContentSearch.SearchMaxResults to “2000” for their Sitecore sites while another project sets a value of “500” – but what if there’s a third organization making use of the same Solr resources and that project doesn’t explicitly set a value for ContentSearch.SearchMaxResults? That one project leaves the setting at the “” default, so it uses the .Net int.MaxValue. This is a classic noisy neighbor problem where shared services become a point of vulnerability to all the consuming applications. The one project with “” for ContentSearch.SearchMaxResults could be responsible for dragging Solr performance down across all the sites.

Solr is an extensible platform much like Sitecore, and in some ways even more so. In Sitecore one extends pipelines or overrides events to achieve the customizations you desire; the same general idea can be applied to Solr – you just use Java with Solr instead of C#.

In this case, our concern being unbounded Solr queries, we can use an extension to a search component (org.apache.solr.handler.component.SearchComponent) to introduce our custom logic into the Solr query processing. In our case, we want to enforce limits to the number of rows a query can request. This would be a safety net in case an un-tuned Sitecore implementation left a ContentSearch.SearchMaxResults setting at the default.

Some care must be taken in how this is introduced into the Solr execution context and where exactly the custom logic is handled. This topic is all covered very well, along with sample code etc, at https://jorgelbg.wordpress.com/2015/03/12/adding-some-safeguard-measures-to-solr-with-a-searchcomponent/. For an enterprise Solr implementation with a variety of Sitecore consumers, a safety measure such as this could be vital to the general stability and perf of the platform – especially if one doesn’t have control over the specific Sitecore projects and their use (or abuse!) of the ContentSearch.SearchMaxResults setting. Maybe file this under best practices for governing Sitecore implementations with shared Solr infrastructure.

Sitecore Commerce 8.2.1 and ListManager with EXM

I’ve been engaged on a few more Sitecore Commerce builds (Commerce 8.2.1 still as these have carried over from 2017) and found an interesting wrinkle the other day. At first, it looked like a MongoDB issue as contacts weren’t being properly added to Sitecore ListManager “Lists” for use in EXM, but after scratching beneath the surface it was a lot more interesting. I used a utility sent my way by Sitecore support — it’s a .zip that has a Sitecore 8.2 update-5 specific tool for seeing Sitecore Lists and their status in terms of what’s in MongoDB and what’s in content search indexes (Solr in my case).

The tool made it pretty clear that the data was being stored properly in MongoDB but NOT in the search index (the screenshot below shows “Contacts in index: 3” which is after we corrected the problem — initially the Contacts in index would only ever show 0 and that’s what helped to isolate the problem to Sitecore Content Search):

[screenshot: the list diagnostic tool showing “Contacts in index: 3”]

Another piece of evidence, in the Sitecore UI when we’d try to add a new contact to ListManager we’d see this message:

Please note that contacts in the list are currently being indexed, so not all contacts are available to view at this time. 0 out of 3 contacts are currently indexed.

Once we enabled verbose logging for search and examined the Search.log output, we saw messages like this in the logs:

INFO  Solr Query - ?q=(type_t:(contact) AND contact.tags_sm:(ContactLists\:\{B76B0E74-E94D-4EBB-F219-6A347C75520D\}))&start=0&rows=20&fl=contact.contactid_s,contact.identifier_t,contact.firstname_t,contact.surname_t,contact.preferredemail_t,contactscount_tl,_uniqueid,_datasource&fq=_indexname:(sitecore_analytics_index)

Note the contact.tags_sm criteria in the query; that turned out to be key. This is the query that Sitecore issues to Solr when trying to obtain contacts for ListManager.

Through considerable trial and error, Solr schema inspection, and just determination (and I think Dana [https://twitter.com/thesoftwarejedi] was the one who finally yelled “bingo” and discovered this), we found that when we ran this query directly against Solr, we would find our missing ListManager contacts:

http://solr-server:8983/solr/sitecore_analytics_index/select?q=(type_t:(contact) AND contact.tags_tm:(ContactLists\:\{B76B0E74-E94D-4EBB-F219-6A347C75520D\}))&start=0&rows=20&fl=contact.contactid_s,contact.identifier_t,contact.firstname_t,contact.surname_t,contact.preferredemail_t,contactscount_tl,_uniqueid,_datasource&fq=_indexname:(sitecore_analytics_index

The query above uses contact.tags_tm instead, and that was the crux of our challenge.

Sitecore was indexing contacts using tags_tm while Sitecore queries were looking for tags_sm.

In the Sitecore config file CommerceServer\CommerceServer.ContentSearch.Solr.DefaultIndexConfiguration.config there is a fragment of XML like the following:

<typeMatches hint="raw:AddTypeMatch">
 <typeMatch typeName="idCollection" type="System.Collections.Generic.List`1[[Sitecore.Data.ID, Sitecore.Kernel]]" fieldNameFormat="{0}_sm" multiValued="true" settingType="Sitecore.ContentSearch.SolrProvider.SolrSearchFieldConfiguration, Sitecore.ContentSearch.SolrProvider" />
 <typeMatch typeName="textCollection" type="System.Collections.Generic.List`1[System.String]" fieldNameFormat="{0}_tm" multiValued="true" settingType="Sitecore.ContentSearch.SolrProvider.SolrSearchFieldConfiguration, Sitecore.ContentSearch.SolrProvider" />

The typeMatch for typeName=”textCollection” was the issue, along with how it duplicates the mapping for the System.Collections.Generic.List`1[System.String] type — and the many places that were using the textCollection returnType that depended on this typeMatch. I removed the typeMatch from the config file and updated any dependency on textCollection to use stringCollection instead and . . . magic . . . the contacts properly indexed into Solr and the contact.tags_sm criteria would match the new data.

According to Sitecore Support, this is a defect in the way Commerce search indexing is setup and it’s overlap with Sitecore ListManager (EXM in our case). Commerce should probably use a custom configuration section instead of modifying the default index configuration, but we’ll have to wait and see how this is implemented in a future patch or release.

For the time being, I’ve created the following Sitecore patch configuration file to remove the textCollection elements. This is preferable to editing the standard Sitecore configuration files that come with the product and will make for easier Sitecore upgrades or adjustments when (or if?) a true correction for this defect is released by Sitecore:

<?xml version="1.0" encoding="utf-8" ?>
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
 <sitecore>
 <contentSearch>
 <indexConfigurations>
 <defaultSolrIndexConfiguration type="Sitecore.ContentSearch.SolrProvider.SolrIndexConfiguration, Sitecore.ContentSearch.SolrProvider">
 <fieldMap type="Sitecore.ContentSearch.SolrProvider.SolrFieldMap, Sitecore.ContentSearch.SolrProvider">
 <typeMatches hint="raw:AddTypeMatch">
 <typeMatch typeName="textCollection">
 <patch:delete />
 </typeMatch>
 </typeMatches>
 <fieldNames>
 <field fieldName="instocklocations">
 <patch:attribute name="returnType">stringCollection</patch:attribute>
 </field>
 <field fieldName="outofstocklocations">
 <patch:attribute name="returnType">stringCollection</patch:attribute>
 </field>
 <field fieldName="orderablelocations">
 <patch:attribute name="returnType">stringCollection</patch:attribute>
 </field>
 <field fieldName="commerceancestornames">
 <patch:attribute name="returnType">stringCollection</patch:attribute>
 </field> 
 </fieldNames>
 <fieldTypes hint="raw:AddFieldByFieldTypeName">
 <fieldType fieldTypeName="catalog selection control">
 <patch:attribute name="returnType">stringCollection</patch:attribute>
 </fieldType>
 <fieldType fieldTypeName="child categories list control">
 <patch:attribute name="returnType">stringCollection</patch:attribute>
 </fieldType>
 <fieldType fieldTypeName="child products list control">
 <patch:attribute name="returnType">stringCollection</patch:attribute>
 </fieldType>
 <fieldType fieldTypeName="parent categories list control">
 <patch:attribute name="returnType">stringCollection</patch:attribute>
 </fieldType>
 <fieldType fieldTypeName="relationship list control">
 <patch:attribute name="returnType">stringCollection</patch:attribute>
 </fieldType>
 <fieldType fieldTypeName="variant list control">
 <patch:attribute name="returnType">stringCollection</patch:attribute>
 </fieldType>
 </fieldTypes>
 </fieldMap>
 <documentOptions>
 <fields hint="raw:AddComputedIndexField">
 <field fieldName="instocklocations">
 <patch:attribute name="returnType">stringCollection</patch:attribute>
 </field>
 <field fieldName="outofstocklocations">
 <patch:attribute name="returnType">stringCollection</patch:attribute>
 </field>
 <field fieldName="orderablelocations">
 <patch:attribute name="returnType">stringCollection</patch:attribute>
 </field>
 <field fieldName="commerceancestornames">
 <patch:attribute name="returnType">stringCollection</patch:attribute>
 </field>
 </fields>
 </documentOptions>
 </defaultSolrIndexConfiguration>
 </indexConfigurations>
 </contentSearch>
 </sitecore>
</configuration>

A few Solr thoughts

Solr has never been more pervasive in the Sitecore projects I’m seeing these days.  Deciding which version of Solr to use for a greenfield Sitecore project, however, is not clear-cut.

Easy answer: use Solr 5.1

Sitecore’s KB article on compatibility with Solr serves as our official reference when it comes to selecting a Solr version to standardize on.  At face value, if you’re using Sitecore version 8.2, you’re steered to Solr version 5.1:

[screenshot: the Sitecore/Solr compatibility table]

The diagram has a note [3], however, that is worth calling out:

WARN  Unable to connect to Solr: [http://{hostname}:{port}/solr], the [SolrNet.Exceptions.SolrConnectionException] was caught.
Exception: SolrNet.Exceptions.SolrConnectionException
Message: Error handling 'status' action
org.apache.solr.common.SolrException: Error handling 'status' action
  • “To resolve issue, upgrade Solr to 5.5.1 or later version.”

Easy answer: use Solr 5.5.1

I asked Sitecore support about this, and in fact the guidance I received from Sitecore Support was to build on Solr version 5.5.1 instead of what the KB article states.  There are no plans to alter the guidance in that KB article, however, since Sitecore 8.2 as a whole platform was thoroughly tested with Solr 5.1.  Apparently, Solr 5.5.1 was not available at the time of that testing.

Anecdotally, Sitecore has found fewer errors when using Solr 5.5.1 instead of Solr 5.1 — when pressed for specifics, it was shared that these two Solr issues have caused problems for other Sitecore implementations:

  1. https://issues.apache.org/jira/browse/SOLR-8793
    • FileNotFoundException or NoSuchFileException with Solr — see comment from Sitecore KB article that it can cause “Unable to connect to Solr” exceptions in some cases
  2. https://issues.apache.org/jira/browse/LUCENE-7188
    • NRTCachingDirectory error where an IllegalStateException exception is thrown

Easy answer: there are no easy answers

I’ve worked with a number of Solr 5.1 projects with Sitecore, and some using other Solr versions prior to Solr 5.5.1, but haven’t encountered the above errors as major impediments.

It’s tempting to use Solr 5.5.1, but if a project is using EXM or WFFM or Sitecore Commerce or some other combination of technology edge case, it’s at least theoretically possible that Sitecore support could fall back on the officially published “Solr 5.1 ✓ ‘officially tested, recommended'” guidance from their KB article.  That’s enough for us to approach new Sitecore projects depending on Solr to go with Solr version 5.1 and keep an eye out for those particular gotchas that may cause us to upgrade to Solr 5.5.1.

The catch is, if you’re upgrading Solr and stopping at Solr 5.5.1 — is there a strong rationale not to upgrade beyond 5.5.1?  At this point, http://archive.apache.org/dist/lucene/solr/ has a wealth of newer Solr versions that are bound to have more patches and fixes than 5.5.1.  This is what you call a slippery slope:

[screenshot: the Solr version listing at archive.apache.org]

I have to be careful here as I walk the line of a non-disclosure agreement, but there are still more variables to consider: in the near future, a Sitecore release is likely to involve thorough Solr support for a very recent version of Solr.  Expect a Solr version newer than 5.5.1 (which was released May of 2016 ☺).

So…

I believe I’ve sold myself on the wisdom of Solr 5.1 for now — so long as the sacred Sitecore Support ✓ is present on the official compatibility table.  It’s key to continue learning with Solr, though, and in the months to come we may be talking about SolrCloud and managed Solr schemas . . . cool new aspects to improve Sitecore implementations.

Azure Search compared to Solr for Sitecore PaaS (Chapter 2: Querying)

I carried forward my Azure PaaS benchmarking work from earlier this month (see this post on the indexing side of the equation for the start of the story).

For a quick refresher, I’ve used an ARM template based deployment of Sitecore to get a system resembling the following:

[diagram: ARM template deployment architecture]

The element I’m exercising in the benchmarks is how Sitecore’s web servers work with the “Search” icon in the diagram above.  I tackled the document ingestion side (how data gets into the search indexes) in my earlier post.  This post addresses the querying side of things (how data gets out of the search indexes).

By default, Azure PaaS search with Sitecore is configured to use Azure Search.  Solr is another viable option.

Here’s where I’ll interject that Coveo also has an excellent search technology for Sitecore.  There are specific use-cases where Coveo is a strong fit, but for the sitecore_core_index indexing evaluations in the earlier post Coveo would not be considered a good fit.  This changes, however, for the set of benchmarks I’ve run in this post.  I am in the process of testing the Coveo approach in Azure PaaS for Sitecore . . . it’s hot off the presses, so there are still rough edges to work around . . . but Coveo is not part of this write-up for the time being.  I will post an update here once I’ve completed the analysis involving Coveo.

In considering Azure Search vs Solr, I used a methodology with JMeter laid out in a great KB article from Sitecore at https://kb.sitecore.net/articles/398589.  I have a LaunchSitecore site running and I use JMeter to automate visits to the site, simulating simple user behaviour.  I don’t go too crazy with this, because I’m more interested in exercising a basic Sitecore work load than doing a deep-dive in xDB traffic simulation.

My first post showed a clear advantage to Solr for the indexing side of search, but for the querying side I can say there is very little variance between Azure Search and Solr.  Sitecore does a good job of protecting data repositories with layers of data and html caches, but even with those features disabled (we’re talking cacheHtml="false" on the site definition, <cacheSizes> configuration all set to a heretical zero (“0”), etc.) there isn’t a significant difference between the two technologies.

I’m not going to put up a graph of it, because the throughput as measured by JMeter for tests of 20, 50, 100, 200, or more visitors was almost the same.

I could develop a more search heavy set of benchmarks, performing a random dictionary of searches against a large custom index that Sitecore responds to but must bypass all caches etc, but that feels like overkill for what I’m looking to achieve.  Maybe that’s appropriate once I bring Coveo into the benchmarking fun.

For this, I wanted to get a sense for the relative performance between Azure Search and Solr as it relates to Sitecore PaaS and I think I’ve done that.  Succinctly:

  1. Solr is considerably faster at search indexing (courtesy of the search provider implementation in Sitecore)
  2. both Azure Search and Solr perform about the same when it comes to querying a basic Sitecore site like LaunchSitecore (again, courtesy of the search provider implementation in Sitecore)

This isn’t the definitive take on the topic.  It’s more like the beginning.  Azure Search is native to Azure, so there are significant advantages there.  There is a lot of momentum around Azure and Sitecore in general, so that story will continue to evolve.

There are Solr as a service options out there that make Solr for Sitecore much easier (such as www.measuredsearch.com which I’ll blog about in the next few days), but Solr can be a lot for corporate IT departments to take on, so it isn’t a simple choice for everyone.