xDB Reporting Database Rebuild Help

I’ve created something like this every time I need to rebuild the Sitecore “reporting” database (this link covers the basic process); this time I’m posting it online so I can re-use it next time around!

This is the script for generating the T-SQL that’s required to complete step #3 in the write-up when you’re following the “Rebuild Reporting Database” instructions:

“In the Rebuild Reporting Database page, when you see the Waiting to receive data status, copy the following marketing definition tables from the primary to the secondary reporting database”

I have written the SQL to do this several times, but this time I took a run at DRY (don’t repeat yourself) and scripted the SQL out.  Alas, my T-SQL comes in at 40+ lines of code versus just 35 lines for the raw SQL, which is also, in my opinion, much easier to read.

Either way, you can pick whichever you prefer, as I’ll share them both here.

First, the plain vanilla SQL commands for copying those database tables:

INSERT INTO target_Analytics.dbo.CampaignActivityDefinitions
         SELECT source_Analytics.dbo.CampaignActivityDefinitions.*
         FROM  source_Analytics.dbo.CampaignActivityDefinitions ;

INSERT INTO target_Analytics.dbo.GoalDefinitions
         SELECT source_Analytics.dbo.GoalDefinitions.*
         FROM  source_Analytics.dbo.GoalDefinitions ;

INSERT INTO target_Analytics.dbo.OutcomeDefinitions
         SELECT source_Analytics.dbo.OutcomeDefinitions.*
         FROM  source_Analytics.dbo.OutcomeDefinitions ;

INSERT INTO target_Analytics.dbo.MarketingAssetDefinitions
         SELECT source_Analytics.dbo.MarketingAssetDefinitions.*
         FROM  source_Analytics.dbo.MarketingAssetDefinitions ;

INSERT INTO target_Analytics.dbo.Taxonomy_TaxonEntity
         SELECT source_Analytics.dbo.Taxonomy_TaxonEntity.*
         FROM  source_Analytics.dbo.Taxonomy_TaxonEntity ;

INSERT INTO target_Analytics.dbo.Taxonomy_TaxonEntityFieldDefinition
         SELECT source_Analytics.dbo.Taxonomy_TaxonEntityFieldDefinition.*
         FROM  source_Analytics.dbo.Taxonomy_TaxonEntityFieldDefinition ;

INSERT INTO target_Analytics.dbo.Taxonomy_TaxonEntityFieldValue
         SELECT source_Analytics.dbo.Taxonomy_TaxonEntityFieldValue.*
         FROM  source_Analytics.dbo.Taxonomy_TaxonEntityFieldValue ;

And now, here’s the T-SQL attempt to “simplify” the process of creating a script like the above for future projects (though I still prefer the brute-force approach):

The advantage of the script below is that you set the source and target variables to the names of your SQL Server databases, and then you’re all set.

DECLARE @source VARCHAR(100)
DECLARE @target VARCHAR(100)
SET @source = 'source_Analytics'
SET @target = 'target_Analytics'

SET NOCOUNT ON
--List (table value constructor) approach requires SQL Server 2008 or later
DECLARE @ListOfTables TABLE(IDs VARCHAR(100));
INSERT INTO @ListOfTables
VALUES('CampaignActivityDefinitions'),
  ('GoalDefinitions'),
  ('OutcomeDefinitions'),
  ('MarketingAssetDefinitions'),
  ('Taxonomy_TaxonEntity'),
  ('Taxonomy_TaxonEntityFieldDefinition'),
  ('Taxonomy_TaxonEntityFieldValue');

SET ROWCOUNT 0
SELECT NULL mykey, * INTO #mytemp FROM @ListOfTables
DECLARE @theTable varchar(100)
DECLARE @sql varchar(1000)

SET ROWCOUNT 1
UPDATE #mytemp SET mykey = 1

WHILE @@rowcount > 0
BEGIN
    SET ROWCOUNT 0
    SELECT @theTable = (SELECT IDs FROM #mytemp WHERE mykey = 1)
    PRINT 'INSERT INTO ' + @target + '.dbo.' + @theTable + '
         SELECT ' + @source +  '.dbo.' + @theTable + '.*
         FROM  ' + @source + '.dbo.' + @theTable + ' ;'
     --use EXEC to run the dynamic SQL, instead of PRINT,
     --if you're feeling brave

    DELETE #mytemp WHERE mykey = 1
    SET ROWCOUNT 1
    UPDATE #mytemp SET mykey = 1
END
SET ROWCOUNT 0
DROP TABLE #mytemp
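If T-SQL loops aren’t your thing, the same script generation takes only a few lines of C#.  This is just an illustrative console sketch; the database names are the same placeholders used above.

using System;

public static class RebuildScriptGenerator
{
    // The marketing definition tables called out in the Sitecore rebuild instructions.
    private static readonly string[] Tables =
    {
        "CampaignActivityDefinitions",
        "GoalDefinitions",
        "OutcomeDefinitions",
        "MarketingAssetDefinitions",
        "Taxonomy_TaxonEntity",
        "Taxonomy_TaxonEntityFieldDefinition",
        "Taxonomy_TaxonEntityFieldValue"
    };

    public static void Main()
    {
        const string source = "source_Analytics";   // primary reporting database
        const string target = "target_Analytics";   // secondary reporting database

        foreach (var table in Tables)
        {
            // Emits the same INSERT ... SELECT statements as the plain SQL above.
            Console.WriteLine($"INSERT INTO {target}.dbo.{table}");
            Console.WriteLine($"         SELECT {source}.dbo.{table}.*");
            Console.WriteLine($"         FROM  {source}.dbo.{table} ;");
            Console.WriteLine();
        }
    }
}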


Azure Search compared to Solr for Sitecore PaaS (Chapter 2: Querying)

I carried forward my Azure PaaS benchmarking work from earlier this month (see this post on the indexing side of the equation for the start of the story).

For a quick refresher, I’ve used an ARM template based deployment of Sitecore to get a system resembling the following:

ARM Templates Arch

The element I’m exercising in the benchmarks is how Sitecore’s web servers work with the “Search” icon in the diagram above.  I tackled the document ingestion side (how data gets into the search indexes) in my earlier post.  This post addresses the querying side of things (how data gets out of the search indexes).

By default, Azure PaaS search with Sitecore is configured to use Azure Search.  Solr is another viable option.

Here’s where I’ll interject that Coveo also has an excellent search technology for Sitecore.  There are specific use-cases where Coveo is a strong fit; for the sitecore_core_index indexing evaluations in my earlier post, however, Coveo would not be considered a good fit.  That changes for the set of benchmarks I’ve run in this post.  I am in the process of testing the Coveo approach in Azure PaaS for Sitecore . . . it’s hot off the presses, so there are still rough edges to work around . . . but Coveo is not part of this write-up for the time being.  I will post an update here once I’ve completed the analysis involving Coveo.

In considering Azure Search vs Solr, I used a methodology with JMeter laid out in a great KB article from Sitecore at https://kb.sitecore.net/articles/398589.  I have a LaunchSitecore site running and I use JMeter to automate visits to the site, simulating simple user behaviour.  I don’t go too crazy with this, because I’m more interested in exercising a basic Sitecore workload than doing a deep-dive into xDB traffic simulation.
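The JMeter test plan itself comes from that KB article; purely to illustrate the shape of the load I’m talking about, a stand-in sketch might look like this (hypothetical C#, not the actual plan; the host and paths are made up):

using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

public static class SimpleLoadSketch
{
    public static async Task RunAsync(int visitors)
    {
        // A handful of typical LaunchSitecore pages per simulated visitor.
        var paths = new[] { "/", "/about-launch-sitecore", "/insights" };

        using (var http = new HttpClient { BaseAddress = new Uri("http://my-launchsitecore-site/") })
        {
            var visits = Enumerable.Range(0, visitors).Select(async _ =>
            {
                foreach (var path in paths)
                {
                    await http.GetAsync(path);   // simple page view, no xDB traffic simulation
                }
            });

            await Task.WhenAll(visits);
        }
    }
}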

My first post showed a clear advantage to Solr for the indexing side of search, but for the querying side I can say there is very little variance between Azure Search and Solr.  Sitecore does a good job of protecting data repositories with layers of data and HTML caches, but even with those features disabled (we’re talking cacheHtml=”false” on the site definition, <cacheSizes> configuration all set to a heretical zero (“0”), etc) there isn’t a significant difference between the two technologies.

I’m not going to put up a graph of it, because the throughput as measured by JMeter for tests of 20, 50, 100, 200, or more visitors was almost identical.

I could develop a more search-heavy set of benchmarks, running a random dictionary of searches against a large custom index that Sitecore must respond to while bypassing all caches, but that feels like overkill for what I’m looking to achieve.  Maybe that’s appropriate once I bring Coveo into the benchmarking fun.

For this, I wanted to get a sense for the relative performance between Azure Search and Solr as it relates to Sitecore PaaS and I think I’ve done that.  Succinctly:

  1. Solr is considerably faster at search indexing (courtesy of the search provider implementation in Sitecore)
  2. Both Azure Search and Solr perform about the same when it comes to querying a basic Sitecore site like LaunchSitecore (again, courtesy of the search provider implementation in Sitecore)

This isn’t the definitive take on the topic.  It’s more like the beginning.  Azure Search is native to Azure, so there are significant advantages there.  There is a lot of momentum around Azure and Sitecore in general, so that story will continue to evolve.

There are Solr-as-a-service options out there that make Solr for Sitecore much easier (such as www.measuredsearch.com, which I’ll blog about in the next few days), but Solr can be a lot for corporate IT departments to take on, so it isn’t a simple choice for everyone.


Azure Search compared to Solr for Sitecore PaaS (Chapter 1: Ingestion)

I’ve been investigating Azure PaaS architectures for Sitecore lately, and I wanted to take a few minutes and summarize some recent findings around the standard Sitecore search providers of Solr and, new for Sitecore PaaS, Azure Search.

To provision Azure PaaS Sitecore environments, I used a variant of the ARM Template approach outlined in this blog.  For simplicity, I evaluated a basic “XP-0,” which is the name for the Sitecore CM/CD server combined into a single App Service.  This is considered a basic setup for development or testing, but not real production . . . that’s OK for my purposes, however, as I’m interested in comparing the Sitecore search providers to get an idea of relative performance.

The Results

I’ll save the methodology and details for lower in this post, since I’m sure most don’t want to wait for the results.  The Solr search provider performed faster, no matter which App Service or DB tier I evaluated in Azure PaaS:

ChartComparison

The chart shows the average time, in minutes, to perform the full re-index operation.  You may want to refer to my earlier post about the lack of HA with Sitecore’s use of Azure Search; rest assured Sitecore is addressing this in a product update soon, but for now it casts a more significant shadow over the 60+ minutes one could spend waiting for the search re-index to complete.

Methodology

In these PaaS trials, I set up the sample site LaunchSitecore.  I performed rebuilds of the sitecore_core_index through the Sitecore Control Panel as my benchmark; I like using this operation as a benchmark since it has over 80,000 documents.  It doesn’t particularly exercise the querying aspects of Sitecore search, though, so I’ll save that dimension for another time.  I’ve got time set aside for JMeter testing that will shed light on this later…
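As an aside, the same rebuild can be kicked off through the ContentSearch API instead of the Control Panel.  Here’s a rough sketch; the Stopwatch is just a convenience, since the log timestamps described below remain the more reliable measure:

using System.Diagnostics;
using Sitecore.ContentSearch;

public static class CoreIndexRebuild
{
    public static double RebuildAndTimeMinutes()
    {
        // Same operation as the Control Panel's Indexing manager, just scripted.
        ISearchIndex index = ContentSearchManager.GetIndex("sitecore_core_index");

        var timer = Stopwatch.StartNew();
        index.Rebuild();   // full rebuild of the ~80,000 document index
        timer.Stop();

        return timer.Elapsed.TotalMinutes;
    }
}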

To get the duration the system took to complete the re-index, I queried the PaaS Sitecore logs as described in this Sitecore KB article.  I took the timestamps from results like the following, since I’ve found the Sitecore UI to be unreliable in reporting duration for index rebuilds.

queries

You can get at this data yourself in App Insights with a query such as this:

traces
| where timestamp > now(-3h)
| where message contains " Crawler [sitecore_core_index]" 
| project timestamp, message
| sort by timestamp desc

Remember, I’ve used the XP0 PaaS ARM Templates which combine CM and CD roles together, so there’s no need for the “where cloud_RoleInstance == ‘CloudRoleBlahBlah'” in the App Insights query.

Methodology – Azure Search

For my Azure Search testing, I experimented with scaling options for Azure Search.  For speedier document ingestion, the guidance from Microsoft says:

“Partitions allow for scaling of document counts as well as faster data ingestion by spanning your index over multiple Azure Search Units”

The trials should perform more quickly with additional Azure Search Partitions, but I found changing this made zero difference.  My instincts tell me the fact Sitecore isn’t using Azure Search Indexers could be a reason scaling Azure Search doesn’t improve performance in my trials.  Sitecore is making REST calls to index documents with Azure Search, which is fine, but possibly not the best fit for high-volume operations.  I haven’t looked in the DLLs, but perhaps there are other async models one could use in the Azure Search provider when it comes to full re-indexes?  It could also be that the 80,000 documents in the sitecore_core_index are too small a number to take advantage of Azure Search’s scaling options.  This will be an area for additional research in the future.
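To be clear about what I mean by that: Sitecore’s provider pushes documents to the service’s REST endpoints itself rather than letting an Azure Search Indexer pull data.  The batched-ingestion model the service exposes looks roughly like this sketch using the Microsoft.Azure.Search client library (this is not Sitecore’s code; the service name, API key, and index name are placeholders):

using System.Collections.Generic;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

public static class IngestionSketch
{
    public static void PushBatch(IEnumerable<Document> docs)
    {
        var client = new SearchIndexClient(
            "my-search-service",            // placeholder Azure Search service name
            "sitecore-core-index",          // placeholder index name
            new SearchCredentials("admin-api-key"));

        // Documents go up in batches; fewer, larger batches generally ingest faster
        // than issuing one request per document.
        var batch = IndexBatch.MergeOrUpload(docs);
        client.Documents.Index(batch);
    }
}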

Methodology – Solr

To host Solr for this trial, I used a basic Solr VM in the Rackspace cloud.  One benefit to working at Rackspace is easy access to these sorts of resources 🙂  I picked a 4 GB server running Solr 5.5.1.  I used one Solr core per Sitecore index (a 1:1 mapping); see my write-up on Solr core organization if you’re not following why this might be relevant.

For my testing with the Solr search provider, Sitecore running in Azure PaaS needed to connect outside Azure, so I selected a location near Azure US-East, where my App Service was hosted.  I had some concerns about outbound data charges, since data leaving Azure will trigger egress bandwidth fees (see this schedule for pricing).  For the few weeks while I collected this data, the outbound data fee totaled less than $40 — and that includes other people using the same Azure account for other experiments.  I estimate around 10% (just $4) is due to my experiments.  Suffice it to say using a Solr environment outside of Azure isn’t a big expense to worry about.  Just the same, running Solr in an Azure VM would certainly be the recommendation for any real Sitecore implementation following this pattern.  For these tests, I chose the Rackspace VM since I already had it handy.

I’d be remiss not to mention the excellent work Sitecore’s Ivan Sharamok has posted to help make Solr truly enterprise-ready with Sitecore.  Basic Auth for Solr with Sitecore is important for the architecture I exercised; this post is another gem of Ivan’s worth including here, even if I didn’t make use of it in this specific set of evaluations.  Full disclosure: I worked with Ivan while I was at the Sitecore mothership, so I’m biased that his contributions are valuable, but just because I’m biased doesn’t mean I’m wrong.

Conclusions

I’ll include my chart once again:

ChartComparison

These findings lead me to more questions than answers, so I’m hesitant to make any sweeping generalizations here.  I’m safe declaring Sitecore’s search provider for Solr to be faster than the Azure Search alternative when it comes to full index re-builds; in some cases it’s faster by an order of magnitude.  Know that this is not a judgement about Solr versus Azure Search; this is about the way Sitecore makes use of these two search technologies out of the box.  The Solr provider for Sitecore is battle-tested and has gone through many years of development; I think the Azure Search provider for Sitecore could be considered a beta at this point, so it’s important to not get ahead of ourselves.

A couple other conclusions could be:

  1. Whether using Solr or Azure Search, there is no improvement to search re-index performance when moving from the S3 to the P3 tier in Azure App Services.
  2. Changing from the S1 to the S3 tier, on the other hand, makes a big performance difference in terms of search re-indexing.
    • Honestly, the S1 tier is almost unusable as the single CPU core and 1.75 GB RAM are way too low for Sitecore; the S3 with 4 cores and 7 GB RAM is much more reasonable to work with.

Next Up

It’s time for me to consider the more fully scaled PaaS options with Sitecore, and I need to exercise the query side of the Sitecore search provider instead of just the indexing side.

Auto-suggest with Solr Facets in Sitecore

Sitecore’s auto-suggest feature for search in the Content Authoring environment is pretty slick, but there is some confusing documentation from Sitecore about how to set it up properly with Solr.  As of today, Sitecore’s documentation on integrating with Solr indicates…

“When you implement Solr with Sitecore you need to enable term support in the Solr search handler.  The term functionality is built into Solr but is disabled by default. To power the dropdowns in the UI you must enable the terms component.”

The documentation above will be updated at some point by Sitecore, since it’s no longer the case for the latest version of Sitecore — 8.2 rev. 161221 (Update-2).

In earlier versions of Sitecore, search in the Sitecore Content Editor could make use of the Solr “terms” component to populate suggestions.  This is why this guidance has previously been part of the Solr integration documentation from Sitecore.  Read more about Solr’s use of this auto-suggest through terms at https://cwiki.apache.org/confluence/display/solr/The+Terms+Component.

Sitecore’s strategy of making use of the “terms” component has changed recently, however.  Sitecore now uses faceting with Solr instead of terms.

To prove this out, I’m going to turn to the Solr logs after I try some queries for content in the Sitecore client.  Refer to this documentation from Sitecore if you’re looking for more context on how to use the search facility — there are a lot of features that are very under-utilized, in my experience.  I’ll specify a clause by typing Updatedby: and then “siteco” to engage the auto-suggest feature:

searbhby

Very nice, right?

Under the covers, the Solr logs will reveal something like this . . .

2017-02-17 19:33:07.546 INFO  (qtp33171127-11) [   x:trial_core] o.a.s.c.S.Request [trial_core]  webapp=/solr path=/select params={q=*:*&facet.field=parsedupdatedby_s&facet.prefix=siteco&rows=0&facet=true&version=2.2&facet.sort=true} hits=24626 status=0 QTime=2

. .  . and that can be further debugged by turning it into the URL request powering that auto-suggest response . . .

http://server:port/solr/sitecore_master_index/select?q=*:*&facet.field=parsedupdatedby_s&facet.prefix=siteco&rows=0&facet=true&version=2.2&facet.sort=true

. . . and that would return results like the following:

solrresponse

If instead we tried an author: search in Sitecore, for example, the facet.field would be parsedcreatedby_s instead of parsedupdatedby_s.
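If you want to poke at this outside of Sitecore, a small hypothetical helper like the following reproduces the facet request from the URL above (the host, core, and field names are placeholders lifted from that example):

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class SolrFacetSuggest
{
    public static async Task<string> GetSuggestionsJsonAsync(string prefix)
    {
        // Mirrors the /select request Sitecore issues for auto-suggest.
        var url = "http://server:port/solr/sitecore_master_index/select" +
                  "?q=*:*&rows=0&facet=true" +
                  "&facet.field=parsedupdatedby_s" +
                  "&facet.prefix=" + Uri.EscapeDataString(prefix) +
                  "&wt=json";

        using (var http = new HttpClient())
        {
            // facet_counts/facet_fields in the response holds the term/count pairs
            // that populate the auto-suggest dropdown.
            return await http.GetStringAsync(url);
        }
    }
}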

I don’t want to go too far down this rabbit hole.  I really just wanted to share that despite what the documentation shows, it’s not necessary to enable the Solr terms component on the /select requestHandler in Solr if you’re using the most recent version of Sitecore.  I’ve confirmed with official Sitecore support that this change was tagged as change #444661 and that it was incorporated into the product as of Sitecore 8.1 update-1 (rev. 151207); the release notes for 8.1 update-1 are vague, but here is the relevant entry:

Autocomplete for known fields such as language did not work in the Content Editor Search tab using the SOLR provider. The problem was related to the SOLR server configuration. This has been fixed so that Sitecore no longer depends on this configuration. (444661)

Happy faceting to all!


High Availability of Azure Search with Sitecore

I’ve been investigating Azure Search with Sitecore’s new Azure App Service offering.  I’ve got a giant Excel file of benchmarks and charts based on several permutations and configurations, and several other interesting tidbits that I need to organize into posts to this blog . . . so look for much more about this general topic in the future.

For now, I thought I’d share a point I’ve confirmed with Sitecore support regarding a limitation of Azure Search with Sitecore’s CloudSearchProviderIndex.  The CloudSearchProviderIndex is what the standard Platform-as-a-Service product from Sitecore will use in place of Lucene or Solr or Coveo to power content search for Sitecore.  This is the key building block for working with Azure Search through Sitecore.  While I was running performance benchmarks for search re-indexing with Sitecore, I noticed the Azure Search document count would drop to 0 and I’d see odd results from Sitecore requests that depended on the search index.  This was classic “search index is being worked on, don’t rely on querying it until the work is done” behaviour.  Sitecore corrected this for the other providers several years ago with the addition of SwitchOnRebuildLuceneIndex and an equivalent for Solr . . . but there is no such equivalent for the CloudSearchProviderIndex used by Azure PaaS solutions.  Essentially: Sitecore is using a single copy of the search index for both query and re-indexing operations, limiting the availability of search during maintenance work.

One could argue this may not be such a big deal, because one may not rebuild Azure Search indexes with any frequency.  I’m not sold on this argument, however, since the Sitecore projects I know frequently perform re-indexing due to development changes to the schema, content synchronization demands, or just routine deployment practices.

Further complicating this issue, my benchmarking shows that Azure Search re-indexing through Sitecore leaves a lot to be desired.  It can be slow.  This could make for an extended period of search index unavailability due to the CloudSearchProviderIndex‘s limitations.  I’ll share the full battery of testing I’ve done in a future post, but for now let me share the timings I’m observing regardless of the number of Azure Search partitions or replicas I’m working through (partitions should generally improve indexing performance; replicas should generally improve querying performance):

App Service Configuration                                      Time for 20,000 Sitecore Items to Re-Index with Azure Search
Azure PaaS Standard (S1) CM IIS (OOTB from the Marketplace)    66 minutes
Azure S2 CM IIS                                                35 minutes
Azure S3 CM IIS                                                25 minutes
Azure P2 CM IIS                                                35 minutes
Azure P3 CM IIS                                                24 minutes

For reference, with Lucene indexes this operation would take 5 minutes or less.  The scaling options for Azure Search, partition count and replica count, have a minimal impact on the re-indexing operation.

I’ll go into details of this later, but it could be that . . .

  • 20,000 Sitecore items is too small a figure to benefit from scaling with Azure Search?  Many customers have 100,000 or more items, so perhaps I should evaluate a larger data set.
  • there are bottlenecks at the SQL tier?  App Insights here I come…
  • the fact Sitecore isn’t using Azure Search Indexers to ingest data and relies on the Sitecore crawling logic to handle data indexing is artificially slowing this process down

For the time being, Sitecore has responded that improving the availability of Azure Search indexes during rebuilds is an official “feature request” and has assigned it reference number 146822.

In the meantime, if a project needs high availability for Azure Search indexes, one may need to roll up their sleeves and craft their own SwitchOnCloudSearchProviderIndex.  It appears fairly straightforward based on reviewing how this is solved for Solr, just as one example.  A key caveat is in the Azure Search capacity planning documentation:

High availability for Azure Search pertains to queries and index updates that don’t involve rebuilding an index. If you add or delete a field, change a data type, or rename a field, you will need to rebuild the index. To rebuild the index, you must delete the index, re-create the index, and reload the data.

To maintain index availability during a rebuild, you must have a copy of the index with a different name on the same service, or a copy of the index with the same name on a different service, and then provide redirection or failover logic in your code.
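Conceptually, that “redirection or failover logic” (the core of what a home-grown SwitchOnCloudSearchProviderIndex would need) boils down to something like the following plain C# sketch; this is just the pattern, not Sitecore’s API:

using System;

public class SwitchOnRebuildSketch
{
    // Two physical copies of the index; only one serves queries at a time.
    private readonly string[] _copies = { "sitecore_web_index_a", "sitecore_web_index_b" };
    private int _active;

    public string ActiveIndex => _copies[_active];
    public string RebuildTarget => _copies[1 - _active];

    public void Rebuild(Action<string> populateIndex)
    {
        // Fill the inactive copy while the active copy keeps serving queries...
        populateIndex(RebuildTarget);

        // ...then flip the pointer so queries hit the freshly built copy.
        _active = 1 - _active;
    }
}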

It looks like providing for high availability would double the price of Azure Search indexes, so there is a cascade of complications related to this.

My investigations into Sitecore and Azure Search yielded this complication — it’s not insurmountable, and I actually find it fascinating how an on-premises product (classic Sitecore) will evolve into a cloud-first product.  This is just one piece of the evolutionary story.  I expect this will be addressed sooner rather than later in an official upgrade or patch from Sitecore, and until then it’s important to understand this nuance to the Sitecore PaaS landscape.

Digesting Sitecore Commerce 8.2.1

A whole new take on Sitecore Commerce is hot off the presses, and I had an opportunity to dig into it briefly this week.  Quoting from the release notes and the documentation, which is actually fairly extensive:

This is Sitecore’s new re-envisioned Commerce product.

“Release number 8.2.1 has been assigned to reflect the compatibility with release 8.2 of the Sitecore Experience Platform (Sitecore XP). However, Sitecore Commerce 8.2.1 is not an update to previous Commerce 8.2 releases, but is an entirely new Commerce product and release.”

I worked on a few Sitecore Commerce implementations a while back, but it had been over a year since I ran a proof-of-concept or even completed the installation.  My background with the permutations of “Commerce” on Windows goes back over 15 years, starting with the Microsoft Site Server product and the initial craze around XSLT rendering HTML output from content engines . . . I remember a horrendous e-commerce project designed with a Commerce Server beta, where the “elegance” of XML was a complete productivity killer.  It’s a poor worker who blames their tools, right? 🙂   I digress…

Anyway, the last real work I did with Sitecore Commerce was in 2015 and I recall the installation/configuration process being arduous, with both Web and Desktop elements, lots of security hoops to jump through, COM everywhere, and even registry edits for good measure.

This new 8.2.1 release installation process is certainly an improvement over what is now considered “Legacy” Sitecore Commerce . . . but standing up a baseline installation to kick the tires will still likely occupy a solid day of your time.  The documentation is good, but not 100% bulletproof, because there are so many moving parts.  I know I ended up needing to install some new .NET elements for ASP.NET Core . . . and I needed to install an old .NET Framework SDK to get another piece of the puzzle to run on the IIS server.  I took notes on what extra steps I needed to perform, but I was using a fairly old Rackspace server image, so they’re not particularly applicable to everyone.  A few examples from my notes, however:

  1. Re-install the Default Web Site to IIS (our scripted Sitecore installation cleans out the Default Web Site in IIS, so I needed to add it back in to satisfy an assumption one of the various installers made)
  2. Configure IIS 6 Metabase Compatibility to satisfy a requirement for the Commerce Server installation

It’s these sorts of nuances that I recall from previous run-ins with the Commerce platform Sitecore inherited and now fully owns.  In some respects, not all that much has changed.

On the bright side, however, there are clean new SPEAK applications for working with Commerce data:

threecommerce

To get this far, however, you really have to earn it.  There are eight Sitecore “packages” that must be installed, for example, once you get the base Commerce Server + Sitecore + Commerce Core running . . . oh, and they need to be installed in a specific order that is NOT alphabetical, either:

packages

On the bright side, there is a lot more documentation than I’ve seen before on this set of products.  I worked with Sitecore Commerce at a time when there was essentially no real current information about the product, so maybe I’m satisfied too easily with what is now available . . . but I really found this an area Sitecore has improved upon.

Based on this documentation, I was able to pull out some of Sitecore’s diagrams of the product and compile this single visual of the Sitecore Commerce platform as I understand it for version 8.2.1:

8-2-1-annotation

The above is just consolidated from a variety of pictures and notes contained throughout the official documentation from Sitecore on the subject, but one of the ways I digest a system is by diagramming and scribbling notes as I go through a project.  Maybe others will find it useful, too.

Strategies for Sitecore Index Organization into Solr Cores

A few days ago, I shared a graphic I put together to illustrate how Solr can be used to organize Sitecore “indexes” into Solr “cores” — this post has the complete graphic.  I want to elaborate on how one sets Sitecore up to use these two approaches, and dig further into the details.

1:1 Sitecore Index to Solr Core Strategy

To start, here’s a visual showing the typical way Sitecore “indexes” are structured in Solr using a one-to-one (1:1) mapping:

solrseparate

This shows each of the default search indexes defined by Sitecore organized into its own core defined in Solr.  It’s a 1:1 mapping.  This 1:1 strategy means each index has its own configuration (“conf”) directory in Solr, so separate stopwords.txt, solrconfig.xml, schema.xml, and so on; it also means each index has its own (“data”) directory in Solr, so separate tlog folders, separate segment files, etc.

This is the setup one achieves by following the community documentation on setting up Sitecore with Solr; specifically, this quote from that write-up is where you’re doing a lot of the grunt work around setting up distinct Solr cores for each Sitecore index:

“Use the process detailed in Steps 4-7 to create new cores for all the remaining indexes you would like to move to SOLR.”

Since this is the common strategy, I’m not going to go into more detail, as it’s straightforward for Sitecore teams.

Kitchen Sink (∞:1 Sitecore Index to Solr Core) Strategy

Here is the comparable graphic showing the ∞:1 strategy of structuring Sitecore indexes in Solr; I like to think of this as the Kitchen Sink container for all Sitecore indexes, since everything goes into that single core just like the kitchen sink:

solrsame

With this approach, a single data and configuration definition is shared by all the Sitecore indexes that reside in Solr.  The advantage is reduced management (setting up the Solr replicationHandler, for example, requires updating 15 solrconfig.xml files in the 1:1 approach, but the Kitchen Sink requires only one solrconfig.xml file to update).  There are significant drawbacks to consider with the Kitchen Sink, however, as you’re sacrificing scaling options specific to each Sitecore index and enforcing a common schema.xml for every index stored in this single core.  There are plenty of reasons not to do this for a production installation of Sitecore, but for a crowded Sitecore environment used for acceptance testing, or other use-cases where bullet-proof stability and lots of flexibility around performance tuning, sharding, etc. are not necessary, you could make a good case for the Kitchen Sink strategy.

The only change necessary to a standard Sitecore configuration to support this Kitchen Sink approach is to patch the contentSearch definitions for the Sitecore indexes where the name of the Solr “core” is specified (stored by default in config files like Sitecore.ContentSearch.Solr.Index.Master.config, Sitecore.ContentSearch.Solr.Index.Web.config, etc).  This tells Sitecore which Solr core contains the index, but the actual name of the core doesn’t factor into the ContentSearch API code one uses with Sitecore.  A patch such as the following would handle both the sitecore_master_index and the sitecore_web_index, organizing them into a single Solr core named “kitchen_sink”:

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <contentSearch>
      <configuration>
        <indexes>
          <index id="sitecore_master_index" type="Sitecore.ContentSearch.SolrProvider.SolrSearchIndex, Sitecore.ContentSearch.SolrProvider">
            <param desc="core">kitchen_sink</param>
          </index>
          <index id="sitecore_web_index" type="Sitecore.ContentSearch.SolrProvider.SolrSearchIndex, Sitecore.ContentSearch.SolrProvider">
            <param desc="core">kitchen_sink</param>
          </index>
        </indexes>
      </configuration>
    </contentSearch>
  </sitecore>
</configuration>

If you peek into the Solr Admin for the kitchen_sink core that I’m using, specifically the Schema Browser in the Solr Admin UI, it becomes clear how Sitecore uses a field named “_indexname” to represent the Sitecore index value.  For the screenshot below, I’ve set the kitchen_sink core to contain two Sitecore indexes: sitecore_master_index and sitecore_web_index:

solrterms

This shows us the two terms stored in that _indexname field, and that there are 18,774 documents for sitecore_master_index and 5,851 for sitecore_web_index.  Even though the indexes are contained in the same Solr core, Sitecore ContentSearch API code like this . . .

Sitecore.ContentSearch.ISearchIndex index =
  ContentSearchManager.GetIndex(indexName);
using (Sitecore.ContentSearch.IProviderSearchContext ctx =
  index.CreateSearchContext())

. . . doesn’t care whether all the Sitecore indexes reside in a single Solr “Core” or whether they’re in their own cores following a 1:1 mapping strategy.
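For completeness, here’s a fuller, hypothetical version of that snippet; the only thing the calling code names is the Sitecore index id, so it runs unchanged whether that index lives in its own core or in the kitchen_sink core:

using System.Linq;
using Sitecore.ContentSearch;
using Sitecore.ContentSearch.Linq;
using Sitecore.ContentSearch.SearchTypes;

public static class KitchenSinkQueryExample
{
    public static int CountItemsNamed(string indexName, string term)
    {
        ISearchIndex index = ContentSearchManager.GetIndex(indexName);

        using (IProviderSearchContext ctx = index.CreateSearchContext())
        {
            // Sitecore scopes the query to the index (via the _indexname field seen
            // above); the Solr core layout is invisible at this level.
            return ctx.GetQueryable<SearchResultItem>()
                      .Where(item => item.Name.Contains(term))
                      .GetResults()
                      .TotalSearchResults;
        }
    }
}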

Caveats and Going In A Different Direction

There was a bug or two in earlier versions of Sitecore related to this, so be careful with early Sitecore 7.2 or Sitecore 8 implementations (and if you’re using Sitecore 7.5, you’ve got plenty of other things to worry about so don’t sweat a Solr Core organization strategy!).

I should also note that while this post is looking at combining Sitecore indexes into a single Solr Core for convenience and to reduce the management headaches of having 15 sets of Solr Cores to update etc, there are some implementations that go in the opposite direction.  Consider a strategy like the following:

solrmindblown


There may be circumstances where keeping Sitecore indexes in their own Solr Core — and even isolating them further into their own Solr implementation — could be in order.  Solr runs in a JVM and this could certainly factor in, but there are other shared run-time resources that Solr sets aside for the whole Solr application.

I’m not familiar enough with these sorts of implementations to comment further or recommend any particular course of action right now, but it’s good to think about and consider in Solr tuning scenarios.  I just wanted to share it, as it’s a logical dimension to consider given the two previous strategies in this post.