Strategies for Sitecore Index Organization into Solr Cores

A few days ago, I shared a graphic I put together to illustrate how Solr can be used to organize Sitecore “indexes” into Solr “cores” — this post has the complete graphic.  I want to elaborate on how one sets Sitecore up to use these two approaches, and dig further into the details.

1:1 Sitecore Index to Solr Core Strategy

To start, here’s a visual showing the typical way Sitecore “indexes” are structured in Solr using a one-to-one (1:1) mapping:

[Image: each default Sitecore index mapped 1:1 to its own Solr core]

This shows each of the default search indexes defined by Sitecore organized into its own core defined in Solr.  It’s a 1:1 mapping.  This 1:1 strategy means each index has its own configuration (“conf”) directory in Solr, so separate stopwords.txt, solrconfig.xml, schema.xml, and so on; it also means each index has its own (“data”) directory in Solr, so separate tlog folders, separate Segment files, etc.

This is the setup one achieves by following the community documentation on setting up Sitecore with Solr; specifically, this quote from that write-up is where you’re doing a lot of the grunt work around setting up distinct Solr cores for each Sitecore index:

“Use the process detailed in Steps 4-7 to create new cores for all the remaining indexes you would like to move to SOLR.”

Since this is the common strategy, I’m not going to go into more detail, as it’s straightforward for Sitecore teams.

Kitchen Sink (∞:1 Sitecore Index to Solr Core) Strategy

Here is the comparable graphic showing the ∞:1 strategy of structuring Sitecore indexes in Solr; I like to think of this as the Kitchen Sink container for all Sitecore indexes, since everything goes into that single core just like the kitchen sink:

[Image: all Sitecore indexes sharing a single Solr core]

With this approach, a single data and configuration definition is shared by all the Sitecore indexes that reside in Solr.  The advantage is reduced management (setting up the Solr replicationHandler, for example, requires updating 15 solrconfig.xml files in the 1:1 approach, but the Kitchen Sink would require only one solrconfig.xml file to update).  There are significant drawbacks to consider with the Kitchen Sink, however, as you’re sacrificing scaling options specific to each Sitecore index and enforcing a common schema.xml for every index stored in this single core.  There are plenty of reasons not to do this for a production installation of Sitecore, but for a crowded Sitecore environment used for acceptance testing, or other use-cases where bullet-proof stability and lots of flexibility around performance tuning, sharding, etc. are not necessary, you could make a good case for the Kitchen Sink strategy.

The only change necessary to a standard Sitecore configuration to support this Kitchen Sink approach is to patch the contentSearch definitions for the Sitecore indexes where the name of the Solr “core” is specified (stored by default in config files like Sitecore.ContentSearch.Solr.Index.Master.config, Sitecore.ContentSearch.Solr.Index.Web.config, etc).  This is telling Sitecore which Solr core contains the index, but the actual name of the core doesn’t factor into the ContentSearch API code one uses with Sitecore.  A patch such as the following would organize both the sitecore_master_index and the sitecore_web_index into a Solr Core named “kitchen_sink”:

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <contentSearch>
      <configuration>
        <indexes>
          <index id="sitecore_master_index" type="Sitecore.ContentSearch.SolrProvider.SolrSearchIndex, Sitecore.ContentSearch.SolrProvider">
            <param desc="core">kitchen_sink</param>
          </index>
          <index id="sitecore_web_index" type="Sitecore.ContentSearch.SolrProvider.SolrSearchIndex, Sitecore.ContentSearch.SolrProvider">
            <param desc="core">kitchen_sink</param>
          </index>
        </indexes>
      </configuration>
    </contentSearch>
  </sitecore>
</configuration>

If you peek into the Solr Admin for the kitchen_sink core that I’m using, specifically the Schema Browser in the Solr Admin UI, it becomes clear how Sitecore uses a field named “_indexname” to represent the Sitecore index value.  For the screenshot below, I’ve set the kitchen_sink core to contain two Sitecore indexes: sitecore_master_index and sitecore_web_index:

[Image: Solr Schema Browser showing the two terms of the _indexname field in the kitchen_sink core]

This shows us the two terms stored in that _indexname field, and that there are 18,774 documents for sitecore_master_index and 5,851 for sitecore_web_index.  Even though the indexes are contained in the same Solr Core, Sitecore ContentSearch API code like this . . .

Sitecore.ContentSearch.ISearchIndex index =
    ContentSearchManager.GetIndex(indexName);
using (Sitecore.ContentSearch.IProviderSearchContext ctx =
    index.CreateSearchContext())

. . . doesn’t care whether all the Sitecore indexes reside in a single Solr “Core” or if they’re in their own cores following a 1:1 mapping strategy.
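
To make this concrete, here’s a minimal sketch of a ContentSearch query running against sitecore_web_index while that index lives in the shared kitchen_sink core; the class and item names are illustrative.  The Solr provider appends the _indexname filter for you, so the calling code is identical under either core strategy:

using System.Linq;
using Sitecore.ContentSearch;
using Sitecore.ContentSearch.SearchTypes;

public class KitchenSinkQueryExample
{
    public int CountHomeItems()
    {
        // The code asks for the Sitecore index by name; which Solr core backs that
        // index is purely a configuration concern.
        ISearchIndex index = ContentSearchManager.GetIndex("sitecore_web_index");
        using (IProviderSearchContext ctx = index.CreateSearchContext())
        {
            // Only documents carrying _indexname:sitecore_web_index are returned,
            // even though sitecore_master_index shares the same kitchen_sink core.
            return ctx.GetQueryable<SearchResultItem>()
                .Where(i => i.Name == "Home")
                .Count();
        }
    }
}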

Caveats and Going In A Different Direction

There was a bug or two in earlier versions of Sitecore related to this, so be careful with early Sitecore 7.2 or Sitecore 8 implementations (and if you’re using Sitecore 7.5, you’ve got plenty of other things to worry about so don’t sweat a Solr Core organization strategy!).

I should also note that while this post is looking at combining Sitecore indexes into a single Solr Core for convenience and to reduce the management headaches of having 15 sets of Solr Cores to update, there are some implementations that go in the opposite direction.  Consider a strategy like the following:

[Image: each Sitecore index isolated into its own dedicated Solr instance]


There may be circumstances where keeping Sitecore indexes in their own Solr Core — and even isolating them further into their own Solr implementation — could be in order.  Solr runs in a JVM, and that shared heap could certainly factor in, but there are other run-time resources that Solr sets aside for the whole Solr application as well.

I’m not familiar enough with these sorts of implementations to comment further or recommend any course of action right now, but it’s good to keep in mind for Solr tuning scenarios.  I just wanted to share it, as it’s a logical dimension to consider given the two previous strategies in this post.


The Solr *Optimize Now* Button for Sitecore Use Cases

If you’ve worked with Sitecore and Solr, you’re no stranger to the Solr Admin UI.  There are great aspects to that UI, and some exciting extension points with Sitecore implications too, but I know one element of that Solr UI that causes some head-scratching . . . the “optimize now” feature:

[Image: the Solr Admin UI core overview with the red “optimize now” prompt]


The inclusion of the red “bad” icon causes people to think “something is wrong . . . red bad . . . must click now!”

What is this Optimize?

I’m writing this for the benefit of Sitecore developers who may not be coming at this from a deep search background: do not worry if your Solr cores show this “bad” icon encouraging you to optimize now.  Fight that instinct.  For standard Sitecore Solr cores that are frequently updated, such as the sitecore_core_index, sitecore_master_index, sitecore_analytics_index, and — depending on your publishing strategy — sitecore_web_index, one may notice these cores almost always appear with this “optimize now” button in the Solr Admin UI.  Other indexes, too, may be heavily in use depending on how your Sitecore implementation is structured.  If you choose the optimize now option and then reload the screen, you’ll see the friendly green check mark next to Optimized, and you should notice the Segment Count drops to a value of 1:

[Image: Solr Admin UI showing a green check mark next to Optimized and a Segment Count of 1]

Segments are Lucene’s (and therefore Solr’s) file system unit.  On disk, “segments” are where data is durably stored and organized for Lucene.  In Solr version 5 and newer, one can visualize Segment details for each Solr Core via the Solr Admin UI Segments Info screen.  This shows 2 Segments:

[Image: the Solr Admin UI Segments Info screen showing 2 Segments]

If your Segment count is greater than 1, the Solr Admin UI will report that your Solr Core is in need of Optimization (with that somewhat alarming “bad” icon).  The Optimize operation re-organizes all the Segments in a Core down to just one single Segment . . . and for busy Sitecore indexes this is not something to do very often (or at all!).

To track an optimize operation through at the file system level, consider this snapshot of the /data/index directory for a sitecore_master_index before performing optimization; note the quantity of files:

[Image: the /data/index directory for sitecore_master_index before optimization]

After the optimization, consider the same file system:

[Image: the same /data/index directory after optimization]

When in doubt, don’t optimize

Solr’s optimize now command is like cleaning up a house after a party.  It reduces the clutter and consolidates the representation of the Solr Core on disk to a minimal footprint.  The problem, however, is that optimizing takes longer the larger the index is, so the act of optimizing may produce very non-optimal performance while it’s doing the work.  Solr has to read a copy of the entire index and restructure the copy into a single Segment.  This can be slow.  Caches must be re-populated after an optimization, too, compounding the perf impact.  To continue the analogy of the optimize now being like cleaning after a party, imagine cleaning up during a party; maybe you pause the music and ask everyone to leave the house for 20 minutes while you straighten everything up.  Then everyone returns and the partying resumes, with the cleaning being a mostly useless delay.

To draw from the official Solr documentation at https://wiki.apache.org/solr/SolrPerformanceFactors#Optimization_Considerations:

“Optimizing is very expensive, and if the index is constantly changing, the slight performance boost will not last long. The trade-off is not often worth it for a non static index.”

For those Sitecore indexes in Solr that are decidedly non-static, then, ignore that “optimize now” feature of the Solr Admin UI.  It’s better to pay attention to Solr “Merge Policies” for a rules-based approach to maintaining Segments; this is a huge topic, one left for another time.

When to consider optimizing

Knowing more about the optimization process in Solr, then, we can think about when it may be appropriate to apply the optimize command.  For external data one is pulling into Solr, for example, a routine optimization could make sense.  If you have a weekly product data load, for instance, where 10,000 items are regularly loaded into a Solr Core and then remain unchanged, optimization after the load completes makes a lot of sense.  That data in the Core is not dynamic.  When the data load completes, you could include an API call to Solr that triggers the optimize.

An API call to trigger an optimize in Solr is available through an update handler call: http://solr-server:8983/solr/sitecore_product_catalog/update?stream.body=<optimize/>
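
If the data load is orchestrated from .NET code, the same command can be issued by POSTing the XML <optimize/> message to the core’s update handler.  This is just a sketch; the server address and the sitecore_product_catalog core name are placeholders:

using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class SolrOptimizeTrigger
{
    public static async Task OptimizeCoreAsync()
    {
        using (var client = new HttpClient())
        {
            // <optimize/> is the XML update command; waitSearcher="false" asks Solr
            // not to block the response until the new searcher is registered.
            var body = new StringContent("<optimize waitSearcher=\"false\"/>",
                Encoding.UTF8, "text/xml");

            HttpResponseMessage response = await client.PostAsync(
                "http://solr-server:8983/solr/sitecore_product_catalog/update", body);

            response.EnsureSuccessStatusCode();
        }
    }
}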

Sitecore search has a very checkered past with the Lucene Optimize operation.  I’ve worked on big projects that were crippled by too frequent optimizing work like that discussed in Uli Weltersbach’s post.  We ended up customizing the Optimize methods to be no-op statements, or another variation like that.  For additional validation, check out the Lucene docs on the optimize method:

“This method has been deprecated, as it is horribly inefficient and very rarely justified.”

Since Solr sits on top of Lucene, the heritage of Lucene’s optimize is still relevant and — in the Solr Admin UI — we see a potential performance bottleneck button ripe for clicking . . . fight that instinct!



Solr Configuration for Integration with Sitecore

I’ve got a few good Solr and Sitecore blog posts around 75% finished, but I’ve been too busy lately to focus on finishing them.  In the meantime, I figure a picture can be worth 1,000 words, so let me post this visual representation of Solr strategies for Sitecore integrations.  One Solr core per index is certainly the best practice for production Sitecore implementations, but now that Solr support has significantly matured at Sitecore, a single Solr core for all the Sitecore indexes is a viable, if limited, option:

[Image: draft diagram of Solr core organization strategies for Sitecore]

There used to be a bug (or two?) that made this single Solr core for every Sitecore index unstable, but that’s been corrected for some time now.

More to follow!

Sitecore and Solr Diagnostics: Pings and Logs

Like I mentioned last week in this post about Sitecore + Solr – Lucene, I wanted to share some other posts about running Sitecore with Solr.  While 2017 is shaping up to be the year of Azure for Sitecore, I also expect Solr to continue to grow in prominence across Sitecore implementations.

Solr exposes lots of information about its current condition and state, but it’s not that easy to access.  I’ll take a run at shedding light on some of the monitoring basics and let’s see where this goes:

Ping

A proof-of-life for Solr is usually part of an implementation, whether it’s for a load-balancer in front of Solr nodes or some other form of a health check.  This is typically done via the Solr /admin/ping endpoint, so a request to http://localhost:8983/solr/sitecore_master_index/admin/ping?wt=json would result in something like:

{
  "responseHeader": {
    "status": 0,
    "QTime": 0,
    "params": {
      "q": "{!lucene}*:*",
      "distrib": "false",
      "df": "text",
      "rows": "10",
      "wt": "json",
      "echoParams": "all"
    }
  },
  "status": "OK"
}

I chose the JSON response writer for Solr (more about this at https://wiki.apache.org/solr/CoreQueryParameters).  Solr’s default response writer is XML and returns a lot more formatting text around the exact same information.

There is actually a lot of flexibility around this ping RequestHandler for Solr (check out the documentation).  I’m scratching the surface here since, for a plain vanilla Sitecore install, knowing that ping will either be “OK” or return the failing HTTP error code is usually sufficient.
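
For a scripted health check outside of a load-balancer, the same ping endpoint is easy to call from code.  Here’s a minimal sketch, assuming a local Solr on the default port and the sitecore_master_index core:

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class SolrPingCheck
{
    public static async Task<bool> IsSolrHealthyAsync()
    {
        using (var client = new HttpClient { Timeout = TimeSpan.FromSeconds(5) })
        {
            try
            {
                HttpResponseMessage response = await client.GetAsync(
                    "http://localhost:8983/solr/sitecore_master_index/admin/ping?wt=json");

                // A failing ping surfaces as a non-success HTTP status code; a healthy
                // core returns 200 with "status":"OK" in the JSON body.
                return response.IsSuccessStatusCode &&
                    (await response.Content.ReadAsStringAsync()).Contains("\"status\":\"OK\"");
            }
            catch (HttpRequestException)
            {
                return false;
            }
        }
    }
}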

Logging

So the ping is easy, but only reports OK or NOT OK.  It doesn’t provide any real granularity around that determination . . . so let’s assume you’ve got a Solr server that is not returning an “OK” status from the ping.  If you’ve restarted Solr on the server, and Solr still isn’t responding “OK” then it’s probably time to dig into the Solr logs.  Solr defaults to using a variation of Log4j to capture trace logs, so the documentation on Log4j can be helpful (just note that currently Log4j is on version 2.x, while the Solr versions I’ve worked on [up to Solr 6.2] are all using the earlier Log4j 1.2 version [log4j-1.2.17.jar]).

Solr will use the file server\resources\log4j.properties as the basis for Log4j configuration, but this can vary depending on how you’ve installed Solr.  If you need to determine where this file is configured in your implementation, you can check the Dashboard of the Solr web admin tool.  This shows the runtime configuration settings for Solr.  I’ve highlighted the two log-related settings in my example:

[Image: Solr Dashboard showing JVM runtime arguments with the two log-related settings highlighted]

The -Xloggc parameter is for Java Garbage Collection (GC) logging and usually not something you would consider Solr specific, but good to know it exists if that becomes a topic to investigate.

Inside a standard log4j.properties file one finds the following:

#  Logging level
solr.log=logs
log4j.rootLogger=INFO, file

log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender

log4j.appender.CONSOLE.layout=org.apache.log4j.EnhancedPatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%-4r %-5p (%t) [%X{collection} %X{shard} %X{replica} %X{core}] %c{1.} %m%n

#- size rotation with log cleanup.
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.MaxFileSize=50MB
log4j.appender.file.MaxBackupIndex=9

#- File to log to and log format
log4j.appender.file.File=${solr.log}/solr.log
log4j.appender.file.layout=org.apache.log4j.EnhancedPatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss.SSS} %-5p (%t) [%X{collection} %X{shard} %X{replica} %X{core}] %c{1.} %m\n

log4j.logger.org.apache.zookeeper=WARN
log4j.logger.org.apache.hadoop=WARN

# set to INFO to enable infostream log messages
log4j.logger.org.apache.solr.update.LoggingInfoStream=OFF

That configuration file is fairly self-explanatory.  Adjust the rootLogger value off of INFO if you’d prefer DEBUG or WARN or some other level.  I think I have just one other note on the above configuration file:

The MaxFileSize setting of older Solr installations was set to 4 MB; this is pretty small for a production instance, so the Solr 6 default of 50 MB used in the example above makes more sense for production.

If one peeks into a Solr log file, there are a couple of common record types by default.  Here’s what a query logged by Solr looks like:

webapp=/solr path=/select params={df=text&distrib=false&fl=_uniqueid&fl=score&shards.purpose=4&start=0&fsv=true&fq=_indexname:(sitecore_testing_index)&shard.url=http://10.225.240.243:8983/solr/sitecore_testing_index_shard2_replica2/|http://10.225.240.72:8983/solr/sitecore_testing_index_shard2_replica1/&rows=2147483647&version=2&q=__is_running_b:(True)&NOW=1481640759343&isShard=true&wt=javabin} hits=0 status=0 QTime=0

After the query parameters, the log reports the number of hits for the query, the status (0 means “OK”) and the duration for this request in milliseconds.  Note that Solr queries are logged after they complete . . . so if you have a run-away query you may not see it in the log with the standard configuration.  More about this at a later time, perhaps.

Updates to Solr are logged using a similar pattern; they will start with webapp=/solr path=/update.

It’s possible to set up Log4j to create more narrowly targeted logs.  For example, you could have queries and updates written to different logs for easier troubleshooting (see this example for the nitty gritty on targeting Solr logs).

If your project doesn’t want all this “noise” about each update and query to Solr, or there’s a concern that all this logging could become a bottleneck, you can set the log level for Solr to WARN and only see warnings and errors in your logs.  You will be sacrificing that query performance data, however, so you’ll not have a record to review for slow-performing queries . . . so a recommended practice is to trigger a “WARN” condition when a query exceeds a certain threshold.  To do this, add a <slowQueryThresholdMillis> element to the <query> section of solrconfig.xml:

<query>
  <slowQueryThresholdMillis>2000</slowQueryThresholdMillis>
</query>

Any queries that take longer than 2 seconds (the 2000 milliseconds defined above) will be logged as “slow” queries at the WARN level.  Note that this setting would need to be duplicated for every core that’s defined in Solr (meaning, with Sitecore’s one-core-per-index approach, you’ll have several solrconfig.xml files to touch).

Besides these fairly routine entries in the Solr logs, there will also be exceptions, and the Logging section of the Solr web admin provides a convenient way to see the most recent set of these exceptions in near real-time; note that changes to the Level setting made from this web interface do not persist after a Solr restart.

Example of a Solr logging section in the web admin interface:

[Image: the Logging section of the Solr Admin UI]

There are a number of other ways to review the Log4j data: old-school Chainsaw is still around, and I know some prefer Otros log viewer, but I’ve had success with Splunk (see screenshot below):

[Image: Solr log data viewed in Splunk]

I’ll get into other methods for diagnostic work with Solr in the next post.

Sitecore Gets Serious About Search, by Leaving the Game

Sitecore is steadily reducing the use-cases for using Lucene as the search provider for the Sitecore product.  Sitecore published this document on “Solr or Lucene” a few days ago, going so far as to state this one criterion where you must use Solr:

  • You have two or more content delivery servers

Read that bullet point again.  If you’ve worked with Sitecore for a while, this should trigger an alarm.

Sound the alarm: most implementations have 2 or more CD servers!  I’d say 80% or more of Sitecore implementations are using more than a single CD server, in fact.  Carrying this logic forward, 80% of Sitecore implementations should not be running on Lucene!  This is a big departure for Sitecore as a company, which would historically dodge conversations about what is the right technology for a given situation.  I think the Sitecore philosophy was that advocating for one technical approach over another is risky, and means a Sitecore endorsement could make them accountable if a technology approach fails for one reason or another.  For as long as I’ve known Sitecore, it’s been a software company intent on selling licenses, not dictating how to use the product.  Risk aversion has kept them from getting really down in the weeds with customers.  This tide has been turning, however, with things like Helix and now this more aggressive messaging about the limitations of Lucene.  I think it’s great for Sitecore to be more vocal about best practices; it’s just taken years for them to come around to the idea.

As a bit of a search geek, I want to state for the record that this new Solr over Lucene guidance from Sitecore is not really an indictment of Lucene.  The Apache Lucene project, and its cousin the .Net port Lucene.net that Sitecore makes use of out-of-the-box, was groundbreaking in many ways.  As a technology, Lucene can handle enormous volumes of documents and performs really well.  Solr is making use of Lucene underneath it all, anyway!  This recent announcement from Sitecore is more an acknowledgement that Sitecore’s event plumbing is no substitute for Solr’s CAP-theorem straddling acrobatics.  Sitecore is done trying to roll their own distributed search system.  I think this is Sitecore announcing that they’re tired of patching the EventQueue, troubleshooting search index update strategies for enterprise installations, and giving up on ensuring clients don’t hijack the Sitecore heartbeat thread and block search indexing with some custom boondoggle in the publishing pipeline.  They’re saying: we give up — just use Solr.

Amen.  I’m fine with that.

To be honest, I think this change of heart can also be explained by the predominant role Azure Search plays in the newest Sitecore PaaS offering.  Having an IP address for all the “search stuff” is awfully nice whether you’re running in Azure or anywhere else.  It’s clear Sitecore decided they weren’t keen to recreate the search wheel a few years ago, and are steadily converging around these technologies with either huge corporate backing (Azure Search) or a vibrant open source community (Solr).

I should also note, here, that Coveo for Sitecore probably welcomes this opportunity for Sitecore teams to break away from the shackles of the local Lucene index.  I’m not convinced that long-term Coveo can out-run the likes of Azure Search or Solr, but I know today if your focus is quickly ramping up a search-heavy site on Sitecore you should certainly weigh the pros/cons of Coveo.

With all this said, I want to take my next few posts and dig really deep into Solr for Sitecore and talk about performance monitoring, tuning, and lots of areas that get overlooked when it comes to search with Sitecore and Solr.   So I’ll have more to say on this topic in the next few days!


Sitecore WFFM IsRemoteActions and Scaling Considerations

One of our Sitecore implementations was experiencing WFFM exceptions when trying to send emails from CD servers.  This post about WFFM configuration over on the Community site for Sitecore ends with a vague answer to fix a scenario such as this via the following:

Please remove the below mentioned setting from “Sitecore.Forms.Config” from all CD instance.

<setting name=”WFM.IsRemoteActions” value=”true” />

we have also added this setting as per sitecore documents but after adding this line on CD instance, mail won’t trigger. You have to remove this setting from “Sitecore.Forms.Config”. Making value as “false” won’t fix the issue.

Unfortunately, this is a fairly common scenario on that Community site . . . there are certainly answers tucked away in there if you’re diligent in your research, but sometimes they can leave you with more questions.  I think the new StackExchange site for Sitecore is a breath of fresh air on this front, but for this specific question about WFFM I was left to puzzle over this WFM.IsRemoteActions setting.  Here’s what I uncovered.

The official Sitecore documentation for setting up WFFM explicitly states (on page 6 of the WFFM 8.1 Update-3 installation guide in our case, but it’s mostly the same for other versions):

In the \Website\App_Config\Include\Sitecore.Forms.Config file, add the following node to the <settings> section:

<setting name="WFM.IsRemoteActions" value="true" />
The above contradicts the guidance on the Sitecore Community forum, and in our case we found that removing this setting altogether DID correct our problem of WFFM emails not being sent.
I used Reflector (like always!) to get some insight into what was going on, and in the Sitecore.Forms.Core.Handlers.FormDataHandler type is an ExecuteSaveActions method that is the crux of the logic for processing forms that are submitted through WFFM.  Right near the top of this method is a big conditional test based around the WFM.IsRemoteActions setting:
[Image: decompiled ExecuteSaveActions method showing the conditional around the WFM.IsRemoteActions setting]
When one removes this setting from a CD server, this logic short-circuits this ExecuteSaveActions method and prevents the CD node from following through on the WFFM activities . . . and instead the Sitecore EventQueue of the “Core” database is used to store a record of this WFFM action.
At a later point, a server that shares that Sitecore “Core” database AND is configured with the proper WFFM hooks and events definition (from Sitecore’s WFFM documentation on Multiserver configuration) as follows . . .
The <hooks> section:
<!--HOOKS-->
<hooks>
  <!--remote events hook-->
  <hook type="Sitecore.Form.Core.WffmActionHook, Sitecore.Forms.Core"/>
</hooks>

The <events> section:
<!--Remote events handler-->
<event name="wffm:action:remote">
  <handler type="Sitecore.Form.Core.WffmActionHandler, Sitecore.Forms.Core"
           method="OnWffmActionEventFired" />
</event>
  . . . will pick up the EventQueue record and run the WFFM event.  Usually this is the CM server.  This is how it all worked in our case.

By removing (or setting to false) this WFM.IsRemoteActions setting we’re asking less of our Sitecore CD servers and centralizing the WFFM save behaviour to the CM server.  Most Sitecore implementations are eager to find ways to reduce the demands on the CD nodes in an implementation, so this measure could be considered a best-practice if you’re using WFFM and looking to save Sitecore CD cycles (CPU!) for responding to HTTP requests for site visitors instead of performing WFFM post save actions.  To me, it looks like you could take this further and delegate this responsibility to a “processing” server if you’ve got one in your scaled Sitecore architecture . . . or any node that suits you.

Powershell installation of Sitecore Performance Counters

Today was the day I bit the bullet and automated this bit of administrative work that we do for all our Sitecore projects. I think it was the fresh crop of 9 new Sitecore IIS nodes that needed this attention that was the final impetus.

Sitecore Performance Counters

This blog reviews all the perf counters Sitecore makes available.  The documentation is a bit thin from Sitecore, since these counters were added to the product several years ago and they apparently don’t get much attention.  They do require special effort to set up as part of your IIS environment, too, so it’s easy to overlook them.  They’re still very handy, though!  A future blog post could discuss the ones most worthy of attention (in my humble opinion).

You can download the .zip file with an installer for the Sitecore specific performance counters at http://sdn.sitecore.net/upload/sdn5/faq/administration/sitecorecounters.zip.  These files are what you’ll end up with:

[Image: files extracted from the Sitecore counters .zip: the installer .EXE and the counter definition .XML files]

The .EXE runs through all the .XML files in the directory that enumerate perf counters relevant to Sitecore, and adds the counters to the computer.  It’s easy.
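
For context, the heavy lifting of registering a counter on Windows comes down to a PerformanceCounterCategory.Create call.  Here’s a minimal sketch with made-up category and counter names (the real names come from Sitecore’s .XML files); it’s illustrative, not the actual installer code:

using System.Diagnostics;

public static class CounterInstallerSketch
{
    public static void EnsureCategory()
    {
        const string category = "Example.Sitecore.Caching";

        // Creating a category is a machine-level, one-time operation and requires
        // administrative rights on the server.
        if (PerformanceCounterCategory.Exists(category))
        {
            return;
        }

        var counters = new CounterCreationDataCollection
        {
            new CounterCreationData("Cache Hits", "Illustrative counter help text",
                PerformanceCounterType.NumberOfItems64)
        };

        PerformanceCounterCategory.Create(category, "Illustrative category help text",
            PerformanceCounterCategoryType.MultiInstance, counters);
    }
}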

Scripted Installation

One would typically click the SitecoreCounters.EXE, press the {enter} key, and then the performance counters would quickly install to the server.  Doing this for many servers, or as part of an automated provisioning effort, could be tedious . . . we actually invoke this via remote magic with Ansible (but that’s not the focus of this blog post).  I do want to go over how we automated this, however, since it wasn’t the smooth sailing I expected it to be.

I’ve posted a Powershell baseline for installing Sitecore perf counters to this github gist.  It was straightforward Powershell until I found the .EXE was running properly but no performance counters were actually being added.  I had to look at the decompiled .EXE source to figure out why.  Here’s the Main method of the .EXE; the Environment.CurrentDirectory reference in the Directory.GetFiles call is the instruction that was giving me problems:

        private static void Main(string[] args)
        {
            Console.WriteLine("Press enter to begin creating counters");
            Console.ReadLine();
            Console.WriteLine("Creating Sitecore counters");
            string[] files = args;
            if (args.Length == 0)
            {
                files = Directory.GetFiles(Environment.CurrentDirectory, "*counter*.xml");
            }
            foreach (string str in files)
            {
                new Counters(str).Execute();
            }
            Console.WriteLine("Done");
            Console.ReadLine();
        }

The “Environment.CurrentDirectory” in Powershell is by default something like C:\Users\UserName, but my Powershell was using a staging location to download and unzip the perf counter executable and related xml:

$zipFileURI = "https://your.cdn.with.the.Sitecore.zip.resources/sitecorecounters%207.5.zip"
$stageFolder = "C:\staging" 
if( !(test-path $stageFolder) )
{
    mkdir $stageFolder
}
$downLoadZipPath = $stageFolder + "/SitecoreCounters.zip"  
Invoke-WebRequest -Uri $zipFileURI -OutFile $downLoadZipPath
Add-Type -AssemblyName System.IO.Compression.FileSystem
[System.IO.Compression.ZipFile]::ExtractToDirectory($downLoadZipPath, $stageFolder)

Based on what I saw in Reflector, I needed to change my staging directory or assign the Environment.CurrentDirectory to be $stageFolder in order for the perf counters to properly be installed to the new environment.  I really wanted to use a configurable location instead of some default Powershell directory, so I sifted through Powershell documentation and figured out the magic API call was [System.IO.Directory]::SetCurrentDirectory; this example makes it apparent:

Write-Output "Default directory:"
[System.IO.Directory]::GetCurrentDirectory()
[System.IO.Directory]::SetCurrentDirectory($stageFolder + "\sitecorecounters 7.5")
Write-Output "Directory changed to:"
[System.IO.Directory]::GetCurrentDirectory()

With that in place, the context of the Sitecore perf counter .EXE was set properly and the program could locate all the counters defined in the XML files.

The end result is our new Sitecore specific perf counters are ready:

[Image: the new Sitecore-specific performance counters available on the server]

You will need to recycle the Sitecore App Pool in IIS for this to take effect, something like this should suffice:

$theSite = "Cool Sitecore Site"
$thePool = (Get-Item "IIS:\Sites\$theSite"| Select-Object applicationPool).applicationPool
Restart-WebAppPool $thePool