Friday, May 26, 2017

Installing the Elastic Stack (ELK 5.4) on Windows Server 2016

Elasticsearch, Logstash, and Kibana from Elastic are the three major products that make up the Elastic Stack (what used to be called ELK Stack). It represents a hugely versatile set of tools that can be used to collect and analyze data from just about any source. There are tons of products in this space, so why bother with Elastic Stack? Logging and event management solutions are often expensive, and generally not where SMBs want to spend their limited IT budget. Elastic Stack is an open-source solution, providing a huge amount of configurability and customization, creating quite a lot of bang for your buck - if you can invest the time to install and configure it. And whether you're operating in an all-Windows environment or simply not interested in working with Linux, there are plenty of good reasons to install your Elastic Stack on Windows Server. Let's take a look.

This guide will also work with Windows Server 2012 R2. The process is exactly the same.


Before diving in to the installation portion I wanted to take a second to review the architecture of the Elastic Stack that we'll be building. If you're a visual learner like me, it may aid in understanding how these components fit together and interact with one another.

The Elastic Stack allows you to visualize data from myriad sources. For our simplified example, consider Windows Event Logs.

  1. An agent program installed on our server captures Event Log data and ships it to Logstash. In this guide we're using Elastic Beats as our agents. Specifically for Windows Event Logs we will use Winlogbeat. (Note that agent programs are not required for all data sources; network appliance syslogs, for example.)
  2. Logstash receives the input from Winlogbeat, considers any filters and performs any transforms that we've defined, and ships the data to Elasticsearch.
  3. Elasticsearch indexes and centrally stores the data from Logstash, and makes it available for searching and analytics.
  4. Kibana connects to Elasticsearch to provide a friendly user interface for filtering and visualizing your data.

For this guide we'll be installing all three applications on one Windows Server 2016 VM; however, they do support distributed installation.

Download Installation Files

Start by downloading Elasticsearch, Logstash, and Kibana from the Elastic website. While we're there, let's also download Filebeat, Packetbeat, Winlogbeat, and Topbeat (or Metricbeat now, as Topbeat has been deprecated). We'll need those later. Choose the option for Windows x64, Windows, or ZIP as appropriate.

Then, download the Java Development Kit (JDK) for Windows x64.

You’ll also need WinPcap if you want to use Packetbeat to send network data to your Elastic Stack.

Lastly, download the Non-Sucking Service Manager (NSSM).

Place everything into a folder on your server's C drive. I called mine ELK-Stack and that's what you'll see referenced throughout this guide.

Install Java Development Kit (JDK)

Once everything is downloaded, we'll start by installing the JDK with the default options. Make note of the installation directory as we’re going to create an environment variable with that path in the next step.

With the JDK installed, we need to create an environment variable that points to the program directory. Browse to System Properties > Advanced Tab, and select Environment Variables…

Click New... at the bottom to add a new System Variable. Name it JAVA_HOME and provide the path of the JDK installation that we just completed.
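If you prefer to script this step, the same variable can be set from an administrative PowerShell session. A minimal sketch; the JDK path below is an example, so substitute your actual install directory:

```powershell
# Set JAVA_HOME machine-wide; adjust the path to match your JDK install directory.
[Environment]::SetEnvironmentVariable("JAVA_HOME", "C:\Program Files\Java\jdk1.8.0_131", "Machine")
```

Note that only newly launched processes pick up the variable, so restart any open console windows afterward.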

Click OK to finish, and OK again to close System Properties.

Installing Elastic Stack

With the JDK installed, let’s go ahead and extract all of our zip packages to the ELK-Stack folder. I’ve removed the version info from my folder names to neaten it up a bit, but it isn’t necessary. Just make note of the file paths when following this guide if yours are named differently.

With everything extracted, let's get started installing each application as a service, so we can control them like other services and have them launch at Windows startup.


Of the three applications in the Elastic Stack, Elasticsearch is the only one that is able to install itself as a service out of the box. To achieve that, we want to run the elasticsearch-service.bat file with the install option. That can be done at a command prompt as in the screenshot below, or by using the Invoke-Expression cmdlet within PowerShell. Elastic uses the PowerShell cmdlets in their Windows documentation, so I'll use those as well for the remainder of this post.

elasticsearch-service install

Performing the Elasticsearch service installation with PowerShell:

Invoke-Expression -Command "C:\ELK-Stack\elasticsearch\bin\elasticsearch-service install"

After running the install command you should see a response indicating that the service has been installed successfully. Next we need to tweak the properties for the service by launching the service manager. Use the following PowerShell command.

Invoke-Expression -Command "C:\ELK-Stack\elasticsearch\bin\elasticsearch-service manager"

On the Elasticsearch Properties dialogue, change the Startup Type to Automatic and start the service.

This is also where you can adjust the Java memory settings, which will be useful when we have more devices logging to our Elastic Stack. For now, we can leave the Java settings alone.

Once you've made your changes, click OK to close the Elasticsearch properties.

With the Elasticsearch service started, go ahead and open a web browser on your server and point it to http://localhost:9200. In Chrome you should see results similar to the following image. Depending on your settings, Internet Explorer may just prompt you to download a json file (the contents of the file will be the same as what’s seen in Chrome). This is fine and there's no need to keep the file. The purpose of the test is just to validate that Elasticsearch is reachable on port 9200.
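If you'd rather skip the browser entirely, the same check can be done from PowerShell. A quick sketch, assuming the default port:

```powershell
# Query the Elasticsearch root endpoint; returns cluster name and version info.
Invoke-RestMethod -Uri "http://localhost:9200"
```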

With Elasticsearch installed, it’s time to move on to Logstash.


Logstash doesn’t provide a means to install itself as a Windows service like Elasticsearch did, so we’ll use the Non-Sucking Service Manager (NSSM) to help us with that. First, we’re going to need a config file for Logstash that we'll point to when we’re setting up the service. Use a text editor like Notepad to create a new file in the \logstash\bin directory and name it config.json. We need to give it a simple configuration to start with or Logstash won't start properly, so let's put the following into our new config.json file:

input {
  beats {
    port => 5044
    type => "log"
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
This config file sets us up to use the Beats plugin for Logstash, so we'll be able to use the Elastic Beats shippers to send data to our Elastic Stack later on. Note that if you wanted to host Logstash on a separate server from Elasticsearch, this config file is where you would point the output to somewhere other than localhost. For now let's move on and use PowerShell to launch NSSM and install our new service named Logstash.

Invoke-Expression -Command "C:\ELK-Stack\nssm\win64\nssm install Logstash"

In the NSSM service installer window > Application tab we’ll configure the path to the logstash.bat file and the config.json file as shown.

On the Details tab, give the new service an appropriate name and description.

Lastly, on the Dependencies tab add elasticsearch-service-x64, and click Install Service.

You should see a message that the service was installed successfully.
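If you'd rather avoid the NSSM GUI, roughly the same service can be created non-interactively using NSSM's install and set subcommands. A sketch, assuming the file paths used throughout this guide:

```powershell
# Install the Logstash service and set its dependency without the GUI.
& "C:\ELK-Stack\nssm\win64\nssm.exe" install Logstash "C:\ELK-Stack\logstash\bin\logstash.bat" "-f C:\ELK-Stack\logstash\bin\config.json"
& "C:\ELK-Stack\nssm\win64\nssm.exe" set Logstash DependOnService elasticsearch-service-x64
```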

Before we move on, let's install the Beats input plugin for Logstash, as we'll be using Beats to ship data into our Elastic Stack. Note that there are dozens of input plugins available for Logstash. Use PowerShell to install the Beats input plugin now.

Invoke-Expression -Command "C:\ELK-Stack\logstash\bin\logstash-plugin.bat install logstash-input-beats"

It’s time to move on to Kibana.


For Kibana, we’ll do the same as above with NSSM.

Invoke-Expression -Command "C:\ELK-Stack\nssm\win64\nssm install Kibana"

We don’t need to pass any arguments to Kibana, so we’ll leave that field blank this time.

Provide a name and description.

Add both elasticsearch-service-x64 and logstash as dependencies, and click Install Service.

Kibana's default options can be changed by modifying the \kibana\config\kibana.yml file. This is where you could update the value for elasticsearch.url to something other than localhost, if Kibana is on a different server from Elasticsearch. Since we're keeping the defaults for now, there's no need to edit this file currently.
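For reference, the relevant kibana.yml entries look roughly like this. An excerpt with the defaults shown commented out, as they ship; uncomment and edit only for a distributed setup:

```yaml
# kibana.yml (excerpt) - defaults; change elasticsearch.url if ES is remote.
#server.port: 5601
#server.host: "localhost"
#elasticsearch.url: "http://localhost:9200"
```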

With Kibana installed successfully, go ahead and make sure our three new services are started. Due to the dependencies we set up, we’ll need to start Elasticsearch first, then Logstash, and finally Kibana. If all has gone according to plan, you should now be able to open a browser, browse to http://localhost:5601, and see Kibana’s initial setup page. While I was testing I found that Kibana can take a minute or two to load up after the service is started, before the website is accessible. If it doesn't come up right away, just give it a little time.
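A quick way to confirm all three services are up in one shot (service names assume the setup above):

```powershell
# Check the state of the three Elastic Stack services.
Get-Service elasticsearch-service-x64, Logstash, Kibana | Format-Table Name, Status -AutoSize
```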

And that's it for the initial setup of the Elastic Stack on Windows Server 2016. Not very useful yet, is it? Let's talk about how to setup Beats for shipping data to your Elastic Stack.

Elastic Beats

As discussed above, Elastic Beats are the agent programs that we'll use to ship data into Logstash. We downloaded four of them at the beginning of this article so we'll go ahead and install those on our server now. Elastic provides PowerShell scripts for installing each Beat as a Windows service, so we just need to execute each script in a PowerShell window.

Filebeat - for monitoring log files such as IIS logs.

PowerShell.exe -ExecutionPolicy Bypass -File C:\ELK-Stack\filebeat\install-service-filebeat.ps1

Packetbeat* - for monitoring network traffic.

PowerShell.exe -ExecutionPolicy Bypass -File C:\ELK-Stack\packetbeat\install-service-packetbeat.ps1

Topbeat - for monitoring resource usage.

PowerShell.exe -ExecutionPolicy Bypass -File C:\ELK-Stack\topbeat\install-service-topbeat.ps1

Winlogbeat - for monitoring Windows Event Logs.

PowerShell.exe -ExecutionPolicy Bypass -File C:\ELK-Stack\winlogbeat\install-service-winlogbeat.ps1

*For Packetbeat, you'll need to install WinPcap as well on the host(s) that you'll be monitoring network traffic for. If you don't want to use Packetbeat on your Elastic Stack server, skip that install and also skip WinPcap.

Install WinPcap

Now we'll install WinPcap so we can send network data to our Elastic Stack with Packetbeat. We'll select the default options here as well.

There's no additional configuration needed for WinPcap.

Beats Configuration

Within the program directory for each of the Beats we just installed you'll notice a yml configuration file. If you take a look inside, you'll probably notice that the default Beats configuration points to Elasticsearch on port 9200. What we want to do instead is point the Beats to the input plugin that we installed for Logstash earlier on. To do so, we simply comment out the hosts configuration for output.elasticsearch and uncomment the hosts line for output.logstash. The position of these configurations is slightly different in each Beat configuration file but the necessary change is the same.

Here's an example after I've commented the Elasticsearch host and uncommented the same for Logstash. Be sure to comment/uncomment both the output. line and the hosts. line in each config file.
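For example, in winlogbeat.yml the output section ends up looking roughly like this. An excerpt; port 5044 matches the Logstash config created earlier:

```yaml
# winlogbeat.yml (excerpt) - Elasticsearch output commented out, Logstash enabled.
#output.elasticsearch:
#  hosts: ["localhost:9200"]

output.logstash:
  hosts: ["localhost:5044"]
```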

That's all the configuration we need at this stage. Go ahead and start each of your Beats Windows services.

Bringing it all together

Back to Kibana now, it's time to configure our indexes so we can visualize data from each of the Beats that we just turned on. At the Configure an index pattern screen we're going to add an index pattern for each of our Beats. Clear out logstash-* which populates by default, and add each of the following:
  • packetbeat-*
  • topbeat-*
  • winlogbeat-*
  • filebeat-*
Select @timestamp for the Time-field name, then click Create to create the index pattern.

After you've added the first index pattern, you'll need to use the plus icon to add additional indexes.

When you get to Filebeat (there's a reason I listed it last) you'll notice that adding it is unsuccessful. This is expected. Why?

Filebeat expects a log file (or files) as input. Since we're on Windows Server 2016 and we haven't modified this part of the config, it's fairly unlikely there's anything located at /var/log to ingest and push to Logstash. For now, we're OK with this and we'll move on using the other Beats as examples.
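If you do want Filebeat to ship something useful on Windows, point its paths at real log directories, such as IIS logs. An illustrative excerpt using the Beats 5.x prospector syntax; adjust the paths to your environment:

```yaml
# filebeat.yml (excerpt) - prospector pointed at IIS log files.
filebeat.prospectors:
- input_type: log
  paths:
    - C:\inetpub\logs\LogFiles\*\*.log
```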

Click on the Discover tab in Kibana and let's look at what we have. By default, based on the Beats that we added, we should see entries from Packetbeat. If we click the drop down over packetbeat-* we can also select topbeat-* and winlogbeat-* to view that data.

Now we have the foundation in place for our Elastic Stack on Windows Server 2016. We can install the Beats agents on other servers and point them back to Logstash to aggregate data from our entire infrastructure, if we choose.

Tuesday, May 16, 2017

Disabling SMB1 on Windows

The Server Message Block protocol version 1 (SMB1) is obsolete and insecure. Microsoft has been recommending that we disable it for quite a while. In the wake of WannaCry, there's no good reason to leave this running in your environment - probably.

Will it break anything?

The answer here is 'it depends'. Carefully consider your own environment and how SMB/file & printer sharing is being used. What systems need to interact with one another and what versions of SMB do those systems support?
  • Windows XP and Server 2003 need SMB1 for file sharing but newer Windows operating systems do not.
  • Some Linux operating systems may need SMB1 as well, but that can be disabled for Samba starting with version 3.6.
  • Multi-function devices should be checked to verify that the firmware supports SMB2 or 3 as there are reports of some older devices that do not. If you do not use scan-to-network capability, this may be irrelevant.
  • Network/security devices that interact with your Active Directory domain may be an unexpected source of SMB1 reliance. For example, Sophos UTM appliances with current firmware require SMB1 in order to use AD Single-Sign On.
I think the best guidance here is simply to be careful, plan the change, and test thoroughly. It never hurts to have a roll-back plan too.

How do I disable it?

Microsoft has specific guidance available for toggling the SMB protocols in Windows in this article. Bear in mind that we're only interested in disabling SMB1 for now. SMB2/3 are still needed for Windows file sharing in current operating systems, and they do not have the vulnerability that's being exploited by WannaCry. For a handful of machines you can easily hand jam these changes and be done with it. For larger environments, scripting out the change and creating a Group Policy for your Windows machines will certainly be the most efficient method of delivery. There are already dozens of scripts in the wild for disabling the SMB1 protocol on Windows machines. You need look no further than the WannaCry Megathread on Reddit for numerous options. 

I'm a roll-my-own kind of guy, because I like to keep things as simple as possible and ideally learn something from the process. It also helps me to validate that the results of a script are predictable within my specific environment, and that there's no superfluous functionality that's going to cause problems. So I wrote a simple PowerShell script for deployment through AD Group Policy. All it does is query the Win32_OperatingSystem WMI object and then apply the proper fix from Microsoft's article depending on the OS version detected. It works on Windows 7 / Server 2008 through Windows 10 / Server 2016. If you're deploying these changes differently in your environment, I'd love to hear about it.
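For illustration, the core of that version-aware logic can be sketched like this. This is an approximation of the approach described above, not the exact published script; the Set-SmbServerConfiguration cmdlet only exists on Windows 8 / Server 2012 and newer, so older systems fall back to Microsoft's registry method:

```powershell
# Disable SMB1 server-side based on the detected OS version (sketch).
$ver = [version](Get-WmiObject Win32_OperatingSystem).Version
if ($ver -ge [version]"6.2") {
    # Windows 8 / Server 2012 and newer
    Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force
} else {
    # Windows 7 / Server 2008 R2: registry method from Microsoft's guidance
    Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" `
        -Name SMB1 -Type DWord -Value 0 -Force
}
```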

Get the script: Here

Sunday, May 14, 2017

Implementing Crypto-Blocker using FSRM on Windows Server 2012 R2

This is a topic that is well covered, however given the explosion of ransomware thanks to WannaCrypt this week I thought I’d discuss how I’m protecting many of my file servers.

File Server Resource Manager (FSRM) is available for Windows Server versions beginning with 2008, but I’ll be focusing on 2012 R2. The specific capability that we’re interested in for the purpose of detecting and preventing ransomware is File Screening Management. In short, we create a File Group that is dynamically updated with the latest known ransomware file name patterns, and then create a File Screen that we use to trigger email notifications as well as actually blocking share access for infected users. Let’s get started.

Installing & Configuring FSRM

FSRM is a feature of the File and Storage Services role, and can be installed either through the Add Roles and Features Wizard, or via PowerShell.

Installing from the Add Roles and Features Wizard:

Installing from an administrative PowerShell session:
Install-WindowsFeature FS-Resource-Manager -IncludeManagementTools

Once installed, FSRM can be launched via its shortcut on the Start screen, or by launching its management snap-in, fsrm.msc. Once it's open, right-click on File Server Resource Manager and select Configure Options from the context menu.

Within FSRM Options, on the Email Notifications tab, configure the proper settings for your environment. This defines how we will be notified when our File Screen catches something. Ensure that your mail server is configured to accept SMTP messages from your file server, if necessary.

There are many other options that can be configured on this screen, however for my purposes I have left them at the defaults. I do recommend at least reviewing the Notification Limits tab to adjust the throttling for various alerts that FSRM will send out. Setting them all to 0 can be helpful during testing. Click OK to close the options menu.

With the basic options set, we want to create a new File Group for our ransomware file types. We're not worried about fully populating the File Group at this stage, as we'll do that in the next steps. To create a File Group using the GUI, expand File Screening Management and right-click on File Groups. Then, select Create File Group.

In the Create File Group Properties dialogue, type the name for the new file group. I've named it "Anti-Ransomware File Group". This name will have to match what we use in our scripts later. We also need to include at least one file in the list or we will not be able to save the new File Group. I've added *.crypt. Click OK to complete this step.

To create the new File Group instead via PowerShell, you can use the New-FsrmFileGroup cmdlet. Note that the FsrmFileGroup cmdlets do not exist in Server 2008 versions.

New-FsrmFileGroup -Name "Anti-Ransomware File Group" -IncludePattern *.crypt

The next step is creating a File Screen. To create a File Screen via the GUI, expand File Screening Management and right-click on File Screens. Then, select Create File Screen.

On the Create File Screen dialogue, start by browsing to the file share that you want to protect. On my production servers I have numerous shares on a volume, so instead of selecting an individual share I select the entire volume. In this example, I'm just selecting the shared folder. Choose the button to Define custom file screen properties, and click Custom Properties.

On the Settings tab of the File Screen Properties dialogue, ensure that Active Screening is selected, and check the box for Anti-Ransomware File Group in the Select file groups to block box.

Click over to the E-mail Message tab, and check the box to send email to the following administrators. Enter your administrator email addresses, separated by semicolons. If you choose to send an email message to the user you may check that box as well. Edit the subject and message body as you see fit; this is the email you will receive when the file screen catches a ransomware file type.

On the Event Log tab, check the box to send a warning to the event log. If you're aggregating event logs somewhere, you can monitor for event ID 8215 from source SRMSVC to be notified when the file screen triggers.

Click OK. This brings you back to the Create File Screen dialogue. Click Create. When prompted, select the option to save the custom file screen without creating a template (or if you're creating multiple file screens, go ahead and create a template), and click OK.

At this point, we've configured the options needed to catch and be notified when a file matching a pattern in the Anti-Ransomware File Group is attempted to be written to our file share. This would be a good time to try a test. Create a blank text file using Notepad and save it as ransomware.crypt. Ensure that the file extension is actually .crypt and not .crypt.txt (if you have file extensions hidden, for example.) Copy the ransomware.crypt file and paste it into your newly protected file share. What you should see is that the copy will fail as access is denied. If you've properly configured email, you will receive an email from this event as well. You can verify that FSRM is responsible for blocking the file by reviewing the Windows Application Event Log and checking for event ID 8215 from SRMSVC, as shown here:
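The same smoke test can be scripted. A sketch; the share path below is hypothetical, so replace it with your protected share:

```powershell
# Create a dummy ransomware-named file and try to copy it to the protected share.
New-Item -Path "$env:TEMP\ransomware.crypt" -ItemType File -Force | Out-Null
Copy-Item "$env:TEMP\ransomware.crypt" -Destination "\\localhost\Share"   # expect Access denied

# Confirm FSRM logged the block.
Get-WinEvent -FilterHashtable @{LogName='Application'; ProviderName='SRMSVC'; Id=8215} -MaxEvents 1
```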

Updating the Anti-Ransomware File Group

With the foundation in place and our test successful, we can go ahead and populate the other file patterns that we want to look out for. We're going to use PowerShell to download a current list of known ransomware patterns from Experiant's FSRM API, and update our Anti-Ransomware File Group. In an administrative PowerShell session, run the following command:

Set-FsrmFileGroup -Name "Anti-Ransomware File Group" -IncludePattern @((Invoke-WebRequest -Uri "https://fsrm.experiant.ca/api/v1/get" -UseBasicParsing).Content | ConvertFrom-Json | ForEach-Object {$_.filters})

When complete, let's take a look at our Anti-Ransomware File Group in FSRM and notice that the included files have updated. For the moment, we're in good shape. What we need now is a way to automate this script so we can retrieve updated file types from Experiant periodically. Windows built-in Task Scheduler is a quick and easy way to get this done.

First, save the above PowerShell in a script file. You can do this by simply pasting the command into a new Notepad file and saving it with a .ps1 extension. I called mine Update_FSRM_AntiRansomware.ps1, and saved it in C:\Scripts. I use the same directory for scripts across all of my servers for simplicity (it makes things more portable, too). With the script saved to a file, let's set up the scheduled task.

Within Task Scheduler select the Task Scheduler Library and then right-click to Create New Task. Give it a descriptive name, set the task to run under the SYSTEM account, and check the box to have it run with highest privileges.

On the Triggers tab, create a new trigger and set it to begin on a schedule. In this example I've set it to run daily at 7AM so it can download the latest file types before my users start work for the day. Be sure to check the box to enable the schedule and then click OK.

On the Actions tab, create a new Action. Select Start a program from the drop-down list. In the Program/script box, type PowerShell.exe. The Add arguments box is where we will call our ps1 script from above by typing the following, then click OK.

-ExecutionPolicy ByPass -File "C:\Scripts\Update_FSRM_AntiRansomware.ps1"

The Conditions and Settings tabs can be customized if desired or left as default. When finished, click OK to save the new task.
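The same task can be created from PowerShell if you prefer, using the ScheduledTasks cmdlets available on Server 2012 and later. A sketch; the task name is my own choice:

```powershell
# Register a daily 7AM task that runs the update script as SYSTEM.
$action  = New-ScheduledTaskAction -Execute "PowerShell.exe" `
    -Argument '-ExecutionPolicy Bypass -File "C:\Scripts\Update_FSRM_AntiRansomware.ps1"'
$trigger = New-ScheduledTaskTrigger -Daily -At 7am
Register-ScheduledTask -TaskName "Update FSRM Anti-Ransomware File Group" `
    -Action $action -Trigger $trigger -User "SYSTEM" -RunLevel Highest
```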

How do you know if it's working? Since we just populated the File Group in the process of setting this up, it's unlikely that anything new is available. To test, edit the Anti-Ransomware File Group to remove the first file or two from the list and save the changes. Manually trigger the scheduled task and check the File Group again. If the files you removed are back, the update script and scheduled task are working.

Where does this get us?

An important thing to note about what we've done so far is that we have not done anything that will prevent ransomware from encrypting our files. Why did we bother doing all of this, then? That's a great question. This crypto-blocker strategy relies on the fact that the ransomware we know about today changes the file extension of the files it encrypts, and deposits ReadMe files with instructions on how to pay up. That means that our finance teams' Excel files might change from payments.xlsx to payments.xlsx.crypt. The FSRM File Screen that we set up detects this new file extension and triggers an alert to let us know that something bad is happening. What we have at this point is a crypto-canary, not a crypto-blocker. For some, that might be enough and you can stop here. When you see the notification you can go into triage mode and prepare to restore from backups.

How do we actually prevent the encryption from spreading?

If just being notified of the infection is not sufficient and you want to empower your file servers to protect themselves, read on. We're going to leverage the ability of FSRM to run a command when a file screen triggers in order to block a ransomware infection from ransacking our file server. You'll recall on the File Screen Properties dialogue there is a Command tab which we skipped past before. Let's return to this tab and check the box to Run this command or script.

You'll notice here that I've gone ahead and entered the path for cmd.exe into the script box. By entering command arguments in the box below, we will custom tailor the behavior of our crypto-blocker. We have a couple of options.

Option 1: Stop the LanManServer service (a.k.a. The Sledgehammer Approach.)

/c net stop lanmanserver /y

This will immediately stop file and printer sharing on the server for all users. There are similar approaches documented which include disabling the affected share completely, or even powering off the server. The net result is effectively the same: All of your users lose access. A ransomware process running on a user's machine will lose access to the share and the damage is halted.

Option 2: Block the infected user's share access.

The most graceful method I've found for this uses PowerShell instead of cmd, so replace the path to cmd.exe with the path to PowerShell's executable (C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe).

Then add the arguments:

-ExecutionPolicy ByPass -Command "& { Get-SmbShare -Special $false | ForEach-Object { Block-SmbShareAccess -Name $_.Name -AccountName '[Source Io Owner]' -Force } }"

What this bit of PowerShell is doing is enumerating all of the non-administrative shares on the server with Get-SmbShare, then passing those values to Block-SmbShareAccess, grabbing the username passed as a parameter from FSRM in [Source Io Owner]. Under command security set the command to run as Local System, and click OK.

Now when the file screen is triggered, the end result is that the share permissions (for every share on the server) for the infected user are explicitly denied. Permissions for other users are unaffected.

Option 2 is my preferred approach for automated file server defense. Some may prefer the blanket approach offered by Option 1 and that's fine. However in my environment there are false positives every now and again, and I much prefer to work with one user on restoring access than I do explaining to my entire organization why they lost access to everything they were working on.
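When a false positive does hit with Option 2, restoring the user is the mirror image of the block. A sketch; replace DOMAIN\user with the affected account:

```powershell
# Remove the explicit deny added by the file screen command for one user.
Get-SmbShare -Special $false | ForEach-Object {
    Unblock-SmbShareAccess -Name $_.Name -AccountName 'DOMAIN\user' -Force
}
```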

Parting Thoughts

If you're reading this and thinking "Well what about the files that were encrypted in order to trigger the file screen in the first place?", you're right on point. This whole setup relies on a file on your server being encrypted by ransomware in order to trigger any notifications or protection. That means that regardless of how fast and effective this is, at least one file is still going to end up encrypted. Bummer.

One idea to help mitigate that is to place a honeypot at the top of the alphabetical list within each of your shares. For example: 

The _honeypot folder is sorted to the top alphabetically, making it the first target for many known ransomware variants. Place a normal text, Word, or Excel file or two within the folder as sacrificial lambs in the hope that they are encrypted first, triggering the file screen and saving the rest of the server's data. This certainly isn't a bad idea, but there's nothing stopping ransomware developers from deciding to encrypt things in a different order tomorrow.
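Creating the honeypot is trivial to script. A sketch; the share path is hypothetical:

```powershell
# Create a _honeypot folder that sorts first, with a sacrificial file inside.
New-Item -Path "D:\Shares\Finance\_honeypot" -ItemType Directory -Force | Out-Null
"Canary file - do not delete." | Set-Content "D:\Shares\Finance\_honeypot\aaa_canary.txt"
```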

We just can't escape the need for good backups. If the honeypot folder is skipped over or the infection isn't caught quickly enough, we will inevitably end up with encrypted business data. Backup strategies are a different article but it's crucial to have something in place so you can recover from a ransomware event. With properly configured FSRM file screens as part of a ransomware prevention and recovery strategy, you may just be reimaging a user's workstation and restoring a file or two from last night's backup, instead of preparing three envelopes.