
Thursday, July 13, 2017

Securing Kibana with an IIS Reverse Proxy and Windows Authentication

In the absence of Elastic’s for-pay X-Pack add-on, the Elastic Stack lacks several notable features that, in my opinion, are absolutely required if it is to be used in production. One such feature is user authentication. Once you’ve configured Kibana to be accessible over the network, any Joe or Sally with network access can browse to or stumble upon your Kibana dashboard and start digging through your log data. Not great.

In this post, we’ll take a few simple steps toward providing some basic security for our Elastic front-end, while remaining free of cost and entirely in the Windows world. We’ll accomplish that by installing IIS on our Elastic server, and configuring it as a reverse proxy for Kibana, authenticated to a security group of our choosing.

Why Use a Reverse Proxy?

Since Kibana doesn’t support any sort of authentication mechanism out of the box, we have to be creative. By using the reverse proxy feature in the URL Rewrite extension for IIS, we can use IIS as a middleman between our clients and the otherwise unprotected Kibana UI. We'll restrict Kibana connections to the local server only, and set IIS as the gatekeeper for outside connections. Conveniently, this also enables us to configure SSL within IIS as we would with any other website.

Getting Started

To start, we need to ensure that Kibana is only listening for connections on localhost (127.0.0.1). We’ll do that by reviewing the Kibana configuration file and verifying with netstat. If you followed my previous guide on installing the Elastic stack, the defaults should already be set correctly for what we are doing now.

  • Locate the kibana.yml configuration file in the config directory and open it with a text editor. Unless you’ve modified these values, near the top of the file you should see server.port: 5601 and server.host: "localhost". These are the settings we want: server.host: "localhost" prevents Kibana from accepting connections from anywhere other than the local server. The server.port value doesn’t really matter since we’re only using it locally on this server, so I’ve kept the default there as well.


  • If you’re using a firewall (like Windows Firewall) on the local server or a hardware appliance on your network, you can go ahead and close port 5601 at this stage, if necessary. Nothing will be accessing the server over the network on that port once we’re done.
  • As the final check of our Kibana configuration, we can use netstat to validate that Kibana is listening on 127.0.0.1:5601, as that’s where we’ll be pointing our IIS Reverse Proxy.
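
If you'd like to check (or close the port) from PowerShell rather than the GUI, something like the following works; the firewall rule name below is arbitrary, so call it whatever fits your environment.

# Confirm Kibana is listening on the loopback address only
Get-NetTCPConnection -LocalPort 5601 -State Listen | Select-Object LocalAddress, LocalPort, State

# Or with the classic tool
netstat -ano | findstr ":5601"

# Optionally block inbound 5601 at the Windows Firewall
New-NetFirewallRule -DisplayName 'Block Kibana 5601' -Direction Inbound -Protocol TCP -LocalPort 5601 -Action Block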


Install IIS Components

First, we’ll need to install the Web Server role along with the URL Authorization, Windows Authentication, and Management Tools features. You can accomplish this via the Add Roles and Features Wizard in Server Manager or via PowerShell.


Install-WindowsFeature Web-Server, Web-Url-Auth, Web-Windows-Auth -IncludeManagementTools

With the required features installed, it’s time to configure our reverse proxy. Note that the URL Rewrite module and Application Request Routing (ARR) are separate downloads from Microsoft rather than Windows features; install both before continuing.

Configure the IIS Reverse Proxy

Rather than trying to reinvent the wheel, I followed parts one and two of this fantastic guide on Microsoft Blogs for the reverse proxy setup, which I’ll be recapping below. The guide contains a lot more detail on the why and how, if you’re interested. I did not follow part 3 of the guide as it was not necessary.


  • Launch IIS and select the website you'll be configuring as the reverse proxy. Click on the URL Rewrite feature in the center panel.


  • Then, click Add Rule(s)... in the Actions panel on the right.


  • In the Add Rule(s) dialog, select Reverse Proxy and click OK.


  • Click OK again to enable proxy functionality within Application Request Routing.



  • In the Add Reverse Proxy Rules dialog under Inbound Rules, we’ll give it our Kibana URL (localhost:5601) as the location where requests will be forwarded. We also want to enable Rewriting of domain names under Outbound Rules and populate the external URL for our server under the To: field. In this case the external URL will be whatever our clients on the network will type into their browsers to access Kibana. I’m just using the server name in my lab environment. Click OK to complete the dialog.

Now we have the basic reverse proxy routing in place, but we’re not quite done yet. If we try to access Kibana via IIS at this stage, we’re greeted with an unfriendly 500.52 error.


What’s happening is that Kibana is using HTTP compression when returning results to IIS, and the URL Rewrite module can’t modify the response when it's compressed. The workaround for this is to configure IIS to tell Kibana not to return compressed responses.

  • With our website selected, let’s go back to the URL Rewrite feature. This time we’ll choose View Server Variables…


  • On the Allowed Server Variables screen, choose Add… to add a new server variable called HTTP_ACCEPT_ENCODING, and click OK. Follow the same process to add a second variable called HTTP_X_ORIGINAL_ACCEPT_ENCODING.

  • Next, go back to URL Rewrite rules and select the inbound rule. Then click Edit…


  • On the Edit Inbound Rule screen, expand the Server Variables section and click Add… Select the HTTP_X_ORIGINAL_ACCEPT_ENCODING variable that we created earlier from the Server variable name: drop-down box. Under Value: type {HTTP_ACCEPT_ENCODING}. Be sure to include the curly braces so the rule knows to use the value of that variable. Click OK.

  • Click Add… again to add another server variable. This time select HTTP_ACCEPT_ENCODING from the drop-down box, and type any text value into the Value: field. We actually need this variable’s value to be empty, but the field won’t accept a blank, so we enter a placeholder now and will clear it in a later step. I typed “123” as my value.




  • With both variables set, click Apply in the Actions panel.



  • Now we need to replace our arbitrary text value (“123” in my case) with a blank. This is done in our website’s web.config file. Since I’m using the Default Web Site, that’s located in C:\inetpub\wwwroot. Open the web.config file in a text editor and find the text value that you entered. Select the value between the quotes and delete it, leaving just the quotes. Save the file.
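
If you'd rather not hand-edit web.config, the same change can be scripted with the WebAdministration module. This is only a sketch: it assumes the site is the Default Web Site and that URL Rewrite named the rule ReverseProxyInboundRule1 (the default), so substitute your actual site and rule names.

Import-Module WebAdministration

# Clear the placeholder value so HTTP_ACCEPT_ENCODING is forwarded to Kibana as empty
Set-WebConfigurationProperty -PSPath 'IIS:\Sites\Default Web Site' -Filter "system.webServer/rewrite/rules/rule[@name='ReverseProxyInboundRule1']/serverVariables/set[@name='HTTP_ACCEPT_ENCODING']" -Name 'value' -Value ''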
That addresses the inbound portion of our configuration, now we need to address outbound traffic.

  • Under URL Rewrite, click Add Rule(s)... again, this time selecting Blank rule under Outbound rules.


  • We’ll name our new rule RestoreAcceptEncoding, and select <Create New Precondition…> from the drop-down menu. On the Add Precondition screen, provide the name NeedsRestoringAcceptEncoding and ensure Regular Expressions is selected from the Using: drop-down menu.


  • Click Add… to add a new condition. For the Condition input: type {HTTP_X_ORIGINAL_ACCEPT_ENCODING}, again making sure to include the curly braces. Under Pattern:, type .+ (no quotes). Click OK. Click OK again to close the Add Precondition dialog.




  • Still on the Edit Outbound Rule screen, find the Match section and set the Matching scope: to Server Variable. Type HTTP_ACCEPT_ENCODING as the Variable name:. For the Pattern:, type ^(.*).



  • Lastly under the Action section, ensure that Action type: is set to Rewrite. For the Value: type {HTTP_X_ORIGINAL_ACCEPT_ENCODING}, again being sure to include the curly braces. Then, click Apply.



If everything has gone according to plan, reverse proxying from IIS to Kibana should now be working. If you type http://localhost into a web browser on the Elastic server, you should see Kibana being served via IIS over port 80.
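
For a quick sanity check from PowerShell (assuming the site is still bound to port 80 on the Elastic server), the proxied site should return a 200 status code:

# Should return 200 once the reverse proxy is serving Kibana
Invoke-WebRequest -Uri 'http://localhost' -UseBasicParsing | Select-Object StatusCode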

Securing Kibana

And now, finally, we can do what we set out to do: Add some security to Kibana so it isn’t openly accessible to anyone on our network.

Configure SSL Certificate

In order to ensure that we’re not passing credentials over the network in the clear, we need to configure IIS with an SSL certificate and bind it to our website. If you want to generate a certificate for this server from your internal CA or a public CA, that’s perfectly fine. I’m going to use a self-signed certificate for this lab.
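
If you'd prefer to create the self-signed certificate from PowerShell instead of the IIS console, something like the following works on Server 2016; the DNS name elastic01 below is just a stand-in for your server's name.

# Create a self-signed certificate in the local machine store (placeholder DNS name)
New-SelfSignedCertificate -DnsName 'elastic01' -CertStoreLocation 'Cert:\LocalMachine\My'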


  • Select the server name in the left-hand panel, and then choose the Server Certificates option.


  • We then choose Create Self-Signed Certificate… from the Actions pane.



  • Type the name you want to use for referencing this certificate. I just used the server name. Click OK.



  • With the certificate created, we can go ahead and bind it to our website. To do that, expand the server in IIS and select the website.



  • Then, select Bindings… from the Actions pane.



  • On the Site Bindings screen, choose Add…



  • On the Add Site Binding screen, choose HTTPS as the type and select your certificate from the SSL certificate: drop-down menu.



  • Click OK again to add the site binding, and then click Close to close the Site Bindings screen. Now we’ll be able to access our website over HTTPS.
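
The binding can also be scripted. Here's a rough PowerShell equivalent, assuming the Default Web Site and the placeholder certificate name used earlier:

Import-Module WebAdministration

# Add an HTTPS binding on 443 and attach the certificate to it
New-WebBinding -Name 'Default Web Site' -Protocol https -Port 443
$cert = Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -like '*elastic01*' } | Select-Object -First 1
(Get-WebBinding -Name 'Default Web Site' -Protocol https).AddSslCertificate($cert.Thumbprint, 'my')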

Configuring User Authentication

The final step for this guide is to enable user authentication for our Kibana proxy. What I’ve elected to do in my lab environment is configure an Active Directory group with members that I’ve chosen to grant access to Kibana. This same process could also be done with a local Windows group, or individually selected user accounts if desired.

From my lab’s domain controller, I’ve created a security group called Role-G-ElasticAdmins. The specific group name isn't important. This is the naming convention that I use for denoting that this is a Global security group and it is for granting a particular business Role to a set of users. In this case, the only user with permission to access Kibana will be the SMBAdmin user.
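
If you'd rather script it, the group and its membership can be created with the ActiveDirectory module; the names below are just the ones from my lab.

Import-Module ActiveDirectory

# Global security group for Kibana access, plus its single member
New-ADGroup -Name 'Role-G-ElasticAdmins' -GroupScope Global -GroupCategory Security
Add-ADGroupMember -Identity 'Role-G-ElasticAdmins' -Members 'SMBAdmin'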



  • Back on our Elastic server in IIS, we need to select our website and choose the Authentication option.



  • Within Authentication, we need to set Anonymous Authentication to Disabled, and set Windows Authentication to Enabled.



  • Back to the main IIS screen, we’ll now select Authorization Rules.



  • We need to delete the Allow -> All Users rule that is created by default. Then, click Add Allow Rule… in the Actions pane.



  • On the Add Allow Authorization Rule dialog, we want to select the radio button for Specified roles or user groups:, and type the name of the group for which we’re allowing access. Then, click OK.
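
If you prefer to script these last few steps, a sketch like the following should be roughly equivalent; it assumes the Default Web Site and my lab's domain and group name, and it leaves removing the default Allow All Users rule to the GUI step above.

Import-Module WebAdministration

# Disable anonymous auth and enable Windows auth for the site
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Location 'Default Web Site' -Filter 'system.webServer/security/authentication/anonymousAuthentication' -Name 'enabled' -Value $false
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Location 'Default Web Site' -Filter 'system.webServer/security/authentication/windowsAuthentication' -Name 'enabled' -Value $true

# Allow only the Kibana access group
Add-WebConfiguration -PSPath 'IIS:\Sites\Default Web Site' -Filter 'system.webServer/security/authorization' -Value @{accessType='Allow'; roles='LAB\Role-G-ElasticAdmins'}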


That concludes the configuration. Let’s test it out.


  • Open a web browser on the Elastic server and type https://localhost. If all has gone according to plan, you should be prompted to enter credentials (you may have to bypass the certificate warning if you used a self-signed certificate like I did).





  • After entering your credentials, you should be greeted with the familiar Kibana interface. Take note of the address bar to ensure that you’ve accessed the site over HTTPS.


We’re now successfully proxying Kibana’s unsecured web interface on port 5601 through IIS, secured with HTTPS encryption and Windows authentication. To make the secure interface available over the network you simply need to permit HTTPS/TCP 443 through your firewall(s) as you would with any other website, and use a web browser on your client machine to browse to it by DNS name or IP address.
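
On the Elastic server itself, that can be as simple as a Windows Firewall rule like this one (the rule name is arbitrary):

New-NetFirewallRule -DisplayName 'Kibana via IIS (HTTPS)' -Direction Inbound -Protocol TCP -LocalPort 443 -Action Allow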

Friday, May 26, 2017

Installing the Elastic Stack (ELK 5.4) on Windows Server 2016

Elasticsearch, Logstash, and Kibana from Elastic are the three major products that make up the Elastic Stack (what used to be called the ELK Stack). It represents a hugely versatile set of tools that can be used to collect and analyze data from just about any source. There are tons of products in this space, so why bother with the Elastic Stack? Logging and event management solutions are often expensive, and generally not where SMBs want to spend their limited IT budget. The Elastic Stack is an open-source solution with a huge amount of configurability and customization, giving you quite a lot of bang for your buck - if you can invest the time to install and configure it. And whether you're operating in an all-Windows environment or simply not interested in working with Linux, there are plenty of good reasons to install your Elastic Stack on Windows Server. Let's take a look.

This guide will also work with Windows Server 2012 R2. The process is exactly the same.

Architecture

Before diving into the installation, I wanted to take a second to review the architecture of the Elastic Stack that we'll be building. If you're a visual learner like me, it may aid in understanding how these components fit together and interact with one another.


The Elastic Stack allows you to visualize data from myriad sources. For our simplified example, consider Windows Event Logs.

  1. An agent program installed on our server captures Event Log data and ships it to Logstash. In this guide we're using Elastic Beats as our agents. Specifically for Windows Event Logs we will use Winlogbeat. (Note that agent programs are not required for all data sources; network appliance syslogs, for example.)
  2. Logstash receives the input from Winlogbeat, considers any filters and performs any transforms that we've defined, and ships the data to Elasticsearch.
  3. Elasticsearch indexes and centrally stores the data from Logstash, and makes it available for searching and analytics.
  4. Kibana connects to Elasticsearch to provide a friendly user interface for filtering and visualizing your data.

For this guide we'll be installing all three applications on one Windows Server 2016 VM, however they do support distributed installation.

Download Installation Files

Start by downloading Elasticsearch, Logstash, and Kibana from the Elastic website. While we're there, let's also download Filebeat, Packetbeat, Winlogbeat, and Topbeat (or Metricbeat now, as Topbeat has been deprecated). We'll need those later. Choose the option for Windows x64, Windows, or ZIP as appropriate.

https://www.elastic.co/downloads

Then, download the Java Development Kit (JDK) for Windows x64.

http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

You’ll also need WinPcap if you want to use Packetbeat to send network data to your Elastic Stack.

https://www.winpcap.org/install/

Lastly, download the Non-Sucking Service Manager (NSSM).

https://nssm.cc/download

Place everything into a folder on your server's C drive. I called mine ELK-Stack and that's what you'll see referenced throughout this guide.

Install Java Development Kit (JDK)

Once everything is downloaded, we'll start by installing the JDK with the default options. Make note of the installation directory as we’re going to create an environment variable with that path in the next step.





With the JDK installed, we need to create an environment variable that points to the program directory. Browse to System Properties > Advanced Tab, and select Environment Variables…


Click New... at the bottom to add a new System Variable. Name it JAVA_HOME and provide the path of the JDK installation that we just completed.


Click OK to finish, and OK again to close System Properties.
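
If you'd rather set the variable from an elevated PowerShell prompt, the following does the same thing; the JDK path is an example, so use whatever directory your installer actually created.

# Set JAVA_HOME machine-wide (new processes will pick it up)
[Environment]::SetEnvironmentVariable('JAVA_HOME', 'C:\Program Files\Java\jdk1.8.0_131', 'Machine')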

Installing Elastic Stack

With the JDK installed, let’s go ahead and extract all of our zip packages to the ELK-Stack folder. I’ve removed the version info from my folder names to neaten it up a bit, but it isn’t necessary. Just make note of the file paths when following this guide if yours are named differently.
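
On PowerShell 5 and later you can do the extraction with Expand-Archive; the file names below are examples, so match them to the versions you actually downloaded.

# Extract each package into C:\ELK-Stack (adjust names/versions to your downloads)
Expand-Archive -Path 'C:\ELK-Stack\elasticsearch-5.4.0.zip' -DestinationPath 'C:\ELK-Stack'
Expand-Archive -Path 'C:\ELK-Stack\logstash-5.4.0.zip' -DestinationPath 'C:\ELK-Stack'
Expand-Archive -Path 'C:\ELK-Stack\kibana-5.4.0.zip' -DestinationPath 'C:\ELK-Stack'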


With everything extracted, let's get started installing each application as a service, so we can control them like other services and have them launch at Windows startup.

Elasticsearch

Of the three applications in the Elastic Stack, Elasticsearch is the only one able to install itself as a service out of the box. To do that, we run the elasticsearch-service.bat file with the install option. That can be done at a command prompt, or by using the Invoke-Expression cmdlet within PowerShell. Elastic uses the PowerShell cmdlets in their Windows documentation, so I’ll use those as well for the remainder of this post.


elasticsearch-service install

Performing the Elasticsearch service installation with PowerShell:


Invoke-Expression -Command "C:\ELK-Stack\elasticsearch\bin\elasticsearch-service install"

After running the install command you should see a response indicating that the service has been installed successfully. Next, we need to tweak the properties for the service by launching the service manager. Use the following PowerShell command.

Invoke-Expression -Command "C:\ELK-Stack\elasticsearch\bin\elasticsearch-service manager"

On the Elasticsearch Properties dialog, change the Startup Type to Automatic and start the service.


This is also where you can adjust the Java memory settings, which will be useful when we have more devices logging to our Elastic Stack. For now, we can leave the Java settings alone.


Once you've made your changes, click OK to close the Elasticsearch properties.

With the Elasticsearch service started, go ahead and open a web browser on your server and point it to http://127.0.0.1:9200. In Chrome you should see a small JSON response describing your node and cluster. Depending on your settings, Internet Explorer may just prompt you to download a .json file (the contents will be the same as what Chrome displays). This is fine and there's no need to keep the file; the purpose of the test is just to validate that Elasticsearch is reachable on port 9200.
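
The same check can be run from PowerShell without a browser:

# Returns the node/cluster banner JSON if Elasticsearch is listening on 9200
Invoke-RestMethod -Uri 'http://127.0.0.1:9200'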


With Elasticsearch installed, it’s time to move on to Logstash.

Logstash

Logstash doesn’t provide a way to install itself as a Windows service like Elasticsearch does, so we’ll use the Non-Sucking Service Manager (NSSM) to help us with that. First, we’re going to need a config file for Logstash that we'll point to when setting up the service. Use a text editor like Notepad to create a new file in the \logstash\bin directory and name it config.json. We need to give it a simple configuration to start with or Logstash won't start properly, so let's put the following into our new config.json file:

input {
  # Listen for events from Elastic Beats shippers
  beats {
    port => 5044
    type => "log"
  }
}

output {
  # Send events to the local Elasticsearch instance, one index per Beat per day
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

This config file sets us up to use the Beats plugin for Logstash, so we'll be able to use the Elastic Beats shippers to send data to our Elastic Stack later on. Note that if you wanted to host Logstash on a separate server from Elasticsearch, this config file is where you would point the output to somewhere other than localhost. For now let's move on and use Powershell to launch NSSM and install our new service named Logstash.

Invoke-Expression -Command "C:\ELK-Stack\nssm\win64\nssm install Logstash"

In the NSSM service installer window, on the Application tab, we’ll point the Path at the logstash.bat file and reference our config.json file in the Arguments field.


On the Details tab, give the new service an appropriate name and description.


Lastly, on the Dependencies tab add elasticsearch-service-x64, and click Install Service.


You should see a message that the service was installed successfully.



Before we move on, let's install the Beats input plugin for Logstash, as we'll be using Beats to ship data into our Elastic Stack. Note that there are dozens of input plugins available for Logstash. Use Powershell to install the Beats input plugin now.

Invoke-Expression -Command "C:\ELK-Stack\logstash\bin\logstash-plugin.bat install logstash-input-beats"

It’s time to move on to Kibana.

Kibana

For Kibana, we’ll do the same as above with NSSM.

Invoke-Expression -Command "C:\ELK-Stack\nssm\win64\nssm install Kibana"

We don’t need to pass any arguments to Kibana, so we’ll leave that field blank this time.


Provide a name and description.


Add both elasticsearch-service-x64 and logstash as dependencies, and click Install Service.


Kibana's default options can be changed by modifying the \kibana\config\kibana.yml file. This is where you could update the value for elasticsearch.url to something other than localhost, if Kibana is on a different server from Elasticsearch. Since we're keeping the defaults for now, there's no need to edit this file currently.

With Kibana installed successfully, go ahead and make sure our three new services are started. Due to the dependencies we set up, we’ll need to start Elasticsearch first, then Logstash, and finally Kibana. If all has gone according to plan, you should now be able to open a browser, go to http://127.0.0.1:5601, and see Kibana’s initial setup page. While testing, I found that Kibana can take a minute or two after the service starts before the website is accessible, so if it doesn't come up right away just give it a moment.
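
Starting them in order from PowerShell looks like this, using the service names we chose above:

# Start in dependency order: Elasticsearch, then Logstash, then Kibana
Start-Service 'elasticsearch-service-x64'
Start-Service 'Logstash'
Start-Service 'Kibana'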


And that's it for the initial setup of the Elastic Stack on Windows Server 2016. Not very useful yet, is it? Let's talk about how to setup Beats for shipping data to your Elastic Stack.

Elastic Beats

As discussed above, Elastic Beats are the agent programs that we'll use to ship data into Logstash. We downloaded four of them at the beginning of this article so we'll go ahead and install those on our server now. Elastic provides Powershell scripts for installing each Beat as a Windows service, so we just need to execute each script in a Powershell window.

Filebeat - for monitoring log files such as IIS logs.

PowerShell.exe -ExecutionPolicy Bypass -File C:\ELK-Stack\filebeat\install-service-filebeat.ps1

Packetbeat* - for monitoring network traffic.

PowerShell.exe -ExecutionPolicy Bypass -File C:\ELK-Stack\packetbeat\install-service-packetbeat.ps1

Topbeat - for monitoring resource usage.

PowerShell.exe -ExecutionPolicy Bypass -File C:\ELK-Stack\topbeat\install-service-topbeat.ps1

Winlogbeat - for monitoring Windows Event Logs.

PowerShell.exe -ExecutionPolicy Bypass -File C:\ELK-Stack\winlogbeat\install-service-winlogbeat.ps1

*For Packetbeat, you'll need to install WinPcap as well on the host(s) that you'll be monitoring network traffic for. If you don't want to use Packetbeat on your Elastic Stack server, skip that install and also skip WinPcap.

Install WinPcap

Now we'll install WinPcap so we can send network data to our Elastic Stack with Packetbeat. We'll select the default options here as well.





There's no additional configuration needed for WinPcap.

Beats Configuration

Within the program directory for each of the Beats we just installed you'll notice a yml configuration file. If you take a look inside, you'll probably notice that the default Beats configuration points to Elasticsearch on port 9200. What we want to do instead is point the Beats to the input plugin that we installed for Logstash earlier on. To do so, we simply comment out the hosts configuration for output.elasticsearch and uncomment the hosts line for output.logstash. The position of these configurations is slightly different in each Beat configuration file but the necessary change is the same.

After commenting out the Elasticsearch host and uncommenting the same for Logstash, double-check that you've changed both the output.elasticsearch / output.logstash lines and their respective hosts lines in each config file.


That's all the configuration we need at this stage. Go ahead and start each of your Beats Windows services.

Bringing it all together

Back to Kibana now, it's time to configure our indexes so we can visualize data from each of the Beats that we just turned on. At the Configure an index pattern screen we're going to add an index pattern for each of our Beats. Clear out logstash-* which populates by default, and add each of the following:
  • packetbeat-*
  • topbeat-*
  • winlogbeat-*
  • filebeat-*
Select @timestamp for the Time-field name, then click Create to create the index pattern.



After you've added the first index pattern, you'll need to use the plus icon to add additional indexes.


When you get to Filebeat (there's a reason I listed it last) you'll notice that adding it is unsuccessful. This is expected. Why?


Filebeat expects a log file (or files) as input. Since we're on Windows Server 2016 and we haven't modified this part of the config, it's fairly unlikely there's anything located at /var/log to ingest and push to Logstash. For now, we're OK with this and we'll move on using the other Beats as examples.

Click on the Discover tab in Kibana and let's look at what we have. By default, based on the Beats that we added, we should see entries from Packetbeat. If we click the drop down over packetbeat-* we can also select topbeat-* and winlogbeat-* to view that data.


Now we have the foundation in place for our Elastic Stack on Windows Server 2016. We can install the Beats agents on other servers and point them back to Logstash to aggregate data from our entire infrastructure, if we choose.