Elasticsearch Cluster on Ubuntu 14.04

Featured

Elasticsearch is a popular open-source search server used for real-time distributed search and analysis of data. When used for anything other than development, Elasticsearch should be deployed across multiple servers as a cluster for the best performance, stability, and scalability.


Demonstration:

OMegha Platform.

Image – Ubuntu-14.04

Prerequisites:

You must have at least three Ubuntu 14.04 servers to complete this tutorial, because an Elasticsearch cluster should have a minimum of 3 master-eligible nodes. If you want to have dedicated master and data nodes, you will need at least 3 servers for your master nodes plus additional servers for your data nodes.

Install Java 8:

Elasticsearch requires Java, so we will install that now. We will install a recent version of Oracle Java 8 because that is what Elasticsearch recommends. It should, however, work fine with OpenJDK if you decide to go that route.

Complete this step on all of your Elasticsearch servers.

Add the Oracle Java PPA to apt:

$ sudo add-apt-repository -y ppa:webupd8team/java

Update your apt package database:

$ sudo apt-get update

Install the latest stable version of Oracle Java 8 with this command (and accept the license agreement that pops up):

$ sudo apt-get -y install oracle-java8-installer

Be sure to repeat this step on all of your Elasticsearch servers.

Now that Java 8 is installed, let's install Elasticsearch.

Install Elasticsearch:

Elasticsearch can be downloaded directly from elastic.co in zip, tar.gz, deb, or rpm packages. For Ubuntu, it's best to use the deb (Debian) package, which will install everything you need to run Elasticsearch.

$ wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.2.deb

Then install it in the usual Ubuntu way with the dpkg command like this:

$ sudo dpkg -i elasticsearch-1.7.2.deb

This results in Elasticsearch being installed in /usr/share/elasticsearch/, with its configuration files placed in /etc/elasticsearch and its init script added in /etc/init.d/elasticsearch.

$ sudo update-rc.d elasticsearch defaults

Be sure to repeat these steps on all of your Elasticsearch servers.

Elasticsearch is now installed, but it needs to be configured before you can use it.

Configure Elasticsearch Cluster:

Now it's time to edit the Elasticsearch configuration. Complete these steps on all of your Elasticsearch servers.

Open the Elasticsearch configuration file for editing:

$ sudo vi /etc/elasticsearch/elasticsearch.yml

Set Cluster Name:

Next, set the name of your cluster, which will allow your Elasticsearch nodes to join and form the cluster. You will want to use a descriptive name that is unique within your network.

Find the line that specifies cluster.name, uncomment it, and replace its value with your desired cluster name. In this tutorial, we will name our cluster "elasticsearch_cluster":

ELK1

Set Node Name:

Next, we will set the name of each node. This should be a descriptive name that is unique within the cluster.

Find the line that specifies node.name, uncomment it, and replace its value with your desired node name. In this tutorial, we will set each node name to the hostname of the server by using the ${HOSTNAME} environment variable:

ELK2

For Master Node:

For a master node, set node.master to true and node.data to false:

ELK3

For Data Node:

For a data node, set node.master to false and node.data to true:

ELK4

Network Host:

Set network.host to 0.0.0.0 so that Elasticsearch listens on all interfaces (only do this on a trusted network):

ELK5

Set Discovery Hosts:

Next, you will need to configure an initial list of nodes that will be contacted to discover and form a cluster. This is necessary in a unicast network.

Find the line that specifies discovery.zen.ping.unicast.hosts and uncomment it.

For example, if you have three servers node01, node02, and node03 with respective VPN IP addresses of 10.0.0.1, 10.0.0.2, and 10.0.0.3, you could use this line:

ELK6
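Putting the settings above together, the relevant part of /etc/elasticsearch/elasticsearch.yml on a master node would look roughly like this (a sketch; the cluster name, node name, and IP addresses follow this tutorial's examples and should be adjusted for your network):

```yaml
# /etc/elasticsearch/elasticsearch.yml (sketch for a master node)
cluster.name: elasticsearch_cluster        # must be identical on every node
node.name: ${HOSTNAME}                     # unique per node; here, the server's hostname
node.master: true                          # eligible to be elected master
node.data: false                           # does not hold index data
network.host: 0.0.0.0                      # listen on all interfaces
discovery.zen.ping.unicast.hosts: ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
```

On a data node, flip node.master to false and node.data to true; everything else stays the same.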

Save and exit elasticsearch.yml.

Your servers are now configured to form a basic Elasticsearch cluster. There are more settings that you will want to update, but we'll get to those after we verify that the cluster is working.

Start Elasticsearch:

Now start Elasticsearch:

$ sudo service elasticsearch restart

Then run this command to have Elasticsearch start on boot:

$ sudo update-rc.d elasticsearch defaults 95 10

Be sure to repeat these steps (Configure Elasticsearch Cluster) on all of your Elasticsearch servers.

Testing:

By now, Elasticsearch should be running on port 9200. You can test it with curl, the command-line client-side URL transfer tool, and a simple GET request like this:

$ curl -X GET 'http://localhost:9200'

You should see the following response:

ELK7
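The exact values depend on your node, but for Elasticsearch 1.7.2 the response is a JSON document along these lines (the name and cluster_name fields will reflect your own configuration):

```json
{
  "status" : 200,
  "name" : "node01",
  "cluster_name" : "elasticsearch_cluster",
  "version" : {
    "number" : "1.7.2",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
```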

If you see a response similar to the one above, Elasticsearch is working properly. If not, make sure that you have followed the installation instructions correctly and have allowed some time for Elasticsearch to fully start.

Check Cluster State:

If everything was configured correctly, your Elasticsearch cluster should be up and running. Before moving on, let's verify that it's working properly. You can do so by querying Elasticsearch from any of the Elasticsearch nodes.

From any of your Elasticsearch servers, run this command to print the state of the cluster:

$ curl -XGET 'http://localhost:9200/_cluster/state?pretty'

ELK8

If you see output similar to this, your Elasticsearch cluster is running! If any of your nodes are missing, review the configuration of the node(s) in question before moving on.

Next, we'll go over some configuration settings that you should consider for your Elasticsearch cluster.

Enable Memory Locking:

Elastic recommends avoiding swapping of the Elasticsearch process at all costs, due to its negative effects on performance and stability. One way to avoid excessive swapping is to configure Elasticsearch to lock the memory that it needs.

Complete this step on all of your Elasticsearch servers.

Edit the Elasticsearch configuration:

$ sudo vi /etc/elasticsearch/elasticsearch.yml

Find the line that specifies bootstrap.mlockall and uncomment it:

ELK9
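After uncommenting, the line in elasticsearch.yml is simply:

```yaml
bootstrap.mlockall: true
```

On the deb-packaged install you may also need to set MAX_LOCKED_MEMORY=unlimited in /etc/default/elasticsearch for the lock to succeed; you can verify it took effect with curl http://localhost:9200/_nodes/process?pretty, which reports mlockall per node.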

Save and exit.

Now restart Elasticsearch to put the changes into place:

$ sudo service elasticsearch restart

Cluster Health:

This API can be used to see general info on the cluster and gauge its health:

$ curl -XGET 'localhost:9200/_cluster/health?pretty'

ELK10
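A healthy cluster returns JSON roughly like the following (a sketch; your cluster name, node counts, and shard counts will vary, and status may be green, yellow, or red):

```json
{
  "cluster_name" : "elasticsearch_cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0
}
```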

Cluster State:

This API can be used to see a detailed status report on your entire cluster. You can filter results by specifying parameters in the call URL.

$ curl -XGET 'localhost:9200/_cluster/state?pretty'

ELK11

Conclusion:

Your Elasticsearch cluster should now be running in a healthy state, configured with some basic optimizations.

 


Node.js Installation


Node.js is a cross-platform environment and library for running JavaScript applications which are used to create networking and server-side applications.

It is used to develop I/O intensive web applications like video streaming sites, single-page applications, and other web applications.

Prerequisites

To follow this blog, you will need:

  • One Ubuntu 16.04 server set up by following this initial server setup, including a sudo non-root user.

Node.js Installation In Ubuntu 16.04

Step-1 Update the Package List

Before installing Node.js on the Ubuntu system, update the package lists from all available repositories.

sudo apt-get update

Step-2  Install Node.js

Run the command below to install the standard Node.js package:

sudo apt-get install nodejs

Step-3 Install NPM

Working with Node.js also requires npm, the Node.js package manager. Install the npm package with the command below:

sudo apt-get install npm

In order for some npm packages to work (those that require compiling code from source, for example), you will need to install the build-essential package:

sudo apt-get install build-essential

Installation Check

After installing Node.js and npm, verify that the installation is correct by typing the following commands:

Node.js Installation Check

nodejs --version

NPM Installation Check

npm --version
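Note that on Ubuntu 16.04 the apt package installs the binary as nodejs rather than node (the node name was historically taken by an unrelated package). A small sketch that checks whichever name is present (the fallback message is just for illustration):

```shell
# Print the Node.js version using whichever binary name exists.
if command -v nodejs >/dev/null 2>&1; then
  nodejs --version
elif command -v node >/dev/null 2>&1; then
  node --version
else
  echo "Node.js is not installed"
fi
```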

Remove Node.js

To remove Node.js from the Ubuntu system completely, use the following commands:

Remove Package without Configuration Files

This command removes Node.js but keeps its configuration files, so they can be reused the next time Node.js is installed:

sudo apt-get remove nodejs

Remove Package with Configuration Files

If you don't want to keep the configuration files, use the following command:

sudo apt-get purge nodejs

Finally, Remove Unused Packages

To remove the unused packages that were installed alongside Node.js, run the following command:

sudo apt-get autoremove 

Installing Open Source Hosting Control Panel (ZesleCP) on Ubuntu 16.04


Zesle Control Panel

Secure Web Control Panel for your needs…


Zesle is a popular open-source control panel that anyone can download and install. It is very simple and can be installed with just one command.

System Requirements:

  • Ubuntu 14/16
  • 1Ghz CPU
  • 512MB RAM
  • 10+GB DISK

Zesle is simple and very user-friendly. Using Zesle, you'll be able to do the following tasks:

  • Add multiple domains without hassle;
  • Add multiple sub domains;
  • Install WordPress easily with one-click app;
  • Install free Let’s Encrypt SSL certificates with ease;
  • Configure basic PHP settings;
  • Manage Email accounts;
  • Access phpMyAdmin.

and much more. Let’s see how to install Zesle in your hosting.

Step 1: It's super-easy to install Zesle. Run the following command with root privileges.

$ cd /home && curl -o latest -L http://zeslecp.com/release/latest && sh latest

Step 2: The installation will begin; partway through, it will ask for the admin's email address. Provide your email address and press Enter.

Step 3: You will see the below screen at the end of the installation.

zcsp1.png

Step 4: This is how Zesle looks. Once the installation is complete, it will show you the temporary password and the login URL.

Step 5: The login URL will be your IP address followed by the port number (2087 is the default). For example: https://11.11.11.111:2087 is a sample URL.

Step 6: Just enter this in your browser and you’ll get the login screen.

zcsp2

Step 7: Use root as your username.

Step 8: Paste the temporary root password provided. Once you enter the correct password, the control panel will open. All the options mentioned above are available on the left side of the UI.

zcsp3

Step 9: In the Dashboard, you can create your account and install WordPress on your domain using "One Click Apps".

Step 10: That completes the installation steps for the free Linux web hosting control panel, ZesleCP.

 

Installation of Open Project Management System on Ubuntu 16.04



OpenProject is a web-based management system for location-independent team collaboration, released under the GNU GPL 3 license. It is project management software that provides task management, team collaboration, scrum, and more. OpenProject is written in Ruby on Rails and AngularJS. In this tutorial, I will show you how to install and configure the OpenProject management system on Ubuntu 16.04. The tool can be installed manually or by using packages from the repository. For this guide, we will install OpenProject from the repository.

Prerequisites

  •  Ubuntu 16.04.
  •  Good Internet Connectivity.
  •  Root Privileges.

What we will do

  • Update and upgrade the system.
  • Install the OpenProject management system.
  • Configure the OpenProject system.
  • Testing.

Step 1: Update and Upgrade System

Before installing OpenProject on the Ubuntu system, update all available repositories and upgrade the Ubuntu system.

Run the following commands.

$ sudo apt update
$ sudo apt upgrade -y

Step 2: Install the OpenProject Management System

Download the OpenProject key and add it to the system.

$ sudo wget -qO- https://dl.packager.io/srv/opf/openproject-ce/key | sudo apt-key add -

Then download the OpenProject repository file for Ubuntu 16.04 into the '/etc/apt/sources.list.d' directory.

$ sudo wget -O /etc/apt/sources.list.d/openproject-ce.list \
  https://dl.packager.io/srv/opf/openproject-ce/stable/7/installer/ubuntu/16.04.repo

Now update the Ubuntu repositories and install OpenProject using the apt commands shown below.

$ sudo apt update
$ sudo apt-get install openproject -y

Step 3: Configure the OpenProject System

Run the OpenProject configuration command. A graphical UI screen will appear.

$  sudo openproject configure

op1

Select 'Install and configure MySQL server locally' and click 'OK'. This will automatically install the MySQL server and create the database for the OpenProject installation.

For the web server configuration, choose 'Install apache2 server' and click 'OK'. This will automatically install the Apache2 web server and configure a virtual host for the OpenProject application.

op2

Now type the domain name for your OpenProject application, and choose 'OK'.

Next comes the SSL configuration. If you have purchased SSL certificates, choose 'yes'; choose 'no' if you don't have SSL certificates.

op3

Skip the Subversion support, GitHub support, and SMTP configuration (if not needed).

For the memcached installation, choose 'Install' and select 'OK' for better OpenProject performance.

op4

Finally, all of the packages required for the OpenProject installation will be installed and configured automatically.

Step 4: Testing

Check whether the OpenProject service is up and running.

$  sudo service openproject status

Now run the OpenProject web service using the following command.

$  sudo openproject run web

Now open your web browser and type your floating IP into the address bar to access the system.

op5

Now click the 'Sign in' button to log in to the admin dashboard, initially using 'admin' as the username and 'admin' as the password; you can change these later.

Finally, the installation and configuration of OpenProject on Ubuntu 16.04 has been completed successfully.

 

 

 

Apache Virtual Hosts on Ubuntu 14.04


The Apache web server is the most popular way of serving web content on the internet. It accounts for more than half of all active websites on the internet and is extremely powerful and flexible.

Apache breaks its functionality and components into individual units that can be customized and configured independently. The basic unit that describes an individual site or domain is called a Virtual Host.


These designations allow the administrator to use one server to host multiple domains or sites off of a single interface or IP by using a matching mechanism. This is relevant to anyone looking to host more than one site off of a single VPS.

In this document, we will walk you through how to set up Apache virtual hosts on an Ubuntu 14.04 VPS. During this process, you’ll learn how to serve different content to different visitors depending on which domains they are requesting.

Prerequisites

  • Before you begin this tutorial, you should create a non-root user.
  • You will also need to have Apache installed in order to work through these steps.

Demonstration:

OMegha platform.

Image – Ubuntu-14.04

Let's get started.

First, we need to update the package list.

$ sudo apt-get update

VH1

Install Apache

$ sudo apt-get install apache2

VH2

For the purposes of this document, my configuration will make a virtual host for infra.com and another for infra1.com.

Step 1: Create the Directory Structure

Our document root will be set to individual directories under the /var/www directory. We will create a directory here for both of the virtual hosts we plan on making.

Within each of these directories, we will create a public_html folder that will hold our actual files. This gives us some flexibility in our hosting.

$ sudo mkdir -p /var/www/infra.com/public_html

$ sudo mkdir -p /var/www/infra1.com/public_html

The infra.com and infra1.com portions are the domain names that we want to serve from our VPS; substitute your own domains here.

Step 2: Grant Permissions

Changing the Ownership

$ sudo chown -R $USER:$USER /var/www/infra.com/public_html

$ sudo chown -R $USER:$USER /var/www/infra1.com/public_html

VH3

We should also modify our permissions a little to ensure that read access is permitted to the general web directory and all of the files and folders it contains:

$ sudo chmod -R 755 /var/www

Step 3: Create Demo Pages for Each Virtual Host

We have to create an index.html file for each site.

Let's start with infra.com. We can open an index.html file in our editor by typing:

$ sudo vi /var/www/infra.com/public_html/index.html

In this file, create a simple HTML document that indicates which site it belongs to. My file looks like this:

<html>

  <head>

    <title>Welcome to infra.com!</title>

  </head>

  <body>

    <h1>Success!  The infra.com virtual host is working!</h1>

  </body>

</html>

Save and close the file when you are finished.

We can copy this file to use as the basis for our second site by typing:

$ cp /var/www/infra.com/public_html/index.html /var/www/infra1.com/public_html/index.html

Then we can open the file and modify the relevant pieces of information

$ sudo vi /var/www/infra1.com/public_html/index.html

<html>

  <head>

    <title>Welcome to infra1.com!</title>

  </head>

  <body>

    <h1>Success!  The infra1.com virtual host is working!</h1>

  </body>

</html>

Save and close the file.

Step 4: Create New Virtual Host Files

Virtual host files are the files that specify the actual configuration of our virtual hosts and dictate how the Apache web server will respond to various domain requests.

Apache comes with a default virtual host file called 000-default.conf, and we can copy it to create the virtual host file for our first domain.

Creating First Virtual Host File

Start by copying the file for the first domain

$ sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/infra.com.conf

Open the new file in your editor with root privileges

$ sudo vi /etc/apache2/sites-available/infra.com.conf

Our virtual host file should look like this:

<VirtualHost *:80>

    ServerAdmin admin@infra.com

    ServerName infra.com

    ServerAlias www.infra.com

    DocumentRoot /var/www/infra.com/public_html

    ErrorLog ${APACHE_LOG_DIR}/error.log

    CustomLog ${APACHE_LOG_DIR}/access.log combined

</VirtualHost>

Save and close the file.

Copy first Virtual Host and Customize for Second Domain

Now that we have our first virtual host file established, we can create our second one by copying that file and adjusting it as needed.

Start by copying

$ sudo cp /etc/apache2/sites-available/infra.com.conf /etc/apache2/sites-available/infra1.com.conf

Open the new file with root privileges

$ sudo vi /etc/apache2/sites-available/infra1.com.conf

You now need to modify all of the pieces of information to reference your second domain. When you are finished, it may look something like this

<VirtualHost *:80>

    ServerAdmin admin@infra1.com

    ServerName infra1.com

    ServerAlias www.infra1.com

    DocumentRoot /var/www/infra1.com/public_html

    ErrorLog ${APACHE_LOG_DIR}/error.log

    CustomLog ${APACHE_LOG_DIR}/access.log combined

</VirtualHost>

Save and close the file.

Step 5: Enable the New Virtual Host Files

The virtual host files we created need to be enabled.

We can use the a2ensite tool to enable each of our sites:

$ sudo a2ensite infra.com.conf

$ sudo a2ensite infra1.com.conf

VH4

Restart the Apache server.

$ sudo service apache2 restart

Step 6: Setup Local Hosts File

$ sudo vi /etc/hosts

The details that you need to add are the public IP address of your VPS server followed by the domain you want to use to reach that VPS.

127.0.0.1 localhost

***.***.***.*** infra.com

***.***.***.*** infra1.com

Save and close the file.

This will intercept any requests for infra.com and infra1.com on our computer and send them to our server at ***.***.***

Step 7: Test Your Results

Now that you have your virtual hosts configured, you can test your setup easily by going to the domains that you configured in your web browser

http://infra.com

VH5

You should see a page that looks like this

Likewise, if you visit your second page:

http://infra1.com

VH6

You will see the file you created for your second site

Step 8: Conclusion

If both of these sites work well, you’ve successfully configured two virtual hosts on the same server.

If you need to access this long term, consider purchasing a domain name for each site you need and setting it up to point to your VPS server.

Centralize Logs from Node.js Applications


Prerequisites

  • Installation of Node.js and NPM
  • Installation of Fluentd

Modifying the Config File

Next, configure Fluentd to use the forward input plugin as its data source.

$ sudo vi /etc/td-agent/td-agent.conf

The Fluentd daemon should listen on a TCP port.

A simple configuration is the following:

1
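The placeholder above stands for the config screenshot; a minimal sketch of a forward source for td-agent looks like the following (24224 is the plugin's conventional default port, and the fluentd.test.** match tag is an assumption for illustration):

```
<source>
  type forward
  port 24224
</source>

<match fluentd.test.**>
  type stdout
</match>
```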

Restart your agent once these lines are in place.

$ sudo service td-agent restart

fluent-logger-node

Install fluent-logger-node

$ npm install fluent-logger

Now use npm to install your dependencies locally:

$ npm install

Send an event record to Fluentd

index.js

This is the simplest web app.

2

Run the app and go to http://localhost:4000 in your browser. This will send the logs to Fluentd.

$ node index.js

3

The logs should be output to /var/log/td-agent/td-agent.log.

Store Logs into MongoDB

Fluentd does 3 things:

  1. It continuously “tails” the access log file.
  2. It parses the incoming log entries into meaningful fields (such as ip, path, etc.) and buffers them.
  3. It writes the buffered data to MongoDB periodically.

Configuration         

Fluentd’s config file

$ sudo vi /etc/td-agent/td-agent.conf

 The Fluentd configuration file should look like this:

4
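As a sketch of what that configuration typically contains, a td-agent match section writing buffered records to MongoDB looks roughly like this (the database, collection, tag pattern, and flush interval here are illustrative assumptions; adjust them to your setup):

```
<match mongo.**>
  type mongo
  host localhost
  port 27017
  database fluentd
  collection test
  flush_interval 10s
</match>
```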

Restart your agent once these lines are in place.

$ sudo service td-agent restart

Then, access MongoDB and see the stored data.

$ mongo

5

Fluentd + MongoDB makes real-time log collection simple, easy, and robust.

Installation of MongoDB on Ubuntu 16.04


MongoDB is a free and open-source NoSQL document database used commonly in modern web applications.


MongoDB is one of several database types to arise in the mid-2000s under the NoSQL banner. Instead of using tables and rows as in relational databases, MongoDB is built on an architecture of collections and documents. Documents comprise sets of key-value pairs and are the basic unit of data in MongoDB. Collections contain sets of documents and function as the equivalent of relational database tables.

Like other NoSQL databases, MongoDB supports dynamic schema design, allowing the documents in a collection to have different fields and structures. The database uses a document storage and data interchange format called BSON, which provides a binary representation of JSON-like documents. Automatic sharding enables data in a collection to be distributed across multiple systems for horizontal scalability as data volumes increase.

This blog will help you set up MongoDB on your server for a production application environment.

Prerequisites

To follow this blog, you will need:

  • One Ubuntu 16.04 server set up by following this initial server setup, including a sudo non-root user.

Adding the MongoDB Repository

MongoDB is already included in the Ubuntu package repositories, but the official MongoDB repository provides the most up-to-date version and is the recommended way of installing the software. In this step, we will add this official repository to our server.

Ubuntu ensures the authenticity of software packages by verifying that they are signed with GPG keys, so we first have to import the key for the official MongoDB repository.

$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927

After successfully importing the key, you will see:

gpg: Total number processed: 1
gpg:        imported: 1    (RSA:  1)

Next, we have to add the MongoDB repository details so apt will know where to download the packages from.

Issue the following command to create a list file for MongoDB.

$ echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.2.list

After adding the repository details, we need to update the packages list.

$ sudo apt-get update

Installing and Verifying MongoDB

Now we can install the MongoDB package itself.

$ sudo apt-get install -y mongodb-org

This command will install several packages containing the latest stable version of MongoDB, along with helpful management tools for the MongoDB server.

Next, start MongoDB with systemctl.

$ sudo systemctl start mongod

You can also use systemctl to check that the service has started properly.

$ sudo systemctl status mongod

mongodb.service - High-performance, schema-free document-oriented database
Loaded: loaded (/etc/systemd/system/mongodb.service; enabled; vendor preset: enabled)
Main PID: 4093 (mongod)
Tasks: 16 (limit: 512)
Memory: 47.1M
CPU: 1.224s
CGroup: /system.slice/mongodb.service
└─4093 /usr/bin/mongod --quiet --config /etc/mongod.conf

The last step is to enable MongoDB to start automatically when the system boots.

$ sudo systemctl enable mongod

The MongoDB server is now configured and running, and you can manage the MongoDB service using the systemctl command (e.g. sudo systemctl stop mongod, sudo systemctl start mongod).

Installing Asterisk on Ubuntu 16.04



Asterisk is a software implementation of a telephone private branch exchange (PBX). It allows telephones interfaced with a variety of hardware technologies to make calls to one another, and to connect to telephony services, such as the public switched telephone network (PSTN) and voice over Internet Protocol (VoIP) services. Its name comes from the asterisk symbol “*”.


Some of the many features of Asterisk include:

  • The Asterisk software includes many features available in commercial and proprietary PBX systems: voice mail, conference calling, interactive voice response, and automatic call distribution.
  • Users can create new functionality by writing dial plan scripts in several of Asterisk’s own extensions languages, by adding custom loadable modules written in C, or by implementing Asterisk Gateway Interface (AGI) programs using any programming language capable of communicating via the standard streams system (stdin and stdout) or by network TCP sockets.
  • Asterisk supports several standard voice over IP protocols, including the Session Initiation Protocol (SIP), the Media Gateway Control Protocol (MGCP), and H.323.
  • Asterisk supports most SIP telephones, acting both as registrar and back-to-back user agent.
  • By supporting a variety of traditional and VoIP telephony services, Asterisk allows deployers to build telephone systems, or migrate existing systems to new technologies.

asterisk arc

 Install Asterisk from Source

After logging in to your Ubuntu server as a user, issue the following command to switch to the root user.

$ sudo su 

Now you are root, but you need to set the password with the following command.

# passwd

The next step is to install the initial dependencies for Asterisk.

# apt-get install build-essential wget libssl-dev libncurses5-dev libnewt-dev libxml2-dev linux-headers-$(uname -r) libsqlite3-dev uuid-dev git subversion

Installing Asterisk

Now that we are root and the dependencies are satisfied, we can move to the /usr/src/ directory and download the latest Asterisk version there.

# cd /usr/src
# wget http://downloads.asterisk.org/pub/telephony/asterisk/asterisk-15-current.tar.gz

Next we unpack it.

# tar zxvf asterisk-15-current.tar.gz

Now we need to enter the newly unpacked directory:

# cd asterisk-15*

Before we actually compile the Asterisk code, we need 'pjproject', as Asterisk 15 introduces support for PJSIP. So we will compile it first:

# git clone git://github.com/asterisk/pjproject pjproject
# cd pjproject
# ./configure --prefix=/usr --enable-shared --disable-sound --disable-resample --disable-video --disable-opencore-amr CFLAGS='-O2 -DNDEBUG'
# make dep
# make && make install
# ldconfig
# ldconfig -p |grep pj

Configuring Asterisk

Now we can configure and compile the Asterisk code.

# cd ..
# contrib/scripts/get_mp3_source.sh
# contrib/scripts/install_prereq install

This will install mp3 tones and satisfy additional dependencies, which might take some time and will ask you for your country code. The following command will compile and install Asterisk:

# ./configure && make menuselect && make && make install

When that is finished, to avoid writing hundreds of config files yourself, you will normally want to run this command, which creates an initial configuration for you:

# make samples

To install the startup script and enable Asterisk to start on every boot, we run make config, followed by ldconfig:

# make config
# ldconfig

Now we can start asterisk for the first time and see if it actually works.

# /etc/init.d/asterisk start

and then we can enter asterisk console with command.

# asterisk -rvvv

Now we need a few additional steps to make Asterisk run as the asterisk user. First, stop Asterisk.

# systemctl stop asterisk

Then we need to add group and user named asterisk.

# groupadd asterisk
# useradd -d /var/lib/asterisk -g asterisk asterisk

Asterisk needs to be configured to start as the user we just created. We can edit /etc/default/asterisk by hand, but it is more efficient to use the following two sed commands:

# sed -i 's/#AST_USER="asterisk"/AST_USER="asterisk"/g' /etc/default/asterisk
# sed -i 's/#AST_GROUP="asterisk"/AST_GROUP="asterisk"/g' /etc/default/asterisk

To run properly, the asterisk user needs to be given ownership of all essential Asterisk directories:

# chown -R asterisk:asterisk /var/spool/asterisk /var/run/asterisk /etc/asterisk /var/{lib,log,spool}/asterisk /usr/lib/asterisk

The asterisk.conf file also needs to be edited to uncomment the lines for runuser and rungroup:

# sed -i 's/;runuser = asterisk/runuser = asterisk/g' /etc/asterisk/asterisk.conf
# sed -i 's/;rungroup = asterisk/rungroup = asterisk/g' /etc/asterisk/asterisk.conf
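If you want to see exactly what these substitutions do before touching the real config, you can run the same pattern against a throwaway file (/tmp/asterisk.conf.demo is a scratch name used only for this illustration):

```shell
# Create a scratch file containing the commented-out lines.
printf ';runuser = asterisk\n;rungroup = asterisk\n' > /tmp/asterisk.conf.demo

# Apply the same substitutions used on the real asterisk.conf.
sed -i 's/;runuser = asterisk/runuser = asterisk/g' /tmp/asterisk.conf.demo
sed -i 's/;rungroup = asterisk/rungroup = asterisk/g' /tmp/asterisk.conf.demo

# The leading semicolons are gone:
cat /tmp/asterisk.conf.demo
```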

When this is done, reboot the server so Asterisk is brought up automatically by systemd, and then type asterisk -rvvv to enter the Asterisk console.

# asterisk -rvvv

 

Prometheus

What is Prometheus?

Prometheus is an open-source monitoring tool that collects metrics from a server and stores them in a time-series database.

By default, Prometheus collects and stores metrics about itself, but we can extend it by installing exporters.

What are Exporters?

Exporters are add-on plugins that expose metrics for everything from infrastructure, databases, and web servers to messaging systems, APIs, and more.

Prerequisites

  1. We should have an Ubuntu server of version 16.04, where sudo commands can be used.
  2. UFW should be installed on the server.
  3. Nginx should be installed and running.

Creating Service Users

First, we shall create two user accounts, prometheus and node_exporter, for security purposes. This helps establish ownership of the files and directories. Use the --no-create-home and --shell /bin/false options so that these users can't log into the server. We use the commands-

sudo useradd --no-create-home --shell /bin/false prometheus
sudo useradd --no-create-home --shell /bin/false node_exporter

We shall create a directory in /etc to store Prometheus configuration files and a directory in /var/lib to store Prometheus data. We use the command-

sudo mkdir /etc/prometheus

sudo mkdir /var/lib/prometheus

We set the user and group ownership of these directories to the prometheus user using the command-

sudo chown prometheus:prometheus /etc/prometheus

sudo chown prometheus:prometheus /var/lib/prometheus

Downloading Prometheus

Let us download and unpack the stable version of Prometheus. We can use the URL as given below.

https://prometheus.io/download/

To download we use the following command-

cd ~                                     

curl -LO https://github.com/prometheus/prometheus/releases/download/v2.10.0/prometheus-2.10.0.linux-amd64.tar.gz

Now we use sha256sum to generate the checksum of the downloaded file.

sha256sum prometheus-2.10.0.linux-amd64.tar.gz

If the checksum of the downloaded file does not match the checksum displayed on the Prometheus website, remove the file and download it again.
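As a sketch of the verification pattern (using a throwaway file here; substitute the tarball name and the checksum published on the Prometheus website):

```shell
# Demonstrate the sha256sum -c pattern on a throwaway file; the same
# steps apply to the downloaded tarball with its published checksum.
echo "demo" > demo.txt
sha256sum demo.txt > demo.txt.sha256   # records "<hash>  demo.txt"
sha256sum -c demo.txt.sha256           # prints "demo.txt: OK" on a match
```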

We shall unpack the downloaded file using the command-

tar xvf prometheus-2.10.0.linux-amd64.tar.gz

This will create a directory called prometheus-2.10.0.linux-amd64 containing two binary files (prometheus and promtool), consoles and console_libraries directories containing the web interface files, a license, a notice, and several example files.

We copy the two binaries to the /usr/local/bin directory, using the command-

sudo cp prometheus-2.10.0.linux-amd64/prometheus /usr/local/bin/
sudo cp prometheus-2.10.0.linux-amd64/promtool /usr/local/bin/

We put the user and group ownership to the Prometheus user using the command-

sudo chown prometheus:prometheus /usr/local/bin/prometheus
sudo chown prometheus:prometheus /usr/local/bin/promtool

We copy the consoles and console_libraries directories to /etc/prometheus, using the command-

sudo cp -r prometheus-2.10.0.linux-amd64/consoles /etc/prometheus
sudo cp -r prometheus-2.10.0.linux-amd64/console_libraries /etc/prometheus

We put the user and group ownership to the Prometheus user using the command-

sudo chown -R prometheus:prometheus /etc/prometheus/consoles
sudo chown -R prometheus:prometheus /etc/prometheus/console_libraries

We use the -R flag to set the ownership on the files present in that directory.

We now remove the leftover files present in the home directory, using the command-

rm -rf prometheus-2.10.0.linux-amd64.tar.gz prometheus-2.10.0.linux-amd64

Configuring Prometheus

In the /etc/prometheus directory, use a text editor to set up a configuration file named prometheus.yml. For now, this file will contain just enough information to run Prometheus for the first time.

sudo nano /etc/prometheus/prometheus.yml

In the global settings, define the default interval for scraping metrics. We put it as 15 seconds-

global:
  scrape_interval: 15s

scrape_interval tells Prometheus to collect metrics from its exporters every 15 seconds, which is frequent enough for most exporters.

Now we add Prometheus itself to the list of exporters, to scrape itself with the scrape_configs directive-

...

scrape_configs:

  - job_name: 'prometheus'

    scrape_interval: 5s

    static_configs:

      - targets: ['localhost:9090']

Prometheus uses the job_name to label exporters in queries and on graphs.

Prometheus exports important data about itself that you can use for monitoring performance and debugging. We have overridden the global scrape_interval directive from 15 seconds to 5 seconds for more frequent updates.

Prometheus uses the static_configs and targets directives to determine where exporters are running. Since this particular exporter is running on the same server as Prometheus itself, we can use localhost instead of an IP address along with the default port, 9090.
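Assembled, the minimal prometheus.yml built from the pieces above reads-

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']
```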

We shall save and exit the text editor.

We put the user and group ownership of this configuration file to the Prometheus user using the command-

sudo chown prometheus:prometheus /etc/prometheus/prometheus.yml

Running Prometheus

We will start Prometheus as the prometheus user, providing the paths to both the configuration file and the data directory, using the command-

sudo -u prometheus /usr/local/bin/prometheus \
    --config.file /etc/prometheus/prometheus.yml \
    --storage.tsdb.path /var/lib/prometheus/ \
    --web.console.templates=/etc/prometheus/consoles \
    --web.console.libraries=/etc/prometheus/console_libraries

Press CTRL+C to exit.

We will open a new systemd service file using the command-

sudo nano /etc/systemd/system/prometheus.service

The service file tells systemd to run Prometheus as the prometheus user, with the configuration file located at /etc/prometheus/prometheus.yml, and to store its data in the /var/lib/prometheus directory.

We copy the following code into the configuration file.

[Unit]

Description=Prometheus

Wants=network-online.target

After=network-online.target


[Service]

User=prometheus

Group=prometheus

Type=simple

ExecStart=/usr/local/bin/prometheus \

    --config.file /etc/prometheus/prometheus.yml \

    --storage.tsdb.path /var/lib/prometheus/ \

    --web.console.templates=/etc/prometheus/consoles \

    --web.console.libraries=/etc/prometheus/console_libraries


[Install]

WantedBy=multi-user.target

We will save and exit the text editor.

We will reload systemd and then start Prometheus using the commands-

sudo systemctl daemon-reload
sudo systemctl start prometheus

To make sure Prometheus is running, we check the service’s status using the command-

sudo systemctl status prometheus

If the service state is active, then Prometheus is running successfully.

Now press Q to exit the status command.

We will enable the service to start on boot using the following command-

sudo systemctl enable prometheus

Downloading Node Exporter

Let us download and unpack the stable version of Node Exporter. We can use the URL as given below.

https://prometheus.io/download/

To download we use the following command-

cd ~

curl -LO https://github.com/prometheus/node_exporter/releases/download/v0.18.1/node_exporter-0.18.1.linux-amd64.tar.gz

Now we use sha256sum to generate the checksum of the downloaded file.

sha256sum node_exporter-0.18.1.linux-amd64.tar.gz

If the checksum of the downloaded file does not match the checksum displayed on the Prometheus website, remove the file and download it again.

We shall unpack the downloaded file using the command-

tar xvf node_exporter-0.18.1.linux-amd64.tar.gz

This will create a directory called node_exporter-0.18.1.linux-amd64 containing a binary file named node_exporter, a license, and a notice.

We copy the binary to the /usr/local/bin directory, using the command-

sudo cp node_exporter-0.18.1.linux-amd64/node_exporter /usr/local/bin

We put the user and group ownership to the node_exporter user using the command-

sudo chown node_exporter:node_exporter /usr/local/bin/node_exporter

We now remove the leftover files present in the home directory, using the command-

rm -rf node_exporter-0.18.1.linux-amd64.tar.gz node_exporter-0.18.1.linux-amd64

Running Node Exporter

We will open a new systemd service file using the command-

sudo nano /etc/systemd/system/node_exporter.service

This service file tells your system to run Node Exporter as the node_exporter user with the default set of collectors enabled.

We copy the following code into the configuration file.

[Unit]

Description=Node Exporter

Wants=network-online.target

After=network-online.target


[Service]

User=node_exporter

Group=node_exporter

Type=simple

ExecStart=/usr/local/bin/node_exporter


[Install]

WantedBy=multi-user.target

We will save and exit the text editor.

We will reload systemd using the command-

sudo systemctl daemon-reload

We can now run Node Exporter using the command-

sudo systemctl start node_exporter

To make sure Node Exporter is running, we check the service’s status using the command-

sudo systemctl status node_exporter

If the service state is active, then Node Exporter is running successfully.

Now press Q to exit the status command.

We will enable the service to start on boot using the following command-

sudo systemctl enable node_exporter

Configuring Prometheus to Scrape Node Exporter

We will open the configuration file, using the command-

sudo nano /etc/prometheus/prometheus.yml

At the end of the scrape_configs block, add a new entry called node_exporter-

...

  - job_name: 'node_exporter'

    scrape_interval: 5s

    static_configs:

      - targets: ['localhost:9100']

Because this exporter is also running on the same server as Prometheus itself, we can use localhost instead of an IP address again along with Node Exporter’s default port, 9100.

We will save and exit the text editor.

We shall restart Prometheus to put the changes into effect, using the command-

sudo systemctl restart prometheus

To make sure Prometheus is running, we check the service’s status using the command-

sudo systemctl status prometheus

If the service state is active, then Prometheus is running successfully.

Securing Prometheus

We start by installing apache2-utils, which will give us access to the htpasswd utility for generating password files, using the command-

sudo apt-get update
sudo apt-get install apache2-utils

We shall create a username and password; do not forget the password, as we will need it to access Prometheus through the web server. We use the command, replacing your_username with a username of our choice-

sudo htpasswd -c /etc/nginx/.htpasswd your_username

The result of this command is a newly-created file called .htpasswd, located in the /etc/nginx directory, containing the username and a hashed version of the password you entered.

Now we will make a Prometheus-specific copy of the default Nginx configuration file so that we can revert back to the defaults later if we run into a problem, using the command-

sudo cp /etc/nginx/sites-available/default /etc/nginx/sites-available/prometheus

We shall open a new configuration file, using the command-

sudo nano /etc/nginx/sites-available/prometheus

Locate the location / block under the server block. It should look like the code below-

...

    location / {

        try_files $uri $uri/ =404;

    }

...

As we will be forwarding all traffic to Prometheus, we will replace the try_files directive with the following content-

...

    location / {

        auth_basic "Prometheus server authentication";

        auth_basic_user_file /etc/nginx/.htpasswd;

        proxy_pass http://localhost:9090;

        proxy_http_version 1.1;

        proxy_set_header Upgrade $http_upgrade;

        proxy_set_header Connection 'upgrade';

        proxy_set_header Host $host;

        proxy_cache_bypass $http_upgrade;

    }

...

We will save and exit the text editor.

Now, we shall deactivate the default Nginx configuration file by removing the link to it in the /etc/nginx/sites-enabled directory, and activate the new configuration file by creating a link to it, using the command-

sudo rm /etc/nginx/sites-enabled/default

sudo ln -s /etc/nginx/sites-available/prometheus /etc/nginx/sites-enabled/

Before restarting Nginx, we will check the configuration for errors, using the command-

sudo nginx -t

If the output is OK then there are no errors.

We shall reload Nginx to put the changes into effect, using the command-

sudo systemctl reload nginx

To make sure Nginx is running, we check the service’s status using the command-

sudo systemctl status nginx

If the service state is active, then Nginx is running successfully.

Testing Prometheus

We shall open a web browser and access the Prometheus web page using the URL-

http://your_server_ip

We shall enter our username and password.

This is the Prometheus web page, where we can use queries to access the metrics stored in its time-series database.
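For example, the expression browser accepts PromQL queries like the following (metric names here assume a recent Prometheus 2.x install, like the one in this guide)-

```
up
rate(prometheus_http_requests_total[5m])
```

The up expression reports 1 for every target Prometheus can scrape, and the second expression gives the per-second rate of HTTP requests handled by Prometheus over the last five minutes.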

Docker and Dockerfiles

What is Docker?

Docker is open source software that helps in creating, deploying, and running applications in containers. Containers provide a consistent environment throughout the software development life cycle.

What is a Container?

A container is similar to a virtual machine, but it shares the host machine's kernel, which makes it much lighter. A container runs an application or microservice for a given task efficiently.

Prerequisites

We should have an Ubuntu 16.04 server with a user that can run sudo commands, and a firewall installed.

Note

Code = Inside the container

Installing Docker

The Docker installation package in the Ubuntu repository may be outdated. To avoid this, we install Docker from the official Docker repository instead.

First, in order to ensure that the downloads are valid, we add the GPG key for the official Docker repository to our system, by using the command-

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

If we get an output stating OK, we can move ahead with the installation process.

We add the package to APT sources using the command-

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

We now update the package database using the command-

sudo apt-get update

We should make sure that we are downloading from the main Docker repository and not Ubuntu’s. We use the command-

apt-cache policy docker-ce

We finally will install Docker using the command-

sudo apt-get install -y docker-ce

Now, to check if it is running, we use the command-

sudo systemctl status docker

Working with Docker images

Containers run from Docker images, so we will pull an image from Docker Hub. But first, let us verify that we can access and download images from Docker Hub, using the command-

sudo docker run hello-world

If we get the "Hello from Docker!" message, then Docker is working correctly.

We can also search for images using the Docker command-

sudo docker search ubuntu

Now we shall pull an image called ubuntu from the Docker Hub, using the command-

sudo docker pull ubuntu

Now the ubuntu image is on our server.

We can run a container using the ubuntu image, using the command-

sudo docker run ubuntu

To see the images which have been downloaded in our server or computer we can use the command-

sudo docker images

We have two images-

  1. hello-world
  2. ubuntu

Working with Docker Containers

We can interact with the container. Now let us run a container using the ubuntu image, using the command-

sudo docker run -ti --rm --net=host ubuntu /bin/bash

Here -ti gives us interactive shell access into the container, and --rm removes the container as soon as we exit it.

Now we would be inside the container. We will try to install Node.js inside the container.

First let us update using the command-

 apt-get update

We do not need to use sudo here as we are the root user.

Now let us install Node.js using the command-

apt-get install -y nodejs

We have installed Node.js inside the container. We can see this by checking the version using the command-

node -v
To exit the container we can use exit or CTRL+p+q.

Working with Volumes

Docker can store files on the host machine and map them into a container, so that the files are not lost even when the container stops. This is done with the help of volumes.

We will try to mount a volume containing an HTML file and map it to the container. The image we will pull is httpd. We use the command-

sudo docker pull httpd:2.4.39-alpine

We can also check if the image has been pulled from the Docker Hub using the command-

sudo docker images

We shall now run the image using the command-

sudo docker run --name my-apache -p 80:80 httpd:2.4.39-alpine

Here --name gives a name to the container, in this case my-apache.

Here -p maps a host port to a container port; in this case host port 80 is mapped to container port 80.

We can check if port 80 is working using the url-

http://server_IP_address

We should see Apache's default "It works!" page.

Now we shall try to display our html file in this webserver.

We first make a directory in the terminal using the command-

mkdir sample_dir

Inside this directory, we create a file using the command-

sudo nano index.html

After creating the HTML page, we save the file and exit the text editor.
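If we prefer a non-interactive route, the same file can be written from the shell; the page content below is just an example-

```shell
# Write a minimal HTML page to index.html; any valid HTML works here.
cat > index.html <<'EOF'
<html>
  <body>
    <h1>Hello from a Docker volume!</h1>
  </body>
</html>
EOF
cat index.html   # confirm the file contents
```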

We can use pwd to find out the location of the file as we need it to be mapped to the location of the container.

We can use Docker Hub for our reference. An image's Docker Hub page usually documents where a container stores its files; for httpd, the documented example run command is-

docker run -dit --name my-apache-app -p 8080:80 -v "$PWD":/usr/local/apache2/htdocs/ httpd:2.4

Here -v creates the volume mapping: we map the host directory containing the HTML file to the directory the container serves from, so that even if the container stops, the file remains unchanged.

Now we can again go back to our webserver using the url-

http://server_IP_address

We can see that our html page is being displayed on the server.

Working with Dockerfiles

A Dockerfile is a text document containing the commands needed to build a custom image.

We shall pull alpine from Docker Hub and install git with the help of a Dockerfile.

We shall pull alpine from Docker Hub using the command-

sudo docker pull alpine:3.9.4

We can also check if the image has been pulled from the Docker Hub using the command-

sudo docker images

Now we shall start a container from the alpine image to check if git is available, using the command-

sudo docker run -ti --rm alpine:3.9.4 /bin/sh

Here we can see that git, nano, curl, and vim have not been installed.

Now we will install all these with the help of a Dockerfile. We will exit the container and create a Dockerfile using the command-

touch dockerfile

This creates an empty Dockerfile. We open it using the command-

sudo nano dockerfile

We shall enter the following lines.
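The exact contents were shown as a screenshot in the original guide; a minimal sketch that matches the description (the maintainer value is a placeholder) would be-

```dockerfile
# Start from the pinned alpine image and add the tools we want.
FROM alpine:3.9.4
LABEL maintainer="your-name@example.com"
RUN apk update && apk add --no-cache git nano curl vim
```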

Here we are taking the image alpine:3.9.4 and modifying it by adding curl, vim and git. We even add the maintainer. We will save and exit the file.

We will build the image by using the  command-

sudo docker build --network host -t "tag-name" .

In this command, -t is the tag we give to the image to distinguish it from alpine:3.9.4, and the trailing "." tells Docker to use the current folder as the build context.

We can see all the steps have been executed. So we have built our image.

We will get inside the container to check if git is working, using the command-

sudo docker run -ti --net=host "tag-name"

Installing and configuring Nagios to a remote server

What is Nagios?

Nagios is an open source monitoring system. It helps in finding faults and problems in a remote server and alerts the user if anything goes wrong.

Prerequisites

  1. We should have two Ubuntu 16.04 servers, where sudo commands can be used.
  2. Both servers should have UFW installed.
  3. One server will act as the Nagios server and the other as the remote server.
  4. The server that runs Nagios should have Apache and PHP installed.

Note

Code = Nagios Server

Code = Remote/Monitored Server

Installing Nagios

At the time of writing, the latest version of Nagios is 4.4.3; this guide builds version 4.3.4 from source.

On the Nagios server, we create a nagios user and a nagcmd group using the commands-

sudo useradd nagios
sudo groupadd nagcmd

Then add user to the group using the command-

sudo usermod -a -G nagcmd nagios

We are installing Nagios from source, so we have to install a few libraries, compilers, and headers that are needed to complete the build.

We first update our server to obtain the latest packages using the command-

sudo apt-get update

We then install the required packages using the command-

sudo apt-get install build-essential libgd2-xpm-dev openssl libssl-dev unzip

Now we go to our home directory and install Nagios itself using the command-

cd ~
curl -L -O https://assets.nagios.com/downloads/nagioscore/releases/nagios-4.3.4.tar.gz

Now let us extract the Nagios archive using the command-

tar zxf nagios-*.tar.gz

We move to the extracted directory using the command-

cd nagios-*

Now we configure Nagios to specify the user and group which Nagios should use, using the command-

./configure --with-nagios-group=nagios --with-command-group=nagcmd

The end of the configure output will show a summary of the selected build options.

Now we will compile Nagios using the make command-

make all

Now we run the following make commands to install Nagios, init scripts and default configuration files-

sudo make install
sudo make install-commandmode
sudo make install-init
sudo make install-config

We will use Apache to serve Nagios web interface so copy the sample Apache configuration file to the /etc/apache2/sites-available folder using the command-

sudo /usr/bin/install -c -m 644 sample-config/httpd.conf /etc/apache2/sites-available/nagios.conf

In order to issue external commands via the web interface to Nagios, we add the web server user, www-data, to the nagcmd group using the command-

sudo usermod -G nagcmd www-data

Nagios is now installed. Next, we will install a plugin which will help Nagios collect data from remote hosts.

Installing the check_nrpe plugin

Nagios monitors its remote hosts using the Nagios Remote Plugin Executor (nrpe). It has two parts-

  1. The check_nrpe plugin, which runs on the Nagios server.
  2. The nrpe daemon, which runs on the remote hosts and sends data to the Nagios server.

We shall install check_nrpe plugin in the Nagios server using the command-

cd ~
curl -L -O https://github.com/NagiosEnterprises/nrpe/releases/download/nrpe-3.2.1/nrpe-3.2.1.tar.gz

Now let us extract the nrpe archive using the command-

tar zxf nrpe-*.tar.gz

We move to the extracted directory using the command-

cd nrpe-*

We configure the check_nrpe plugin using the command-

./configure

Now we will build and install check_nrpe using the command-

make check_nrpe
sudo make install-plugin

We shall configure the Nagios server next.

Configuring Nagios server

We should first edit some configuration files and configure Apache to serve the Nagios web interface. This should only be done once on the Nagios server.

We open the main Nagios configuration file. Here we use nano as the text editor. We use the command-

sudo nano /usr/local/nagios/etc/nagios.cfg

We uncomment the cfg_dir=/usr/local/nagios/etc/servers line by deleting the # symbol in front of it.

After doing it, we will save the file and exit the editor.
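Assuming the stock nagios.cfg layout, the edit can also be done non-interactively; here we demonstrate the substitution on a local copy (the real file is /usr/local/nagios/etc/nagios.cfg)-

```shell
# Simulate the commented directive in a local copy, then uncomment it
# with sed; the same substitution applies to the real nagios.cfg.
echo '#cfg_dir=/usr/local/nagios/etc/servers' > nagios.cfg
sed -i 's|^#cfg_dir=|cfg_dir=|' nagios.cfg
cat nagios.cfg   # now reads: cfg_dir=/usr/local/nagios/etc/servers
```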

Now we shall create a directory that will store the configuration file for each server that we will monitor using the command-

sudo mkdir /usr/local/nagios/etc/servers

We will open the Nagios configuration in our text editor using the command-

sudo nano /usr/local/nagios/etc/objects/contacts.cfg

Find the email directive and replace its value with the email address we use. Save the file and exit the text editor.

Next we will add a new command to our Nagios configuration which enables us to use check_nrpe command in Nagios service definitions. We open the file using the command-

sudo nano /usr/local/nagios/etc/objects/commands.cfg

We add the following to the end of the file to define a new command called check_nrpe-

...

define command{

        command_name check_nrpe

        command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$

}

Save and exit text editor.

Now we configure Apache to serve the Nagios user interface. Enable the Apache rewrite and cgi modules using the a2enmod command-

sudo a2enmod rewrite
sudo a2enmod cgi

Use the htpasswd command to create an admin user called nagiosadmin that can access the Nagios web interface, using the command-

sudo htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin

Enter a password at the prompt and remember it.

Now create a symbolic link for nagios.conf to the sites-enabled directory. This enables the Nagios virtual host using the command-

sudo ln -s /etc/apache2/sites-available/nagios.conf /etc/apache2/sites-enabled/

Setting up service for Nagios

We create a Nagios service file using the command-

sudo nano /etc/systemd/system/nagios.service

Enter the following definition into the file. This definition specifies when Nagios should start.

[Unit]

Description=Nagios

BindTo=network.target




[Install]

WantedBy=multi-user.target




[Service]

Type=simple

User=nagios

Group=nagios

ExecStart=/usr/local/nagios/bin/nagios /usr/local/nagios/etc/nagios.cfg

Save the file and exit text editor.

Now start Nagios and enable it to start when the server boots. We use the command-

sudo systemctl enable /etc/systemd/system/nagios.service
sudo systemctl start nagios

Nagios is now running, so let us log in to its web interface next.

Accessing the Web Interface

We open the browser and go to the Nagios server using the URL-

http://nagios_server_public_ip/nagios

Enter the admin username (nagiosadmin) and password created earlier, in the popup that appears.

After entering the credentials, the Nagios home page will appear.

After clicking on Hosts, we can see that the Nagios server is currently monitoring only localhost; this is because we haven’t added our remote host yet.

Now let us move onto our host server.

Installing nrpe on host

We will install the Nagios Remote Plugin Executor (nrpe) on the remote host, install some plugins, and then configure the Nagios server to monitor this host. We will refer to the remote server as the monitored server.

First we will create a “nagios” user which will run the nrpe agent using the command-

sudo useradd nagios

We will install nrpe from source, which means we will need the same development libraries we installed on the Nagios server. Therefore we first update the server using the command-

sudo apt-get update
sudo apt-get install build-essential libgd2-xpm-dev openssl libssl-dev unzip

nrpe requires that the Nagios Plugins package is installed on the remote host. Let us install this package from source using the commands-

cd ~
curl -L -O http://nagios-plugins.org/download/nagios-plugins-2.2.1.tar.gz

Now let us extract the Nagios plugins archive using the command-

tar zxf nagios-plugins-*.tar.gz

We move to the extracted directory using the command-

cd nagios-plugins-*

Before building Nagios Plugins, configure it to use the nagios user and group, and configure OpenSSL support using the command-

./configure --with-nagios-user=nagios --with-nagios-group=nagios --with-openssl

Now we compile the plugins using the make command-

make

Then install them using the command-

sudo make install

Let us now install nrpe using the command-

cd ~
curl -L -O https://github.com/NagiosEnterprises/nrpe/releases/download/nrpe-3.2.1/nrpe-3.2.1.tar.gz

Now let us extract the nrpe archive using the command-

tar zxf nrpe-*.tar.gz

We move to the extracted directory using the command-

cd nrpe-*

We now configure nrpe by specifying the Nagios user and group, using the command-

./configure --enable-command-args --with-nagios-user=nagios --with-nagios-group=nagios --with-ssl=/usr/bin/openssl --with-ssl-lib=/usr/lib/x86_64-linux-gnu

Now we build and install nrpe and its startup script with these commands-

make all
sudo make install
sudo make install-config
sudo make install-init

We will now update the nrpe configuration file using the command-

sudo nano /usr/local/nagios/etc/nrpe.cfg

Find the allowed_hosts directive, and add the private IP address of your Nagios server to the comma-delimited list-

allowed_hosts=127.0.0.1,::1,your_nagios_server_private_ip

Save and exit the text editor.

This configures nrpe to accept requests from our Nagios server via its private IP address.

Now we can start nrpe using the command-

sudo systemctl start nrpe.service

We can check if the service is running using the command-

sudo systemctl status nrpe.service

Next we should allow access to port 5666 through the firewall using the command-

sudo ufw allow 5666/tcp

Now we can check the communication with the remote nrpe server, using the command on the Nagios server-

sudo /usr/local/nagios/libexec/check_nrpe -H remote_host_ip

The output we would get is-

NRPE v3.2.1

Now let us configure some basic checks that Nagios can monitor.

First, let us monitor the disk usage of this server, using the command in the monitored server-

df -h /

The output we would get is-

Filesystem      Size  Used Avail Use% Mounted on

/dev/vda1        29G  1.4G   28G   5% /

Now we open nrpe configuration file using the command-

sudo nano /usr/local/nagios/etc/nrpe.cfg

Set the server_address directive to the private IP address of the monitored server, and change /dev/hda1 to /dev/vda1 in the disk check command.

...

server_address=monitored_server_private_ip

...

command[check_vda1]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/vda1

...

Save and exit the text editor.

Restart the nrpe service to put the change into effect using the command-

sudo systemctl restart nrpe.service

We have installed and configured nrpe in the monitored host. We will add this host to the Nagios server next.

Monitoring hosts with Nagios

To monitor hosts with Nagios, we will add a configuration file for each host specifying what we want to monitor. We can then view those hosts in the Nagios web interface.

We now create a new configuration file in the Nagios server using the command-

sudo nano /usr/local/nagios/etc/servers/your_monitored_server_host_name.cfg

We would get an empty file.

Add the following host definition, replacing the host_name value with the remote host’s name, the alias with a description of the host, and the address value with the private IP address of the remote host-

define host {

        use                             linux-server

        host_name                       your_monitored_server_host_name

        alias                           My client server

        address                         your_monitored_server_private_ip

        max_check_attempts              5

        check_period                    24x7

        notification_interval           30

        notification_period             24x7

}

Let us add some services to monitor. In the same configuration file, we shall add the following to monitor CPU load-

define service {

        use                             generic-service

        host_name                       your_monitored_server_host_name

        service_description             CPU load

        check_command                   check_nrpe!check_load

}

We shall add the following to monitor disk usage-

define service {

        use                             generic-service

        host_name                       your_monitored_server_host_name

        service_description             /dev/vda1 free space

        check_command                   check_nrpe!check_vda1

}

Now save and exit the text editor.

We now restart the Nagios service using the command-

sudo systemctl restart nagios

After several minutes, Nagios will check the new hosts and you’ll see them in the Nagios web interface.

Installing LAMP stack

What is LAMP stack ?

A LAMP stack is a collection of open source software programs that enables a server to host dynamic web pages and websites.

L = Linux (Ubuntu)

A = Apache

M = MySQL

P = PHP/Python

In this document we will install the LAMP stack on Ubuntu 16.04. The dynamic content is processed by PHP, and Linux is used in the form of Ubuntu.

Prerequisites

We should have an Ubuntu 14.04 or 16.04 server with a user that can run sudo commands, and UFW installed.

Demonstrations

  1. Performed on OMegha Platform
  2. Performed on Ubuntu 16.04

Installing Apache

We can install Apache using apt, the package manager offered by Ubuntu.

First we need to update our system using the command-

sudo apt-get update

We now install the apache2 package using the command-

sudo apt-get install apache2

Here Apache2 has already been installed.

Setting Global ServerName to prevent syntax warnings

We will add a single line to the /etc/apache2/apache2.conf file to suppress a warning message. 

Open the main configuration file with a text editor; here we use nano. We use the command-

sudo nano /etc/apache2/apache2.conf

Inside, at the bottom of the file, add a ServerName directive. If we do not have a domain name, we can use our public IP address instead.

We scroll down to the bottom of the file and add the statement as follows-

ServerName server_domain_or_IP

Save the file and close it, to go back to the terminal.

Check for syntax errors using the command-

sudo apache2ctl configtest

Restart Apache to implement the changes, using the command-

sudo systemctl restart apache2

Adjusting Firewall to allow web traffic

After installing UFW, we should check if there is an application profile for Apache using the command-

sudo ufw app list

If we look at the Apache Full profile, it should show that it enables traffic to ports 80 and 443. We can find it using the command-

sudo ufw app info "Apache Full"
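The profile only describes the ports; to actually open them, we still have to allow the profile. A sketch of that step:

```shell
# Open ports 80 (HTTP) and 443 (HTTPS) via the predefined profile
sudo ufw allow "Apache Full"

# Confirm the rule was added
sudo ufw status
```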

We can verify that everything went as planned by visiting our server/public IP address using our web browser. By using the URL –

http://server_IP_address

We will see the default Ubuntu 16.04 Apache web page, which is there for informational and testing purposes.

Installing MySQL

We can install MySQL using the package manager offered by Ubuntu, apt. We can use the command-

sudo apt-get install mysql-server

We will be shown a list of the packages that will be installed, along with the amount of disk space they will take up. Enter Y to continue.

During the installation of MySQL we will be asked to set a root password, which is used to grant privileges and keep our databases secure.

After confirming our password, we can run a security script that will remove some dangerous defaults and lock down access to our database system a little bit, using the command-

mysql_secure_installation

We will be asked to enter the password we had set up for the MySQL root account. Next, we will be asked if we want to configure the VALIDATE PASSWORD PLUGIN.

Answer y for yes, or anything else to continue without enabling it. If we are happy with the password we have already chosen, it is fine to continue without the plugin.

For the rest of the questions, we should press Y and hit the Enter key at each prompt. This will remove some anonymous users and the test database, disable remote root logins, and load these new rules so that MySQL immediately respects the changes we have made.

Installing PHP

We can install PHP using the package manager offered by Ubuntu, apt. We are going to include some helper packages as well, so that PHP code can run under the Apache server and talk to our MySQL database. We can use the command-

sudo apt-get install php libapache2-mod-php php-mcrypt php-mysql

Whenever a file is requested by the user, Apache will first look for a file called index.html. We want to tell our web server to prefer PHP files, so we will make Apache look for an index.php file first, using the command-

sudo nano /etc/apache2/mods-enabled/dir.conf

We will move index.php in front of index.html in the DirectoryIndex directive.
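After the edit, the relevant part of dir.conf should look roughly like this (the exact list of fallback names may vary by Apache version):

```apacheconf
<IfModule mod_dir.c>
    # index.php moved to the front so Apache prefers it
    DirectoryIndex index.php index.html index.cgi index.pl index.xhtml index.htm
</IfModule>
```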

When we are done, save and close the file by pressing Ctrl-X. We will have to confirm the save by typing Y and then hit Enter to confirm the file save location.

Restart Apache to implement the changes, using the command-

sudo systemctl restart apache2

Installing PHP modules

We can see all the available modules and libraries in PHP using the command-

apt-cache search php- | less

To get more information of a particular package we can use the command-

apt-cache show package_name

For example we want to find the information of php-curl, we can use the command-

apt-cache show php-curl

Test PHP Processing on Apache Web Server

To test if PHP is configured properly in our system, we create a basic PHP script.

We will call this script info.php. In order for Apache to find the file and serve it correctly, it must be saved to a very specific directory, which is called the “web root”.

In Ubuntu 16.04, this directory is located at /var/www/html/. We can create the file at that location by using the command-

sudo nano /var/www/html/info.php

An empty file will open. We can type in our PHP code in it and save the file.
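The conventional contents for this test file simply call phpinfo(), which renders a diagnostic page about the PHP installation:

```php
<?php
// Renders a page describing the PHP version, modules, and settings
phpinfo();
```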

Now we can test whether our web server can correctly display content generated by a PHP script by using the URL in our web browser-

http://your_server_IP_address/info.php

After confirming that it works, we should remove the file, because it could give information about our server to unauthorized users. We can use the command-

sudo rm /var/www/html/info.php

Installation and Fundamentals of Nginx on Ubuntu

Table of contents

  • Overview
  1. Introduction
  2. What is NGINX
  3. Nginx vs Apache
  • Installation
  1. Server with a overview
  2. Installation with a package manager
  3. Building nginx from source and adding modules
  4. Adding an nginx service
  • Configuration
  1. Understanding configuration terms
  2. Creating a Virtual host
  3. Location blocks
  4. Variables
  5. Rewrites and Redirects
  6. Tryfiles and Named locations
  7. Logging
  8. Inheritance and directive types
  9. PHP processing
  10. Worker process
  11. Buffers and timeouts
  12. Adding Dynamic modules
  • Performance
  1. Headers and expires
  2. Compressed responses
  3. FastCGI cache
  4. HTTP2
  5. Server push
  • Security
  1. HTTPS(SSL)
  2. Rate-Limiting
  3. Basic Auth
  • Reverse Proxy and load balancing
  1. Prerequisites
  2. Reverseproxy
  3. Load Balancer

Introduction

What is a webserver?

A web server is server software, or hardware dedicated to running said software, that can satisfy World Wide Web client requests. A web server can, in general, contain one or more websites. A web server processes incoming network requests over HTTP and several other related protocols.

What is NGINX?

Nginx is a web server which can also be used as a reverse proxy, load balancer, mail proxy and HTTP cache.

NGINX was created in 2004 by Russian developer Igor Sysoev, who was frustrated with Apache and wanted to build a replacement capable of handling 10,000 concurrent connections, with a focus on performance, high concurrency, and low memory usage. It is by no means a simple piece of software, but it is really good at practical tasks such as video streaming.

Nginx vs Apache

Apache is typically configured in what is called prefork mode, meaning it spawns a set number of processes, each of which can serve a single request at a time, regardless of whether that request is for a PHP script or an image. Nginx, on the other hand, deals with requests asynchronously, meaning that a single Nginx process can serve multiple requests concurrently, with that number depending mainly on the available system resources. Nginx will also serve static resources without the need to involve any server-side language, and this gives it quite an advantage over Apache. As for handling concurrent requests, Nginx can potentially receive thousands of requests on a single processing thread and respond to them as fast as it can without turning down any of those requests.

Apache on the other hand will accept a request up to the preconfigured number and then simply reject the rest.

Installation

Start with sudo apt-get update, which is always good practice

Then use sudo apt-get install nginx

You can check whether nginx is running using the process command, ps aux | grep nginx

Then you can get the ip address using the ifconfig command

Put the ip that you received on the web-browser

You can list all the files using ls -l /etc/nginx

Visit nginx.org

Copy the address of the link

Get back to the terminal then use

wget <Address of the link>

Now extract the tar file using this command-

tar -zxvf nginx-<version>.tar.gz

Now run ./configure to configure your source code

Then add the PCRE library, which is necessary on Ubuntu-

sudo apt-get install libpcre3 libpcre3-dev

Now build nginx from source.
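A typical configure-and-build sequence is sketched below; the paths are common choices rather than requirements, so adjust them to your setup:

```shell
# Configure with explicit paths so the binary, config, logs, and PID file
# land in standard locations
./configure \
  --sbin-path=/usr/bin/nginx \
  --conf-path=/etc/nginx/nginx.conf \
  --error-log-path=/var/log/nginx/error.log \
  --http-log-path=/var/log/nginx/access.log \
  --pid-path=/var/run/nginx.pid \
  --with-pcre

# Compile and install
make
sudo make install
```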

Adding an nginx service

We will be using systemd in order to run our nginx

Change the paths to match the values we used when configuring our source
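A minimal unit file, saved for example as /lib/systemd/system/nginx.service, might look like this; it assumes nginx was installed to /usr/bin/nginx with its PID file at /var/run/nginx.pid, so adjust both to match your build:

```ini
[Unit]
Description=The NGINX HTTP and reverse proxy server
After=network.target

[Service]
Type=forking
PIDFile=/var/run/nginx.pid
ExecStartPre=/usr/bin/nginx -t
ExecStart=/usr/bin/nginx
ExecReload=/usr/bin/nginx -s reload
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true

[Install]
WantedBy=multi-user.target
```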

The systemctl status nginx command can be used in place of the process command from now on

We can stop the process using the command-

sudo systemctl stop nginx

or, using the PID file directly-

sudo kill -s QUIT $(cat /var/run/nginx.pid)

The advantage that systemd gives us is that we don’t have to explicitly start our nginx servers each time we reboot.

We can use this command-

sudo systemctl enable nginx

Understanding Configuration Terms

The two main configuration terms are contexts and directives.

1) Context

2) Directive

Essentially, context is the same as scope; like scopes, contexts are nested and inherit from their parents, with the topmost context simply being the configuration file itself.

Other important contexts include the http context, for anything HTTP related, and the server context, which is where we define a virtual host.

Directives:

Specific configuration options that get set in the configuration files; each consists of a name and a value.

Creating a Virtual Host

We will create a basic virtual host to serve static files from a directory on our server.

Put this code in your nginx.conf file
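A minimal sketch of such a configuration; the server_name and root values are placeholders for your own IP and static-files directory:

```nginx
events {}

http {
    # Map file extensions to the correct Content-Type headers
    include mime.types;

    server {
        listen 80;
        server_name your_server_ip;

        # Directory containing the static files to serve
        root /sites/demo;
    }
}
```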

And now, in place of the default page, you will see the pages in that directory.

Check for any syntax errors using-

sudo nginx -t

Now this page is being served up.

To check locally on the terminal-

curl http://localhost

To make nginx serve each file with the correct content type, just include the mime.types file in nginx.conf

Location blocks

This is the most used context in any nginx configuration, and it is how we define and configure the behaviour of specific URIs or requests.

Think of location blocks as intercepting a request based on its value and then doing something other than just trying to serve a matching file relative to the root directory, as we saw.

This standard behaviour is great for static resources such as style sheets, images, etc.

Ways of writing a location block, and their matching priority
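As a sketch, the main forms and their matching priority (highest first) are:

```nginx
# 1. Exact match
location = /greet { return 200 "exact match"; }

# 2. Preferential prefix match (like a prefix match, but beats regex matches)
location ^~ /assets { return 200 "preferential prefix"; }

# 3. Regex match: ~ is case-sensitive, ~* is case-insensitive
location ~* /greet[0-9] { return 200 "regex match"; }

# 4. Plain prefix match (lowest priority)
location /greet { return 200 "prefix match"; }
```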

Variables

There are two types of variables: configuration variables, which we set ourselves with the set directive, and built-in nginx module variables such as $host, $uri, and $args.

Rewrites and redirects

Rewrite mutates the URI internally, and the request then gets re-evaluated again.

Redirect simply tells the client performing the request where to go instead.

Try files and named locations

The try_files directive, like the return and rewrite directives, can be used in the server context, so applying to all incoming requests, or inside a location context. What try_files allows us to do is have nginx check for a resource to respond with in any number of locations relative to the root directory, with a final argument that results in a rewrite and re-evaluation.
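A common sketch: check for the exact file, then a directory of that name, and finally fall back to a named location:

```nginx
location / {
    # Each argument is tried relative to the root; the last triggers a re-evaluation
    try_files $uri $uri/ @fallback;
}

# Named location used only as a try_files fallback
location @fallback {
    return 404 "Sorry, that resource could not be found.";
}
```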

Logging

There are two types of logs-

Error log: as the name suggests, for anything that failed or didn’t happen as expected.

Access log: to log all requests to the server.

Logging is an extremely important aspect not only of nginx but of web servers in general, as logs allow us to track down errors or even identify malicious users.

Logging is also enabled by default.

Inheritance and Directive types

Inheritance: in object-oriented programming, inheritance enables new objects to take on the properties of existing objects. A class that is used as the basis for inheritance is called a superclass or base class. A class that inherits from a superclass is called a subclass or derived class. nginx contexts inherit directives from their parent contexts in a similar way.

Directives are of three types :

1) Standard Directive-

Can only be declared once in a given context; a second declaration overrides the first. It is inherited by child contexts, which can override it by re-declaring it.

2)Array Directive:

Can be declared multiple times without overwriting previous declarations; child contexts inherit all of them unless they declare their own.

Example: the access_log directive.

3) Action Directive-

Invokes an action such as a rewrite or redirect in a context; once it fires, the request is either stopped or re-evaluated.

Example: the return directive.

Up to now we have configured nginx to serve static files of various types, leaving the rendering of each file to the client or browser based on its content type. While serving static files is great, a critical part of most web servers is the ability to serve dynamic content generated by a server-side language such as PHP. nginx cannot embed server-side language processors, so instead we will configure a standalone PHP service (php-fpm) and pass requests to it. This is essentially nginx functioning as a reverse proxy server.

First, install the standalone PHP service-

sudo apt-get install php-fpm
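Once php-fpm is running, requests for .php files can be passed to it along these lines; the socket path varies by PHP version, so check it with sudo find / -name "*fpm.sock":

```nginx
server {
    listen 80;
    root /sites/demo;

    # Prefer index.php over index.html
    index index.php index.html;

    location ~ \.php$ {
        # Pass PHP requests to the php-fpm service over a unix socket
        include fastcgi.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }
}
```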

Buffers and timeouts

A timeout simply defines after how much time the server’s connection will be severed.

Worker processes

The master process is the actual nginx server or software instance we started; it spawns worker processes, which listen for and handle client requests.

The default number of nginx worker processes is 1.

To change the number of worker processes nginx spawns, we use-

worker_processes 8;

To find the number of CPU cores, use the nproc command,

or use lscpu for more details

Use worker_processes auto;

to spawn exactly one worker process per CPU core, which is the most efficient setting.

worker_connections sets the maximum number of simultaneous connections each worker can handle, which is bounded by the operating system’s open-file limit (check it with ulimit -n).
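Putting these directives together (1024 is a typical per-worker connection limit, not a required value):

```nginx
# One worker per CPU core
worker_processes auto;

events {
    # Max simultaneous connections per worker; bounded by `ulimit -n`
    worker_connections 1024;
}

# Theoretical max clients = worker_processes * worker_connections
```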

Adding dynamic modules

To add modules to an existing nginx install, rebuild the source with the module flag appended.

Add it to your previous ./configure arguments.
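For example, compiling in the HTTP image filter module as a dynamic module (the module choice here is illustrative):

```shell
# Re-run configure with all previous flags plus the new dynamic module
./configure <previous arguments> \
  --with-http_image_filter_module=dynamic \
  --modules-path=/etc/nginx/modules

make
sudo make install
```

The module is then loaded in nginx.conf with load_module modules/ngx_http_image_filter_module.so;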

Headers and expires

Expires headers are essentially response headers informing the client or browser how long it can cache a response for.

curl -I http://127.0.0.1/thumb.png
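A sketch of setting expires headers for static assets; the file extensions and the 1M (one month) duration are illustrative:

```nginx
location ~* \.(css|js|jpg|png)$ {
    access_log off;                    # skip logging for static assets
    add_header Cache-Control public;   # response may be cached by any cache
    add_header Vary Accept-Encoding;   # cache compressed/uncompressed separately
    expires 1M;                        # allow the client to cache for one month
}
```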

Compressed responses

Using gzip, responses are compressed before being sent to the client, which reduces transfer time; the client then decompresses them.
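A minimal gzip setup in the http context; the compression level and types are common starting points, not requirements:

```nginx
http {
    gzip on;
    gzip_comp_level 3;                     # 3-4 is a good speed/size trade-off
    gzip_types text/css text/javascript;   # compress only text-based assets
}
```

To verify, request an asset with an Accept-Encoding header and look for Content-Encoding: gzip in the response, e.g. curl -I -H "Accept-Encoding: gzip" http://localhost/style.css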

FastCGI cache
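A minimal microcache sketch; the cache path, zone name, socket, and durations are all illustrative:

```nginx
http {
    # Define the cache: location on disk, in-memory key zone, entry lifetime
    fastcgi_cache_path /tmp/nginx_cache levels=1:2 keys_zone=ZONE_1:100m inactive=60m;
    fastcgi_cache_key "$scheme$request_method$host$request_uri";

    server {
        location ~ \.php$ {
            include fastcgi.conf;
            fastcgi_pass unix:/run/php/php7.0-fpm.sock;

            # Serve cached responses when available
            fastcgi_cache ZONE_1;
            fastcgi_cache_valid 200 60m;
        }
    }
}
```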

HTTP 2

First off, HTTP/2 is a binary protocol, whereas HTTP/1.1 is a textual protocol.

Binary data, ones and zeros, is a far more compact way of transferring data, and it greatly reduces the chance of errors during data transfer. Secondly, HTTP/2 compresses response headers, which again reduces transfer time. The single most important feature for performance, though, is that HTTP/2 uses persistent connections, and those persistent connections are also multiplexed, meaning that multiple assets such as stylesheets, scripts, and HTML can be combined into a single stream of binary data and transmitted over a single connection.

Server push: the client can be proactively sent assets such as .html, .js, and .css files before it requests them.

HTTPS(SSL)
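A minimal HTTPS sketch, assuming a certificate and key already exist at the illustrative paths below (a self-signed pair can be generated with the openssl req command):

```nginx
server {
    # Enable TLS and HTTP/2 on the standard HTTPS port
    listen 443 ssl http2;
    server_name your_domain;

    ssl_certificate     /etc/nginx/ssl/self.crt;
    ssl_certificate_key /etc/nginx/ssl/self.key;

    root /sites/demo;
}
```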

Rate Limiting
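A sketch using the limit_req module; the zone name, size, and rate are illustrative:

```nginx
http {
    # Track requests per client IP: 1 request/second, 10 MB of state
    limit_req_zone $binary_remote_addr zone=MYZONE:10m rate=1r/s;

    server {
        location / {
            # Queue up to 5 bursting requests instead of rejecting them outright
            limit_req zone=MYZONE burst=5 nodelay;
        }
    }
}
```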

Basic Auth
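A sketch of protecting a location with basic auth; the password file is created beforehand with the htpasswd utility from the apache2-utils package:

```nginx
location / {
    # Prompt for credentials before serving anything under /
    auth_basic "Secure Area";

    # File created with: sudo htpasswd -c /etc/nginx/.htpasswd user1
    auth_basic_user_file /etc/nginx/.htpasswd;
}
```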

Prerequisites

Reverse Proxy
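A minimal reverse proxy sketch; the listen port and backend address are hypothetical:

```nginx
server {
    listen 8888;

    location / {
        # Forward requests to a backend service and pass along the client IP
        proxy_pass http://localhost:9999/;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```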

Load Balancing

Load balancing has two main goals: distribution of requests among multiple servers, and redundancy.
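A round-robin sketch across two hypothetical backends; if one server goes down, nginx routes requests to the remaining ones, which provides the redundancy:

```nginx
http {
    # Requests are distributed round-robin by default
    upstream app_servers {
        server 10.0.0.1:80;
        server 10.0.0.2:80;
    }

    server {
        listen 8888;
        location / {
            proxy_pass http://app_servers;
        }
    }
}
```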

Installing Node.js

What is node.js?

Node.js is an open source server environment that executes JavaScript outside the browser and can generate dynamic page content. Here .js is the standard filename extension for JavaScript code.

Prerequisites

We should have an Ubuntu server of version 14.04, 16.04, or 18.04, where sudo commands can be used. In this document we are using Ubuntu version 16.04.

Demonstrations

  1. Performed on OMegha Platform
  2. Performed on UBUNTU 16.04

Installing Node.js in UBUNTU

Ubuntu 16.04 contains Node.js in its default repositories, so we just need to install it. First we refresh our local package index and then install Node.js, using the commands-

sudo apt-get update

sudo apt-get install nodejs

This Node.js is not the latest version.

Installing NPM

We can also Install Node Package Manager (NPM). This will allow us to install modules and packages to use with Node.js. We use the command-

sudo apt-get install npm

To check version of Node.js

We can check the version of Node.js we have installed, by using the command-

nodejs --version

Or we can use the command-

nodejs -v

This also helps in checking if the package is installed successfully.

To check version of NPM

We can check the version of NPM we have installed, by using the command-

npm --version

Executing JavaScript in UBUNTU

First we open nano on the terminal and the text editor opens. We can write the JavaScript code and save the file with a .js extension.

To execute the given file we use the command-

nodejs filename.js
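As a quick check, a hypothetical hello.js might look like this:

```javascript
// hello.js - a minimal script to confirm the interpreter works
function greet(name) {
  return "Hello, " + name + "!";
}

console.log(greet("Node.js"));
```

Running nodejs hello.js should print Hello, Node.js!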

Removing Node.js

We can remove Node.js in several ways, depending on what we want to keep.

  1. Removing Node.js without removing configuration files
  2. Removing Node.js with configuration files
  3. Removing unused Packages

Removing Node.js without removing configuration files

We can remove Node.js without removing the configuration files using the command-

sudo apt-get remove nodejs

Removing Node.js with configuration files

We can remove Node.js along with its configuration files using the command-

sudo apt-get purge nodejs

Removing unused packages

The unused packages can be removed using the command-

sudo apt-get autoremove

To host and access a simple html page using Apache

1.About Apache

a) Apache is the most widely used web server software. Developed and maintained by the Apache Software Foundation, Apache is open source software available for free.

b) A Web server is a program that uses HTTP (Hypertext Transfer Protocol) to serve the files that form Web pages to users, in response to their requests, which are forwarded by their computers’ HTTP clients.

2.Prerequisites

a) An Ubuntu server of version 16.04

b) An account with sudo privileges

3.Installing Apache

a)   $ sudo apt-get update

   Refreshes the local package index.

b)    $ sudo apt-get install apache2

   Enables modules, configurations, sites.

4. Checking if Apache works

a) $ ps -ef | grep apache2

   The output lists apache2 processes started with the -k start flag, which means Apache is up and running.

5.Creating a directory to place our file

a) $ sudo mkdir -p /var/www/h.com/public_html

 b) $ sudo chown -R $USER:$USER /var/www/h.com/public_html

6.Granting permissions

 a) $ sudo chmod -R 755 /var/www

7. Getting to the folder created

a) cd /var

 b) cd www

 c) cd h.com

 d) cd public_html

8.Creating a simple html page in the current folder

a) $ nano /var/www/h.com/public_html/index.html

9.Contents of the html page

a) A simple html file, like the one below

<html>

<head>

<title>This is page h</title>

</head>

<body>Hello Everyone!!</body>

</html>

10.Creating a virtual host file

a) Get out of the previous directory, then

  $ cd /etc

  $ cd apache2

  $ cd sites-available

  $ ls

b) Under this folder there is a file named “000-default.conf”. We will copy the information in this file to our new file named h.com.conf

  $ sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/h.com.conf

c)Now, we have to get into this file. After editing we must save it.

 $ sudo nano /etc/apache2/sites-available/h.com.conf

d)Changes to be made

  ServerAdmin info@h.com

  ServerName h.com

  ServerAlias www.h.com

DocumentRoot /var/www/h.com/public_html

11.Enabling our virtual host file

a) $ sudo a2ensite h.com.conf

    This enables site h.com

12.Disabling the default host file

 a) $ sudo a2dissite 000-default.conf

       This disables the default host file

13.Restarting Apache services

 a) $ sudo service apache2 restart

  It says *Restarting web server Apache2

14.Testing

a) For this, we use the public ip address which in my case is (for example) 1.2.3.4

b) On typing the above ip address on a web browser, we can see the body of the page.

If this doesn’t work, we must consider the following steps:

 c)We must check the access and error log files. Later, try rectifying it accordingly.

 d)For example, this is my private ip address 1.2.2.1

    $ curl 1.2.2.1 works but $ curl 1.2.3.4 gives timed out

  If the private address gives you the contents of your html file, then Apache is working but cannot be reached from an outside network. So, this means the port is closed

Therefore, we have to check if the port is up. For Apache the port number is 80.                                                                                                                             

Uncomplicated Firewall Setup

What is Firewall

A firewall is hardware or software that divides a computer network into segments and tries to prevent unauthorised access from unknown systems.

What is UFW?

Uncomplicated Firewall (UFW) is a firewall configuration tool for Ubuntu. UFW provides a much more user-friendly framework for managing netfilter and a command-line interface for working with the firewall.

Prerequisites

We should have an Ubuntu server of version 14.04 or 16.04, where sudo commands can be used.

Demonstrations

  1. Performed on OMegha Platform
  2. Performed on UBUNTU 14.04

Installing UFW

UFW is installed by default on Ubuntu. If it has not been installed we can install with the command –

sudo apt-get install ufw

Here UFW has already been installed.

Checking Status and Rules(Inactive)

We can check the status of UFW in detail using the command-

sudo ufw status verbose

Here the Status is inactive as UFW is not enabled.

Showing Application list

We can check the list of application available in UFW using the command-

sudo ufw app list

Showing Application Information

We can check the information of an application in UFW using the command-

sudo ufw app info 'Postfix'

Allowing SSH connections

Before enabling UFW firewall, we have to set some rules for incoming of certain connections.

The connection in this example is SSH connection using the command-

sudo ufw allow ssh

Or we can specify the port instead of the service name. The command using port –

sudo ufw allow 22

Enable UFW

To enable UFW Firewall, we use the command-

sudo ufw enable

You will receive a warning that says the command may disrupt existing SSH connections. We already added a rule to allow SSH connections, so we can press y to continue.

Checking Status and Rules(Active)

After enabling the UFW Firewall, we can check the status and rules again, using the same command-

sudo ufw status verbose

We can also display numbers next to each rule, using the command-

sudo ufw status numbered

Allowing other connections

We can also connect to other servers. We should know either the service name or the port. For http service we can use the following commands-

The command using service name –

sudo ufw allow http

The command using port

sudo ufw allow 80

Allowing specific IP Address

We can also allow the connections from a specific IP Address, for example, from our personal machine or any other machine. We need to use the command-

sudo ufw allow from IP Address

We can also specify the port number through which the IP address passes through.

sudo ufw allow from IP Address to any port Number

Deny connections

We can deny certain rules to prevent certain connections, using the command-

sudo ufw deny ssh

We can also deny connections using the IP Address of the particular service, using the command-

sudo ufw deny from IP Address

Delete Rules

We can delete existing rules by 2 ways-

  1. Rule Number
  2. Actual Rule

Rule Number

By using this method, first we need to access the status of the UFW Firewall with numbering. Then we can use the command-

sudo ufw delete 5

Actual Rule

By using this method, we have to mention the rule itself that we want to delete. The command-

sudo ufw delete deny 80

Disable UFW

We can disable the UFW Firewall using the command-

sudo ufw disable

Reset UFW Rules

We can reset the UFW Configuration using the command-

sudo ufw reset