Elasticsearch Cluster on Ubuntu 14.04


Elasticsearch is a popular open-source search server used for real-time distributed search and analysis of data. When used for anything other than development, Elasticsearch should be deployed across multiple servers as a cluster, for the best performance, stability, and scalability.



OMegha Platform.

Image – Ubuntu-14.04


You must have at least three Ubuntu 14.04 servers to complete this tutorial, because an Elasticsearch cluster should have a minimum of three master-eligible nodes. If you want to have dedicated master and data nodes, you will need at least three servers for your master nodes plus additional servers for your data nodes.

Install Java 8:

Elasticsearch requires Java, so we will install that now. We will install a recent version of Oracle Java 8 because that is what Elasticsearch recommends. It should, however, work fine with OpenJDK if you decide to go that route.

Complete this step on all of your Elasticsearch servers.

Add the Oracle Java PPA to apt:

$ sudo add-apt-repository -y ppa:webupd8team/java

Update your apt package database:

$ sudo apt-get update

Install the latest stable version of Oracle Java 8 with this command (and accept the license agreement that pops up):

$ sudo apt-get -y install oracle-java8-installer

Be sure to repeat this step on all of your Elasticsearch servers.

Now that Java 8 is installed, let’s install Elasticsearch.

Install Elasticsearch:

Elasticsearch can be downloaded directly from elastic.co in zip, tar.gz, deb, or rpm packages. For Ubuntu, it’s best to use the deb (Debian) package, which will install everything you need to run Elasticsearch.

$ wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.2.deb

Then install it in the usual Ubuntu way with the dpkg command like this:

$ sudo dpkg -i elasticsearch-1.7.2.deb

This results in Elasticsearch being installed in /usr/share/elasticsearch/ with its configuration files placed in /etc/elasticsearch and its init script added in /etc/init.d/elasticsearch.

To register the init script so that Elasticsearch can be managed as a service, run:

$ sudo update-rc.d elasticsearch defaults

Be sure to repeat these steps on all of your Elasticsearch servers.

Elasticsearch is now installed, but it needs to be configured before you can use it.

Configure the Elasticsearch Cluster

Now it’s time to edit the Elasticsearch configuration. Complete these steps on all of your Elasticsearch servers.

Open the Elasticsearch configuration file for editing:

$ sudo vi /etc/elasticsearch/elasticsearch.yml

Set Cluster Name:

Next, set the name of your cluster, which will allow your Elasticsearch nodes to join and form the cluster. You will want to use a descriptive name that is unique within your network.

Find the line that specifies cluster.name, uncomment it, and replace its value with your desired cluster name. In this tutorial, we will name our cluster “elasticsearch_cluster”:
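After the edit, the line in elasticsearch.yml should read:

```yaml
cluster.name: elasticsearch_cluster
```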


Set Node Name:

Next, we will set the name of each node. This should be a descriptive name that is unique within the cluster.

Find the line that specifies node.name, uncomment it, and replace its value with your desired node name. In this tutorial, we will set each node name to the hostname of the server by using the ${HOSTNAME} environment variable:
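Using the ${HOSTNAME} variable, the line should read:

```yaml
node.name: ${HOSTNAME}
```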


For Master Node:

For a master node, set node.master to true and node.data to false:
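A dedicated master node therefore carries these two settings:

```yaml
node.master: true
node.data: false
```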


For Data Node:

For a data node, set node.master to false and node.data to true:
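A dedicated data node uses the opposite settings:

```yaml
node.master: false
node.data: true
```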


Network Host:

Set network.host to the address that Elasticsearch should bind to, such as the server’s private IP address:
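As a sketch, 0.0.0.0 binds Elasticsearch to all interfaces, which is convenient for testing but should be narrowed to a private address in production:

```yaml
network.host: 0.0.0.0
```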


Set Discovery Hosts:

Next, you will need to configure an initial list of nodes that will be contacted to discover and form a cluster. This is necessary in a unicast network.

Find the line that specifies discovery.zen.ping.unicast.hosts and uncomment it.

For example, if you have three servers named node01, node02, and node03, you could set discovery.zen.ping.unicast.hosts to their respective VPN IP addresses, like this:
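Assuming placeholder VPN addresses of 10.0.0.1, 10.0.0.2, and 10.0.0.3 (substitute your servers’ actual addresses), the line would look like this:

```yaml
discovery.zen.ping.unicast.hosts: ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
```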


Save and exit elasticsearch.yml.

Your servers are now configured to form a basic Elasticsearch cluster. There are more settings that you will want to update, but we’ll get to those after we verify that the cluster is working.

Start Elasticsearch:

Now start Elasticsearch:

$ sudo service elasticsearch restart

Then run this command to start Elasticsearch on boot:

$ sudo update-rc.d elasticsearch defaults 95 10

Be sure to repeat these configuration steps on all of your Elasticsearch servers.


By now, Elasticsearch should be running on port 9200. You can test it with curl, the command-line URL transfer tool, using a simple GET request like this:

$ curl -X GET 'http://localhost:9200'

You should see the following response:
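For Elasticsearch 1.7.2 the response is a small JSON document along these lines (the name and cluster_name reflect your own configuration; some version fields are omitted here):

```json
{
  "status" : 200,
  "name" : "node01",
  "cluster_name" : "elasticsearch_cluster",
  "version" : {
    "number" : "1.7.2",
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
```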


If you see a response similar to the one above, Elasticsearch is working properly. If not, make sure that you have followed the installation instructions correctly and that you have allowed some time for Elasticsearch to fully start.

Check Cluster State:

If everything was configured correctly, your Elasticsearch cluster should be up and running. Before moving on, let’s verify that it’s working properly. You can do so by querying Elasticsearch from any of the Elasticsearch nodes.

From any of your Elasticsearch servers, run this command to print the state of the cluster:

$ curl -XGET 'http://localhost:9200/_cluster/state?pretty'


If you see output that is similar to this, your Elasticsearch cluster is running! If any of your nodes are missing, review the configuration for the node(s) in question before moving on.

Next, we’ll go over some configuration settings that you should consider for your Elasticsearch cluster.

Enable Memory Locking:

Elastic recommends avoiding swapping of the Elasticsearch process at all costs, due to its negative effects on performance and stability. One way to avoid excessive swapping is to configure Elasticsearch to lock the memory that it needs.

Complete this step on all of your Elasticsearch servers.

Edit the Elasticsearch configuration:

$ sudo vi /etc/elasticsearch/elasticsearch.yml

Find the line that specifies bootstrap.mlockall and uncomment it:
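After uncommenting, the line should read:

```yaml
bootstrap.mlockall: true
```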


Save and exit.

Now restart Elasticsearch to put the changes into place:

$ sudo service elasticsearch restart

Cluster Health:

This API can be used to see general info on the cluster and gauge its health:

$ curl -XGET 'localhost:9200/_cluster/health?pretty'
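A healthy cluster of three nodes (two of them data nodes) returns something along these lines; the exact fields vary slightly between versions, and your shard counts will differ:

```json
{
  "cluster_name" : "elasticsearch_cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0
}
```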


Cluster State:

This API can be used to see a detailed status report on your entire cluster. You can filter results by specifying parameters in the call URL.

$ curl -XGET 'localhost:9200/_cluster/state?pretty'



Your Elasticsearch cluster should now be running in a healthy state and configured with some basic optimizations.



Node.js Installation


Node.js is a cross-platform runtime environment and library for running JavaScript applications, used to create networking and server-side applications.

It is used to develop I/O-intensive web applications such as video streaming sites, single-page applications, and other web applications.


To follow this blog, you will need:

  • One Ubuntu 16.04 server set up by following this initial server setup, including a sudo non-root user.

Node.js Installation In Ubuntu 16.04

Step-1 Update the Package List

Before installing Node.js on the Ubuntu system, update all available repositories.

sudo apt-get update

Step-2  Install Node.js

Run the command below to install the standard Node.js package:

sudo apt-get install nodejs

Step-3 Install NPM

The npm package manager is also required for working with Node.js. Install the npm package with the command below:

sudo apt-get install npm

In order for some npm packages to work (those that require compiling code from source, for example), you will need to install the build-essential package:

sudo apt-get install build-essential

Installation Check

After installing Node.js and npm, check that the installation is correct by typing the following commands:

Node.js Installation Check

nodejs --version

NPM Installation Check

npm --version
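As a further sanity check, you can run a short script with the runtime you just installed. The file name hello.js is just an example:

```javascript
// hello.js - minimal script to confirm the Node.js runtime works
const message = ['Hello', 'from', 'Node.js'].join(' ');
console.log(message); // prints: Hello from Node.js
```

Run it with nodejs hello.js (or node hello.js, depending on how the binary is named on your system).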

Remove Node.js

To remove Node.js from the Ubuntu system completely, use the following commands:

Remove Package without Configuration Files

This command removes Node.js, but its configuration files remain, so they will be reused the next time you install Node.js.

sudo apt-get remove nodejs

Remove Package with Configuration Files

If you don’t want to keep the configuration files, then use the following command.

sudo apt-get purge nodejs

Finally, Remove Unused Packages

To remove the unused packages that were installed with Node.js, run the following command:

sudo apt-get autoremove 

Installing Open Source Hosting Control Panel (ZesleCP) on Ubuntu 16.04


Zesle Control Panel

Secure Web Control Panel for your needs…


Zesle is a popular open-source control panel that anyone can download and install. It is very simple and can be installed with just one command.

System Requirements:

  • Ubuntu 14/16
  • 1Ghz CPU
  • 512MB RAM
  • 10+GB DISK

Zesle is simple and very user-friendly. Using Zesle, you’ll be able to do the tasks below…

  • Add multiple domains without hassle;
  • Add multiple sub domains;
  • Install WordPress easily with one-click app;
  • Install free Let’s Encrypt SSL certificates with ease;
  • Configure basic PHP settings;
  • Manage Email accounts;
  • Access phpMyAdmin.

and much more. Let’s see how to install Zesle in your hosting.

Step 1: It’s super-easy to install Zesle. Run the following command with root privileges.

$ cd /home && curl -o latest -L http://zeslecp.com/release/latest && sh latest

Step 2: The installation will begin, and partway through it will ask for the admin’s email address. Provide your email address and press Enter.

Step 3: You will see the screen below at the end of the installation.


Step 4: Once the installation is completed, Zesle will show you the temporary password and the login URL.

Step 5: The login URL will be your IP address followed by the port number (2087 is the default), for example YOUR_SERVER_IP:2087.

Step 6: Just enter this URL in your browser and you’ll get the login screen.


Step 7: Use root as your username.

Step 8: Copy and paste the temporary root password provided. Once you have entered the correct password, the control panel will open. All the options mentioned above are available on the left side of the UI.


Step 9: In the Dashboard, you can create your account and install WordPress on your domain using “One Click Apps”.

Step 10: That concludes the installation steps for the free Linux web hosting control panel ZesleCP.


Installation of Open Project Management System on Ubuntu 16.04



OpenProject is a web-based management system for location-independent team collaboration, released under the GNU GPL 3 license. It is project management software that provides task management, team collaboration, scrum, and more. OpenProject is written in Ruby on Rails and AngularJS. In this tutorial, I will show you how to install and configure the OpenProject management system on Ubuntu 16.04. The tool can be installed manually or by using packages from the repository. For this guide, we will install OpenProject from the repository.


  •  Ubuntu 16.04.
  •  Good Internet Connectivity.
  •  Root Privileges.

What we will do

  • Update and Upgrade System.
  • Install Open Project Management System.
  • Configure the Open Project System.
  • Testing.

Step 1: Update and Upgrade System

Before installing OpenProject on the Ubuntu system, update all available repositories and upgrade the system.

Run the following commands.

$ sudo apt update
$ sudo apt upgrade -y

Step 2: Install Open Project Management System

Download the OpenProject key and add it to the system.

$ wget -qO- https://dl.packager.io/srv/opf/openproject-ce/key | sudo apt-key add -

Then download the OpenProject repository file for Ubuntu 16.04 into the ‘/etc/apt/sources.list.d’ directory.

$ sudo wget -O /etc/apt/sources.list.d/openproject-ce.list \

Now update the Ubuntu repository and install OpenProject using the apt command as shown below.

$ sudo apt update
$ sudo apt-get install openproject -y

Step 3: Configure the Open Project System

Run the OpenProject configuration command. A graphical UI screen will appear.

$ sudo openproject configure


Select ‘Install and configure MySQL server locally’ and click ‘OK’. It will automatically install the MySQL server on the system and create the database for the OpenProject installation.

For the web server configuration, choose ‘Install apache2 server’ and click ‘OK’. It will automatically install the Apache2 web server and configure a virtual host for the OpenProject application.


Now type the domain name for your OpenProject application, and choose ‘OK’.

Next comes the SSL configuration: choose ‘yes’ if you have purchased SSL certificates, and ‘no’ if you don’t.


Skip the Subversion support, GitHub support, and SMTP configuration (if not needed).

For the memcached installation, choose ‘Install’ and select ‘OK’ for better OpenProject performance.


Finally, the installation and configuration of all the packages required for OpenProject happens automatically.

Step 4: Testing

Check whether the Open Project service is up and running.

$ sudo service openproject status

Now run the openproject web service using the following command.

$ sudo openproject run web

Now open your web browser and type your floating IP in the address bar to access the system.


Now click the ‘Sign in’ button to log in to the admin dashboard, initially using ‘admin’ as both the username and password; you can change these later.

Finally, the installation and configuration of OpenProject on Ubuntu 16.04 has been completed successfully.




Apache Virtual Hosts on Ubuntu 14.04


The Apache web server is the most popular way of serving web content on the internet. It accounts for more than half of all active websites on the internet and is extremely powerful and flexible.

Apache breaks its functionality and components into individual units that can be customized and configured independently. The basic unit that describes an individual site or domain is called a Virtual Host.


These designations allow the administrator to use one server to host multiple domains or sites from a single interface or IP address by using a matching mechanism. This is relevant to anyone looking to host more than one site on a single VPS.

In this document, we will walk you through how to set up Apache virtual hosts on an Ubuntu 14.04 VPS. During this process, you’ll learn how to serve different content to different visitors depending on which domains they are requesting.


  • Before you begin this tutorial, you should create a non-root user.
  • You will also need to have Apache installed in order to work through these steps.


OMegha platform.

Image – Ubuntu-14.04

Let’s get started.

First, we need to update the package list.

$ sudo apt-get update


Install Apache

$ sudo apt-get install apache2


For the purposes of this document, my configuration will make a virtual host for infra.com and another for infra1.com.

Step 1: Create the Directory Structure

Our document root will be set to individual directories under the /var/www directory. We will create a directory here for both of the virtual hosts we plan on making.

Within each of these directories, we will create a public_html folder that will hold our actual files. This gives us some flexibility in our hosting.

$ sudo mkdir -p /var/www/infra.com/public_html

$ sudo mkdir -p /var/www/infra1.com/public_html

These directories correspond to the domain names that we want to serve from our VPS.

Step 2: Grant Permissions

Changing the Ownership

$ sudo chown -R $USER:$USER /var/www/infra.com/public_html

$ sudo chown -R $USER:$USER /var/www/infra1.com/public_html


We should also modify our permissions a little to ensure that read access is permitted to the general web directory and all of the files and folders it contains.

$ sudo chmod -R 755 /var/www

Step 3: Create Demo Pages for Each Virtual Host

We have to create an index.html file for each site.

Let’s start with infra.com. We can open up an index.html file in our editor by typing:

$ sudo vi /var/www/infra.com/public_html/index.html

In this file, create a simple HTML document that indicates which site it is connected to. My file looks like this:



<html>
  <head>
    <title>Welcome to infra.com!</title>
  </head>
  <body>
    <h1>Success!  The infra.com virtual host is working!</h1>
  </body>
</html>



Save and close the file when you are finished.

We can copy this file to use as the basis for our second site by typing:

$ cp /var/www/infra.com/public_html/index.html /var/www/infra1.com/public_html/index.html

Then we can open the file and modify the relevant pieces of information:

$ sudo vi /var/www/infra1.com/public_html/index.html



<html>
  <head>
    <title>Welcome to infra1.com!</title>
  </head>
  <body>
    <h1>Success!  The infra1.com virtual host is working!</h1>
  </body>
</html>



Save and close the file.

Step 4: Create New Virtual Host Files

Virtual host files are the files that specify the actual configuration of our virtual hosts and dictate how the Apache web server will respond to various domain requests.

Apache comes with a default virtual host file called 000-default.conf, and we can copy that to create the virtual host file for our first domain.

Creating First Virtual Host File

Start by copying the file for the first domain:

$ sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/infra.com.conf

Open the new file in your editor with root privileges

$ sudo vi /etc/apache2/sites-available/infra.com.conf

Our virtual host file should look like this:

<VirtualHost *:80>

    ServerAdmin admin@infra.com

    ServerName infra.com

    ServerAlias www.infra.com

    DocumentRoot /var/www/infra.com/public_html

    ErrorLog ${APACHE_LOG_DIR}/error.log

    CustomLog ${APACHE_LOG_DIR}/access.log combined

</VirtualHost>


Save and close the file.

Copy the First Virtual Host and Customize for the Second Domain

Now that we have our first virtual host file established, we can create our second one by copying that file and adjusting it as needed.

Start by copying

$ sudo cp /etc/apache2/sites-available/infra.com.conf /etc/apache2/sites-available/infra1.com.conf

Open the new file with root privileges

$ sudo vi /etc/apache2/sites-available/infra1.com.conf

You now need to modify all of the pieces of information to reference your second domain. When you are finished, it may look something like this

<VirtualHost *:80>

    ServerAdmin admin@infra1.com

    ServerName infra1.com

    ServerAlias www.infra1.com

    DocumentRoot /var/www/infra1.com/public_html

    ErrorLog ${APACHE_LOG_DIR}/error.log

    CustomLog ${APACHE_LOG_DIR}/access.log combined

</VirtualHost>


Save and close the file.

Step 5: Enable the New Virtual Host Files

The newly created virtual host files need to be enabled.

We can use the a2ensite tool to enable each of our sites:

$ sudo a2ensite infra.com.conf

$ sudo a2ensite infra1.com.conf


Restart the Apache server.

$ sudo service apache2 restart

Step 6: Setup Local Hosts File

$ sudo vi /etc/hosts

The details that you need to add are the public IP address of your VPS server followed by the domain you want to use to reach that VPS:

***.***.***.*** infra.com

***.***.***.*** infra1.com

Save and close the file.

This will direct any requests for infra.com and infra1.com made on our computer and send them to our server at the IP address specified.

Step 7: Test Your Results

Now that you have your virtual hosts configured, you can test your setup easily by going to the domains that you configured in your web browser



You should see a page that looks like this

Likewise, if you visit your second page:



You will see the file you created for your second site

Step 8: Conclusion

If both of these sites work well, you’ve successfully configured two virtual hosts on the same server.

If you need these sites to be accessible long term, consider purchasing a domain name for each site and setting it up to point to your VPS server.

Centralize Logs from Node.js Applications



  • Installation of Node.js and NPM
  • Installation of Fluentd

Modifying the Config File

Next, configure Fluentd to use the forward input plugin as its data source.

$ sudo vi /etc/td-agent/td-agent.conf

The Fluentd daemon should listen on a TCP port (24224 by default).

A simple configuration is the following:
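A minimal sketch of the relevant td-agent.conf section, assuming the default forward port 24224 and a stdout output for testing (the fluentd.test.** tag is illustrative; td-agent v0.12 uses type, while newer versions use @type):

```
<source>
  type forward
  port 24224
</source>

<match fluentd.test.**>
  type stdout
</match>
```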


Restart your agent once these lines are in place.

$ sudo service td-agent restart


Install fluent-logger-node

$ npm install fluent-logger

Now use npm to install your dependencies locally:

$ npm install

Send an event record to Fluentd


This is the simplest web app: an index.js that uses fluent-logger to send an event record to Fluentd on each request.


Run the app and go to http://localhost:4000 in your browser. This will send the logs to Fluentd.

$ node index.js


The logs should be output to /var/log/td-agent/td-agent.log.

Store Logs into MongoDB

Fluentd does 3 things:

  1. It continuously “tails” the access log file.
  2. It parses the incoming log entries into meaningful fields (such as ip, path, etc.) and buffers them.
  3. It writes the buffered data to MongoDB periodically.


Fluentd’s config file

$ sudo vi /etc/td-agent/td-agent.conf

The Fluentd configuration file should look like this:
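A sketch of such a tail-to-MongoDB pipeline; the log path, tag, database, and collection names here are illustrative, and the match section assumes the fluent-plugin-mongo output plugin is installed:

```
# Tail the access log and parse each entry into fields
<source>
  type tail
  format apache2
  path /var/log/apache2/access.log
  pos_file /var/log/td-agent/apache2.access.pos
  tag mongo.apache.access
</source>

# Buffer records and flush them to MongoDB periodically
<match mongo.**>
  type mongodb
  host localhost
  port 27017
  database apache
  collection access
  flush_interval 10s
</match>
```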


Restart your agent once these lines are in place.

$ sudo service td-agent restart

Then, access MongoDB and see the stored data.

$ mongo


Fluentd + MongoDB makes real-time log collection simple, easy, and robust.

Installation of MongoDB on Ubuntu 16.04


MongoDB is a free and open-source NoSQL document database used commonly in modern web applications.


MongoDB is one of several database types to arise in the mid-2000s under the NoSQL banner. Instead of using tables and rows as in relational databases, MongoDB is built on an architecture of collections and documents. Documents comprise sets of key-value pairs and are the basic unit of data in MongoDB. Collections contain sets of documents and function as the equivalent of relational database tables.

Like other NoSQL databases, MongoDB supports dynamic schema design, allowing the documents in a collection to have different fields and structures. The database uses a document storage and data interchange format called BSON, which provides a binary representation of JSON-like documents. Automatic sharding enables data in a collection to be distributed across multiple systems for horizontal scalability as data volumes increase.

This blog will help you set up MongoDB on your server for a production application environment.


To follow this blog, you will need:

  • One Ubuntu 16.04 server set up by following this initial server setup, including a sudo non-root user.

Adding the MongoDB Repository

MongoDB is already included in the Ubuntu package repositories, but the official MongoDB repository provides the most up-to-date version and is the recommended way of installing the software. In this step, we will add this official repository to our server.

Ubuntu ensures the authenticity of software packages by verifying that they are signed with GPG keys, so we first have to import the key for the official MongoDB repository.

$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927

After successfully importing the key, you will see:

gpg: Total number processed: 1
gpg:        imported: 1    (RSA:  1)

Next, we have to add the MongoDB repository details so apt will know where to download the packages from.

Issue the following command to create a list file for MongoDB.

$ echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.2.list

After adding the repository details, we need to update the packages list.

$ sudo apt-get update

Installing and Verifying MongoDB

Now we can install the MongoDB package itself.

$ sudo apt-get install -y mongodb-org

This command will install several packages containing the latest stable version of MongoDB, along with helpful management tools for the MongoDB server.

Next, start MongoDB with systemctl.

$ sudo systemctl start mongod

You can also use systemctl to check that the service has started properly.

$ sudo systemctl status mongod

mongodb.service - High-performance, schema-free document-oriented database
Loaded: loaded (/etc/systemd/system/mongodb.service; enabled; vendor preset: enabled)
Main PID: 4093 (mongod)
Tasks: 16 (limit: 512)
Memory: 47.1M
CPU: 1.224s
CGroup: /system.slice/mongodb.service
└─4093 /usr/bin/mongod --quiet --config /etc/mongod.conf

The last step is to enable MongoDB to start automatically when the system starts.

$ sudo systemctl enable mongod

The MongoDB server is now configured and running, and you can manage the MongoDB service using the systemctl command (e.g. sudo systemctl stop mongod, sudo systemctl start mongod).

Installing Asterisk on Ubuntu 16.04




Asterisk is a software implementation of a telephone private branch exchange (PBX). It allows telephones interfaced with a variety of hardware technologies to make calls to one another, and to connect to telephony services, such as the public switched telephone network (PSTN) and voice over Internet Protocol (VoIP) services. Its name comes from the asterisk symbol “*”.


Some of the many features of Asterisk include:

  • The Asterisk software includes many features available in commercial and proprietary PBX systems: voice mail, conference calling, interactive voice response, and automatic call distribution.
  • Users can create new functionality by writing dial plan scripts in several of Asterisk’s own extensions languages, by adding custom loadable modules written in C, or by implementing Asterisk Gateway Interface (AGI) programs using any programming language capable of communicating via the standard streams system (stdin and stdout) or by network TCP sockets.
  • Asterisk supports several standard voice over IP protocols, including the Session Initiation Protocol (SIP), the Media Gateway Control Protocol (MGCP), and H.323.
  • Asterisk supports most SIP telephones, acting both as registrar and back-to-back user agent.
  • By supporting a variety of traditional and VoIP telephony services, Asterisk allows deployers to build telephone systems, or migrate existing systems to new technologies.


Install Asterisk from Source

After logging in to your Ubuntu server as a regular user, issue the following command to switch to the root user.

$ sudo su 

Now you are root, but you need to set the password with the following command.

# passwd

The next step is to install the initial dependencies for Asterisk.

# apt-get install build-essential wget libssl-dev libncurses5-dev libnewt-dev libxml2-dev linux-headers-$(uname -r) libsqlite3-dev uuid-dev git subversion

Installing Asterisk

Now that we are root and the dependencies are satisfied, we can move to the /usr/src/ directory and download the latest Asterisk version there.

# cd /usr/src
# wget http://downloads.asterisk.org/pub/telephony/asterisk/asterisk-15-current.tar.gz

Next we unpack it.

# tar zxvf asterisk-15-current.tar.gz

Now we need to enter the newly unpacked directory.

# cd asterisk-15*

Before we actually compile the Asterisk code, we need ‘pjproject’, as asterisk-15 introduces support for pjsip. So we will compile it first:

# git clone git://github.com/asterisk/pjproject pjproject
# cd pjproject
# ./configure --prefix=/usr --enable-shared --disable-sound --disable-resample --disable-video --disable-opencore-amr CFLAGS='-O2 -DNDEBUG'
# make dep
# make && make install
# ldconfig
# ldconfig -p |grep pj

Configuring Asterisk

And now we proceed to configuring and compiling the Asterisk code.

# cd ..
# contrib/scripts/get_mp3_source.sh
# contrib/scripts/install_prereq install

This will install mp3 tones and satisfy additional dependencies, which might take some time and will ask you for your country code. The following command will compile and install Asterisk:

# ./configure && make menuselect && make && make install

When that is finished, to avoid making hundreds of config files yourself, you will normally want to run this command after install, which will create an initial config for you:

# make samples

And to have the startup script installed and enabled to start Asterisk on every boot, we run make config, followed by ldconfig:

# make config
# ldconfig

Now we can start Asterisk for the first time and see if it actually works.

# /etc/init.d/asterisk start

and then we can enter the Asterisk console with this command:

# asterisk -rvvv

Now we need to perform a few additional steps to make Asterisk run as the asterisk user. First, we need to stop Asterisk.

# systemctl stop asterisk

Then we need to add a group and a user named asterisk.

# groupadd asterisk
# useradd -d /var/lib/asterisk -g asterisk asterisk

Asterisk needs to be configured to start as the user we just created. We can edit /etc/default/asterisk by hand, but it is more efficient to use the following two sed commands.

# sed -i 's/#AST_USER="asterisk"/AST_USER="asterisk"/g' /etc/default/asterisk
# sed -i 's/#AST_GROUP="asterisk"/AST_GROUP="asterisk"/g' /etc/default/asterisk

To run properly, the asterisk user needs to be given ownership of all essential Asterisk directories.

# chown -R asterisk:asterisk /var/spool/asterisk /var/run/asterisk /etc/asterisk /var/{lib,log,spool}/asterisk /usr/lib/asterisk

The asterisk.conf file also needs to be edited to uncomment the lines for runuser and rungroup:

# sed -i 's/;runuser = asterisk/runuser = asterisk/g' /etc/asterisk/asterisk.conf
# sed -i 's/;rungroup = asterisk/rungroup = asterisk/g' /etc/asterisk/asterisk.conf

When this is done, reboot the server so that Asterisk is brought up automatically by systemd, and then type asterisk -rvvv to enter the Asterisk console.

# asterisk -rvvv


Memcached installation in Ubuntu 16.04

What is Memcached?

Memcached is a free and open-source, high-performance, distributed memory object caching system. It is generic in nature but intended for use in speeding up dynamic web applications by alleviating database load.

Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls, or page rendering.

Memcached is simple yet powerful. Its simple design promotes quick deployment, ease of development, and solves many problems facing large data caches. Its API is available for most popular languages.

The key features of Memcached 

  • It is open source.
  • Memcached server is a big hash table.
  • It significantly reduces the database load.
  • It is perfectly efficient for websites with high database load.
  • It is distributed under the Berkeley Software Distribution (BSD) license.
  • It is a client-server application over TCP or UDP.

Memcached is not 

  • a persistent data store
  • a database
  • application-specific
  • a large object cache
  • fault-tolerant or highly available


One Ubuntu 16.04 server


To install Memcached on Ubuntu we use apt

$ sudo apt-get update
$ sudo apt-get install memcached

We can also install libmemcached-tools, a library that provides several tools to work with your Memcached server:

$ sudo apt-get install libmemcached-tools

Securing Memcached Configuration Settings

To ensure that our Memcached instance is listening on the local interface, we will check the default setting in the configuration file located at /etc/memcached.conf.

$ sudo nano /etc/memcached.conf

To disable UDP (while leaving TCP unaffected), we’ll add the following option to the bottom of this file:
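The option, appended on its own line, is:

```
-U 0
```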

Restart Memcached and Verify

$ sudo systemctl restart memcached
$ sudo netstat -plunt

Adding Authorized Users

To add authenticated users to your Memcached service, it is possible to use Simple Authentication and Security Layer (SASL), a framework that de-couples authentication procedures from application protocols. We will enable SASL within our Memcached configuration file and then move on to adding a user with authentication credentials.

First, we will add the -S parameter to /etc/memcached.conf.

$ sudo nano /etc/memcached.conf

At the bottom of the file, add the -S option to enable SASL.

Then uncomment the -vv option so the logs include verbose output.

Restart the Memcached service:

$ sudo systemctl restart memcached

Next, we can take a look at the logs to be sure that SASL support has been enabled:

$ sudo journalctl -u memcached

Adding an Authenticated User

Now we can install sasl2-bin, a package that contains administrative programs for the SASL user database:

$ sudo apt-get install sasl2-bin

Next, we will create the directory and file that Memcached will check for its SASL configuration settings:

$ sudo mkdir -p /etc/sasl2
$ sudo nano /etc/sasl2/memcached.conf

Add the following to the SASL configuration file:

mech_list: plain
log_level: 5
sasldb_path: /etc/sasl2/memcached-sasldb2

Now we will create a SASL database with our user credentials. We will use the saslpasswd2 command to make a new entry for our user in the database with the -c option. Our user will be ankush here, but you can replace this name with your own. Using the -f option, we specify the path to the database, which will be the path we set in /etc/sasl2/memcached.conf:

$ sudo saslpasswd2 -a memcached -c -f /etc/sasl2/memcached-sasldb2 ankush

Finally, we will give the memcache user ownership over the SASL database:

$ sudo chown memcache:memcache /etc/sasl2/memcached-sasldb2

Restart the Memcached service:

$ sudo systemctl restart memcached

Run memcstat again with the new credentials to confirm that authentication works:

$ memcstat --servers="" --username=ankush --password=1234 

Redis installation in Ubuntu 16.04

What is Redis ?

Redis is an open source, in-memory data structure store, used as a database, cache and message broker. It is written in ANSI C and licensed under the BSD 3-Clause license.

It supports data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, geospatial indexes with radius queries and streams. 

Redis has built-in replication, Lua scripting, LRU eviction, transactions and different levels of on-disk persistence, and provides high availability via Redis Sentinel and automatic partitioning with Redis Cluster.

The name Redis means REmote DIctionary Server.

Redis popularized the idea of a system that can be considered at the same time a store and a cache: data is always modified and read from main memory, but it is also stored on disk in a format unsuitable for random access, used only to reconstruct the data in memory once the system restarts.

Redis Advantages :

  1. Exceptionally fast − Redis is very fast and can perform about 110000 SETs per second, about 81000 GETs per second.
  2. Supports rich data types − Redis natively supports most of the datatypes that developers already know such as list, set, sorted set, and hashes. This makes it easy to solve a variety of problems as we know which problem can be handled better by which data type.
  3. Operations are atomic − All Redis operations are atomic, which ensures that two clients accessing the same key concurrently always see a consistent, up-to-date value.
  4. Multi-utility tool − Redis is a multi-utility tool and can be used in a number of use cases such as caching, messaging-queues (Redis natively supports Publish/Subscribe), any short-lived data in your application, such as web application sessions, web page hit counts, etc.

Installation of Redis (Single-Node Configuration)


  1. Ubuntu 14.04 or above (we are using Ubuntu 18.04)
  2. A non-root user with sudo privileges


In order to get the latest version of Redis, we will use apt to install it from the official Ubuntu repositories.

$ sudo apt update
$ sudo apt install redis-server

Open the Redis configuration file with nano:

$ sudo nano /etc/redis/redis.conf

The supervised directive is set to no by default. Since we are running Ubuntu, which uses the systemd init system, change this to systemd, then save and close the file. Restart Redis to apply the change:


$ sudo systemctl restart redis.service

Testing Redis

$ sudo systemctl status redis

To test that Redis is functioning correctly

$ redis-cli
> ping
> set test "It's working!"
> get test
> exit

Binding to localhost

By default, Redis is only accessible from localhost. However, if you installed and configured Redis by following a different tutorial than this one, you might have updated the configuration file to allow connections from anywhere. This is not as secure as binding to localhost.

Opening the Redis configuration file for editing:

$ sudo nano /etc/redis/redis.conf

Find the line beginning with bind 127.0.0.1 and remove the # from the start of it, so that Redis listens only on localhost.

To restart the service 

$ sudo systemctl restart redis

To check that this change has gone into effect

$ sudo netstat -lnp | grep redis

Configuring a Redis Password

Configuring a Redis password enables one of its two built-in security features — the auth command, which requires clients to authenticate to access the database. The password is configured directly in Redis’s configuration file, /etc/redis/redis.conf, so open that file again with nano editor:

$ sudo nano /etc/redis/redis.conf

Uncomment the # requirepass foobared line by removing the #, and change foobared to a strong password of your choosing.

To restart 

$ sudo systemctl restart redis.service


Connecting with redis-cli now requires authentication with the auth command.

Configuration with PHP

Install the supporting PHP package for using Redis with PHP. This can be done by issuing the following command in the shell:

$ sudo apt install php-redis

Setting up Cloudflare account


Cloudflare, Inc. is a U.S. company that provides content delivery network services, DDoS mitigation, Internet security and distributed domain name server services. Cloudflare’s services sit between the visitor and the Cloudflare user’s hosting provider, acting as a reverse proxy for websites. Cloudflare’s headquarters are in San Francisco, California, with additional offices in Lisbon, London, Singapore, Munich, San Jose, Champaign, Austin, New York and Washington, D.C.


More than just Content Delivery (CDN) services, customers rely on Cloudflare’s global network to enhance security, performance, and reliability of anything connected to the Internet.

Cloudflare is designed for easy setup. Anyone with a website and their own domain can use Cloudflare regardless of their platform choice. Cloudflare doesn’t require additional hardware, software, or changes to your code.



Cloudflare stops malicious traffic before it reaches your origin web server. Cloudflare analyzes potential threats in visitor requests based on a number of characteristics:

  • visitor’s IP address,
  • resources requested,
  • request payload and frequency, and
  • customer-defined firewall rules.

Create your Cloudflare account and add a domain to review our security benefits.


Cloudflare optimizes the delivery of website resources for your visitors. Cloudflare’s data centers serve your website’s static resources and ask your origin web server for dynamic content. Cloudflare’s global network provides a faster route from your site visitors to our data centers than would be available to a visitor directly requesting your site. Even with Cloudflare between your website and your visitors, resource requests arrive to your visitor sooner.


Cloudflare’s globally distributed anycast network routes visitor requests to the nearest Cloudflare data center.  Cloudflare distributed DNS responds to website visitors with Cloudflare IP addresses for traffic you proxy to Cloudflare.  This also provides security by hiding the specific IP address of your origin web server.

Cloudflare-proxied domains share IP addresses from a pool that belongs to the Cloudflare network. As a result, Cloudflare does not offer dedicated or exclusive IP addresses. To reduce the number of Cloudflare IPs that your domain shares with other Cloudflare customer domains, upgrade to a Business or Enterprise plan and upload a Custom SSL certificate.

Also, our flat-rate pricing structure provides predictability and reliability in your CDN and DDoS bandwidth expenses.

Related Resources

  • What is Cloudflare?
  • Get started with Cloudflare

Configuration of Cloudflare


  • Browser with JavaScript enabled (such as Google Chrome, Firefox, or Safari)
  • A domain with admin access to change the nameservers for which the CDN needs to be set-up.

Configuration for Cloudflare

Firstly we need to go to the Cloudflare website. This can be done by opening https://www.cloudflare.com in the browser and registering an account.

We can register a new account by hitting the “Sign Up” button and inputting details such as email ID and password.

This will redirect us to an initial setup page which will ask for the domain details.

After this step, Cloudflare fetches all the DNS records associated with the domain and presents them to the user for review.

It then asks for the plan type. We can choose one of the 4 options, each of which provides its respective features. For testing purposes we choose the free tier and hit the “Confirm plan” button.

Next, it asks us to change the domain’s nameservers to Cloudflare’s nameservers; once that is done, the setup is complete.

Now the page redirects to your domain dashboard, from which you can control everything, including the “Analytics”, “Crypto”, “Firewall”, “Access”, “Speed”, and “Caching” options.

Node.js installation on Ubuntu 16.04

What is Node.js

Node.js is a JavaScript platform for general-purpose programming that allows users to build network applications quickly. By leveraging JavaScript on both the front- and back-end, development can be more consistent and designed within the same system.

To Install the Distro-Stable Version for Ubuntu

Ubuntu 16.04 contains a version of Node.js in its default repositories that can be used to easily provide a consistent experience across multiple systems. 

Firstly we’ll update our local package index so that we have access to the most recent package listings. And then we’ll install Node.js:

$ sudo apt-get update
$ sudo apt-get install nodejs

To install npm

If the package in the repositories suits your needs, this is all you need to do to get set up with Node.js. In most cases, we’ll also want to install npm, the Node.js package manager:

$ sudo apt-get install npm

Alternative methods 

To Install Using a PPA

An alternative that can get you a more recent version of Node.js is to add a PPA (personal package archive) maintained by NodeSource.

$ cd ~
$ curl -sL https://deb.nodesource.com/setup_8.x -o nodesource_setup.sh

To inspect the contents of this script, open it with nano:

$ nano nodesource_setup.sh

And run the script under sudo

$ sudo bash nodesource_setup.sh

The PPA will be added to your configuration and your local package cache will be updated automatically. After running the setup script from nodesource, now we  can install the Node.js package.

$ sudo apt-get install nodejs

To check which version of Node.js

$ nodejs -v

To check the npm version

$ npm -v 

To install the build-essential package:

$ sudo apt-get install build-essential

How To Install Using NVM

An alternative to installing Node.js through apt is to use a specially designed tool called nvm, which stands for “Node.js version manager”.

To start off, we’ll need to get the software packages from our Ubuntu repositories that will allow us to build source packages. The nvm script will leverage these tools to build the necessary components:

$ sudo apt-get update
$ sudo apt-get install build-essential libssl-dev

Once the prerequisite packages are installed, we can pull down the nvm installation script from the project’s GitHub page. We can download it with curl:

$ curl -sL https://raw.githubusercontent.com/creationix/nvm/v0.33.8/install.sh -o install_nvm.sh

Inspect the installation script with nano:

$ nano install_nvm.sh

Run the script with bash:

$ bash install_nvm.sh

It will install the software into a subdirectory of the home directory at ~/.nvm. It will also add the necessary lines to ~/.profile, which we load into the current session:

$ source ~/.profile
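For reference, this sketch is equivalent to the lines the nvm installer appends to ~/.profile (the real script also sets up bash completion):

```shell
# Equivalent of the profile lines the nvm installer adds
# (sketch; the real lines also load bash completion):
export NVM_DIR="$HOME/.nvm"
if [ -s "$NVM_DIR/nvm.sh" ]; then
  . "$NVM_DIR/nvm.sh"   # load the nvm function into the current shell
fi
echo "NVM_DIR is set to $NVM_DIR"
```

Sourcing ~/.profile simply re-runs these lines in the current shell, which is why nvm becomes available without opening a new terminal.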

Now, to find out the versions of Node.js that are available for installation

$ nvm ls-remote

To install a specific version, such as the 10.16.0 LTS release:

 $ nvm install 10.16.0

To use the version we just downloaded, type:

$ nvm use 10.16.0

We can have npm install packages to the Node.js project’s ./node_modules directory by using the normal format. For example, for the express module:

$ npm install express

Removing Node.js

To remove Node.js installed via apt, we simply use the apt-get remove command:

$ sudo apt-get remove nodejs

Installing a Firewall on Ubuntu 16.04

About Firewall

A firewall is a network security system that monitors and controls incoming and outgoing network traffic based on predetermined security rules. A firewall typically establishes a barrier between a trusted internal network and untrusted external network, such as the Internet.

About UFW

UFW, or Uncomplicated Firewall, is an interface to iptables that is geared towards simplifying the process of configuring a firewall. While iptables is a solid and flexible tool, it can be difficult for beginners to learn how to use it to properly configure a firewall.


We can use an Ubuntu 14.04 or 16.04 server with a sudo non-root user.

Installation of UFW

UFW is installed by default on Ubuntu. If it has not been installed, we can install it by using the following command:

$ sudo apt-get install ufw

Here, ufw has already been installed.

Checking Application list

We can use the following command to check the applications available in UFW:

$ sudo ufw app list

Checking Status and Rules

By using the below command we can check the status of ufw

$ sudo ufw status verbose

We can also check the rules with the below command:

$ sudo ufw status numbered


Allowing SSH connections:

We have to set some rules for incoming connections before enabling the UFW firewall. This helps our server respond to these requests.

To configure your server to allow incoming SSH connections, you can use this command:

$ sudo ufw allow ssh

We can write the equivalent rule by specifying the port number instead of the service name:

$ sudo ufw allow 22

Enable UFW

To enable the UFW firewall, we can use the below command:

$ sudo ufw enable

We will receive a warning that says the command may disrupt existing SSH connections. Since we already set up a firewall rule that allows SSH connections, it should be fine to continue. Respond to the prompt with y.


The firewall is now active.

Allowing other connections:

 $ sudo ufw allow http

The equivalent command using the port number:

 $ sudo ufw allow 80

Specific IP Address

If we want to allow connections from a specific IP address, we can use the below command, substituting the address to allow (203.0.113.4 is an example):

 $ sudo ufw allow from 203.0.113.4

Deny connections:

We need to create deny rules for any services or IP addresses that we don’t want to allow connections for.

For example, to deny HTTP connections, we could use this command:

 $ sudo ufw deny http

Disable UFW

We can disable the UFW firewall by using the following command:

 $ sudo ufw disable

We can also reset the UFW firewall by using the below command:

$  sudo ufw reset

LAMP Stack installation in Ubuntu 16.04

What is LAMP 

A LAMP Stack is a set of open-source software that can be used to create websites and web applications. LAMP is an acronym, and these stacks typically consist of the Linux operating system, the Apache HTTP Server, the MySQL relational database management system, and the PHP programming language.


A non-root user account with sudo privileges set up on your server.

Install Apache and Allow in Firewall

The Apache web server is among the most popular web servers in the world. It’s well-documented, and has been in wide use for much of the history of the web, which makes it a great default choice for hosting a website. 

$ sudo apt-get update
$ sudo apt-get install apache2

Set Global ServerName to Suppress Syntax Warnings

Next, we will add a single line to the /etc/apache2/apache2.conf file to suppress a warning message. While harmless, if you do not set ServerName globally, you will receive the following warning when checking your Apache configuration for syntax errors:

$ sudo apache2ctl configtest

Opening up the main configuration file with our text editor:

$ sudo nano /etc/apache2/apache2.conf

Inside, at the bottom of the file, add a ServerName directive pointing to your server’s public IP address:

ServerName your_server_IP

Next, checking for syntax errors by typing:

$ sudo apache2ctl configtest

Restart Apache to implement changes:

$ sudo systemctl restart apache2

Adjust the Firewall to Allow Web Traffic

Next, assuming that you have followed the initial server setup instructions to enable the UFW firewall, make sure that your firewall allows HTTP and HTTPS traffic. You can make sure that UFW has an application profile for Apache like so:

$ sudo ufw app info "Apache Full"

In a browser, we put the server’s IP address or URL to see the default Apache page.


How To Find your Server’s Public IP Address

$ sudo apt-get install curl
$ curl http://icanhazip.com

Install MySQL

Now that we have our web server up and running, it is time to install MySQL. MySQL is a database management system. Basically, it will organize and provide access to databases where our site can store information.

Again, we can use apt to acquire and install our software. 

$ sudo apt-get install mysql-server

When the installation is complete, we want to run a simple security script that will remove some dangerous defaults and lock down access to our database system a little bit. Start the interactive script by running:

$ mysql_secure_installation

For each prompt, answer y for yes, or anything else to continue without enabling that option.

Install PHP

PHP is the component of our setup that will process code to display dynamic content. It can run scripts, connect to our MySQL databases to get information, and hand the processed content over to our web server to display.

We can once again leverage the apt system to install our components. We’re going to include some helper packages as well, so that PHP code can run under the Apache server and talk to our MySQL database:

$ sudo apt-get install php libapache2-mod-php php-mcrypt php-mysql

To open the dir.conf file in a text editor

$ sudo nano /etc/apache2/mods-enabled/dir.conf

We want to move the PHP index file, index.php, to the first position after the DirectoryIndex specification, so Apache prefers it. Save and close the file when finished, then restart Apache:
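After the edit, the block in dir.conf would read roughly as follows (a sketch; the exact list of trailing entries may differ on your system):

```apacheconf
<IfModule mod_dir.c>
    # index.php moved to the front so Apache serves it before index.html
    DirectoryIndex index.php index.html index.cgi index.pl index.xhtml index.htm
</IfModule>
```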


$ sudo systemctl restart apache2

We can also check on the status of the apache2 service using systemctl:

$ sudo systemctl status apache2

Test PHP Processing on your Web Server

In order to test that our system is configured properly for PHP, we can create a very basic PHP script.

We will call this script info.php. In order for Apache to find the file and serve it correctly, it must be saved to a very specific directory, which is called the “web root”. On Ubuntu 16.04, this directory is located at /var/www/html/.

$ sudo nano /var/www/html/info.php

This will open a blank file. We put the following text, which is valid PHP code, inside the file:

<?php
phpinfo();

Save and close the file when finished.

Now we can test whether our web server can correctly display content generated by a PHP script. 

The address we visit is the server’s public IP address or domain followed by /info.php:

http://your_server_IP/info.php


To remove this file

$ sudo rm /var/www/html/info.php

Nginx installation on Ubuntu 16.04

What is Nginx 

NGINX is a free, open-source, high-performance HTTP server and reverse proxy, as well as an IMAP/POP3 proxy server. NGINX is known for its high performance, stability, rich feature set, simple configuration, and low resource consumption. 

NGINX scales in all directions: from the smallest VPS all the way up to large clusters of servers.



Step 1: Install Nginx

Nginx is available in Ubuntu’s default repositories, so the installation is rather straightforward.

Firstly we’ll update our local package index so that we have access to the most recent package listings. And then we’ll install Nginx:

$ sudo apt-get update
$ sudo apt-get install nginx

Firewall Configuration

To list the application configurations that ufw knows how to work with, type:

$ sudo ufw app list

To enable the Nginx profile, we type:

$ sudo ufw allow 'Nginx HTTP'

Changes can be verified by typing:

$ sudo ufw status

Check Web Server Status

$ sudo systemctl status nginx

Check Web Server

We put the server name or IP in the browser to confirm the default Nginx page loads.


Installing Apache on Ubuntu16.04

What is Apache ?

The Apache HTTP server is the most widely-used web server in the world. It provides many powerful features including dynamically loadable modules, robust media support, and extensive integration with other popular software.


Apache is available in Ubuntu’s default repositories, so the installation is rather straightforward.

Step 1: Install Apache

Firstly we’ll update our local package index so that we have access to the most recent package listings. And then we’ll install Apache:

$ sudo apt-get update
$ sudo apt-get install apache2

Firewall Configuration

To list the application configurations that ufw knows how to work with, type:

$ sudo ufw app list

To enable the Apache profile, we type:

$ sudo ufw allow 'Apache Full'

Changes can be verified by typing:

$ sudo ufw status

Check Web Server Status

$ sudo systemctl status apache2

Check Web Server

We put the server name or IP in the browser to confirm the default Apache page loads.



Step 1: Create the directory that will hold all the files of your website, naming it “htdocs”.

$ sudo mkdir -p /var/www/domain.com/htdocs/

Step 2: Once created, we also have to grant ownership of that folder to your username.

 $ sudo chown -R $USER:$USER /var/www/domain.com/htdocs/

Step 3: You may also need to change the permissions of the /var/www/ directory so your server’s users can read and write files in it. Use this command:

$ sudo chmod -R 755 /var/www

Step 4: Add an “index.html” test file to your web directory.

$ sudo nano /var/www/domain.com/htdocs/index.html 

Step 5: It is time for the tricky part: editing the Apache virtual hosts file.

$ sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/domain.com
$ sudo nano /etc/apache2/sites-available/domain.com
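Inside, a minimal virtual host for the new document root might look like this sketch (domain.com stands in for your own domain, as in the steps above):

```apacheconf
<VirtualHost *:80>
    ServerName domain.com
    ServerAlias www.domain.com
    ServerAdmin webmaster@localhost
    # Point Apache at the htdocs directory created in Step 1
    DocumentRoot /var/www/domain.com/htdocs
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
```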

Final step: restart Apache.

$ sudo service apache2 restart 
$ sudo service apache2 reload 


Now, in the browser, we enter the IP.


What is Prometheus?

Prometheus is an open source monitoring tool which helps in collecting metrics from a server and stores them in a time-series database.

By default Prometheus collects and stores metrics about itself. But we can expand it by installing exporters.

What are Exporters?

Exporters are extra plugins which help in exposing information on infrastructure, databases and web servers, as well as messaging systems, APIs, and more.


  1. We should have an Ubuntu server of version 16.04, where sudo commands can be used.
  2. UFW should be installed on the server.
  3. Nginx should be installed and running.

Creating Service Users

First we shall create two user accounts, prometheus and node_exporter, for security purposes. This helps in creating ownership for the files and directories. Use the --no-create-home and --shell /bin/false options so that these users can’t log into the server. We use the commands-

sudo useradd --no-create-home --shell /bin/false prometheus
sudo useradd --no-create-home --shell /bin/false node_exporter
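The --shell /bin/false option works because /bin/false does nothing and immediately returns a failure status, so any login attempt for these accounts exits at once:

```shell
# /bin/false exits with status 1 without doing anything,
# which is what makes it a safe "no login" shell.
status=0
/bin/false || status=$?
echo "exit status: ${status}"   # prints: exit status: 1
```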

We shall create a directory in /etc to store Prometheus configuration files and a directory in /var/lib to store Prometheus data. We use the command-

sudo mkdir /etc/prometheus

sudo mkdir /var/lib/prometheus

We put the user and group ownership to the Prometheus user using the command-

sudo chown prometheus:prometheus /etc/prometheus

sudo chown prometheus:prometheus /var/lib/prometheus

Downloading Prometheus

Let us download and unpack the stable version of Prometheus, using the download URL from the project’s releases page.


To download we use the following command-

cd ~                                     

curl -LO https://github.com/prometheus/prometheus/releases/download/v2.10.0/prometheus-2.10.0.linux-amd64.tar.gz

Now we use sha256sum to generate the checksum of the downloaded file.

sha256sum prometheus-2.10.0.linux-amd64.tar.gz

If the checksum of the downloaded file does not match the checksum displayed on the Prometheus website, remove the file and download it again.
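The comparison can also be done automatically with sha256sum --check, which reads a “checksum  filename” line and reports OK or FAILED. A sketch with a sample file standing in for the real tarball (in practice, EXPECTED would be the value published on the Prometheus download page):

```shell
# Verify a file against an expected checksum automatically.
# The sample file stands in for the downloaded tarball.
printf 'sample data\n' > /tmp/sample.tar.gz
EXPECTED=$(sha256sum /tmp/sample.tar.gz | awk '{print $1}')
# Two spaces between checksum and filename, per the coreutils format.
echo "${EXPECTED}  /tmp/sample.tar.gz" | sha256sum --check -   # prints: /tmp/sample.tar.gz: OK
```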

We shall unpack the downloaded file using the command-

tar xvf prometheus-2.10.0.linux-amd64.tar.gz

This will create a directory called prometheus-2.10.0.linux-amd64 containing two binary files (prometheus and promtool), consoles and console_libraries directories containing the web interface files, a license, a notice, and several example files.

We copy the two binaries to the /usr/local/bin directory, using the command-

sudo cp prometheus-2.10.0.linux-amd64/prometheus /usr/local/bin/
sudo cp prometheus-2.10.0.linux-amd64/promtool /usr/local/bin/

We put the user and group ownership to the Prometheus user using the command-

sudo chown prometheus:prometheus /usr/local/bin/prometheus
sudo chown prometheus:prometheus /usr/local/bin/promtool

We copy the console and console_libraries directories to /etc/prometheus, using the command-

sudo cp -r prometheus-2.10.0.linux-amd64/consoles /etc/prometheus
sudo cp -r prometheus-2.10.0.linux-amd64/console_libraries /etc/prometheus

We put the user and group ownership to the Prometheus user using the command-

sudo chown -R prometheus:prometheus /etc/prometheus/consoles
sudo chown -R prometheus:prometheus /etc/prometheus/console_libraries

We use the -R flag to set the ownership on the files present in that directory.

We now remove the leftover files present in the home directory, using the command-

rm -rf prometheus-2.10.0.linux-amd64.tar.gz prometheus-2.10.0.linux-amd64

Configuring Prometheus

In the /etc/prometheus directory, use a text editor to set up a configuration file named prometheus.yml. For now, this file will contain just enough information to run Prometheus for the first time.

sudo nano /etc/prometheus/prometheus.yml

In the global settings, define the default interval for scraping metrics. We put it as 15 seconds-

global:
  scrape_interval: 15s

Scrape Interval – Tells Prometheus to collect metrics from its exporters every 15 seconds, which is long enough for most exporters.

Now we add Prometheus itself to the list of exporters, so that it scrapes itself, with the scrape_configs directive-

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']

Prometheus uses the job_name to label exporters in queries and on graphs.

Prometheus exports important data about itself that you can use for monitoring performance and debugging. We have overridden the global scrape_interval directive from 15 seconds to 5 seconds for more frequent updates.

Prometheus uses the static_configs and targets directives to determine where exporters are running. Since this particular exporter is running on the same server as Prometheus itself, we can use localhost instead of an IP address along with the default port, 9090.
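Assembled, the minimal prometheus.yml from this section reads:

```yaml
global:
  scrape_interval: 15s      # default scrape interval for all exporters

scrape_configs:
  - job_name: 'prometheus'  # label used in queries and graphs
    scrape_interval: 5s     # overrides the global interval
    static_configs:
      - targets: ['localhost:9090']
```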

We shall save and exit the text editor.

We put the user and group ownership of this configuration file to the Prometheus user using the command-

sudo chown prometheus:prometheus /etc/prometheus/prometheus.yml

Running Prometheus

We will start Prometheus using the user Prometheus, providing the path to both the configuration file and the data directory, using the command-

sudo -u prometheus /usr/local/bin/prometheus \
    --config.file /etc/prometheus/prometheus.yml \
    --storage.tsdb.path /var/lib/prometheus/ \
    --web.console.templates=/etc/prometheus/consoles \
    --web.console.libraries=/etc/prometheus/console_libraries

Press CTRL+C to exit.

We will open a new systemd service file using the command-

sudo nano /etc/systemd/system/prometheus.service

The service file tells systemd to run Prometheus as the prometheus user, with the configuration file located at /etc/prometheus/prometheus.yml, and to store its data in the /var/lib/prometheus directory.

We copy the following code into the configuration file.

[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus \
    --config.file /etc/prometheus/prometheus.yml \
    --storage.tsdb.path /var/lib/prometheus/ \
    --web.console.templates=/etc/prometheus/consoles \
    --web.console.libraries=/etc/prometheus/console_libraries

[Install]
WantedBy=multi-user.target

We will save and exit the text editor.

We will reload systemd using the command-

sudo systemctl daemon-reload

We can now start Prometheus using the command-

sudo systemctl start prometheus

To make sure Prometheus is running, we check the service’s status using the command-

sudo systemctl status prometheus

If the service state is active, then Prometheus is running successfully.

Now press Q to exit the status command.

We will enable the service to start on boot using the following command-

sudo systemctl enable prometheus

Downloading Node Exporter

Let us download and unpack the stable version of Node Exporter, using the download URL from the project’s releases page.


To download we use the following command-

cd ~

curl -LO https://github.com/prometheus/node_exporter/releases/download/v0.18.1/node_exporter-0.18.1.linux-amd64.tar.gz

Now we use sha256sum to generate the checksum of the downloaded file.

sha256sum node_exporter-0.18.1.linux-amd64.tar.gz

If the checksum of the downloaded file does not match the checksum displayed on the Prometheus website, remove the file and download it again.

We shall unpack the downloaded file using the command-

tar xvf node_exporter-0.18.1.linux-amd64.tar.gz

This will create a directory called node_exporter-0.18.1.linux-amd64 containing a binary file named node_exporter, a license, and a notice.

We copy the binary to the /usr/local/bin directory, using the command-

sudo cp node_exporter-0.18.1.linux-amd64/node_exporter /usr/local/bin

We put the user and group ownership to the node exporter user using the command-

sudo chown node_exporter:node_exporter /usr/local/bin/node_exporter

We now remove the leftover files present in the home directory, using the command-

rm -rf node_exporter-0.18.1.linux-amd64.tar.gz node_exporter-0.18.1.linux-amd64

Running Node Exporter

We will open a new systemd service file using the command-

sudo nano /etc/systemd/system/node_exporter.service

This service file tells your system to run Node Exporter as the node_exporter user with the default set of collectors enabled.

We copy the following code into the configuration file.


[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target
We will save and exit the text editor.

We will reload systemd using the command-

sudo systemctl daemon-reload

We can now run Node Exporter using the command-

sudo systemctl start node_exporter

To make sure Node Exporter is running, we check the service’s status using the command-

sudo systemctl status node_exporter

If the service state is active, then Node Exporter is running successfully.

Now press Q to exit the status command.

We will enable the service to start on boot using the following command-

sudo systemctl enable node_exporter

Configuring Prometheus to Scrape Node Exporter

We will open the configuration file, using the command-

sudo nano /etc/prometheus/prometheus.yml

At the end of the scrape_configs block, add a new entry called node_exporter:


  - job_name: 'node_exporter'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9100']

Because this exporter is also running on the same server as Prometheus itself, we can use localhost instead of an IP address again along with Node Exporter’s default port, 9100.

We will save and exit the text editor.

We shall restart Prometheus to put the changes into effect, using the command-

sudo systemctl restart prometheus

To make sure Prometheus is running, we check the service’s status using the command-

sudo systemctl status prometheus

If the service state is active, then Prometheus is running successfully.

Securing Prometheus

We start by installing apache2-utils, which will give us access to the htpasswd utility for generating password files, using the command-

sudo apt-get update
sudo apt-get install apache2-utils

We shall now create a user and password for basic authentication. We should not forget the password, as we will need it to access Prometheus through the web server. Here admin is an example username; substitute your own. We use the command-

sudo htpasswd -c /etc/nginx/.htpasswd admin

The result of this command is a newly-created file called .htpasswd, located in the /etc/nginx directory, containing the username and a hashed version of the password you entered.
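The .htpasswd format can also be produced by hand, which is useful for scripting; a minimal sketch using openssl (the username admin, the password secret, and the file name htpasswd.example are all placeholders):

```shell
# Generate an htpasswd-style line without the htpasswd utility.
# openssl's -apr1 option produces the Apache MD5 format that
# Nginx's auth_basic_user_file accepts.
printf 'admin:%s\n' "$(openssl passwd -apr1 secret)" > htpasswd.example
cat htpasswd.example
```

The resulting line has the shape admin:$apr1$...; appending more such lines adds more users.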

Now we will make a Prometheus-specific copy of the default Nginx configuration file so that we can revert back to the defaults later if we run into a problem, using the command-

sudo cp /etc/nginx/sites-available/default /etc/nginx/sites-available/prometheus

We shall open a new configuration file, using the command-

sudo nano /etc/nginx/sites-available/prometheus

Locate the location / block under the server block. It should look like the code below-


    location / {

        try_files $uri $uri/ =404;

    }

As we will be forwarding all traffic to Prometheus, we will replace the try_files directive with the following content-


    location / {

        auth_basic "Prometheus server authentication";

        auth_basic_user_file /etc/nginx/.htpasswd;

        proxy_pass http://localhost:9090;

        proxy_http_version 1.1;

        proxy_set_header Upgrade $http_upgrade;

        proxy_set_header Connection 'upgrade';

        proxy_set_header Host $host;

        proxy_cache_bypass $http_upgrade;

    }

We will save and exit the text editor.
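For orientation, after the edit the server block as a whole might look roughly like this (a sketch; the listen and server_name lines keep their defaults from the copied file):

```nginx
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    server_name _;

    location / {
        auth_basic "Prometheus server authentication";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass http://localhost:9090;
        # ... remaining proxy_http_version, proxy_set_header, and
        # proxy_cache_bypass directives as shown above
    }
}
```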

Now, we shall deactivate the default Nginx configuration file by removing the link to it in the /etc/nginx/sites-enabled directory, and activate the new configuration file by creating a link to it, using the command-

sudo rm /etc/nginx/sites-enabled/default

sudo ln -s /etc/nginx/sites-available/prometheus /etc/nginx/sites-enabled/

Before restarting Nginx, we will check the configuration for errors, using the command-

sudo nginx -t

If the output is OK then there are no errors.

We shall reload Nginx to put the changes into effect, using the command-

sudo systemctl reload nginx

To make sure Nginx is running, we check the service’s status using the command-

sudo systemctl status nginx

If the service state is active, then Nginx is running successfully.

Testing Prometheus

We shall go to our web server and use the URL to access the Prometheus Web page-


We shall enter our username and password.

This is the web page of Prometheus, where we can use queries to access the metrics stored in its time series database.

Docker and Dockerfiles

What is Docker?

Docker is open source software that helps in creating, deploying, and running applications in containers. A container provides the same environment throughout the Software Development Life Cycle.

What is Container?

A container is similar to a virtual machine, but it is more lightweight: instead of bundling a full operating system, it shares the kernel of the host machine. A container runs the applications or microservices of a given task efficiently.


We should have an Ubuntu 16.04 server with a sudo-enabled user. We should also have a firewall installed.



Installing Docker

Usually when we download Docker installation package from the Ubuntu repository, we might get an outdated version. To prevent this we download Docker from the main Docker repository itself.

First, in order to ensure that the downloads are valid, we add the GPG key for the official Docker repository to our system, by using the command-

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

If we get an output stating OK, we can move ahead with the installation process.

We add the package to APT sources using the command-

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

We now update the package database using the command-

sudo apt-get update

We should make sure that we are downloading from the main Docker repository and not Ubuntu’s. We use the command-

apt-cache policy docker-ce

We finally will install Docker using the command-

sudo apt-get install -y docker-ce

Now, to check if it is running, we use the command-

sudo systemctl status docker

Working with Docker images

We do know that containers run from Docker images. So we will try to pull an image from the Docker Hub. Before that we will check whether we can access and download images from Docker Hub. We use the command-

sudo docker run hello-world

If we get the “Hello from Docker!” message, then Docker is working correctly.

We can also search for images using the Docker command-

sudo docker search ubuntu

Now we shall pull an image called ubuntu from the Docker Hub (image names are lowercase), using the command-

sudo docker pull ubuntu

Now the ubuntu image is in our server.

We can run a container using the ubuntu image, using the command-

sudo docker run ubuntu

To see the images which have been downloaded in our server or computer we can use the command-

sudo docker images

We have two images-

  1. hello-world
  2. ubuntu

Working with Docker Containers

We can interact with the container. Now let us run a container using the image Ubuntu, using the command-

sudo docker run -ti --rm --net=host ubuntu /bin/bash

Here -ti gives an interactive shell inside the container, --rm removes the container as soon as we exit it, and --net=host makes the container share the host’s network.

Now we would be inside the container. We will try to install Node.js inside the container.

First let us update using the command-

 apt-get update

We do not need to use sudo here as we are the root user.

Now let us install Node.js using the command-

apt-get install -y nodejs

We have installed Node.js inside the container. We can see this by checking the version using the command-

node -v

If the node command is not found, try nodejs -v; on some Ubuntu releases the apt package installs the binary as nodejs.

To exit the container we can use exit, or CTRL+p followed by CTRL+q to detach from it without stopping it.

Working with Volumes

Docker can store files on the host machine and connect them to the container, so that the files are not lost even when the container stops. This is done with the help of volumes.

We will try to mount a volume containing an HTML file and map it into the container. The image we will pull is httpd. We use the command-

sudo docker pull httpd:2.4.39-alpine

We can also check if the image has been pulled from the Docker Hub using the command-

sudo docker images

We shall now run the image using the command-

sudo docker run --name my-apache -p 80:80 httpd:2.4.39-alpine

Here --name gives a name to the container, in this case my-apache.

Here -p maps a host port to a container port; in this case both are port 80.

We can check if port 80 is working using the url-


We should see Apache’s default “It works!” page.

Now we shall try to display our html file in this webserver.

We first make a directory in the terminal using the command-

mkdir sample_dir

We move into the directory and create a file using the commands-

cd sample_dir

sudo nano index.html

After creating the HTML page, we save the file and exit the text editor.
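As an example, a minimal index.html can also be created non-interactively (the page content here is a placeholder; any HTML works):

```shell
# Write a minimal placeholder page to serve from the volume.
cat > index.html <<'EOF'
<html>
  <body>
    <h1>Hello from a Docker volume!</h1>
  </body>
</html>
EOF
# Confirm the file was written.
grep '<h1>' index.html
```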

We can use pwd to find out the location of the file as we need it to be mapped to the location of the container.

We can use Docker Hub for reference: an image’s Docker Hub page usually documents where its container stores its files. For httpd, that location is /usr/local/apache2/htdocs/. We use the command-

docker run -dit --name my-apache-app -p 8080:80 -v "$PWD":/usr/local/apache2/htdocs/ httpd:2.4

Here -v creates the volume: we map the location of the html file into the container, so that even if the container stops, the file remains without any changes.

Now we can again go back to our webserver using the url-


We can see that our html page is being displayed on the server.

Working with Dockerfiles

A Dockerfile is a text document that contains the commands needed to build a custom image.

We shall pull alpine from Docker Hub and install git using the help of Dockerfiles.

We shall pull alpine from Docker Hub using the command-

sudo docker pull alpine:3.9.4

We can also check if the image has been pulled from the Docker Hub using the command-

sudo docker images

Now we shall get inside a container using the alpine image to check if git is working, using the command-

sudo docker run -ti --rm alpine:3.9.4 /bin/sh

Here we can see that git, nano, curl, and vim have not been installed.

Now we will install all of these with the help of a Dockerfile. We will exit the container and create a Dockerfile using the command-

touch dockerfile

A Dockerfile would have been made. We will get inside the Dockerfile using the command-

sudo nano dockerfile

We shall enter the following lines (the maintainer name is a placeholder; substitute your own)-

FROM alpine:3.9.4
MAINTAINER your-name
RUN apk update && apk add git nano curl vim

Here we are taking the image alpine:3.9.4 and modifying it by adding curl, vim, and git. We even add the maintainer. We will save and exit the file.
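Equivalently, the Dockerfile can be written non-interactively; a sketch assuming the packages mentioned in this section (git, nano, curl, vim) and a placeholder maintainer name:

```shell
# Create the Dockerfile without opening an editor.
# alpine uses apk as its package manager; the maintainer is a placeholder.
cat > dockerfile <<'EOF'
FROM alpine:3.9.4
MAINTAINER your-name
RUN apk update && apk add git nano curl vim
EOF
# Show the result.
cat dockerfile
```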

We will build the image by using the command-

sudo docker build --network host -t "tag-name" .

Here -t is the tag we give to the image to distinguish it from alpine:3.9.4; the "." at the end tells Docker to use the current folder as the build context.

We can see all the steps have been executed. So we have built our image.

We will get inside the container to check if git is working, using the command-

sudo docker run -ti --net=host "tag-name"