Elasticsearch Cluster on Ubuntu 14.04


Elasticsearch is a popular open-source search server used for real-time distributed search and analysis of data. When used for anything other than development, Elasticsearch should be deployed across multiple servers as a cluster for the best performance, stability, and scalability.


Demonstration:

OMegha Platform.

Image – Ubuntu-14.04

Prerequisites:

You must have at least three Ubuntu 14.04 servers to complete this tutorial, because an Elasticsearch cluster should have a minimum of three master-eligible nodes. If you want dedicated master and data nodes, you will need at least three servers for your master nodes plus additional servers for your data nodes.

Install Java 8:

Elasticsearch requires Java, so we will install that now. We will install a recent version of Oracle Java 8 because that is what Elasticsearch recommends. It should, however, work fine with OpenJDK if you decide to go that route.

Complete this step on all of your Elastic Search servers.

Add the Oracle Java PPA to apt:

$ sudo add-apt-repository -y ppa:webupd8team/java

Update your apt package database:

$ sudo apt-get update

Install the latest stable version of Oracle Java 8 with this command (and accept the license agreement that pops up):

$ sudo apt-get -y install oracle-java8-installer

Be sure to repeat this step on all of your Elastic Search servers.

Now that Java 8 is installed, let’s install Elastic Search.

Install Elastic Search:

Elasticsearch can be downloaded directly from elastic.co in zip, tar.gz, deb, or rpm packages. For Ubuntu, it's best to use the deb (Debian) package, which will install everything you need to run Elasticsearch.

$ wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.2.deb

Then install it in the usual Ubuntu way with the dpkg command like this:

$ sudo dpkg -i elasticsearch-1.7.2.deb

This results in Elasticsearch being installed in /usr/share/elasticsearch/, with its configuration files placed in /etc/elasticsearch and its init script added in /etc/init.d/elasticsearch. To have Elasticsearch start and stop automatically with the server, add its init script to the default runlevels:

$ sudo update-rc.d elasticsearch defaults

Be sure to repeat these steps on all of your Elastic Search servers.

Elastic Search is now installed but it needs to be configured before you can use it.

Configure Elasticsearch Cluster

Now it’s time to edit the Elastic search configuration. Complete these steps on all of your Elastic search servers.

Open the Elastic search configuration file for editing:

$ sudo vi /etc/elasticsearch/elasticsearch.yml

Set Cluster Name:

Next, set the name of your cluster, which will allow your Elastic search nodes to join and form the cluster. You will want to use a descriptive name that is unique (within your network).

Find the line that specifies cluster.name, uncomment it, and replace its value with your desired cluster name. In this tutorial, we will name our cluster "elasticsearch_cluster":

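For reference, the relevant line in elasticsearch.yml would then look roughly like this (the cluster name is just the example used in this tutorial):

cluster.name: elasticsearch_cluster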

Set Node Name:

Next, we will set the name of each node. This should be a descriptive name that is unique within the cluster.

Find the line that specifies node.name, uncomment it, and replace its value with your desired node name. In this tutorial, we will set each node name to the hostname of the server by using the ${HOSTNAME} environment variable:

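The line would then look roughly like this, letting Elasticsearch pick up the server's hostname:

node.name: ${HOSTNAME}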

For Master Node:

For a master node, set node.master to true and node.data to false.

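A dedicated master node would therefore carry these two lines (a sketch matching the description above):

node.master: true
node.data: false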

For Data Node:

For a data node, set node.master to false and node.data to true.

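A dedicated data node carries the opposite values:

node.master: false
node.data: true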

Network Host:

Set the network host to 0.0.0.0 so that Elasticsearch listens on all interfaces.

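The corresponding line in elasticsearch.yml:

network.host: 0.0.0.0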

Set Discovery Hosts:

Next, you will need to configure an initial list of nodes that will be contacted to discover and form a cluster. This is necessary in a unicast network.

Find the line that specifies discovery.zen.ping.unicast.hosts and uncomment it.

For example, if you have three servers node01, node02, and node03 with respective VPN IP addresses of 10.0.0.1, 10.0.0.2, and 10.0.0.3, you could use this line:

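Using the example servers above, the line would look roughly like this:

discovery.zen.ping.unicast.hosts: ["10.0.0.1", "10.0.0.2", "10.0.0.3"]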

Save and exit elasticsearch.yml.

Your servers are now configured to form a basic Elastic search cluster. There are more settings that you will want to update, but we’ll get to those after we verify that the cluster is working.


Start Elastic search:

Now start Elastic search:

$ sudo service elasticsearch restart

Then run this command to start Elastic search on boot up:

$ sudo update-rc.d elasticsearch defaults 95 10

Be sure to repeat these steps (Configure Elastic search) on all of your Elastic search servers.

Testing:

By now, Elasticsearch should be running on port 9200. You can test it with curl, the command-line URL transfer tool, and a simple GET request like this:

$ curl -X GET 'http://localhost:9200'

You should see a response similar to the following:

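The exact values will differ on your servers, but for Elasticsearch 1.7.2 the response is a small JSON document roughly along these lines (abridged):

{
  "status" : 200,
  "name" : "node01",
  "cluster_name" : "elasticsearch_cluster",
  "version" : {
    "number" : "1.7.2",
    ...
  },
  "tagline" : "You Know, for Search"
}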

If you see a response similar to the one above, Elasticsearch is working properly. If not, make sure that you have followed the installation instructions correctly and that you have allowed some time for Elasticsearch to fully start.

Check Cluster State:

If everything was configured correctly, your Elastic search cluster should be up and running. Before moving on, let’s verify that it’s working properly. You can do so by querying Elastic search from any of the Elastic search nodes.

From any of your Elastic search servers, run this command to print the state of the cluster:

$ curl -XGET 'http://localhost:9200/_cluster/state?pretty'

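The full output is long; the important parts to look for are your cluster name and an entry for every node, roughly like this (heavily abridged):

{
  "cluster_name" : "elasticsearch_cluster",
  "master_node" : "...",
  "nodes" : {
    "..." : { "name" : "node01", ... },
    "..." : { "name" : "node02", ... },
    "..." : { "name" : "node03", ... }
  },
  ...
}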

If you see output that is similar to this, your Elastic search cluster is running! If any of your nodes are missing, review the configuration for the node(s) in question before moving on.

Next, we’ll go over some configuration settings that you should consider for your Elastic search cluster.

Enable Memory Locking:

Elastic recommends avoiding swapping of the Elasticsearch process at all costs, due to its negative effects on performance and stability. One way to avoid excessive swapping is to configure Elasticsearch to lock the memory that it needs.

Complete this step on all of your Elastic search servers.

Edit the Elastic search configuration:

$ sudo vi /etc/elasticsearch/elasticsearch.yml

Find the line that specifies bootstrap.mlockall and uncomment it:

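The uncommented line should read:

bootstrap.mlockall: true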

Save and exit.

Now restart Elastic search to put the changes into place:

$ sudo service elasticsearch restart

Cluster Health:

This API can be used to see general info on the cluster and gauge its health:

$ curl -XGET 'localhost:9200/_cluster/health?pretty'

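A healthy three-node cluster returns something along these lines (the values will vary with your setup):

{
  "cluster_name" : "elasticsearch_cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0
}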

Cluster State:

This API can be used to see a detailed status report on your entire cluster. You can filter the results by specifying parameters in the call URL.

$ curl -XGET 'localhost:9200/_cluster/state?pretty'


Conclusion:

Your Elastic search cluster should be running in a healthy state, and configured with some basic optimizations.

 

Node.js Installation


Node.js is a cross-platform environment and library for running JavaScript applications which are used to create networking and server-side applications.

It is used to develop I/O intensive web applications like video streaming sites, single-page applications, and other web applications.

Prerequisites

To follow this blog, you will need:

  • One Ubuntu 16.04 server set up by following this initial server setup, including a sudo non-root user.

Node.js Installation In Ubuntu 16.04

Step-1 Update the Package List

Before installing Node.js on the Ubuntu system, update the package lists from all available repositories.

sudo apt-get update

Step-2  Install Node.js

Run the command below to install the standard Node.js package:

sudo apt-get install nodejs

Step-3 Install NPM

Most Node.js workflows also require npm, the Node.js package manager. Install it with the command below:

sudo apt-get install npm

In order for some npm packages to work (those that require compiling code from source, for example), you will need to install the build-essential package:

sudo apt-get install build-essential

Installation Check

After installing Node.js and npm, check that the installation is correct by typing the following commands:

Node.js Installation Check

nodejs --version

NPM Installation Check

npm --version
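As an additional quick check, you can run a one-line script (note that the Ubuntu repository package installs the binary as nodejs rather than node):

nodejs -e "console.log('Node.js is working')"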

Remove Node.js

To remove Node.js from the Ubuntu system completely, use the following commands:

Remove Package without Configuration Files

This command removes Node.js but keeps the configuration files, so they can be reused the next time Node.js is installed.

sudo apt-get remove nodejs

Remove Package with Configuration Files

If you don't want to keep the configuration files, use the following command.

sudo apt-get purge nodejs

Finally, Remove Unused Packages

To remove the unused packages that were installed with Node.js, run the following command:

sudo apt-get autoremove 

Installing Open Source Hosting Control Panel (ZesleCP) on Ubuntu 16.04


Zesle Control Panel

Secure Web Control Panel for your needs…


Zesle is a popular open-source control panel that anyone can download and install. It is very simple and can be installed with just one command.

System Requirements:

  • Ubuntu 14/16
  • 1Ghz CPU
  • 512MB RAM
  • 10+GB DISK

Zesle is simple and very user friendly. Using Zesle you'll be able to do the tasks below:

  • Add multiple domains without hassle;
  • Add multiple sub domains;
  • Install WordPress easily with one-click app;
  • Install free Let’s Encrypt SSL certificates with ease;
  • Configure basic PHP settings;
  • Manage Email accounts;
  • Access phpMyAdmin.

and much more. Let’s see how to install Zesle in your hosting.

Step 1: It’s super-easy to install Zesle. Run the following command with Root privilege.

$ cd /home && curl -o latest -L http://zeslecp.com/release/latest && sh latest

Step 2: The installation will begin, and partway through it will ask for the admin's email address. Provide your email ID and press Enter.

Step 3: You will see the below screen at the end of the installation.


Step 4: Once the installation is complete, Zesle will show you the temporary password and the login URL.

Step 5: The login URL will be your IP address followed by the port number (2087 is the default). For example: https://11.11.11.111:2087.

Step 6: Just enter this in your browser and you’ll get the login screen.


Step 7: Use root as your username

Step 8: Copy and paste the temporary root password provided. Once you enter the correct password, the control panel will open. All the options mentioned above will be available on the left side of the UI.


Step 9: In the Dashboard, you can create your account and install WordPress on your domain using "One Click Apps".

Step 10: That concludes the installation steps for the free Linux web hosting control panel ZesleCP.

 

Installation of Open Project Management System on Ubuntu 16.04



OpenProject is a web-based management system for location-independent team collaboration, released under the GNU GPL v3 license. It is project management software that provides task management, team collaboration, scrum, etc. OpenProject is written in Ruby on Rails and AngularJS. In this tutorial, I will show you how to install and configure the OpenProject management system on Ubuntu 16.04. The tool can be installed manually or by using packages from the repository. For this guide, we will install OpenProject from the repository.

Prerequisites

  •  Ubuntu 16.04.
  •  Good Internet Connectivity.
  •  Root Privileges.

What we will do

  • Update and Upgrade System.
  • Install Open Project Management System.
  • Configure the Open Project System.
  • Testing.

Step 1: Update and Upgrade System

Before installing the Open Project on to the Ubuntu system, update all available repositories and upgrade the Ubuntu system.

Run the following commands.

$ sudo apt update
$ sudo apt upgrade -y

Step 2: Install Open Project Management System

Download the open project key and add it to the system.

$ sudo wget -qO- https://dl.packager.io/srv/opf/openproject-ce/key | sudo apt-key add -

Then download the OpenProject repository definition for Ubuntu 16.04 into the ‘/etc/apt/sources.list.d’ directory.

$ sudo wget -O /etc/apt/sources.list.d/openproject-ce.list \
  https://dl.packager.io/srv/opf/openproject-ce/stable/7/installer/ubuntu/16.04.repo

Now update the Ubuntu repository and install open project using the apt command as shown below.

$ sudo apt update
$ sudo apt-get install openproject -y

Step 3: Configure the Open Project System

Run the OpenProject configuration command. A graphical configuration screen will appear.

$  sudo openproject configure


Select ‘Install and configure MySQL server locally’ and click ‘OK’. It will automatically install the MySQL server on the system and create the database for the OpenProject installation.

For the web server configuration, choose ‘Install apache2 server’ and click ‘OK’. It will automatically install the Apache2 web server and configure the virtual host for the OpenProject application.


Now type the domain name for your Open project application, and choose ‘OK’.

Next comes the SSL configuration: choose ‘yes’ if you have purchased SSL certificates, and ‘no’ if you don’t.


Skip the Subversion support, GitHub support, and SMTP configuration (if not needed).

For the memcached installation, choose ‘Install’ and select ‘OK’ for better OpenProject performance.


Finally, installation and configuration of all the packages required for Open Project installation should happen automatically.

Step 4: Testing

Check whether the Open Project service is up and running.

$  sudo service openproject status

Now run the openproject web service using the following command.

$  sudo openproject run web

Now open your web browser and enter your floating IP in the address bar to access the system.


Now click the ‘Sign in’ button to log in to the admin dashboard, initially using ‘admin’ as the user and ‘admin’ as the password; you can change it later.

Finally, the installation and configuration for Open Project on Ubuntu 16.04 has been completed successfully.

 

 

 

Apache Virtual Hosts on Ubuntu 14.04


The Apache web server is the most popular way of serving web content on the internet. It accounts for more than half of all active websites on the internet and is extremely powerful and flexible.

Apache breaks its functionality and components into individual units that can be customized and configured independently. The basic unit that describes an individual site or domain is called a Virtual Host.


These designations allow the administrator to use one server to host multiple domains or sites off of a single interface or IP by using a matching mechanism. This is relevant to anyone looking to host more than one site off of a single VPS.

In this document, we will walk you through how to set up Apache virtual hosts on an Ubuntu 14.04 VPS. During this process, you’ll learn how to serve different content to different visitors depending on which domains they are requesting.

Prerequisites

  • Before you begin this tutorial, you should create a non root user.
  • You will also need to have Apache installed in order to work through these steps.

Demonstration:

OMegha platform.

Image – Ubuntu-14.04

Let's get started.

First, we need to update the package list.

$ sudo apt-get update


Install Apache

$ sudo apt-get install apache2


For the purposes of this document, my configuration will make a virtual host for infra.com and another for infra1.com

Step 1: Create the Directory Structure

Our document root will be set to individual directories under the /var/www directory. We will create a directory here for both of the virtual hosts we plan on making.

Within each of these directories, we will create a public_html folder that will hold our actual files. This gives us some flexibility in our hosting.

$ sudo mkdir -p /var/www/infra.com/public_html

$ sudo mkdir -p /var/www/infra1.com/public_html

These directory names are the domain names that we want to serve from our VPS.

Step 2: Grant Permissions

Changing the Ownership

$ sudo chown -R $USER:$USER /var/www/infra.com/public_html

$ sudo chown -R $USER:$USER /var/www/infra1.com/public_html


We should also modify our permissions a little bit to ensure that read access is permitted to the general web directory and all of the files and folders it contains:

$ sudo chmod -R 755 /var/www

Step 3: Create Demo Pages for Each Virtual Host

We have to create an index.html file for each site.

Let’s start with infra.com. We can open up an index.html file in our editor by typing

$ sudo vi /var/www/infra.com/public_html/index.html

In this file, create a simple HTML document that indicates which site it is connected to. My file looks like this:

<html>
  <head>
    <title>Welcome to infra.com!</title>
  </head>
  <body>
    <h1>Success!  The infra.com virtual host is working!</h1>
  </body>
</html>

Save and close the file when you are finished.

We can copy this file to use as the basis for our second site by typing

$ cp /var/www/infra.com/public_html/index.html /var/www/infra1.com/public_html/index.html

Then we can open the file and modify the relevant pieces of information

$ sudo vi /var/www/infra1.com/public_html/index.html

<html>
  <head>
    <title>Welcome to infra1.com!</title>
  </head>
  <body>
    <h1>Success!  The infra1.com virtual host is working!</h1>
  </body>
</html>

Save and close the file.

Step 4: Create New Virtual Host Files

Virtual host files are the files that specify the actual configuration of our virtual hosts and dictate how the Apache web server will respond to various domain requests.

Apache comes with a default virtual host file called 000-default.conf, which we can copy to create the virtual host file for our first domain.

Creating First Virtual Host File

Start by copying the file for the first domain

$ sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/infra.com.conf

Open the new file in your editor with root privileges

$ sudo vi /etc/apache2/sites-available/infra.com.conf

Our virtual host file should look like this:

<VirtualHost *:80>
    ServerAdmin admin@infra.com
    ServerName infra.com
    ServerAlias www.infra.com
    DocumentRoot /var/www/infra.com/public_html
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

Save and close the file.

Copy first Virtual Host and Customize for Second Domain

Now that we have our first virtual host file established, we can create our second one by copying that file and adjusting it as needed.

Start by copying

$ sudo cp /etc/apache2/sites-available/infra.com.conf /etc/apache2/sites-available/infra1.com.conf

Open the new file with root privileges

$ sudo vi /etc/apache2/sites-available/infra1.com.conf

You now need to modify all of the pieces of information to reference your second domain. When you are finished, it may look something like this

<VirtualHost *:80>
    ServerAdmin admin@infra1.com
    ServerName infra1.com
    ServerAlias www.infra1.com
    DocumentRoot /var/www/infra1.com/public_html
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

Save and close the file.

Step 5: Enable the New Virtual Host Files

The virtual host files we created need to be enabled.

We can use the a2ensite tool to enable each of our sites

$ sudo a2ensite infra.com.conf

$ sudo a2ensite infra1.com.conf


Restart the apache server.

$ sudo service apache2 restart

Step 6: Setup Local Hosts File

$ sudo vi /etc/hosts

The details that you need to add are the public IP address of your VPS server followed by the domain you want to use to reach that VPS.

127.0.0.1 localhost

***.***.***.*** infra.com

***.***.***.*** infra1.com

Save and close the file.

This will intercept any requests for infra.com and infra1.com made on our computer and send them to our server at ***.***.***.***.

Step 7: Test Your Results

Now that you have your virtual hosts configured, you can test your setup easily by going to the domains that you configured in your web browser

http://infra.com


You should see the success page you created for infra.com.

Likewise, you can visit your second page:

http://infra1.com


You will see the file you created for your second site

Step 8: Conclusion

If both of these sites work well, you’ve successfully configured two virtual hosts on the same server.

If you need to access this long term, consider purchasing a domain name for each site you need and setting it up to point to your VPS server.

Centralize Logs from Node.js Applications


Prerequisites

  • Installation of Node.js and NPM
  • Installation of Fluentd

Modifying the Config File

Next, please configure Fluentd to use the forward Input plugin as its data source.

$ sudo vi /etc/td-agent/td-agent.conf

The Fluentd daemon should listen on a TCP port.

A simple configuration looks like the following:

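A minimal sketch of that configuration, assuming the default forward port 24224 and simply echoing matched events to Fluentd's own log:

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match fluentd.test.**>
  @type stdout
</match>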

Restart your agent once these lines are in place.

$ sudo service td-agent restart

fluent-logger-node

Install fluent-logger-node

$ npm install fluent-logger

Now use npm to install your dependencies locally:

$ npm install

Send an event record to Fluentd

index.js

This is the simplest web app.

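A minimal sketch of such an app, using the fluent-logger package installed above (the tag fluentd.test, the record fields, and port 4000 are just example values):

// index.js
var http = require('http');
var logger = require('fluent-logger');

// Point the logger at the local Fluentd forward input (default port 24224)
logger.configure('fluentd.test', {
  host: 'localhost',
  port: 24224,
  timeout: 3.0
});

http.createServer(function (req, res) {
  // Emit one event record per request
  logger.emit('follow', { from: 'userA', to: 'userB' });
  res.end('Hello from Node.js!\n');
}).listen(4000);

console.log('Listening on http://localhost:4000');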

Run the app and go to http://localhost:4000 in your browser. This will send the logs to Fluentd.

$ node index.js


The logs should be output to /var/log/td-agent/td-agent.log  

Store Logs into MongoDB

Fluentd does 3 things:

  1. It continuously “tails” the access log file.
  2. It parses the incoming log entries into meaningful fields (such as ip,path, etc.) and buffers them.
  3. It writes the buffered data to MongoDB periodically.

Configuration         

Fluentd’s config file

$ sudo vi /etc/td-agent/td-agent.conf

 The Fluentd configuration file should look like this:

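A rough sketch of such a configuration, assuming an Apache access log and a local MongoDB instance (the paths, database, and collection names are example values):

<source>
  @type tail
  path /var/log/apache2/access.log
  pos_file /var/log/td-agent/apache2.access.log.pos
  tag mongo.apache.access
  format apache2
</source>

<match mongo.**>
  @type mongo
  host 127.0.0.1
  port 27017
  database apache
  collection access
  flush_interval 10s
</match>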

Restart your agent once these lines are in place.

$ sudo service td-agent restart

Then, access MongoDB and see the stored data.

$ mongo

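Assuming the example database and collection names above, the stored records can be inspected like this:

> use apache
> db.access.find()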

Fluentd + MongoDB makes real-time log collection simple, easy, and robust.

Installation of MongoDB on Ubuntu 16.04


MongoDB is a free and open-source NoSQL document database used commonly in modern web applications.


MongoDB is one of several database types to arise in the mid-2000s under the NoSQL banner. Instead of using tables and rows as in relational databases, MongoDB is built on an architecture of collections and documents. Documents comprise sets of key-value pairs and are the basic unit of data in MongoDB. Collections contain sets of documents and function as the equivalent of relational database tables.

Like other NoSQL databases, MongoDB supports dynamic schema design, allowing the documents in a collection to have different fields and structures. The database uses a document storage and data interchange format called BSON, which provides a binary representation of JSON-like documents. Automatic sharding enables data in a collection to be distributed across multiple systems for horizontal scalability as data volumes increase.
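For illustration, a single document is just a JSON-like set of key-value pairs; a hypothetical entry in a users collection might look like this:

{ "name" : "Alice", "age" : 30, "roles" : [ "admin", "ops" ] }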

This blog will help you set up MongoDB on your server for a production application environment.

Prerequisites

To follow this blog, you will need:

  • One Ubuntu 16.04 server set up by following this initial server setup, including a sudo non-root user.

Adding the MongoDB Repository

MongoDB is already included in the Ubuntu package repositories, but the official MongoDB repository provides the most up-to-date version and is the recommended way of installing the software. In this step, we will add this official repository to our server.

Ubuntu ensures the authenticity of software packages by verifying that they are signed with GPG keys, so we first have to import the key for the official MongoDB repository.

$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927

After successfully importing the key, you will see:

gpg: Total number processed: 1
gpg:        imported: 1    (RSA:  1)

Next, we have to add the MongoDB repository details so apt will know where to download the packages from.

Issue the following command to create a list file for MongoDB.

$ echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.2.list

After adding the repository details, we need to update the packages list.

$ sudo apt-get update

Installing and Verifying MongoDB

Now we can install the MongoDB package itself.

$ sudo apt-get install -y mongodb-org

This command will install several packages containing the latest stable version of MongoDB, along with helpful management tools for the MongoDB server.

Next, start MongoDB with systemctl.

$ sudo systemctl start mongod

You can also use systemctl to check that the service has started properly.

$ sudo systemctl status mongod
$ mongo

mongodb.service - High-performance, schema-free document-oriented database
Loaded: loaded (/etc/systemd/system/mongodb.service; enabled; vendor preset: enabled)
Main PID: 4093 (mongod)
Tasks: 16 (limit: 512)
Memory: 47.1M
CPU: 1.224s
CGroup: /system.slice/mongodb.service
└─4093 /usr/bin/mongod --quiet --config /etc/mongod.conf

The last step is to enable automatically starting MongoDB when the system starts.

$ sudo systemctl enable mongod

The MongoDB server is now configured and running, and you can manage the MongoDB service using the systemctl command (e.g. sudo systemctl stop mongod, sudo systemctl start mongod).

Installing Asterisk on Ubuntu 16.04



Asterisk is a software implementation of a telephone private branch exchange (PBX). It allows telephones interfaced with a variety of hardware technologies to make calls to one another, and to connect to telephony services, such as the public switched telephone network (PSTN) and voice over Internet Protocol (VoIP) services. Its name comes from the asterisk symbol “*”.


Some of the many features of Asterisk include:

  • The Asterisk software includes many features available in commercial and proprietary PBX systems: voice mail, conference calling, interactive voice response and automatic call distribution.
  • Users can create new functionality by writing dial plan scripts in several of Asterisk’s own extensions languages, by adding custom loadable modules written in C, or by implementing Asterisk Gateway Interface (AGI) programs using any programming language capable of communicating via the standard streams system (stdin and stdout) or by network TCP sockets.
  • Asterisk supports several standard voice over IP protocols, including the Session Initiation Protocol (SIP), the Media Gateway Control Protocol (MGCP), and H.323.
  • Asterisk supports most SIP telephones, acting both as registrar and back-to-back user agent.
  • By supporting a variety of traditional and VoIP telephony services, Asterisk allows deployers to build telephone systems, or migrate existing systems to new technologies.


 Install Asterisk from Source

After logging in to your Ubuntu server as a user, issue the following command to switch to the root user.

$ sudo su 

Now you are root, but you need to set the password with the following command.

# passwd

The next step is to install the initial dependencies for Asterisk.

# apt-get install build-essential wget libssl-dev libncurses5-dev libnewt-dev libxml2-dev linux-headers-$(uname -r) libsqlite3-dev uuid-dev git subversion

Installing Asterisk

Now that we are root and the dependencies are satisfied, we can move to the /usr/src/ directory and download the latest Asterisk version there.

# cd /usr/src
# wget http://downloads.asterisk.org/pub/telephony/asterisk/asterisk-15-current.tar.gz

Next we unpack it.

# tar zxvf asterisk-15-current.tar.gz

Now we need to enter the newly unpacked directory.

# cd asterisk-15*

Before we actually compile the Asterisk code, we need ‘pjproject’, as Asterisk 15 introduces support for PJSIP. So we will compile it first:

# git clone git://github.com/asterisk/pjproject pjproject
# cd pjproject
# ./configure --prefix=/usr --enable-shared --disable-sound --disable-resample --disable-video --disable-opencore-amr CFLAGS='-O2 -DNDEBUG'
# make dep
# make && make install
# ldconfig
# ldconfig -p |grep pj

Configuring Asterisk

Now we move on to configuring and compiling the Asterisk code.

# cd ..
# contrib/scripts/get_mp3_source.sh
# contrib/scripts/install_prereq install

This will install mp3 tones and satisfy additional dependencies, which might take some time and will ask you for your country code. The following command will compile and install Asterisk:

# ./configure && make menuselect && make && make install

When that is finished, to avoid writing hundreds of config files yourself, you will normally want to run this command, which creates the initial sample configuration for you:

# make samples

To install the startup script and enable Asterisk to start on every boot, run make config, followed by ldconfig:

# make config
# ldconfig

Now we can start asterisk for the first time and see if it actually works.

# /etc/init.d/asterisk start

Then we can enter the Asterisk console with this command:

# asterisk -rvvv

Now we need to take additional steps to make Asterisk run as the asterisk user. First we need to stop Asterisk.

# systemctl stop asterisk

Then we need to add group and user named asterisk.

# groupadd asterisk
# useradd -d /var/lib/asterisk -g asterisk asterisk

Asterisk needs to be configured to start as the user we just created. We can edit /etc/default/asterisk by hand, but it is more efficient to use the following two sed commands.

# sed -i 's/#AST_USER="asterisk"/AST_USER="asterisk"/g' /etc/default/asterisk
# sed -i 's/#AST_GROUP="asterisk"/AST_GROUP="asterisk"/g' /etc/default/asterisk

To run properly, the asterisk user needs to be given ownership of all essential Asterisk directories.

# chown -R asterisk:asterisk /var/spool/asterisk /var/run/asterisk /etc/asterisk /var/{lib,log,spool}/asterisk /usr/lib/asterisk

The asterisk.conf file also needs to be edited to uncomment the lines for runuser and rungroup:

# sed -i 's/;runuser = asterisk/runuser = asterisk/g' /etc/asterisk/asterisk.conf
# sed -i 's/;rungroup = asterisk/rungroup = asterisk/g' /etc/asterisk/asterisk.conf

When this is done, reboot the server so that Asterisk is brought up automatically by systemd, and then type asterisk -rvvv to enter the Asterisk console.

# asterisk -rvvv

 

Anomaly Detection (AWS)

Introduction :  Amazon CloudWatch Anomaly Detection applies machine-learning algorithms to continuously analyze system and application metrics, determine a normal baseline, and surface anomalies with minimal user intervention.

Anomaly Detection analyzes the historical values for the chosen metric, and looks for predictable patterns that repeat hourly, daily, or weekly. It then creates a best-fit model that will help you to better predict the future, and to more cleanly differentiate normal and problematic behavior

You can adjust and fine-tune the model as desired, and you can even use multiple models for the same CloudWatch metric.

Anomaly Detection in Cloudwatch

  • When we enable anomaly detection for a metric, CloudWatch applies statistical and machine learning algorithms.
  • The algorithms generate an anomaly detection model. The model generates a range of expected values that represent normal metric behavior.

We can use the model of expected values in two ways:

  1. Create anomaly detection alarms based on a metric’s expected value. These types of alarms don’t have a static threshold for determining alarm state.
  2. When viewing a graph of metric data, overlay the expected values onto the graph as a band. This makes it visually clear which values in the graph are out of the normal range.

In a graph with anomaly detection, the expected range of values is shown as a gray band. If the metric’s actual value goes beyond this band, it is shown as red during that time.

Anomaly detection algorithms account for the seasonality and trend changes of metrics. The seasonality changes could be hourly, daily, or weekly.

Examples :

How to create Anomaly detection

We can create our own models in CloudWatch if we have EC2 instances; a CLI sketch follows the console steps below.

  1. Go to your AWS account, type CloudWatch in the search bar, and open the CloudWatch console.
  2. In the navigation pane, choose Alarms, Create Alarm.
  3. Choose Select Metric and do one of the following:
    1. Choose the service namespace that contains the metric that you want. To narrow the choices, continue choosing options as they appear. When a list of metrics appears, select the check box next to the metric that you want.
    2. In the search box, enter the name of a metric, dimension, or resource ID and press Enter. Then choose one of the results and continue until a list of metrics appears. Select the check box next to the metric that you want.
  4. Choose the Graphed metrics tab.
    1. Under Statistic , choose one of the statistics or predefined percentiles, or specify a custom percentile.
    2. Under Period, choose the evaluation period for the alarm. When evaluating the alarm, each period is aggregated into one data point. For anomaly detection alarms, the value must be one minute or longer.
      You can also choose whether the y-axis legend appears on the left or right while you’re creating the alarm. This preference is used only while you’re creating the alarm.
    3. Choose Select metric.
      The Specify metric and conditions page appears, showing a graph and other information about the metric and statistic you have selected.
  5. Under Conditions, specify the following:
    1. Choose Anomaly detection.
      If the model for this metric and statistic already exists, CloudWatch displays the anomaly detection band in the sample graph at the top of the screen. If the model does not already exist, the model will be generated when you finish creating the alarm. It takes up to 15 minutes for the actual anomaly detection band generated by the model to appear in the graph. Before that, the band you see is an approximation of the anomaly detection band. To see the graph in a longer time frame, choose Edit at the top right of the page.
    2. For whenever metric is, specify whether the metric must be greater than, lower than, or outside (in either direction) the band to trigger the alarm.
    3. For Anomaly detection threshold, choose the number to use for the anomaly detection threshold. A higher number creates a thicker band of “normal” values that is more tolerant of metric changes, and a lower number creates a thinner band that will go to ALARM state with smaller metric deviations. The number does not have to be a whole number.
  6. Choose Next.
  7. Under Notification, select an SNS topic to notify when the alarm is in ALARM state, OK state, or INSUFFICIENT_DATA state.
    To have the alarm send multiple notifications for the same alarm state or for different alarm states, choose Add notification.
    To have the alarm not send notifications, choose Remove.
  8. To have the alarm perform EC2 actions, choose the appropriate button and choose the alarm state and action to perform.
  9. When finished, choose Next.
  10. Enter a name and description for the alarm. The name must contain only ASCII characters. Then choose Next.
  11. Under Preview and create, confirm that the information and conditions are what you want, then choose Create alarm.
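The same steps can also be scripted. A rough AWS CLI sketch (the instance ID, alarm name, period, and threshold are placeholder values, and the notification step is omitted):

$ aws cloudwatch put-anomaly-detector \
    --namespace AWS/EC2 \
    --metric-name CPUUtilization \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --stat Average

$ aws cloudwatch put-metric-alarm \
    --alarm-name cpu-anomaly-alarm \
    --comparison-operator GreaterThanUpperThreshold \
    --evaluation-periods 2 \
    --threshold-metric-id ad1 \
    --metrics '[{"Id":"m1","MetricStat":{"Metric":{"Namespace":"AWS/EC2","MetricName":"CPUUtilization","Dimensions":[{"Name":"InstanceId","Value":"i-0123456789abcdef0"}]},"Period":300,"Stat":"Average"}},{"Id":"ad1","Expression":"ANOMALY_DETECTION_BAND(m1, 2)"}]'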

Modifying  an Anomaly detection Model

If we already have an anomaly detection model, we can modify it as follows:

  1. Go to your AWS account, type CloudWatch in the search bar, and open the CloudWatch console.
  2. In the navigation pane, choose Alarms.
  3. Choose the name of the alarm. Use the search box to find the alarm if necessary.
  4. Choose View in metrics.
  5. In the lower part of the screen, choose Edit model.
  6. To exclude a time period from being used to produce the model, choose Add another time range to exclude from training. Then select or enter the days and times to exclude from training, and choose Apply.
  7. If the metric is sensitive to daylight savings time changes, select the appropriate time zone in the Metric timezone box.
  8. Choose Update.

Pricing Details for Anomaly Detection

  • Standard Resolution (60 sec) Alarms: 7.58 rs per alarm
  • High Resolution (10 sec) Alarms: 22.74 rs per alarm
  • Standard Resolution Anomaly Detection Alarms: 22.74 rs per alarm
  • High Resolution Anomaly Detection Alarms: 68.22 rs per alarm

AWS Inspector


Definition

AWS Inspector is a service provided by Amazon Web Services to inspect the instances available in a user's account. Inspector produces findings, summarized in a report, that identify possible attack vectors and vulnerabilities; this helps a user address security concerns with appropriate actions against each finding.

How to Use ?

This service can be utilized in two ways :

  • Manually installing the agent application on the instance, followed by inspection
  • Selecting the Install Agent option during target creation

Both approaches give the same end result. The user's initial step is to create an assessment target with the respective tags and values of the instances in the AWS account. The number of instances assessed will be reflected in your monthly billing.

The next step is creating your template, where you'll include rules packages such as Common Vulnerabilities and Exposures (CVE), Network Reachability, CIS Operating System Security Configurations, etc. Then choose the previously created assessment target for it; this service can also be automated with assessment events.

Once the template creation is done, initiate a run from the assessment template dashboard. The AWS Inspector service requires roughly an hour to produce a report.

Go to Assessment runs to check the status of the initiated process. When it's done you can download the report, which includes severity levels for better classification of security concerns, along with recommended solutions that help fix the loopholes open to attack.

Analysis and Consolidation

Go through the report to find the loopholes with the largest counts, grouped by severity: High, Medium, Low, and Informational. Prioritize the findings with High severity and follow the steps provided in the report accordingly.

Collect the findings in a sheet and start consolidating them to make the process simple and efficient.

Advantages

  • Simple and reliable
  • Reduces burden of Security Concerns
  • Classification increases ease of finding
  • Reduced Time consumption
  • Efficient use of the service
  • Includes all Aspects of Security Checks

Key Concepts

Potential risk to a machine's data and workflow is a major concern in today's world, and AWS Inspector addresses it through six key concepts:

  1. Assessment Target
  2. Assessment template
  3. Inspector agent
  4. Runs
  5. Rules and packages
  6. Findings Report

Inspector helps eliminate vulnerabilities through routine checks scheduled as the user prefers. Each concept plays a major role in the resulting findings; the workflow is quite similar to STIG reporting, but arguably better in comparison.

Usage Pricing

As per the AWS documentation, on average the pricing ranges up to $0.30 per instance assessment for all rules packages excluding Network Reachability. However, a free trial is available for the first 250 instance assessments.

The Network Reachability rules package has been added to the list recently and costs up to $0.15 per instance assessment. For detailed information regarding AWS Inspector pricing, check https://aws.amazon.com/inspector/pricing/.

Conclusion

At a reasonable price, Inspector provides an appreciable workflow for receiving a detailed report of findings, which can help a user apply all the needed patches efficiently. I hope this brings useful content; please share your opinion in the comments below.

How to Install & Secure MongoDB

MongoDB is a document-oriented database that is free and open-source. It is considered one of the most popular NoSQL database engines because it is scalable, powerful, reliable, and easy to use. In contrast to relational databases, MongoDB does not require a deep predefined schema before you can add data, since the schema can be altered at any time. As it follows the NoSQL concept, data rows are stored as JSON-like documents, which allows arbitrary data to be inserted.

Prerequisites:-

Ubuntu 16.04 server configured with a non-root sudo user.

Step 1: Adding the MongoDB Repository

$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 0C49F3730359A14518585931BC711F9BA15703C6

-> create a list file for MongoDB

$ echo "deb [ arch=amd64,arm64 ] http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.4.list

->Finally, we’ll update the packages list.

$ sudo apt-get update

Step 2 — Installing MongoDB

$ sudo apt-get install mongodb-org

-> Start MongoDB

$ sudo systemctl start mongod

->Check MongoDB Status

$ sudo systemctl status mongod

->Enable MongoDB

$ sudo systemctl enable mongod

Securing MongoDB:-

Step 1 — Adding an Administrative User

To add our user, we’ll connect to the Mongo shell:

$ mongo

-> To enter in admin

> use admin

->To create user

> db.createUser(
> {
> user: "Adminkapil",
> pwd: "Adminkapil'Adminkapil",
> roles: [ { role:"userAdminAnyDatabase", db:"admin" } ]
> }
> )

Step 2 — Enabling Authentication

Let’s open the configuration file:

$ sudo nano /etc/mongod.conf

In the #security section, we’ll remove the hash in front of security to enable the stanza. Then we’ll add the authorization setting. 
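After the edit, the security stanza in /etc/mongod.conf should look like this:

security:
  authorization: "enabled"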

Once we’ve saved and exited the file, we’ll restart the daemon:

$ sudo systemctl restart mongod

->Check status of MongoDB after changes

$ sudo systemctl status mongod

 Step 3 — Verifying that Unauthenticated Users are Restricted

First, let’s connect without credentials to verify that our actions are restricted:

$ mongo

We’ll test that our access is restricted with the show dbs command:

> show dbs

Let’s exit the shell to proceed:

> exit

Step 4 — Verifying the Administrative User’s Access

$ mongo -u Adminkapil -p --authenticationDatabase admin

we should see the available databases:

> show dbs

Configuring Remote Access:-

Step 1 — Enabling UFW:-

->To  check  ufw status

$ sudo ufw status

->To enable UFW

$ sudo ufw enable
$ sudo ufw allow from client_ip_address to any port 27017

we’ll run ufw status again:

$ sudo ufw status

Step 2 — Configuring a Public bindIP

To allow remote connections, we will add our host's publicly routable IP address to the mongod.conf file.

$sudo nano /etc/mongod.conf
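For reference, the net stanza would then look roughly like this, with your_server_ip replaced by the host's public IP:

net:
  port: 27017
  bindIp: 127.0.0.1,your_server_ip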

We’ll save and exit the file, then restart the daemon:

->Restart Mongodb

$ sudo systemctl restart mongod

-> Check status

$ sudo systemctl status mongod

Step 3 — Testing the Remote Connection

$ mongo -u Adminkapil -p --authenticationDatabase admin --host 172.29.236.40

Memcached installation in Ubuntu 16.04

What is Memcached?

Memcached is a free and open-source, high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load.

Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls, or page rendering.

Memcached is simple yet powerful. Its simple design promotes quick deployment, ease of development, and solves many problems facing large data caches. Its API is available for most popular languages.

The key features of Memcached 

  • It is open source.
  • Memcached server is a big hash table.
  • It significantly reduces the database load
  • It is perfectly efficient for websites with high database load.
  • It is distributed under the Berkeley Software Distribution (BSD) license.
  • It is a client-server application over TCP or UDP.

Memcached is not 

  • a persistent data store
  • a database
  • application-specific
  • a large object cache
  • fault-tolerant or highly available

Prerequisites

One Ubuntu 16.04 server

Installation

To install Memcached on Ubuntu we use apt

$sudo apt-get update
$sudo apt-get install memcached

We can also install libmemcached-tools, a package that provides several tools to work with your Memcached server:

$ sudo apt-get install libmemcached-tools

Securing Memcached Configuration Settings

To ensure that our Memcached instance is listening on the local interface 127.0.0.1, we will check the default setting in the configuration file located at /etc/memcached.conf.

$ sudo nano /etc/memcached.conf

To disable UDP (while leaving TCP unaffected), we’ll add the following option to the bottom of this file:
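The option that disables the UDP listener is:

-U 0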

Restart Memcached and verify:

$ sudo systemctl restart memcached
$ sudo netstat -plunt

Adding Authorized Users

To add authenticated users to your Memcached service, it is possible to use Simple Authentication and Security Layer (SASL), a framework that de-couples authentication procedures from application protocols. We will enable SASL within our Memcached configuration file and then move on to adding a user with authentication credentials.

First, we will add the -S parameter to /etc/memcached.conf.

$ sudo nano /etc/memcached.conf

At the bottom of the file, add the -S option to enable SASL.

And uncomment the -vv option, which provides verbose output.

Restart the Memcached service:

$ sudo systemctl restart memcached

Next, we can take a look at the logs to be sure that SASL support has been enabled:

$ sudo journalctl -u memcached

Adding an Authenticated User

Now we can download sasl2-bin

$ sudo apt-get install sasl2-bin

Next, we will create the directory and file that Memcached will check for its SASL configuration settings:

$ sudo mkdir -p /etc/sasl2
$ sudo nano /etc/sasl2/memcached.conf

Add the following to the SASL configuration file:
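A minimal sketch of that file (the sasldb path matches the database created in the next step):

mech_list: plain
sasldb_path: /etc/sasl2/memcached-sasldb2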

Now we will create a SASL database with our user credentials. We will use the saslpasswd2 command to make a new entry for our user in the database, using the -c option. Our user will be ankush here, but you can replace this name with your own user. Using the -f option, we will specify the path to our database, which will be the path we set in /etc/sasl2/memcached.conf:

$ sudo saslpasswd2 -a memcached -c -f /etc/sasl2/memcached-sasldb2 ankush

Finally, we will give the memcache user ownership over the SASL database:

$ sudo chown memcache:memcache /etc/sasl2/memcached-sasldb2

Restart the Memcached service:

$ sudo systemctl restart memcached

Run memcstat with the new credentials to confirm that authentication works:

$ memcstat --servers="127.0.0.1" --username=ankush --password=1234 

Redis installation in Ubuntu 16.04

What is Redis ?

Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker. It is written in ANSI C and licensed under BSD 3-Clause.

It supports data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, geospatial indexes with radius queries and streams. 

Redis has built-in replication, Lua scripting, LRU eviction, transactions and different levels of on-disk persistence, and provides high availability via Redis Sentinel and automatic partitioning with Redis Cluster.

The name Redis means REmote DIctionary Server.

Redis made popular the idea of a system that can be considered at the same time a store and a cache, using a design where data is always modified and read from the main computer memory, but also stored on disk in a format that is unsuitable for random access of data, but only to reconstruct the data back in memory once the system restarts.

Redis Advantages :

  1. Exceptionally fast − Redis is very fast and can perform about 110000 SETs per second, about 81000 GETs per second.
  2. Supports rich data types − Redis natively supports most of the datatypes that developers already know such as list, set, sorted set, and hashes. This makes it easy to solve a variety of problems as we know which problem can be handled better by which data type.
  3. Operations are atomic − All Redis operations are atomic, which ensures that if two clients concurrently access, Redis server will receive the updated value.
  4. Multi-utility tool − Redis is a multi-utility tool and can be used in a number of use cases such as caching, messaging-queues (Redis natively supports Publish/Subscribe), any short-lived data in your application, such as web application sessions, web page hit counts, etc.

Installation of Redis(Single Node Configuration)

Prerequisites 

  1. Ubuntu 14.04 or above (We are using ubuntu 18.04)
  2. A non root user with sudo privileges 

Installation

In order to get the latest version of Redis, we will use apt to install it from the official Ubuntu repositories.

$ sudo apt update
$ sudo apt install redis-server

Open the Redis configuration file with nano:

$ sudo nano /etc/redis/redis.conf

The supervised directive is set to no by default. Since we are running Ubuntu, which uses the systemd init system, change this to systemd:
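The edited line should read:

supervised systemd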

Restart

$ sudo systemctl restart redis.service

Testing Redis

$ sudo systemctl status redis

To test that Redis is functioning correctly

$ redis-cli
127.0.0.1:6379> ping
127.0.0.1:6379> set test "It's working!"
127.0.0.1:6379> get test
127.0.0.1:6379> exit

Binding to localhost

By default, Redis is only accessible from localhost. However, if you installed and configured Redis by following a different tutorial than this one, you might have updated the configuration file to allow connections from anywhere. This is not as secure as binding to localhost.

Opening the Redis configuration file for editing:

$ sudo nano /etc/redis/redis.conf

Locate the bind line and make sure it is uncommented (remove the # if it exists), so that Redis listens only on localhost.
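In recent Redis versions the uncommented line looks like this:

bind 127.0.0.1 ::1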

To restart the service 

$ sudo systemctl restart redis

To check that this change has gone into effect

$ sudo netstat -lnp | grep redis

Configuring a Redis Password

Configuring a Redis password enables one of its two built-in security features — the auth command, which requires clients to authenticate to access the database. The password is configured directly in Redis’s configuration file, /etc/redis/redis.conf, so open that file again with nano editor:

$ sudo nano /etc/redis/redis.conf

Remove the # (comment) from the # requirepass foobared line, and replace foobared with a strong password of your choice.

To restart 

$ sudo systemctl restart redis.service

Testing

It now asks for a password
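For example, after the restart a session looks roughly like this (using the password you set for requirepass):

$ redis-cli
127.0.0.1:6379> auth your_redis_password
127.0.0.1:6379> ping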

Configuration with PHP

Install the supporting PHP package for using Redis with PHP. This can be done by issuing the following command in the shell:

$ sudo apt install php-redis

Setting up Cloudflare account

Introduction

Cloudflare, Inc. is a U.S. company that provides content delivery network services, DDoS mitigation, Internet security and distributed domain name server services. Cloudflare’s services sit between the visitor and the Cloudflare user’s hosting provider, acting as a reverse proxy for websites. Cloudflare’s headquarters are in San Francisco, California, with additional offices in Lisbon, London, Singapore, Munich, San Jose, Champaign, Austin, New York and Washington, D.C.

Overview

More than just Content Delivery (CDN) services, customers rely on Cloudflare’s global network to enhance security, performance, and reliability of anything connected to the Internet.

Cloudflare is designed for easy setup. Anyone with a website and their own domain can use Cloudflare regardless of their platform choice. Cloudflare doesn’t require additional hardware, software, or changes to your code.


Security

Cloudflare stops malicious traffic before it reaches your origin web server. Cloudflare analyzes potential threats in visitor requests based on a number of characteristics:

  • visitor’s IP address,
  • resources requested,
  • request payload and frequency, and
  • customer-defined firewall rules.

Create your Cloudflare account and add a domain to review our security benefits.

Performance

Cloudflare optimizes the delivery of website resources for your visitors. Cloudflare’s data centers serve your website’s static resources and ask your origin web server for dynamic content. Cloudflare’s global network provides a faster route from your site visitors to our data centers than would be available to a visitor directly requesting your site. Even with Cloudflare between your website and your visitors, resource requests arrive to your visitor sooner.

Reliability

Cloudflare’s globally distributed anycast network routes visitor requests to the nearest Cloudflare data center.  Cloudflare distributed DNS responds to website visitors with Cloudflare IP addresses for traffic you proxy to Cloudflare.  This also provides security by hiding the specific IP address of your origin web server.

Cloudflare-proxied domains share IP addresses from a pool that belongs to the Cloudflare network. As a result, Cloudflare does not offer dedicated or exclusive IP addresses. To reduce the number of Cloudflare IPs that your domain shares with other Cloudflare customer domains, upgrade to a Business or Enterprise plan and upload a Custom SSL certificate.

Also, our flat-rate pricing structure provides predictability and reliability in your CDN and DDoS bandwidth expenses.

Related Resources

  • What is Cloudflare?
  • Get started with Cloudflare

Configuration of Cloudflare

Prerequisites 

  • Browser with Javascript Enabled(Such as Google Chrome, Firefox, Safari)
  • A domain with admin access to change the nameservers for which the CDN needs to be set-up.

Configuration for Cloudflare

First we need to go to the Cloudflare website. This can be done by opening https://www.cloudflare.com in the browser and registering an account.

We can register a new account by hitting the "Sign Up" button and entering details such as the email ID and password.

This will redirect us to an initial setup page which will ask for the domain details

After this step Cloudflare fetches all the DNS records associated with the domain and presents them to the user for setup.

After this it asks for the plan type. We can choose one of the four options, each providing its respective features. For testing purposes we choose the free tier and hit the "Confirm plan" button.

Next it asks us to change the domain's nameservers to Cloudflare's nameservers; after doing that the setup is complete.

Now the page redirects to your domain dashboard, and you can control everything from there, including the "Analytics", "Crypto", "Firewall", "Access", "Speed", "Caching", and other options.

Node.js installation on Ubuntu 16.04

What is Node.js

Node.js is a JavaScript platform for general-purpose programming that allows users to build network applications quickly. By leveraging JavaScript on both the front- and back-end, development can be more consistent and designed within the same system.

To Install the Distro-Stable Version for Ubuntu

Ubuntu 16.04 contains a version of Node.js in its default repositories that can be used to easily provide a consistent experience across multiple systems. 

First we'll update our local package index so that we have access to the most recent package listings, and then we'll install Node.js:

$sudo apt-get update
$sudo apt-get install nodejs

To install npm

If the package in the repositories suits your needs, this is all we need to do to get set up with Node.js. In most cases, we'll also want to install npm, which is the Node.js package manager.

$sudo apt-get install npm

Alternative methods 

To Install Using a PPA

An alternative that can get you a more recent version of Node.js is to add a PPA (personal package archive) maintained by NodeSource.

$ cd ~
$ curl -sL https://deb.nodesource.com/setup_8.x -o nodesource_setup.sh

To inspect the contents of this script with nano

$ nano nodesource_setup.sh

And run the script under sudo

$ sudo bash nodesource_setup.sh

The PPA will be added to your configuration and your local package cache will be updated automatically. After running the setup script from nodesource, now we  can install the Node.js package.

$ sudo apt-get install nodejs

To check which version of Node.js

$ nodejs -v

To check the npm version

$ npm -v 

To install the build-essential package:

$ sudo apt-get install build-essential

How To Install Using NVM

An alternative to installing Node.js through apt is to use a specially designed tool called nvm, which stands for “Node.js version manager”.

To start off, we’ll need to get the software packages from our Ubuntu repositories that will allow us to build source packages. The nvm script will leverage these tools to build the necessary components:

$ sudo apt-get update
$ sudo apt-get install build-essential libssl-dev

Once the prerequisite packages are installed, we can pull down the nvm installation script from the project’s GitHub page. We can download it with curl:

$ curl -sL https://raw.githubusercontent.com/creationix/nvm/v0.33.8/install.sh -o install_nvm.sh

To inspect the installation script with nano:

$ nano install_nvm.sh

Running the script with bash:

$ bash install_nvm.sh

It will install the software into a subdirectory of the home directory at ~/.nvm, and it will also add the necessary lines to ~/.profile. To load them into the current session, source the file:

$ source ~/.profile
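
To confirm that nvm is now available in the current shell, you can check its version:

$ nvm --version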

Now, to find out the versions of Node.js that are available for installation

$ nvm ls-remote

To install an LTS release (for example, 10.16.0):

 $ nvm install 10.16.0

To use the version we just installed, type:

$ nvm use 10.16.0

We can have npm install packages to the Node.js project’s ./node_modules directory using the normal npm syntax. For example, to install the express module:

$ npm install express
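
As an optional check, you can list the top-level packages installed in the current project:

$ npm ls --depth=0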

Removing Node.js

To remove a version of Node.js installed through apt, we simply use the apt-get remove command:

$ sudo apt-get remove nodejs
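
If a version was installed with nvm instead, it can be removed with nvm itself (assuming the 10.16.0 example above, and that it is not the currently active version):

$ nvm uninstall 10.16.0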

Firewall (UFW) installation on Ubuntu 16.04


About Firewall

A firewall is a network security system that monitors and controls incoming and outgoing network traffic based on predetermined security rules. A firewall typically establishes a barrier between a trusted internal network and untrusted external network, such as the Internet.

About UFW

UFW, or Uncomplicated Firewall, is an interface to iptables that is geared towards simplifying the process of configuring a firewall. While iptables is a solid and flexible tool, it can be difficult for beginners to learn how to use it to properly configure a firewall.

Prerequisites

We can use an Ubuntu 14.04 or 16.04 server with a sudo non-root user.

Installation of UFW

UFW is installed by default on Ubuntu. If it has not been installed, we can install it with the following command:

$ sudo apt-get install ufw

Here, ufw has already been installed.

Checking Application list

We can use the following command to check the application profiles available to UFW:


$ sudo ufw app list

Checking Status and Rules

By using the below command we can check the status of UFW:


$ sudo ufw status verbose

We can also list the rules in numbered format with the below command:

$ sudo ufw status numbered


Allowing SSH connections:

We have to set some rules for incoming connections before enabling the UFW firewall, so that the server can still respond to those requests (especially SSH) once the firewall is turned on.

To configure your server to allow incoming SSH connections, you can use this command:


$ sudo ufw allow ssh

We can write the equivalent rule by specifying the port instead of the service name:

$ sudo ufw allow 22

Enable UFW

To enable the UFW firewall we can use the below command:

$ sudo ufw enable

We will receive a warning that the command may disrupt existing SSH connections. Since we already set up a firewall rule that allows SSH connections, it is safe to continue; respond to the prompt with y.


The firewall is now active.

Allowing other connections:

To allow HTTP connections by service name:

$ sudo ufw allow http

Or the equivalent rule using the port number:

 $ sudo ufw allow 80
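
Similarly, to allow encrypted HTTPS traffic (the same rule can also be written with port 443):

$ sudo ufw allow https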

Specific IP Address

If we want to allow incoming connections to our server from a specific IP address, we can use the below command, substituting the address:

$ sudo ufw allow from your_ip_address
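
For example, to allow all traffic from 203.0.113.4 (an address from a reserved documentation range, used here purely as an illustration):

$ sudo ufw allow from 203.0.113.4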

Deny connections:

We need to create deny rules for any services or IP addresses that we do not want to allow connections for.

For example, to deny HTTP connections, we could use this command:

 $ sudo ufw deny http

Disable UFW

We can disable the UFW firewall by using the following command:

 $ sudo ufw disable

We can also reset the UFW firewall, which disables it and deletes all existing rules, by using the below command:

$ sudo ufw reset

LAMP Stack installation on Ubuntu 16.04

What is LAMP 

A LAMP Stack is a set of open-source software that can be used to create websites and web applications. LAMP is an acronym, and these stacks typically consist of the Linux operating system, the Apache HTTP Server, the MySQL relational database management system, and the PHP programming language.

Prerequisites

A non-root user account with sudo privileges set up on your server.

Install Apache and Allow in Firewall

The Apache web server is among the most popular web servers in the world. It’s well-documented, and has been in wide use for much of the history of the web, which makes it a great default choice for hosting a website. 

$ sudo apt-get update
$ sudo apt-get install apache2

Set Global ServerName to Suppress Syntax Warnings

Next, we will add a single line to the /etc/apache2/apache2.conf file to suppress a warning message. While harmless, if you do not set ServerName globally, you will receive the following warning when checking your Apache configuration for syntax errors:

$ sudo apache2ctl configtest

Open the main configuration file with a text editor:

$ sudo nano /etc/apache2/apache2.conf

Inside, at the bottom of the file, add a ServerName directive pointing to the server’s public IP address.
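
For example, assuming the server’s public IP address is 203.0.113.4 (replace it with your own), the added line would look like this:

ServerName 203.0.113.4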

Next, check for syntax errors by typing:

$ sudo apache2ctl configtest

Restart Apache to implement changes:

$ sudo systemctl restart apache2

Adjust the Firewall to Allow Web Traffic

Next, assuming that you have followed the initial server setup instructions to enable the UFW firewall, make sure that your firewall allows HTTP and HTTPS traffic. You can make sure that UFW has an application profile for Apache like so:

$ sudo ufw app info "Apache Full"
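
If the profile is listed, you can allow incoming traffic for it (this opens both port 80 and port 443):

$ sudo ufw allow in "Apache Full"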

In a browser, visit your server’s public IP address or domain name:

http://server_IP_address

How To Find your Server’s Public IP Address

$ sudo apt-get install curl
$ curl http://icanhazip.com

Install MySQL

Now that we have our web server up and running, it is time to install MySQL. MySQL is a database management system. Basically, it will organize and provide access to databases where our site can store information.

Again, we can use apt to acquire and install our software. 

$ sudo apt-get install mysql-server

When the installation is complete, we want to run a simple security script that will remove some dangerous defaults and lock down access to our database system a little bit. Start the interactive script by running:

$ mysql_secure_installation

For each prompt (for example, whether to enable the VALIDATE PASSWORD plugin), answer y for yes, or anything else to continue without enabling that option.
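
Afterwards, you can confirm that the MySQL service is running:

$ sudo systemctl status mysql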

Install PHP

PHP is the component of our setup that will process code to display dynamic content. It can run scripts, connect to our MySQL databases to get information, and hand the processed content over to our web server to display.

We can once again leverage the apt system to install our components. We’re going to include some helper packages as well, so that PHP code can run under the Apache server and talk to our MySQL database:

$ sudo apt-get install php libapache2-mod-php php-mcrypt php-mysql

To open the dir.conf file in a text editor

$ sudo nano /etc/apache2/mods-enabled/dir.conf

We want to move the PHP index file, index.php, to the first position after the DirectoryIndex specification so that Apache serves it in preference to index.html.
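
After the change, the directive should look something like the following (the exact list of file names may differ slightly on your system):

DirectoryIndex index.php index.html index.cgi index.pl index.xhtml index.htm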

Restarting Apache:

$ sudo systemctl restart apache2

We can also check on the status of the apache2 service using systemctl:

$ sudo systemctl status apache2

Test PHP Processing on your Web Server

In order to test that our system is configured properly for PHP, we can create a very basic PHP script.

We will call this script info.php. In order for Apache to find the file and serve it correctly, it must be saved to a very specific directory, which is called the “web root”

$ sudo nano /var/www/html/info.php

This will open a blank file. We put the following text, which is valid PHP code, inside the file:
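
A minimal test script (the conventional choice) simply calls the phpinfo() function, which prints information about the server’s PHP installation:

<?php
phpinfo();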

Now we can test whether our web server can correctly display content generated by a PHP script. 

The address to visit is:

http://server_IP_address/info.php

To remove this file after testing, since it exposes details about the server:

$ sudo rm /var/www/html/info.php

Nginx installation on Ubuntu 16.04

What is Nginx 

NGINX is a free, open-source, high-performance HTTP server and reverse proxy, as well as an IMAP/POP3 proxy server. NGINX is known for its high performance, stability, rich feature set, simple configuration, and low resource consumption. 

NGINX scales in all directions: from the smallest VPS all the way up to large clusters of servers.


Installation

Step 1: Install Nginx

Nginx is available in Ubuntu’s default repositories, so the installation is rather straightforward.

First we’ll update our local package index so that we have access to the most recent package listings, and then we’ll install Nginx:

$ sudo apt-get update
$ sudo apt-get install nginx

Firewall Configuration

To list the application configurations that ufw knows how to work with, type:

$ sudo ufw app list

To allow Nginx HTTP traffic, we type:

$ sudo ufw allow 'Nginx HTTP'

Changes can be verified by typing:

$ sudo ufw status

Check the Web Server Status

$ sudo systemctl status nginx

Test the Web Server in a Browser

We put the server’s domain name or IP address in the browser:

http://server_domain_or_IP
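
As an additional check from the server itself (assuming curl is installed), you can request the default page and inspect the response headers, which should identify nginx:

$ curl -I http://localhost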