Apache Virtual Hosts on Ubuntu 14.04


The Apache web server is the most popular way of serving web content on the internet. It accounts for more than half of all active websites on the internet and is extremely powerful and flexible.

Apache breaks its functionality and components into individual units that can be customized and configured independently. The basic unit that describes an individual site or domain is called a Virtual Host.


These designations allow the administrator to use one server to host multiple domains or sites off of a single interface or IP by using a matching mechanism. This is relevant to anyone looking to host more than one site off of a single VPS.

In this document, we will walk you through how to set up Apache virtual hosts on an Ubuntu 14.04 VPS. During this process, you’ll learn how to serve different content to different visitors depending on which domains they are requesting.


  • Before you begin this tutorial, you should create a non-root user with sudo privileges.
  • You will also need to have Apache installed in order to work through these steps.


This guide was tested on an Ubuntu 14.04 instance provisioned on the OMegha platform.

Let's get started.

First, we need to update the package list.

$ sudo apt-get update


Install Apache

$ sudo apt-get install apache2


For the purposes of this document, my configuration will make a virtual host for infra.com and another for infra1.com

Step 1: Create the Directory Structure

Our document root will be set to individual directories under the /var/www directory. We will create a directory here for both of the virtual hosts we plan on making.

Within each of these directories, we will create a public_html folder that will hold our actual files. This gives us some flexibility in our hosting.

$ sudo mkdir -p /var/www/infra.com/public_html

$ sudo mkdir -p /var/www/infra1.com/public_html

Replace infra.com and infra1.com with the domain names that you want to serve from your VPS.

Step 2: Grant Permissions

Changing the Ownership

$ sudo chown -R $USER:$USER /var/www/infra.com/public_html

$ sudo chown -R $USER:$USER /var/www/infra1.com/public_html


We should also modify our permissions a little to ensure that read access is permitted to the general web directory and all of the files and folders it contains.

$ sudo chmod -R 755 /var/www
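To see exactly what mode 755 grants, you can reproduce it on a scratch directory first (the /tmp path below is purely for illustration):

```shell
# 755 = rwx for the owner (7), r-x for group (5), r-x for others (5)
mkdir -p /tmp/perm-demo
chmod 755 /tmp/perm-demo
stat -c '%a %A' /tmp/perm-demo
# prints: 755 drwxr-xr-x
```

This is what lets Apache (running as a different user) read and traverse the directories while only your user can write to them.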

Step 3: Create Demo Pages for Each Virtual Host

We need to create an index.html file for each site.

Let’s start with infra.com. We can open up an index.html file in our editor by typing

$ sudo vi /var/www/infra.com/public_html/index.html

In this file, create a simple HTML document that indicates which site it is connected to. My file looks like this:



<html>
  <head>
    <title>Welcome to infra.com!</title>
  </head>
  <body>
    <h1>Success!  The infra.com virtual host is working!</h1>
  </body>
</html>



Save and close the file when you are finished.

We can copy this file to use as the basis for our second site by typing

$ cp /var/www/infra.com/public_html/index.html /var/www/infra1.com/public_html/index.html

Then we can open the file and modify the relevant pieces of information

$ sudo vi /var/www/infra1.com/public_html/index.html



<html>
  <head>
    <title>Welcome to infra1.com!</title>
  </head>
  <body>
    <h1>Success!  The infra1.com virtual host is working!</h1>
  </body>
</html>



Save and close the file.

Step 4: Create New Virtual Host Files

Virtual host files are the files that specify the actual configuration of our virtual hosts and dictate how the Apache web server will respond to various domain requests.

Apache comes with a default virtual host file called 000-default.conf; we can copy it to create the virtual host file for our first domain.

Creating First Virtual Host File

Start by copying the file for the first domain

$ sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/infra.com.conf

Open the new file in your editor with root privileges

$ sudo vi /etc/apache2/sites-available/infra.com.conf

Our virtual host file should look like this:

<VirtualHost *:80>

    ServerAdmin admin@infra.com

    ServerName infra.com

    ServerAlias www.infra.com

    DocumentRoot /var/www/infra.com/public_html

    ErrorLog ${APACHE_LOG_DIR}/error.log

    CustomLog ${APACHE_LOG_DIR}/access.log combined

</VirtualHost>

Save and close the file.

Copy the First Virtual Host File and Customize for the Second Domain

Now that we have our first virtual host file established, we can create our second one by copying that file and adjusting it as needed.

Start by copying

$ sudo cp /etc/apache2/sites-available/infra.com.conf /etc/apache2/sites-available/infra1.com.conf

Open the new file with root privileges

$ sudo vi /etc/apache2/sites-available/infra1.com.conf

You now need to modify all of the pieces of information to reference your second domain. When you are finished, it may look something like this

<VirtualHost *:80>

    ServerAdmin admin@infra1.com

    ServerName infra1.com

    ServerAlias www.infra1.com

    DocumentRoot /var/www/infra1.com/public_html

    ErrorLog ${APACHE_LOG_DIR}/error.log

    CustomLog ${APACHE_LOG_DIR}/access.log combined

</VirtualHost>

Save and close the file.

Step 5: Enable the New Virtual Host Files

The virtual host files we just created need to be enabled.

We can use the a2ensite tool to enable each of our sites

$ sudo a2ensite infra.com.conf

$ sudo a2ensite infra1.com.conf


Restart the Apache server to apply the changes.

$ sudo service apache2 restart

Step 6: Setup Local Hosts File

$ sudo vi /etc/hosts

The details that you need to add are the public IP address of your VPS followed by the domain you want to use to reach that VPS.

***.***.***.*** infra.com

***.***.***.*** infra1.com

Save and close the file.

This will intercept any requests for infra.com and infra1.com made on our computer and send them to our server.

Step 7: Test Your Results

Now that you have your virtual hosts configured, you can test your setup easily by going to http://infra.com in your web browser.



You should see the demo page you created for the first site.

Likewise, if you visit http://infra1.com, you will see the file you created for your second site.

Step 8: Conclusion

If both of these sites work well, you’ve successfully configured two virtual hosts on the same server.

If you need these sites to be reachable long term, consider purchasing a domain name for each site and setting it up to point to your VPS.


Centralize Logs from Node.js Applications



  • Installation of Node.js and NPM
  • Installation of Fluentd

Modifying the Config File

Next, configure Fluentd to use the forward input plugin as its data source.

$ sudo vi /etc/td-agent/td-agent.conf

The Fluentd daemon should listen on a TCP port. A simple configuration looks like the following:
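The original configuration snippet is not reproduced here; a minimal sketch of a forward source listening on Fluentd's default TCP port 24224, with a match that writes matching events to Fluentd's own log, would be (the fluentd.test tag pattern is an assumption taken from common Fluentd examples):

```
<source>
  @type forward
  port 24224
</source>

<match fluentd.test.**>
  @type stdout
</match>
```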


Restart your agent once these lines are in place.

$ sudo service td-agent restart


Install fluent-logger-node

$ npm install fluent-logger

Now use npm to install your dependencies locally:

$ npm install

Send an event record to Fluentd


We will use the simplest possible web app, which sends an event record to Fluentd on each request.
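The app's code is not reproduced in the original, so here is a minimal sketch of what it could look like: a plain Node.js HTTP server that emits one event per request through fluent-logger. The tag fluentd.test and the record fields are assumptions chosen to match common Fluentd examples, not the original code.

```javascript
// index.js: minimal sketch; assumes `npm install fluent-logger` has been run
// and td-agent is listening for forward input on localhost:24224.
var http = require('http');
var logger = require('fluent-logger');

// Tag prefix for emitted events; your <match> pattern must cover it.
logger.configure('fluentd.test', {
  host: 'localhost',
  port: 24224,
  timeout: 3.0
});

http.createServer(function (req, res) {
  // Send one event record to Fluentd per request.
  logger.emit('follow', { from: 'userA', to: 'userB' });
  res.end('Logged a record to Fluentd.\n');
}).listen(4000);
```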


Run the app and go to http://localhost:4000 in your browser. This will send the logs to Fluentd.

$ node index.js


The logs should be written to /var/log/td-agent/td-agent.log.

Store Logs into MongoDB

Fluentd does 3 things:

  1. It continuously “tails” the access log file.
  2. It parses the incoming log entries into meaningful fields (such as ip, path, etc.) and buffers them.
  3. It writes the buffered data to MongoDB periodically.


Fluentd’s config file

$ sudo vi /etc/td-agent/td-agent.conf

The Fluentd configuration file should look like this:
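The original file contents are not shown; a representative sketch that tails an access log, parses it into fields, and periodically flushes the buffered records to MongoDB would be the following (the log path, tag, database, and collection names are illustrative assumptions):

```
<source>
  @type tail
  path /var/log/apache2/access.log            # file to tail (illustrative)
  pos_file /var/log/td-agent/apache2.access.log.pos
  tag mongo.apache.access
  format apache2                              # parses ip, path, code, etc.
</source>

<match mongo.**>
  @type mongo
  host localhost
  port 27017
  database apache                             # illustrative database name
  collection access
  flush_interval 10s                          # write buffered chunks every 10s
</match>
```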


Restart your agent once these lines are in place.

$ sudo service td-agent restart

Then, access MongoDB and see the stored data.

$ mongo
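Inside the mongo shell, you can switch to the target database and list the stored records. The database and collection names must match your Fluentd match section; apache and access here are illustrative:

```
> use apache
> db.access.find()
```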


Fluentd + MongoDB makes real-time log collection simple, easy, and robust.

Techgig Code Gladiators 2018

It was the first hackathon experience of my life. The Times Internet group organised this nationwide hackathon early this year, and the finals were held in Mumbai from 7th June 2018 to 9th June 2018. Our team of three from InfraStack-Labs Technologies Pvt Ltd. (Arun Narayanan, Kishore Kumar and myself, Anand) participated in the national event called Techgig Code Gladiators 2018, which had 3 major rounds over a period of 2 months.

Each round presented a different scenario/use case, and we had to find a solution for the problem statement. Our team registered under the "Cloud Computing" category, as we were basically from a cloud computing background. We had great teamwork, and we were among the 13 teams selected for the finals in this category.

My journey to the finals started from Coimbatore to Bangalore on the evening of the 5th to join my team. The next day, 6th June 2018, I reached the office for the final preparation with my teammates. We discussed various aspects of the hackathon and did good homework before participating. After our discussion, all of us left the office to take some rest before our travel to Mumbai.


The next day, 7th June 2018, we had to start early in the morning at 2 AM to reach the airport, as our flight was scheduled for 5 AM. Arun and Kishore had booked a cab and picked me up at Hebbal on the way, and we reached the airport at 3 AM. The flight departed on time and we reached Mumbai by 6:30 AM.

Reliance Corporate Park, Thane

On reaching Mumbai airport, the Techgig team had arranged a bus for all the finalist teams to the venue, Reliance Corporate Park, Navi Mumbai. Our bus started from the airport and reached the venue at 8:00 AM. On reaching the beautiful Reliance campus, we went straight to the registration desk, got our verification done and collected our tags for the hackathon. Then we were taken to the event location by another bus.

When we entered the venue, the Reliance Corporate Park was really beautiful and huge; it felt like a different world, with participants all over the place. Some were busy with registration, some were having real fun with their colleagues, and some were playing games. We finished our registration, had our breakfast, and were ready for the hackathon, which was to begin at 9:30 AM.

We sat in the main auditorium, where we could see lots of professionals and students from various parts of India, all with different ideas to win the hackathon. We got a brief summary of the project: it was a 22-hour hackathon, starting on 7th June at 12:30 PM and ending on 8th June at 10:30 AM.

We had 22 hours to complete the project. We started our work at exactly 12:30, with a plan to complete and deliver the project on time. Everyone in the auditorium was busy working on their projects.

We had lunch and got back to work. Time ran very fast, and we had to push harder to complete our project. We went for refreshments with snacks and coffee at around 4:30 PM, and then started working again. It literally felt like a steeplechase event; we were really excited to see ourselves surrounded by smart people, all under the same roof, working on different ideas.

At 8 PM we had our dinner, took some rest, and started working again. We focused on various aspects of the project and continued to work till midnight. We took a quick nap to refresh ourselves and again focused on our work. Time ran very fast: it was 6:30 AM, and we had four hours until the end of the hackathon. We had to rush to complete the project on time, by 10:30 AM.

Finally, after 21 hours of this marathon, we reached the climax: only an hour was left to submit the project/solution. We collected the sources and data needed for submission and were ready to submit, but suddenly there was a technical glitch and we were not able to upload the presentation and the codebase, so we had to contact the support team. They rectified the issue with 15 minutes left for submission; we quickly uploaded the deliverables and waited for our turn to present the project.


Our presentation was scheduled at 2:30 PM. We were all well prepared for the demo, and then came our turn to present the project before the jury. We delivered our best to impress the jury and got appreciation for the way we differed from the other participants in creating the solution.

Best comment received after our final presentation:

“You guys are the only team who did things very differently”

We were the only team that created its solution on a proprietary cloud platform called "OMegha™ Public Cloud". All the other participants did system integration and created their solutions on AWS, IBM Watson or Azure.


While we waited for the final result, we had the opportunity to roam around the beautiful Reliance Corporate Park; the place is wonderful and a great place to work. We also got a chance to learn about JIO, as they presented their new and upcoming technologies. Then we had some fun, played some games and had photo sessions, and the new friends we made of course made the experience even more memorable.

At last it was time for the result. We waited eagerly, and ours was the last one to be announced. Unfortunately, we were not among the winning three, but the appreciation we got for our out-of-the-box thinking in creating the solution was the real happiness. It was a great experience to participate in the hackathon. It was raining heavily, as the monsoon had already started in Mumbai, and with a light-hearted, happy hackathon experience we left for the airport on our way back to Bangalore.

Summary of @InfraStack-Labs Technologies Private Limited  team's experience in participating in #Techgigcodegladiators2018 #cloudcomputing #hackathon

#Event - Code Gladiators 2018 
#Event Registrations 228,880
#6500+ registrations (Cloud Computing Category)

-Artificial Intelligence
-Big Data
-Cloud Computing
-Mobility & Location Services
-Internet of Things
-Machine Learning

#45 days marathon
#20 hours of final offline hackathon coding (Final round)
#15 days of online hackathon coding (1st & 2nd round)
#3 Rounds
#3 problem statement (General Cloud Computing, AI Robot & IoT)
#3 team members
#1 Bot created & delivered  (Vinayak) 
#1 IoT platform & delivered




Installation of MongoDB on Ubuntu 16.04

MongoDB is a free and open-source NoSQL document database used commonly in modern web applications.


MongoDB is one of several database types to arise in the mid-2000s under the NoSQL banner. Instead of using tables and rows as in relational databases, MongoDB is built on an architecture of collections and documents. Documents comprise sets of key-value pairs and are the basic unit of data in MongoDB. Collections contain sets of documents and function as the equivalent of relational database tables.

Like other NoSQL databases, MongoDB supports dynamic schema design, allowing the documents in a collection to have different fields and structures. The database uses a document storage and data interchange format called BSON, which provides a binary representation of JSON-like documents. Automatic sharding enables data in a collection to be distributed across multiple systems for horizontal scalability as data volumes increase.

This blog will help you set up MongoDB on your server for a production application environment.


To follow this blog, you will need:

  • One Ubuntu 16.04 server set up by following this initial server setup, including a sudo non-root user.

Adding the MongoDB Repository

MongoDB is already included in the Ubuntu package repositories, but the official MongoDB repository provides the most up-to-date version and is the recommended way of installing the software. In this step, we will add this official repository to our server.

Ubuntu ensures the authenticity of software packages by verifying that they are signed with GPG keys, so we first have to import the key for the official MongoDB repository.

$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927

After successfully importing the key, you will see:

gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)

Next, we have to add the MongoDB repository details so apt will know where to download the packages from.

Issue the following command to create a list file for MongoDB.

$ echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.2.list

After adding the repository details, we need to update the packages list.

$ sudo apt-get update

Installing and Verifying MongoDB

Now we can install the MongoDB package itself.

$ sudo apt-get install -y mongodb-org

This command will install several packages containing the latest stable version of MongoDB, along with helpful management tools for the MongoDB server.

Next, start MongoDB with systemctl.

$ sudo systemctl start mongod

You can also use systemctl to check that the service has started properly.

$ sudo systemctl status mongod

mongod.service - High-performance, schema-free document-oriented database
   Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)
 Main PID: 4093 (mongod)
    Tasks: 16 (limit: 512)
   Memory: 47.1M
      CPU: 1.224s
   CGroup: /system.slice/mongod.service
           └─4093 /usr/bin/mongod --quiet --config /etc/mongod.conf

The last step is to enable MongoDB to start automatically when the system boots.

$ sudo systemctl enable mongod

The MongoDB server is now configured and running, and you can manage the MongoDB service using the systemctl command (e.g. sudo systemctl stop mongod, sudo systemctl start mongod).

Build RESTful API in Go and MongoDB


Golang is a programming language initially developed at Google in 2007 by Robert Griesemer, Rob Pike, and Ken Thompson. Go is a statically typed language with syntax similar to that of C. It provides garbage collection, type safety, and many advanced built-in types such as variable-length arrays and key-value maps. It also provides a rich standard library.



MongoDB is a free and open-source cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with schemas.



One of the most popular types of API is REST or, as they're sometimes known, RESTful APIs. REST or RESTful APIs were designed to take advantage of existing protocols. While REST (Representational State Transfer) can be used over nearly any protocol, for web APIs it typically takes advantage of HTTP. This means that developers have no need to install additional software or libraries when creating a REST API.


Installation of  MongoDB Server:

Import the GPG key for the official MongoDB repository.

$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10

Add the MongoDB repository details so APT knows where to download the packages from.

$ echo "deb http://repo.mongodb.org/apt/ubuntu "$(lsb_release -sc)"/mongodb-org/3.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.0.list

After adding the repository details, we need to update the packages list.

$ sudo apt-get update

Install MongoDB

$ sudo apt-get install -y mongodb-org



Installation of  Go:

Download the latest package for Go by running this command, which will pull down the Go package file and save it to your current working directory.

$ sudo curl -O https://storage.googleapis.com/golang/go1.6.linux-amd64.tar.gz

Next, extract the archive into your home directory so that it matches the GOROOT we set below.

$ tar -C $HOME -xzf go1.6.linux-amd64.tar.gz

Now we can set the Go paths. Open your profile in an editor:

$ vi ~/.profile

At the end of the file, add these lines:

export GOROOT=$HOME/go
export PATH=$PATH:$GOROOT/bin

With these lines pasted into your profile, save and close the file. Next, refresh your profile by running:

$ source ~/.profile



Build RESTful API in Go and MongoDB


The REST API service will create endpoints to manage a store of movies. The operations that our endpoints will allow are:

GET       /movies          Get the list of movies

GET       /movies/:id      Find a movie by its id

POST      /movies          Create a new movie

PUT       /movies          Update an existing movie

DELETE    /movies          Delete an existing movie

Before we begin, we need to get the packages to set up the API:

  • toml: parses the configuration file (MongoDB server & credentials)
  • mux: request router and dispatcher for matching incoming requests to their respective handler
  • mgo: MongoDB driver
$ go get github.com/BurntSushi/toml gopkg.in/mgo.v2 github.com/gorilla/mux

After installing the dependencies, we create a file called "app.go" with the following content:

package main

import (
    "encoding/json"
    "log"
    "net/http"

    "github.com/gorilla/mux"
    "gopkg.in/mgo.v2/bson"
)

// dao is the data access object defined later in this post.
var dao = MoviesDAO{}

// Helpers used by every endpoint to write JSON responses.
func respondWithError(w http.ResponseWriter, code int, msg string) {
    respondWithJson(w, code, map[string]string{"error": msg})
}

func respondWithJson(w http.ResponseWriter, code int, payload interface{}) {
    response, _ := json.Marshal(payload)
    w.Header().Set("Content-Type", "application/json")
    w.WriteHeader(code)
    w.Write(response)
}

func AllMoviesEndPoint(w http.ResponseWriter, r *http.Request) {
    movies, err := dao.FindAll()
    if err != nil {
        respondWithError(w, http.StatusInternalServerError, err.Error())
        return
    }
    respondWithJson(w, http.StatusOK, movies)
}

func FindMovieEndpoint(w http.ResponseWriter, r *http.Request) {
    params := mux.Vars(r)
    movie, err := dao.FindById(params["id"])
    if err != nil {
        respondWithError(w, http.StatusBadRequest, "Invalid Movie ID")
        return
    }
    respondWithJson(w, http.StatusOK, movie)
}

func CreateMovieEndPoint(w http.ResponseWriter, r *http.Request) {
    defer r.Body.Close()
    var movie Movie
    if err := json.NewDecoder(r.Body).Decode(&movie); err != nil {
        respondWithError(w, http.StatusBadRequest, "Invalid request payload")
        return
    }
    movie.ID = bson.NewObjectId()
    if err := dao.Insert(movie); err != nil {
        respondWithError(w, http.StatusInternalServerError, err.Error())
        return
    }
    respondWithJson(w, http.StatusCreated, movie)
}

func UpdateMovieEndPoint(w http.ResponseWriter, r *http.Request) {
    defer r.Body.Close()
    var movie Movie
    if err := json.NewDecoder(r.Body).Decode(&movie); err != nil {
        respondWithError(w, http.StatusBadRequest, "Invalid request payload")
        return
    }
    if err := dao.Update(movie); err != nil {
        respondWithError(w, http.StatusInternalServerError, err.Error())
        return
    }
    respondWithJson(w, http.StatusOK, map[string]string{"result": "success"})
}

func DeleteMovieEndPoint(w http.ResponseWriter, r *http.Request) {
    defer r.Body.Close()
    var movie Movie
    if err := json.NewDecoder(r.Body).Decode(&movie); err != nil {
        respondWithError(w, http.StatusBadRequest, "Invalid request payload")
        return
    }
    if err := dao.Delete(movie); err != nil {
        respondWithError(w, http.StatusInternalServerError, err.Error())
        return
    }
    respondWithJson(w, http.StatusOK, map[string]string{"result": "success"})
}

func main() {
    // Connect to MongoDB before serving (server and database names are illustrative).
    dao.Server = "localhost"
    dao.Database = "movies_db"
    dao.Connect()

    r := mux.NewRouter()
    r.HandleFunc("/movies", AllMoviesEndPoint).Methods("GET")
    r.HandleFunc("/movies", CreateMovieEndPoint).Methods("POST")
    r.HandleFunc("/movies", UpdateMovieEndPoint).Methods("PUT")
    r.HandleFunc("/movies", DeleteMovieEndPoint).Methods("DELETE")
    r.HandleFunc("/movies/{id}", FindMovieEndpoint).Methods("GET")
    if err := http.ListenAndServe(":3000", r); err != nil {
        log.Fatal(err)
    }
}

The above code creates a handler for each endpoint and then starts an HTTP server on port 3000.

Now create a basic Movie model. In Go, we use the struct keyword to create a model:

type Movie struct {
    ID          bson.ObjectId `bson:"_id" json:"id"`
    Name        string        `bson:"name" json:"name"`
    CoverImage  string        `bson:"cover_image" json:"cover_image"`
    Description string        `bson:"description" json:"description"`
}

Create the Data Access Object to manage database operations.

// movies_dao.go: data access layer, kept in package main so app.go can use it directly.
package main

import (
    "log"

    mgo "gopkg.in/mgo.v2"
    "gopkg.in/mgo.v2/bson"
)

// MoviesDAO holds the connection parameters for the database.
type MoviesDAO struct {
    Server   string
    Database string
}

var db *mgo.Database

const (
    COLLECTION = "movies"
)

func (m *MoviesDAO) Connect() {
    session, err := mgo.Dial(m.Server)
    if err != nil {
        log.Fatal(err)
    }
    db = session.DB(m.Database)
}

func (m *MoviesDAO) FindAll() ([]Movie, error) {
    var movies []Movie
    err := db.C(COLLECTION).Find(bson.M{}).All(&movies)
    return movies, err
}

func (m *MoviesDAO) FindById(id string) (Movie, error) {
    var movie Movie
    err := db.C(COLLECTION).FindId(bson.ObjectIdHex(id)).One(&movie)
    return movie, err
}

func (m *MoviesDAO) Insert(movie Movie) error {
    err := db.C(COLLECTION).Insert(&movie)
    return err
}

func (m *MoviesDAO) Delete(movie Movie) error {
    err := db.C(COLLECTION).Remove(&movie)
    return err
}

func (m *MoviesDAO) Update(movie Movie) error {
    err := db.C(COLLECTION).UpdateId(movie.ID, &movie)
    return err
}

To run the server locally, type the following command:

$ go run app.go

Create a Movie

List of Movies
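To exercise these two operations from another terminal while the server is running, you can use curl; the field values below are illustrative, not from the original screenshots:

```shell
# Create a movie (the id is generated server-side)
curl -X POST -H "Content-Type: application/json" \
     -d '{"name":"Inception","cover_image":"inception.jpg","description":"A heist inside dreams"}' \
     http://localhost:3000/movies

# List all movies
curl http://localhost:3000/movies
```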

Ansible installation and setting up



Ansible is a configuration management system. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates. This is handy if you need to deploy your application on multiple servers without the need for having to do this manually on all your servers. You can also add identical servers to your cluster.

Ansible provides configuration management so you can add identical servers to your cluster very easily. You can also do centralized management for all of your servers in one place. You can run an apt-get update on all servers at once!

Ansible does deployment and management over SSH. It manages machines in an agent-less manner. Because OpenSSH is one of the most peer-reviewed open source components, the security exposure is greatly reduced. Ansible is decentralized: it relies on your existing OS credentials to control access to remote machines.

In this tutorial we’ll see how we can install Ansible on Ubuntu 14.04.

Step 1: Installing Ansible

To install the latest version of Ansible

$ sudo apt-get install software-properties-common
$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt-get update
$ sudo apt-get install ansible

You need to put all the servers that you want to manage with Ansible in the /etc/ansible/hosts file.

Comment out the example entries, then go to the end of the hosts file to create a group. Say you have a cluster of web and database servers: you could create two separate groups, web and db. If you then want to make a change on all database servers, you can target the db group so that only the database servers are affected, and not other servers such as those in the web group.
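An /etc/ansible/hosts inventory for a web/db layout like that might look like the following sketch; the hostnames and addresses are illustrative:

```
[web]
web1.example.com
web2.example.com

[db]
10.0.0.11
10.0.0.12
```

You could then run a command against only the database servers with, for example, ansible db -m ping.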

Step 2: Setting up SSH keys

As we mentioned above, Ansible primarily communicates with client computers through SSH. While it certainly has the ability to handle password-based SSH authentication, SSH keys help keep things simple.

We can set up SSH keys in two different ways depending on whether you already have a key you want to use.

Create a New SSH Key Pair

If you do not already have an SSH key pair that you would like to use for Ansible administration, we can create one now on your Ansible VPS.

We will create an SSH key pair to authenticate with the hosts that it will administer.

As the user you will be controlling Ansible with, create an RSA key-pair by typing:

$ ssh-keygen

You will be asked to specify the file location of the created key pair, a passphrase, and the passphrase confirmation. Press ENTER through all of these to accept the default values.

Your new keys are available in your user's ~/.ssh directory. The public key (the one you can share) is called id_rsa.pub. The private key (the one that you keep secure) is called id_rsa. Copy the content of the public key into the authorized_keys file on each target server to set up SSH communication.

Step 3: Test Ansible

To see if you can ping all your servers in the hosts file, you can use the following command:

$ ansible all -m ping

This confirms whether or not your servers are online.

Installing Asterisk on Ubuntu 16.04



Asterisk is a software implementation of a telephone private branch exchange (PBX). It allows telephones interfaced with a variety of hardware technologies to make calls to one another, and to connect to telephony services, such as the public switched telephone network (PSTN) and voice over Internet Protocol (VoIP) services. Its name comes from the asterisk symbol “*”.


Some of the many features of Asterisk include:

  • The Asterisk software includes many features available in commercial and proprietary PBX systems: voice mail, conference calling, interactive voice response, and automatic call distribution.
  • Users can create new functionality by writing dial plan scripts in several of Asterisk's own extension languages, by adding custom loadable modules written in C, or by implementing Asterisk Gateway Interface (AGI) programs using any programming language capable of communicating via the standard streams system (stdin and stdout) or by network TCP sockets.
  • Asterisk supports several standard voice over IP protocols, including the Session Initiation Protocol (SIP), the Media Gateway Control Protocol (MGCP), and H.323.
  • Asterisk supports most SIP telephones, acting both as registrar and back-to-back user agent.
  • By supporting a variety of traditional and VoIP telephony services, Asterisk allows deployers to build telephone systems, or migrate existing systems to new technologies.


 Install Asterisk from Source

After logging in to your Ubuntu server as a user, issue the following command to switch to the root user.

$ sudo su 

Now you are root, but you need to set the password with the following command.

# passwd

Next step would be to install initial dependencies for asterisk.

# apt-get install build-essential wget libssl-dev libncurses5-dev libnewt-dev libxml2-dev linux-headers-$(uname -r) libsqlite3-dev uuid-dev git subversion

Installing Asterisk

Now that we are root and the dependencies are satisfied, we can move to the /usr/src/ directory and download the latest Asterisk version there.

# cd /usr/src
# wget http://downloads.asterisk.org/pub/telephony/asterisk/asterisk-15-current.tar.gz

Next we unpack it.

# tar zxvf asterisk-15-current.tar.gz

Now we need to enter the newly unpacked directory.

# cd asterisk-15*

Before we actually compile the Asterisk code, we need 'pjproject', as Asterisk 15 introduces support for PJSIP. So we will compile it first:

# git clone git://github.com/asterisk/pjproject pjproject
# cd pjproject
# ./configure --prefix=/usr --enable-shared --disable-sound --disable-resample --disable-video --disable-opencore-amr CFLAGS='-O2 -DNDEBUG'
# make dep
# make && make install
# ldconfig
# ldconfig -p |grep pj

Configuring Asterisk

Now we can configure and compile the Asterisk code.

# cd ..
# contrib/scripts/get_mp3_source.sh
# contrib/scripts/install_prereq install

This will install mp3 tones and satisfy additional dependencies, which might take some time, and it will ask you for your country code. The following command will compile and install Asterisk.

# ./configure && make menuselect && make && make install

When that is finished, to avoid writing hundreds of config files yourself, you will normally want to run the following command after the install; it creates an initial configuration for you:

# make samples

And to have the start-up script installed and enabled so that Asterisk starts on every boot, we run make config, followed by ldconfig:

# make config
# ldconfig

Now we can start asterisk for the first time and see if it actually works.

# /etc/init.d/asterisk start

and then we can enter the Asterisk console with this command.

# asterisk -rvvv

Now we need to do a few additional steps to make Asterisk run as the asterisk user. First we stop Asterisk.

# systemctl stop asterisk

Then we need to add a group and a user named asterisk.

# groupadd asterisk
# useradd -d /var/lib/asterisk -g asterisk asterisk

Asterisk needs to be configured to start as the user we just created. We could edit /etc/default/asterisk by hand, but it is more efficient to use the following two sed commands.

# sed -i 's/#AST_USER="asterisk"/AST_USER="asterisk"/g' /etc/default/asterisk
# sed -i 's/#AST_GROUP="asterisk"/AST_GROUP="asterisk"/g' /etc/default/asterisk

To run properly, the asterisk user needs to be given ownership of all essential Asterisk directories.

# chown -R asterisk:asterisk /var/spool/asterisk /var/run/asterisk /etc/asterisk /var/{lib,log,spool}/asterisk /usr/lib/asterisk

The asterisk.conf file also needs to be edited to uncomment the lines for runuser and rungroup:

# sed -i 's/;runuser = asterisk/runuser = asterisk/g' /etc/asterisk/asterisk.conf
# sed -i 's/;rungroup = asterisk/rungroup = asterisk/g' /etc/asterisk/asterisk.conf

When this is done, reboot the server so that systemd brings Asterisk up automatically, and then type asterisk -rvvv to enter the Asterisk console.

# asterisk -rvvv


Production Application Deployment Orchestration Using Auto-Scaling, Load-Balancer, and OpenStack Heat


OpenStack Heat can deploy and configure multiple instances in one command using resources we have in OpenStack. That’s called a Heat Stack.

Heat will create instances from images using existing flavors and networks. It can configure LBaaS and provide VIPs for our load-balanced instances. It can also use the metadata service to inject files, scripts or variables after instance deployment. It can use Ceilometer to create alarms based on instance CPU usage and associate actions like spinning up or terminating instances based on CPU load.

OpenStack provides Auto-scaling features through Heat. This feature reduces the need to manually provision instance capacities in advance. You can use Heat resources to detect when a Ceilometer alarm triggers and provision or de-provision a new VM depending on the trigger. These groups of VMs must be under a Load-balancer which distributes the load among the VMs on the scaling group.

Whether you are running one instance or thousands, you can use Autoscaling to detect, increase, decrease, and replace instances without manual intervention.

In this document, the following two policies are defined:

a) When the CPU utilization rate is above 50%, a new instance is created automatically until the number of instances reaches 4.

b) When the average CPU utilization rate is below 15%, an instance is terminated until the number of instances drops to 2.
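
The two policies can be summarized as a small decision rule. The sketch below is illustrative only (the decide function and its names are ours, not part of Heat); in the stack, the decisions are made by the Ceilometer alarms and scaling policies defined in the template:

```shell
# Illustrative only: mirrors the two policies in plain shell.
# $1 = average CPU %, $2 = current instance count.
decide() {
  if [ "$1" -gt 50 ] && [ "$2" -lt 4 ]; then
    echo scale-up          # policy (a): CPU > 50%, below the max of 4
  elif [ "$1" -lt 15 ] && [ "$2" -gt 2 ]; then
    echo scale-down        # policy (b): CPU < 15%, above the min of 2
  else
    echo hold
  fi
}
decide 60 2    # scale-up
decide 10 3    # scale-down
decide 30 2    # hold
```

Note that the group never leaves the 2–4 range: at 4 instances a high-CPU alarm is a no-op, and at 2 instances a low-CPU alarm is a no-op.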

Autoscaling in Heat is done with the help of three main types:


An AutoScalingGroup is a resource type that is used to encapsulate the resource that we wish to scale, and some properties related to the scale process.


A ScalingPolicy is a resource type that is used to define the effect a scale process will have on the scaled resource.


An Alarm is a resource type that is used to define under which conditions the ScalingPolicy should be triggered.


Deploying a WordPress Application Stack with an Autoscaling group of Web servers and a Load-balancer.

Complete templates are available here

The following example uses a snapshot of a VM with Apache already installed and configured over an Ubuntu operating system as a base image.


parameters:
  key_name:
    type: string
    description: Name of an existing key pair to use for the template
    default: dev
  image:
    type: string
    description: Name of image to use for servers
    default: ubuntu
  flavor:
    type: string
    description: Flavor to use for servers
    default: m1.small
  db_name:
    type: string
    description: WordPress database name
    default: wordpress
  db_username:
    type: string
    description: The WordPress database admin account username
    default: admin
  db_password:
    type: string
    description: The WordPress database admin account password
    default: admin
  db_rootpassword:
    type: string
    description: Root password for MySQL
    default: admin
    hidden: true
  public_net_id:
    type: string
    description: ID of public n/w for which floating IP will be allocated
    default: 7645e3f6-444d-4e4b-ad4f-a9cf49683b2d
  private_net_id:
    type: string
    description: ID of private network into which servers get deployed
    default: e232d0c0-4363-4b39-b88a-949e177f058a
  private_subnet_id:
    type: string
    description: ID of private sub network into which servers get deployed
    default: a0cf224b-1f42-4650-b219-b9320d4ea06f

The first resource one will provision is the health monitor configuration that will check the virtual machines under the Load-balancer. If the machine is down, the Load-balancer will not send traffic to it. Create the pool using the health monitor and specifying the protocol, the network and subnet, the algorithm to use for distributing the traffic, and the port that will receive the traffic on the virtual IP. Finally, create the Load-balancer using this pool.

  monitor:
    type: OS::Neutron::HealthMonitor
    properties:
      type: TCP
      delay: 3
      max_retries: 3
      timeout: 3

  lb_vip_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_param: private_net_id }
      fixed_ips:
        - subnet_id: { get_param: private_subnet_id }

  lb_vip_floating_ip_assoc:
    type: OS::Neutron::FloatingIPAssociation
    properties:
      floatingip_id: { get_resource: lb_vip_floating_ip }
      port_id: { 'Fn::Select': ['port_id', {get_attr: [pool, vip]}]}

  pool:
    type: OS::Neutron::Pool
    properties:
      protocol: HTTP
      monitors: [{get_resource: monitor}]
      subnet_id: {get_param: private_subnet_id}
      lb_method: ROUND_ROBIN
      vip:
        protocol_port: 80

  lb:
    type: OS::Neutron::LoadBalancer
    properties:
      protocol_port: 80
      pool_id: {get_resource: pool}

Associate a public IP for the Load-balancer so that it can be accessed from the Internet. Use the following syntax to create a floating IP for the Load-balancer:

  lb_vip_floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network_id: { get_param: public_net_id }
      port_id: { get_resource: lb_vip_port }

Use the following syntax to create the Auto-scaling group (VMs) which will be acting as the web servers for the WordPress application:

  web_server_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 2
      max_size: 4
      resource:
        type: Lib::INF::LB
        properties:
          flavor: {get_param: flavor}
          image: {get_param: image}
          key_name: dev
          pool_id: {get_resource: pool}
          metadata: {"metering.server_group": {get_param: "OS::stack_id"}}
          user_data:
            str_replace:
              template: |
                #!/bin/bash -ex
                sed -i 's/172.16.*.*/' /etc/resolv.conf
                # install dependencies
                apt-get update
                apt-get -y install php5 libapache2-mod-php5 php5-mysql php5-gd mysql-client
                wget http://wordpress.org/latest.tar.gz
                tar -xzf latest.tar.gz
                cp wordpress/wp-config-sample.php wordpress/wp-config.php
                sed -i 's/database_name_here/$db_name/' wordpress/wp-config.php
                sed -i 's/username_here/$db_user/' wordpress/wp-config.php
                sed -i 's/password_here/$db_password/' wordpress/wp-config.php
                sed -i 's/localhost/$db_host/' wordpress/wp-config.php
                rm /var/www/html/index.html
                cp -R wordpress/* /var/www/html/
                chown -R www-data:www-data /var/www/html/
                chmod -R g+w /var/www/html/
              params:
                $db_name: {get_param: db_name}
                $db_user: {get_param: db_username}
                $db_password: {get_param: [db_password, value]}
                $db_host: {get_attr: [wp_dbserver, first_address]}
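
The placeholder substitutions in the user_data script can be tried out on a scratch copy of the sample config. The snippet below is a demo with illustrative values; on the instance, Heat's str_replace fills in $db_name and friends before the script runs:

```shell
# Scratch demo of the wp-config.php substitutions; values are illustrative.
cat > wp-config.php <<'EOF'
define('DB_NAME', 'database_name_here');
define('DB_USER', 'username_here');
define('DB_PASSWORD', 'password_here');
define('DB_HOST', 'localhost');
EOF
sed -i 's/database_name_here/wordpress/' wp-config.php
sed -i 's/username_here/admin/' wp-config.php
sed -i 's/password_here/admin/' wp-config.php
sed -i 's/localhost/10.0.0.5/' wp-config.php
grep DB_ wp-config.php
```

After the four substitutions, the config points WordPress at the database name, credentials, and DB host injected by Heat.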

Here the resource type Lib::INF::LB indicates that another YAML file is called upon to create a custom resource. A separate template file defines this resource (in this case a load-balancer server), and the URL to that template file is provided in the environment file. This load-balancer file does the important job of adding the web server to the load-balancing pool using the 'member' resource.

The Load-balancer server YAML file is given below.

heat_template_version: 2013-05-23
description: A load-balancer server

parameters:
  image:
    type: string
    description: Image used for servers
  key_name:
    type: string
    description: SSH key to connect to the servers
  flavor:
    type: string
    description: flavor used by the servers
  pool_id:
    type: string
    description: Pool to contact
  user_data:
    type: string
    description: Server user_data
  metadata:
    type: json
  network:
    type: string
    description: Network used by the server
    default: private

resources:
  server:
    type: OS::Nova::Server
    properties:
      name: web-server
      flavor: {get_param: flavor}
      image: {get_param: image}
      key_name: {get_param: key_name}
      metadata: {get_param: metadata}
      user_data: {get_param: user_data}
      user_data_format: RAW
      networks: [{network: {get_param: network} }]
  member:
    type: OS::Neutron::PoolMember
    properties:
      pool_id: {get_param: pool_id}
      address: {get_attr: [server, first_address]}
      protocol_port: 80

outputs:
  server_ip:
    description: IP Address of the load-balanced server.
    value: { get_attr: [server, first_address] }
  lb_member:
    description: LB member details.
    value: { get_attr: [member, show] }


Use the following syntax to create scaling policies:

  web_server_scaleup_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: {get_resource: web_server_group}
      cooldown: 60
      scaling_adjustment: 1

  web_server_scaledown_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: {get_resource: web_server_group}
      cooldown: 60
      scaling_adjustment: -1
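
Since adjustment_type is change_in_capacity, each trigger adds scaling_adjustment to the current group size, clamped by the group's min_size and max_size. A small illustrative sketch (the adjust function is ours, not a Heat API):

```shell
# Illustrative only: change_in_capacity with the 2..4 clamp of the group.
adjust() {
  local new=$(( $1 + $2 ))        # current size + scaling_adjustment
  if [ "$new" -lt 2 ]; then new=2; fi   # min_size
  if [ "$new" -gt 4 ]; then new=4; fi   # max_size
  echo "$new"
}
adjust 3 1    # 4
adjust 4 1    # stays 4 (already at max_size)
adjust 2 -1   # stays 2 (already at min_size)
```

The cooldown of 60 seconds prevents a second adjustment from firing before the first one has had a chance to take effect.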


Use Ceilometer to establish the alarms (both high and low) for the auto-scaling group for a specific metric. Use the following syntax to create Ceilometer alarms:

  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      description: Scale-up if the average CPU > 50% for 1 minute
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 50
      alarm_actions:
        - {get_attr: [web_server_scaleup_policy, alarm_url]}
      matching_metadata: {'metadata.user_metadata.stack': {get_param: "OS::stack_id"}}
      comparison_operator: gt

  cpu_alarm_low:
    type: OS::Ceilometer::Alarm
    properties:
      description: Scale-down if the average CPU < 15% for 1 minute
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 15
      alarm_actions:
        - {get_attr: [web_server_scaledown_policy, alarm_url]}
      matching_metadata: {'metadata.user_metadata.stack': {get_param: "OS::stack_id"}}
      comparison_operator: lt

The first resource is the Nova server that you will initialize from a specific image. In our example, this is Ubuntu 14.04 with Apache2 already installed and configured. This won’t be a part of the Auto-scaling group and we’ll use it as the database server of the WordPress application.

  wp_dbserver:
    type: OS::Nova::Server
    properties:
      name: wp_db_server
      image: {get_param: image}
      key_name: dev
      networks:
        - port: { get_resource: wp_dbserver_port }
      user_data_format: RAW
      user_data:
        str_replace:
          params:
            __mysql_root_password__: {get_param: [db_rootpassword, value]}
            __database_name__: {get_param: db_name}
            __database_user__: {get_param: db_username}
            __database_password__: {get_param: [db_password, value]}
          template: |
            apt-get update
            export DEBIAN_FRONTEND=noninteractive
            apt-get install -y mysql-server
            mysqladmin -u root password "__mysql_root_password__"
            sed -i "s/bind-address.*/bind-address =" /etc/mysql/my.cnf
            service mysql restart
            mysql -u root --password="__mysql_root_password__" <<EOF
            CREATE DATABASE __database_name__;
            CREATE USER '__database_user__'@'localhost';
            GRANT ALL PRIVILEGES ON __database_name__.* TO '__database_user__'@'localhost' IDENTIFIED BY '__database_password__';
            CREATE USER '__database_user__'@'%';
            SET PASSWORD FOR '__database_user__'@'%'=PASSWORD("__database_password__");
            GRANT ALL PRIVILEGES ON __database_name__.* TO '__database_user__'@'%' IDENTIFIED BY '__database_password__';
            EOF
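
The __placeholder__ tokens in the template are plain text until Heat's str_replace swaps in the parameter values. The SQL the template produces can be previewed with ordinary shell substitution (the db_name/db_user values below are illustrative):

```shell
# Preview the SQL generated once the placeholders are filled in.
db_name=wordpress
db_user=admin
cat <<EOF
CREATE DATABASE ${db_name};
CREATE USER '${db_user}'@'%';
GRANT ALL PRIVILEGES ON ${db_name}.* TO '${db_user}'@'%';
EOF
```

The '%' host wildcard is what lets the auto-scaled web servers, whose addresses are not known in advance, connect to the database over the private network.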

Now to call upon the Load-balancer server file, we’ll use an Environment file which is described below.

resource_registry:
  Lib::INF::LB: https://raw.githubusercontent.com/infrastacklabs/test/master/lb.yaml

Here the URL points to the location where the Load-balancer YAML file is stored.


Launching the stack on Horizon with this Heat template:

Login to OpenStack environment, open the Orchestration part on the left tab and click on Launch stack as shown in the picture.



Click on Launch stack. And select the WordPress.yaml file and the env.yaml file. Click on next.



Verify the parameters and click on launch.



This should launch the Heat Stack.


Wait a few minutes for MySQL and WordPress to be installed on the corresponding servers.


One should be able to see a new Load-balancer in the Network > Load Balancers section with a public IP assigned to it.



Browsing to this public IP should bring up the Apache welcome page.



The stack should create three servers in total: two web servers and one DB server with MySQL installed.




Assign a public IP to any of the web servers and access it in a browser. A WordPress installation page like the one below should appear.



Install the application





Log in with the given credentials and one should be able to access the WordPress dashboard.




Scaling can be done in two ways: Manual and Automatic.

By using Ceilometer alarms, the Heat stack should be able to scale the servers depending on the CPU usage of the web servers in the group.

By invoking the webhook URLs in the Stack Overview, one should be able to scale down and scale up the number of web servers in the group.



For example:

Running the scale up URL should create a new web server and it should be added under the Load-balancer.




Here, by using OpenStack Heat, Ceilometer, and a Load-balancer, we are able to create an application infrastructure that contains a database server and a group of web servers that can be auto-scaled depending on their CPU usage. The Load-balancer automatically registers the instances in the group and distributes incoming traffic across them.

Application deployment using Docker-Compose on OMegha Public Cloud

Docker is a container technology for Linux that allows a developer to package up an application with all of the parts it needs. It makes it easier to create, deploy, port and run applications by using containers. Docker Compose makes dealing with the orchestration processes of Docker containers (such as starting up, shutting down, and setting up intra-container linking and volumes) really easy.

To really take full advantage of Docker’s potential, it’s best if each component of your application runs in its own container. For complex applications with a lot of components, orchestrating all the containers to start up and shut down together can quickly become knotty.

To deal with this, the Docker team decided to come up with a solution based on the Fig source called Docker Compose.

This article provides a real-world example of using Docker Compose to install an application, in this case WordPress with PHPMyAdmin as an extra. WordPress normally runs on a LAMP stack, which means Linux, Apache, MySQL/MariaDB, and PHP. The official WordPress Docker image includes Apache and PHP for us, so the only part we have to worry about is MariaDB.



  • Host Machine: Ubuntu 14.04 OMegha Bolt (VM).
  • A non-root user with sudo privileges.
  • Reasonable knowledge of Linux commands.


Installing Docker:

The Docker installation package available in the official Ubuntu 14.04 repository may not be the latest version. To get the latest version, install Docker from the official Docker repository. To do that:

First, add the GPG key for the official Docker repository to the system:

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Add the Docker repository to APT sources:

$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Update the package database with the Docker packages from the newly added repo:

$ sudo apt-get update

Install Docker

$ sudo apt-get install -y docker-ce

Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it’s running:

$ sudo service docker status


Installing Docker Compose:

Now that you have Docker installed, let’s go ahead and install Docker Compose. First, install python-pip as prerequisite:

$ sudo apt-get install python-pip

Install Docker Compose:

$ sudo pip install docker-compose



Installation of Application:

Here we’ll install a WordPress application with PHPMyAdmin using Docker-compose. WordPress normally runs on a LAMP stack, which means Linux, Apache, MySQL/MariaDB, and PHP. The official WordPress Docker image includes Apache and PHP for us, so we’ll have to run MariaDB container as a source for database purposes.


Create a folder where our data will live and create the docker-compose.yml file to run our app:

$ mkdir ~/wordpress
$ cd ~/wordpress

Then create a docker-compose.yml with any text editor.

$ sudo vi ~/wordpress/docker-compose.yml

And paste in the following:

wordpress:
  image: wordpress
  links:
    - wordpress_db:mysql
  ports:
    - 8080:80
wordpress_db:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: mypassword
phpmyadmin:
  image: nazarpc/phpmyadmin
  links:
    - wordpress_db:mysql
  ports:
    - 8181:80
  environment:
    MYSQL_USERNAME: root
    MYSQL_ROOT_PASSWORD: mypassword


Explaining the YML file:

  • We tell Docker Compose to start a new container called wordpress and download the wordpress image from the Docker Hub.
  • We define a new container called wordpress_db and tell it to use the mariadb image from the Docker Hub.
  • We tell the wordpress container to link our wordpress_db container into it under the name mysql (inside the wordpress container, the hostname mysql will be forwarded to our wordpress_db container).
  • We set the MYSQL_ROOT_PASSWORD variable to start the MariaDB server. The MariaDB Docker image is configured to check for this environment variable when it starts up and will take care of setting up the DB with a root account with the password defined in MYSQL_ROOT_PASSWORD.
  • We set up a port forward so that we can connect to our WordPress install once it actually loads up. The first port number is the port on the host, and the second is the port inside the container. So, this configuration forwards requests on port 8080 of the host to the default web server port 80 inside the container.
  • We grab docker-phpmyadmin by community member nazarpc, link it to our wordpress_db container with the name mysql, expose its port 80 on port 8181 of the host system, and finally set a couple of environment variables with our MariaDB username and password.

This image does not automatically grab the MYSQL_ROOT_PASSWORD environment variable from the wordpress_db container's environment the way the wordpress image does. We actually have to copy the MYSQL_ROOT_PASSWORD: mypassword line from the wordpress_db container and set the username to root.
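
The host:container syntax used in the ports entries can be read with a plain-shell illustration (this is just string splitting for explanation, not a Docker API):

```shell
# "8080:80" = host port 8080 forwarded to container port 80.
mapping="8080:80"
host_port=${mapping%%:*}        # part before the colon (host side)
container_port=${mapping##*:}   # part after the colon (container side)
echo "host=$host_port container=$container_port"
```

The same reading applies to the phpMyAdmin entry: 8181 on the host maps to 80 inside that container.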


Now start up the application group:

$ sudo docker-compose up -d

The -d option tells docker-compose to run the containers in the background so that you can keep using your terminal.

One can see the corresponding images being pulled and run as containers. Towards the end of the output you can see that the containers are created and start running:

Creating wordpress_wordpress_db_1 ... done
Creating wordpress_wordpress_1 ... done
Creating wordpress_phpmyadmin_1 ... done

To verify this, list the running containers using the command

$ sudo docker ps

To verify that the app is up, open up a web browser and browse to the IP of your box on port 8080. Here the host IP address we use for this document is of format xx.xxx.xx.xxx.

Type <host-IP>:8080 into the browser. This should land a WordPress installation page as in the picture.


For demo purposes, fill in the fields as shown below and install the application.


Successful installation will land the following page.


To verify the phpMyAdmin, open another tab and type xx.xxx.xx.xxx:8181 into it.

This should open up the phpMyAdmin page.


To login, use the details that were used in the YML file. In this case, the user is root, and the password is ‘mypassword’.

Upon logging in, one will be able to access the databases in the MariaDB server as the picture shows;



Here, by using Docker Compose and Docker concepts in general, installing WordPress and phpMyAdmin has been made much easier than the traditional ways of installing these applications.

OpenStack Load Balancer (LBaaS) Setup

Installing LBaaS

Run the following command on the controller node to install the required components (HAProxy and the LBaaS package).

root@controller:~#  yum install haproxy openstack-neutron-lbaas


Configuring  LBaaS

Edit the file – /etc/neutron/neutron.conf

service_plugins = router,lbaas
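
The same change can be scripted instead of editing by hand. The snippet below demonstrates the substitution on a scratch copy in the current directory; on the controller the target would be /etc/neutron/neutron.conf:

```shell
# Scratch demo of enabling the LBaaS service plugin via sed.
cat > neutron.conf <<'EOF'
[DEFAULT]
service_plugins = router
EOF
sed -i 's/^service_plugins = .*/service_plugins = router,lbaas/' neutron.conf
grep '^service_plugins' neutron.conf
```

This rewrites the whole service_plugins line, so it works whether or not lbaas was already listed.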


Edit the file – /etc/neutron/neutron_lbaas.conf



Edit the file – /etc/neutron/lbaas_agent.ini

interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver


Edit the file – /etc/openstack-dashboard/local_settings.py



‘enable_lb’: True,


Restart the services

# service neutron-server restart
# service neutron-lbaas-agent restart


Now you will be able to see “Load Balancers” on OpenStack Dashboard webpage under Project -> Network Menu.


Creating a ROUND_ROBIN Load Balancer between two instances

Get the details of running Instances

root@controller:~# nova list
| ID                                   | Name | Status | Task State | Power State | Networks            |
| a2e61f7a-f701-4e18-a4e4-3240c6e9b946 | lb1  | ACTIVE | -          | Running     | private=    |
| 4cbd5376-6d32-414c-9272-2c22002d1468 | lb2  | ACTIVE | -          | Running     | private=    |

I have two instances, lb1 and lb2, each running an Apache2 server.

VM Name -> VM ID -> Private Network IP




Get network list

root@controller:~# neutron net-list
| id                                   | name    | subnets                                                |
| 297c004b-435b-4741-b2f0-53a011403f61 | lb_net1 | e94e1b6c-aff2-45f2-a13c-a41593125a2d       |
| 7645e3f6-444d-4e4b-ad4f-a9cf49683b2d | public  | f72e1955-b32f-4d49-8a6f-1833d6e2e1f3      | 
|                                      |         | 219765fc-be8d-4ef5-90a2-6c7ef2c961e9      |
| e232d0c0-4363-4b39-b88a-949e177f058a | private | a0cf224b-1f42-4650-b219-b9320d4ea06f       |


Get subnet list

root@controller:~# neutron subnet-list
| id                                   | name       | cidr              | allocation_pools                                     |
| e94e1b6c-aff2-45f2-a13c-a41593125a2d | lb_subnet1 |       | {"start": "", "end": ""}            |
| f72e1955-b32f-4d49-8a6f-1833d6e2e1f3 | public1    |      | {"start": "", "end": ""}          |
| 219765fc-be8d-4ef5-90a2-6c7ef2c961e9 | public     |      | {"start": "", "end": ""}          |
|                                      |            |                   |
| a0cf224b-1f42-4650-b219-b9320d4ea06f | private    |       | {"start": "", "end": ""}            |


Create Pool

root@controller:~# neutron lb-pool-create --name http-pool --lb-method ROUND_ROBIN --protocol HTTP --subnet-id a0cf224b-1f42-4650-b219-b9320d4ea06f
Created a new pool:
| Field                  | Value                                |
| admin_state_up         | True                                 |
| description            |                                      |
| health_monitors        |                                      |
| health_monitors_status |                                      |
| id                     | 2cef1601-dfbe-4e02-beaf-71c303a3009f |
| lb_method              | ROUND_ROBIN                          |
| members                |                                      |
| name                   | http-pool                            |
| protocol               | HTTP                                 |
| provider               | haproxy                              |
| status                 | PENDING_CREATE                       |
| status_description     |                                      |
| subnet_id              | a0cf224b-1f42-4650-b219-b9320d4ea06f |
| tenant_id              | 4664cf886f57480d9e3a1af4bf8a3a65     |
| vip_id                 |                                      |


Check the pool status

root@controller:~# neutron lb-pool-list
| id                                   | name      | provider | lb_method   | protocol | admin_state_up | status |
| 2cef1601-dfbe-4e02-beaf-71c303a3009f | http-pool | haproxy  | ROUND_ROBIN | HTTP     | True           | ACTIVE |


Add members to pool

root@controller:~# neutron lb-member-create --address --protocol-port 80 http-pool
Created a new member:
| Field              | Value                                |
| address            |                             |
| admin_state_up     | True                                 |
| id                 | 4e1a4853-9b98-4571-a306-9daa1ca9f08b |
| pool_id            | 2cef1601-dfbe-4e02-beaf-71c303a3009f |
| protocol_port      | 80                                   |
| status             | PENDING_CREATE                       |
| status_description |                                      |
| tenant_id          | 4664cf886f57480d9e3a1af4bf8a3a65     |
| weight             | 1                                    |

root@controller:~# neutron lb-member-create --address --protocol-port 80 http-pool
Created a new member:
| Field              | Value                                |
| admin_state_up     | True                                 |
| id                 | ed53de4b-cad7-417f-82f3-4777f6bc831b |
| pool_id            | 2cef1601-dfbe-4e02-beaf-71c303a3009f |
| protocol_port      | 80                                   |
| status             | PENDING_CREATE                       |
| status_description |                                      |
| tenant_id          | 4664cf886f57480d9e3a1af4bf8a3a65     |
| weight             | 1                                    |


Check the members are added and ACTIVE

root@controller:~# neutron lb-member-list --sort-key address --sort-dir asc
| id                                   | address     | protocol_port | weight | admin_state_up | status |
| 4e1a4853-9b98-4571-a306-9daa1ca9f08b |    |            80 |      1 | True           | ACTIVE |
| ed53de4b-cad7-417f-82f3-4777f6bc831b |    |            80 |      1 | True           | ACTIVE |


Create Health Monitor

root@controller:~# neutron lb-healthmonitor-create --delay 3 --type HTTP --max-retries 3 --timeout 3
Created a new health_monitor:
| Field          | Value                                |
| admin_state_up | True                                 |
| delay          | 3                                    |
| expected_codes | 200                                  |
| http_method    | GET                                  |
| id             | 479cd68a-fe2d-4234-89d8-13b1c26291d1 |
| max_retries    | 3                                    |
| pools          |                                      |
| tenant_id      | 4664cf886f57480d9e3a1af4bf8a3a65     |
| timeout        | 3                                    |
| type           | HTTP                                 |
| url_path       | /                                    |


Check the status of health monitor

root@controller:~# neutron lb-healthmonitor-list
| id                                   | type | admin_state_up |
| 479cd68a-fe2d-4234-89d8-13b1c26291d1 | HTTP | True           |


Health Monitor ID is  479cd68a-fe2d-4234-89d8-13b1c26291d1
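
When scripting these steps, the monitor ID can be pulled out of the table output instead of being copied by hand. The snippet below parses a saved copy of the lb-healthmonitor-list output; piping the live command into the same awk filter would work identically:

```shell
# Extract the health monitor ID from the CLI table output.
monitor_id=$(awk -F'|' '/HTTP/ {gsub(/ /, "", $2); print $2}' <<'EOF'
| id                                   | type | admin_state_up |
| 479cd68a-fe2d-4234-89d8-13b1c26291d1 | HTTP | True           |
EOF
)
echo "$monitor_id"
```

The ID captured in $monitor_id can then be passed straight to the lb-healthmonitor-associate command in the next step.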

Associate Health Monitor to Pool

root@controller:~# neutron lb-healthmonitor-associate 479cd68a-fe2d-4234-89d8-13b1c26291d1 http-pool
Associated health monitor 479cd68a-fe2d-4234-89d8-13b1c26291d1


Create a virtual IP for pool with HTTP port

root@controller:~# neutron lb-vip-create --name  http-vip --protocol-port 80 --protocol HTTP --subnet-id a0cf224b-1f42-4650-b219-b9320d4ea06f http-pool
Created a new vip:
| Field               | Value                                |
| address             |                             |
| admin_state_up      | True                                 |
| connection_limit    | -1                                   |
| description         |                                      |
| id                  | 8970ebd2-a19d-4650-b6ee-6e18985e823a |
| name                | http-vip                             |
| pool_id             | 2cef1601-dfbe-4e02-beaf-71c303a3009f |
| port_id             | 581babfc-e596-4cae-93a6-3349ed4e5577 |
| protocol            | HTTP                                 |
| protocol_port       | 80                                   |
| session_persistence |                                      |
| status              | PENDING_CREATE                       |
| status_description  |                                      |
| subnet_id           | a0cf224b-1f42-4650-b219-b9320d4ea06f |
| tenant_id           | 4664cf886f57480d9e3a1af4bf8a3a65     |


Check VIP status

root@controller:~# neutron lb-vip-list
| id                                   | name     | address     | protocol | admin_state_up | status |
| 8970ebd2-a19d-4650-b6ee-6e18985e823a | http-vip |    | HTTP     | True           | ACTIVE |


Associate a floating IP to VIP

Add a floating IP from external net

root@controller:~# neutron floatingip-create public
Created a new floating ip:
| Field               | Value                                |
| fixed_ip_address    |                                      |
| floating_ip_address |                            |
| floating_network_id | 7645e3f6-444d-4e4b-ad4f-a9cf49683b2d |
| id                  | 167114de-feb7-4746-b3d3-4e9c801b7375 |
| port_id             |                                      |
| router_id           |                                      |
| status              | DOWN                                 |
| tenant_id           | 4664cf886f57480d9e3a1af4bf8a3a65     |


Get the Port ID, Floating IP ID and associate Floating IP to Port ID of VIP

root@controller:~# neutron floatingip-associate 167114de-feb7-4746-b3d3-4e9c801b7375 581babfc-e596-4cae-93a6-3349ed4e5577
Associated floating IP 167114de-feb7-4746-b3d3-4e9c801b7375


Check the Floating IP list to verify

root@controller:~# neutron floatingip-list --sort-key floating_ip_address --sort-dir asc
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
| 03bc56f2-c78a-4b90-919c-7c89dec548d7 |                  |           |                                      |
| 0e5ce334-18e6-4389-8faa-666fbb86e1bc |                  |           |                                      |
| 12b33a39-89b1-423f-90f9-ad3d7617d536 |                  |           |                                      |
| 14992ab1-19d1-4271-bd6b-86d52bb1a2d3 |                  |           |                                      |
| 167114de-feb7-4746-b3d3-4e9c801b7375 |         |           | 581babfc-e596-4cae-93a6-3349ed4e5577 |
| 314fa02e-9ae6-42b1-9c15-11202aff8489 |                  |           |                                      |
| 888b1e36-5fc9-4aa1-995a-cb6e6834a1e3 |                  |           |                                      |
| 8fc7e5f1-4a08-4ce3-b17a-20212348051f |                  |           |                                      |


Now check in a browser: http://<associated_floating_ip>/