Kubernetes Installation on Ubuntu 16.04



About Kubernetes

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

Below is a step-by-step guide to install Kubernetes on top of Ubuntu 16.04. Here, one machine will act as the master and the other two machines will be the nodes. You can then replicate the same steps to deploy a Kubernetes cluster in production.

Steps to follow for the installation of Kubernetes:

The below steps have to be executed on both master and nodes.

Step 1: Log in as the root user to execute the below commands

$ sudo su
# apt-get update

Step 2: We need to turn off swap space to avoid errors; then open the fstab file and comment out the line with the swap partition.

# swapoff -a
# nano /etc/fstab
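In /etc/fstab, commenting the swap line means prefixing it with a `#`; the device name below is only an example and will differ on your machine:

```
# swap entry disabled so swap stays off across reboots:
#/dev/mapper/ubuntu--vg-swap_1  none  swap  sw  0  0
```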


Step 3: Next, change the hostname of the master and node machines; it is better to have names that are easily recognized as master and node.

# nano /etc/hostname


Step 4: Next, update the hosts file with the IPs of the master and nodes. Execute the below command to find the IP on both the master and the nodes.

# ifconfig


Step 5: Now edit the hosts file to add the IPs of the master and nodes, on both the master and node machines.

# nano /etc/hosts
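The entries follow the usual /etc/hosts format; the addresses and hostnames below are placeholders for your own machines (use the IPs found with ifconfig and the hostnames you set in Step 3):

```
192.168.56.10   kmaster
192.168.56.11   knode1
192.168.56.12   knode2
```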


Step 6: Next, we will make the above IP addresses static. Edit the network interfaces file by executing the below command.

# nano /etc/network/interfaces

Now add the below lines to the file:

auto enp0s8
iface enp0s8 inet static
address <IP-Address-Of-Machine>


Step 7: Now restart the machines.

Step 8: Once the machines are up, we have to install the OpenSSH server on all machines. Execute the below command.

# sudo apt-get install openssh-server

Step 9: Next, we need to install Docker, because Docker will be used for managing the containers in the cluster. Execute the below commands:

# sudo su
# apt-get update
# apt-get install -y docker.io

Step 10: Next, we need to install the three essential components for setting up the Kubernetes environment: kubeadm, kubectl, and kubelet. Before that, add the Kubernetes apt repository by executing the below commands.

# apt-get update && apt-get install -y apt-transport-https curl
# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
# apt-get update

Step 11: Now we will install the 3 essential components. Kubelet is the lowest-level component in Kubernetes; it is responsible for what is running on an individual machine. Kubeadm is used for administrating the Kubernetes cluster. Kubectl is used for controlling the configurations on various nodes inside the cluster.

# apt-get install -y kubelet kubeadm kubectl

Step 12 : Next, we need to update the configuration file of Kubernetes. Execute the below command:

# nano /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Now add the following line after the last "Environment" line in the configuration and save the changes.



Now we have successfully installed Kubernetes on both the master and node machines.
So far, only the Kubernetes environment has been set up. Next, we will complete the installation by moving on to the next two phases, where we will individually set the configurations on each machine.

Steps Only For Kubernetes Master

These steps will only be executed on the master.

Step 1: We will now start our Kubernetes cluster from the master's machine. Fill in the master's IP address and the pod network CIDR, then execute the below command:

# kubeadm init --apiserver-advertise-address=<master-ip> --pod-network-cidr=<pod-network-cidr>


From the output of kubeadm init, we have to execute a few commands to use kubectl and to join nodes to the cluster.
Execute the kubectl setup commands as a non-root user; this will enable us to use kubectl from the CLI.


The kubeadm join command from the output should also be saved for future use. It will be used to join nodes to your cluster.


Step 2 : Execute the below command to enable kubectl.

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Step 3 : To verify, if kubectl is working or not, execute the below command:

$ kubectl get pods -o wide --all-namespaces


Step 4: We will now notice that all the pods are running except one: 'kube-dns'. To resolve this, we will install a pod network. To install the Calico pod network, execute the below command:

$ kubectl apply -f https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml


After some time, you will notice that all pods shift to the running state.


Step 5: Next, we will install the dashboard. Execute the below command:

$ kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml


Step 6: Your dashboard is now ready, with its pod in the running state.


Step 7: By default, the dashboard will not be visible on the Master. Execute the below command:

$ kubectl proxy


To view the dashboard in the browser, navigate to the following address in the browser of your Master


The below will be displayed on the webpage, asking you to enter the credentials.


Next, we will create the service account for the dashboard and get its credentials. Open a new terminal and execute the below commands, or your kubectl proxy command will stop.

Step 8: To create a service account for the dashboard in the default namespace, execute the below command

$ kubectl create serviceaccount dashboard -n default


Step 9 : To add the cluster binding rules to your dashboard account, execute the below command

$ kubectl create clusterrolebinding dashboard-admin -n default \
  --clusterrole=cluster-admin \
  --serviceaccount=default:dashboard


Step 10 : To obtain the token required for your dashboard login, execute the below command

$ kubectl get secret $(kubectl get serviceaccount dashboard -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" | base64 --decode


Step 11: Copy the above token, paste it on the dashboard page by selecting the Token option, and sign in.


Now you have successfully logged in to dashboard.


Steps For Only Kubernetes Node

Now we are going to join the nodes to the cluster. We have to run the join command that we saved when we executed the 'kubeadm init' command on the master.

Execute the below command to join the node.



Now the Kubernetes cluster is ready.


Installing Open Source Hosting Control Panel (ZesleCP) on Ubuntu 16.04


Zesle Control Panel

Secure Web Control Panel for your needs…


Zesle is a popular open-source control panel that anyone can download and install. It is very simple and can be installed with just one command.

System Requirements:

  • Ubuntu 14/16
  • 1Ghz CPU
  • 512MB RAM
  • 10+GB DISK

Zesle is simple and very user-friendly. Using Zesle, you'll be able to do the below tasks:

  • Add multiple domains without hassle;
  • Add multiple sub domains;
  • Install WordPress easily with one-click app;
  • Install free Let’s Encrypt SSL certificates with ease;
  • Configure basic PHP settings;
  • Manage Email accounts;
  • Access phpMyAdmin.

and much more. Let’s see how to install Zesle in your hosting.

Step 1: It’s super-easy to install Zesle. Run the following command with Root privilege.

$ cd /home && curl -o latest -L http://zeslecp.com/release/latest && sh latest

Step 2: The installation will begin, and partway through it will ask for the admin's email address. Provide your email ID and press Enter.

Step 3: You will see the below screen at the end of the installation.


Step 4: Once the installation is complete, it'll show you the temporary password and the login URL.

Step 5: The login URL will be your IP address followed by the port number (2087 is the default). For example: https://<your-server-ip>:2087.

Step 6: Just enter this in your browser and you’ll get the login screen.


Step 7: Use root as your username

Step 8: Copy and paste the temporary root password provided. Once you enter the correct password, the control panel will open. All the options mentioned above will be available on the left side of the UI.


Step 9: In the Dashboard, you can create your account and install WordPress on your domain using "One Click Apps".

Step 10: That completes the installation steps for the free Linux web hosting control panel ZesleCP.


Installation of Open Project Management System on Ubuntu 16.04



Open Project is a web-based management system for location-independent team collaboration, released under the GNU GPL 3 license. It is project management software that provides task management, team collaboration, scrum, etc. Open Project is written in Ruby on Rails and AngularJS. In this tutorial, I will show you how to install and configure the Open Project management system on Ubuntu 16.04. The tool can be installed manually or by using packages from the repository. For this guide, we will install Open Project from the repository.


  •  Ubuntu 16.04.
  •  Good Internet Connectivity.
  •  Root Privileges.

What we will do

  • Update and Upgrade System.
  • Install Open Project Management System.
  • Configure the Open Project System.
  • Testing.

Step 1: Update and Upgrade System

Before installing the Open Project on to the Ubuntu system, update all available repositories and upgrade the Ubuntu system.

Run the following commands.

$ sudo apt update
$ sudo apt upgrade -y

Step 2: Install Open Project Management System

Download the open project key and add it to the system.

$ sudo wget -qO- https://dl.packager.io/srv/opf/openproject-ce/key | sudo apt-key add -

And download the open project repository for Ubuntu 16.04 in the ‘/etc/apt/sources.list.d’ directory.

$ sudo wget -O /etc/apt/sources.list.d/openproject-ce.list \

Now update the Ubuntu repository and install open project using the apt command as shown below.

$ sudo apt update
$ sudo apt-get install openproject -y

Step 3: Configure the Open Project System

Run the Open Project configuration command. A graphical UI screen will appear.

$  sudo openproject configure


Select ‘Install and configure MySQL server locally’ and click ‘OK’. It will automatically install the MySQL server on the system and create the database for the Open Project installation.

For the web server configuration, choose ‘Install apache2 server’ and click ‘OK’. It will automatically install the Apache2 web server and configure the virtual host for the Open Project application.


Now type the domain name for your Open project application, and choose ‘OK’.

Next comes the SSL configuration: choose ‘yes’ if you have purchased SSL certificates, or ‘no’ if you don’t have any.


Skip the Subversion support, GitHub support, and SMTP configuration if not needed.

For memcached, choose ‘Install’ and select ‘OK’ for better Open Project performance.


Finally, installation and configuration of all the packages required for Open Project installation should happen automatically.

Step 4: Testing

Check whether the Open Project service is up and running.

$  sudo service openproject status

Now run the openproject web service using the following command.

$  sudo openproject run web

Now open your web browser and enter your floating IP in the address bar to access the system.


Now click the ‘Sign in’ button to log in to the admin dashboard, initially using ‘admin’ as both the username and the password; you can change this later.

Finally, the installation and configuration for Open Project on Ubuntu 16.04 has been completed successfully.




Apache Virtual Hosts on Ubuntu 14.04


The Apache web server is the most popular way of serving web content on the internet. It accounts for more than half of all active websites on the internet and is extremely powerful and flexible.

Apache breaks its functionality and components into individual units that can be customized and configured independently. The basic unit that describes an individual site or domain is called a Virtual Host.


These designations allow the administrator to use one server to host multiple domains or sites off of a single interface or IP by using a matching mechanism. This is relevant to anyone looking to host more than one site off of a single VPS.

In this document, we will walk you through how to set up Apache virtual hosts on an Ubuntu 14.04 VPS. During this process, you’ll learn how to serve different content to different visitors depending on which domains they are requesting.


  • Before you begin this tutorial, you should create a non-root user.
  • You will also need to have Apache installed in order to work through these steps.


This guide uses the OMegha platform with an Ubuntu 14.04 image.

Let's get started.

First, we need to update the package list.

$ sudo apt-get update


Install Apache

$ sudo apt-get install apache2


For the purposes of this document, my configuration will make a virtual host for infra.com and another for infra1.com

Step 1: Create the Directory Structure

Our document root will be set to individual directories under the /var/www directory. We will create a directory here for both of the virtual hosts we plan on making.

Within each of these directories, we will create a public_html folder that will hold our actual files. This gives us some flexibility in our hosting.

$ sudo mkdir -p /var/www/infra.com/public_html

$ sudo mkdir -p /var/www/infra1.com/public_html

The directory names above correspond to the domain names that we want to serve from our VPS.

Step 2: Grant Permissions

Changing the Ownership

$ sudo chown -R $USER:$USER /var/www/infra.com/public_html

$ sudo chown -R $USER:$USER /var/www/infra1.com/public_html


We should also modify our permissions a little bit to ensure that read access is permitted to the general web directory and all of the files and folders it contains.

$ sudo chmod -R 755 /var/www

Step 3: Create Demo Pages for Each Virtual Host

We have to create an index.html file for each site.

Let’s start with infra.com. We can open up an index.html file in our editor by typing

$ sudo vi /var/www/infra.com/public_html/index.html

In this file, create a simple HTML document that indicates the site it is connected to. My file looks like this:



<html>
  <head>
    <title>Welcome to infra.com!</title>
  </head>
  <body>
    <h1>Success!  The infra.com virtual host is working!</h1>
  </body>
</html>



Save and close the file when you are finished.

We can copy this file to use as the basis for our second site by typing

$ cp /var/www/infra.com/public_html/index.html /var/www/infra1.com/public_html/index.html

Then we can open the file and modify the relevant pieces of information

$ sudo vi /var/www/infra1.com/public_html/index.html



<html>
  <head>
    <title>Welcome to infra1.com!</title>
  </head>
  <body>
    <h1>Success!  The infra1.com virtual host is working!</h1>
  </body>
</html>



Save and close the file.

Step 4: Create New Virtual Host Files

Virtual host files are the files that specify the actual configuration of our virtual hosts and dictate how the Apache web server will respond to various domain requests.

Apache comes with a default virtual host file called 000-default.conf, and we can copy it to create the virtual host file for our first domain.

Creating First Virtual Host File

Start by copying the file for the first domain

$ sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/infra.com.conf

Open the new file in your editor with root privileges

$ sudo vi /etc/apache2/sites-available/infra.com.conf

our virtual host file should look like this

<VirtualHost *:80>

    ServerAdmin admin@infra.com

    ServerName infra.com

    ServerAlias www.infra.com

    DocumentRoot /var/www/infra.com/public_html

    ErrorLog ${APACHE_LOG_DIR}/error.log

    CustomLog ${APACHE_LOG_DIR}/access.log combined

</VirtualHost>

Save and close the file.

Copy first Virtual Host and Customize for Second Domain

Now that we have our first virtual host file established, we can create our second one by copying that file and adjusting it as needed.

Start by copying

$ sudo cp /etc/apache2/sites-available/infra.com.conf /etc/apache2/sites-available/infra1.com.conf

Open the new file with root privileges

$ sudo vi /etc/apache2/sites-available/infra1.com.conf

You now need to modify all of the pieces of information to reference your second domain. When you are finished, it may look something like this

<VirtualHost *:80>

    ServerAdmin admin@infra1.com

    ServerName infra1.com

    ServerAlias www.infra1.com

    DocumentRoot /var/www/infra1.com/public_html

    ErrorLog ${APACHE_LOG_DIR}/error.log

    CustomLog ${APACHE_LOG_DIR}/access.log combined

</VirtualHost>

Save and close the file.

Step 5: Enable the New Virtual Host Files

The virtual host files we created need to be enabled.

We can use the a2ensite tool to enable each of our sites

$ sudo a2ensite infra.com.conf

$ sudo a2ensite infra1.com.conf


Restart the apache server.

$ sudo service apache2 restart

Step 6: Setup Local Hosts File

$ sudo vi /etc/hosts

The details that you need to add are the public IP address of your VPS server followed by the domain you want to use to reach that VPS.

***.***.***.*** infra.com

***.***.***.*** infra1.com

Save and close the file.

This will direct any requests for infra.com and infra1.com on our computer and send them to our server at ***.***.***.***

Step 7: Test Your Results

Now that you have your virtual hosts configured, you can test your setup easily by going to the domains that you configured in your web browser



You should see a page that looks like this

Likewise, you can visit your second page



You will see the file you created for your second site

Step 8: Conclusion

If both of these sites work well, you’ve successfully configured two virtual hosts on the same server.

If you need to access this long term, consider purchasing a domain name for each site you need and setting it up to point to your VPS server.

Techgig Code Gladiators 2018

It was the first hackathon experience of my life. The Times Internet group organised this nationwide hackathon early this year, and its finals were held at Mumbai from 7th June 2018 to 9th June 2018. Our team of 3 from InfraStack-Labs Technologies Pvt Ltd. consisted of Arun Narayanan, Kishore Kumar, and myself (Anand). We participated in the national event called Techgig Code Gladiators 2018, which had 3 major rounds over a period of 2 months.

Each round had different scenarios/use cases, and we had to find a solution for the problem statement. Our team had registered under the category of "Cloud Computing", as we came from a cloud computing background. We had great teamwork, and we were one among the 13 teams selected for the finals under this category.

My journey to the finals started from Coimbatore to Bangalore on the 5th evening to join my team. The next day, 6th June 2018, I reached the office for the final preparation of the hackathon with my teammates. We discussed various aspects of the hackathon and did good homework before participating. After our discussion, all of us left the office to take some rest before our travel to Mumbai.


The next day, 7th June 2018, we had to start early in the morning @ 2 AM to reach the airport, as our flight was scheduled for 5 AM. Arun and Kishore had booked the cab and, on the way, they picked me up at Hebbal, and we reached the airport @ 3 AM. The flight departed on time and we reached Mumbai by 6:30 AM.

Reliance Corporate Park, Thane

On reaching Mumbai airport, the Techgig team had arranged a bus for all finalist teams to reach the venue, Reliance Corporate Park, Navi Mumbai. Our bus started from the airport and reached the venue @ 8:00 AM. On reaching the beautiful Reliance campus, we went straight to the registration desk, got our verification done, and received our tags for the hackathon. Then we were taken to the event location by another bus.

When we entered the venue, the Reliance Corporate Park was really beautiful and huge; it felt like a different world, with participants all over the place. Some were busy with registration, some were having real fun with their colleagues, and some were playing games and more. We completed our registration, had our breakfast, and then we were ready for the hackathon, which was to begin at 9:30 AM.

We were sitting in the main auditorium, where we could see lots of professionals and students from various parts of India with different ideas to win the hackathon. We got a brief summary of the project for the hackathon. It was a 22-hour hackathon, starting on 7th June at 12:30 PM and ending on 8th June at 10:30 AM.

We had 22 hours to complete the project. Exactly at 12:30 we started our work, with a plan to complete and deliver the project on time. Everyone in the auditorium was busy working on their projects.

We had lunch and started our work again; time ran very fast and we had to rush to complete our project. We went for refreshments with snacks and coffee at around 4:30 PM, then got back to work. It literally felt like a steeplechase event; we were really excited to see ourselves surrounded by smart people, all under the same roof, working on different ideas.

At 8 PM, we had our dinner, took some rest, and started our work again. We focused on various aspects to complete the project and continued to work till midnight. We took a quick nap to refresh ourselves and then focused on our work again. Time ran very fast; it was 6:30 AM, and we had four hours until the end of the hackathon. We had to rush to complete the project on time @ 10:30 AM.

Finally, after 21 hours of marathon, we reached the climax; only an hour was left to submit the project/solution. We started to collect the sources and data needed for submission. We were ready, but suddenly there was a technical glitch and we were not able to submit the presentation and the codebase, so we had to contact the support team. They rectified the issue with 15 minutes left for submission; we quickly uploaded the deliverables and waited for our turn to present the project.


We were scheduled @ 2:30 PM for the presentation. We were all well prepared for the demo/presentation, and then came our turn to present the project before the jury. We delivered our best to impress the jury and got appreciation from them for the way we differed from other participants in creating the solution.

Best comment received after our final presentation:

“You guys are the only team who did things very differently”

We were the only team that created solutions on a proprietary cloud platform called "OMegha™ Public Cloud". All other participants did system integration and created their solutions on AWS, IBM Watson, or Azure.


While we waited for the final result, we had the opportunity to roam around the beautiful Reliance Corporate Park. The place was wonderful and a great place to work. We then had the chance to learn about JIO; they presented their new and upcoming technologies. We had some fun, played some games, had photo sessions, and of course the new friends made the experience even more memorable.

At last it was time for the result; we waited eagerly, and ours was the last one to be announced. Unfortunately, we were not among the winning 3, but the appreciation we got for our out-of-the-box thinking in creating the solution was the real happiness, and it was a great experience to participate in the hackathon. It was raining heavily, as the monsoon had already started in Mumbai, and with light-hearted, happy hackathon memories we left for the airport on our way back to Bangalore.

Summary of @InfraStack-Labs Technologies Private Limited  team's experience in participating in #Techgigcodegladiators2018 #cloudcomputing #hackathon

#Event - Code Gladiators 2018 
#Event Registrations 228,880
#6500+ registrations (Cloud Computing Category)

-Artificial Intelligence
-Big Data
-Cloud Computing
-Mobility & Location Services
-Internet of Things
-Machine Learning

#45 days marathon
#20 hours of final offline hackathon coding (Final round)
#15 days of online hackathon coding (1st & 2nd round)
#3 Rounds
#3 problem statement (General Cloud Computing, AI Robot & IoT)
#3 team members
#1 Bot created & delivered  (Vinayak) 
#1 IoT platform & delivered




Centralize Logs from Node.js Applications



  • Installation of Node.js and NPM
  • Installation of Fluentd

Modifying the Config File

Next, please configure Fluentd to use the forward Input plugin as its data source.

$ sudo vi /etc/td-agent/td-agent.conf

The Fluentd daemon should listen on a TCP port. A simple configuration looks like the following:
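A minimal sketch of that configuration, assuming the conventional forward port 24224 and a `fluentd.test` tag prefix (both are conventions, not requirements):

```
# /etc/td-agent/td-agent.conf
# accept events from fluent-logger clients over TCP
<source>
  @type forward
  port 24224
</source>

# write matched events to the td-agent log so they can be verified
<match fluentd.test.**>
  @type stdout
</match>
```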


Restart your agent once these lines are in place.

$ sudo service td-agent restart


Install fluent-logger-node

$ npm install fluent-logger

Now use npm to install your dependencies locally:

$ npm install

Send an event record to Fluentd


This is the simplest web app.
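The app can be sketched as below; it assumes the fluent-logger package installed in the previous step and a td-agent forward input listening on localhost:24224. The `fluentd.test` tag prefix and the event fields are illustrative:

```javascript
// index.js — minimal Node.js web app that emits an event record to Fluentd
const http = require('http');
const logger = require('fluent-logger');

// the tag prefix must match a <match> rule in td-agent.conf
logger.configure('fluentd.test', {
  host: 'localhost',
  port: 24224,
  timeout: 3.0
});

http.createServer((req, res) => {
  // emit one event record per request; the full tag becomes fluentd.test.follow
  logger.emit('follow', { from: 'userA', to: 'userB' });
  res.end('Logged to Fluentd\n');
}).listen(4000);
```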


Run the app and go to http://localhost:4000 in your browser. This will send the logs to Fluentd.

$ node index.js


The logs should be output to /var/log/td-agent/td-agent.log  

Store Logs into MongoDB

Fluentd does 3 things:

  1. It continuously “tails” the access log file.
  2. It parses the incoming log entries into meaningful fields (such as ip, path, etc.) and buffers them.
  3. It writes the buffered data to MongoDB periodically.


Fluentd’s config file

$ sudo vi /etc/td-agent/td-agent.conf

 The Fluentd configuration file should look like this:
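A sketch of such a configuration, assuming an Apache-style access log and a local MongoDB; the log path, database, and collection names are placeholders:

```
# 1. continuously tail the access log and parse each entry into fields
<source>
  @type tail
  format apache2
  path /var/log/apache2/access.log
  pos_file /var/log/td-agent/apache2.access.pos
  tag mongo.apache.access
</source>

# 2-3. buffer the parsed records and flush them into MongoDB periodically
<match mongo.**>
  @type mongodb
  host localhost
  port 27017
  database apache
  collection access
  flush_interval 10s
</match>
```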


Restart your agent once these lines are in place.

$ sudo service td-agent restart

Then, access MongoDB and see the stored data.

$ mongo
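Inside the mongo shell, the stored records can be inspected like this (the `apache` database and `access` collection names depend on how td-agent.conf was set up and are assumptions here):

```
> use apache
switched to db apache
> db.access.find().limit(3)
```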


Fluentd + MongoDB makes real-time log collection simple, easy, and robust.

Installation of MongoDB on Ubuntu 16.04


MongoDB is a free and open-source NoSQL document database used commonly in modern web applications.


MongoDB is one of several database types to arise in the mid-2000s under the NoSQL banner. Instead of using tables and rows as in relational databases, MongoDB is built on an architecture of collections and documents. Documents comprise sets of key-value pairs and are the basic unit of data in MongoDB. Collections contain sets of documents and function as the equivalent of relational database tables.

Like other NoSQL databases, MongoDB supports dynamic schema design, allowing the documents in a collection to have different fields and structures. The database uses a document storage and data interchange format called BSON, which provides a binary representation of JSON-like documents. Automatic sharding enables data in a collection to be distributed across multiple systems for horizontal scalability as data volumes increase.
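To make the document/collection model concrete, a single document in a hypothetical `people` collection might look like the following (the `_id` is generated by MongoDB):

```
{
  "_id"  : ObjectId("..."),
  "name" : "Ada Lovelace",
  "role" : "engineer",
  "tags" : [ "nosql", "mongodb" ]
}
```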

This blog will help you set up MongoDB on your server for a production application environment.


To follow this blog, you will need:

  • One Ubuntu 16.04 server set up by following this initial server setup, including a sudo non-root user.

Adding the MongoDB Repository

MongoDB is already included in the Ubuntu package repositories, but the official MongoDB repository provides the most up-to-date version and is the recommended way of installing the software. In this step, we will add this official repository to our server.

Ubuntu ensures the authenticity of software packages by verifying that they are signed with GPG keys, so we first have to import the key for the official MongoDB repository.

$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927

After successfully importing the key, you will see:

gpg: Total number processed: 1
gpg:        imported: 1    (RSA:  1)

Next, we have to add the MongoDB repository details so apt will know where to download the packages from.

Issue the following command to create a list file for MongoDB.

$ echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.2.list

After adding the repository details, we need to update the packages list.

$ sudo apt-get update

Installing and Verifying MongoDB

Now we can install the MongoDB package itself.

$ sudo apt-get install -y mongodb-org

This command will install several packages containing the latest stable version of MongoDB, along with helpful management tools for the MongoDB server.

Next, start MongoDB with systemctl.

$ sudo systemctl start mongod

You can also use systemctl to check that the service has started properly.

$ sudo systemctl status mongod

mongodb.service - High-performance, schema-free document-oriented database
Loaded: loaded (/etc/systemd/system/mongodb.service; enabled; vendor preset: enabled)
Main PID: 4093 (mongod)
Tasks: 16 (limit: 512)
Memory: 47.1M
CPU: 1.224s
CGroup: /system.slice/mongodb.service
└─4093 /usr/bin/mongod --quiet --config /etc/mongod.conf

The last step is to enable automatically starting MongoDB when the system starts.

$ sudo systemctl enable mongod

The MongoDB server is now configured and running, and you can manage the MongoDB service using the systemctl command (e.g. sudo systemctl stop mongod, sudo systemctl start mongod).

Build RESTful API in Go and MongoDB


Golang is a programming language initially developed at Google in 2007 by Robert Griesemer, Rob Pike, and Ken Thompson. Go is a statically typed language with syntax similar to that of C. It provides garbage collection, type safety, dynamic-typing capability, and many advanced built-in types such as variable-length arrays and key-value maps. It also provides a rich standard library.



MongoDB is a free and open-source cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with schemas.



One of the most popular types of API is REST or, as they're sometimes known, RESTful APIs. REST, or Representational State Transfer, was designed to take advantage of existing protocols. While REST can be used over nearly any protocol, for web APIs it typically takes advantage of HTTP. This means that developers have no need to install additional software or libraries when creating a REST API.


Installation of  MongoDB Server:

Import the GPG key for the official MongoDB repository.

$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10

Add the MongoDB repository details so APT knows where to download the packages from.

$ echo "deb http://repo.mongodb.org/apt/ubuntu "$(lsb_release -sc)"/mongodb-org/3.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.0.list

After adding the repository details, we need to update the packages list.

$ sudo apt-get update

Install MongoDB

$ sudo apt-get install -y mongodb-org



Installation of  Go:

Download the latest package for Go by running this command, which will pull down the Go package file, and save it to your current working directory.

$ sudo curl -O https://storage.googleapis.com/golang/go1.6.linux-amd64.tar.gz
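The downloaded archive still has to be unpacked before the Go paths below will resolve; one way to do that, consistent with the GOROOT=$HOME/go setting used in the next step (and assuming the archive was saved to the current directory), is:

```shell
# unpack the Go toolchain; the archive extracts into a "go" directory
tar -C "$HOME" -xzf go1.6.linux-amd64.tar.gz
```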

Now we can set the Go paths

$ sudo vi ~/.profile

At the end of the file, add these lines:

export GOROOT=$HOME/go
export PATH=$PATH:$GOROOT/bin

With the appropriate line pasted into your profile, save and close the file. Next, refresh your profile by running:

$ source ~/.profile



Build RESTful API in Go and MongoDB


The REST API service will create endpoints to manage a store of movies. The operations that our endpoints will allow are:

GET       /movies            Get the list of movies

GET       /movies/:id        Find a movie by its id

POST      /movies            Create a new movie

PUT       /movies            Update an existing movie

DELETE    /movies            Delete an existing movie

Before we begin, we need to get the packages to setup the API:

  • toml : Parse the configuration file (MongoDB server & credentials)
  • mux : Request router and dispatcher for matching incoming requests to their respective handler
  • mgo : MongoDB driver

$ go get github.com/BurntSushi/toml gopkg.in/mgo.v2 github.com/gorilla/mux

After installing the dependencies, we create a file called "app.go" with the following content:

package main

import (
	"encoding/json"
	"log"
	"net/http"

	"github.com/gorilla/mux"
	"gopkg.in/mgo.v2/bson"
)

// dao is an instance of the MoviesDAO defined in the dao package below;
// adjust the import path to match your own project layout.
var dao = MoviesDAO{}

// init connects to MongoDB before the server starts; the server address
// and database name here are examples -- adjust them for your setup.
func init() {
	dao.Server = "localhost"
	dao.Database = "movies_db"
	dao.Connect()
}

func respondWithError(w http.ResponseWriter, code int, msg string) {
	respondWithJson(w, code, map[string]string{"error": msg})
}

func respondWithJson(w http.ResponseWriter, code int, payload interface{}) {
	response, _ := json.Marshal(payload)
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(code)
	w.Write(response)
}

func AllMoviesEndPoint(w http.ResponseWriter, r *http.Request) {
	movies, err := dao.FindAll()
	if err != nil {
		respondWithError(w, http.StatusInternalServerError, err.Error())
		return
	}
	respondWithJson(w, http.StatusOK, movies)
}

func FindMovieEndpoint(w http.ResponseWriter, r *http.Request) {
	params := mux.Vars(r)
	movie, err := dao.FindById(params["id"])
	if err != nil {
		respondWithError(w, http.StatusBadRequest, "Invalid Movie ID")
		return
	}
	respondWithJson(w, http.StatusOK, movie)
}

func CreateMovieEndPoint(w http.ResponseWriter, r *http.Request) {
	defer r.Body.Close()
	var movie Movie
	if err := json.NewDecoder(r.Body).Decode(&movie); err != nil {
		respondWithError(w, http.StatusBadRequest, "Invalid request payload")
		return
	}
	movie.ID = bson.NewObjectId()
	if err := dao.Insert(movie); err != nil {
		respondWithError(w, http.StatusInternalServerError, err.Error())
		return
	}
	respondWithJson(w, http.StatusCreated, movie)
}

func UpdateMovieEndPoint(w http.ResponseWriter, r *http.Request) {
	defer r.Body.Close()
	var movie Movie
	if err := json.NewDecoder(r.Body).Decode(&movie); err != nil {
		respondWithError(w, http.StatusBadRequest, "Invalid request payload")
		return
	}
	if err := dao.Update(movie); err != nil {
		respondWithError(w, http.StatusInternalServerError, err.Error())
		return
	}
	respondWithJson(w, http.StatusOK, map[string]string{"result": "success"})
}

func DeleteMovieEndPoint(w http.ResponseWriter, r *http.Request) {
	defer r.Body.Close()
	var movie Movie
	if err := json.NewDecoder(r.Body).Decode(&movie); err != nil {
		respondWithError(w, http.StatusBadRequest, "Invalid request payload")
		return
	}
	if err := dao.Delete(movie); err != nil {
		respondWithError(w, http.StatusInternalServerError, err.Error())
		return
	}
	respondWithJson(w, http.StatusOK, map[string]string{"result": "success"})
}

func main() {
	r := mux.NewRouter()
	r.HandleFunc("/movies", AllMoviesEndPoint).Methods("GET")
	r.HandleFunc("/movies", CreateMovieEndPoint).Methods("POST")
	r.HandleFunc("/movies", UpdateMovieEndPoint).Methods("PUT")
	r.HandleFunc("/movies", DeleteMovieEndPoint).Methods("DELETE")
	r.HandleFunc("/movies/{id}", FindMovieEndpoint).Methods("GET")
	if err := http.ListenAndServe(":3000", r); err != nil {
		log.Fatal(err)
	}
}

The code above creates a handler for each endpoint, then starts an HTTP server on port 3000.

Now create a basic Movie model. In Go, we use the struct keyword to create a model:

type Movie struct {
	ID          bson.ObjectId `bson:"_id" json:"id"`
	Name        string        `bson:"name" json:"name"`
	CoverImage  string        `bson:"cover_image" json:"cover_image"`
	Description string        `bson:"description" json:"description"`
}

Create the Data Access Object to manage database operations.

package dao

import (
	"log"

	mgo "gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

type MoviesDAO struct {
	Server   string
	Database string
}

var db *mgo.Database

const (
	COLLECTION = "movies"
)

// Connect establishes a connection to the MongoDB server.
func (m *MoviesDAO) Connect() {
	session, err := mgo.Dial(m.Server)
	if err != nil {
		log.Fatal(err)
	}
	db = session.DB(m.Database)
}

func (m *MoviesDAO) FindAll() ([]Movie, error) {
	var movies []Movie
	err := db.C(COLLECTION).Find(bson.M{}).All(&movies)
	return movies, err
}

func (m *MoviesDAO) FindById(id string) (Movie, error) {
	var movie Movie
	err := db.C(COLLECTION).FindId(bson.ObjectIdHex(id)).One(&movie)
	return movie, err
}

func (m *MoviesDAO) Insert(movie Movie) error {
	err := db.C(COLLECTION).Insert(&movie)
	return err
}

func (m *MoviesDAO) Delete(movie Movie) error {
	err := db.C(COLLECTION).Remove(&movie)
	return err
}

func (m *MoviesDAO) Update(movie Movie) error {
	err := db.C(COLLECTION).UpdateId(movie.ID, &movie)
	return err
}

To run the server locally, type the following command:

$ go run app.go

Create a Movie

(screenshot: creating a movie with a POST /movies request)

List of Movies

(screenshot: listing movies with a GET /movies request)

Ansible Installation and Setup



Ansible is a configuration management system. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero-downtime rolling updates. This is handy if you need to deploy your application on multiple servers without having to do so manually on each one, and it makes adding identical servers to your cluster very easy.

Ansible also gives you centralized management for all of your servers in one place: you can run an apt-get update on all servers at once!

Ansible does deployment and management over SSH, managing machines in an agent-less manner. Because OpenSSH is one of the most peer-reviewed open-source components, security exposure is greatly reduced. Ansible is decentralized: it relies on your existing OS credentials to control access to remote machines.

In this tutorial we’ll see how we can install Ansible on Ubuntu 14.04.

Step 1: Installing Ansible

To install the latest version of Ansible, run the following commands:

$ sudo apt-get install software-properties-common
$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt-get update
$ sudo apt-get install ansible

You need to put all the servers that you want to manage with Ansible in the /etc/ansible/hosts file.

You will need to comment out the example lines, then go to the end of the hosts file to create a group. Say you have a cluster of web and database servers: you could create two separate groups, web and db. If you then want to make a change on all database servers, you can target db so that only the database servers are affected, and not other servers such as the web servers in the web group.
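For example, an /etc/ansible/hosts inventory with those two groups might look like this (the hostnames and addresses are placeholders):

```
# /etc/ansible/hosts -- example inventory; hostnames/IPs are placeholders
[web]
web1.example.com
web2.example.com

[db]
db1.example.com
10.0.0.12
```

You can then target a single group, e.g. ansible db -m ping, to touch only the database servers.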

Step 2: Setting up SSH keys

As we mentioned above, Ansible primarily communicates with client computers through SSH. While it certainly has the ability to handle password-based SSH authentication, SSH keys help keep things simple.

We can set up SSH keys in two different ways depending on whether you already have a key you want to use.

Create a New SSH Key Pair

If you do not already have an SSH key pair that you would like to use for Ansible administration, we can create one now on your Ansible VPS.

We will create an SSH key pair that Ansible will use to authenticate with the hosts it administers.

As the user you will be controlling Ansible with, create an RSA key-pair by typing:

$ ssh-keygen

You will be asked to specify the file location of the created key pair, a passphrase, and the passphrase confirmation. Press ENTER through all of these to accept the default values.

Your new keys are available in your user’s ~/.ssh directory. The public key (the one you can share) is called id_rsa.pub. The private key (the one that you keep secure) is called id_rsa. Copy the content of the public key into the authorized_keys file on each target server (for example with ssh-copy-id) to set up SSH communication.

Step 3: Test Ansible

To see if you can ping all your servers in the hosts file, you can use the following command:

$ ansible all -m ping

This confirms whether or not your servers are online.

Installing Asterisk on Ubuntu 16.04




Asterisk is a software implementation of a telephone private branch exchange (PBX). It allows telephones interfaced with a variety of hardware technologies to make calls to one another, and to connect to telephony services, such as the public switched telephone network (PSTN) and voice over Internet Protocol (VoIP) services. Its name comes from the asterisk symbol “*”.


Some of the many features of Asterisk include:

  • The Asterisk software includes many features available in commercial and proprietary PBX systems: voice mail, conference calling, interactive voice response, and automatic call distribution.
  • Users can create new functionality by writing dial plan scripts in several of Asterisk’s own extension languages, by adding custom loadable modules written in C, or by implementing Asterisk Gateway Interface (AGI) programs using any programming language capable of communicating via the standard streams (stdin and stdout) or by network TCP sockets.
  • Asterisk supports several standard voice over IP protocols, including the Session Initiation Protocol (SIP), the Media Gateway Control Protocol (MGCP), and H.323.
  • Asterisk supports most SIP telephones, acting both as registrar and back-to-back user agent.
  • By supporting a variety of traditional and VoIP telephony services, Asterisk allows deployers to build telephone systems, or migrate existing systems to new technologies.

(Asterisk architecture diagram)

Install Asterisk from Source

After logging in to your Ubuntu server as a regular user, issue the following command to switch to the root user.

$ sudo su 

Now you are root, but you may need to set the root password with the following command.

# passwd

The next step is to install the initial dependencies for Asterisk.

# apt-get install build-essential wget libssl-dev libncurses5-dev libnewt-dev libxml2-dev linux-headers-$(uname -r) libsqlite3-dev uuid-dev git subversion

Installing Asterisk

Now that we are root and the dependencies are satisfied, we can move to the /usr/src/ directory and download the latest Asterisk version there.

# cd /usr/src
# wget http://downloads.asterisk.org/pub/telephony/asterisk/asterisk-15-current.tar.gz

Next we unpack it.

# tar zxvf asterisk-15-current.tar.gz

Now we need to enter the newly unpacked directory.

# cd asterisk-15*

Before we actually compile the Asterisk code, we need ‘pjproject’, as Asterisk 15 introduces support for PJSIP. So we will compile it first:

# git clone https://github.com/asterisk/pjproject pjproject
# cd pjproject
# ./configure --prefix=/usr --enable-shared --disable-sound --disable-resample --disable-video --disable-opencore-amr CFLAGS='-O2 -DNDEBUG'
# make dep
# make && make install
# ldconfig
# ldconfig -p |grep pj

Configuring Asterisk

And now we commence to configuring and compiling the Asterisk code.

# cd ..
# contrib/scripts/get_mp3_source.sh
# contrib/scripts/install_prereq install

This will install mp3 tones and satisfy additional dependencies, which might take some time and will ask you for your country code. The following command will then compile and install Asterisk.

# ./configure && make menuselect && make && make install

When that is finished, to avoid writing hundreds of config files yourself, you will normally want to run this command, which creates an initial configuration for you:

# make samples

To have the startup script installed and enabled so that Asterisk starts on every boot, we run make config, followed by ldconfig:

# make config
# ldconfig

Now we can start asterisk for the first time and see if it actually works.

# /etc/init.d/asterisk start

and then we can enter asterisk console with command.

# asterisk -rvvv

Now we need to do additional steps to make it run as asterisk user. First we need to stop asterisk.

# systemctl stop asterisk

Then we need to add group and user named asterisk.

# groupadd asterisk
# useradd -d /var/lib/asterisk -g asterisk asterisk

Asterisk needs to be configured to start as the user we just created. We could edit /etc/default/asterisk by hand, but it is quicker to use the following two sed commands.

# sed -i 's/#AST_USER="asterisk"/AST_USER="asterisk"/g' /etc/default/asterisk
# sed -i 's/#AST_GROUP="asterisk"/AST_GROUP="asterisk"/g' /etc/default/asterisk

To run properly, the asterisk user needs to be given ownership of all the essential Asterisk directories.

# chown -R asterisk:asterisk /var/spool/asterisk /var/run/asterisk /etc/asterisk /var/{lib,log,spool}/asterisk /usr/lib/asterisk

The asterisk.conf file also needs to be edited to uncomment the lines for runuser and rungroup:

# sed -i 's/;runuser = asterisk/runuser = asterisk/g' /etc/asterisk/asterisk.conf
# sed -i 's/;rungroup = asterisk/rungroup = asterisk/g' /etc/asterisk/asterisk.conf

When this is done, reboot the server so that Asterisk is brought up automatically by systemd, and then type asterisk -rvvv to enter the Asterisk console.

# asterisk -rvvv
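Once you are in the console, a quick way to confirm the dialplan machinery works is a minimal extensions.conf entry; the context name and extension number below are just examples:

```
; /etc/asterisk/extensions.conf -- hypothetical minimal dialplan
[demo-test]
exten => 100,1,Answer()
 same => n,Playback(hello-world)
 same => n,Hangup()
```

After saving, run dialplan reload from the Asterisk console; a SIP phone registered into this context should then be able to dial 100 and hear the sample prompt.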