Installation of MongoDB on Ubuntu 16.04

MongoDB is a free and open-source NoSQL document database used commonly in modern web applications.


MongoDB is one of several database types to arise in the mid-2000s under the NoSQL banner. Instead of using tables and rows as in relational databases, MongoDB is built on an architecture of collections and documents. Documents comprise sets of key-value pairs and are the basic unit of data in MongoDB. Collections contain sets of documents and function as the equivalent of relational database tables.

Like other NoSQL databases, MongoDB supports dynamic schema design, allowing the documents in a collection to have different fields and structures. The database uses a document storage and data interchange format called BSON, which provides a binary representation of JSON-like documents. Automatic sharding enables data in a collection to be distributed across multiple systems for horizontal scalability as data volumes increase.

This blog will help you set up MongoDB on your server for a production application environment.

Prerequisites

To follow this blog, you will need:

  • One Ubuntu 16.04 server set up by following an initial server setup guide, including a sudo non-root user.

Adding the MongoDB Repository

MongoDB is already included in the Ubuntu package repositories, but the official MongoDB repository provides the most up-to-date version and is the recommended way to install the software. In this step, we will add this official repository to our server.

Ubuntu ensures the authenticity of software packages by verifying that they are signed with GPG keys, so we first have to import the key for the official MongoDB repository.

$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927

After successfully importing the key, you will see:

gpg: Total number processed: 1

gpg:        imported: 1    (RSA:  1)

Next, we have to add the MongoDB repository details so apt will know where to download the packages from.

Issue the following command to create a list file for MongoDB.

$ echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.2.list

After adding the repository details, we need to update the packages list.

$ sudo apt-get update

Installing and Verifying MongoDB

Now we can install the MongoDB package itself.

$ sudo apt-get install -y mongodb-org

This command will install several packages containing the latest stable version of MongoDB, along with helpful management tools for the MongoDB server.

Next, start MongoDB with systemctl.

$ sudo systemctl start mongod

You can also use systemctl to check that the service has started properly.

$ sudo systemctl status mongod

 

  • mongodb.service – High-performance, schema-free document-oriented database

Loaded: loaded (/etc/systemd/system/mongodb.service; enabled; vendor preset: enabled)

Active: active (running) since Mon 2016-04-25 14:57:20 EDT; 1min 30s ago

Main PID: 4093 (mongod)

Tasks: 16 (limit: 512)

Memory: 47.1M

CPU: 1.224s

CGroup: /system.slice/mongodb.service

└─4093 /usr/bin/mongod --quiet --config /etc/mongod.conf

The last step is to enable MongoDB to start automatically when the system boots.

$ sudo systemctl enable mongod

The MongoDB server is now configured and running, and you can manage the MongoDB service using the systemctl command (e.g. sudo systemctl stop mongod, sudo systemctl start mongod).
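
As an optional sanity check that the server is accepting connections, the mongo shell that ships with the mongodb-org packages can run a one-off command:

$ mongo --eval 'db.runCommand({ connectionStatus: 1 })'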


Build RESTful API in Go and MongoDB

Go

Golang is a programming language initially developed at Google in 2007 by Robert Griesemer, Rob Pike, and Ken Thompson. Go is a statically typed language with syntax similar to that of C. It provides garbage collection, type safety, and many advanced built-in types such as variable-length arrays (slices) and key-value maps, and it ships with a rich standard library.

 

MongoDB

MongoDB is a free and open-source cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with schemas.

 

REST API

One of the most popular types of API is REST or, as they're sometimes known, RESTful APIs. REST or RESTful APIs were designed to take advantage of existing protocols. While REST (Representational State Transfer) can be used over nearly any protocol, for web APIs it typically takes advantage of HTTP. This means that developers do not need to install additional software or libraries when creating a REST API.

 

Installation of  MongoDB Server:

Import the GPG key for the official MongoDB repository.

$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10

Add the MongoDB repository details so APT knows where to download the packages from.

$ echo "deb http://repo.mongodb.org/apt/ubuntu "$(lsb_release -sc)"/mongodb-org/3.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.0.list

After adding the repository details, we need to update the packages list.

$ sudo apt-get update

Install MongoDB

$ sudo apt-get install -y mongodb-org

 

 

Installation of  Go:

Download the latest package for Go by running this command, which will pull down the Go package file, and save it to your current working directory.

$ sudo curl -O https://storage.googleapis.com/golang/go1.6.linux-amd64.tar.gz
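
The archive still needs to be unpacked into the location that GOROOT will point at below; since the profile lines that follow use $HOME/go, one way to do this (assuming the default file name from the download above) is:

$ tar -C $HOME -xzf go1.6.linux-amd64.tar.gz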

Now we can set the Go paths

$ sudo vi ~/.profile

At the end of the file, add this line:

export GOROOT=$HOME/go
export PATH=$PATH:$GOROOT/bin

With the appropriate lines pasted into your profile, save and close the file. Next, refresh your profile by running:

$ source ~/.profile

 

 

Build RESTful API in Go and MongoDB

 

The REST API service will create endpoints to manage a store of movies. The operations that our endpoints will allow are:

GET     /movies          Get the list of movies
GET     /movies/:id      Find a movie by its ID
POST    /movies          Create a new movie

Before we begin, we need to get the packages required to set up the API:

  • toml: parses the configuration file (MongoDB server & credentials)
  • mux: request router and dispatcher for matching incoming requests to their respective handler
  • mgo: MongoDB driver
$ go get github.com/BurntSushi/toml gopkg.in/mgo.v2 github.com/gorilla/mux
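
The toml dependency reads the MongoDB connection details from a small configuration file. The post does not show that file, so the snippet below is only a sketch; the file name and keys are assumptions chosen to mirror the Server and Database fields of the DAO defined further down:

$ cat > config.toml <<'EOF'
server   = "localhost:27017"
database = "movies_db"
EOF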

After installing the dependencies, we create a file called "app.go" with the following content:

package main

import (
	"encoding/json"
	"log"
	"net/http"

	"github.com/gorilla/mux"
	"gopkg.in/mgo.v2/bson"

	// dot imports so the Movie model and MoviesDAO shown later in this post
	// can be referenced without a package prefix
	. "github.com/mlabouardy/movies-restapi/dao"
	. "github.com/mlabouardy/movies-restapi/models"
)

// dao gives access to the MongoDB operations defined later in this post.
// Its Server and Database fields are normally read from the toml
// configuration file; example values are used here.
var dao = MoviesDAO{Server: "localhost:27017", Database: "movies_db"}

// GET the list of movies
func AllMoviesEndPoint(w http.ResponseWriter, r *http.Request) {
	movies, err := dao.FindAll()
	if err != nil {
		respondWithError(w, http.StatusInternalServerError, err.Error())
		return
	}
	respondWithJson(w, http.StatusOK, movies)
}

// GET a movie by its ID
func FindMovieEndpoint(w http.ResponseWriter, r *http.Request) {
	params := mux.Vars(r)
	movie, err := dao.FindById(params["id"])
	if err != nil {
		respondWithError(w, http.StatusBadRequest, "Invalid Movie ID")
		return
	}
	respondWithJson(w, http.StatusOK, movie)
}

// POST a new movie
func CreateMovieEndPoint(w http.ResponseWriter, r *http.Request) {
	defer r.Body.Close()
	var movie Movie
	if err := json.NewDecoder(r.Body).Decode(&movie); err != nil {
		respondWithError(w, http.StatusBadRequest, "Invalid request payload")
		return
	}
	movie.ID = bson.NewObjectId()
	if err := dao.Insert(movie); err != nil {
		respondWithError(w, http.StatusInternalServerError, err.Error())
		return
	}
	respondWithJson(w, http.StatusCreated, movie)
}

// PUT an existing movie
func UpdateMovieEndPoint(w http.ResponseWriter, r *http.Request) {
	defer r.Body.Close()
	var movie Movie
	if err := json.NewDecoder(r.Body).Decode(&movie); err != nil {
		respondWithError(w, http.StatusBadRequest, "Invalid request payload")
		return
	}
	if err := dao.Update(movie); err != nil {
		respondWithError(w, http.StatusInternalServerError, err.Error())
		return
	}
	respondWithJson(w, http.StatusOK, map[string]string{"result": "success"})
}

// DELETE an existing movie
func DeleteMovieEndPoint(w http.ResponseWriter, r *http.Request) {
	defer r.Body.Close()
	var movie Movie
	if err := json.NewDecoder(r.Body).Decode(&movie); err != nil {
		respondWithError(w, http.StatusBadRequest, "Invalid request payload")
		return
	}
	if err := dao.Delete(movie); err != nil {
		respondWithError(w, http.StatusInternalServerError, err.Error())
		return
	}
	respondWithJson(w, http.StatusOK, map[string]string{"result": "success"})
}

// Helpers used by the handlers above to write JSON responses.
func respondWithError(w http.ResponseWriter, code int, msg string) {
	respondWithJson(w, code, map[string]string{"error": msg})
}

func respondWithJson(w http.ResponseWriter, code int, payload interface{}) {
	response, _ := json.Marshal(payload)
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(code)
	w.Write(response)
}

func main() {
	dao.Connect()
	r := mux.NewRouter()
	r.HandleFunc("/movies", AllMoviesEndPoint).Methods("GET")
	r.HandleFunc("/movies", CreateMovieEndPoint).Methods("POST")
	r.HandleFunc("/movies", UpdateMovieEndPoint).Methods("PUT")
	r.HandleFunc("/movies", DeleteMovieEndPoint).Methods("DELETE")
	r.HandleFunc("/movies/{id}", FindMovieEndpoint).Methods("GET")
	if err := http.ListenAndServe(":3000", r); err != nil {
		log.Fatal(err)
	}
}

The code above defines a handler for each endpoint and then starts an HTTP server on port 3000.

Now create a basic Movie model. In Go, we use the struct keyword to define a model:

type Movie struct {
	ID          bson.ObjectId `bson:"_id" json:"id"`
	Name        string        `bson:"name" json:"name"`
	CoverImage  string        `bson:"cover_image" json:"cover_image"`
	Description string        `bson:"description" json:"description"`
}

Create the Data Access Object to manage database operations.

package dao

import (
	"log"

	. "github.com/mlabouardy/movies-restapi/models" // dot import so the Movie model can be used without a package prefix
	mgo "gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

type MoviesDAO struct {
	Server   string
	Database string
}

var db *mgo.Database

const (
	COLLECTION = "movies"
)

// Connect establishes a connection to the MongoDB server and selects the database.
func (m *MoviesDAO) Connect() {
	session, err := mgo.Dial(m.Server)
	if err != nil {
		log.Fatal(err)
	}
	db = session.DB(m.Database)
}

// FindAll returns the list of movies in the collection.
func (m *MoviesDAO) FindAll() ([]Movie, error) {
	var movies []Movie
	err := db.C(COLLECTION).Find(bson.M{}).All(&movies)
	return movies, err
}

// FindById looks up a movie by its ObjectId.
func (m *MoviesDAO) FindById(id string) (Movie, error) {
	var movie Movie
	err := db.C(COLLECTION).FindId(bson.ObjectIdHex(id)).One(&movie)
	return movie, err
}

// Insert adds a new movie to the collection.
func (m *MoviesDAO) Insert(movie Movie) error {
	err := db.C(COLLECTION).Insert(&movie)
	return err
}

// Delete removes an existing movie from the collection.
func (m *MoviesDAO) Delete(movie Movie) error {
	err := db.C(COLLECTION).Remove(&movie)
	return err
}

// Update replaces an existing movie, matched by its ID.
func (m *MoviesDAO) Update(movie Movie) error {
	err := db.C(COLLECTION).UpdateId(movie.ID, &movie)
	return err
}

To run the server locally, type the following command:

$ go run app.go
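
Once the server is listening on port 3000, the endpoints can be exercised with curl; the JSON fields follow the Movie model defined above and the values are only examples:

$ curl -X POST http://localhost:3000/movies -H 'Content-Type: application/json' -d '{"name":"Inception","cover_image":"inception.jpg","description":"Sci-fi thriller"}'
$ curl http://localhost:3000/movies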

Create a Movie

List of Movies

Ansible installation and setting up


Ansible is a configuration management system. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero-downtime rolling updates. This is handy if you need to deploy your application on multiple servers without having to do it manually on each one. You can also add identical servers to your cluster.

Ansible provides configuration management so you can add identical servers to your cluster very easily. You can also do centralized management for all of your servers in one place. You can run an apt-get update on all servers at once!

Ansible does deployment and management over SSH. It manages machines in an agent-less manner. Because OpenSSH is one of the most peer-reviewed open source components, security exposure is greatly reduced. Ansible is decentralized: it relies on your existing OS credentials to control access to remote machines.

In this tutorial we’ll see how we can install Ansible on Ubuntu 14.04.

Step 1: Installing Ansible

To install the latest version of Ansible, run the following commands:

$ sudo apt-get install software-properties-common

$ sudo apt-add-repository ppa:ansible/ansible

$ sudo apt-get update

$ sudo apt-get install ansible

You need to put all the servers that you want to manage with Ansible in the /etc/ansible/hosts file.

Comment out the example entries, then go to the end of the hosts file and create your groups. Say you have a cluster of web and database servers: you could create two separate groups, web and db. If you then want to make a change on all database servers, you can target db so that only the database servers are affected and not the web servers in the web group, as in the example below.
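
For example, assuming the web and db groups described above (the hostnames are placeholders), the entries can be appended like this:

$ sudo tee -a /etc/ansible/hosts <<'EOF'
[web]
web1.example.com
web2.example.com

[db]
db1.example.com
db2.example.com
EOF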

Step 2: Setting up SSH keys

As we mentioned above, Ansible primarily communicates with client computers through SSH. While it certainly has the ability to handle password-based SSH authentication, SSH keys help keep things simple.

We can set up SSH keys in two different ways depending on whether you already have a key you want to use.

Create a New SSH Key Pair

If you do not already have an SSH key pair that you would like to use for Ansible administration, we can create one now on your Ansible VPS.

We will create an SSH key pair that Ansible will use to authenticate with the hosts it administers.

As the user you will be controlling Ansible with, create an RSA key-pair by typing:

$ ssh-keygen

You will be asked to specify the file location of the created key pair, a passphrase, and the passphrase confirmation. Press ENTER through all of these to accept the default values.

Your new keys are available in your user's ~/.ssh directory. The public key (the one you can share) is called id_rsa.pub. The private key (the one that you keep secure) is called id_rsa. You can copy the contents of the public key into the authorized_keys file on the target servers to set up SSH access, as shown below.
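
The usual shortcut for this is ssh-copy-id, which appends your public key to the remote user's authorized_keys file (the user and host below are placeholders):

$ ssh-copy-id user@your_server_ip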

Step 3: Test Ansible

To see if you can ping all your servers in the hosts file, you can use the following command:

$ ansible all -m ping

 

This confirms whether or not your servers are online.

Installing Asterisk on Ubuntu 16.04


Asterisk is a software implementation of a telephone private branch exchange (PBX). It allows telephones interfaced with a variety of hardware technologies to make calls to one another, and to connect to telephony services, such as the public switched telephone network (PSTN) and voice over Internet Protocol (VoIP) services. Its name comes from the asterisk symbol “*”.


Some of the many features of Asterisk include:

  • The Asterisk software includes many features available in commercial and proprietary PBX systems: voice mail, conference calling, interactive voice response, and automatic call distribution.
  • Users can create new functionality by writing dial plan scripts in several of Asterisk's own extensions languages, by adding custom loadable modules written in C, or by implementing Asterisk Gateway Interface (AGI) programs using any programming language capable of communicating via the standard streams system (stdin and stdout) or by network TCP sockets.
  • Asterisk supports several standard voice over IP protocols, including the Session Initiation Protocol (SIP), the Media Gateway Control Protocol (MGCP), and H.323.
  • Asterisk supports most SIP telephones, acting both as registrar and back-to-back user agent.
  • By supporting a variety of traditional and VoIP telephony services, Asterisk allows deployers to build telephone systems, or migrate existing systems to new technologies.


 Install Asterisk from Source

After logging in to your Ubuntu server as a regular user, issue the following command to switch to the root user.

$ sudo su 

Now you are root. If the root account does not yet have a password, set one with the following command.

# passwd

The next step is to install the initial dependencies for Asterisk.

# apt-get install build-essential wget libssl-dev libncurses5-dev libnewt-dev libxml2-dev linux-headers-$(uname -r) libsqlite3-dev uuid-dev git subversion

Installing Asterisk

Now that we are root and the dependencies are satisfied, we can move to the /usr/src/ directory and download the latest Asterisk version there.

# cd /usr/src
# wget http://downloads.asterisk.org/pub/telephony/asterisk/asterisk-15-current.tar.gz

Next we unpack it.

# tar zxvf asterisk-15-current.tar.gz

Now we need to enter the newly unpacked directory.

# cd asterisk-15*

Before we actually compile the Asterisk code, we need 'pjproject', as Asterisk 15 introduces support for PJSIP. We will compile it first:

# git clone git://github.com/asterisk/pjproject pjproject
# cd pjproject
# ./configure --prefix=/usr --enable-shared --disable-sound --disable-resample --disable-video --disable-opencore-amr CFLAGS='-O2 -DNDEBUG'
# make dep
# make && make install
# ldconfig
# ldconfig -p |grep pj

Configuring Asterisk

Now we can configure and compile the Asterisk code.

# cd ..
# contrib/scripts/get_mp3_source.sh
# contrib/scripts/install_prereq install

This will install MP3 tones and satisfy additional dependencies, which might take some time and will ask for your country code. The following command will compile and install Asterisk.

# ./configure && make menuselect && make && make install

When that is finished, to avoid writing hundreds of config files yourself, you will normally want to run this command after the install, which creates an initial configuration for you:

# make samples

To install the startup script and enable Asterisk to start on every boot, we run make config, followed by ldconfig:

# make config
# ldconfig

Now we can start asterisk for the first time and see if it actually works.

# /etc/init.d/asterisk start

Then we can enter the Asterisk console with the following command.

# asterisk -rvvv

Now we need a few additional steps to make Asterisk run as the asterisk user. First, stop Asterisk.

# systemctl stop asterisk

Then we need to add group and user named asterisk.

# groupadd asterisk
# useradd -d /var/lib/asterisk -g asterisk asterisk

Asterisk needs to be configured to start as the user we just created. We can edit /etc/default/asterisk by hand, but it is more efficient to use the following two sed commands.

# sed -i 's/#AST_USER="asterisk"/AST_USER="asterisk"/g' /etc/default/asterisk
# sed -i 's/#AST_GROUP="asterisk"/AST_GROUP="asterisk"/g' /etc/default/asterisk

To run properly, the asterisk user needs ownership of all essential Asterisk directories.

# chown -R asterisk:asterisk /var/spool/asterisk /var/run/asterisk /etc/asterisk /var/{lib,log,spool}/asterisk /usr/lib/asterisk

The asterisk.conf file also needs to be edited to uncomment the lines for runuser and rungroup:

# sed -i 's/;runuser = asterisk/runuser = asterisk/g' /etc/asterisk/asterisk.conf
# sed -i 's/;rungroup = asterisk/rungroup = asterisk/g' /etc/asterisk/asterisk.conf

When this is done, reboot the server so that systemd brings Asterisk up automatically, and then type asterisk -rvvv to enter the Asterisk console.

# asterisk -rvvv
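
To confirm that Asterisk is now running under the asterisk user rather than root, a quick check of the process owner from the shell is enough:

# ps -o user,pid,cmd -C asterisk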

 

Production Application Deployment Orchestration Using Auto-Scaling, Load-Balancer, and OpenStack Heat

Overview:

OpenStack Heat can deploy and configure multiple instances in one command using resources we have in OpenStack. That’s called a Heat Stack.

Heat will create instances from images using existing flavors and networks. It can configure LBaaS and provide VIPs for our load-balanced instances. It can also use the metadata service to inject files, scripts or variables after instance deployment. It can use Ceilometer to create alarms based on instance CPU usage and associate actions like spinning up or terminating instances based on CPU load.

OpenStack provides Auto-scaling features through Heat. This feature reduces the need to manually provision instance capacities in advance. You can use Heat resources to detect when a Ceilometer alarm triggers and provision or de-provision a new VM depending on the trigger. These groups of VMs must be under a Load-balancer which distributes the load among the VMs on the scaling group.

Whether you are running one instance or thousands, you can use Autoscaling to detect, increase, decrease, and replace instances without manual intervention.

In this document, the following two policies are defined:

a) When the CPU utilization rate is above 50%, a new instance is created automatically until the number of instances reaches 4.

b) When any CPU utilization rate is below 15%, an instance is terminated until the number of instances reaches 2.

Autoscaling in Heat is done with the help of three main types:

OS::Heat::AutoScalingGroup

An AutoScalingGroup is a resource type that is used to encapsulate the resource that we wish to scale, and some properties related to the scale process.

OS::Heat::ScalingPolicy

A ScalingPolicy is a resource type that is used to define the effect a scale process will have on the scaled resource.

OS::Ceilometer::Alarm

An Alarm is a resource type that is used to define under which conditions the ScalingPolicy should be triggered.

 

Deploying a WordPress Application Stack with an Autoscaling group of Web servers and a Load-balancer.

Complete templates are available here.

The following example uses a snapshot of a VM with Apache already installed and configured over an Ubuntu operating system as a base image.

parameters:
  key_name:
    type: string
    description: Name of an existing key pair to use for the template
    default: dev
  image:
    type: string
    description: Name of image to use for servers
    default: ubuntu
  flavor:
    type: string
    description: Flavor to use for servers
    default: m1.small
  db_name:
    type: string
    description: WordPress database name
    default: wordpress
  db_username:
    type: string
    description: The WordPress database admin account username
    default: admin
  db_password:
    type: string
    description: The WordPress database admin account password
    default: admin
  db_rootpassword:
    type: string
    description: Root password for MySQL
    default: admin
    hidden: true
  public_net_id:
    type: string
    description: ID of public network for which floating IP will be allocated
    default: 7645e3f6-444d-4e4b-ad4f-a9cf49683b2d
  private_net_id:
    type: string
    description: ID of private network into which servers get deployed
    default: e232d0c0-4363-4b39-b88a-949e177f058a
  private_subnet_id:
    type: string
    description: ID of private subnet into which servers get deployed
    default: a0cf224b-1f42-4650-b219-b9320d4ea06f

The first resource one will provision is the health monitor configuration that will check the virtual machines under the Load-balancer. If the machine is down, the Load-balancer will not send traffic to it. Create the pool using the health monitor and specifying the protocol, the network and subnet, the algorithm to use for distributing the traffic, and the port that will receive the traffic on the virtual IP. Finally, create the Load-balancer using this pool.

resources:
  monitor:
    type: OS::Neutron::HealthMonitor
    properties:
      type: TCP
      delay: 3
      max_retries: 3
      timeout: 3

  lb_vip_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_param: private_net_id }
      fixed_ips:
        - subnet_id: { get_param: private_subnet_id }

  lb_pool_vip:
    type: OS::Neutron::FloatingIPAssociation
    properties:
      floatingip_id: { get_resource: lb_vip_floating_ip }
      port_id: { 'Fn::Select': ['port_id', { get_attr: [pool, vip] }] }

  pool:
    type: OS::Neutron::Pool
    properties:
      protocol: HTTP
      monitors: [{ get_resource: monitor }]
      subnet_id: { get_param: private_subnet_id }
      lb_method: ROUND_ROBIN
      vip:
        protocol_port: 80

  lb:
    type: OS::Neutron::LoadBalancer
    properties:
      protocol_port: 80
      pool_id: { get_resource: pool }

Associate a public IP for the Load-balancer so that it can be accessed from the Internet. Use the following syntax to create a floating IP for the Load-balancer:

lb_vip_floating_ip:
  type: OS::Neutron::FloatingIP
  properties:
    floating_network_id: { get_param: public_net_id }
    port_id: { get_resource: lb_vip_port }

Use the following syntax to create the Auto-scaling group (VMs) which will be acting as the web servers for the WordPress application:

web_server_group:
  type: OS::Heat::AutoScalingGroup
  properties:
    min_size: 2
    max_size: 4
    resource:
      type: Lib::INF::LB
      properties:
        flavor: { get_param: flavor }
        image: { get_param: image }
        key_name: dev
        pool_id: { get_resource: pool }
        metadata: {"metering.server_group": {get_param: "OS::stack_id"}}
        user_data:
          str_replace:
            template: |
              #!/bin/bash -ex
              sed -i 's/172.16.*.*/8.8.8.8/g' /etc/resolv.conf
              # install dependencies
              apt-get update
              apt-get -y install php5 libapache2-mod-php5 php5-mysql php5-gd mysql-client
              wget http://wordpress.org/latest.tar.gz
              tar -xzf latest.tar.gz
              cp wordpress/wp-config-sample.php wordpress/wp-config.php
              sed -i 's/database_name_here/$db_name/' wordpress/wp-config.php
              sed -i 's/username_here/$db_user/' wordpress/wp-config.php
              sed -i 's/password_here/$db_password/' wordpress/wp-config.php
              sed -i 's/localhost/$db_host/' wordpress/wp-config.php
              rm /var/www/html/index.html
              cp -R wordpress/* /var/www/html/
              chown -R www-data:www-data /var/www/html/
              chmod -R g+w /var/www/html/
            params:
              $db_name: { get_param: db_name }
              $db_user: { get_param: db_username }
              $db_password: { get_param: [db_password, value] }
              $db_host: { get_attr: [wp_dbserver, first_address] }

Here the resource type Lib::INF::LB indicates that another YAML file is called upon to create a custom resource. For this, a template file is created that defines the resource (in this case a load-balanced web server), and the URL to the template file is provided in the environment file. This load-balancer file does the important job of adding the web server to the load-balancing pool using the 'member' resource.

The Load-balancer server YAML file is given below.

heat_template_version: 2013-05-23
description: A load-balancer server
parameters:
  image:
    type: string
    description: Image used for servers
  key_name:
    type: string
    description: SSH key to connect to the servers
  flavor:
    type: string
    description: flavor used by the servers
  pool_id:
    type: string
    description: Pool to contact
  user_data:
    type: string
    description: Server user_data
  metadata:
    type: json
  network:
    type: string
    description: Network used by the server
    default: private
resources:
  server:
    type: OS::Nova::Server
    properties:
      name: web-server
      flavor: { get_param: flavor }
      image: { get_param: image }
      key_name: { get_param: key_name }
      metadata: { get_param: metadata }
      user_data: { get_param: user_data }
      user_data_format: RAW
      networks: [{ network: { get_param: network } }]
  member:
    type: OS::Neutron::PoolMember
    properties:
      pool_id: { get_param: pool_id }
      address: { get_attr: [server, first_address] }
      protocol_port: 80
outputs:
  server_ip:
    description: IP Address of the load-balanced server.
    value: { get_attr: [server, first_address] }
  lb_member:
    description: LB member details.
    value: { get_attr: [member, show] }

 

Use the following syntax to create scaling policies:

web_server_scaleup_policy:
  type: OS::Heat::ScalingPolicy
  properties:
    adjustment_type: change_in_capacity
    auto_scaling_group_id: { get_resource: web_server_group }
    cooldown: 60
    scaling_adjustment: 1

web_server_scaledown_policy:
  type: OS::Heat::ScalingPolicy
  properties:
    adjustment_type: change_in_capacity
    auto_scaling_group_id: { get_resource: web_server_group }
    cooldown: 60
    scaling_adjustment: -1

 

Use Ceilometer to establish the alarms (both high and low) for the auto-scaling group for a specific metric. Use the following syntax to create Ceilometer alarms:

cpu_alarm_high:
  type: OS::Ceilometer::Alarm
  properties:
    description: Scale-up if the average CPU > 50% for 1 minute
    meter_name: cpu_util
    statistic: avg
    period: 60
    evaluation_periods: 1
    threshold: 50
    alarm_actions:
      - { get_attr: [web_server_scaleup_policy, alarm_url] }
    matching_metadata: {'metadata.user_metadata.stack': {get_param: "OS::stack_id"}}
    comparison_operator: gt

cpu_alarm_low:
  type: OS::Ceilometer::Alarm
  properties:
    description: Scale-down if the average CPU < 15% for 1 minute
    meter_name: cpu_util
    statistic: avg
    period: 60
    evaluation_periods: 1
    threshold: 15
    alarm_actions:
      - { get_attr: [web_server_scaledown_policy, alarm_url] }
    matching_metadata: {'metadata.user_metadata.stack': {get_param: "OS::stack_id"}}
    comparison_operator: lt

The next resource is the Nova server that you will initialize from a specific image. In our example, this is Ubuntu 14.04 with Apache2 already installed and configured. This server is not part of the Auto-scaling group; we will use it as the database server of the WordPress application.

resources:
  wp_dbserver:
    type: OS::Nova::Server
    properties:
      name: wp_db_server
      image: { get_param: image }
      key_name: dev
      networks:
        - port: { get_resource: wp_dbserver_port }
      user_data_format: RAW
      user_data:
        str_replace:
          params:
            __mysql_root_password__: { get_param: [db_rootpassword, value] }
            __database_name__: { get_param: db_name }
            __database_user__: { get_param: db_username }
            __database_password__: { get_param: [db_password, value] }
          template: |
            #!/bin/bash
            apt-get update
            export DEBIAN_FRONTEND=noninteractive
            apt-get install -y mysql-server
            mysqladmin -u root password "__mysql_root_password__"
            sed -i "s/bind-address.*/bind-address = 0.0.0.0/" /etc/mysql/my.cnf
            service mysql restart
            mysql -u root --password="__mysql_root_password__" <<EOF
            CREATE DATABASE __database_name__;
            CREATE USER '__database_user__'@'localhost';
            SET PASSWORD FOR '__database_user__'@'localhost'=PASSWORD("__database_password__");
            GRANT ALL PRIVILEGES ON __database_name__.* TO '__database_user__'@'localhost' IDENTIFIED BY '__database_password__';
            CREATE USER '__database_user__'@'%';
            SET PASSWORD FOR '__database_user__'@'%'=PASSWORD("__database_password__");
            GRANT ALL PRIVILEGES ON __database_name__.* TO '__database_user__'@'%' IDENTIFIED BY '__database_password__';
            FLUSH PRIVILEGES;
            EOF

Now to call upon the Load-balancer server file, we’ll use an Environment file which is described below.

resource_registry:
  Lib::INF::LB: https://raw.githubusercontent.com/infrastacklabs/test/master/lb.yaml

Here the URL points to the location where the Load-balancer YAML file is stored.

 

Launching the stack on Horizon with this Heat template:

Log in to the OpenStack environment, open the Orchestration section on the left tab, and click on Launch Stack as shown in the picture.


 

Click on Launch Stack, select the WordPress.yaml file and the env.yaml file, and click on Next.
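
If you prefer the command line over Horizon, the same stack can be launched with the OpenStack client; the template and environment file names below follow the ones used in this walkthrough, and the stack name is arbitrary:

$ openstack stack create -t WordPress.yaml -e env.yaml wordpress-stack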


 

Verify the parameters and click on launch.


 

This should launch the Heat Stack.


Wait a few minutes for MySQL and WordPress to be installed on the corresponding servers.

 

You should now see a new Load-balancer in the Network > Load Balancers section, with a public IP assigned to it.


 

Opening this public IP in a browser should bring up the Apache welcome page.


 

The stack should create three servers in total: two web servers and one DB server with MySQL installed.


 

 

Assign a public IP to any of the web servers and access it using the browser. You should see a WordPress installation page like the one below.


 

Install the application


 

Log in with the given credentials and you should be able to access the WordPress dashboard.


 

Scaling:

Scaling can be done in two ways: Manual and Automatic.

By using Ceilometer alarms, the Heat stack should be able to scale the servers depending on the CPU usage of the web servers in the group.

By invoking the webhook URLs in the Stack Overview, one should be able to scale down and scale up the number of web servers in the group.


 

For example:

Running the scale up URL should create a new web server and it should be added under the Load-balancer.
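
Because each webhook is just a pre-signed URL, it can also be invoked from the command line; copy the scale-up URL from the Stack Overview page and POST to it (placeholder shown below):

$ curl -X POST "<scale_up_webhook_url_from_stack_overview>"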


Here, by using OpenStack Heat, Ceilometer, and a Load-balancer, we are able to create an application infrastructure that contains a database server and a group of web servers that can be auto-scaled depending on the CPU usage of those servers. The Load-balancer automatically registers the instances in the group and distributes incoming traffic across them.

Application deployment using Docker-Compose on OMegha Public Cloud

Docker is a container technology for Linux that allows a developer to package up an application with all of the parts it needs. It makes it easier to create, deploy, port and run applications by using containers. Docker Compose makes dealing with the orchestration processes of Docker containers (such as starting up, shutting down, and setting up intra-container linking and volumes) really easy.

To really take full advantage of Docker’s potential, it’s best if each component of your application runs in its own container. For complex applications with a lot of components, orchestrating all the containers to start up and shut down together can quickly become knotty.

To deal with this, the Docker team came up with a solution based on the Fig source code, called Docker Compose.

This article provides a real-world example of using Docker Compose to install an application, in this case WordPress with PHPMyAdmin as an extra. WordPress normally runs on a LAMP stack, which means Linux, Apache, MySQL/MariaDB, and PHP. The official WordPress Docker image includes Apache and PHP for us, so the only part we have to worry about is MariaDB.

 

Prerequisites:

  • Host Machine: Ubuntu 14.04 OMegha Bolt (VM).
  • A non-root user with sudo privileges.
  • Reasonable knowledge of Linux commands.

 

Installing Docker:

The Docker installation package available in the official Ubuntu 14.04 repository may not be the latest version. To get the latest and greatest version, install Docker from the official Docker repository. To do that:

First, add the GPG key for the official Docker repository to the system:

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Add the Docker repository to APT sources:

$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Update the package database with the Docker packages from the newly added repo:

$ sudo apt-get update

Install Docker

$ sudo apt-get install -y docker-ce

Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it’s running:

$ sudo service docker status

 

Installing Docker Compose:

Now that you have Docker installed, let’s go ahead and install Docker Compose. First, install python-pip as prerequisite:

$ sudo apt-get install python-pip

Install Docker Compose:

$ sudo pip install docker-compose


 

Installation of Application:

Here we’ll install a WordPress application with PHPMyAdmin using Docker-compose. WordPress normally runs on a LAMP stack, which means Linux, Apache, MySQL/MariaDB, and PHP. The official WordPress Docker image includes Apache and PHP for us, so we’ll have to run MariaDB container as a source for database purposes.

Steps:

Create a folder where our data will live and create the docker-compose.yml file to run our app:

$ mkdir ~/wordpress
$ cd wordpress

Then create a docker-compose.yml with any text editor.

$ sudo vi ~/wordpress/docker-compose.yml

And paste in the following:

wordpress:
  image: wordpress
  links:
    - wordpress_db:mysql
  ports:
    - 8080:80
wordpress_db:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: mypassword
phpmyadmin:
  image: nazarpc/phpmyadmin
  links:
    - wordpress_db:mysql
  ports:
    - 8181:80
  environment:
    MYSQL_USERNAME: root
    MYSQL_ROOT_PASSWORD: mypassword

 

Explaining the YML file:

  • Docker Compose starts a new container called wordpress and downloads the wordpress image from the Docker Hub.
  • Defines a new container called wordpress_db and tells it to use the mariadb image from the Docker Hub.
  • Links our wordpress_db container into the wordpress container and calls it mysql (inside the wordpress container, the hostname mysql will resolve to our wordpress_db container).
  • Sets the MYSQL_ROOT_PASSWORD variable needed to start the mariadb server. The MariaDB Docker image is configured to check for this environment variable when it starts up and will take care of setting up the DB with a root account with the password defined in MYSQL_ROOT_PASSWORD.
  • Sets up a port forward so that we can connect to our WordPress install once it actually loads up. The first port number is the port number on the host, and the second port number is the port inside the container. So, this configuration forwards requests on port 8080 of the host to the default web server port 80 inside the container.
  • Grabs docker-phpmyadmin by community member nazarpc, links it to our wordpress_db container with the name mysql, exposes its port 80 on port 8181 of the host system, and finally sets a couple of environment variables with our MariaDB username and password.

This image does not automatically pick up the MYSQL_ROOT_PASSWORD environment variable from the wordpress_db container's environment the way the wordpress image does. We actually have to copy the MYSQL_ROOT_PASSWORD: mypassword line from the wordpress_db container and set the username to root.

 

Now start up the application group:

$ sudo docker-compose up -d

Run it with the -d option, which will tell docker-compose to run the containers in the background so that you can keep using your terminal.

You can see the corresponding images being pulled and run as containers. Towards the end of the output you can see that the containers have been created and started:

Creating wordpress_wordpress_db_1 ... done
Creating wordpress_wordpress_1    ... done
Creating wordpress_phpmyadmin_1   ... done

To verify this, list the running containers using the command

$ sudo docker ps
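
Because the containers run detached, their output is not shown in your terminal. If something does not come up as expected, the standard docker-compose commands can be used to inspect the logs or stop the group:

$ sudo docker-compose logs
$ sudo docker-compose stop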

To verify that the app is up, open up a web browser and browse to the IP of your box on port 8080. Here the host IP address we use for this document is of format xx.xxx.xx.xxx.

Type <host-IP>:8080 into the browser. This should bring up a WordPress installation page as in the picture.


For demo purposes, fill in the fields as shown below and install the application.


A successful installation will show the following page.


To verify the phpMyAdmin, open another tab and type xx.xxx.xx.xxx:8181 into it.

This should open up the phpMyAdmin page.


To log in, use the details that were used in the YML file. In this case, the user is root and the password is 'mypassword'.

Upon logging in, you will be able to access the databases in the MariaDB server, as the picture shows:


 

Here, by using Docker Compose and Docker concepts in general, the installation of WordPress and phpMyAdmin has been made much easier than the traditional ways of installing these applications.

OpenStack Load Balancer (LBaaS) Setup

Installing LBaaS

Run the following command on the controller node to install the required components (HAProxy and the LBaaS package).

root@controller:~#  yum install haproxy openstack-neutron-lbaas

 

Configuring  LBaaS

Edit the file – /etc/neutron/neutron.conf

service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
service_plugins = router,lbaas

 

Edit the file – /etc/neutron/neutron_lbaas.conf

service_provider=LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

 

Edit the file – /etc/neutron/lbaas_agent.ini

[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver

 

Edit the file – /etc/openstack-dashboard/local_settings.py

OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_lb': True,
    ...
}

 

Restart the services

# service neutron-server restart
# service neutron-lbaas-agent restart

 

Now you will be able to see "Load Balancers" on the OpenStack Dashboard under the Project -> Network menu.

 

Creating a ROUND_ROBIN Load Balancer between two instances

Get the details of running Instances

root@controller:~# nova list
+--------------------------------------+------+--------+------------+-------------+---------------------+
| ID                                   | Name | Status | Task State | Power State | Networks            |
+--------------------------------------+------+--------+------------+-------------+---------------------+
| a2e61f7a-f701-4e18-a4e4-3240c6e9b946 | lb1  | ACTIVE | -          | Running     | private=10.0.1.1    |
| 4cbd5376-6d32-414c-9272-2c22002d1468 | lb2  | ACTIVE | -          | Running     | private=10.0.1.2    |
+--------------------------------------+------+--------+------------+-------------+---------------------+

I have two instances, lb1 and lb2, each running an apache2 server.

VM Name -> VM ID -> Private Network IP

lb1->a2e61f7a-f701-4e18-a4e4-3240c6e9b946->10.0.1.1

lb2->4cbd5376-6d32-414c-9272-2c22002d1468->10.0.1.2

 

Get network list

root@controller:~# neutron net-list
+--------------------------------------+---------+--------------------------------------------------------+
| id                                   | name    | subnets                                                |
+--------------------------------------+---------+--------------------------------------------------------+
| 297c004b-435b-4741-b2f0-53a011403f61 | lb_net1 | e94e1b6c-aff2-45f2-a13c-a41593125a2d 10.0.2.0/24       |
| 7645e3f6-444d-4e4b-ad4f-a9cf49683b2d | public  | f72e1955-b32f-4d49-8a6f-1833d6e2e1f3 10.0.16.0/26      | 
|                                      |         | 219765fc-be8d-4ef5-90a2-6c7ef2c961e9 10.0.22.0/26      |
| e232d0c0-4363-4b39-b88a-949e177f058a | private | a0cf224b-1f42-4650-b219-b9320d4ea06f 10.0.1.0/24       |
+--------------------------------------+---------+--------------------------------------------------------+

 

Get subnet list

root@controller:~# neutron subnet-list
+--------------------------------------+------------+-------------------+------------------------------------------------------+
| id                                   | name       | cidr              | allocation_pools                                     |
+--------------------------------------+------------+-------------------+------------------------------------------------------+
| e94e1b6c-aff2-45f2-a13c-a41593125a2d | lb_subnet1 | 10.0.2.0/24       | {"start": "10.0.2.1", "end": "10.0.2.80"}            |
| f72e1955-b32f-4d49-8a6f-1833d6e2e1f3 | public1    | 10.0.16.0/26      | {"start": "10.0.16.1", "end": "10.0.16.60"}          |
| 219765fc-be8d-4ef5-90a2-6c7ef2c961e9 | public     | 10.0.22.0/26      | {"start": "10.0.22.1", "end": "10.0.22.60"}          |
|                                      |            |                   |
| a0cf224b-1f42-4650-b219-b9320d4ea06f | private    | 10.0.1.0/24       | {"start": "10.0.1.1", "end": "10.0.1.80"}            |
+--------------------------------------+------------+-------------------+------------------------------------------------------+

 

Create Pool

root@controller:~# neutron lb-pool-create --name http-pool --lb-method ROUND_ROBIN --protocol HTTP --subnet-id a0cf224b-1f42-4650-b219-b9320d4ea06f
Created a new pool:
+------------------------+--------------------------------------+
| Field                  | Value                                |
+------------------------+--------------------------------------+
| admin_state_up         | True                                 |
| description            |                                      |
| health_monitors        |                                      |
| health_monitors_status |                                      |
| id                     | 2cef1601-dfbe-4e02-beaf-71c303a3009f |
| lb_method              | ROUND_ROBIN                          |
| members                |                                      |
| name                   | http-pool                            |
| protocol               | HTTP                                 |
| provider               | haproxy                              |
| status                 | PENDING_CREATE                       |
| status_description     |                                      |
| subnet_id              | a0cf224b-1f42-4650-b219-b9320d4ea06f |
| tenant_id              | 4664cf886f57480d9e3a1af4bf8a3a65     |
| vip_id                 |                                      |
+------------------------+--------------------------------------+

 

Check the pool status

root@controller:~# neutron lb-pool-list
+--------------------------------------+-----------+----------+-------------+----------+----------------+--------+
| id                                   | name      | provider | lb_method   | protocol | admin_state_up | status |
+--------------------------------------+-----------+----------+-------------+----------+----------------+--------+
| 2cef1601-dfbe-4e02-beaf-71c303a3009f | http-pool | haproxy  | ROUND_ROBIN | HTTP     | True           | ACTIVE |
+--------------------------------------+-----------+----------+-------------+----------+----------------+--------+

 

Add members to pool

root@controller:~# neutron lb-member-create --address 10.0.1.1 --protocol-port 80 http-pool
Created a new member:
+--------------------+--------------------------------------+
| Field              | Value                                |
+--------------------+--------------------------------------+
| address            | 10.0.1.1                             |
| admin_state_up     | True                                 |
| id                 | 4e1a4853-9b98-4571-a306-9daa1ca9f08b |
| pool_id            | 2cef1601-dfbe-4e02-beaf-71c303a3009f |
| protocol_port      | 80                                   |
| status             | PENDING_CREATE                       |
| status_description |                                      |
| tenant_id          | 4664cf886f57480d9e3a1af4bf8a3a65     |
| weight             | 1                                    |
+--------------------+--------------------------------------+

root@controller:~# neutron lb-member-create --address 10.0.1.2 --protocol-port 80 http-pool
Created a new member:
+--------------------+--------------------------------------+
| Field              | Value                                |
+--------------------+--------------------------------------+
| admin_state_up     | True                                 |
| id                 | ed53de4b-cad7-417f-82f3-4777f6bc831b |
| pool_id            | 2cef1601-dfbe-4e02-beaf-71c303a3009f |
| protocol_port      | 80                                   |
| status             | PENDING_CREATE                       |
| status_description |                                      |
| tenant_id          | 4664cf886f57480d9e3a1af4bf8a3a65     |
| weight             | 1                                    |
+--------------------+--------------------------------------+

 

Check the members are added and ACTIVE

root@controller:~# neutron lb-member-list --sort-key address --sort-dir asc
+--------------------------------------+-------------+---------------+--------+----------------+--------+
| id                                   | address     | protocol_port | weight | admin_state_up | status |
+--------------------------------------+-------------+---------------+--------+----------------+--------+
| 4e1a4853-9b98-4571-a306-9daa1ca9f08b | 10.0.1.1    |            80 |      1 | True           | ACTIVE |
| ed53de4b-cad7-417f-82f3-4777f6bc831b | 10.0.1.2    |            80 |      1 | True           | ACTIVE |
+--------------------------------------+-------------+---------------+--------+----------------+--------+

 

Create Health Monitor

root@controller:~# neutron lb-healthmonitor-create --delay 3 --type HTTP --max-retries 3 --timeout 3
Created a new health_monitor:
+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| admin_state_up | True                                 |
| delay          | 3                                    |
| expected_codes | 200                                  |
| http_method    | GET                                  |
| id             | 479cd68a-fe2d-4234-89d8-13b1c26291d1 |
| max_retries    | 3                                    |
| pools          |                                      |
| tenant_id      | 4664cf886f57480d9e3a1af4bf8a3a65     |
| timeout        | 3                                    |
| type           | HTTP                                 |
| url_path       | /                                    |
+----------------+--------------------------------------+

 

Check the status of health monitor

root@controller:~# neutron lb-healthmonitor-list
+--------------------------------------+------+----------------+
| id                                   | type | admin_state_up |
+--------------------------------------+------+----------------+
| 479cd68a-fe2d-4234-89d8-13b1c26291d1 | HTTP | True           |
+--------------------------------------+------+----------------+

 

Health Monitor ID is  479cd68a-fe2d-4234-89d8-13b1c26291d1

Associate Health Monitor to Pool

root@controller:~# neutron lb-healthmonitor-associate 479cd68a-fe2d-4234-89d8-13b1c26291d1 http-pool
Associated health monitor 479cd68a-fe2d-4234-89d8-13b1c26291d1

 

Create a virtual IP for pool with HTTP port

root@controller:~# neutron lb-vip-create --name  http-vip --protocol-port 80 --protocol HTTP --subnet-id a0cf224b-1f42-4650-b219-b9320d4ea06f http-pool
Created a new vip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| address             | 10.0.1.4                             |
| admin_state_up      | True                                 |
| connection_limit    | -1                                   |
| description         |                                      |
| id                  | 8970ebd2-a19d-4650-b6ee-6e18985e823a |
| name                | http-vip                             |
| pool_id             | 2cef1601-dfbe-4e02-beaf-71c303a3009f |
| port_id             | 581babfc-e596-4cae-93a6-3349ed4e5577 |
| protocol            | HTTP                                 |
| protocol_port       | 80                                   |
| session_persistence |                                      |
| status              | PENDING_CREATE                       |
| status_description  |                                      |
| subnet_id           | a0cf224b-1f42-4650-b219-b9320d4ea06f |
| tenant_id           | 4664cf886f57480d9e3a1af4bf8a3a65     |
+---------------------+--------------------------------------+

 

Check VIP status

root@controller:~# neutron lb-vip-list
+--------------------------------------+----------+-------------+----------+----------------+--------+
| id                                   | name     | address     | protocol | admin_state_up | status |
+--------------------------------------+----------+-------------+----------+----------------+--------+
| 8970ebd2-a19d-4650-b6ee-6e18985e823a | http-vip | 10.0.1.4    | HTTP     | True           | ACTIVE |
+--------------------------------------+----------+-------------+----------+----------------+--------+

 

Associate a floating IP to VIP

Add a floating IP from external net

root@controller:~# neutron floatingip-create public
Created a new floating ip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 10.0.22.3                            |
| floating_network_id | 7645e3f6-444d-4e4b-ad4f-a9cf49683b2d |
| id                  | 167114de-feb7-4746-b3d3-4e9c801b7375 |
| port_id             |                                      |
| router_id           |                                      |
| status              | DOWN                                 |
| tenant_id           | 4664cf886f57480d9e3a1af4bf8a3a65     |
+---------------------+--------------------------------------+

 

Get the Port ID, Floating IP ID and associate Floating IP to Port ID of VIP

root@controller:~# neutron floatingip-associate 167114de-feb7-4746-b3d3-4e9c801b7375 581babfc-e596-4cae-93a6-3349ed4e5577
Associated floating IP 167114de-feb7-4746-b3d3-4e9c801b7375

 

Check the Floating IP list to verify

root@controller:~# neutron floatingip-list --sort-key floating_ip_address --sort-dir asc
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| 03bc56f2-c78a-4b90-919c-7c89dec548d7 |                  | 10.0.22.2           |                                      |
| 0e5ce334-18e6-4389-8faa-666fbb86e1bc |                  | 10.0.22.5           |                                      |
| 12b33a39-89b1-423f-90f9-ad3d7617d536 |                  | 10.0.22.7           |                                      |
| 14992ab1-19d1-4271-bd6b-86d52bb1a2d3 |                  | 10.0.22.9           |                                      |
| 167114de-feb7-4746-b3d3-4e9c801b7375 | 10.0.1.4         | 10.0.22.3           | 581babfc-e596-4cae-93a6-3349ed4e5577 |
| 314fa02e-9ae6-42b1-9c15-11202aff8489 |                  | 10.0.16.4           |                                      |
| 888b1e36-5fc9-4aa1-995a-cb6e6834a1e3 |                  | 10.0.16.6           |                                      |
| 8fc7e5f1-4a08-4ce3-b17a-20212348051f |                  | 10.0.22.9           |                                      |
+--------------------------------------+------------------+---------------------+--------------------------------------+

 

Now check http://<associated_floating_ip>/ in a browser.
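
You can also hit the floating IP repeatedly with curl from the command line to watch the ROUND_ROBIN pool alternate between the two members (the IP below is a placeholder):

$ for i in 1 2 3 4; do curl -s http://<associated_floating_ip>/; done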

Infrastructure as Code [IaC]: AutoScale application infrastructure using OpenStack Heat

OpenStack Heat can deploy and configure multiple instances in one command using resources we have in OpenStack. That's called a Heat Stack.

Heat will create instances from images using existing flavors and networks. It can configure LBaaS and provide VIPs for our load-balanced instances. It can also use the metadata service to inject files, scripts or variables after instance deployment. It can even use Ceilometer to create alarms based on instance CPU usage and associated actions like spinning up or terminating instances based on CPU load.

OpenStack Heat is an application orchestration engine designed for the OpenStack Cloud. It is integrated into the OpenStack distribution and can be used via the CLI or via the Horizon GUI. Heat uses its own templating language called HOT (Heat Orchestration Template) for defining application topologies.

Autoscaling in Heat is done with the help of three main types:

OS::Heat::AutoScalingGroup

An AutoScalingGroup is a resource type that is used to encapsulate the resource that we wish to scale and some properties related to the scaling process.

OS::Heat::ScalingPolicy

A ScalingPolicy is a resource type that is used to define the effect a scaling process will have on the scaled resource.

OS::Ceilometer::Alarm

An Alarm is a resource type that is used to define under which conditions the ScalingPolicy should be triggered.

It helps to have a basic auto-scaling template to understand this better. Please go through the template that we'll be using for this example.

Explaining the template:

Parameters: These are pieces of information, like a specific image ID or a particular network ID, that are passed to the Heat template by the user. This allows users to create more generic templates that could potentially use different resources.

Resources: Resources are the specific objects that Heat will create and/or modify as part of its operation, and the second of the three major sections in a Heat template.

auto_scale_group: The group of servers that will be constantly monitored and autoscaled when the conditions are met. In this example, it’ll have a minimum of one server and a maximum of three servers.

server_scaleup_policy: The effect a scaling process will have on the scaled resource. Here it’ll add a new server.

server_scaledown_policy: The effect a scaling process will have on the scaled resource. Here it’ll delete a server.

cpu_alarm_high: Alarm condition to trigger the scale-up process. Here it scales up if the instance CPU load is greater than 50% for more than a minute.

cpu_alarm_low: Alarm condition to trigger the scale-down process. Here it scales down if the instance CPU load is lower than 15% for more than a minute.
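
Putting these pieces together, the following is a minimal sketch of what such a template can look like. It follows the structure of the upstream Heat autoscaling examples and is not the exact basic.yaml used in this post; the image, flavor and network parameters are placeholders you would supply for your own environment.

# Minimal autoscaling sketch (illustrative only)
heat_template_version: 2016-04-08

parameters:
  image:
    type: string
  flavor:
    type: string
  network:
    type: string

resources:
  # Group of servers to scale: minimum 1, maximum 3
  auto_scale_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 3
      resource:
        type: OS::Nova::Server
        properties:
          image: { get_param: image }
          flavor: { get_param: flavor }
          networks: [{ network: { get_param: network } }]
          # Tag the servers so Ceilometer samples can be matched to this stack
          metadata: { "metering.stack": { get_param: "OS::stack_id" } }

  # Add one server when triggered
  server_scaleup_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: auto_scale_group }
      cooldown: 60
      scaling_adjustment: 1

  # Remove one server when triggered
  server_scaledown_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: auto_scale_group }
      cooldown: 60
      scaling_adjustment: -1

  # Fire when average CPU > 50% for one minute
  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      description: Scale up if average CPU is greater than 50% for one minute
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 50
      comparison_operator: gt
      alarm_actions:
        - { get_attr: [server_scaleup_policy, alarm_url] }
      matching_metadata: { 'metadata.user_metadata.stack': { get_param: "OS::stack_id" } }

  # Fire when average CPU < 15% for one minute
  cpu_alarm_low:
    type: OS::Ceilometer::Alarm
    properties:
      description: Scale down if average CPU is lower than 15% for one minute
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 15
      comparison_operator: lt
      alarm_actions:
        - { get_attr: [server_scaledown_policy, alarm_url] }
      matching_metadata: { 'metadata.user_metadata.stack': { get_param: "OS::stack_id" } }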

Launching the stack on Horizon with this Heat template:

Log in to the OpenStack environment, open the Orchestration section on the left tab and click on Launch Stack as shown in the picture.

1

Click on Launch stack.

2

 

Select the template file, which is named basic.yaml in our case.

3

Launch the Heat stack, which will create a single instance named test-server. The process will take a minute or two, after which you can try accessing the newly created server after assigning a public IP address. As this is a basic template, the VMs in this example do not actually do anything.
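
One way to allocate and assign a public IP from the CLI is shown below. This assumes your external network is named public; adjust the network name, server name and IP placeholder to your environment.

$ openstack floating ip create public
$ openstack server add floating ip test-server <allocated_floating_ip>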

To check if the autoscaling is working, we have to stress test the instance that’s created.

SSH into this instance and install the stress tool.

ubuntu@test-server:~$ sudo apt-get install stress

After installing it we can start the stress testing. As we are testing using CPU load, we can try to increase the CPU usage percentage of the instance up to the high 90s, which will force the Heat orchestration part of OpenStack to spin up a new machine.

To do that, type the command:

ubuntu@test-server:~$ stress -c 1

By running the top command we can produce an ordered list of running processes and check the load on the instance.

ubuntu@test-server:~$ top

This will show that the stress process is taking up more than 90% of the CPU, which, by the configuration in the template, should be enough to trigger the scale-up policy.

Checking the Heat engine logs on the controller server should give us more insight into how this was brought about.

Once the scale-up process is triggered, the Heat engine logs will look like the following:

root@controller:/var/log/heat# tail -f heat-engine.log
[...]
2017-09-11 13:39:09.101 2558 INFO heat.engine.resources.openstack.heat.scaling_policy [-] Alarm server_scaleup_policy, new state alarm
2017-09-11 13:39:09.162 2558 INFO heat.engine.resources.openstack.heat.scaling_policy [-] server_scaleup_policy Alarm, adjusting Group auto_scale_group with id Basic-auto_scale_group-5yoys2hit67h by 1
2017-09-11 13:39:09.359 2559 INFO heat.engine.service [req-0636aff9-ec21-493c-bddb-dd4307c6cc88 - eafb2b89314842dabca3e4ace895c796] Updating stack Basic-auto_scale_group-5yoys2hit67h
2017-09-11 13:39:09.797 2559 INFO heat.engine.resource [req-0636aff9-ec21-493c-bddb-dd4307c6cc88 - eafb2b89314842dabca3e4ace895c796] Validating Server "knrlwocbkiel"
2017-09-11 13:39:12.364 2559 INFO heat.engine.resource [req-0636aff9-ec21-493c-bddb-dd4307c6cc88 - eafb2b89314842dabca3e4ace895c796] Validating Server "cqr4fi6ivrhl"
2017-09-11 13:39:13.344 2559 INFO heat.engine.resource [req-0636aff9-ec21-493c-bddb-dd4307c6cc88 - eafb2b89314842dabca3e4ace895c796] Validating Server "oxym4qwzzof4"
2017-09-11 13:39:14.668 2559 INFO heat.engine.update [-] Resource knrlwocbkiel for stack Basic-auto_scale_group-5yoys2hit67h updated
2017-09-11 13:39:14.783 2559 INFO heat.engine.resource [-] creating Server "cqr4fi6ivrhl" Stack "Basic-auto_scale_group-5yoys2hit67h" [ce6979a7-63fb-419f-835f-ff480af0a0dc]
2017-09-11 13:39:17.218 2559 INFO heat.engine.update [-] Resource oxym4qwzzof4 for stack Basic-auto_scale_group-5yoys2hit67h updated
2017-09-11 13:39:26.776 2559 INFO heat.engine.stack [-] Stack DELETE IN_PROGRESS (Basic-auto_scale_group-5yoys2hit67h): Stack DELETE started
2017-09-11 13:39:26.936 2559 INFO heat.engine.stack [-] Stack DELETE COMPLETE (Basic-auto_scale_group-5yoys2hit67h): Stack DELETE completed successfully
2017-09-11 13:39:27.064 2559 INFO heat.engine.stack [-] Stack UPDATE COMPLETE (Basic-auto_scale_group-5yoys2hit67h): Stack UPDATE completed successfully
[...]

This scale-up policy is triggered by an alarm generated by Ceilometer. To check which metrics are monitored by Ceilometer, go to the compute node and check the ceilometer-agent-compute.log.

root@compute:/var/log/ceilometer# tail -f ceilometer-agent-compute.log
2017-09-11 13:18:20.734 1257 INFO ceilometer.agent.manager [-] Polling pollster disk.write.requests.rate in the context of meter_source
2017-09-11 13:18:20.809 1257 INFO ceilometer.agent.manager [-] Polling pollster disk.allocation in the context of meter_source
2017-09-11 13:18:20.813 1257 INFO ceilometer.agent.manager [-] Polling pollster disk.latency in the context of meter_source
2017-09-11 13:18:20.813 1257 INFO ceilometer.agent.manager [-] Polling pollster disk.capacity in the context of meter_source
2017-09-11 13:18:20.817 1257 INFO ceilometer.agent.manager [-] Polling pollster disk.write.requests in the context of meter_source
2017-09-11 13:18:20.820 1257 INFO ceilometer.agent.manager [-] Polling pollster disk.read.requests.rate in the context of meter_source
2017-09-11 13:18:20.821 1257 INFO ceilometer.agent.manager [-] Polling pollster disk.device.write.requests.rate in the context of meter_source
2017-09-11 13:18:20.821 1257 INFO ceilometer.agent.manager [-] Polling pollster disk.device.write.requests in the context of meter_source
2017-09-11 13:18:20.825 1257 INFO ceilometer.agent.manager [-] Polling pollster disk.read.requests in the context of meter_source
2017-09-11 13:18:20.828 1257 INFO ceilometer.agent.manager [-] Polling pollster cpu_util in the context of meter_source
2017-09-11 13:18:20.829 1257 INFO ceilometer.agent.manager [-] Polling pollster memory.resident in the context of meter_source
2017-09-11 13:18:20.836 1257 INFO ceilometer.agent.manager [-] Polling pollster disk.device.read.requests in the context of meter_source

The cpu_util meter can be seen in the output above.
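
You can also check the state of the alarms themselves from the controller. Depending on the release, the alarm service is either part of Ceilometer or split out into Aodh (an assumption about your deployment), so one of the following should list the cpu_alarm_high and cpu_alarm_low alarms and their current state:

root@controller:~# ceilometer alarm-list
root@controller:~# aodh alarm list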

Here the Heat engine log showed that a new instance was spawned as the scale-up policy was triggered. Let’s check it using the Horizon dashboard.

4

By the same logic, a low CPU load should trigger the scale-down policy and remove an instance.

We can check this by cancelling the stress load started earlier and keeping the CPU load low.
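
If stress was left running in the foreground, press Ctrl+C in that SSH session; if it is running in the background, it can be stopped with:

ubuntu@test-server:~$ killall stress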

Once the scale-down process is triggered, the Heat engine logs will look like the following:

root@controller:/var/log/heat# tail -f heat-engine.log
2017-09-11 15:18:24.002 2557 INFO heat.engine.resources.openstack.heat.scaling_policy [-] Alarm server_scaledown_policy, new state alarm
2017-09-11 15:18:24.071 2557 INFO heat.engine.resources.openstack.heat.scaling_policy [-] server_scaledown_policy Alarm, adjusting Group auto_scale_group with id Basic-auto_scale_group-5yoys2hit67h by -1
2017-09-11 15:18:24.298 2558 INFO heat.engine.service [req-a5a2d796-6329-45c0-8d4d-a1624132d4d5 - eafb2b89314842dabca3e4ace895c796] Updating stack Basic-auto_scale_group-5yoys2hit67h
2017-09-11 15:18:24.740 2558 INFO heat.engine.resource [req-a5a2d796-6329-45c0-8d4d-a1624132d4d5 - eafb2b89314842dabca3e4ace895c796] Validating Server "k6lh3qvth5vd"
2017-09-11 15:18:26.353 2558 INFO heat.engine.update [-] Resource k6lh3qvth5vd for stack Basic-auto_scale_group-5yoys2hit67h updated
2017-09-11 15:18:26.355 2558 INFO heat.engine.resource [-] deleting Server "nfivm24eye5c" [36a3eda3-0c57-45dd-8cde-e21909c54fa8] Stack "Basic-auto_scale_group-5yoys2hit67h" [ce6979a7-63fb-419f-835f-ff480af0a0dc]
2017-09-11 15:18:30.053 2558 INFO heat.engine.stack [-] Stack DELETE IN_PROGRESS (Basic-auto_scale_group-5yoys2hit67h): Stack DELETE started
2017-09-11 15:18:30.213 2558 INFO heat.engine.stack [-] Stack DELETE COMPLETE (Basic-auto_scale_group-5yoys2hit67h): Stack DELETE completed successfully
2017-09-11 15:18:30.365 2558 INFO heat.engine.stack [-] Stack UPDATE COMPLETE (Basic-auto_scale_group-5yoys2hit67h): Stack UPDATE completed successfully

In the Horizon dashboard, the number of instances will have gone back to one.

5

Thus, according to the CPU utilization, the instances in this particular group are scaled.

Manual Scaling Using Webhook URLs:

To do manual scaling, we can use the webhook URLs that are created during stack creation, which are available in the Stack Overview section of the Horizon dashboard.

6
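
If the template also exposes these webhook URLs as stack outputs (an assumption; whether it does, and the output names, depend on the template), they can be retrieved from the CLI as well, for example:

$ openstack stack output show Basic scale_up_url -f value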

By invoking these URLs from the controller, we can manually scale up and scale down the number of servers.

For example, to scale up by one server, run the following command on the controller server.

ubuntu@controller# curl -XPOST -i "http://controller:8000/v1/signal/arn%3Aopenstack%3Aheat%3A%3Aeafb2b89314842dabca3e4ace895c796%3Astacks%2FBasic%2Fd89a76c8-e74f-4d7c-9779-0a9d70709c05%2Fresources%2Fserver_scaleup_policy?Timestamp=2017-09-12T06%3A14%3A53Z&SignatureMethod=HmacSHA256&AWSAccessKeyId=d8921846030e4aef8080eb932582290f&SignatureVersion=2&Signature=64Vp6CkUiqYRQvf0z9YWMvCtyP%2BHfITj2AJj0GnTCOM%3D"

Note: The template file used for this documentation is available here.

Installing Docker on Ubuntu 14.04

 

Docker is probably the most talked-about infrastructure technology of the past few years. It’s a container technology for Linux that allows a developer to package up an application with all of the parts it needs. It makes it easier to create, deploy, port and run applications by using containers. In a way, containers are like virtual machines, but rather than creating a whole virtual operating system, Docker allows applications to use the same Linux kernel as the system they’re running on.

large_v-trans

There are actually two editions of Docker, Community Edition (CE) and Enterprise Edition (EE), and both are supported on multiple platforms.

We’ll be installing the Docker CE [Community Edition] on Ubuntu 14.04 here.

Chances are that the Docker installation package available in the official Ubuntu 14.04 repository is not the latest version. If you want to get the latest version, install Docker from the official Docker repository. To do that:

First, add the GPG key for the official Docker repository to the system:

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Add the Docker repository to APT sources:

$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Update the package database with the Docker packages from the newly added repo:

$ sudo apt-get update

Install Docker

$ sudo apt-get install -y docker-ce

Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it’s running:

$ sudo service docker status

Like all Linux services, Docker can be started, stopped and restarted using the following commands.

$ sudo service docker stop
$ sudo service docker start
$ sudo service docker restart
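
To further verify that Docker can pull and run containers end to end, you can run the small hello-world test image:

$ sudo docker run hello-world

If everything is working, Docker downloads the image and prints a short confirmation message.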

 

 

 

 

Integrating SAML Applications with Okta


 

  • Login to Okta

okta-1

  • Click on Admin tab.

okta-2

  • Select Applications from dashboard.

okta-3

  • Click on Add Applications.

okta-4

  • Click on Create New App.

okta-5

  • Select SAML 2.0 and click Create.

okta-6

  • Give the app a name and click Next.

okta-7

  • Enter the URL of your application and add Attribute Statements (see the example after the screenshot below).

okta-8
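
For example, a common pattern is to map Okta user profile fields to SAML attributes using Okta expression values; the attribute names your application expects may differ, so these are illustrative:

Name: email          Value: user.email
Name: firstName      Value: user.firstName
Name: lastName       Value: user.lastName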

  • Click Next.

okta-9

  • Select I’m an Okta customer adding an internal app.
  • Check the This is an internal app that we have created box and click Finish.

okta-10

  • Click on People.

okta-11

  • Click on Assign to People and assign it to the user.

ok-13

  • Now go to My Applications.

okta-12

  • Open the Application.
  • Enter your credentials.
  • Now the Okta browser extension will prompt you to save the credentials in Okta.
  • Click Save Password.

okta-9