Production Application Deployment Orchestration Using Auto-Scaling, Load-Balancer, and OpenStack Heat

Overview:

OpenStack Heat can deploy and configure multiple instances with a single command, using the resources available in OpenStack. Such a deployment is called a Heat Stack.

Heat will create instances from images using existing flavors and networks. It can configure LBaaS and provide VIPs for our load-balanced instances. It can also use the metadata service to inject files, scripts or variables after instance deployment. It can use Ceilometer to create alarms based on instance CPU usage and associate actions like spinning up or terminating instances based on CPU load.

OpenStack provides auto-scaling through Heat. This feature reduces the need to provision instance capacity manually in advance. You can use Heat resources to detect when a Ceilometer alarm triggers and to provision or de-provision VMs accordingly. These groups of VMs sit behind a Load-balancer, which distributes the load among the VMs in the scaling group.

Whether you are running one instance or thousands, you can use Autoscaling to detect, increase, decrease, and replace instances without manual intervention.

In this document, the following two policies are defined:

a) When the CPU utilization rate is above 50%, a new instance is created automatically until the number of instances reaches 4.

b) When the CPU utilization rate is below 15%, an instance is terminated until the number of instances drops to 2.
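These two policies can be sketched as a simple decision function (illustrative Python, not part of the Heat template; the bounds of 2 and 4 instances mirror the group limits above):

```python
# Illustrative sketch of the two scaling policies described above.
# Not Heat code: Heat evaluates these rules via Ceilometer alarms.

MIN_SIZE, MAX_SIZE = 2, 4  # group bounds from the policies above

def desired_size(current_size, avg_cpu):
    """Return the new group size after applying the two policies."""
    if avg_cpu > 50:                       # policy (a): scale up by one
        return min(current_size + 1, MAX_SIZE)
    if avg_cpu < 15:                       # policy (b): scale down by one
        return max(current_size - 1, MIN_SIZE)
    return current_size                    # between 15% and 50%: no change

print(desired_size(2, 75))  # 3: scale up
print(desired_size(4, 75))  # 4: already at the maximum
print(desired_size(2, 10))  # 2: already at the minimum
```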

Autoscaling in Heat is done with the help of three main types:

OS::Heat::AutoScalingGroup

An AutoScalingGroup is a resource type that is used to encapsulate the resource that we wish to scale, and some properties related to the scale process.

OS::Heat::ScalingPolicy

A ScalingPolicy is a resource type that is used to define the effect a scale process will have on the scaled resource.

OS::Ceilometer::Alarm

An Alarm is a resource type that is used to define under which conditions the ScalingPolicy should be triggered.

 

Deploying a WordPress Application Stack with an Autoscaling group of Web servers and a Load-balancer.

Complete templates are available here.

The following example uses a snapshot of a VM with Apache already installed and configured over an Ubuntu operating system as a base image.

Parameters:

parameters:
  key_name:
    type: string
    description: Name of an existing key pair to use for the template
    default: dev
  image:
    type: string
    description: Name of image to use for servers
    default: ubuntu
  flavor:
    type: string
    description: Flavor to use for servers
    default: m1.small
  db_name:
    type: string
    description: WordPress database name
    default: wordpress
  db_username:
    type: string
    description: The WordPress database admin account username
    default: admin
  db_password:
    type: string
    description: The WordPress database admin account password
    default: admin
  db_rootpassword:
    type: string
    description: Root password for MySQL
    default: admin
    hidden: true
  public_net_id:
    type: string
    description: ID of public network for which floating IP will be allocated
    default: 7645e3f6-444d-4e4b-ad4f-a9cf49683b2d
  private_net_id:
    type: string
    description: ID of private network into which servers get deployed
    default: e232d0c0-4363-4b39-b88a-949e177f058a
  private_subnet_id:
    type: string
    description: ID of private subnet into which servers get deployed
    default: a0cf224b-1f42-4650-b219-b9320d4ea06f

The first resource one will provision is the health monitor configuration that will check the virtual machines under the Load-balancer. If the machine is down, the Load-balancer will not send traffic to it. Create the pool using the health monitor and specifying the protocol, the network and subnet, the algorithm to use for distributing the traffic, and the port that will receive the traffic on the virtual IP. Finally, create the Load-balancer using this pool.

resources:
  monitor:
    type: OS::Neutron::HealthMonitor
    properties:
      type: TCP
      delay: 3
      max_retries: 3
      timeout: 3

  lb_vip_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_param: private_net_id }
      fixed_ips:
        - subnet_id: { get_param: private_subnet_id }

  lb_pool_vip:
    type: OS::Neutron::FloatingIPAssociation
    properties:
      floatingip_id: { get_resource: lb_vip_floating_ip }
      port_id: { 'Fn::Select': ['port_id', { get_attr: [pool, vip] }] }

  pool:
    type: OS::Neutron::Pool
    properties:
      protocol: HTTP
      monitors: [{ get_resource: monitor }]
      subnet_id: { get_param: private_subnet_id }
      lb_method: ROUND_ROBIN
      vip:
        protocol_port: 80

  lb:
    type: OS::Neutron::LoadBalancer
    properties:
      protocol_port: 80
      pool_id: { get_resource: pool }
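
The ROUND_ROBIN lb_method hands successive requests to the pool members in rotation; a minimal Python sketch of the idea (member addresses are illustrative):

```python
# Minimal sketch of ROUND_ROBIN distribution across pool members.
# The addresses are illustrative, not taken from a live pool.
from itertools import cycle

members = ["10.0.1.1", "10.0.1.2"]
rotation = cycle(members)

# Five successive "requests" alternate between the two members.
targets = [next(rotation) for _ in range(5)]
print(targets)
```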

Associate a public IP for the Load-balancer so that it can be accessed from the Internet. Use the following syntax to create a floating IP for the Load-balancer:

lb_vip_floating_ip:
  type: OS::Neutron::FloatingIP
  properties:
    floating_network_id: { get_param: public_net_id }
    port_id: { get_resource: lb_vip_port }

Use the following syntax to create the Auto-scaling group (VMs) which will be acting as the web servers for the WordPress application:

web_server_group:
  type: OS::Heat::AutoScalingGroup
  properties:
    min_size: 2
    max_size: 4
    resource:
      type: Lib::INF::LB
      properties:
        flavor: {get_param: flavor}
        image: {get_param: image}
        key_name: {get_param: key_name}
        pool_id: {get_resource: pool}
        metadata: {"metering.stack": {get_param: "OS::stack_id"}}
        user_data:
          str_replace:
            template: |
              #!/bin/bash -ex
              sed -i 's/172.16.*.*/8.8.8.8/g' /etc/resolv.conf
              # install dependencies
              apt-get update
              apt-get -y install php5 libapache2-mod-php5 php5-mysql php5-gd mysql-client
              wget http://wordpress.org/latest.tar.gz
              tar -xzf latest.tar.gz
              cp wordpress/wp-config-sample.php wordpress/wp-config.php
              sed -i 's/database_name_here/$db_name/' wordpress/wp-config.php
              sed -i 's/username_here/$db_user/' wordpress/wp-config.php
              sed -i 's/password_here/$db_password/' wordpress/wp-config.php
              sed -i 's/localhost/$db_host/' wordpress/wp-config.php
              rm /var/www/html/index.html
              cp -R wordpress/* /var/www/html/
              chown -R www-data:www-data /var/www/html/
              chmod -R g+w /var/www/html/
            params:
              $db_name: {get_param: db_name}
              $db_user: {get_param: db_username}
              $db_password: {get_param: db_password}
              $db_host: {get_attr: [wp_dbserver, first_address]}

Here the resource type Lib::INF::LB indicates that another YAML file is called upon to create a custom resource, in this case a load-balanced server. A template file is created to define this resource, and the URL to that template file is provided in the environment file. This load-balancer file performs the important function of adding each web server to the load-balancing pool using the 'member' resource.

The Load-balancer server YAML file is given below.

heat_template_version: 2013-05-23
description: A load-balancer server
parameters:
  image:
    type: string
    description: Image used for servers
  key_name:
    type: string
    description: SSH key to connect to the servers
  flavor:
    type: string
    description: flavor used by the servers
  pool_id:
    type: string
    description: Pool to contact
  user_data:
    type: string
    description: Server user_data
  metadata:
    type: json
  network:
    type: string
    description: Network used by the server
    default: private
resources:
  server:
    type: OS::Nova::Server
    properties:
      name: web-server
      flavor: {get_param: flavor}
      image: {get_param: image}
      key_name: {get_param: key_name}
      metadata: {get_param: metadata}
      user_data: {get_param: user_data}
      user_data_format: RAW
      networks: [{network: {get_param: network}}]
  member:
    type: OS::Neutron::PoolMember
    properties:
      pool_id: {get_param: pool_id}
      address: {get_attr: [server, first_address]}
      protocol_port: 80
outputs:
  server_ip:
    description: IP Address of the load-balanced server.
    value: {get_attr: [server, first_address]}
  lb_member:
    description: LB member details.
    value: {get_attr: [member, show]}

 

Use the following syntax to create scaling policies:

web_server_scaleup_policy:
  type: OS::Heat::ScalingPolicy
  properties:
    adjustment_type: change_in_capacity
    auto_scaling_group_id: {get_resource: web_server_group}
    cooldown: 60
    scaling_adjustment: 1

web_server_scaledown_policy:
  type: OS::Heat::ScalingPolicy
  properties:
    adjustment_type: change_in_capacity
    auto_scaling_group_id: {get_resource: web_server_group}
    cooldown: 60
    scaling_adjustment: -1

 

Use Ceilometer to establish the alarms (both high and low) for the auto-scaling group for a specific metric. Use the following syntax to create Ceilometer alarms:

cpu_alarm_high:
  type: OS::Ceilometer::Alarm
  properties:
    description: Scale-up if the average CPU > 50% for 1 minute
    meter_name: cpu_util
    statistic: avg
    period: 60
    evaluation_periods: 1
    threshold: 50
    alarm_actions:
      - {get_attr: [web_server_scaleup_policy, alarm_url]}
    matching_metadata: {'metadata.user_metadata.stack': {get_param: "OS::stack_id"}}
    comparison_operator: gt

cpu_alarm_low:
  type: OS::Ceilometer::Alarm
  properties:
    description: Scale-down if the average CPU < 15% for 1 minute
    meter_name: cpu_util
    statistic: avg
    period: 60
    evaluation_periods: 1
    threshold: 15
    alarm_actions:
      - {get_attr: [web_server_scaledown_policy, alarm_url]}
    matching_metadata: {'metadata.user_metadata.stack': {get_param: "OS::stack_id"}}
    comparison_operator: lt
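
Conceptually, each alarm averages the cpu_util samples collected over one 60-second period and compares the result with the threshold; an illustrative Python sketch (not Ceilometer code):

```python
# Illustrative sketch of how a threshold alarm with statistic=avg,
# period=60 and evaluation_periods=1 decides whether to fire.

def alarm_state(samples, threshold, op):
    """samples: cpu_util readings from one period; op: 'gt' or 'lt'."""
    avg = sum(samples) / len(samples)
    fired = avg > threshold if op == "gt" else avg < threshold
    return "alarm" if fired else "ok"

print(alarm_state([60, 70, 80], threshold=50, op="gt"))  # alarm -> scale up
print(alarm_state([10, 12, 9], threshold=15, op="lt"))   # alarm -> scale down
print(alarm_state([30, 35, 40], threshold=50, op="gt"))  # ok -> no action
```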

The next resource is the Nova server that you will initialize from a specific image. In our example, this is Ubuntu 14.04 with Apache2 already installed and configured. This server won't be part of the auto-scaling group; we'll use it as the database server of the WordPress application.

resources:
  wp_dbserver:
    type: OS::Nova::Server
    properties:
      name: wp_db_server
      image: {get_param: image}
      flavor: {get_param: flavor}
      key_name: {get_param: key_name}
      networks:
        - port: {get_resource: wp_dbserver_port}
      user_data_format: RAW
      user_data:
        str_replace:
          params:
            __mysql_root_password__: {get_param: db_rootpassword}
            __database_name__: {get_param: db_name}
            __database_user__: {get_param: db_username}
            __database_password__: {get_param: db_password}
          template: |
            #!/bin/bash
            apt-get update
            export DEBIAN_FRONTEND=noninteractive
            apt-get install -y mysql-server
            mysqladmin -u root password "__mysql_root_password__"
            sed -i "s/bind-address.*/bind-address = 0.0.0.0/" /etc/mysql/my.cnf
            service mysql restart
            mysql -u root --password="__mysql_root_password__" <<EOF
            CREATE DATABASE __database_name__;
            CREATE USER '__database_user__'@'localhost';
            SET PASSWORD FOR '__database_user__'@'localhost'=PASSWORD("__database_password__");
            GRANT ALL PRIVILEGES ON __database_name__.* TO '__database_user__'@'localhost' IDENTIFIED BY '__database_password__';
            CREATE USER '__database_user__'@'%';
            SET PASSWORD FOR '__database_user__'@'%'=PASSWORD("__database_password__");
            GRANT ALL PRIVILEGES ON __database_name__.* TO '__database_user__'@'%' IDENTIFIED BY '__database_password__';
            FLUSH PRIVILEGES;
            EOF
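
The str_replace intrinsic used in both user_data blocks simply substitutes each params key wherever it appears in the template string; a conceptual Python sketch (not Heat's actual implementation):

```python
# Conceptual sketch of Heat's str_replace intrinsic: every occurrence of
# each params key in the template is replaced by its value.

def str_replace(template, params):
    for key, value in params.items():
        template = template.replace(key, value)
    return template

script = str_replace(
    'mysqladmin -u root password "__mysql_root_password__"',
    {"__mysql_root_password__": "admin"},  # illustrative value
)
print(script)  # mysqladmin -u root password "admin"
```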

Now, to call upon the load-balancer server file, we'll use an environment file, which is described below.

resource_registry:
  Lib::INF::LB: https://raw.githubusercontent.com/infrastacklabs/test/master/lb.yaml

Here the URL points to the location where the Load-balancer YAML file is stored.

 

Launching the stack on Horizon with this Heat template:

Log in to the OpenStack environment, open the Orchestration section in the left tab, and click Launch Stack as shown in the picture.

autoscalingLB1

 

Click Launch Stack, select the WordPress.yaml file and the env.yaml file, and click Next.

autoscalingLB2

 

Verify the parameters and click Launch.

autoscalingLB3

 

This should launch the Heat Stack.

autoscalingLB4

Wait a few minutes for MySQL and WordPress to be installed on the corresponding servers.
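
The stack can also be launched from the CLI instead of Horizon; a sketch, assuming the template and environment files are saved as WordPress.yaml and env.yaml as above:

```shell
# Launch the stack with the heat CLI (file names are the ones used above).
heat stack-create wordpress -f WordPress.yaml -e env.yaml
# Watch progress until the status becomes CREATE_COMPLETE.
heat stack-list
```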

 

One should be able to see a new load-balancer in the Network > Load Balancers section with a public IP assigned to it.

autoscalingLB5

 

Browsing to this public IP should bring up the Apache welcome page.

autoscalingLB6

 

The stack should create three servers in total: two web servers and one DB server with MySQL installed.

autoscalingLB7

 

 

Assign a public IP to any of the web servers and access it using the browser. It should bring up a WordPress installation page like the one below.

autoscalingLB8

 

Install the application

autoscalingLB9

 

autoscalingLB10

 

Log in with the given credentials, and one should be able to access the WordPress dashboard.

autoscalingLB11

 

Scaling:

Scaling can be done in two ways: Manual and Automatic.

By using Ceilometer alarms, the Heat stack scales the servers automatically depending on the CPU usage of the web servers in the group.

By invoking the webhook URLs in the Stack Overview, one can manually scale the number of web servers in the group up or down.

autoscalingLB12

 

For example:

Invoking the scale-up URL should create a new web server and add it under the load-balancer.
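
The webhook is a pre-signed URL, so a plain HTTP POST is enough to trigger it; the placeholder below stands for the alarm_url value shown in the Stack Overview:

```shell
# Hypothetical invocation: substitute the scale-up alarm_url from the
# Stack Overview page; no request body is needed.
curl -X POST "<web_server_scaleup_policy alarm_url>"
```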

autoscalingLB13

 

autoscalingLB14

Here, by using OpenStack Heat, Ceilometer, and a load-balancer, we are able to create an application infrastructure that contains a database server and a group of web servers that auto-scales depending on the CPU usage of those servers. The load-balancer automatically registers the instances in the group and distributes incoming traffic across them.


Application deployment using Docker-Compose on OMegha Public Cloud

Docker is a container technology for Linux that allows a developer to package up an application with all of the parts it needs. It makes it easier to create, deploy, port and run applications by using containers. Docker Compose makes dealing with the orchestration processes of Docker containers (such as starting up, shutting down, and setting up intra-container linking and volumes) really easy.

To really take full advantage of Docker’s potential, it’s best if each component of your application runs in its own container. For complex applications with a lot of components, orchestrating all the containers to start up and shut down together can quickly become knotty.

To deal with this, the Docker team came up with a solution based on Fig, called Docker Compose.

This article provides a real-world example of using Docker Compose to install an application, in this case WordPress with PHPMyAdmin as an extra. WordPress normally runs on a LAMP stack, which means Linux, Apache, MySQL/MariaDB, and PHP. The official WordPress Docker image includes Apache and PHP for us, so the only part we have to worry about is MariaDB.

 

Prerequisites:

  • Host Machine: Ubuntu 14.04 OMegha Bolt (VM).
  • A non-root user with sudo privileges.
  • Reasonable knowledge of Linux commands.

 

Installing Docker:

The Docker installation package available in the official Ubuntu 14.04 repository may not be the latest version. To get the latest version, install Docker from the official Docker repository. To do that:

First, add the GPG key for the official Docker repository to the system:

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Add the Docker repository to APT sources:

$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Update the package database with the Docker packages from the newly added repo:

$ sudo apt-get update

Install Docker

$ sudo apt-get install -y docker-ce

Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it’s running:

$ sudo service docker status

 

Installing Docker Compose:

Now that you have Docker installed, let's go ahead and install Docker Compose. First, install python-pip as a prerequisite:

$ sudo apt-get install python-pip

Install Docker Compose:

$ sudo pip install docker-compose


 

Installation of Application:

Here we’ll install a WordPress application with PHPMyAdmin using Docker-compose. WordPress normally runs on a LAMP stack, which means Linux, Apache, MySQL/MariaDB, and PHP. The official WordPress Docker image includes Apache and PHP for us, so we’ll have to run MariaDB container as a source for database purposes.

Steps:

Create a folder where our data will live and create the docker-compose.yml file to run our app:

$ mkdir ~/wordpress
$ cd ~/wordpress

Then create a docker-compose.yml with any text editor.

$ sudo vi ~/wordpress/docker-compose.yml

And paste in the following:

wordpress:
  image: wordpress
  links:
    - wordpress_db:mysql
  ports:
    - 8080:80
wordpress_db:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: mypassword
phpmyadmin:
  image: nazarpc/phpmyadmin
  links:
    - wordpress_db:mysql
  ports:
    - 8181:80
  environment:
    MYSQL_USERNAME: root
    MYSQL_ROOT_PASSWORD: mypassword

 

Explaining the YML file:

  • wordpress: tells Docker Compose to start a new container called wordpress and download the wordpress image from the Docker Hub.
  • wordpress_db: defines a new container called wordpress_db and tells it to use the mariadb image from the Docker Hub.
  • links: links the wordpress_db container into the wordpress container under the name mysql (inside the wordpress container, the hostname mysql will be forwarded to our wordpress_db container).
  • environment: sets the MYSQL_ROOT_PASSWORD variable needed to start the MariaDB server. The MariaDB Docker image checks for this environment variable when it starts up and takes care of setting up the DB with a root account whose password is the value of MYSQL_ROOT_PASSWORD.
  • ports: sets up a port forward so that we can connect to our WordPress install once it actually loads up. The first port number is the port on the host, and the second is the port inside the container; so this configuration forwards requests on port 8080 of the host to the default web-server port 80 inside the container.
  • phpmyadmin: grabs the docker-phpmyadmin image by community member nazarpc, links it to our wordpress_db container with the name mysql, exposes its port 80 on port 8181 of the host system, and finally sets a couple of environment variables with our MariaDB username and password.

This image does not automatically grab the MYSQL_ROOT_PASSWORD environment variable from the wordpress_db container's environment the way the wordpress image does, so we actually have to copy the MYSQL_ROOT_PASSWORD: mypassword line from the wordpress_db container and set the username to root.

 

Now start up the application group:

$ sudo docker-compose up -d

Run it with the -d option, which will tell docker-compose to run the containers in the background so that you can keep using your terminal.

One can see the corresponding images being pulled and run as containers. Towards the end of the output, you can see that the containers have been created and started:

Creating wordpress_wordpress_db_1 ... done
Creating wordpress_wordpress_1 ... done
Creating wordpress_phpmyadmin_1 ... done

To verify this, list the running containers using the command

$ sudo docker ps

To verify that the app is up, open up a web browser and browse to the IP of your box on port 8080. Here the host IP address we use for this document is of format xx.xxx.xx.xxx.

Type <host-IP>:8080 into the browser. This should bring up a WordPress installation page as in the picture.

lbaas

For demo purposes, fill in the fields as below and install the application.

lbaas2

A successful installation will bring up the following page.

lbaas3

To verify the phpMyAdmin, open another tab and type xx.xxx.xx.xxx:8181 into it.

This should open up the phpMyAdmin page.

lbaas4

To login, use the details that were used in the YML file. In this case, the user is root, and the password is ‘mypassword’.

Upon logging in, one will be able to access the databases in the MariaDB server as the picture shows;

lbaas5

 

Here, by using Docker Compose and Docker concepts in general, installing WordPress and phpMyAdmin has been made much easier than the traditional ways of installing these applications.

OpenStack Load Balancer (LBaaS) Setup

Installing LBaaS

Run the following command on the controller node to install the required components (HAProxy and the LBaaS package).

root@controller:~#  yum install haproxy openstack-neutron-lbaas

 

Configuring LBaaS

Edit the file – /etc/neutron/neutron.conf

service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
service_plugins = router,lbaas

 

Edit the file – /etc/neutron/neutron_lbaas.conf

service_provider=LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

 

Edit the file – /etc/neutron/lbaas_agent.ini

[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver

 

Edit the file – /etc/openstack-dashboard/local_settings.py

OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_lb': True,
    ...
}

 

Restart the services

# service neutron-server restart
# service neutron-lbaas-agent restart

 

Now you will be able to see "Load Balancers" on the OpenStack Dashboard under the Project -> Network menu.

 

Creating a ROUND_ROBIN Load Balancer between two instances

Get the details of running Instances

root@controller:~# nova list
+--------------------------------------+------+--------+------------+-------------+---------------------+
| ID                                   | Name | Status | Task State | Power State | Networks            |
+--------------------------------------+------+--------+------------+-------------+---------------------+
| a2e61f7a-f701-4e18-a4e4-3240c6e9b946 | lb1  | ACTIVE | -          | Running     | private=10.0.1.1    |
| 4cbd5376-6d32-414c-9272-2c22002d1468 | lb2  | ACTIVE | -          | Running     | private=10.0.1.2    |
+--------------------------------------+------+--------+------------+-------------+---------------------+

I have two instances, lb1 and lb2, each running an Apache2 server:

VM Name -> VM ID -> Private Network IP

lb1->a2e61f7a-f701-4e18-a4e4-3240c6e9b946->10.0.1.1

lb2->4cbd5376-6d32-414c-9272-2c22002d1468->10.0.1.2

 

Get network list

root@controller:~# neutron net-list
+--------------------------------------+---------+--------------------------------------------------------+
| id                                   | name    | subnets                                                |
+--------------------------------------+---------+--------------------------------------------------------+
| 297c004b-435b-4741-b2f0-53a011403f61 | lb_net1 | e94e1b6c-aff2-45f2-a13c-a41593125a2d 10.0.2.0/24       |
| 7645e3f6-444d-4e4b-ad4f-a9cf49683b2d | public  | f72e1955-b32f-4d49-8a6f-1833d6e2e1f3 10.0.16.0/26      | 
|                                      |         | 219765fc-be8d-4ef5-90a2-6c7ef2c961e9 10.0.22.0/26      |
| e232d0c0-4363-4b39-b88a-949e177f058a | private | a0cf224b-1f42-4650-b219-b9320d4ea06f 10.0.1.0/24       |
+--------------------------------------+---------+--------------------------------------------------------+

 

Get subnet list

root@controller:~# neutron subnet-list
+--------------------------------------+------------+-------------------+------------------------------------------------------+
| id                                   | name       | cidr              | allocation_pools                                     |
+--------------------------------------+------------+-------------------+------------------------------------------------------+
| e94e1b6c-aff2-45f2-a13c-a41593125a2d | lb_subnet1 | 10.0.2.0/24       | {"start": "10.0.2.1", "end": "10.0.2.80"}            |
| f72e1955-b32f-4d49-8a6f-1833d6e2e1f3 | public1    | 10.0.16.0/26      | {"start": "10.0.16.1", "end": "10.0.16.60"}          |
| 219765fc-be8d-4ef5-90a2-6c7ef2c961e9 | public     | 10.0.22.0/26      | {"start": "10.0.22.1", "end": "10.0.22.60"}          |
| a0cf224b-1f42-4650-b219-b9320d4ea06f | private    | 10.0.1.0/24       | {"start": "10.0.1.1", "end": "10.0.1.80"}            |
+--------------------------------------+------------+-------------------+------------------------------------------------------+

 

Create Pool

root@controller:~# neutron lb-pool-create --name http-pool --lb-method ROUND_ROBIN --protocol HTTP --subnet-id a0cf224b-1f42-4650-b219-b9320d4ea06f
Created a new pool:
+------------------------+--------------------------------------+
| Field                  | Value                                |
+------------------------+--------------------------------------+
| admin_state_up         | True                                 |
| description            |                                      |
| health_monitors        |                                      |
| health_monitors_status |                                      |
| id                     | 2cef1601-dfbe-4e02-beaf-71c303a3009f |
| lb_method              | ROUND_ROBIN                          |
| members                |                                      |
| name                   | http-pool                            |
| protocol               | HTTP                                 |
| provider               | haproxy                              |
| status                 | PENDING_CREATE                       |
| status_description     |                                      |
| subnet_id              | a0cf224b-1f42-4650-b219-b9320d4ea06f |
| tenant_id              | 4664cf886f57480d9e3a1af4bf8a3a65     |
| vip_id                 |                                      |
+------------------------+--------------------------------------+

 

Check the pool status

root@controller:~# neutron lb-pool-list
+--------------------------------------+-----------+----------+-------------+----------+----------------+--------+
| id                                   | name      | provider | lb_method   | protocol | admin_state_up | status |
+--------------------------------------+-----------+----------+-------------+----------+----------------+--------+
| 2cef1601-dfbe-4e02-beaf-71c303a3009f | http-pool | haproxy  | ROUND_ROBIN | HTTP     | True           | ACTIVE |
+--------------------------------------+-----------+----------+-------------+----------+----------------+--------+

 

Add members to pool

root@controller:~# neutron lb-member-create --address 10.0.1.1 --protocol-port 80 http-pool
Created a new member:
+--------------------+--------------------------------------+
| Field              | Value                                |
+--------------------+--------------------------------------+
| address            | 10.0.1.1                             |
| admin_state_up     | True                                 |
| id                 | 4e1a4853-9b98-4571-a306-9daa1ca9f08b |
| pool_id            | 2cef1601-dfbe-4e02-beaf-71c303a3009f |
| protocol_port      | 80                                   |
| status             | PENDING_CREATE                       |
| status_description |                                      |
| tenant_id          | 4664cf886f57480d9e3a1af4bf8a3a65     |
| weight             | 1                                    |
+--------------------+--------------------------------------+

root@controller:~# neutron lb-member-create --address 10.0.1.2 --protocol-port 80 http-pool
Created a new member:
+--------------------+--------------------------------------+
| Field              | Value                                |
+--------------------+--------------------------------------+
| address            | 10.0.1.2                             |
| admin_state_up     | True                                 |
| id                 | ed53de4b-cad7-417f-82f3-4777f6bc831b |
| pool_id            | 2cef1601-dfbe-4e02-beaf-71c303a3009f |
| protocol_port      | 80                                   |
| status             | PENDING_CREATE                       |
| status_description |                                      |
| tenant_id          | 4664cf886f57480d9e3a1af4bf8a3a65     |
| weight             | 1                                    |
+--------------------+--------------------------------------+

 

Check that the members are added and ACTIVE

root@controller:~# neutron lb-member-list --sort-key address --sort-dir asc
+--------------------------------------+-------------+---------------+--------+----------------+--------+
| id                                   | address     | protocol_port | weight | admin_state_up | status |
+--------------------------------------+-------------+---------------+--------+----------------+--------+
| 4e1a4853-9b98-4571-a306-9daa1ca9f08b | 10.0.1.1    |            80 |      1 | True           | ACTIVE |
| ed53de4b-cad7-417f-82f3-4777f6bc831b | 10.0.1.2    |            80 |      1 | True           | ACTIVE |
+--------------------------------------+-------------+---------------+--------+----------------+--------+

 

Create Health Monitor

root@controller:~# neutron lb-healthmonitor-create --delay 3 --type HTTP --max-retries 3 --timeout 3
Created a new health_monitor:
+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| admin_state_up | True                                 |
| delay          | 3                                    |
| expected_codes | 200                                  |
| http_method    | GET                                  |
| id             | 479cd68a-fe2d-4234-89d8-13b1c26291d1 |
| max_retries    | 3                                    |
| pools          |                                      |
| tenant_id      | 4664cf886f57480d9e3a1af4bf8a3a65     |
| timeout        | 3                                    |
| type           | HTTP                                 |
| url_path       | /                                    |
+----------------+--------------------------------------+

 

Check the status of health monitor

root@controller:~# neutron lb-healthmonitor-list
+--------------------------------------+------+----------------+
| id                                   | type | admin_state_up |
+--------------------------------------+------+----------------+
| 479cd68a-fe2d-4234-89d8-13b1c26291d1 | HTTP | True           |
+--------------------------------------+------+----------------+

 

The Health Monitor ID is 479cd68a-fe2d-4234-89d8-13b1c26291d1

Associate Health Monitor to Pool

root@controller:~# neutron lb-healthmonitor-associate 479cd68a-fe2d-4234-89d8-13b1c26291d1 http-pool
Associated health monitor 479cd68a-fe2d-4234-89d8-13b1c26291d1

 

Create a virtual IP for pool with HTTP port

root@controller:~# neutron lb-vip-create --name  http-vip --protocol-port 80 --protocol HTTP --subnet-id a0cf224b-1f42-4650-b219-b9320d4ea06f http-pool
Created a new vip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| address             | 10.0.1.4                             |
| admin_state_up      | True                                 |
| connection_limit    | -1                                   |
| description         |                                      |
| id                  | 8970ebd2-a19d-4650-b6ee-6e18985e823a |
| name                | http-vip                             |
| pool_id             | 2cef1601-dfbe-4e02-beaf-71c303a3009f |
| port_id             | 581babfc-e596-4cae-93a6-3349ed4e5577 |
| protocol            | HTTP                                 |
| protocol_port       | 80                                   |
| session_persistence |                                      |
| status              | PENDING_CREATE                       |
| status_description  |                                      |
| subnet_id           | a0cf224b-1f42-4650-b219-b9320d4ea06f |
| tenant_id           | 4664cf886f57480d9e3a1af4bf8a3a65     |
+---------------------+--------------------------------------+

 

Check VIP status

root@controller:~# neutron lb-vip-list
+--------------------------------------+----------+-------------+----------+----------------+--------+
| id                                   | name     | address     | protocol | admin_state_up | status |
+--------------------------------------+----------+-------------+----------+----------------+--------+
| 8970ebd2-a19d-4650-b6ee-6e18985e823a | http-vip | 10.0.1.4    | HTTP     | True           | ACTIVE |
+--------------------------------------+----------+-------------+----------+----------------+--------+

 

Associate a floating IP to VIP

Add a floating IP from external net

root@controller:~# neutron floatingip-create public
Created a new floating ip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 10.0.22.3                            |
| floating_network_id | 7645e3f6-444d-4e4b-ad4f-a9cf49683b2d |
| id                  | 167114de-feb7-4746-b3d3-4e9c801b7375 |
| port_id             |                                      |
| router_id           |                                      |
| status              | DOWN                                 |
| tenant_id           | 4664cf886f57480d9e3a1af4bf8a3a65     |
+---------------------+--------------------------------------+

 

Get the port ID and floating IP ID, and associate the floating IP with the port ID of the VIP

root@controller:~# neutron floatingip-associate 167114de-feb7-4746-b3d3-4e9c801b7375 581babfc-e596-4cae-93a6-3349ed4e5577
Associated floating IP 167114de-feb7-4746-b3d3-4e9c801b7375

 

Check the Floating IP list to verify

root@controller:~# neutron floatingip-list --sort-key floating_ip_address --sort-dir asc
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| 03bc56f2-c78a-4b90-919c-7c89dec548d7 |                  | 10.0.22.2           |                                      |
| 0e5ce334-18e6-4389-8faa-666fbb86e1bc |                  | 10.0.22.5           |                                      |
| 12b33a39-89b1-423f-90f9-ad3d7617d536 |                  | 10.0.22.7           |                                      |
| 14992ab1-19d1-4271-bd6b-86d52bb1a2d3 |                  | 10.0.22.9           |                                      |
| 167114de-feb7-4746-b3d3-4e9c801b7375 | 10.0.1.4         | 10.0.22.3           | 581babfc-e596-4cae-93a6-3349ed4e5577 |
| 314fa02e-9ae6-42b1-9c15-11202aff8489 |                  | 10.0.16.4           |                                      |
| 888b1e36-5fc9-4aa1-995a-cb6e6834a1e3 |                  | 10.0.16.6           |                                      |
| 8fc7e5f1-4a08-4ce3-b17a-20212348051f |                  | 10.0.22.9           |                                      |
+--------------------------------------+------------------+---------------------+--------------------------------------+

 

Now check http://<associated_floating_ip>/ in a browser.
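If each pool member serves a page that identifies it (for example its hostname), a quick loop against the floating IP shows the round-robin distribution. The `tally` helper below is illustrative, not part of Neutron; it is fed captured sample responses here so the snippet runs anywhere, and the commented curl loop shows how you would drive it against the real VIP:

```shell
# tally: count how many responses each backend served, busiest first.
# Against the live VIP you would run something like:
#   for i in $(seq 1 10); do curl -s http://10.0.22.3/; done | tally
tally() { sort | uniq -c | sort -rn; }

# Sample captured responses stand in for live curl output:
printf 'web1\nweb2\nweb1\nweb2\nweb1\n' | tally
```

A roughly even split across the pool members confirms the load balancer is distributing requests.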

Infrastructure as Code [IaC]: Auto-scale application infrastructure using OpenStack Heat

OpenStack Heat can deploy and configure multiple instances in one command using resources we have in OpenStack. That’s called a Heat Stack.

Heat will create instances from images using existing flavors and networks. It can configure LBaaS and provide VIPs for our load-balanced instances. It can also use the metadata service to inject files, scripts or variables after instance deployment. It can even use Ceilometer to create alarms based on instance CPU usage and associate actions, like spinning up or terminating instances, based on CPU load.

OpenStack Heat is an application orchestration engine designed for the OpenStack cloud. It is integrated into the OpenStack distribution and can be used via the CLI or the Horizon GUI. Heat uses a YAML-based templating language called HOT (Heat Orchestration Template) for defining application topologies.

Autoscaling in Heat is done with the help of three main types:

OS::Heat::AutoScalingGroup

An AutoScalingGroup is a resource type that is used to encapsulate the resource that we wish to scale and some properties related to the scaling process.

OS::Heat::ScalingPolicy

A ScalingPolicy is a resource type that is used to define the effect a scaling process will have on the scaled resource.

OS::Ceilometer::Alarm

An Alarm is a resource type that is used to define under which conditions the ScalingPolicy should be triggered.

It helps to go through a basic auto-scaling template to understand this better; the template we’ll be using for this example is explained below.

Explaining the template:

Parameters: These are tidbits of information, like a specific image ID or a particular network ID, that are passed to the Heat template by the user. This allows users to create more generic templates that could potentially use different resources.

Resources: Resources are the specific objects that Heat will create and/or modify as part of its operation, and the second of the three major sections in a Heat template.

auto_scale_group: The group of servers that will be constantly monitored and autoscaled when the conditions are met. In this example, it’ll have a minimum of one server and a maximum of three servers.

server_scaleup_policy: The effect a scaling process will have on the scaled resource. Here it’ll add a new server.

server_scaledown_policy: The effect a scaling process will have on the scaled resource. Here it’ll delete a server.

cpu_alarm_high: Alarm condition that triggers the scale-up process. Here it scales up if the instance CPU load is greater than 50% for more than a minute.

cpu_alarm_low: Alarm condition that triggers the scale-down process. Here it scales down if the instance CPU load is lower than 15% for more than a minute.
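Putting those pieces together, a minimal HOT sketch of such a template looks roughly like the following. The resource names match the ones discussed above; the parameter names, cooldown values and metering metadata are illustrative, not taken verbatim from the template used in this example:

```yaml
heat_template_version: 2014-10-16

parameters:
  image:
    type: string
  flavor:
    type: string
  network:
    type: string

resources:
  auto_scale_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 3
      resource:
        type: OS::Nova::Server
        properties:
          image: { get_param: image }
          flavor: { get_param: flavor }
          networks: [{ network: { get_param: network } }]
          # Tag samples so the alarms only match this stack's instances
          metadata: { "metering.stack": { get_param: "OS::stack_id" } }

  server_scaleup_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: auto_scale_group }
      cooldown: 60
      scaling_adjustment: 1

  server_scaledown_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: auto_scale_group }
      cooldown: 60
      scaling_adjustment: -1

  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 50
      comparison_operator: gt
      alarm_actions:
        - { get_attr: [server_scaleup_policy, alarm_url] }
      matching_metadata: { 'metadata.user_metadata.stack': { get_param: "OS::stack_id" } }

  cpu_alarm_low:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 15
      comparison_operator: lt
      alarm_actions:
        - { get_attr: [server_scaledown_policy, alarm_url] }
      matching_metadata: { 'metadata.user_metadata.stack': { get_param: "OS::stack_id" } }
```

Each alarm signals its ScalingPolicy via the policy’s alarm_url, and the policy then adjusts the AutoScalingGroup by +1 or -1 servers within the 1–3 bounds.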

Launching the stack on Horizon with this Heat template:

Log in to the OpenStack environment, open the Orchestration section in the left tab and click Launch Stack as shown in the screenshot.

[Screenshot 1]

Click on Launch stack.

[Screenshot 2]

 

Select the template file, which is named basic.yaml in our case.

[Screenshot 3]

Launch the Heat stack, which will create a single instance named test-server. The process takes a minute or two, after which you can try accessing the newly created server once a public IP address is assigned. As this is a basic template, the VMs in this example do not actually do anything.

To check if the autoscaling is working, we have to stress test the instance that’s created.

SSH into this instance and install the stress tool.

ubuntu@test-server:~$ sudo apt-get install stress

After installing it we can start stress testing. As we are testing using CPU load, we can push the CPU usage of the instance up into the high 90s, which will force the Heat orchestration part of OpenStack to spin up a new machine.

To do that, run the command

ubuntu@test-server:~$ stress -c 1

Running the top command produces an ordered list of running processes and shows the load on the instance.

ubuntu@test-server:~$ top

This will show that the stress process is taking up more than 90% of CPU which, by the configuration in the template, should be enough to trigger the scale-up policy.
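One detail worth noting: stress -c 1 saturates a single core, so on a flavor with several vCPUs the averaged cpu_util meter can stay below the 50% threshold. Matching the worker count to the vCPU count avoids that. The snippet below only prints the command rather than running it, so it is safe to execute anywhere; the --timeout value is illustrative:

```shell
# cpu_util is averaged across all vCPUs, so run one stress worker per
# vCPU to push the meter toward 100% on multi-vCPU flavors.
# The command is only echoed here; copy it to run the actual test.
CPUS=$(nproc)
echo "stress -c ${CPUS} --timeout 300"
```

On a single-vCPU flavor this reduces to the stress -c 1 used above.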

Checking the Heat engine logs on the controller server should give us more insight into how this was brought about.

Once the scale-up process is triggered, the Heat engine logs will look like the following:

root@controller:/var/log/heat# tail -f heat-engine.log
[……………………………………………………………………………………………………………………………………………………………………………………………………...
2017-09-11 13:39:09.101 2558 INFO heat.engine.resources.openstack.heat.scaling_policy [-] Alarm server_scaleup_policy, new state alarm
2017-09-11 13:39:09.162 2558 INFO heat.engine.resources.openstack.heat.scaling_policy [-] server_scaleup_policy Alarm, adjusting Group auto_scale_group with id Basic-auto_scale_group-5yoys2hit67h by 1
2017-09-11 13:39:09.359 2559 INFO heat.engine.service [req-0636aff9-ec21-493c-bddb-dd4307c6cc88 - eafb2b89314842dabca3e4ace895c796] Updating stack Basic-auto_scale_group-5yoys2hit67h
2017-09-11 13:39:09.797 2559 INFO heat.engine.resource [req-0636aff9-ec21-493c-bddb-dd4307c6cc88 - eafb2b89314842dabca3e4ace895c796] Validating Server "knrlwocbkiel"
2017-09-11 13:39:12.364 2559 INFO heat.engine.resource [req-0636aff9-ec21-493c-bddb-dd4307c6cc88 - eafb2b89314842dabca3e4ace895c796] Validating Server "cqr4fi6ivrhl"
2017-09-11 13:39:13.344 2559 INFO heat.engine.resource [req-0636aff9-ec21-493c-bddb-dd4307c6cc88 - eafb2b89314842dabca3e4ace895c796] Validating Server "oxym4qwzzof4"
2017-09-11 13:39:14.668 2559 INFO heat.engine.update [-] Resource knrlwocbkiel for stack Basic-auto_scale_group-5yoys2hit67h updated
2017-09-11 13:39:14.783 2559 INFO heat.engine.resource [-] creating Server "cqr4fi6ivrhl" Stack "Basic-auto_scale_group-5yoys2hit67h" [ce6979a7-63fb-419f-835f-ff480af0a0dc]
2017-09-11 13:39:17.218 2559 INFO heat.engine.update [-] Resource oxym4qwzzof4 for stack Basic-auto_scale_group-5yoys2hit67h updated
2017-09-11 13:39:26.776 2559 INFO heat.engine.stack [-] Stack DELETE IN_PROGRESS (Basic-auto_scale_group-5yoys2hit67h): Stack DELETE started
2017-09-11 13:39:26.936 2559 INFO heat.engine.stack [-] Stack DELETE COMPLETE (Basic-auto_scale_group-5yoys2hit67h): Stack DELETE completed successfully
2017-09-11 13:39:27.064 2559 INFO heat.engine.stack [-] Stack UPDATE COMPLETE (Basic-auto_scale_group-5yoys2hit67h): Stack UPDATE completed successfully
……………………………………………………………………………………………………………………………………………………………………………………………………………]

This scale-up policy is triggered by the alarm generated by Ceilometer. To check which metrics Ceilometer monitors, go to the compute node and check ceilometer-agent-compute.log.

root@compute:/var/log/ceilometer# tail -f ceilometer-agent-compute.log
2017-09-11 13:18:20.734 1257 INFO ceilometer.agent.manager [-] Polling pollster disk.write.requests.rate in the context of meter_source
2017-09-11 13:18:20.809 1257 INFO ceilometer.agent.manager [-] Polling pollster disk.allocation in the context of meter_source
2017-09-11 13:18:20.813 1257 INFO ceilometer.agent.manager [-] Polling pollster disk.latency in the context of meter_source
2017-09-11 13:18:20.813 1257 INFO ceilometer.agent.manager [-] Polling pollster disk.capacity in the context of meter_source
2017-09-11 13:18:20.817 1257 INFO ceilometer.agent.manager [-] Polling pollster disk.write.requests in the context of meter_source
2017-09-11 13:18:20.820 1257 INFO ceilometer.agent.manager [-] Polling pollster disk.read.requests.rate in the context of meter_source
2017-09-11 13:18:20.821 1257 INFO ceilometer.agent.manager [-] Polling pollster disk.device.write.requests.rate in the context of meter_source
2017-09-11 13:18:20.821 1257 INFO ceilometer.agent.manager [-] Polling pollster disk.device.write.requests in the context of meter_source
2017-09-11 13:18:20.825 1257 INFO ceilometer.agent.manager [-] Polling pollster disk.read.requests in the context of meter_source
2017-09-11 13:18:20.828 1257 INFO ceilometer.agent.manager [-] Polling pollster cpu_util in the context of meter_source
2017-09-11 13:18:20.829 1257 INFO ceilometer.agent.manager [-] Polling pollster memory.resident in the context of meter_source
2017-09-11 13:18:20.836 1257 INFO ceilometer.agent.manager [-] Polling pollster disk.device.read.requests in the context of meter_source

The cpu_util meter, visible in the log above, is the one our alarms are built on.
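To pull just the cpu_util pollster entries out of the busy agent log, a grep is enough. Two sample log lines stand in for the live file here so the snippet is self-contained; in practice you would run the grep against /var/log/ceilometer/ceilometer-agent-compute.log:

```shell
# In practice:
#   grep 'Polling pollster cpu_util' /var/log/ceilometer/ceilometer-agent-compute.log
grep 'Polling pollster cpu_util' <<'EOF'
2017-09-11 13:18:20.825 1257 INFO ceilometer.agent.manager [-] Polling pollster disk.read.requests in the context of meter_source
2017-09-11 13:18:20.828 1257 INFO ceilometer.agent.manager [-] Polling pollster cpu_util in the context of meter_source
EOF
```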

Here the Heat engine log showed that a new instance was spawned as the scale-up policy was triggered. Let’s verify it in the Horizon dashboard.

[Screenshot 4]

By the same logic, a low CPU load should trigger the scale-down policy and remove an instance.

We can check this by stopping the stress process started earlier and keeping the CPU load low.

Once the scale-down process is triggered, the Heat engine logs will look like the following:

root@controller:/var/log/heat# tail -f heat-engine.log
2017-09-11 15:18:24.002 2557 INFO heat.engine.resources.openstack.heat.scaling_policy [-] Alarm server_scaledown_policy, new state alarm
2017-09-11 15:18:24.071 2557 INFO heat.engine.resources.openstack.heat.scaling_policy [-] server_scaledown_policy Alarm, adjusting Group auto_scale_group with id Basic-auto_scale_group-5yoys2hit67h by -1
2017-09-11 15:18:24.298 2558 INFO heat.engine.service [req-a5a2d796-6329-45c0-8d4d-a1624132d4d5 - eafb2b89314842dabca3e4ace895c796] Updating stack Basic-auto_scale_group-5yoys2hit67h
2017-09-11 15:18:24.740 2558 INFO heat.engine.resource [req-a5a2d796-6329-45c0-8d4d-a1624132d4d5 - eafb2b89314842dabca3e4ace895c796] Validating Server "k6lh3qvth5vd"
2017-09-11 15:18:26.353 2558 INFO heat.engine.update [-] Resource k6lh3qvth5vd for stack Basic-auto_scale_group-5yoys2hit67h updated
2017-09-11 15:18:26.355 2558 INFO heat.engine.resource [-] deleting Server "nfivm24eye5c" [36a3eda3-0c57-45dd-8cde-e21909c54fa8] Stack "Basic-auto_scale_group-5yoys2hit67h" [ce6979a7-63fb-419f-835f-ff480af0a0dc]
2017-09-11 15:18:30.053 2558 INFO heat.engine.stack [-] Stack DELETE IN_PROGRESS (Basic-auto_scale_group-5yoys2hit67h): Stack DELETE started
2017-09-11 15:18:30.213 2558 INFO heat.engine.stack [-] Stack DELETE COMPLETE (Basic-auto_scale_group-5yoys2hit67h): Stack DELETE completed successfully
2017-09-11 15:18:30.365 2558 INFO heat.engine.stack [-] Stack UPDATE COMPLETE (Basic-auto_scale_group-5yoys2hit67h): Stack UPDATE completed successfully

In the Horizon dashboard, the number of instances will have gone back to one.

[Screenshot 5]

Thus, according to the CPU utilization, the instances in this particular group are scaled.

MANUAL SCALING USING WEBHOOK URLs:

For manual scaling we can use the webhook URLs created during stack creation, which are available in the Stack Overview section of the Horizon dashboard.

[Screenshot 6]

By invoking these URLs from the controller, we can manually scale up and scale down the number of servers.

For example, to scale up by one server, run the following command on the controller server.

ubuntu@controller# curl -XPOST -i "http://controller:8000/v1/signal/arn%3Aopenstack%3Aheat%3A%3Aeafb2b89314842dabca3e4ace895c796%3Astacks%2FBasic%2Fd89a76c8-e74f-4d7c-9779-0a9d70709c05%2Fresources%2Fserver_scaleup_policy?Timestamp=2017-09-12T06%3A14%3A53Z&SignatureMethod=HmacSHA256&AWSAccessKeyId=d8921846030e4aef8080eb932582290f&SignatureVersion=2&Signature=64Vp6CkUiqYRQvf0z9YWMvCtyP%2BHfITj2AJj0GnTCOM%3D"
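The signed webhook URL is percent-encoded, which makes the embedded ARN hard to read. A small bash helper (illustrative, not part of Heat or curl) decodes it for inspection before you paste the URL into curl:

```shell
# Rewrite each %HH escape as \xHH and let printf '%b' expand it.
# Bash-specific: uses ${var//pattern/replacement} substitution.
urldecode() { printf '%b' "${1//%/\\x}"; }

urldecode 'arn%3Aopenstack%3Aheat%3A%3Astacks%2FBasic'
# prints: arn:openstack:heat::stacks/Basic
```

Decoded, the ARN above reads as a path of tenant, stack name, stack ID and the scaling-policy resource being signalled.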

Note: The template file used for this documentation is available here.

Installing Docker on Ubuntu 14.04

 

Docker is probably the most talked-about infrastructure technology of the past few years. It’s a container technology for Linux that allows a developer to package up an application with all of the parts it needs, making it easier to create, deploy, port and run applications. Containers are in some ways like virtual machines, but rather than creating a whole virtual operating system, Docker allows applications to share the Linux kernel of the system they’re running on.


There are two editions of Docker, the Community Edition (CE) and the Enterprise Edition (EE), and both are supported on multiple platforms.

We’ll be installing the Docker CE [Community Edition] on Ubuntu 14.04 here.

The Docker installation package available in the official Ubuntu 14.04 repository may not be the latest version. To get the latest version, install Docker from the official Docker repository. To do that:

First, add the GPG key for the official Docker repository to the system:

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Add the Docker repository to APT sources:

$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Update the package database with the Docker packages from the newly added repo:

$ sudo apt-get update

Install Docker

$ sudo apt-get install -y docker-ce

Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it’s running:

$ sudo service docker status

Like all Linux services, Docker can be started, stopped and restarted using the following commands.

$ sudo service docker stop
$ sudo service docker start
$ sudo service docker restart

 

 

 

 

Integrating SAML Applications with Okta


 

  • Login to Okta

okta-1

  • Click on Admin tab.

okta-2

  • Select Applications from dashboard.

okta-3

  • Click on Add Applications.

okta-4

  • Click on Create New App.

okta-5

  • Select SAML 2.0 and click Create.

okta-6

  • Give App name and click Next.

okta-7

  • Enter the URL of your Application and add Attribute Statements.

okta-8

  • Click Next.

okta-9

  • Select I’m an Okta customer adding an internal app.
  • Check the This is an internal app that we have created box and click Finish.

okta-10

  • Click on People.

okta-11

  • Click on Assign to People and assign it to the user.

ok-13

  • Now go to My Applications.

okta-12

  • Open the Application.
  • Enter your credentials.
  • The Okta extension will now pop up to save the credentials in Okta.
  • Click Save Password.

okta-9

Single Sign-On Using OKTA for Applications

okta

 

Login to your Okta Account.

Click on Admin.

okta-2

It will redirect to the dashboard.

Click on Applications.

o-2

Click on Add Application.

o-3

Click Create New App.

o-4

Select SWA (Secure Web Authentication) and click Create.

o-5

Fill the Application details.

o-6

Click Finish.

o-7

Now Assign it by clicking Assign to People.

o-8

Select the users you want to assign.

o-9

Click Save and Go Back.

o-10

Now go to My Applications.

o-11

Here you can see a message prompting you to install the plugin.

Click Install Plugin.

o-12

o-13

Install Okta Secure Web Authentication Plug-in.

o-14

Now your Application is added successfully.

o-15

Give the Username and Password.

o-16

OKTA: Connect any user to any application with primary and multi-factor authentication

Executive Summary: Authentication is a crucial part of any application development. Whether you are developing an internal IT app for your employees, building a portal for your partners, or exposing a set of APIs for developers building apps around your resources, the Okta Platform can provide the right support for your projects.

Use Cases

Authentication

Multi-Factor Authentication

API Access Management

Integrate Corporate Applications with Okta

       Okta

  • Securely store user profiles, manage passwords, and organize users into groups with Okta’s Universal Directory.

 

       ACTIVE DIRECTORY & LDAP

  • Use your existing LDAP or Active Directory as your user profile master and password store. Deploy the Okta Agent to securely delegate authentication to AD or LDAP and sync user data to and from Okta.

 

       FEDERATED IDENTITY PROVIDER

  • Connect to any federated identity provider. Okta manages all federation trust relationships, handles diverse SAML implementations, and stores user profile information.

 

       DATABASE

  • Use an existing database as your user profile master.
    Deploy the Okta Agent to securely sync user data to and from Okta.

 

       SOCIAL IDENTITY PROVIDER

  • Sync profile attributes and authenticate users from any social identity provider.

 

 

       Applications created in .NET, Java, JavaScript, PHP, and Python are supported

 

       CREATE AUTHORIZATION POLICIES BASED ON:

 

  • User profile
  • Group membership
  • Network zone
  • Client
  • User or administrator consent
  • Complete standard-compliant support for OAuth 2.0
  • Proven compatibility with 3rd party API management solutions

 

Note: Instantly revoke or update user permissions based on user status and profile.

 

 

Technical Architecture

  • Authentication – (OKTA Integration, OpenID or Any Corporate Directory, Application MFA Token)
  • Hardware Authentication – (Server SSH-Key, Passphrase, MFA Token)
  • Database Authentication – (MySQL Workbench access through SSH-Tunnel)

Below is a high-level architecture presentation on how to integrate enterprise applications with OpenID(IDP) and OKTA cloud along with multi-layer security and high availability(HA) features.

  • End user – OKTA integration.

– Create a new OKTA cloud account

– Add new internal or cloud applications to OKTA Cloud

– Add corporate users & their access control

– Add an MFA token authentication layer

(The Okta Platform gives you the flexibility to deploy Okta’s built-in factors, or integrate with existing tokens (Yubikey). Native factors include SMS, and the Okta Verify app for iOS and Android. Integrations include Google Authenticator, RSA SecurID, Symantec VIP, and Duo Security.)

  • Implement OpenID as IDP layer
  • Implementation & Configuration of HAProxy (Load Balancer)
  • NGINX (Reverse Proxy) Layer with High Availability(HA) option
  • Application/Web Server (SSH-Key + Passphrase + Password-MFA Token) access security.
    • SSH-Tunnel to Application Server created in proxy/jumphost server
    • Private SSH-Key for the proxy /jump host
    • Add the details of the local(proxy server) and remote(application server) ports and add the tunnel details
    • Access the application server through the (Proxy server) using (SSH-Key + Passphrase + Password-MFA Token) combination.
  • The database (MySQL Percona) will be set up on an isolated DB network and can be accessed only via “Standard TCP/IP over SSH-Tunnel” through a whitelisted application/web server (using MySQL Workbench or the phpMyAdmin console)

 

Note: An additional security layer for the database servers is provided by an SSH key and passphrase.

MySQL replication monitoring script


This short article explains how you can use a short script to check whether your MySQL master-master replication is working, and how to get a mail notification when it isn’t.
Needless to say, a mail service must be configured on the server for these notifications to go out.

MySQL login paths are used here for logging in, so that the username and password are not exposed in the script. To learn more, see the MySQL documentation on login paths (mysql_config_editor).

#!/bin/bash
# Checks MySQL replication status and sends the listed user(s) a notification
# when replication goes down.
status=0
MasterHost="DB master server"
SlaveHost="DB slave server"
emails="Test@gmail.com demo@gmail.com" # multiple emails, space separated
Subject="Replication status - Down"

# Grab the relevant lines and use gawk to get the last field (Yes/No)
SQLresponse=$(mysql --login-path=local -e "show slave status \G" | grep -i "Slave_SQL_Running" | gawk '{print $2}')
IOresponse=$(mysql --login-path=local -e "show slave status \G" | grep -i "Slave_IO_Running" | gawk '{print $2}')

if [ "$SQLresponse" = "No" ]; then
    error="Replication on the slave MySQL server ($SlaveHost) has stopped working. Slave_SQL_Running: No"
    status=1
fi

if [ "$IOresponse" = "No" ]; then
    error="Replication on the slave MySQL server ($SlaveHost) has stopped working. Slave_IO_Running: No"
    status=1
fi

# If the replication is not working, notify every address on the list
if [ "$status" = 1 ]; then
    for address in $emails; do
        echo -e "$error" | mail -s "$Subject" "$address"
        echo "Replication down, sent email to $address"
    done
fi

Setup a cron job to run this script every five or ten minutes to get the notification whenever the replication goes down.
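A crontab entry for that could look like the following; the script path and log path are placeholders, so adjust them to wherever you saved the script:

```shell
# m  h  dom mon dow  command
*/5 *  *   *   *     /usr/local/bin/mysql_repl_check.sh >> /var/log/mysql_repl_check.log 2>&1
```

Install it with crontab -e as the user that holds the MySQL login path.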

The wait is over. We are proud to announce the launch of OMegha™ Public Cloud straight from our labs


The IT industry spends much of its time in a bubble of buzzwords, and one such buzzword of recent times is “CLOUD”. You will find people using “CLOUD” in every conversation they strike up. Now, just ask them what a “CLOUD” is and what you do with a “CLOUD”, and the response would be another set of buzzwords: “OPEX”, “CAPEX”, “AWS”, “IaaS”, “PaaS” & “SaaS” and of course “COST SAVING” and “SECURITY”.

Now, this is the chance for all wannabe cloud engineers, cloud technologists/architects, IT DevOps managers and operations team members to understand what a real “CLOUD” looks like, what is good and bad about it, and how to make use of only the good parts when adopting the so-called “CLOUD”.

InfraStack-Labs is conducting a one-day workshop on Dec 10th 2016.

[Image: event agenda]

 

 

Watch this space for registrations.

More info: https://infrastack-labs.com/news-and-events