OpenStack Heat can deploy and configure multiple instances in one command using resources we have in OpenStack. That is called a Heat Stack.

Heat will create instances from images using existing flavors and networks. It can configure LBaaS and provide VIPs for our load-balanced instances. It can also use the metadata service to inject files, scripts or variables after instance deployment. It can even use Ceilometer to create alarms based on instance CPU usage, with associated actions such as spinning up or terminating instances depending on the load.

OpenStack Heat is an application orchestration engine designed for the OpenStack cloud. It is integrated into OpenStack distributions and can be used via the CLI or via the Horizon GUI. Heat uses its own templating format called HOT (Heat Orchestration Template) for defining application topologies.

Autoscaling in Heat is done with the help of three main resource types:

OS::Heat::AutoScalingGroup

An AutoScalingGroup is a resource type that is used to encapsulate the resource that we wish to scale and some properties related to the scaling process.

OS::Heat::ScalingPolicy

A ScalingPolicy is a resource type that is used to define the effect a scaling process will have on the scaled resource.

OS::Ceilometer::Alarm

An Alarm is a resource type that is used to define under which conditions the ScalingPolicy should be triggered.

It helps to have a basic auto-scaling template to understand this better. Please go through the template that we'll be using for this example.
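A minimal sketch of such a template is shown below, assuming a hypothetical image, flavor and network passed in as parameters; the exact basic.yaml used in this example (linked at the end of this post) may differ in details:

heat_template_version: 2016-04-08

parameters:
  image:
    type: string
    description: Image to boot the scaled servers from
  flavor:
    type: string
    default: m1.small
  network:
    type: string
    description: Network to attach the scaled servers to

resources:
  auto_scale_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 3
      resource:
        type: OS::Nova::Server
        properties:
          image: { get_param: image }
          flavor: { get_param: flavor }
          networks:
            - network: { get_param: network }
          # tag the servers so Ceilometer can match their samples to this stack
          metadata: { "metering.stack": { get_param: "OS::stack_id" } }

  server_scaleup_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: auto_scale_group }
      cooldown: 60
      scaling_adjustment: 1

  server_scaledown_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: auto_scale_group }
      cooldown: 60
      scaling_adjustment: -1

  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 50
      comparison_operator: gt
      alarm_actions:
        - { get_attr: [server_scaleup_policy, alarm_url] }
      matching_metadata: { 'metadata.user_metadata.stack': { get_param: "OS::stack_id" } }

  cpu_alarm_low:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 15
      comparison_operator: lt
      alarm_actions:
        - { get_attr: [server_scaledown_policy, alarm_url] }
      matching_metadata: { 'metadata.user_metadata.stack': { get_param: "OS::stack_id" } }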

Explaining the template:

Parameters:  These are tidbits of information—like a specific image ID, or a particular network ID— that are passed to the Heat template by the user. This allows users to create more generic templates that could potentially use different resources.

Resources: Resources are the specific objects that Heat will create and/or modify as part of its operation, and the second of the three major sections in a Heat template.

auto_scale_group: The group of servers that will be constantly monitored and autoscaled when the conditions are met. In this example, it’ll have a minimum of one server and a maximum of three servers.

server_scaleup_policy: The effect a scaling process will have on the scaled resource. Here it’ll add a new server.

server_scaledown_policy: The effect a scaling process will have on the scaled resource. Here it’ll delete a server.

cpu_alarm_high: Alarm condition to trigger the scale-up process. Here it's scaling up if the instance CPU load is greater than 50% for more than a minute.

cpu_alarm_low: Alarm condition to trigger the scale-down process. Here it's scaling down if the instance CPU load is lower than 15% for more than a minute.

Launching the stack on Horizon with this Heat template:

Log in to the OpenStack environment, open the Orchestration section in the left tab and click on Launch Stack as shown in the picture.

[Screenshot: Orchestration section in the Horizon dashboard]

Click on Launch stack.

[Screenshot: Launch Stack dialog]

Select the template file, which is named basic.yaml in our case.

[Screenshot: Selecting the basic.yaml template file]

Launch the Heat stack, which will create a single instance named test-server. The process will take a minute or two, after which you can try accessing the newly created server after assigning a public IP address. As this is a basic template, the VMs in this example do not actually do anything.
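The same stack can also be launched from the command line instead of Horizon, assuming the openstack client with the Heat plugin is available on the controller; the stack name Basic below matches the nested-stack names that appear in the logs later on:

ubuntu@controller:~$ openstack stack create -t basic.yaml Basic
ubuntu@controller:~$ openstack stack list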

To check if the autoscaling is working, we have to stress test the instance that’s created.

SSH into this instance and install the stress tool.

ubuntu@test-server:~$ sudo apt-get install stress

After installing it we can start the stress testing. As we are testing using CPU load, we can try to increase the CPU usage of the instance up to the high 90s, which will force the Heat orchestration part of OpenStack to spin up a new machine.

To do that, type the following command:

ubuntu@test-server:~$ stress -c 1
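If a single worker is not enough to push the load over the threshold on a larger flavor, stress also accepts a worker count and an optional time limit, for example (here the worker count simply matches the number of vCPUs and the load stops automatically after ten minutes):

ubuntu@test-server:~$ stress -c $(nproc) --timeout 600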

By running the top command we can see an ordered list of running processes and the load on the instance.

ubuntu@test-server:~$ top

This will show that the stress process is taking up more than 90% of the CPU, which, by the configuration in the template, should be enough to trigger the scale-up policy.
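Before digging into the Heat logs, the alarm state itself can be checked from the controller with the Telemetry client of this era; something along these lines should show cpu_alarm_high switching from ok to alarm (on releases that use Aodh for alarming, the equivalent is aodh alarm list):

root@controller:~# ceilometer alarm-list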

Checking the Heat engine logs on the controller server should give us more insight into how this was brought about.

When the scale-up process is triggered, the Heat engine logs will look like the following:

root@controller:/var/log/heat# tail -f heat-engine.log
[...]
2017-09-11 13:39:09.101 2558 INFO heat.engine.resources.openstack.heat.scaling_policy [-] Alarm server_scaleup_policy, new state alarm
2017-09-11 13:39:09.162 2558 INFO heat.engine.resources.openstack.heat.scaling_policy [-] server_scaleup_policy Alarm, adjusting Group auto_scale_group with id Basic-auto_scale_group-5yoys2hit67h by 1
2017-09-11 13:39:09.359 2559 INFO heat.engine.service [req-0636aff9-ec21-493c-bddb-dd4307c6cc88 - eafb2b89314842dabca3e4ace895c796] Updating stack Basic-auto_scale_group-5yoys2hit67h
2017-09-11 13:39:09.797 2559 INFO heat.engine.resource [req-0636aff9-ec21-493c-bddb-dd4307c6cc88 - eafb2b89314842dabca3e4ace895c796] Validating Server "knrlwocbkiel"
2017-09-11 13:39:12.364 2559 INFO heat.engine.resource [req-0636aff9-ec21-493c-bddb-dd4307c6cc88 - eafb2b89314842dabca3e4ace895c796] Validating Server "cqr4fi6ivrhl"
2017-09-11 13:39:13.344 2559 INFO heat.engine.resource [req-0636aff9-ec21-493c-bddb-dd4307c6cc88 - eafb2b89314842dabca3e4ace895c796] Validating Server "oxym4qwzzof4"
2017-09-11 13:39:14.668 2559 INFO heat.engine.update [-] Resource knrlwocbkiel for stack Basic-auto_scale_group-5yoys2hit67h updated
2017-09-11 13:39:14.783 2559 INFO heat.engine.resource [-] creating Server "cqr4fi6ivrhl" Stack "Basic-auto_scale_group-5yoys2hit67h" [ce6979a7-63fb-419f-835f-ff480af0a0dc]
2017-09-11 13:39:17.218 2559 INFO heat.engine.update [-] Resource oxym4qwzzof4 for stack Basic-auto_scale_group-5yoys2hit67h updated
2017-09-11 13:39:26.776 2559 INFO heat.engine.stack [-] Stack DELETE IN_PROGRESS (Basic-auto_scale_group-5yoys2hit67h): Stack DELETE started
2017-09-11 13:39:26.936 2559 INFO heat.engine.stack [-] Stack DELETE COMPLETE (Basic-auto_scale_group-5yoys2hit67h): Stack DELETE completed successfully
2017-09-11 13:39:27.064 2559 INFO heat.engine.stack [-] Stack UPDATE COMPLETE (Basic-auto_scale_group-5yoys2hit67h): Stack UPDATE completed successfully
[...]

This scale-up policy is triggered by the alarm that is generated by Ceilometer. To check which metrics are monitored by Ceilometer, one can go to the compute node and check the ceilometer-agent-compute.log.

root@compute:/var/log/ceilometer# tail -f ceilometer-agent-compute.log
2017-09-11 13:18:20.734 1257 INFO ceilometer.agent.manager [-] Polling pollster disk.write.requests.rate in the context of meter_source
2017-09-11 13:18:20.809 1257 INFO ceilometer.agent.manager [-] Polling pollster disk.allocation in the context of meter_source
2017-09-11 13:18:20.813 1257 INFO ceilometer.agent.manager [-] Polling pollster disk.latency in the context of meter_source
2017-09-11 13:18:20.813 1257 INFO ceilometer.agent.manager [-] Polling pollster disk.capacity in the context of meter_source
2017-09-11 13:18:20.817 1257 INFO ceilometer.agent.manager [-] Polling pollster disk.write.requests in the context of meter_source
2017-09-11 13:18:20.820 1257 INFO ceilometer.agent.manager [-] Polling pollster disk.read.requests.rate in the context of meter_source
2017-09-11 13:18:20.821 1257 INFO ceilometer.agent.manager [-] Polling pollster disk.device.write.requests.rate in the context of meter_source
2017-09-11 13:18:20.821 1257 INFO ceilometer.agent.manager [-] Polling pollster disk.device.write.requests in the context of meter_source
2017-09-11 13:18:20.825 1257 INFO ceilometer.agent.manager [-] Polling pollster disk.read.requests in the context of meter_source
2017-09-11 13:18:20.828 1257 INFO ceilometer.agent.manager [-] Polling pollster cpu_util in the context of meter_source
2017-09-11 13:18:20.829 1257 INFO ceilometer.agent.manager [-] Polling pollster memory.resident in the context of meter_source
2017-09-11 13:18:20.836 1257 INFO ceilometer.agent.manager [-] Polling pollster disk.device.read.requests in the context of meter_source

The cpu_util meter, which our alarms are based on, can be seen in the polling output above.
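The cpu_util samples Ceilometer has collected for the stressed instance can also be queried directly from the controller, which is a quick way to cross-check the load that top reports inside the VM (the resource ID below is a placeholder for the instance UUID):

root@controller:~# ceilometer sample-list -m cpu_util -q resource_id=<instance-uuid>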

Here the Heat engine log showed that a new instance was spawned as the scale-up policy was triggered. Let's verify it using the Horizon dashboard.

[Screenshot: Instances list in Horizon showing the newly spawned server]

By the same logic, a low CPU load should trigger the scale-down policy and remove an instance.

We can check this by cancelling the stress load started earlier and keeping the CPU load low.
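Stopping the load is simply a matter of interrupting stress with Ctrl+C in the SSH session, or killing it if it was left running in the background:

ubuntu@test-server:~$ pkill stress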

When the scale-down process is triggered, the Heat engine logs will look like the following:

root@controller:/var/log/heat# tail -f heat-engine.log
2017-09-11 15:18:24.002 2557 INFO heat.engine.resources.openstack.heat.scaling_policy [-] Alarm server_scaledown_policy, new state alarm
2017-09-11 15:18:24.071 2557 INFO heat.engine.resources.openstack.heat.scaling_policy [-] server_scaledown_policy Alarm, adjusting Group auto_scale_group with id Basic-auto_scale_group-5yoys2hit67h by -1
2017-09-11 15:18:24.298 2558 INFO heat.engine.service [req-a5a2d796-6329-45c0-8d4d-a1624132d4d5 - eafb2b89314842dabca3e4ace895c796] Updating stack Basic-auto_scale_group-5yoys2hit67h
2017-09-11 15:18:24.740 2558 INFO heat.engine.resource [req-a5a2d796-6329-45c0-8d4d-a1624132d4d5 - eafb2b89314842dabca3e4ace895c796] Validating Server "k6lh3qvth5vd"
2017-09-11 15:18:26.353 2558 INFO heat.engine.update [-] Resource k6lh3qvth5vd for stack Basic-auto_scale_group-5yoys2hit67h updated
2017-09-11 15:18:26.355 2558 INFO heat.engine.resource [-] deleting Server "nfivm24eye5c" [36a3eda3-0c57-45dd-8cde-e21909c54fa8] Stack "Basic-auto_scale_group-5yoys2hit67h" [ce6979a7-63fb-419f-835f-ff480af0a0dc]
2017-09-11 15:18:30.053 2558 INFO heat.engine.stack [-] Stack DELETE IN_PROGRESS (Basic-auto_scale_group-5yoys2hit67h): Stack DELETE started
2017-09-11 15:18:30.213 2558 INFO heat.engine.stack [-] Stack DELETE COMPLETE (Basic-auto_scale_group-5yoys2hit67h): Stack DELETE completed successfully
2017-09-11 15:18:30.365 2558 INFO heat.engine.stack [-] Stack UPDATE COMPLETE (Basic-auto_scale_group-5yoys2hit67h): Stack UPDATE completed successfully

In the Horizon dashboard, the number of instances will have gone back to one.

[Screenshot: Instances list in Horizon back to a single instance]

Thus, the instances in this particular group are scaled according to their CPU utilization.

MANUAL SCALING USING WEBHOOK URLs:

For manual scaling, we can use the webhook URLs that are created during stack creation, which are available in the Stack Overview section of the Horizon dashboard.

[Screenshot: Stack Overview section showing the webhook URLs]

By invoking these URLs from the controller, we can manually scale up and scale down the number of servers.
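If the template also exposes these signal URLs as stack outputs (a common pattern; the output names below are assumptions, not necessarily those of basic.yaml), they can be fetched from the CLI instead of being copied from Horizon:

ubuntu@controller:~$ openstack stack output show Basic scale_up_url
ubuntu@controller:~$ openstack stack output show Basic scale_down_url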

For example, to scale up by one server, run the following command on the controller server.

ubuntu@controller# curl -XPOST -i "http://controller:8000/v1/signal/arn%3Aopenstack%3Aheat%3A%3Aeafb2b89314842dabca3e4ace895c796%3Astacks%2FBasic%2Fd89a76c8-e74f-4d7c-9779-0a9d70709c05%2Fresources%2Fserver_scaleup_policy?Timestamp=2017-09-12T06%3A14%3A53Z&SignatureMethod=HmacSHA256&AWSAccessKeyId=d8921846030e4aef8080eb932582290f&SignatureVersion=2&Signature=64Vp6CkUiqYRQvf0z9YWMvCtyP%2BHfITj2AJj0GnTCOM%3D"

Note: The template file used for this documentation is available here.
