Elasticsearch Cluster on Ubuntu 14.04

Elasticsearch is a popular open source search server used for real-time distributed search and analysis of data. When used for anything other than development, Elasticsearch should be deployed across multiple servers as a cluster for the best performance, stability, and scalability.

Demonstration:

OMegha Platform.

Image – Ubuntu-14.04

Prerequisites:

You must have at least three Ubuntu 14.04 servers to complete this tutorial, because an Elasticsearch cluster should have a minimum of three master-eligible nodes. If you want to have dedicated master and data nodes, you will need at least three servers for your master nodes plus additional servers for your data nodes.

Install Java 8:

Elasticsearch requires Java, so we will install that now. We will install a recent version of Oracle Java 8 because that is what Elasticsearch recommends. It should, however, work fine with OpenJDK if you decide to go that route.

Complete this step on all of your Elasticsearch servers.

Add the Oracle Java PPA to apt:

$ sudo add-apt-repository -y ppa:webupd8team/java

Update your apt package database:

$ sudo apt-get update

Install the latest stable version of Oracle Java 8 with this command (and accept the license agreement that pops up):

$ sudo apt-get -y install oracle-java8-installer

Be sure to repeat this step on all of your Elasticsearch servers.

Now that Java 8 is installed, let’s install Elasticsearch.

Install Elasticsearch:

Elasticsearch can be downloaded directly from elastic.co in zip, tar.gz, deb, or rpm packages. For Ubuntu, it’s best to use the deb (Debian) package, which will install everything you need to run Elasticsearch.

$ wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.2.deb

Then install it in the usual Ubuntu way with the dpkg command like this:

$ sudo dpkg -i elasticsearch-1.7.2.deb

This results in Elasticsearch being installed in /usr/share/elasticsearch/, with its configuration files placed in /etc/elasticsearch and its init script added in /etc/init.d/elasticsearch.

To make sure Elasticsearch starts and stops automatically with the server, add its init script to the default runlevels:

$ sudo update-rc.d elasticsearch defaults

Be sure to repeat these steps on all of your Elasticsearch servers.

Elasticsearch is now installed but it needs to be configured before you can use it.

Configure the Elasticsearch Cluster

Now it’s time to edit the Elasticsearch configuration. Complete these steps on all of your Elasticsearch servers.

Open the Elasticsearch configuration file for editing:

$ sudo vi /etc/elasticsearch/elasticsearch.yml

Set Cluster Name:

Next, set the name of your cluster, which will allow your Elasticsearch nodes to join and form the cluster. You will want to use a descriptive name that is unique within your network.

Find the line that specifies cluster.name, uncomment it, and replace its value with your desired cluster name. In this tutorial, we will name our cluster “elasticsearch_cluster”:

[Screenshot: cluster.name setting in elasticsearch.yml]
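For reference, once uncommented the line in elasticsearch.yml would look something like the following (the name is just the example used in this tutorial):

cluster.name: elasticsearch_cluster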

Set Node Name:

Next, we will set the name of each node. This should be a descriptive name that is unique within the cluster.

Find the line that specifies node.name, uncomment it, and replace its value with your desired node name. In this tutorial, we will set each node name to the hostname of the server by using the ${HOSTNAME} environment variable:

[Screenshot: node.name setting in elasticsearch.yml]
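For reference, the resulting line would look something like this, letting Elasticsearch pick up the server’s hostname from the environment:

node.name: ${HOSTNAME}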

For Master Node:

For a master node, set node.master to true and node.data to false:

[Screenshot: master node settings in elasticsearch.yml]
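In elasticsearch.yml, a dedicated master node would therefore carry something like:

node.master: true
node.data: false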

For Data Node:

For a data node, set node.master to false and node.data to true:

[Screenshot: data node settings in elasticsearch.yml]
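And a dedicated data node reverses the pair:

node.master: false
node.data: true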

Network Host:

Set the network host to 0.0.0.0:

[Screenshot: network.host setting in elasticsearch.yml]
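That corresponds to the following line; note that 0.0.0.0 makes Elasticsearch listen on all interfaces, so on a production cluster you would normally bind to a private or VPN address instead:

network.host: 0.0.0.0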

Set Discovery Hosts:

Next, you will need to configure an initial list of nodes that will be contacted to discover and form a cluster. This is necessary in a unicast network.

Find the line that specifies discovery.zen.ping.unicast.hosts and uncomment it.

For example, if you have three servers node01, node02, and node03 with respective VPN IP addresses of 10.0.0.1, 10.0.0.2, and 10.0.0.3, you could use this line:

[Screenshot: discovery.zen.ping.unicast.hosts setting in elasticsearch.yml]
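Using the example hostnames and VPN addresses above, the uncommented line would look something like this:

discovery.zen.ping.unicast.hosts: ["10.0.0.1", "10.0.0.2", "10.0.0.3"]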

Your servers are now configured to form a basic Elasticsearch cluster. There are more settings that you will want to update, but we’ll get to those after we verify that the cluster is working.

Save and exit elasticsearch.yml.

Start Elasticsearch:

Now start Elasticsearch:

$ sudo service elasticsearch restart

Then run this command to start Elasticsearch on boot up:

$ sudo update-rc.d elasticsearch defaults 95 10

Be sure to repeat these steps (Configure the Elasticsearch Cluster) on all of your Elasticsearch servers.

Testing:

By now, Elasticsearch should be running on port 9200. You can test it with curl, the command-line URL transfer tool, and a simple GET request like this:

$ curl -X GET 'http://localhost:9200'

You should see the following response:

[Screenshot: example JSON response from Elasticsearch]

If you see a response similar to the one above, Elasticsearch is working properly. If not, make sure that you have followed the installation instructions correctly and that you have allowed some time for Elasticsearch to fully start.

Check Cluster State:

If everything was configured correctly, your Elasticsearch cluster should be up and running. Before moving on, let’s verify that it’s working properly. You can do so by querying Elasticsearch from any of the Elasticsearch nodes.

From any of your Elasticsearch servers, run this command to print the state of the cluster:

$ curl -XGET 'http://localhost:9200/_cluster/state?pretty'

[Screenshot: example cluster state output]

If you see output that is similar to this, your Elasticsearch cluster is running! If any of your nodes are missing, review the configuration for the node(s) in question before moving on.

Next, we’ll go over some configuration settings that you should consider for your Elasticsearch cluster.

Enable Memory Locking:

Elastic recommends avoiding swapping of the Elasticsearch process at all costs, due to its negative effects on performance and stability. One way to avoid excessive swapping is to configure Elasticsearch to lock the memory that it needs.

Complete this step on all of your Elasticsearch servers.

Edit the Elasticsearch configuration:

$ sudo vi /etc/elasticsearch/elasticsearch.yml

Find the line that specifies bootstrap.mlockall and uncomment it:

[Screenshot: bootstrap.mlockall setting in elasticsearch.yml]
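The uncommented setting should read:

bootstrap.mlockall: true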

Save and exit.

Now restart Elasticsearch to put the changes into place:

$ sudo service elasticsearch restart

Cluster Health:

This API can be used to see general info on the cluster and gauge its health:

$ curl -XGET 'localhost:9200/_cluster/health?pretty'

[Screenshot: example cluster health output]

Cluster State:

This API can be used to see a detailed status report on your entire cluster. You can filter results by specifying parameters in the call URL.

$ curl -XGET 'localhost:9200/_cluster/state?pretty'

[Screenshot: example cluster state output]

Conclusion:

Your Elasticsearch cluster should now be running in a healthy state, configured with some basic optimizations.

 

Node.js Installation

Node.js is a cross-platform runtime environment and library for running JavaScript applications, used to create networking and server-side applications.

It is used to develop I/O intensive web applications like video streaming sites, single-page applications, and other web applications.

Prerequisites

To follow this blog, you will need:

  • One Ubuntu 16.04 server set up by following this initial server setup, including a sudo non-root user.

Node.js Installation In Ubuntu 16.04

Step-1 Update the Package List

Before installing Node.js on the Ubuntu system, update the package lists from all available repositories.

sudo apt-get update

Step-2  Install Node.js

Run the command below to install the standard Node.js package.

sudo apt-get install nodejs

Step-3 Install NPM

To work with Node.js, the npm package manager is also required. Install the npm package using the command below.

sudo apt-get install npm

In order for some npm packages to work (those that require compiling code from source, for example), you will need to install the build-essential package:

sudo apt-get install build-essential

Installation Check

After installing Node.js and npm, we check that the installation is correct by typing the following commands:

Node.js Installation Check

nodejs --version

NPM Installation Check

npm --version
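As an optional extra check, you can also run a one-line script with the interpreter (the message here is just an illustrative example):

nodejs -e "console.log('Node.js is working: ' + process.version)"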

Remove Node.js

To remove Node.js from the Ubuntu system completely, use the following commands:

Remove Package without Configuration Files

This command removes Node.js but keeps its configuration files, so they can be reused the next time you install Node.js.

sudo apt-get remove nodejs

Remove Package with Configuration Files

If you do not want to keep the configuration files, then use the following command.

sudo apt-get purge nodejs

Finally, Remove Unused Packages

To remove unused packages that were installed along with Node.js, run the following command:

sudo apt-get autoremove 

Installing Open Source Hosting Control Panel (ZesleCP) on Ubuntu 16.04

Zesle Control Panel

Secure Web Control Panel for your needs…

Zesle is a popular open source control panel that anyone can download and install. It is very simple and can be installed with just one command.

System Requirements:

  • Ubuntu 14/16
  • 1Ghz CPU
  • 512MB RAM
  • 10+GB DISK

Zesle is simple and very user friendly. Using Zesle, you’ll be able to do the following tasks:

  • Add multiple domains without hassle;
  • Add multiple sub domains;
  • Install WordPress easily with one-click app;
  • Install free Let’s Encrypt SSL certificates with ease;
  • Configure basic PHP settings;
  • Manage Email accounts;
  • Access phpMyAdmin.

and much more. Let’s see how to install Zesle in your hosting.

Step 1: It’s super-easy to install Zesle. Run the following command with Root privilege.

$ cd /home && curl -o latest -L http://zeslecp.com/release/latest && sh latest

Step 2: The installation will begin, and partway through it will ask for the admin’s email address. Provide your email address and press Enter.

Step 3: You will see the below screen at the end of the installation.

[Screenshot: ZesleCP installation complete]

Step 4: Once the installation is completed, it will show you the temporary password and the login URL.

Step 5: The login URL will be your IP address followed by the port number (2087 is the default). For example: https://11.11.11.111:2087.

Step 6: Just enter this in your browser and you’ll get the login screen.

[Screenshot: ZesleCP login screen]

Step 7: Use root as your username.

Step 8: Copy and paste the temporary root password provided. Once you enter the correct password, the control panel will open. All the options mentioned above are available on the left side of the UI.

[Screenshot: ZesleCP dashboard]

Step 9: In the Dashboard, you can create your account and install WordPress on your domain using “One Click Apps”.

Step 10: That concludes the installation steps for the free Linux web hosting control panel ZesleCP.

 

Installation of Open Project Management System on Ubuntu 16.04

OpenProject is a web-based management system for location-independent team collaboration, released under the GNU GPL v3 license. It is project management software that provides task management, team collaboration, scrum, and more. OpenProject is written in Ruby on Rails and AngularJS. In this tutorial, I will show you how to install and configure the OpenProject management system on Ubuntu 16.04. The tool can be installed manually or by using packages from the repository. For this guide, we will install OpenProject from the repository.

Prerequisites

  •  Ubuntu 16.04.
  •  Good Internet Connectivity.
  •  Root Privileges.

What we will do

  • Update and upgrade the system.
  • Install the OpenProject management system.
  • Configure the OpenProject system.
  • Testing.

Step 1: Update and Upgrade System

Before installing OpenProject on the Ubuntu system, update all available repositories and upgrade the Ubuntu system.

Run the following commands.

$ sudo apt update
$ sudo apt upgrade -y

Step 2: Install the OpenProject Management System

Download the OpenProject repository key and add it to the system.

$ sudo wget -qO- https://dl.packager.io/srv/opf/openproject-ce/key | sudo apt-key add -

Then download the OpenProject repository definition for Ubuntu 16.04 into the ‘/etc/apt/sources.list.d’ directory.

$ sudo wget -O /etc/apt/sources.list.d/openproject-ce.list \
  https://dl.packager.io/srv/opf/openproject-ce/stable/7/installer/ubuntu/16.04.repo

Now update the package lists and install OpenProject using the apt command as shown below.

$ sudo apt update
$ sudo apt-get install openproject -y

Step 3: Configure the OpenProject System

Run the OpenProject configuration command. A dialog-based configuration wizard will appear.

$ sudo openproject configure

[Screenshot: OpenProject configuration wizard – database selection]

Select ‘Install and configure MySQL server locally’ and click ‘OK’. It will automatically install the MySQL server on the system and create the database for the OpenProject installation.

For the web server configuration, choose ‘Install apache2 server’ and click ‘OK’. It will automatically install the Apache2 web server and configure a virtual host for the OpenProject application.

[Screenshot: OpenProject configuration wizard – web server options]

Now type the domain name for your OpenProject application, and choose ‘OK’.

Next is the SSL configuration. If you have purchased SSL certificates, choose ‘yes’; choose ‘no’ if you don’t have SSL certificates.

[Screenshot: OpenProject configuration wizard – SSL options]

Skip the Subversion support, GitHub support, and SMTP configuration (if not needed).

For the memcached installation, choose ‘Install’ and select ‘OK’ for better OpenProject performance.

[Screenshot: OpenProject configuration wizard – memcached option]

Finally, the installation and configuration of all the packages required for OpenProject should happen automatically.

Step 4: Testing

Check whether the OpenProject service is up and running.

$  sudo service openproject status

Now run the openproject web service using the following command.

$  sudo openproject run web

Now open your web browser and enter your floating IP in the address bar to access the system.

[Screenshot: OpenProject login page]

Now click the ‘Sign in’ button to log in to the admin dashboard, initially using ‘admin’ as the username and ‘admin’ as the password; you can change it later.

Finally, the installation and configuration of OpenProject on Ubuntu 16.04 has been completed successfully.

 

 

 

Apache Virtual Hosts on Ubuntu 14.04

The Apache web server is the most popular way of serving web content on the internet. It accounts for more than half of all active websites on the internet and is extremely powerful and flexible.

Apache breaks its functionality and components into individual units that can be customized and configured independently. The basic unit that describes an individual site or domain is called a Virtual Host.

These designations allow the administrator to use one server to host multiple domains or sites off of a single interface or IP by using a matching mechanism. This is relevant to anyone looking to host more than one site off of a single VPS.

In this document, we will walk you through how to set up Apache virtual hosts on an Ubuntu 14.04 VPS. During this process, you’ll learn how to serve different content to different visitors depending on which domains they are requesting.

Prerequisites

  • Before you begin this tutorial, you should create a non root user.
  • You will also need to have Apache installed in order to work through these steps.

Demonstration:

OMegha platform.

Image – Ubuntu-14.04

Let’s get started.

First, we need to update the package list.

$ sudo apt-get update

[Screenshot: apt-get update output]

Install Apache

$ sudo apt-get install apache2

[Screenshot: apache2 installation output]

For the purposes of this document, my configuration will make a virtual host for infra.com and another for infra1.com

Step 1: Create the Directory Structure

Our document root will be set to individual directories under the /var/www directory. We will create a directory here for both of the virtual hosts we plan on making.

Within each of these directories, we will create a public_html folder that will hold our actual files. This gives us some flexibility in our hosting.

$ sudo mkdir -p /var/www/infra.com/public_html

$ sudo mkdir -p /var/www/infra1.com/public_html

The infra.com and infra1.com portions of these paths represent the domain names that we want to serve from our VPS.

Step 2: Grant Permissions

Changing the Ownership

$ sudo chown -R $USER:$USER /var/www/infra.com/public_html

$ sudo chown -R $USER:$USER /var/www/infra1.com/public_html

[Screenshot: chown commands output]

We should also modify our permissions a little bit to ensure that read access is permitted to the general web directory and all of the files and folders it contains:

$ sudo chmod -R 755 /var/www

Step 3: Create Demo Pages for Each Virtual Host

We have to create an index.html file for each site.

Let’s start with infra.com. We can open up an index.html file in our editor by typing:

$ sudo vi /var/www/infra.com/public_html/index.html

In this file, create a simple HTML document that indicates which site it is connected to. My file looks like this:

<html>

  <head>

    <title>Welcome to infra.com!</title>

  </head>

  <body>

    <h1>Success!  The infra.com virtual host is working!</h1>

  </body>

</html>

Save and close the file when you are finished.

We can copy this file to use as the basis for our second site by typing

cp /var/www/infra.com/public_html/index.html /var/www/infra1.com/public_html/index.html

Then we can open the file and modify the relevant pieces of information

$ sudo vi /var/www/infra1.com/public_html/index.html

<html>

  <head>

    <title>Welcome to infra1.com!</title>

  </head>

  <body>

    <h1>Success!  The infra1.com virtual host is working!</h1>

  </body>

</html>

Save and close the file.

Step 4: Create New Virtual Host Files

Virtual host files are the files that specify the actual configuration of our virtual hosts and dictate how the Apache web server will respond to various domain requests.

Apache comes with a default virtual host file called 000-default.conf, and we can copy it to create the virtual host file for our first domain.

Creating First Virtual Host File

Start by copying the file for the first domain

$ sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/infra.com.conf

Open the new file in your editor with root privileges

$ sudo vi /etc/apache2/sites-available/infra.com.conf

our virtual host file should look like this

<VirtualHost *:80>

    ServerAdmin admin@infra.com

    ServerName infra.com

    ServerAlias www.infra.com

    DocumentRoot /var/www/infra.com/public_html

    ErrorLog ${APACHE_LOG_DIR}/error.log

    CustomLog ${APACHE_LOG_DIR}/access.log combined

</VirtualHost>

Save and close the file.

Copy first Virtual Host and Customize for Second Domain

Now that we have our first virtual host file established, we can create our second one by copying that file and adjusting it as needed.

Start by copying

$ sudo cp /etc/apache2/sites-available/infra.com.conf /etc/apache2/sites-available/infra1.com.conf

Open the new file with root privileges

$ sudo vi /etc/apache2/sites-available/infra1.com.conf

You now need to modify all of the pieces of information to reference your second domain. When you are finished, it may look something like this

<VirtualHost *:80>

    ServerAdmin admin@infra1.com

    ServerName infra1.com

    ServerAlias www.infra1.com

    DocumentRoot /var/www/infra1.com/public_html

    ErrorLog ${APACHE_LOG_DIR}/error.log

    CustomLog ${APACHE_LOG_DIR}/access.log combined

</VirtualHost>

Save and close the file.

Step 5: Enable the New Virtual Host Files

The virtual host files we created need to be enabled.

We can use the a2ensite tool to enable each of our sites

$ sudo a2ensite infra.com.conf

$ sudo a2ensite infra1.com.conf

[Screenshot: a2ensite output]

Restart the Apache server.

$ sudo service apache2 restart
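If Apache refuses to restart, a quick way to catch typos in the new virtual host files is the built-in syntax check (an optional step, not part of the original procedure):

$ sudo apache2ctl configtest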

Step 6: Setup Local Hosts File

$ sudo vi /etc/hosts

The details that you need to add are the public IP address of your VPS server followed by the domain you want to use to reach that VPS.

127.0.0.1 localhost

***.***.***.*** infra.com

***.***.***.*** infra1.com

Save and close the file.

This will direct any requests for infra.com and infra1.com made on our computer to our server at ***.***.***.***.

Step 7: Test Your Results

Now that you have your virtual hosts configured, you can test your setup easily by going to the domains that you configured in your web browser

http://infra.com

[Screenshot: infra.com test page]

You should see a page that looks like this

Likewise, you can visit your second page:

http://infra1.com

[Screenshot: infra1.com test page]

You will see the file you created for your second site

Step 8: Conclusion

If both of these sites work well, you’ve successfully configured two virtual hosts on the same server.

If you need to access this long term, consider purchasing a domain name for each site you need and setting it up to point to your VPS server.

Centralize Logs from Node.js Applications

Prerequisites

  • Installation of Node.js and NPM
  • Installation of Fluentd

Modifying the Config File

Next, configure Fluentd to use the forward input plugin as its data source.

$ sudo vi /etc/td-agent/td-agent.conf

The Fluentd daemon should listen on a TCP port.

A simple configuration looks like the following:

[Screenshot: td-agent.conf forward input configuration]
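As a rough sketch of what that screenshot contained, the forward input section of /etc/td-agent/td-agent.conf typically looks like the following (the port is the plugin’s default, and the stdout match tag is only an example):

<source>
  @type forward
  port 24224
</source>

<match fluentd.test.**>
  @type stdout
</match>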

Restart your agent once these lines are in place.

$ sudo service td-agent restart

fluent-logger-node

Install fluent-logger-node

$ npm install fluent-logger

Now use npm to install your dependencies locally:

$ npm install

Send an event record to Fluentd

index.js

This is the simplest web app.

[Screenshot: index.js source]
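As a rough sketch of the index.js shown in the screenshot, assuming fluent-logger has been installed as above and td-agent is listening on the default forward port 24224 (the tag and the record fields are only illustrative):

var http = require('http');
var logger = require('fluent-logger');

// Point the logger at the local Fluentd (td-agent) forward input
logger.configure('fluentd.test', {
  host: 'localhost',
  port: 24224,
  timeout: 3.0
});

http.createServer(function (req, res) {
  // Emit one event record per HTTP request
  logger.emit('follow', { from: 'userA', to: 'userB' });
  res.end('Logged a record to Fluentd\n');
}).listen(4000);

console.log('Listening on http://localhost:4000');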

Run the app and go to http://localhost:4000 in your browser. This will send the logs to Fluentd.

$ node index.js

[Screenshot: node index.js console output]

The logs should be output to /var/log/td-agent/td-agent.log  

Store Logs into MongoDB

Fluentd does 3 things:

  1. It continuously “tails” the access log file.
  2. It parses the incoming log entries into meaningful fields (such as ip, path, etc.) and buffers them.
  3. It writes the buffered data to MongoDB periodically.

Configuration         

Fluentd’s config file

$ sudo vi /etc/td-agent/td-agent.conf

 The Fluentd configuration file should look like this:

[Screenshot: td-agent.conf with tail source and MongoDB output]
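As a rough sketch of that configuration (the log path, database, and collection names are only examples; the mongo output plugin bundled with td-agent is assumed):

<source>
  @type tail
  path /var/log/apache2/access.log
  pos_file /var/log/td-agent/apache2.access.log.pos
  tag apache.access
  format apache2
</source>

<match apache.access>
  @type mongo
  host localhost
  port 27017
  database apache
  collection access
  flush_interval 10s
</match>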

Restart your agent once these lines are in place.

$ sudo service td-agent restart

Then, access MongoDB and see the stored data.

$ mongo

[Screenshot: stored log records in the mongo shell]

Fluentd + MongoDB makes real-time log collection simple, easy, and robust.

Installation of MongoDB on Ubuntu 16.04

MongoDB is a free and open-source NoSQL document database used commonly in modern web applications.

MongoDB is one of several database types to arise in the mid-2000s under the NoSQL banner. Instead of using tables and rows as in relational databases, MongoDB is built on an architecture of collections and documents. Documents comprise sets of key-value pairs and are the basic unit of data in MongoDB. Collections contain sets of documents and function as the equivalent of relational database tables.

Like other NoSQL databases, MongoDB supports dynamic schema design, allowing the documents in a collection to have different fields and structures. The database uses a document storage and data interchange format called BSON, which provides a binary representation of JSON-like documents. Automatic sharding enables data in a collection to be distributed across multiple systems for horizontal scalability as data volumes increase.

This blog will help you set up MongoDB on your server for a production application environment.

Prerequisites

To follow this blog, you will need:

  • One Ubuntu 16.04 server set up by following this initial server setup, including a sudo non-root user.

Adding the MongoDB Repository

MongoDB is already included in the Ubuntu package repositories, but the official MongoDB repository provides the most up-to-date version and is the recommended way of installing the software. In this step, we will add this official repository to our server.

Ubuntu ensures the authenticity of software packages by verifying that they are signed with GPG keys, so we first have to import the key for the official MongoDB repository.

$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927

After successfully importing the key, you will see:

gpg: Total number processed: 1
gpg:        imported: 1    (RSA:  1)

Next, we have to add the MongoDB repository details so apt will know where to download the packages from.

Issue the following command to create a list file for MongoDB.

$ echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.2.list

After adding the repository details, we need to update the packages list.

$ sudo apt-get update

Installing and Verifying MongoDB

Now we can install the MongoDB package itself.

$ sudo apt-get install -y mongodb-org

This command will install several packages containing the latest stable version of MongoDB, along with helpful management tools for the MongoDB server.

Next, start MongoDB with systemctl.

$ sudo systemctl start mongod

You can also use systemctl to check that the service has started properly.

$ sudo systemctl status mongod

mongodb.service - High-performance, schema-free document-oriented database
Loaded: loaded (/etc/systemd/system/mongodb.service; enabled; vendor preset: enabled)
Main PID: 4093 (mongod)
Tasks: 16 (limit: 512)
Memory: 47.1M
CPU: 1.224s
CGroup: /system.slice/mongodb.service
└─4093 /usr/bin/mongod --quiet --config /etc/mongod.conf
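As an optional extra check (not part of the original output above), you can confirm that the database is accepting connections through the mongo shell:

$ mongo --eval "db.runCommand({ connectionStatus: 1 })"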

The last step is to enable automatically starting MongoDB when the system starts.

$ sudo systemctl enable mongod

The MongoDB server is now configured and running, and you can manage the MongoDB service using the systemctl command (e.g. sudo systemctl stop mongod, sudo systemctl start mongod).

Installing Asterisk on Ubuntu 16.04

Asterisk is a software implementation of a telephone private branch exchange (PBX). It allows telephones interfaced with a variety of hardware technologies to make calls to one another, and to connect to telephony services, such as the public switched telephone network (PSTN) and voice over Internet Protocol (VoIP) services. Its name comes from the asterisk symbol “*”.

Some of the many features of Asterisk include:

  • The Asterisk software includes many features available in commercial and proprietary PBX systems: voice mail, conference calling, interactive voice response, and automatic call distribution.
  • Users can create new functionality by writing dial plan scripts in several of Asterisk’s own extension languages, by adding custom loadable modules written in C, or by implementing Asterisk Gateway Interface (AGI) programs using any programming language capable of communicating via the standard streams system (stdin and stdout) or by network TCP sockets.
  • Asterisk supports several standard voice over IP protocols, including the Session Initiation Protocol (SIP), the Media Gateway Control Protocol (MGCP), and H.323.
  • Asterisk supports most SIP telephones, acting both as registrar and back-to-back user agent.
  • By supporting a variety of traditional and VoIP telephony services, Asterisk allows deployers to build telephone systems, or migrate existing systems to new technologies.

 Install Asterisk from Source

After logging in to your Ubuntu server as a regular user, issue the following command to switch to the root user.

$ sudo su 

Now you are root, but you need to set the password with the following command.

# passwd

The next step is to install the initial dependencies for Asterisk.

# apt-get install build-essential wget libssl-dev libncurses5-dev libnewt-dev libxml2-dev linux-headers-$(uname -r) libsqlite3-dev uuid-dev git subversion

Installing Asterisk

Now that we are root and the dependencies are satisfied, we can move to the /usr/src/ directory and download the latest Asterisk version there.

# cd /usr/src
# wget http://downloads.asterisk.org/pub/telephony/asterisk/asterisk-15-current.tar.gz

Next we unpack it.

# tar zxvf asterisk-15-current.tar.gz

Now we need to enter into the newly unpacked directory,

# cd asterisk-15*

Before we actually compile the Asterisk code, we need ‘pjproject’, as Asterisk 15 introduces support for PJSIP. So we will compile it first:

# git clone git://github.com/asterisk/pjproject pjproject
# cd pjproject
# ./configure --prefix=/usr --enable-shared --disable-sound --disable-resample --disable-video --disable-opencore-amr CFLAGS='-O2 -DNDEBUG'
# make dep
# make && make install
# ldconfig
# ldconfig -p |grep pj

Configuring Asterisk

Now we move on to configuring and compiling the Asterisk code.

# cd ..
# contrib/scripts/get_mp3_source.sh
# contrib/scripts/install_prereq install

This will install MP3 tones and satisfy additional dependencies, which might take some time and will ask you for your country code. The following command will compile and install Asterisk.

# ./configure && make menuselect && make && make install

When that is finished, to avoid writing hundreds of config files yourself, you will normally want to run this command after the install, which creates an initial configuration for you:

# make samples

To have the startup script installed and enabled so that Asterisk starts on every boot, we run make config, followed by ldconfig:

# make config
# ldconfig

Now we can start asterisk for the first time and see if it actually works.

# /etc/init.d/asterisk start

Then we can enter the Asterisk console with the following command:

# asterisk -rvvv

Now we need to take some additional steps to make Asterisk run as the asterisk user. First we need to stop Asterisk.

# systemctl stop asterisk

Then we need to add group and user named asterisk.

# groupadd asterisk
# useradd -d /var/lib/asterisk -g asterisk asterisk

Asterisk needs to be configured to start as the user we just created. We can edit /etc/default/asterisk by hand, but it is more efficient to use the following two sed commands.

# sed -i 's/#AST_USER="asterisk"/AST_USER="asterisk"/g' /etc/default/asterisk
# sed -i 's/#AST_GROUP="asterisk"/AST_GROUP="asterisk"/g' /etc/default/asterisk

To run properly, the asterisk user needs to be given ownership of all essential Asterisk directories.

# chown -R asterisk:asterisk /var/spool/asterisk /var/run/asterisk /etc/asterisk /var/{lib,log,spool}/asterisk /usr/lib/asterisk

The asterisk.conf file also needs to be edited to uncomment the lines for runuser and rungroup:

# sed -i 's/;runuser = asterisk/runuser = asterisk/g' /etc/asterisk/asterisk.conf
# sed -i 's/;rungroup = asterisk/rungroup = asterisk/g' /etc/asterisk/asterisk.conf

When this is done, reboot the server so Asterisk is brought up automatically by systemd, and then type asterisk -rvvv to enter the Asterisk console.

# asterisk -rvvv

 

Impact of Apache Log4j (Log4shell) Vulnerability

What is Apache Log4j?

Log4j is an open-source project managed by the Apache Software Foundation. Apache Log4j is a Java-based logging utility. Log4j Java library’s role is to log information that helps applications run smoothly, determine what’s happening, and help with the debugging process when errors occur.

What is Log4shell and How it Affected?

Log4Shell is a critical vulnerability affecting many versions of the Apache Log4j library. The vulnerability allows unauthenticated remote code execution: attackers can remotely run whatever code they want and gain access to all data on the affected machine. It also allows them to delete or encrypt all files on the affected machine and hold them for a ransom demand. This potentially makes anything that uses a vulnerable version of Log4j to log user-controllable data a target. According to the Indian Computer Emergency Response Team (CERT-In), multiple vulnerabilities have been reported in Apache Log4j that can be exploited by a remote attacker to execute arbitrary code or perform a denial of service attack on the targeted servers. Approximately 3 billion devices run Java, and Log4j can be found in many of those devices and applications, which makes this a high-priority issue. The vulnerability has also been found in products of famous technology vendors like AWS, IBM, Cloudflare, Cisco, iCloud, Minecraft: Java Edition, Steam, and VMware.

The Apache Log4j vulnerability has made global headlines since it was discovered in early December. The issue mainly affects Apache Log4j 2 versions 2.0 to 2.14.1, and NIST published a critical CVE in the National Vulnerability Database on December 10th, 2021, naming it CVE-2021-44228. It was given the maximum available severity score of 10.

Impact of Log4shell

Log files record and track computing events. They are extremely valuable in computing because they provide a way for system admins to track the operation of a system in order to spot problems and make corrections. Log files are important data points for security and surveillance, providing a full history of events over time. Beyond operating systems, log files are found in applications, web browsers, and hardware. With proper log file tracking, applications can either avoid errors or quickly rectify them, and smart tracking reduces downtime and minimizes the risk of lost data. Log data is typically sent to a secure collection point before further processing by system admins. From this it is easy to understand how important log files are: they hold all the information regarding an application or system.

This is how the Log4Shell vulnerability becomes a big threat. As mentioned, Log4j is a Java-based logging tool used to collect logs from the system, and because of this vulnerability it becomes easy for an attacker to compromise the system and get at its data.

How to overcome this?

Since Apache Log4j 2 versions 2.0 to 2.14.1 are the versions mostly affected by this vulnerability, the fix is to upgrade Log4j to the patched version 2.16.0 (or later). This removes the risk of remote code execution.

[Screenshot: HTTP headers of a GET request illustrating an attacker using interact.sh to probe for vulnerable servers]

How chatbots improve customer experience

What is a chatbot?

A chatbot system uses AI (Artificial Intelligence) to chat with a user in natural language via messaging apps, Mobile apps, and Telephone.

One of the advantages of chatbots is that, unlike applications, they are not downloaded; it is not necessary to update them, and they do not take up space in the phone’s memory.

Benefits of using chatbots for customer experience

Customers want quick solutions to their problems. Some are used to the traditional way of phone support and therefore have a hard time accepting a chatbot, with the idea that it is a robotic interaction that lacks a human touch. However, more customers are interested in using new technology, especially if it means a quick resolution to their issues.

Chatbots are automated programs that can simulate a human conversation. Chatbots use NLP (Natural Language Processing) to understand human conversations and to provide relevant answers to questions. From the customer’s point of view, they are talking to an actual person, or at least so it seems.

Open and closed chatbots

Depending on the use case, chatbots can be either open or closed. Open chatbots are those which use AI to learn from their interactions with users.

Closed chatbots are those that only execute a script, which may or may not use AI, depending on how messages are evaluated.

A few ways chatbots can improve the customer experience

Reduce wait time

Bots can reduce customers’ wait time and get them where they want to be quicker.

Always on customer service

Chatbots never sleep, so they offer 24/7 customer service support. Best-practice chatbots are trained using historical conversations.

Personalized human interaction

Computerized chatbots can help personalize the customer experience by drawing on past interactions.

Encourage employees

One benefit of AI is that it can free employees to work on more challenging tasks. AI can mimic human behavior, and staff may fear that their jobs are at risk; instead, chatbots are a chance to let staff focus on higher-value activities rather than routine daily tasks.

Cloud Gaming

Cloud gaming, unlike normal gaming, runs the game on on-demand servers. Normally, video games are played on a local system, where the game runs on local devices like computers, smartphones, and video game consoles.

On a local system, the games need to be installed before we can play them, and they are stored locally. Some games are very large, for example Call of Duty: Modern Warfare (231GB), Quantum Break (178GB), GTA 5 (72GB), etc. These games require massive storage space, and some upcoming games are more than 1 TB.

Video games are heavily graphical; to run one, the system needs a GPU. The more processing capability the graphics card has, the higher the video quality and the better the gameplay. Among the best graphics cards available are the Nvidia GeForce RTX 3080 and the AMD Radeon RX 6800 XT; these cards cost between 1.5 lakh and 2 lakh rupees, though there are other options too.

Also, the processor plays a major role: the more powerful the processor, the smoother the gameplay. High-performance processors like the Intel Core i9 or AMD Ryzen 9 are best for gaming.

Why cloud gaming?

Cloud gaming costs less compared to local gaming. For local gaming we need to buy or build the system, which is an expensive exercise, since the parts in a gaming PC are very expensive. For cloud gaming we just need a decent system that can connect to the internet; there is nothing to download apart from the service provider’s client software, and we pay for the game.

Cloud gaming is also the best option for playing on a low-end device, because there is no hardware bottleneck.

How Cloud Gaming Works?

The games are stored and run on the servers and controlled remotely, while the video is streamed to the player. For cloud gaming we need a high-speed internet connection with low latency, and to access the cloud games we need client software.

The main cloud gaming services are:

  • NVIDIA GeForce Now
  • PlayStation Now
  • Google Stadia
  • Shadow

AWS and the Ukraine-Russia conflict

As the world comes to terms with the ongoing conflict between Russia and Ukraine following the declaration of war on the 24th of February, political and military spheres have begun weighing their options, devising strategies and policy to take on the bear head on, while the rest of the world looks on in horror and confusion as the chaos unfolds. Particularly interesting has been the stance taken by large tech companies: their decision to cease operations and take a stand of non-cooperation was met with global praise. How large food conglomerates banded together to supply aid, and how SpaceX-Starlink was able to set up satellite internet, are highly commendable humanitarian efforts.

In the midst of all this chaos it may be a good idea to take a closer look at how AWS is coping with the challenges and playing its part in tightening the chokehold on Russia. According to some estimates, around 40% of the world’s internet traffic is handled by AWS, so it’s highly imperative that we closely examine how they cope with adversity and keep things ticking without missing a beat.

Let’s look at the assets AWS has in Ukraine and Russia. Considering the general hostility of the region in recent decades, AWS does not have any of its data centers in either country. Apart from a few service desks in Ukraine, there isn’t much of a presence or staff count. Since AWS is a company headquartered in the United States of America, which always keeps a safe distance from Russia, AWS never officially ran operations in or catered to the Russian Federation. In the wake of the conflict the US and its allies imposed sanctions, and owing to the general policy many companies halted operations in Russia; some went as far as pulling out entirely.

AWS was the first of the big tech companies to announce their revised policy amidst the conflict: a strong condemnation of the violence and a new stance on the issue. AWS completely stopped new user registrations for Russians. This, while a seemingly strong statement, wouldn’t affect current users on the cloud or their service. Azure and Google Cloud followed suit. In the aftermath of this policy change, not much changed in Russian cyberspace, though there were reports of online money exchanges going down and messaging services facing outages.

On a global level an exponential rise in DDoS attacks is being reported, and Ukraine is not the only target; many Fortune 500 companies rely on Ukraine’s outsourced services, and they are taking a rough beating as well. With war being waged on all fronts, AWS and other cloud service providers are setting up countermeasures to track and neutralize threats such as the infamous HermeticWiper malware that is taking down financial institutions and government organizations.

It’s great to see big tech band together for the greater good and take a stand against bullying, bigotry and violence. The almost Gandhian tactic of non-cooperation may not be enough on its own, but it is nevertheless an extra crack of the whip that hopefully stops the bear for good.

AWS Best Practices For Security

More professional industries are going online every day, which means that the demand for proper cybersecurity is at an all-time high. AWS security services may give near-limitless benefits to all elements of your business or organization. While AWS security can be managed by a third party, some things must be handled by us. 

Following a set of AWS security best practices will help guarantee that every component of your AWS services is as safe as feasible and running as intended.

1. Multi-factor Authentication

Sometimes we can see that many people use the same password for all of their accounts, which might give exploiters an advantage in compromising your account. Even complex passwords can be compromised. So, in order to increase security, Multi-Factor Authentication (MFA) is used.

We can say MFA is essentially a secondary credential that lives on a secondary device. So, when users need to log into their account, they need to use their primary password as well as their secondary device.

A secondary device can be in the form of a physical device as well as software. These devices provide time-based credentials that will be used to log into the account. With this, we can ensure that only the users with regulated equipment can access the network.

2. AWS Identity & Access Management

IAM is a service that can be used to create users, groups, and roles with separate permissions attached for each. Permissions are assigned by the administrator for each and every user or group.

IAM may also help with onboarding new workers because it allows them to be immediately allocated to a role or group depending on their department. The most significant advantage of IAM is that it allows us to rigorously monitor and regulate access to a network or services inside your organization.

3. Data Backups

Data backup is common practice in most businesses, but how and when you save data may make a significant difference in damage mitigation and restoration timeframes. Planned data backups, like password reset schedules, give an extra degree of safety to your organization without interfering with regular operations.

A backup is only as valuable as the data it contains. If a large period of time elapses between a backup and a data loss, any material generated in that time is lost. By establishing frequent backups and defining a routine, you can assure minimum data loss and get back up and running as soon as feasible.

4. Managing Root Access

An AWS service account will have root credentials, which grant the most access to a network. One potential issue with this is that if those credentials are compromised, the entire network is at risk, and malevolent parties might obtain complete control of your network.

IAM can mitigate this risk by creating a top-level user account with the same capabilities as a root user, but with separate login credentials. As a result, if one type of top-level access is compromised, another is readily accessible to address the issue. MFA is another answer to this problem since it raises the difficulty of hacking login credentials tenfold.

5. Keep Policies Updated

Cloud services are always in use and might alter or be upgraded on a regular basis. It is critical for network security to keep your policies up to date with software or service updates. Updates and alterations can create weak points in a network, which if not handled, can lead to network breaches or the loss of sensitive data. Many rules will need to be modified in order to remain compliant with new legislation or AWS requirements. By keeping security policies up to date, you assure regulatory compliance and a greater degree of network security.

Deployment made easier with AWS Beanstalk

Day by day, many revolutionary changes are made in the area of technology. Most companies compete to offer new and relevant services to their customers. They know very well that only by bringing new and revolutionary changes can they survive in the industry, because each and every company faces very tight and tough competition from its competitors, and every company tries to grab the topmost position in the industry.

AWS is the most comprehensive and broadly adopted cloud platform. AWS provides various quality services to its customers, and these services help users deal with their various needs. Some of the main services provided by AWS are Amazon Elastic Compute Cloud (EC2), Amazon S3 (Simple Storage Service), Amazon Virtual Private Cloud (VPC), Amazon CloudFront, and Amazon Relational Database Service (RDS). In all, AWS provides 60+ services to its customers.

AWS Elastic Beanstalk is a service offered by Amazon Web Services for deploying applications which orchestrates various AWS services, including EC2, S3, Simple Notification Service, CloudWatch, Auto Scaling, and Elastic Load Balancing. With Elastic Beanstalk, one can quickly deploy and manage applications in the AWS cloud without needing to learn about the infrastructure that runs them. We can directly upload our application code to Beanstalk and it will automatically host the application for us at a URL, so we can concentrate on the code of our application rather than on the architecture on which it is hosted. Everything else, from capacity provisioning, load balancing, and autoscaling to application health monitoring, is handled by Elastic Beanstalk. Elastic Beanstalk supports applications developed in Go, Java, .NET, Node.js, PHP, Python, and Ruby. When you deploy your application, Elastic Beanstalk builds the selected supported platform version and provisions one or more AWS resources, such as Amazon EC2 instances, to run your application. Elastic Beanstalk is a PaaS offering used for deploying and scaling web applications and services developed with Java, .NET, PHP, and Node.js on familiar servers like Apache, Nginx, Tomcat, and IIS. If you are developing your own application with AWS Elastic Beanstalk, all you have to do is concentrate on writing code; the rest of the tasks, such as launching EC2 instances and Auto Scaling groups, maintaining security, monitoring servers, storage, and networking, and managing virtualization, the operating system, and the database, are taken care of by Elastic Beanstalk. The main advantages of AWS Elastic Beanstalk are that it is highly scalable, fast and simple to begin with, offers quick deployment, supports multi-tenant architecture, is highly flexible, simplifies operations, and is cost-efficient.
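To make the “concentrate on the code” point concrete, here is a minimal sketch of what deploying with the EB CLI typically looks like (the platform, application, and environment names are placeholders, not part of the original post):

$ pip install awsebcli            # install the Elastic Beanstalk CLI
$ cd my-app                       # your application source directory
$ eb init -p python-3.8 my-app    # register the application and platform
$ eb create my-app-env            # provision EC2, load balancer, Auto Scaling
$ eb deploy                       # deploy the current code
$ eb open                         # open the environment URL in a browser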

The following diagram shows an example Elastic Beanstalk architecture for a web server environment and how the components in this environment work together.

The environment is the heart of the application. In the diagram, the environment is shown within the top-level solid line. When you create an environment, Elastic Beanstalk provisions the resources required to run your application. AWS resources created for an environment include one elastic load balancer (ELB in the diagram), an Auto Scaling group, and one or more Amazon Elastic Compute Cloud (Amazon EC2) instances.

Impact of cloud computing on medical technology

Moving to the cloud brings a double benefit. On the business side, cloud computing has reduced operational expenses while allowing customers to receive excellent and personalized care. Doctors can save patient details to cloud storage and retrieve data about patients’ health from anywhere. Around 60% of such devices use cloud computing technology.

Most use cases for cloud computing.

Lower cost

  • The main promise of cloud computing is on-demand availability of computing resources.
  • There are no additional charges added for cloud storage.
  • We only pay for the resources we use.

Access to high power data

  • For healthcare, both structured and unstructured data are a huge asset.
  • It is very useful for looking into patients’ details in depth.

Advantages of cloud computing in the healthcare industry

  • Low Cost
  • Outsourcing information reduces the amount spent on new technology
  • Easier to maintain
  • More secure
  • Companies are hired to watch over information
  • Interoperability
  • Access information from anywhere
  • Can be accessed using different devices
  • Beneficial for small companies

The following are the companies that support healthcare:-

Microsoft, IBM, Sun microsystems

However, in healthcare the number of systems operated manually, generally relying on paper such as medical records to notify and make decisions, is still significantly high. The healthcare industry differs greatly from other industries, and its key differences can be categorized into three segments: firstly, this sector is highly regulated by law, including regulations to safeguard patients; secondly, high-risk errors that occur in healthcare are more costly than in other industries; and finally, this sector consists of numerous units such as hospital administration staff, labs, and patients.

Healthcare and cloud computing

  • Patient’s information would be stored in the cloud
  • Accessed and managed over the internet
  • Since we are on a paperless routine this is a great idea to store information
  • Authorized users
  • Information on one cloud is connected to another cloud.

Considerations with cloud computing in healthcare

  • Since information is stored on the internet, safety measures must be taken
  • Personal health information
  • Secure transmission of PHI over the internet

Difficulties for the adoption of cloud computing in healthcare industry

  1. Security concerns

Patient data is sensitive and must be guarded against external threats.

  2. System downtime

Even if the cloud offers more reliability, some downtime is still possible, so it is important to plan for downtime in advance and prepare measures to minimize its impact.

Installation and configuration of Apache Airflow using Docker

Introduction

Apache Airflow has finally released its production-ready Docker image; earlier, developers used to depend on third-party Docker images to deploy it.

In this blog post, we will look at how we can install Apache Airflow using the official Docker image.

We will be using a Linux operating system. If you are using any other OS, don’t be disappointed; you can still follow along, and we’ll provide reference links.

First of all, let’s look at the requirements that we need in order to have a flawless experience.

Before you begin

Let’s look at the necessary tools required:

  • Docker Community Edition.
  • Docker compose.

We also have to make sure that we assign at least 4 GB of memory to the Docker containers (note: 1 GB per container, and there will be 4 containers in total). The docker-compose version should be v1.29.1 or newer; if we use an older version of docker-compose, we might not be able to use the full potential of Apache Airflow.

Installing Docker CE

Now let’s start by installing Docker. For installing it on Windows or Mac, follow the links attached.

Setting up the repository

We need to update our Linux package list and allow apt to use the repository over HTTPS. For that, use the following commands:

$ sudo apt-get update

$ sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

Now we need to add Docker’s official GPG key:

$ curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

After adding the GPG key we need to set up docker to use stable repositories. 

$ echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Installing Docker Engine

For installing the Docker engine, we need to update the package list once again.

$ sudo apt-get update

Now let us install the latest version of the Docker engine and containerd.

$ sudo apt-get install docker-ce docker-ce-cli containerd.io

In order to check whether Docker has been installed properly, we can run the default hello-world container. If we see a hello-world message in the terminal after running the following command, then everything went OK and we are ready to move on to the next step.

$ sudo docker run hello-world

Installing Docker compose

We need to download the stable version of Docker compose. So, use the following commands to do the same.

$ sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

Now we need to give permission for the binaries to be executable.

$ sudo chmod +x /usr/local/bin/docker-compose

After giving permissions, we need to make sure that docker-compose is working properly.

$ docker-compose --version

If you see a version number, then everything is working properly as it should be, but if you get no version number, then there might be some problem.

You can skip this part if you get a version number using the above commands.

First of all, make sure that python is installed properly on your computer. Then try to install docker-compose using pip.

$ sudo pip install docker-compose

Installing Apache Airflow

Let’s start by fetching the docker-compose file for Apache Airflow. The docker-compose file contains all the things you need to get started. Fetch it using the following command:

$ curl -LfO 'https://airflow.apache.org/docs/apache-airflow/2.2.4/docker-compose.yaml'

Now we need to create three folders for Apache Airflow: dags (the workflows we create go here), logs (Airflow generates logs for all the workflows we run), and plugins (we can create custom plugins that can be used with Airflow). Run the command below to create these folders.

$ mkdir ./dags ./logs ./plugins

As we are using a Linux operating system, we need to tell docker-compose which host user owns the directories we just created:

$ echo -e "AIRFLOW_UID=$(id -u)\nAIRFLOW_GID=0" > .env

Let’s now initialize the database (the docker-compose file we downloaded comes with a simple configuration so that we can get started quickly; you can change the configuration file later as needed).

$ sudo docker-compose up airflow-init

After initializing the database, you should get an exit code of 0; then we are good to go with our next step. If you get anything other than that, there might be a problem with the database. By default, the configuration file is set to use PostgreSQL as the database.

And finally, let’s run Apache airflow.

$ sudo docker-compose up

You can see that all the components are now getting started. Now let’s go to localhost on port 8080 to check whether the web server has started and works as it should.

You should see a login screen; the default username and password are “airflow” and “airflow” respectively. And voilà, you have a working Apache Airflow Docker setup for automating your workflows.

What’s next?

Now we can create and run our DAGs using the UI; a minimal example DAG is sketched below. You can also do a whole lot of other things using Apache Airflow. I have read about some data scientists using Airflow to automate their workflows for fetching and cleaning data, and Apache Airflow is used by many reputed companies to work more efficiently. So, explore the project and make your daily workflow more efficient and easy.
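Here is a minimal sketch of a DAG you could drop into the ./dags folder created earlier (the DAG id, schedule, and command are just examples; the BashOperator import path is the one used by Airflow 2.x):

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# A tiny example DAG: one task that prints a message once a day
with DAG(
    dag_id="hello_airflow",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    say_hello = BashOperator(
        task_id="say_hello",
        bash_command="echo 'Hello from Apache Airflow'",
    )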

If you guys need a detailed post regarding how to configure Apache airflow, please leave a comment below.

BOSS – The OS everyone forgot but needs to remember

On 25th January 2022, the Union Minister for Electronics and IT, Rajeev Chandrasekhar, announced plans to develop an indigenous mobile operating system that can shake up the duopoly of the current market and promote the Aatmanirbhar policy. This also comes at a time when Chinese assertion into the software market is at an all-time high and poses privacy concerns and national security risks.

With the announcement, online forums started hosting discussions on the matter; polarizing as they may be, they do highlight some important points:


Lol …let me guess, they will use the Android core and just call it something else

Even if India is a little bit successful, this poses more a threat to Google than Apple. I see this going in the direction of full-on government spy OS.

We’re going to call it Indian OS or IOS and we think you’re going to love it *keynote ends in an extravagant dance number*


Bharat Operating System Solutions (BOSS GNU/Linux)

This is an Indian Linux distribution derived from Debian, officially released on 11th July 2006.

It came in three variants:

  • BOSS Desktop (for personal use, home, and office),
  • EduBOSS (for schools and education community),
  • BOSS Advanced Server and BOSS MOOL.

Looking at this in totality, it’s a well-made package with great possibilities, yet as of 2019 very few people or institutions use BOSS. After all, Linux-based systems are virtually bulletproof, so how come we don’t see or even hear about it?

Back in 2006, computers were not as ubiquitous as today; you still needed to be a little savvy to be able to use a PC, so when the government rolled out a brand-new OS there was little buzz in the consumer market. Nobody was queueing up or pre-ordering a copy of BOSS.

Even with government endorsement, government offices were reluctant to switch to an entirely new OS, and forced adoption would hinder efficiency. The fact is, though the system was robust, the rollout was what brought it down. The Indian Railways, one of the largest employers in the world, tried to switch to BOSS; it didn’t work due to the sheer scale of the overhaul. A more thoughtful and nuanced change, instead of an overnight overhaul, should have been the way to go; in the case of the railways they gave up before even starting. Overhauls and these big schemes are typically just statements to grab headlines.

One thing that was missed, and should have been the main focus, is using the system in critical sectors like defence. From the information available in the public domain, we can understand that many of the world’s missile systems still run on Windows XP. We boast of developing cruise missiles but prefer to use foreign tech to launch them. Similarly, our radar networks and encrypted communication channels seem to use software that we have limited control over. When it comes to technical supremacy, proprietary tech always gets the edge; just take a look at the top tech companies and the number of patents they own. Therefore there is no question of whether a project like BOSS is necessary or not. BOSS was a great idea but was never allowed to reach its full potential.

Not to be cynical, but given the precedent, another state-funded project would be a ship with many red flags. Private enterprises would have a better shot at shaking things up and making a difference, and the current proposal to create a nurturing ecosystem for development, instead of total bureaucratic control, is a good idea. It’s too early to deem the fate of the proposed mobile OS, but there’s no reason not to be optimistic.

Would be great to know your thoughts, drop them in the comments below.

Quick Start Guide For The Metaverse

The metaverse is a replication of the real world: a virtual environment that is like the real world, but digital, built from a mix of AR (augmented reality) and VR (virtual reality). In this virtual space you use an avatar to access the metaverse. The main purpose of the metaverse is to connect with people; it is like a second life, or entering a different multiverse, and it is the future of digital platforms. You can access remote locations and educational opportunities in an environment mediated by technology in immersive ways. Many companies are working on metaverse projects, for example Facebook, Microsoft, Roblox, Nvidia, etc.

THINGS TO DO IN THE METAVERSE

In the metaverse you can play games, shop, attend meetings, create a 3D avatar, build virtual worlds, invest in virtual land, NFTs or tokens, and more.

Technologies in METAVERSE 

  • Virtual reality (VR) 
  • Augmented reality (AR) 
  • Blockchain 
  • AI 

How to access metaverse

To access the metaverse you need a VR headset, which gives an immersive 3D experience. VR headsets have a head-up display (HUD) that allows a simulated environment to be experienced in a first-person view (FPV) with a 360-degree field of view. We also need a joystick to control it.

To understand what the metaverse might be like, try watching Ready Player One.

This movie is based on the metaverse: in the movie, the protagonist enters a metaverse, plays a game there, and tries to win it. It is a good movie to watch.

Gaming in metaverse 

When it comes to gaming in VR and AR, it is an immersive experience; it feels like we are inside the game. Some VR games:

  • Beat Saber 
  • Cloudland VR Minigolf 
  • Moss 
  • Firewall Zero Hour 

The future of the metaverse is incredible; much of what we do today may move into the metaverse or virtual worlds. The worst that can happen once the metaverse is fully functional is that we lose a lot of interpersonal relationships and blur the lines between the virtual and reality.

Here is a YouTube video (Growing up in a Metaverse) that gives a lot of insight.