A few years ago, few people knew about cloud computing; now the world is moving to the cloud, and it plays a major role in the IT industry.

Cloud computing is a general term for the on-demand delivery of compute, databases, applications, storage, and other IT services over the internet. But how do you decide which cloud service provider is best? Which is cheapest, which is most expensive, and which offers the services you need?

Today we compare Amazon Web Services (AWS) and Google Cloud Platform (GCP). Nowadays, the three most preferred cloud services are Amazon Web Services, Google Cloud Platform, and Microsoft Azure.

Here we will discuss the following services on Google Cloud Platform and Amazon Web Services:

The Major Difference between AWS and Google Cloud Platform


Amazon Web Services is the most preferred cloud platform and the leader in cloud computing services, having pioneered IaaS (Infrastructure as a Service) in 2006. AWS has built a powerful global network to provide virtual hosting for the IT industry.

Its data centers are fiber-linked and arranged across a global network.

Amazon Web Services focuses mainly on security, automation, and programmability. AWS CodeBuild is an extensible, fully managed build service that provides continuous integration and continuous delivery (CI/CD). It scales automatically and grows on demand with your customizations.
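CodeBuild reads its build instructions from a buildspec.yml file in the source repository. The fragment below is a hypothetical minimal buildspec; the Maven command and artifact path are illustrative, not taken from a real project:

```yaml
# Hypothetical minimal buildspec.yml for AWS CodeBuild
version: 0.2
phases:
  install:
    commands:
      - echo Installing dependencies...
  build:
    commands:
      - echo Build started
      - mvn package        # example build command (illustrative)
artifacts:
  files:
    - target/*.jar         # example artifact path (illustrative)
```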

AWS CodeDeploy: CodeDeploy delivers the working package to every instance described by pre-configured parameters, including EC2 instances and on-premises servers.

AWS Compute: Amazon EC2 (Elastic Compute Cloud), containers, AWS Batch, Auto Scaling, AWS Lambda, Amazon VPC.

AWS Storage: Amazon Simple Storage Service (S3), Elastic Block Store, AWS Storage Gateway.

Database: Amazon DynamoDB, Amazon ElastiCache, Amazon Redshift.

Migration: AWS Migration Hub, AWS Database Migration Service, AWS Snowball.

Networking & Content Delivery: Amazon VPC (Virtual Private Cloud), CloudFront, Route 53.

Developer Tools: AWS CodeStar, AWS CodeBuild, AWS CodeDeploy, AWS X-Ray, AWS Tools & SDKs.

Management Tools: Amazon CloudWatch, AWS CloudTrail, AWS Config, AWS Managed Services, AWS Management Console.

Security, Identity & Compliance: AWS Identity and Access Management (IAM), Amazon Cloud Directory, Amazon Inspector, AWS Key Management Service.

Big Data, Machine Learning & Artificial Intelligence products:

Amazon Web Services (AWS) is widely used for building big data systems thanks to its integration with DevOps tools such as Kubernetes and Docker. AWS supports Hadoop platforms such as the Hortonworks and Cloudera distributions in big data environments, and AWS Lambda is a good match for big data analysis tasks. Its artificial intelligence services give developers the ability to add intelligence to applications through API calls to pre-trained services: Amazon Lex, the technology behind Amazon Alexa, provides advanced deep learning functionality for automatic speech recognition (ASR) and natural language understanding to enable building conversational applications.

Google Cloud Platform:

Google Cloud Platform offers a wide variety of services and solutions built on the same software and hardware infrastructure that Google uses for YouTube and Gmail. GCP runs one of the largest and most advanced computer networks, with storage for applications plus Monitoring, Stackdriver Debugger, Stackdriver Logging, and Security Scanner services.

Some management tools in the Google Cloud Platform environment include the following:

Google Compute Engine: Compute Engine allows users to launch virtual machines on the cloud. Its VMs boot quickly, come with persistent storage, and deliver consistent performance. Virtual servers are available in different configurations, including various machine sizes.

Google Deployment Manager: Google Cloud Deployment Manager provisions all the resources needed for your application and is a popular tool for DevOps teams. You can deploy many resources at one time, in parallel, and view them in a hierarchical view in the Google Cloud Console.

GCP Cloud Console: The Cloud Console gives a detailed day-to-day view of the cloud platform: web applications, virtual machines, data analysis, data storage, networking, and developer services. It scales and helps diagnose production issues in applications.

Google Compute Engine instances offer up to 96 vCPUs and 634 GB of RAM, and Google Cloud storage disks support volume sizes up to 64 TB.

Coming to network information, GCP egress is subject to a cap of about 2 Gbits per second (Gbps), which can increase up to 16 Gbps on larger machine types, so GCP provides a very good network in the cloud.

GCP provides security for applications with strong authentication factors. After reports that the NSA had infiltrated the connections between Google's data centers, Google now encrypts stored data as well as the traffic between data centers. Its relational database service also provides data encryption as an option across multiple availability zones.

  • Predictions and Facts:

Analysts predict that Infrastructure as a Service (IaaS) is currently growing at about 24% a year. It is now clear that the market for cloud computing is growing, though its segments grow at different rates.

  • Market Share (AWS vs Google Cloud):

In present market share, the top competitors in cloud computing are AWS, Google Cloud Platform, Microsoft Azure, and others. Comparatively, AWS is growing at a higher rate than the other cloud services.

  • Service Comparison:

Coming to service comparisons between Google Cloud and AWS, here are the corresponding services offered by each.

IaaS in GCP is Google Compute Engine; in AWS it is Amazon Elastic Compute Cloud (EC2).

PaaS in GCP is Google App Engine; in AWS it is AWS Elastic Beanstalk.

Containers in GCP run on Google Kubernetes Engine; in AWS, on Amazon Elastic Container Service.

Finally, serverless functions in GCP are Google Cloud Functions; in AWS, AWS Lambda.

  • Storage Services:

Coming to storage services: for file storage, AWS has Amazon Elastic File System (EFS), while GCP has ZFS/Avere.

For object storage, AWS has Amazon Simple Storage Service (S3), while GCP has Google Cloud Storage.

  • Management Services:

For monitoring AWS services we use Amazon CloudWatch; in GCP we use Stackdriver Monitoring.

For deployment, AWS uses AWS CloudFormation; GCP uses Google Cloud Deployment Manager.

Pricing Comparison:

Now Google Cloud Platform is a clear winner on the cost of services. First, GCP provides $300 of credit on a free-tier account for 12 months. AWS also provides a free tier that covers a limited amount of usage on the machines. For example, a 2-vCPU, 8 GB RAM instance on GCP is priced at about $50 per month, while an AWS instance with the same configuration is priced at about $69 per month.
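Using the example figures above, the gap is easy to quantify. The short sketch below uses the illustrative prices quoted in this article (not live quotes) to compute the monthly and yearly difference:

```python
# Rough cost comparison based on the example figures above
# (2 vCPUs / 8 GB RAM; prices are illustrative, not current quotes).
GCP_MONTHLY = 50   # USD, example GCP price from the article
AWS_MONTHLY = 69   # USD, example AWS price from the article

def yearly_difference(aws=AWS_MONTHLY, gcp=GCP_MONTHLY, months=12):
    """Return how much more AWS costs than GCP over `months` months."""
    return (aws - gcp) * months

print(yearly_difference(months=1))  # 19
print(yearly_difference())          # 228
```

At these example rates the difference is $19 per month, or $228 over a year; actual prices vary by region and instance family.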

Summary: Amazon Web Services offers support through the AWS documentation and AWS Forums, while Google Cloud Platform provides support through the Google Cloud documentation and Cloud forums. For billing and pricing, AWS has its Simple Monthly Calculator and Google Cloud Platform has its Pricing Calculator. Both are very good cloud computing services in the present market.

MapR Installation steps on AWS

Installing MapR on an Amazon Web Services machine in simple steps for a Hadoop environment.

Step 1: Log in with your AWS credentials and then switch to the root user on the machine.

[ec2-user@ip----~]$ sudo su -

Step 2: Stop the iptables service.

[root@ip---- ~]# service iptables stop

Step 3: Disable iptables at startup.

[root@ip----- ~]# chkconfig iptables off

Step 4: Edit the SELinux configuration

[root@ip----~]# vim /etc/selinux/config

Step 5: Replace "enforcing" with "disabled" (save and exit).

SELINUX=disabled

(this is a line inside /etc/selinux/config, not a shell command)

Step 6: Change into the yum repository directory using the command below.

[root@ip----~]# cd /etc/yum.repos.d/

Step 7: Edit the MapR ecosystem repo file.

[root@ip----yum.repos.d]# vi mapr_ecosystem.repo

Put the following lines into the above file (note that yum requires a bracketed section header as the first line; the section name shown here is illustrative):

[MapR_Ecosystem]
name = MapR Ecosystem Components
baseurl = http://package.mapr.com/releases/MEP/MEP-3.0.4/redhat
gpgcheck = 0
enabled = 1
protected = 1

Step 8: Edit the MapR installer repo file.

[root@ip----yum.repos.d]# vi mapr_installer.repo

Step 9: Edit the MapR core repo file.

[root@ip----yum.repos.d]# vi mapr_core.repo

Put the following lines into the above file (again with an illustrative bracketed section header first):

[MapR_Core]
name = MapR Core Components
baseurl = http://archive.mapr.com/releases/v5.0.0/redhat/
gpgcheck = 1
enabled = 1
protected = 1

Step 10: List the yum repositories.

[root@ip----- yum.repos.d]# yum repolist

(here you will see all the configured repositories)
Step 11: Search for MapR package files.

[root@ip------ yum.repos.d]# yum list all | grep mapr

(this displays all packages related to mapr)

Step 12: Import the MapR GPG key (pass the key URL to rpm --import).

[root@ip----- yum.repos.d]# rpm --import


Step 13: Install the MapR CLDB, fileserver, webserver, resource manager, and node manager.

[root@ip------ yum.repos.d]# yum install mapr-cldb mapr-fileserver mapr-webserver mapr-resourcemanager mapr-nodemanager

Step 14: Install mapr Zookeeper

[root@ip------ yum.repos.d]# yum install mapr-zookeeper

Step 15: List the installed MapR roles.

[root@ip----- yum.repos.d]# ls -l /opt/mapr/roles/

Step 16: Search the installed rpm packages for MapR using grep.

[root@ip------ yum.repos.d]# rpm -qa | grep mapr

(displays installed packages related to mapr)

Step 17: Add a group for the mapr user.

[root@ip------ yum.repos.d]# groupadd -g 5000 mapr

Step 18: Add a user to the mapr group.

[root@ip------ yum.repos.d]# useradd -g 5000 -u 5000 mapr

Step 19: Set a password for the mapr user.

[root@ip------ yum.repos.d]# passwd mapr

(here you will set the password for the mapr user; you can choose any password)

Step 20: Verify the mapr user and group IDs.

[root@ip------ yum.repos.d]# id mapr

Step 21: Check the Fully Qualified Domain Name using the command below.

[root@ip------ yum.repos.d]# hostname -f

Step 22: check disk availability

[root@ip------ yum.repos.d]# fdisk -l

(here you can see the available disks on the machine; select the second disk for MapR)

Step 23: Record the second disk in a disk file for the MapR filesystem.

[root@ip----- yum.repos.d]# vi /root/maprdisk.txt

(put the second disk's device name in this file, then save and exit)

Step 24: Run the configuration script, giving the cluster name (-N), the CLDB node (-C), and the ZooKeeper node (-Z).

[root@ip----- yum.repos.d]# /opt/mapr/server/configure.sh -N training -C ip--------.ap-southeast-1.compute.internal -Z ip------.ap-southeast-1.compute.internal:5181

Step 25: Verify the contents of the disk file.

[root@ip------ yum.repos.d]# cat /root/maprdisk.txt

Step 26: Download the EPEL rpm file.

[root@ip------ ~]# wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

Step 27: Install Extra Packages for Enterprise Linux (EPEL).

[root@ip------ ~]# rpm -Uvh epel-release-6*.rpm

Step 28: Start Zookeeper services

[root@ip------ ~]# service mapr-zookeeper start

Step 29: Start the warden service.

[root@ip-1----- ~]# service mapr-warden start

Step 30: Check the CLDB master with the maprcli command.

[root@ip----- ~]# maprcli node cldbmaster

Now open your machine's IP address in a web browser to reach the MapR Control System (MCS), as shown below.
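The steps above can be condensed into one script. The sketch below is a dry run: it echoes each command instead of executing it, since the real commands need root on a RHEL/CentOS-style EC2 instance with the MapR repos configured. Treat it as a checklist, not a drop-in installer.

```shell
#!/bin/sh
# Dry-run summary of the MapR installation steps above.
# run() echoes each command instead of executing it.
run() { echo "+ $*"; }

run service iptables stop           # Step 2: stop the firewall
run chkconfig iptables off          # Step 3: keep it off at startup
run sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config  # Steps 4-5
run yum repolist                    # Step 10: list repos
run yum install -y mapr-cldb mapr-fileserver mapr-webserver \
    mapr-resourcemanager mapr-nodemanager   # Step 13
run yum install -y mapr-zookeeper   # Step 14
run groupadd -g 5000 mapr           # Step 17
run useradd -g 5000 -u 5000 mapr    # Step 18
run service mapr-zookeeper start    # Step 28
run service mapr-warden start       # Step 29
run maprcli node cldbmaster         # Step 30: verify CLDB master
```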

How to generate PPK from PEM and open AWS console

Nowadays many technical people struggle with generating a PPK file from a PEM file, though it is easy once you understand the steps.

Log in to your AWS account with your credentials and click on the instance (Step 7: Review Instance Launch); a window then appears like the image below.

Then choose whether to use an existing key pair or create a new one.

First, download the PEM file from your AWS account, whether you create a new key pair or use an existing one.

Here, choose an existing key pair, give the key pair a name, and acknowledge it.

After that, launch the instance machine as per your requirements.

Download PuTTY Key Generator (PuTTYgen) from the official PuTTY website, then load the PEM file as in the snapshot below.

First load the PEM file, then click on the Generate button.

Note: during generation, supply some randomness by moving the mouse over the blank area; otherwise it will not generate the PPK file.

Then save the generated PPK file with Save private key (Save public key exports only the public half).

After generating the PPK file, continue with PuTTY.

Note: PuTTY Key Generator is only used to generate and convert key files.

Open PuTTY, then give the IP address and port number as per the machine details.

Here give the IPv4 address or the complete hostname (check with the “hostname” command on the Linux machine). Don’t give the IPv6 address.
Next, in the Category pane, click the SSH option -> Auth -> Browse, and select the PPK file for authentication, as in the PuTTY snapshot below.

Here SSH means Secure Shell, the key-based authentication system for network services.

After selecting the SSH option, go to the Auth option; there you get a Browse option, so simply browse to the PPK file, then click the Open button.

Finally, in the command prompt (terminal) console that opens, give the username; when the Yes or No security prompt appears, click Yes.

Launch AWS Instance

Here is how to launch a free-tier Amazon Web Services instance in simple steps for beginners, how to connect to the machine, and how to generate the PEM file.

Step 1: Log in to the AWS account: click on AWS Management Console and give your credentials.

Step 2: Click on Launch Virtual Machine with EC2 (Amazon Elastic Compute Cloud).

Step 3: In the Choose AMI step, stay within the free tier: click on Amazon Linux 2 AMI (HVM), SSD Volume Type, then click the Select button.

Step 4: Choose the instance type; here we select General purpose, t2.micro (1 GB memory and 1 vCPU available), and then directly select the Review and Launch button.

Note: If you don’t need to configure the instance further, go directly to Review and Launch from the dashboard.

Step 5: Click on Configure Instance if you need one or more instances, then click Next: Add Storage.

Step 6: Click on Add Storage. It acts like the hard disk of the computer, so choose the storage size for the machine.

Step 7: Click on Add Tags, or skip it if you have nothing to configure.

Step 8: Next go to Configure Security Group to secure the machine. It provides strong security; choose rule types such as SSH or others.

Step 9: Click on the Review and Launch button; this directly launches the AWS free-tier machine for beginners.

Step 10: Select an existing key pair or create a new key pair, then give a specific name for that PEM file.

Step 11: Start the AWS Instance.

Step 12: After successfully launching the machine, check its status and click on the Instance ID.

Step 13: After completing the above steps, start the AWS instance and connect with the PEM file. On Windows you must first convert the PEM file into a PPK file with the PuTTY generator, then connect with PuTTY; it is simple to use.

AWS CloudFormation

What is CloudFormation?

It is a way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly way for developers and administrators.

The CloudFormation service allows cloud administrators to automate the process of managing resources such as storage and servers. It enables developers to copy and manipulate infrastructure as code.

Without it, the administrator runs several steps for an application:

The first step: configure a web server, creating EC2 instances and deploying the application.

The second step: configure a database server, creating the database.

The third step: configure network settings, such as creating the VPC, routing mechanisms, etc.

Finally, configure security, creating IAM users and handling security-related issues.

How to use CloudFormation to create a set of AWS resources:

CloudFormation automates this with templates and stack creation. It sits in the Management Tools category in the AWS Management Console.

There are two ways to work with CloudFormation:

1. Create New Stack and Create New StackSet

First, click on Create New Stack, or click on Create New StackSet.

What are a Stack and a StackSet?

A stack is a collection of AWS resources that can be managed as a single unit. A StackSet creates stacks across multiple AWS accounts and regions.

A stack can contain all the AWS resources, such as web servers, DB configurations, and network security, required for a web application on AWS EC2.

2. Design a template:

AWS CloudFormation Designer is a graphical tool for creating and viewing CloudFormation templates.

What is a template?

An AWS CloudFormation template is a JSON (JavaScript Object Notation) or YAML (YAML Ain’t Markup Language) formatted text file.
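As a concrete and deliberately minimal illustration, a template might look like the following YAML. The logical resource name and the AMI ID are placeholders, not values from this article:

```yaml
# Minimal illustrative CloudFormation template (YAML form)
AWSTemplateFormatVersion: "2010-09-09"
Description: Launch a single EC2 instance
Resources:
  MyInstance:                      # logical name, chosen by you
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      ImageId: ami-12345678        # placeholder AMI ID
```

Creating a stack from this template would provision one t2.micro instance; deleting the stack would remove it again.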

Some features of a CloudFormation stack:

1. A CloudFormation stack is created from a template.

2. Deleting a stack deletes all the associated resources (unless a resource is explicitly set to be retained).

3. It helps to manage all resources in a single place as one unit.

Conclusion: AWS CloudFormation lets developers and administrators create and manage collections of AWS resources on the cloud platform.

What to know to be an AWS Certified Cloud Practitioner?

Want to be a certified AWS Cloud practitioner?

This is the list of concepts every AWS CCP aspirant should know:

  • Firstly, should be able to define the AWS Cloud and the AWS global infrastructure.
  • Similarly, identify the key AWS services and their common use cases.
  • Should be able to describe the cloud architectural principles of AWS.
  • Describe the compliance aspects of the AWS platform, its security, and the shared responsibility model.
  • Describe account management, billing, and pricing models.
  • Find sources of technical assistance and documentation.
  • Define the cloud value proposition of AWS.
  • Describe the basic characteristics of AWS Cloud operation and deployment.
    New to the cloud?
    Check out this introduction to the AWS cloud to get a basic understanding.

Simplified introduction to AWS

The main aim of this article is to provide a basic understanding of Amazon Web Services terminology, its global infrastructure, and the core resources of AWS.

What is AWS?

Amazon Web Services is a cloud service providing infrastructure as a service. In addition to infrastructure, AWS delivers networking, virtualization, security, database, storage, and developer tools.

The main advantages of the AWS cloud are “High Availability”, “Fault Tolerance”, “Scalability”, and “Elasticity”.

Scalability & Elasticity

AWS has the capacity to adapt to a growing workload on its hardware and can dynamically add hardware on demand.

For instance, when too many users try to access an application, Amazon scales up the number of instances to handle the load. Similarly, when usage drops, the number of instances is reduced.
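Conceptually, the scale-up/scale-down decision looks like the toy function below. The target load and the numbers are invented for illustration; in practice AWS Auto Scaling is driven by configured policies and CloudWatch alarms, not custom code like this:

```python
import math

# Toy sketch of the scaling idea described above: size the fleet so
# that average load per instance approaches a target utilization.
# All numbers are illustrative, not AWS defaults.
def desired_instances(current, load_per_instance, target_load=70):
    """Return how many instances bring average load near target_load (%)."""
    total_load = current * load_per_instance
    return max(1, math.ceil(total_load / target_load))

print(desired_instances(4, 90))  # heavy load -> 6 (scale up)
print(desired_instances(4, 30))  # light load -> 2 (scale down)
```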

High availability

High scalability and elasticity of AWS result in high availability.

Fault tolerance

If there is a fault in the system due to corruption or any other failure, we will still be able to access our files. For example, if an instance fails, AWS automatically removes that instance and launches a new one.

Amazon global infrastructure

AWS has 60 availability zones in 20 geographic regions across the world, and plans to add 12 more availability zones and 4 more AWS regions, as shown in the image below. Each region has a set of availability zones, and each availability zone holds a physical data center. Edge locations are also part of the AWS infrastructure; they host the content delivery network, Amazon CloudFront, for fast delivery of content to users from any location.

Virtual Private Cloud (VPC)

A VPC is a private virtual network that allows the user to place AWS resources in it and set permissions on them as a layer of security.


Relational Database Service (RDS) is a database service that AWS provides for storing data, running as a web service in the cloud.


Simple Storage Service (S3) is a large, effectively unlimited storage bucket. S3 can store any number of objects while providing high availability.


An EC2 instance is nothing but a server with computing power for running applications on AWS. One common use case is a web hosting server that contains all the files and code to display a web page.

AWS Architecture

EC2 Vs Lambda

What is AWS Lambda?

AWS Lambda is a compute service. It runs code without you managing servers. It executes your code only when needed, and you pay only for the compute time you use.

Where Lambda easily adapts:

1. Run code in parallel.

2. Bring your own code, even native libraries.

3. Create back-ends and event handlers.

What you don’t have to think about with AWS Lambda:

1. Provisioning and managing servers.

2. Scaling and fault tolerance.

3. Operating system updates.

What is Amazon EC2?

Amazon EC2 => Amazon Elastic Compute Cloud:

EC2 provides flexible computing capacity in Amazon Web Services for faster deployment and easier development of applications. You use the service to create virtual machines instead of buying new hardware, and to balance workload across the running virtual machines.

The virtual machines created using AWS EC2 service are commonly known as EC2 instances.

Difference between EC2 and Lambda

1. EC2 offers flexibility: you can choose the instance type and options for a customized operating system, security, and networking.

AWS Lambda makes it easy to execute code in response to events.

2. Amazon EC2 provides an easy-to-use service for deploying and scaling web applications.

Lambda triggers functions in response to events without provisioning instances.

3. With EC2, you are responsible for provisioning capacity and monitoring performance and scalability.

Lambda provides easy scaling and high availability for your code.
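To make the contrast concrete: where EC2 gives you a whole server, a Lambda function is just a handler. The Python sketch below shows the shape Lambda expects, handler(event, context); the event fields here are illustrative, not a fixed AWS schema:

```python
# Minimal sketch of an AWS Lambda handler in Python.
# Lambda invokes handler(event, context) in response to an event;
# the "name" field is an invented example input.
def handler(event, context=None):
    name = event.get("name", "world")
    # A dict like this is a common response shape for API-triggered functions.
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Local simulation of an invocation (no AWS account needed):
print(handler({"name": "AWS"}))
```

On EC2 you would run a long-lived web server to serve this response; with Lambda, AWS runs the handler only when an event arrives.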