How to generate a PPK from a PEM and connect to an AWS instance




Many people find converting a PEM file to a PPK file confusing, so this guide walks through the process step by step.

Log in to your AWS account with your credentials and click Launch Instance (Step 7: Review Instance Launch); a window like the image below appears.

Then choose whether to use an existing key pair or create a new one.

Either way, download the PEM file from your AWS account, whether you create a new key pair or reuse an existing one.

Here, choose an existing key pair, select its name, and acknowledge that you have access to the private key.

After that, launch the instance as required.

Download PuTTY Key Generator (PuTTYgen) from the official PuTTY website, then load the PEM file as shown in the snapshot below.

First load the PEM file, then click the Generate button.

Note: During generation, supply some randomness by moving the mouse over the blank area; otherwise the PPK file will not be generated.

Then save the generated key with Save private key; this produces the PPK file (Save public key exports the public half if you need it).

After generating the PPK file, open PuTTY.

Note: PuTTYgen is only used to generate the key files.

Open PuTTY and enter the IP address and port number for your machine.

Give the IPv4 address or the full hostname (check it with the "hostname" command on the Linux machine). Do not use the IPv6 address.



Next, in the Category pane in PuTTY, go to SSH -> Auth and browse to the PPK file for authentication, as shown in the snapshot below.

Here SSH means Secure Shell, the key-based authentication protocol used by these network services.

After selecting the SSH option, choose Auth; a Browse option appears, so browse to the PPK file and click the Open button.

Finally, a terminal console opens; enter the username, and when the Yes/No security prompt appears, click Yes.
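If you prefer to connect from code rather than through PuTTY, the sketch below shows the same key-based login using the Python paramiko library. The hostname, username, and key path are hypothetical placeholders (ec2-user is the default user on Amazon Linux AMIs). Note that paramiko reads the PEM file directly, so no PPK conversion is needed for this route.

```python
# Minimal SSH connection sketch using paramiko (install with: pip install paramiko).
# The hostname, username, and key path are placeholders -- replace them with your values.
import paramiko

host = "ec2-203-0-113-10.compute-1.amazonaws.com"  # hypothetical public DNS of the instance
user = "ec2-user"                                  # default user on Amazon Linux AMIs
key_path = "my-key-pair.pem"                       # the PEM file downloaded from AWS

client = paramiko.SSHClient()
# Trust the host key on first connect (the equivalent of clicking "Yes" in PuTTY).
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(hostname=host, port=22, username=user, key_filename=key_path)

# Run a simple command to verify the connection.
stdin, stdout, stderr = client.exec_command("hostname")
print(stdout.read().decode().strip())

client.close()
```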


Launch AWS Instance




This section walks beginners through launching a Free Tier Amazon Web Services instance in simple steps, connecting to the machine, and generating a PEM file.

Step 1: Log in to your AWS account: click AWS Management Console and enter your credentials.

Step 2: Click Launch a Virtual Machine with EC2 (Amazon Elastic Compute Cloud).

Step 3: In the Choose AMI step, pick a Free Tier eligible image such as Amazon Linux 2 AMI (HVM), SSD Volume Type, and click Select.




Step 4: In Choose Instance Type, we selected the General purpose t2.micro type (1 GB of memory and 1 vCPU), then clicked Review and Launch.

Note: If you do not need to customize the instance, you can go straight to Review and Launch.

Step 5: Click Configure Instance to choose how many instances you need, then click Next: Add Storage.

Step 6: Click Add Storage. The volume acts like the machine's hard disk, so choose the size you need.

Step 7: Click Add Tags if you want to tag the instance; otherwise this step can be skipped.

Step 8: Next, go to Configure Security Group to control access to the machine. Here you choose which traffic types to allow, such as SSH.

Step 9: Click the Review and Launch button to launch the Free Tier machine.

Step 10: Choose Select an existing key pair or create a new key pair; if you create a new one, give the PEM file a specific name.

Step 11: Start the AWS Instance.

Step 12: After the machine launches successfully, check its status and click the Instance ID.

Step 13: After completing the above steps, start the instance and connect with the PEM file. On Windows, convert the PEM file into a PPK file with PuTTYgen and then connect with PuTTY, as described above. A scripted alternative is sketched below.
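For readers who prefer to script the launch instead of clicking through the console, here is a minimal boto3 sketch of the same flow. The AMI ID, key pair name, region, and security group ID are hypothetical placeholders, not values taken from the walkthrough above.

```python
# Sketch of the console steps above using boto3 (the AWS SDK for Python).
# AMI ID, key name, region, and security group ID are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Step 10 equivalent: create a new key pair and save the PEM material locally.
key = ec2.create_key_pair(KeyName="my-free-tier-key")
with open("my-free-tier-key.pem", "w") as f:
    f.write(key["KeyMaterial"])

# Steps 3-9 equivalent: launch one t2.micro instance from an Amazon Linux 2 AMI.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",            # placeholder Amazon Linux 2 AMI ID
    InstanceType="t2.micro",                    # Free Tier eligible instance type
    KeyName="my-free-tier-key",
    MinCount=1,
    MaxCount=1,
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder security group that allows SSH
)
print("Launched instance:", response["Instances"][0]["InstanceId"])
```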


AWS CloudFormation




What is CloudFormation?

It is a service that lets developers and administrators create and manage a collection of related AWS resources, provisioning and updating them in an orderly, predictable way.

CloudFormation allows cloud administrators to automate the process of managing resources such as storage and servers, and it lets developers copy and modify the infrastructure code.

Without it, an administrator typically runs several manual steps for an application:

First, configure a web server by creating EC2 instances and deploying the application.

Second, configure a database server and create the database.

Third, configure network settings such as the VPC and routing.

Finally, configure security: create IAM users and address other security-related settings.

How to use CloudFormation to create a set of AWS resources

CloudFormation automates all of this through templates and stacks. It is found under the Management Tools category in the AWS Management Console.

There are two ways to work with CloudFormation:

1. Create a new stack or a new StackSet

Click Create New Stack to create a single stack, or Create New StackSet to manage stacks across accounts and regions.

What are a stack and a StackSet?

A stack is a collection of AWS resources that can be managed as a single unit. A StackSet lets you create stacks across AWS accounts and regions from a single template.

A stack can contain all the AWS resources required for a web application, such as web servers (EC2), database configurations, and network security.



2. Design a template

The template designer is a graphical tool for creating and viewing AWS CloudFormation templates.

What is a template?

An AWS CloudFormation template is a JSON (JavaScript Object Notation) or YAML (YAML Ain't Markup Language) formatted text file.

Some features of a CloudFormation stack:

1. A CloudFormation stack is created from a template.

2. Deleting a stack deletes all of its associated resources (unless a resource is explicitly retained).

3. It helps manage all resources in a single place as one unit. A minimal example of creating a stack programmatically is sketched below.
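As a small illustration of points 1 and 3 above, the sketch below creates a stack from an inline YAML template using boto3. The stack name, bucket name, and region are hypothetical placeholders; the bucket name in particular must be globally unique.

```python
# Sketch of creating a CloudFormation stack from a template with boto3.
# Stack name, bucket name, and region are illustrative placeholders.
import boto3

# A minimal YAML template describing a single S3 bucket.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal example stack with one S3 bucket
Resources:
  ExampleBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: example-cloudformation-demo-bucket
"""

cfn = boto3.client("cloudformation", region_name="us-east-1")

# All resources in the template are created together as a single unit (the stack).
cfn.create_stack(StackName="example-demo-stack", TemplateBody=TEMPLATE)

# Wait until the stack, and therefore the bucket, is fully created.
cfn.get_waiter("stack_create_complete").wait(StackName="example-demo-stack")
print("Stack created")
```

Deleting the stack later (for example with delete_stack) removes the bucket along with it, which is the behavior described in point 2.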

Conclusion: AWS CloudFormation lets developers and administrators create and manage collections of AWS resources on the cloud platform.

What to know to be an AWS Certified Cloud Practitioner?

Want to be a certified AWS Cloud practitioner?

This is the list of concepts every AWS CCP aspirant should know:

  • Define the AWS Cloud and the AWS global infrastructure.
  • Identify the key AWS services and their common use cases.
  • Describe the AWS Cloud architectural principles.
  • Describe compliance aspects of the AWS platform, security, and the shared responsibility model.
  • Describe account management, billing, and pricing models.
  • Identify sources of technical assistance and documentation.
  • Define the AWS Cloud value proposition.
  • Describe the basic characteristics of AWS Cloud operation and deployment.

New to the cloud?

Check out the introduction to the AWS Cloud below to get a basic understanding.

Simplified introduction to AWS

The main aim of this article is to provide a basic understanding of Amazon Web Services terminology, its global infrastructure, and the core AWS resources.


What is AWS?

Amazon Web Services is a cloud provider offering infrastructure as a service. In addition to infrastructure, AWS delivers networking, virtualization, security, database, storage, and developer tools.

The main advantages of the AWS cloud are high availability, fault tolerance, scalability, and elasticity.

Scalability & Elasticity

AWS has the capacity to adapt to a growing workload and can dynamically add hardware on demand.

For instance, when too many users try to access an application, AWS scales up the number of instances to handle the load; when the number of users drops, the number of instances is reduced again. A sketch of such a scaling policy follows.
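As an illustration of elasticity, the sketch below uses boto3 to attach a target-tracking scaling policy to an Auto Scaling group that is assumed to already exist; the group name, policy name, and region are hypothetical placeholders.

```python
# Sketch of elasticity in practice: a target-tracking scaling policy via boto3.
# Assumes an Auto Scaling group named "web-asg" already exists; names are placeholders.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Keep average CPU utilization around 60%; instances are added or removed as load changes.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="keep-cpu-near-60-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```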

High availability

High scalability and elasticity of AWS result in high availability.

Fault tolerance

If there is a fault in the system due to corruption or any other failure, we can still access our files. For example, if an instance fails, AWS can automatically replace it by launching a new instance.

Amazon global infrastructure

AWS currently has 60 Availability Zones in 20 geographic regions across the world, with 12 more Availability Zones and 4 more Regions planned, as shown in the image below. Each region contains a set of Availability Zones, and each Availability Zone holds one or more physical data centers. Edge locations are also part of the AWS infrastructure; they host the content delivery network, Amazon CloudFront, for fast delivery of content to users from any location.

Virtual Private Cloud (VPC)

A VPC is a private virtual network that allows the user to place AWS resources inside it and set permissions on them as a layer of security.

RDS

Relational Database Service is a managed database that AWS provides for storing data, running as a web service in the cloud.

S3

Simple Storage Service provides large, virtually unlimited storage organized into buckets. S3 can store any number of objects with high availability. A minimal upload and download sketch follows.
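The sketch below shows basic S3 usage with boto3: create a bucket, upload an object, download it back, and list the bucket's contents. The bucket name and file names are hypothetical placeholders, and the bucket name must be globally unique.

```python
# Minimal S3 usage sketch with boto3; bucket and file names are placeholders.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

bucket = "example-demo-bucket-12345"  # hypothetical name, must be globally unique
s3.create_bucket(Bucket=bucket)

# Upload a local file as an object, then download it back under a new name.
s3.upload_file("report.txt", bucket, "reports/report.txt")
s3.download_file(bucket, "reports/report.txt", "report_copy.txt")

# List the objects stored in the bucket.
for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", []):
    print(obj["Key"], obj["Size"])
```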


EC2

An EC2 instance is essentially a server with computing power for running applications on AWS. One common use case is a web hosting server that contains all the files and code needed to display a web page.

AWS Architecture

EC2 Vs Lambda



What is AWS Lambda?

AWS Lambda is a compute service. It runs code without you having to manage servers, executes your code only when needed, and you pay only for the compute time you use.

Where Lambda is a natural fit:

1. Run code in parallel.

2. Bring your own code, even native libraries.

3. Create back ends and event handlers.

What you don't have to think about with AWS Lambda:

1. Servers

2. Scaling and fault tolerance

3. Operating system updates.
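As a concrete example, a Lambda function in Python is just a handler that AWS invokes with the triggering event; there is no server, scaling, or OS patching to manage. The "name" event field used below is illustrative.

```python
# Minimal AWS Lambda handler in Python; the "name" event field is illustrative.
import json

def lambda_handler(event, context):
    # Lambda calls this function with the triggering event and a runtime context.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```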

What is Amazon EC2?

Amazon EC2 stands for Amazon Elastic Compute Cloud.

EC2 provides flexible computing capacity in Amazon Web Services for faster deployment and easier application development. The service creates virtual machines instead of requiring new hardware and balances the workload across the running virtual machines.

The virtual machines created using the EC2 service are commonly known as EC2 instances.

Differences between EC2 and Lambda




1. EC2 offers flexibility: you can choose the instance type and customize the operating system, security, and network.

AWS Lambda, by contrast, makes it easy to execute code in response to events.

2. Amazon EC2 provides an easy-to-use service for deploying and scaling web applications.

With Lambda, functions are triggered in response to events without provisioning any instances.

3. With EC2, you are responsible for provisioning capacity and monitoring performance and scalability.

Lambda provides scaling and high availability for your code automatically.

Cloud Migration




Cloud migration is the process of moving data, applications, or an organization's entire set of assets into a cloud environment.

Cloud adoption is growing rapidly, and migrating to the cloud has become one of the major businesses in the cloud market.

Some basic steps involved in Cloud Migration:

1. Migration and business planning:

Existing applications, data, architecture, and so on are assessed.

The migration can then be planned with suitable tools available in the market.

Examples: AWS application discovery services, CloudPhysics

2. Server & DB migration:

Smaller applications go through the design and validation process first.

Once the required information is gathered, the migration process can be accelerated.

Examples: AWS Server Migration Service and AWS Database Migration Service

3. Dependencies and migration strategies:

Some applications have dependencies that call for particular migration strategies.

Example: S3 Transfer Acceleration

4. Cloud-based operations:

After migration, normal operations resume in the cloud, where they can be monitored.

Example: Amazon CloudWatch (see the sketch below)
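As a small illustration of post-migration operations, the sketch below uses boto3 to create a CloudWatch alarm on an EC2 instance's CPU utilization. The instance ID, alarm name, and region are hypothetical placeholders.

```python
# Sketch of post-migration monitoring: a CloudWatch CPU alarm created via boto3.
# The instance ID, alarm name, and region are hypothetical placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm when average CPU utilization stays above 80% over a 5-minute period.
cloudwatch.put_metric_alarm(
    AlarmName="migrated-app-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
)
```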

The major cloud migration strategies are:

1. Re-platform

For example: migrate from a WebLogic server to the open-source Apache Tomcat platform running on virtual machines.

In the AWS cloud this commonly uses AWS Elastic Beanstalk.


2. Re-factor

In re-factoring, applications are re-developed. This requires a good architecture on the target platform for performance, scaling, and so on.

3. Re-host

In this approach, the application's infrastructure configuration is re-deployed into a new host environment.

Popular Service Offerings in the Cloud

Popular IaaS Offerings:

Infrastructure as a Service is the provisioning of IT infrastructure resources such as processors, storage, and networks over the internet.

The services below are among the best-known IaaS offerings in the cloud era.

Amazon Simple Storage Service (S3):




Developers looking for highly scalable, fast, and cheap data storage often turn to Amazon's cloud-based storage solution, S3. It offers a simple web service interface to store and access data from anywhere.

Citrix XenServer:

IT administrators looking to host, deploy, and manage virtual machines often choose Citrix XenServer.

Google Compute Engine:

Consumers of Google Compute Engine can use virtual machines for hosting and for supporting infrastructure.

Popular SaaS Offerings:

Software as a Service is a software distribution model in which software is not downloaded but hosted by a third party and provided under a license or subscription.

LinkedIn is a business- and employment-oriented social networking site offering solutions such as premium subscriptions and other tools.

Pixlr:

Pixlr is a cloud-based photo editor that offers a wide range of image-editing features, comparable to a traditional full version of Photoshop, running in a web browser.

Salesforce:

Salesforce's cloud-based CRM software streamlines and automates the tracking and management of customers and analyzes customer data, all delivered from the cloud.

Popular PaaS Offerings:

Platform as a Service provides a platform for customers to develop, run, and manage applications.

Google App Engine:

Google App Engine is a cloud-based platform that provides developers with managed services and APIs for building web and mobile applications.

Microsoft Azure:

Microsoft Azure is a cloud platform that allows developers to build applications using the tools, services, and frameworks provided on it. The developer does not need to manage the underlying infrastructure, tool licenses, or software upgrades.

IBM Bluemix:

IBM Bluemix provides a cloud platform that supports several programming languages as well as integrated DevOps to build, run, deploy, and manage applications. Developers using the platform need not be concerned with installing software or dealing with virtual machines.

Introduction to Cloud Computing

Cloud Computing:

Cloud computing refers to the delivery of computing resources such as servers, storage, and networking over the internet, accessible from anywhere at any time.

Enterprises are leveraging the power of cloud computing  to significantly reduce the cost of ownership associated with their IT infrastructure.

In cloud computing, the hardware and software resources that you need to process your tasks are provided "as a service" over the internet.


It is the responsibility of the vendor, the Cloud Service Provider (CSP), to develop, own and maintain them.

The consumer need not know exactly where the resources are located or how it all works.

Some traditional computing techniques:

I) Cluster Computing

II) Grid Computing

III) Utility Computing

IV) Distributed Computing

Example:

If you wanted to run an electronic mail service yourself, you would need the following hardware and software resources:

I) An email server to send, receive, and store your mail.

II) An email client to access the data and operations on your email server.

With a cloud-based mail service such as Gmail or Outlook, all of this is provided for you.

Cloud service providers have come up with different service models and deployment models through which they deliver cloud services.

Cloud Service Models:

1. Infrastructure as a Service (IaaS)

2. Platform as a Service (PaaS)

3. Software as a Service (SaaS)

Cloud Deployment Models:

1. Public Cloud

2. Private Cloud

3. Hybrid Cloud

4. Community Cloud.


IaaS:

Infrastructure as a Service is the provisioning of IT infrastructure resources such as processors, storage, load balancers, networks, and firewalls.

How IaaS works, in simple terms:

Applications, data, runtime environments, and operating systems are managed by the consumer.

Think of it as a stack of layers: the application layer sits at the top, with the data layer below it, both running on the operating system.

Networking, storage, and processors are managed by the CSP (Cloud Service Provider).

These are the network and hardware layers, such as the storage and processor layers, that hold and move the information.

IaaS providers allocate hardware components, either virtual or physical, from a shared resource pool.