How To Access Blocked Websites

Some colleges and offices block access to websites you may need, and the browser returns errors such as "This site is blocked" or "Page not found".

The methods below can help you open websites that are blocked or restricted while browsing the internet.

 1. Use Proxy Websites:

Some networks restrict access to particular websites. At times you still need a way to reach a blocked website, and in those situations proxy websites come to the rescue.

Proxy websites act as intermediate servers: they fetch the blocked website on your behalf, so the restriction on your network is bypassed.

   2. Use IP Rather Than URL

Blocked websites are sometimes stored as a list of URLs, so using the IP address of the website instead of its URL may work in some cases. To get the IP address of a website, run a ping command in Command Prompt (or in a terminal on Linux):
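For example (example.com is just a placeholder site here), the lookup looks like this in Command Prompt or a Linux terminal:

```shell
# Resolve the site's IP address; the reply lines show it
ping example.com
# Then try browsing to http://<that-IP-address>/ instead of the URL
```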


 3. Change Network Proxy In Google Chrome:

If the above two options are not possible, you can change the proxy settings in a browser such as Google Chrome or Firefox. The steps below use Google Chrome.

1. Go to "Settings", then click on "Show Advanced Settings".

2. Click on "Change Proxy Settings", then "LAN Settings".

3. Check "Use automatic configuration script" or "Use a proxy server" and enter the proxy details.


Note: The same steps also work for Mozilla Firefox; this guide uses Google Chrome.

Summary: Some people in colleges and offices use the Tor browser instead, but Tor is blocked in India due to security concerns, so the steps above are a simpler way in. Do not misuse this access when opening a blocked site.

How to remove Autorun.inf Virus in Windows

Nowadays one of the most common viruses attacking Windows systems spreads through the Autorun.inf file, and it can be removed from the system as follows.

Simple Method:

Step 1: Right-click on Command Prompt, then choose Run as Administrator.

Step 2: Type cd\ and press Enter to move to the root of the drive.

Step 3: Type attrib -s -h -r autorun.inf to strip the system, hidden, and read-only attributes.

If a virus file (Autorun.inf or FAantivirus.vbs) is found in the C drive, then:

Step 4: Type del autorun.inf (and del FAantivirus.vbs if present).
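The steps above can be sketched as one Command Prompt session (run as Administrator; the C drive is the example here):

```shell
cd \
:: Strip the system, hidden, and read-only attributes so the file is visible
attrib -s -h -r autorun.inf
:: Delete the virus files if they were found
del autorun.inf
del FAantivirus.vbs
```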


Using an Autorun remover tool

Step 1: Download and install an Autorun remover tool.

Step 2: Plug the USB drive into your system and scan it.

Windows Simple Commands

To find the IP address in Windows, use the ipconfig command.

To see the IP address, MAC address, and LAN settings, use ipconfig /all.
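At a Command Prompt, the two commands are:

```shell
:: Basic IP information for each network adapter
ipconfig
:: Full details, including the MAC (physical) address and DHCP/DNS settings
ipconfig /all
```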



1. To find the MAC address in Windows, use getmac (or ipconfig /all).

2. To view the Address Resolution Protocol cache, use arp -a.

3. To copy boot-environment files with BCDBOOT, use bcdboot.

4. To find the hostname of a Windows machine, use hostname.

5. To create a directory such as HELLO, use md HELLO.

6. To communicate between two machines or test network communication in Windows, use ping followed by the other machine's IP address.

7. To check running processes in Windows, use query process (or tasklist).

8. To open the Registry Editor, run the simple command regedit.

9. To display or set environment variables in Windows, use the set command.

10. To check system information for Windows users (hostname, OS version, etc.), use the systeminfo command.

11. To find the version of Windows, use ver.

12. For network statistics and network info in Windows, use netstat.
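A few of these, typed at a Command Prompt (the output differs from machine to machine):

```shell
hostname
ver
arp -a
netstat -an
systeminfo
```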

How to Set Up a Cloudera Multi-Node Cluster with Pictures

Cloudera Installation and Multi-Node Cluster Configuration

 1. Open PuTTY:

2. Type your machine's IP address, then click on Open.

3. Then log in with your username and password.

4. Type vi /etc/hosts, then add the remaining hosts.

5. Edit vi /etc/sysconfig/network

6. Type vi /etc/selinux/config

Replace SELINUX=enforcing with SELINUX=disabled.

7. Type setenforce 0 to turn SELinux off for the current session.

8. Type yum install ntp ntpdate ntp-doc to install NTP (Network Time Protocol).

9. After installing NTP, enable the service at boot: chkconfig ntpd on

10. Type vi /etc/ntp.conf to check the NTP configuration.

11. Type ntpq -p to list the configured time servers.

12. Then start the service: service ntpd start

13. Then verify with ntpq.

14. Then generate an RSA key pair on the remaining machines: ssh-keygen -t rsa

15. The key file is saved as id_rsa under /root/.ssh.

17. Run ll to check whether the authorized_keys file is there or not.

19. Type scp authorized_keys root@<hostname>.localdomain:/root/.ssh to copy the keys to each machine.

20. Then type yum install openssl python perl

21. yum clean all

22. yum repolist

23. Then download Cloudera Manager using the below command:
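A sketch of the download, assuming Cloudera's archive URL for the latest CM5 installer (verify the current link on Cloudera's download page):

```shell
wget https://archive.cloudera.com/cm5/installer/latest/cloudera-manager-installer.bin
```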


24. chmod 700 cloudera-manager-installer.bin

25. Then type ./cloudera-manager-installer.bin and click on Next.


26. After that, accept the license.

27. It will automatically install the JDK.

28. It automatically installs the embedded database.

29. The Cloudera Manager server is installed.

30. The installation is successful.

31. Click on "OK".

32. If you get any error, disable the firewall and iptables.

33. To disable the firewall, type: systemctl disable firewalld

34. To disable IPv6, type: vi /etc/sysctl.conf

35. Browse to your machine in a web browser.

36. Log in with username admin and password admin.

37. Accept the user license "Terms and Conditions".

38. Select Cloudera Express (free).

39. Then search for the host machines by their domain names.

40. Select the repository.

41. If you need any proxy settings, select and fill them in; otherwise leave them empty.

42. Click on Continue for the three-machine cluster installation. If there is any issue, try Mozilla Firefox.

43. Click on "Continue" and check the CDH version.

44. When it reaches 100%, click on "Continue".

45. After "Continue", the host validations are checked.

46. Two validations mainly show warnings here; type the below commands on each host, then click Run Again:

echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
sysctl vm.swappiness=10

47. Click on “Finish”

48. It shows a version summary.

49. The HDFS NameNode and the ResourceManager must be on different hosts.

50. Select "Core with Spark", then Continue.

51. Click on "Test Connection" when using the embedded database.

52. The cluster is set up successfully.

How to Recover Deleted Files from a Pen Drive with Pandora, with Pictures

How to recover deleted files from a pen drive or hard disk using the Pandora tool in Windows

Pandora Recovery is one of the best tools for recovering deleted files, with an easy-to-use interface. It recovers files from computers and USB drives on the Windows operating system, in simple steps with pictures.

Step 1: Download the Pandora recovery file from the Pandora official website and keep it in a folder or on your desktop.

Step 2: After downloading the Pandora (exe/binary) file, install it on your local machine. Then open the software; it shows the Pandora Recovery Wizard. Click on "exit wizard". The left pane of the Pandora recovery tool displays all folders and disk information, both your local machine's disks and any external disks.

Step 3: If you want to recover files or folders from an external disk, select it in the left pane. Here a pen drive is selected for recovery.

Step 4: Then click on the removable disk and select the folder from the disk.

Step 5: In those folders, a file showing 0% means it is still recoverable to a folder on your local machine; a file showing 100% means it cannot be recovered.

Step 6: Choose the "Recover to" option to retrieve files from the pen drive or external disk.

Step 7: Then choose a specific path and click on "Recover Now".

Step 8: After the files are recovered, the window below is shown.

Summary: Pandora Recovery is one of the best tools for recovering deleted files from a pen drive, an external hard disk, or the local machine. When you lose content or data on a disk, installing Pandora Recovery makes data recovery very simple for Windows users, and everything here is a step-by-step process with pictures. Many recovery tools are available in the market; Pandora is a good choice because it is very simple to use.

How to Install SQOOP on Ubuntu

Apache Sqoop Installation on Ubuntu

Apache Sqoop is one of the Hadoop ecosystem components. It is mainly used to transfer bulk data between Hadoop (HDFS) and structured data stores such as relational databases, in both directions.

Prerequisites :

Before you can install Sqoop, you need Hadoop 2.x.x, which is compatible with Sqoop 1.x.x.

Step 1: Download the Sqoop 1.x.x tarball from the Apache Sqoop download site.

Step 2: After downloading, extract the Sqoop tarball using the below command:

tar -xzvf sqoop-1.x.x.bin-hadoop-2.x.x-alpha.tar.gz

Step 3: Update the bashrc file with the SQOOP_HOME and PATH variables:

export SQOOP_HOME=/home/slthupili/INSTALL/sqoop-1.x.x.bin-hadoop-2.x.x
export PATH=$PATH:$SQOOP_HOME/bin


Step 4: To check the bashrc changes, open a new terminal and type 'echo $SQOOP_HOME'


Step 5: To integrate Hadoop with a MySQL database using Sqoop, we must place the MySQL JDBC driver JAR file (mysql-connector-java-5.1.38.jar) in the $SQOOP_HOME/lib path.
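Assuming the connector JAR has been downloaded to the current directory, placing it is just a copy:

```shell
# Put the MySQL JDBC driver where Sqoop loads its libraries
cp mysql-connector-java-5.1.38.jar $SQOOP_HOME/lib/
```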

Step 6: To check the version of Sqoop, use the below command:

sqoop version

The above steps are a simple installation of Sqoop on top of Hadoop on Ubuntu.

Check this video for more clarity on the Sqoop installation on Ubuntu.

Sqoop imports data from a relational database management system (RDBMS) such as MySQL into the Hadoop Distributed File System, and exports it back. Sqoop automates most of this process, relying on the database to describe the schema of the data to be imported, and uses MapReduce to do the import and export.

How to Install Hadoop Single Node Cluster

How to Install Hadoop Single Node Cluster on Ubuntu.

Step 1: Update the "System Software Repositories" using sudo apt-get update

The first step updates the package lists on Ubuntu.

Step 2: Install the Java 1.8 JRE using the below command.

Java is a prerequisite for the installation, so first install the JRE and then the JDK.
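On Ubuntu, assuming the default repositories, the JRE install is:

```shell
sudo apt-get install default-jre
```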


Step 3: Install the Java 1.8 JDK using the below command.
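Likewise for the JDK (assuming the default package):

```shell
sudo apt-get install default-jdk
```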


Step 4: Check the Java version on Linux using the below command.
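The version check itself:

```shell
java -version
```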

Step 5: After that, we must install SSH (Secure Shell) using the below command:

SSH provides passwordless communication between the NameNode and the Secondary NameNode, which need to communicate frequently.
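Assuming Ubuntu's openssh-server package provides the SSH daemon:

```shell
sudo apt-get install openssh-server
```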

Step 6: Check the SSH installation using the below command.

After installing SSH, check with the ssh localhost command whether the communication is working or not.
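The check is simply an SSH login to the local machine:

```shell
ssh localhost
```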

Step 7: Download the Hadoop 2.6.0 tarball from the Apache mirrors.

After completing the Hadoop prerequisites, download the Hadoop tarball.
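One possible download command, using the Apache archive (verify the mirror and version on the Apache site):

```shell
wget https://archive.apache.org/dist/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
```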

Step 8: Extract the tarball using the below command.
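Assuming the tarball is in the current directory:

```shell
tar -xzvf hadoop-2.6.0.tar.gz
```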


Step 9: Update the environment variables and PATH for HADOOP_HOME and JAVA_HOME:
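A typical set of lines to append to ~/.bashrc; the paths below are examples and must be adjusted to where the JDK and the extracted Hadoop directory live on your machine:

```shell
# Example paths -- adjust for your own machine
export JAVA_HOME=/usr/lib/jvm/default-java
export HADOOP_HOME=$HOME/hadoop-2.6.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```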



Step 10: Check whether the PATH variables are set; after that, edit the configuration files as part of the Hadoop installation.



Step 11: First open the "core-site.xml" file and add the properties.

The core-site file holds the NameNode information.
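A minimal core-site.xml sketch for a single node (hdfs://localhost:9000 is the conventional default filesystem address):

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```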

Step 12: Open the "hdfs-site.xml" file and add the properties.

The hdfs-site.xml file is related to the replication factor and DataNode information.
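A minimal hdfs-site.xml sketch; replication 1 is the usual choice on a single node:

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```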

Step 13: Open the "yarn-site.xml" file and add the properties to configure the 'Resource Manager' and 'Node Manager' details:
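The commonly used single-node entry, enabling the MapReduce shuffle service:

```xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```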

Step 14: Update the JAVA_HOME path in the '' file.

Step 15: Update the JAVA_HOME path in the '' file.

Step 16: Open 'mapred-site.xml' and set the framework to yarn in that file.
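The standard property that points MapReduce at YARN:

```xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```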

Step 17: Open the slaves file and check whether the hostname is localhost or not.

Step 18: Before starting the NameNode, we must format it using the below command: hadoop namenode -format

Step 19: Start all the Hadoop daemons using the below command:
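With $HADOOP_HOME/sbin on the PATH, the daemons start with:

```shell
start-dfs.sh
start-yarn.sh
```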

Step 20: Check whether the daemons are working or not using the jps command.
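If everything started, jps lists the Hadoop processes alongside itself:

```shell
jps
# Expect entries such as NameNode, DataNode, SecondaryNameNode,
# ResourceManager, NodeManager, and Jps
```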

Step 21: After all that, access the NameNode information in the GUI through a web browser (by default at http://localhost:50070).


How to Install Cassandra on Ubuntu/Linux in Hadoop Eco-System

How to Install Apache Cassandra on Ubuntu in the Hadoop Eco-System

Apache Cassandra is an open-source, distributed NoSQL database management system designed to handle huge amounts of data across clusters of inexpensive servers. It provides high availability and is written in Java.

Why should we use Apache Cassandra

Nowadays Cassandra is a complete and robust NoSQL database, deployed by social networks such as Facebook and Twitter and by e-commerce companies.

1. Cassandra supports a wide set of data structures.

2. It has a scalable architecture.

3. Cassandra is highly reliable.

4. Cassandra offers tunable consistency and atomic batch operations (it does not provide full ACID transactions).

Prerequisites for Apache Cassandra Installation :

Cassandra requires a couple of installations first: Java is needed, and Python 2.7 is also mandatory (for the cqlsh shell).

Below is the step-by-step process for the Cassandra installation on Ubuntu.

Step 1: Download cassandra-2.x.x-bin.tar.gz from the website below.

First download the tarball from the Cassandra official website.
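A sketch of the download; the version in the path is a placeholder, so substitute a real 2.x.x release from the Apache archive:

```shell
wget https://archive.apache.org/dist/cassandra/<version>/apache-cassandra-<version>-bin.tar.gz
```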

Step 2: Extract the downloaded tarball using the below command:

tar -xzvf cassandra-2.x.x-bin.tar.gz

Step 3: After that, update the CASSANDRA_HOME and PATH variables in the bashrc file.
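Lines to append to ~/.bashrc; the install path is an example and must match where the tarball was extracted:

```shell
# Example path -- adjust to your extraction directory
export CASSANDRA_HOME=$HOME/INSTALL/cassandra-2.x.x
export PATH=$PATH:$CASSANDRA_HOME/bin
```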


Step 4: To check whether the bashrc changed or not, open a new terminal and type the 'echo $CASSANDRA_HOME' command.


Step 5: After completing the above settings, start the Cassandra server using the below command:
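From the Cassandra directory (the -f flag keeps the server in the foreground; omit it to run as a background daemon):

```shell
bin/cassandra -f
```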


Step 6: To start the Cassandra Query Language shell, use the below command:
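With the server running, the shell starts as:

```shell
bin/cqlsh
```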


Characteristics of Cassandra:

1. Cassandra is a column-oriented database and fault-tolerant.

2. It is highly scalable, with tunable consistency.

3. Cassandra was created at Facebook, and its data model is based on Google's Bigtable.



This Apache Cassandra installation tutorial gives the simple installation steps, with pictures, and basic knowledge of Cassandra.

How to Install Apache Spark on Ubuntu/Linux

Apache Spark Installation

Spark is a framework and in-memory data processing engine, up to 100 times faster than Hadoop MapReduce for data processing. It offers APIs in Java, Scala, Python, and R. Nowadays it is mostly used to process data in streaming and machine learning workloads.

Prerequisite of Spark Installation:

1. Update the packages on Ubuntu using

sudo apt-get update

After entering your password, it will update the package lists.

2. Now you can install the JDK for the Java installation:

sudo apt-get install default-jdk

The Java version must be greater than 1.6.

Step 1: Download the Spark tarball from the Apache Spark official website.
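A representative download command; the version and package name are examples, so pick the current release from the Spark downloads page:

```shell
wget https://archive.apache.org/dist/spark/spark-2.4.8/spark-2.4.8-bin-hadoop2.7.tgz
```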

Step 2: Move the tarball into your Hadoop directory.

Step 3: After that, extract the downloaded tarball using the below command:

tar -xzvf <spark tarball>

Step 4: After the tarball extraction we get the Spark directory; update the SPARK_HOME and PATH variables in the bashrc file using the below commands:

export SPARK_HOME=/home/slthupili/INSTALL/spark-2.x.x-bin-hadoop2.x
export PATH=$PATH:$SPARK_HOME/bin


Step 5: To check the bashrc changes, open a new terminal and type the 'echo $SPARK_HOME' command.

Step 6: After the successful installation of Spark, check with the Spark shell in a terminal using the below command:
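With $SPARK_HOME/bin on the PATH, the shell launches as:

```shell
spark-shell
```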



Step 7: To check the Spark version and Scala version, use the below commands:
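One way from the terminal; spark-submit prints a banner that includes both the Spark and Scala versions:

```shell
spark-submit --version
```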




Check this video if anything is confusing while installing Apache Spark.


The above steps are a very simple installation of Apache Spark on top of Hadoop in a single-node cluster.

How to Remove Short Cut Virus on Windows

Remove Short Cut Virus on Windows

Simple steps for how to remove the shortcut virus in Windows and how to copy files from an infected pen drive.

#) The USB drive shows shortcuts like the picture below.

Method 1:

Simple step:

In the drive's search box, enter a . (dot); it then shows the hidden files.

Then copy the required files from the pen drive to the Windows machine.

Method 2:

1. Open the Windows menu and open Command Prompt with Run as Administrator.

2. Connect your USB storage device to the computer or PC.

3. In Command Prompt, type the below command (replace F: with your drive letter):

attrib -h -r -s /s /d F:\*.*

This separates out each and every file, like below.

4. After each and every file is separated, copy the required files to your computer, then delete the files containing the virus.
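The command-prompt method can be sketched in one session (F: is an example drive letter, and the .lnk cleanup is an extra suggested step, since the virus leaves shortcut files behind):

```shell
:: Unhide every real file and folder on the drive
attrib -h -r -s /s /d F:\*.*
:: Extra step: remove the shortcut files the virus created
del F:\*.lnk
```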

In the Windows operating system, the shortcut virus is one of the malware types that hides files on, and spreads from, external storage devices like USB drives, external hard drives, memory cards, etc.

This virus mostly affects files stored on a USB drive, but running software from an infected drive can spread it to the entire computer, where a new folder of shortcuts pointing to all of your existing files is created on the computer or USB.