Showing posts with label part3. Show all posts

Sunday, 3 December 2017

Whois Lookup - Gather Information through Whois Footprinting


Hello friends... This is our 100th article, and we are excited about the response from all of you. Before starting this topic, I would like to tell you that all the articles from now onward will be the most important ones in hacking, because this is the point at which real hacking starts. The previous articles might not have seemed very interesting to all of you, but they were important for the "n00bs". A reason why this site will be the best in future - we post everything, all in one place; sooner or later this will become the number one site for studying hacking.

Now, about this article... Basically, everything from now on will relate to hacking, and mainly to IP addresses and the concepts of networks and domains. So, I suggest you read the articles on IP addresses, domains and networking first. This is important because, as you know, a server is hacked through its IP address, and an attacker is also tracked through an unsecured network and IP address.

What is Whois?

Whois, as the name implies, is a protocol granting users access to the massive database of registered owners of internet resources such as autonomous systems, IP blocks and domain names, among others. In other words, it is a query-and-response protocol that lets users find out ‘who is’ the registered owner of a domain by simply typing the exact domain name.

The protocol, in return, delivers the response in a human-readable format. A more detailed specification of the Whois protocol can be found in RFC 3912. Here are a few reasons why people conduct a Whois search:
  • Domain buying and trading
  • Check domain expiration
  • Find out domain owner identity
  • Find out location and address of the owner
  • Marketing purposes
Based on the above usage, the importance of a Whois search is clear. But why is Whois important to Hackers? And how is it important? These are the two questions which will be answered here...

How to perform a Whois Lookup?

To understand the importance of Whois in hacking, we will study an example of whois lookup. And to study the example, you need to know about how to perform whois lookup.

Doing a Whois lookup is very simple and quick. There are only a few easy steps to follow, and the results will be shown within seconds. The procedure is as follows:
  1. Visit https://whois.net
  2. Enter the domain name you want to look up in the search box
  3. Hit the ‘GO’ button
The results will show up within the next few seconds, depending on your internet speed. Other websites can also be used for a Whois lookup. My personal favourite is https://www.whois.com/whois/
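Under the hood, these sites simply speak the plain-text Whois protocol from RFC 3912: open a TCP connection to port 43 of a whois server, send the domain name followed by CRLF, and read the reply until the server closes the connection. Here is a minimal sketch in Python; note that `whois.iana.org` is only a starting point, as the authoritative whois server differs per TLD:

```python
import socket

def build_query(domain: str) -> bytes:
    """A Whois query is just the name being looked up, terminated by CRLF (RFC 3912)."""
    return domain.encode("ascii") + b"\r\n"

def whois_lookup(domain: str, server: str = "whois.iana.org", port: int = 43) -> str:
    """Send the query over TCP port 43 and read the reply until the server closes."""
    with socket.create_connection((server, port), timeout=10) as sock:
        sock.sendall(build_query(domain))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:  # the server closes the connection when the record is done
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

# whois_lookup("gtu.ac.in") returns a plain-text record of the kind shown in this article.
```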

Below is the information obtained by whois lookup of the domain "gtu.ac.in".
Domain Information
Domain: gtu.ac.in
Registrar: ERNET India (R9-AFIN)
Registration Date: 2008-07-15
Expiration Date: 2026-07-15
Updated Date: 2017-01-27
Status: ok

Name Servers:
ns-602.awsdns-11.net
ns-355.awsdns-44.com
ns-1775.awsdns-29.co.uk
ns-1501.awsdns-59.org

Registrant Contact
Name: gujarat technological university
Organization: gujarat technological university
Street: JACPC building l d college of engineering campus
City: ahmedabad
Postal Code: 380015
Country: IN
Phone: +91.9909980005
Email: registrar@gtu.ac.in

Administrative Contact
Name: n n bhuptani
Organization: gujarat technological university
Street: JACPC building l d college of engineering campus
City: ahmedabad
Postal Code: 380015
Country: IN
Phone: +91.9909980005
Email: registrar@gtu.ac.in

Technical Contact
Name: Harshad Borisa
Organization: gujarat technological university
Street: Gujarat Technological University JACPC building L. D. college of engineering campus
City: ahmedabad
State: Gujrat
Postal Code: 380015
Country: IN
Phone: +91.7926301500
Email: rupendra@gtu.edu.in

As you can clearly see, whois lookup provides us with the details such as:
  • Domain expiry date
  • Email address of owner
  • Mobile number of owner
  • Address of owner
  • IP address or IP block
  • And much more...
Based on this information, the importance of Whois is clear. Take note that the registrant’s details may vary based on the Top Level Domain, or TLD. Some TLDs will not show all of the registrant's information, while others will not show any details at all. Also, the owner's information may be concealed if they are subscribed to domain privacy, in which case the domain registrar’s information and contact details are shown instead.

Importance of Whois Lookup

A Whois lookup is useful in many ways, depending on the motive of the person performing it. There are various uses for a Whois lookup, but the two most common ones are listed below...
  • If you are a defender, it can help you track down an attacker - you can perform a Whois lookup on the attacker's IP address and find out the ISP, and the location of the ISP, that provided the IP address to the attacker, then contact the ISP to reveal further details.
  • If you are on the attacking side, it helps you find targets to attack - based on the information available, you can contact the owner and try some social engineering tricks on him/her.
Being able to identify the owner of a domain is one advantage that benefits many users. However, there is also a major disadvantage that comes with it: the lack of privacy for the domain owner, since their identity is made public. Prior to domain registration, users are required to reveal their full name, address, and contact details such as email address and phone numbers. This is in compliance with the stipulations of the Internet Corporation for Assigned Names and Numbers (ICANN), which mandate that registrants’ details be made publicly available through the Whois directories. This provides an entry point for spammers and marketers to grab email addresses and phone numbers for their marketing and spamming activities.

Due to the massive criticism of this lack of privacy, most domain registrars, such as GoDaddy and HostGator, now offer domain privacy, which protects owners by concealing some of their personal information. In this case, the contact information of the registrar is displayed instead of the domain owner's. But this feature is available only at a premium price.

The above article provides complete information about the Whois lookup. If you still don't understand how to use it, comment your queries below. If you still don't understand where to use it, then wait for it.

Remember - Hacking is not performed using a single trick or tool. One needs to combine the power of everything he/she has to perform hacking. And you are learning a small part of it to develop your powers. Learn everything separately, then combine it all at once.

Thursday, 31 August 2017

Email Tracking - Track your email to know if the receiver opened it, clicked on a link and much more..


Hello fellas, here you are going to learn about Email Tracking, a method used to obtain information from sent emails. For a smooth start, let me give you an example. Suppose that you are the attacker. You have created a file which sounds trustworthy by its name (let the file name be "IDM Cracked Latest Version"). But along with this file, you have also bound (attached) an executable in the background, hidden from the user. This executable is nothing but a keylogger. Hence, the file will seem useful to the user but is really spyware. Now, you mail this file to the victim and wait for him/her to open it. Here is the trap.

Most email services don't provide a way for the sender to know whether the email was seen by the receiver or just ignored. In WhatsApp, Facebook and other messaging services, we can know if our message was read or ignored, but no email service provides us with this feature. So if you sent a spyware file to a victim, it could take weeks to find out whether he/she downloaded the file or not, and your attack would be unsuccessful or give a delayed result. This is a simple example of where email tracking becomes handy. Now that we have seen the importance of Email Tracking, let us study the process in depth.

There are in general, two methods to obtain information from Emails.
  1. Email Tracing
  2. Email Tracking
Yes, Email Tracing is different from Email Tracking. To study the difference and learn what Email Tracing is, click here.. Both procedures (email tracing and tracking) are independent, so you can directly study this article to learn tracking without having studied email tracing. But I would still suggest you go through email tracing at least once before continuing, as it's an interesting and important topic.

What is Email Tracking?

To be technical, it’s a method for monitoring email delivery through the use of a digitally time-stamped record to show the exact time and date an email was opened. You send an email. Your victim opens it. You get a notification in the corner of your screen and have the time of the email being opened on record. Every time the email is opened or a link is clicked, you’ll know it happened.

In general, there are two kinds of receipts involved when an email is sent.
  1. Delivery Receipt - Indicates whether the email was delivered or not. This receipt is provided built-in by all email services.
  2. Read Receipt - Indicates whether the email you sent was read by the receiver or just ignored. This service is not provided by most email service providers, but we can still work around the provider's functionality to get a read receipt.
[Image: delivery receipt vs. read receipt]
Email Tracking is the methodology of obtaining read receipts for any sent email. So now, let us see the advantages of email tracking before learning how it actually works.

How is Email Tracking useful?

Email Tracking is mainly used in two fields - Spying and Marketing. Initially, email and link tracking feels like spying on your customers or potential clients. However, nothing nefarious is happening. Using email tracking actually saves time and increases productivity for both you and the customer. When you see a notification you know your email has been opened. You no longer have to send the “did you get my email?” message unless they actually haven’t gotten it.

Also, you’ll know exactly when people are sitting down at their desks with your business on their mind. If you reach out to them close to this time, you’ll save your client time by contacting them when they’ve already got your company in mind. Instead of trying to get them at a random time on a random day, they’ll already be thinking about you, and will be less likely to be busy with something else. If you notice an email being opened multiple times, you’ll know there’s a higher chance of engagement. You can tell if they’re checking the information you sent them before or after a call/meeting.

Email tracking is great for:
  • Knowing when to follow up with people.
  • Providing specific information based on the feedback (For example: If they keep clicking an email about a certain product, you could send more information about it).
  • Helping marketing know what’s getting clients to click onward and what’s failing to get their attention.
  • Giving peace of mind that you’re getting to clients.
Now let us see how Email Tracking works.

How does Email Tracking work in general?

To understand email tracking, we must first know the importance of the web beacon, or tracking pixel.
  • Web beacon: A web beacon is an object embedded in a web page or email which unobtrusively (usually invisibly) allows checking that a user has accessed the content. Common uses are email tracking and page tagging for web analytics.
  • Tracking pixel: A tracking pixel is a type of web beacon: a transparent image measuring one pixel by one pixel (very small). Once placed on a web page or in an email, a tracking pixel connects to a script (for example a PHP file) stored on your web server. Each time the tracking pixel is viewed, it pulls the file from the server, creating a logged event that lets you know exactly when and how many times customers accessed the page or opened the message.
Now that we know about the tracking pixel, note two of its important properties - it's transparent, and whenever it is accessed, the event is logged along with a date-and-time stamp in the log file. Looking at the log file, you can tell when and how many times the image was accessed.

Email Tracking works the same way. The key point is that you must link the tracking pixel in the email rather than attach it. When you attach an image, it is loaded while the email is being composed and sent along with it as an attachment, so the log file would record the time the email was created. But when you link an image, only an HTML tag containing its URL (<img src="link">) is sent in the email. The image is then fetched from your server when the receiver opens the email, so the log file records the time the receiver actually viewed it - which indirectly tells you when the email was seen.
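As a small sketch of what "linking" the pixel means in practice: the pixel itself can be a one-by-one transparent GIF that your logging script returns, and the email body simply carries an img tag pointing at that script. The URL below is a made-up placeholder, not a real tracking service:

```python
import base64

# A 1x1 transparent GIF, base64-encoded; this is the "pixel" your server would return.
PIXEL_GIF = base64.b64decode(
    "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"
)

def tracking_html(body_text: str, tracker_url: str) -> str:
    """Build an HTML email body with a linked (not attached) tracking pixel."""
    return (
        "<html><body>"
        f"<p>{body_text}</p>"
        # The image is fetched from tracker_url only when the email is opened.
        f'<img src="{tracker_url}" width="1" height="1" alt="">'
        "</body></html>"
    )

html = tracking_html("Hello!", "https://example.com/pixel.gif?id=42")
```

Every fetch of that URL is one logged "open" event, which is exactly what the log file described above records.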

Limitations of Email Tracking Pixel

Typically, the tracking pixel itself has no limitations, but problems occur for the following reasons:
  • The image isn’t loaded when an email is opened. Many web, desktop, and mobile email clients do not open images by default. Especially from unknown senders.
  • An ad or tracking blocker is being used. Several extensions exist that block email opens from being reported.
  • The image is loaded, but the email isn’t actually open. Some email clients render images as a preview, and will trigger email open false positives. The same effect is produced by Gmail's Image Caching feature.
  • Some enterprise security systems will block emails with open-tracking pixels or tracked links. Worse than email tracking not working, your email just might not get through at all.
The above is a brief list of what can cause email tracking to fail. The most important item above is Gmail's Image Caching feature. (I cannot cover it here due to the size limitations of my article, but you can Google it.)

Some of useful tools for Email Tracking

Email tracking can be done with the help of three methods - the manual method, web browser extensions, and online tools. The manual method is a bit harder and lengthier, so I will cover it in my upcoming articles. The extensions and tools are listed here:
If you know about other good tools, write the name and link in comments. Till then, stay connected.. Thank you..

Saturday, 24 June 2017

Email Footprinting - Trace an Email and Collect Information from it..!


In the previous article, I wrote on Website Scraping, Website Monitoring and Website Mirroring. It contained the methodology of gathering information from a website. Similarly, this article refers to gathering information from an Email.

An Email can give us access to a lot of sensitive information. Information such as:
  • Sender's Email
  • Sender's Name
  • Sender's Physical Location
  • The Path through which Email travelled - The transfer agents in between
  • Sender's IP Address
  • Active Ports of Sender
and much more can be learned about the sender.

This sensitive information can lead a hacker to much of the target's data. So, in this article we are going to study how to collect information from emails.

There are in general, two methods of gathering information from emails.
  • Tracing Email
  • Tracking Email
And here we are going to study tracing an email. Email tracking is not part of Email Footprinting, but we will still study it later. For now, let us not go too deep into email tracking and just study the difference between Email Tracing and Email Tracking.

Email Tracing vs. Email Tracking

Tracing generally refers to movement in the backward direction, while tracking refers to movement in the forward direction. A common example: when you order an item on Amazon, they let you track the delivery of that item, so you can see where your object is right now. That is tracking - the object is yours, and you are spying on your own object. In tracing, the object belongs to someone else, and you are spying on that other person's object.

When you send a mail and start spying on it (whether the receiver clicked a link in your mail, opened your mail, or performed any other activity), it is called Email Tracking. Similarly, when you get an email in your inbox and spy on that email (move backwards and gather information about where the mail was sent from, and about every sender along the way), it is called Email Tracing.

Now that we know about Email Tracing and what type of information can be obtained, let us look at the topic in detail.

Email Header

We know that we can obtain information about the sender from an email. Think a little deeper... there must be a source from which we get all this information. Yes, that source is the Email Header.

In an e-mail, the body (content text) is always preceded by header lines that identify particular routing information of the message, including the sender, recipient, date and subject. Some headers are mandatory, such as the FROM, TO and DATE headers. Others are optional, but very commonly used, such as SUBJECT and CC. Other headers include the sending time stamps and the receiving time stamps of all mail transfer agents that have received and sent the message.

Mail Transfer Agents (MTA) are the intermediate routers, computers or servers that help in transfer of email from a sender to the receiver. Generally, sender and receiver are not connected by a direct connection. Hence, we use MTAs to create a path between sender's mail box (on sender's mail server) and receiver's mail box (on receiver's mail server). To know more about How Email system works, click here..

In other words, any time a message is transferred from one user to another (i.e. when it is sent or forwarded), the message is date/time stamped by a mail transfer agent (MTA) - a computer program or software agent that facilitates the transfer of email message from one computer to another. This date/time stamp, like FROM, TO, and SUBJECT, becomes one of the many headers that precede the body of an email. Hence, there might be multiple sub-headers in an email header providing information about each MTA unit associated in the transfer.

Headers Provide Routing Information

Besides the most common identifications (from, to, date, subject), email headers also provide information on the route an email takes as it is transferred from one computer to another. As mentioned earlier, mail transfer agents (MTA) facilitate email transfers. When an email is sent from one computer to another it travels through a MTA. Each time an email is sent or forwarded by the MTA, it is stamped with a date, time and recipient. This is why some emails, if they have had several destinations, may have several RECEIVED headers: there have been multiple recipients since the origination of the email. In a way it is much like the same way the post office would route a letter: every time the letter passes through a post office on its route, or if it is forwarded on, it will receive a stamp. In this case the stamp is an email header.

An example of a simple email header, with only one sender and receiver tag, is shown below:

[Image: a simple email header]
The above example is the simplest header of all, but it might still look complicated to you. This proves that tracing an email manually is complex. But we need to know the manual method too, because using automated tools alone doesn't provide perfection.

Manual method to trace an Email

To find the information from a received email you're curious about, open the email and look for the header details. How you find that email's header depends on the email program you use. Do you use Gmail or Yahoo? Hotmail or Outlook? 

For example, if you're a Gmail user, here are the steps you'd take:
  1. Open the message you want to view
  2. Click the down arrow next to the "Reply" link
  3. Select "Show Original" to open a new window with the full headers
Similarly, you can find the method for other email programs through Google. If I wrote the methods for all of them, the article would become too lengthy.
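Once you have the raw headers, Python's standard email module can parse them for you. Here is a sketch using a made-up two-hop message; the hosts and IP addresses are placeholders, not real servers:

```python
from email import message_from_string

# A made-up raw message: each MTA prepends its own Received header,
# so the topmost Received header is the newest hop.
RAW_MESSAGE = """\
Received: from mx.example.net (mx.example.net [198.51.100.7])
    by inbox.example.com with ESMTP; Sat, 24 Jun 2017 10:16:02 +0000
Received: from mail.example.org (mail.example.org [203.0.113.5])
    by mx.example.net with ESMTP; Sat, 24 Jun 2017 10:15:00 +0000
From: alice@example.org
To: bob@example.com
Subject: hello

Hi Bob!
"""

msg = message_from_string(RAW_MESSAGE)

# Reverse the Received headers to read the path in the direction
# the email actually travelled: sender's server first.
hops = list(reversed(msg.get_all("Received")))
```

The first entry of `hops` now names the sender's mail server and its IP address - exactly the information email tracing is after.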

Automated Tools for Email Tracing

Here is a small list of some of the best tools for Email Tracing..
You can easily search Google for other tools.

As I told, email tracking and email tracing are different. I will teach you about Email Tracking in my next article. So, stay connected..

Sunday, 18 June 2017

Website Footprinting - Website Scraping, Website Mirroring and Website Monitoring


While footprinting refers to gathering the needed information and learning how things work, website footprinting refers to extracting data from a website and learning how the site works. Basically, the working of a website is understood from the JavaScript files, or the JS code that executes on an activity. There are many other things which reveal the workings of a site, and these may be helpful to the attacker. So, let us explore the terms and methods.

Website Footprinting is the first step towards hacking a website. To hack a site, we need information such as:
  • How does the site work?
  • How frequently are new articles posted on the site?
  • Is the admin of the website active or inactive?
  • What type of data is available on the site?
  • And much more...
These can be achieved by footprinting a website. Following all the steps of website footprinting lets us obtain confidential information from the site and learn how the site really works. Let us explore this further.

Website Scraping

The best way to extract information from a webpage is to open the page in a browser and then examine its source code and the cookies used by the site. But examining the source code doesn't provide all the needed information, and looking at cookies manually is tiresome. So, the concept of extracting data from a website automatically came into existence.

Web Scraping (also termed Screen Scraping, Web Data Extraction, Web Harvesting etc.) is a technique employed to extract large amounts of data from websites whereby the data is extracted and saved to a local file in your computer or to a database in table (spreadsheet) format.

Data displayed by most websites can only be viewed using a web browser. They do not offer the functionality to save a copy of this data for personal use. The only option then is to manually copy and paste the data - a very tedious job which can take many hours or sometimes days to complete. Web Scraping is the technique of automating this process, so that instead of manually copying the data from websites, the Web Scraping software will perform the same task within a fraction of the time.

A web scraping software will automatically load and extract data from multiple pages of websites based on your requirement. It is either custom built for a specific website or is one which can be configured to work with any website. With the click of a button you can easily save the data available in the website to a file in your computer.
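As a tiny illustration of the idea, Python's standard html.parser can pull structured data (here, every link) out of a page's HTML. The page below is a stand-in string; a real scraper would first fetch the page, for example with urllib.request, and feed the response to the same parser:

```python
from html.parser import HTMLParser

class LinkScraper(HTMLParser):
    """Collect the href of every <a> tag seen in the page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A stand-in page; in practice this HTML would come from the target website.
page = '<html><body><a href="/post1">One</a> <a href="/post2">Two</a></body></html>'
scraper = LinkScraper()
scraper.feed(page)
# scraper.links now holds the extracted data, ready to save to a file or database.
```

Running this over many pages, and extracting richer fields than just links, is all a dedicated web scraping tool really automates.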

Some useful web scraping software is listed below:
You can also use any other software/plugin/script for the same job; these are easily available on the internet. The main concern is that the tool should be easy to use.

Website Mirroring

Mirroring refers to downloading an entire website onto your hard disk so you can browse it offline.

Mirroring an entire website onto a local machine enables an attacker to browse the website offline; it can also assist in finding the directory structure and other valuable information from the mirrored copy without sending multiple requests to the web server. Sending multiple requests to a web server may be dangerous: the admin, when looking at the log files, can identify that you were trying to collect sensitive information from the site, which can help the admin trace you back.

Some well-known web mirroring tools are:
There are many other tools which are easily available on Google but these are the best.

Website Monitoring

Monitoring a website refers to getting information such as:
  • How frequently does the admin post on the site?
  • Which posts have been deleted?
  • When was an article posted?
  • Get alerted when a new article is posted on the site.
There are two methods, used for different purposes. The first three purposes listed above are satisfied by internet archives. You can refer to a complete guide in this article.

The second method is easy to use and satisfies the fourth (last) purpose of website monitoring. It works the same way as subscribing to a website: when a new post is published, you are informed about it by mail. The major difference is that with a subscription, the alert mail is controlled by the admin, i.e. we are alerted of a new article when the admin wants; while with monitoring, we are the controller - we regularly check whether a site has posted a new article or made any changes.

But doing this task manually is tiresome, as said before. So automated tools and services are used to reduce the work. Some of the tools used for this purpose are:
The above are some of the best services while you can search google for more such services if you want.
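The core of any such monitoring service is simple: fetch the page on a schedule, hash its content, and raise an alert when the hash changes. A minimal sketch of that logic, with the fetching and mailing parts left out:

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Hash the page body so that any change, however small, is detectable."""
    return hashlib.sha256(content).hexdigest()

def has_changed(last_fingerprint: str, content: bytes) -> bool:
    """Compare a stored fingerprint against freshly fetched content."""
    return fingerprint(content) != last_fingerprint

# First visit: store the fingerprint. Later visits: compare and alert on change.
baseline = fingerprint(b"<html>old article list</html>")
assert not has_changed(baseline, b"<html>old article list</html>")
assert has_changed(baseline, b"<html>old + NEW article</html>")
```

A real service would run this in a loop or cron job and send the alert email when `has_changed` returns True.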

Monday, 17 April 2017

Gather Information Using Google Hacking


As part of our chapter on Footprinting and Reconnaissance, this article will make you aware of how to gather information using Google search. We have seen earlier how to search deep into Google's servers to get direct download links. Ever thought about how that worked?

We have been using Google search for a long time, but few of us have tried to search deep into its servers. We just clicked on the website links that Google showed us; instead, we can use Google search operators to modify the results according to our needs. All this can be done using Google Dorks - also known as Google commands or filters. So, let us start by understanding what Google Dorks are and how to use them.

Google Dorks can be used as we wish:
  • For Hacking
  • For Normal Uses
It depends on individual how he/she uses this function. Let us start understanding the term and its uses.

Basics

Google hacking involves using advanced operators in the Google search engine to locate specific strings of text within search results.

Examples

  • Some of the more popular examples are finding specific versions of vulnerable Web applications.
  • Devices connected to the Internet can be found. A search string such as inurl:"ViewerFrame?Mode=" will find public web cameras.
  • Another useful search is intitle:index.of followed by a search keyword. This can give a list of files on the servers. For example, intitle:index.of mp3 will list the MP3 files available on various servers. We have already seen this technique used to get direct download links for movies, PDFs, songs and more..
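These query strings can of course be typed straight into the search box, but it helps to see that a dork is just a URL-encoded query parameter. A small illustrative sketch (the URL construction is generic, not an official Google API):

```python
from urllib.parse import quote_plus

def dork_url(query: str) -> str:
    """Turn a dork query into a Google search URL (for illustration only)."""
    return "https://www.google.com/search?q=" + quote_plus(query)

url = dork_url('intitle:index.of mp3')
# The ':' becomes %3A and the space becomes '+', as in any URL-encoded query.
```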

History

History usually seems boring to us, but that is not the case here. This is the story of how a computer expert turned into a hacker.

The concept of "Google Hacking" dates back to 2002, when Johnny Long began to collect interesting Google search queries that uncovered vulnerable systems and/or sensitive information disclosures - labeling them googleDorks.

The list of googleDorks grew into a large dictionary of queries, which were eventually organized into the original Google Hacking Database (GHDB) in 2004. In short, the GHDB is an extended version of Google Dorks.

After the release of the GHDB, Johnny Long wrote his own book on Google hacking, popularly known as Google Hacking for Penetration Testers.

Introduction

A misconfigured server may expose a lot of business information on Google, including files and database contents that would otherwise be difficult to access. One example is Google's 'cache', which stores older versions of all sites that were once indexed by its robots. This feature gives you access to pages that have already been taken down, since they still exist in Google's database. To read more on the Google cache and learn how to use it, click here..

What kind of data can be exploited?

We all know that Google spies on us by keeping a record of what we search and what we do..! Similarly, Google keeps an eye on various servers too, maintaining the information either in its storage servers or in its cache. Hence, many a time, important data from a server gets leaked unknowingly.

You might have heard of performing SQL injection using Google search. Below are the many other kinds of data that we can obtain from Google using the GHDB.

Advisories and Vulnerabilities 

These searches locate vulnerable servers. These searches are often generated from various security advisory posts, and in many cases are product or version-specific. 

Error Messages

Error messages that reveal more information than they should. When we come to know that a website is not properly configured, we can start searching for the mistake in the site, which can serve as a vulnerable entry point to the whole website. Sometimes, error messages provide us with exactly this kind of information.

Files containing juicy info

No usernames or passwords, but interesting stuff that has the same value as usernames and passwords.

Files containing passwords

Google search can also provide us with passwords from its database if we use Dorks correctly.

Files containing usernames

These files contain usernames, but no passwords...

Footholds

Queries that can help a hacker gain a foothold into a web server

Pages containing login portals

These are login pages for various services. Consider them the front door of a website's more sensitive functions.

Pages containing network or vulnerability data

These pages contain such things as firewall logs, honeypot logs, network information, IDS logs... all sorts of fun stuff!

Sensitive Directories

Google's collection of websites sharing sensitive directories. The files contained here vary from sensitive to top-secret!

Various Online Devices

This category contains things like printers, video cameras, and all sorts of cool things found on the web with Google.

Vulnerable Files

HUNDREDS of vulnerable files that Google can find on websites...

Vulnerable Servers

These searches reveal servers with specific vulnerabilities. These are found in a different way than the searches found in the "Vulnerable Files" section. 

Tools which help to perform Google Hacking

There are two official websites which help us perform google hacking:
There is also an app available on the Play Store named "Google Dorks" which can be used to learn the basics of the GHDB.


There are so many things to learn in the GHDB, and all of them cannot be covered in a single article. Hence, I am looking forward to opening a new tab on this blog specially for the GHDB. So, keep in touch..!

Tuesday, 7 March 2017

Footprinting and Reconnaissance - Monitoring target using Alerts, Groups, Forums and Blogs


We have seen many methods of gathering information about the target system, but some are still left. Here, we are going to discuss two new methods of spying on a target in a completely legitimate way.

The first method involves spying on the target system using alert services, while the second involves spying using online groups, forums and blogs.

Monitoring Target using Online Alert Service :

Before going deep, I will tell you what an alert service actually is! An alert service works much like a subscription service. Suppose that you have subscribed to my blog for free articles (see the subscription box on the right side of the window); then you will get daily updates of my posts on this blog. But the thing is, you will not get instant updates all the time.

An alert service works the same way. You can think of an alert like the reminders you set on your mobile: when the time comes, it provides you with a reminder (alert) about whatever task is to be done. An alert service provides you with an instant update whenever the server data gets modified in any way. Suppose the admin posts new data on the website; then you will get an immediate alert about the change.

In short, we can say that :
Alerts are content monitoring services that provide up-to-date information based on your preference, usually via email or SMS in an automated manner.

So, now that you know what an alert is, let us see how to use such services. Alert services behave differently depending on the provider: if we have two services available, one from Google and the other from Yahoo, there is a lot of difference in the way they deliver updates. Also, some alert services are paid while others are free.
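Many alert services can deliver their updates as an RSS/Atom feed rather than email. As a minimal sketch (the feed below is a made-up sample standing in for a real alert feed URL taken from your alert service's settings), such a feed can be parsed with Python's standard library:

```python
import xml.etree.ElementTree as ET

# Sample RSS payload standing in for a real alert feed
# (a live feed URL would come from your alert service's settings).
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Alerts: example.com</title>
  <item><title>example.com added a careers page</title>
        <link>http://example.com/careers</link></item>
  <item><title>New press release on example.com</title>
        <link>http://example.com/press</link></item>
</channel></rss>"""

def parse_alerts(feed_xml: str):
    """Return (title, link) pairs for each item in an RSS feed."""
    root = ET.fromstring(feed_xml)
    return [(i.findtext("title"), i.findtext("link"))
            for i in root.iter("item")]

for title, link in parse_alerts(SAMPLE_FEED):
    print(title, "->", link)
```

Polling such a feed on a schedule gives you the automated, near-instant change notifications described above.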

Here are a few examples of Alert Services :

Information Gathering using Groups, Forums and Blogs :

Groups, forums and blogs provide sensitive information about a target such as public network information, system information, personal information, etc. Hence, they too become part of information gathering, though of comparatively little importance.

To gather this type of information, register with fake profiles on Google Groups, Yahoo Groups, etc. and try to join the target organisation's employee groups, where members share personal and company information.

Search for information like Fully Qualified Domain Names (FQDNs), IP addresses and usernames in groups, forums or blogs.
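A small sketch of that idea: once you have scraped post text from a group or forum (the sample post below is hypothetical), IP addresses and FQDNs can be pulled out with regular expressions:

```python
import re

# Hypothetical forum post text for demonstration.
post = """Our intranet portal is at portal.corp-example.com (10.20.1.15).
Ping jdoe if the VPN gateway 203.0.113.7 is unreachable."""

IP_RE   = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
FQDN_RE = re.compile(r"\b(?:[a-zA-Z0-9-]+\.)+[a-zA-Z]{2,}\b")

ips   = IP_RE.findall(post)
# Drop anything that is actually an IP address, keep real domain names.
fqdns = [f for f in FQDN_RE.findall(post) if not IP_RE.fullmatch(f)]
print(ips)
print(fqdns)
```

The patterns are deliberately loose (a strict IP regex would also validate each octet is below 256), but they are enough to triage scraped text for follow-up.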

Tuesday, 21 February 2017

Footprinting and Reconnaissance - Location Information and People Search


In the last article, I covered the steps to determine the operating system of the target. Here, we are going to study how to determine the geographic location of the target, as well as how to use online people search services.

So, first you might ask, 'Why is determining the location important in hacking?'. It holds a lot of importance, as many things can be inferred from a company's location. A few of them are listed below :-
  • Services provided by the company.
  • Nature of the society at that place.
  • Mindset of workers (people residing there).
  • And many more...
Now, you might ask, 'What is the importance of all the things mentioned above?'. Basically, all of these come into play when we perform social engineering attacks (covered in later articles) on the target company.
 
Tools for finding geographical location :-
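As a sketch of how such tools work under the hood, many free IP-geolocation APIs (ip-api.com is one example) return JSON similar to the canned response below; the exact field names vary per service, so treat this as an illustration rather than a definitive client:

```python
import json

# Canned response modelled on a typical IP-geolocation JSON API
# (e.g. the free ip-api.com endpoint; field names may differ per service).
RESPONSE = '''{"status": "success", "country": "United States",
               "city": "Ashburn", "lat": 39.03, "lon": -77.5}'''

def summarize_location(raw: str) -> str:
    """Turn a geolocation API response into a one-line summary."""
    data = json.loads(raw)
    if data.get("status") != "success":
        return "lookup failed"
    return f'{data["city"]}, {data["country"]} ({data["lat"]}, {data["lon"]})'

print(summarize_location(RESPONSE))
```

In a live lookup, `RESPONSE` would be the body fetched from the service for the target's IP address.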
 
But knowing the geographic location alone does not let you perform social engineering attacks; much other information is needed. From the list of information needed, only two gathering tricks are mentioned here: one is location information (above) and the other is people search (below).

Social networking sites are a great source of personal and organizational information. Information about an individual can be found on various people search websites. A people search returns the following information about a person or organization :-
  • Residential addresses and Email addresses
  • Contact numbers and Date-of-Birth
  • Photos and Social Networking Profiles
  • Blog URLs
  • Satellite pictures of Private Residences
  • Upcoming projects and operating environments.

People Search Online Services :-
NOTE :- These services mostly cover US citizens; you can only find data on citizens of the US, not of other countries. I am still looking for similar services covering other countries' databases. Till then, enjoy this one.
 
Post your problems and feedback at the bottom of this page in the comment box. Thank you.. 

Thursday, 16 February 2017

Methods to determine the OS of the target system


I mentioned in my previous article why determining the OS of the target's system is so important when attacking. In this article, I am going to show you methods that can be used to determine the OS of the target.

Generally, there are many methods that can accomplish this task, but we will stick to the easiest ones. The operating system of a server can be found using :
  • Linux command shell
  • Online tools
  • Search Engines
  • And many more..
The Linux command shell is hard to use right now, as you may not yet know Linux commands. (I am posting articles on Linux, but the commands are not covered yet.) So, for now, we will skip the Linux part. The remaining two methods are easier to apply, and here is a brief tutorial on each.

Using online tools such as Netcraft :



If you have read my previous post about finding the restricted URLs of a company, you may already be aware of Netcraft. If not, don't worry. Here is the complete method :
  1. Open https://www.netcraft.com/ in a new browser window.
  2. Search the Home Page of netcraft for text 'What's that site running?'
  3. You will find a search box beside the text 'Find out what technologies are powering any website'.
  4. Type the name of the server you want to search for, e.g. www.microsoft.com (here, in my case, it's ldce.ac.in).
  5. Enjoy the results.

Using Shodan search engine :



Ever heard of the Shodan search engine? If you are new to it, make a practice of remembering this name, as it is often more useful in hacking than Google. Hackers refer to it as the Most Dangerous Search Engine. To read more about it and learn how to hack using Shodan, click here.

Now, back to this article... The Shodan search engine lets you find specific computers (routers, servers, etc.) using a variety of filters. Follow these steps to find the OS using Shodan.
  1. Visit : https://www.shodan.io/.
  2. In the search box, type the website you want to search for. (See the image above, click on it to get full size view.)
  3. It's done! Enjoy the results.
As you can see, Shodan gives you extra results about the website's hosting server, the company providing the SSL certificate, etc. These are only the basics; Shodan can provide complete information about almost any server, but this part of the tutorial is limited. I have posted a new article on the Shodan search engine if you want to know more about it.
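A related trick not covered above: the HTTP `Server` response header often leaks OS hints, though admins can change or hide it, so treat the result as best effort. A minimal classifier over header strings as they commonly appear:

```python
def os_hint(server_header: str) -> str:
    """Guess the host OS family from an HTTP Server header (best effort)."""
    s = server_header.lower()
    if "microsoft-iis" in s:
        return "Windows (IIS only runs on Windows)"
    if "ubuntu" in s or "debian" in s or "centos" in s:
        return "Linux (distribution named in the header)"
    if "unix" in s:
        return "Unix-like"
    return "unknown (header gives no OS clue)"

# Example headers as they commonly appear in responses:
print(os_hint("Microsoft-IIS/10.0"))
print(os_hint("Apache/2.4.41 (Ubuntu)"))
print(os_hint("nginx"))
```

In practice you would fetch the header with any HTTP client and pass it to `os_hint`; the bare `nginx` case shows why header-based fingerprinting alone is inconclusive.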

Monday, 13 February 2017

Footprinting and Reconnaissance - Determining the Operating System


Determining the operating system the server runs on is one of the most important parts of hacking. Mostly, hacking means breaking into the target's system to steal data or for some similar purpose. Hence, the security of the system becomes a thing of prime importance.

Why is determining the Operating System so important?

A few important things depend on the type of operating system of the server. These are :
  • Programs that can be installed on the server : Suppose you want to install malware such as a keylogger or other spying software on the target's system; you must then know which OS it runs. This is because software is built per OS: you can't run iOS apps on Android and vice versa, and you can't run Windows EXEs on Linux.
  • Commands that can be executed : This matters when you want to remotely control a system. Suppose we found a vulnerability in the system and installed malware that lets us remotely control the server via its shell. Here, we need to know the shell commands (CMD on Windows, Bash on Linux). And knowing only the commands is not enough; they are useless until you know which system you are executing them on.
  • The storage location of user and password information : This matters when you want to steal the username and password information of users or the admin from the server. Operating systems like Windows and Linux have predefined files (at specific paths) which store this sensitive information. So, once you know the OS, you know exactly which path to target to steal the data.
  • Vulnerabilities of a given operating system : Every operating system has vulnerabilities that can be exploited by attackers, although Linux generally offers stronger default security than Windows. When you know about a vulnerability in an OS, you can target any server running that OS by exploiting it. Recently, Windows 10 was hit by a zero-day attack, and the vulnerability still exists.
Not only the OS but also its version is of great use. New versions are released to patch bugs in old versions. Suppose the target luckily still runs an old version: it becomes easy to hack into the system because we know the bugs in that version.
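The credential-storage point above can be made concrete: each OS family keeps account data in a well-known default location, which is why identifying the OS immediately tells an attacker where to look. A small lookup table illustrates this:

```python
# Well-known default credential-store locations per OS family.
CREDENTIAL_STORES = {
    "Linux":   ["/etc/passwd", "/etc/shadow"],       # /etc/shadow holds the password hashes
    "Windows": [r"C:\Windows\System32\config\SAM"],  # the SAM database
}

for os_name, paths in CREDENTIAL_STORES.items():
    print(os_name, "->", ", ".join(paths))
```

Both stores hold hashed (not plaintext) credentials and are readable only with elevated privileges, which is why OS knowledge is a prerequisite rather than the whole attack.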

Each operating system has a unique set of features, and the hacker must know them.

Steps for how to determine the Operating System of a server will be mentioned in my Next Article.

Saturday, 11 February 2017

Footprinting and Reconnaissance - Finding Companies' Restricted URLs



Restricted URLs provide insight into the number of websites hosted on a particular server. Besides the number of websites, you can also find each site's name, and from the name we can easily judge that site's importance.

Restricted URLs are present on almost any server; they exist for administrative purposes. On a huge server, a single person (the admin) cannot handle everything, so the site must be maintained by many people according to the tasks assigned to them.

When employees need to access a website's data, the site asks them to authenticate and then logs them in. Often, when we visit a bank, we see employees working in a private window using their own credentials. We can't access that part of the server, because we don't know the exact URL they are working on.

Suppose there is a large server, xyz.com. The server can't be run by a single person, so it might include a page where a person logs in as admin and starts his/her work. The admin login page is often admin.xyz.com. Here the guess was easy, but there are many other such private URLs made for employee use only. So, today we will see how to find the private URLs of a server.


In the image above, you can see the private URLs of microsoft.com; that is what I searched for. Now, let's start the tutorial.

How to find Private URLs of any company?

Follow these simple steps to know the trick.
  1. Open : https://www.netcraft.com.
  2. Search the Home Page of netcraft for text 'What's that site running?'.
  3. You will find a search box beside the text 'Find out what technologies are powering any website:'.
  4. Type the name of the server (company) you want to search for (here, in my case, it's microsoft.com).
  5. Enjoy the results...
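Besides Netcraft, the same idea can be sketched programmatically: generate candidate private hostnames from a small wordlist and check which ones resolve in DNS. The prefixes below are common guesses for illustration, not an authoritative list (real enumeration tools use wordlists with thousands of entries):

```python
import socket

# A tiny wordlist of common "private" subdomain prefixes (illustrative only).
COMMON_PREFIXES = ["admin", "mail", "intranet", "portal", "dev", "vpn"]

def candidates(domain: str, prefixes=COMMON_PREFIXES):
    """Build candidate private hostnames for a domain."""
    return [f"{p}.{domain}" for p in prefixes]

def resolves(hostname: str) -> bool:
    """Check whether a candidate hostname actually exists in DNS."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

print(candidates("xyz.com")[:3])
```

Running `resolves()` over `candidates("target.com")` (network access required) leaves only the hostnames that really exist; a login page behind one of them is exactly the kind of restricted URL discussed above.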

Thursday, 9 February 2017

Footprinting via Internet Archives


To collect information using internet archives, we should first know what an internet archive is. The Internet Archive is a digital archive of websites. It records the date and time at which each copy was taken, along with a view of the page as it was modified or uploaded. The Internet Archive allows the public to upload and download digital material to its data cluster, but the bulk of its data is collected automatically by its web crawlers, which work to preserve as much of the public web as possible. Its web archive, the Wayback Machine, contains over 150 billion web captures. The Archive also oversees one of the world's largest book digitization projects.

The Wayback Machine is a digital archive of the World Wide Web and other information on the Internet created by the Internet Archive. Since 1996, the Wayback Machine has been archiving cached pages of websites onto its large cluster of Linux nodes. It revisits sites every few weeks or months and archives a new version. Sites can also be captured on the fly by visitors who enter the site's URL into a search box. The intention is to capture and archive content that otherwise would be lost whenever a site is changed or closed down. The overall vision of the machine's creators is to archive the entire Internet.

A few questions arise, like :
  • How to spy on a website?
  • How to get site contents without register/login?
  • How to view old contents of a site?
  • How to see the contents deleted by admin of a site?
  • How to see the frequency of uploads of a website?
Here, you will find answers to all these questions.


How to use WayBack Machine?

  1. Visit   :-   https://archive.org/ 
  2. You will see the search bar beside the text "WayBack Machine".
  3. Just enter the URL in the search bar and you will see a calendar-wise list of webpages captured for that site. (See the picture above; click on it to view full size.)
  4. Just click on any date on the calendar and you will be redirected to the version of website at that time.
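The steps above can also be automated: the Internet Archive exposes an availability API at archive.org/wayback/available that returns the closest snapshot to a given date as JSON. A minimal sketch, using a canned response in the documented shape so it runs offline:

```python
import json
from urllib.parse import urlencode

def availability_url(site: str, timestamp: str = "") -> str:
    """Build a query URL for the Wayback Machine availability API."""
    params = {"url": site}
    if timestamp:                     # YYYYMMDD; the API returns the closest capture
        params["timestamp"] = timestamp
    return "https://archive.org/wayback/available?" + urlencode(params)

# Canned response in the shape the API documents:
RESPONSE = '''{"archived_snapshots": {"closest": {
    "available": true, "status": "200", "timestamp": "20170101120000",
    "url": "http://web.archive.org/web/20170101120000/http://example.com/"}}}'''

closest = json.loads(RESPONSE)["archived_snapshots"]["closest"]
print(availability_url("example.com", "20170101"))
print(closest["url"])
```

Fetching the generated URL (network access required) returns JSON like `RESPONSE`, and the `closest.url` field is the snapshot link you would otherwise reach by clicking the calendar.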

Why to use WayBack Machine?

  • View Site Changes - This one is pretty obvious since it’s what everyone tends to use the Wayback Machine for, but it’s still worth mentioning some use cases. The Wayback Machine’s snapshots can be used to compare a site’s rendered appearance on different dates to see when some aspect of it was changed. This can be helpful when trying to determine the cause of a drop in a page’s rankings. Looking at the page in the time around the drop will help you determine what changes were made that may have negatively affected rankings. Use that intel to build a plan to fix the problem.
  • Familiarize Yourself with a Site - When you get a new client, it’s important to become familiar with their site and get a feel of the ins and outs of their brand. The Wayback Machine allows you to do just that! You can see how the site has changed over the years, and you can even get a sense of what their brand voice was and how it may have changed. For those working on branding and content initiatives, this is very useful.
  • Find Old Webpages - Sometimes the admin of a website removes a page for some reason, but the removed data might still be of use to you. Here the Wayback Machine is a great help: it lets you see a site's old data.
  • Discover Old URL Structures -
    Sometimes a website’s URL structure gets changed and the old structure is forgotten over time, which can cause problems when it comes to pulling data. If you have a general idea of when the structure was changed, you can use the Wayback Machine to find exactly when the change occurred and what the old structure used to be. Then you can map out the newer URLs with their older counterparts. This can also be a great help if a site’s content has been reorganized or its subfolder names have been changed.
  • Examine Robots.txt - The Wayback Machine indexes pretty much everything it finds on a site, including robots.txt files. This is great because if your site is having technical or crawlability issues, you can find the date or range where the changes causing those issues were made to robots.txt. All you have to do is search the Wayback Machine for a site’s robots.txt and compare snapshots around the time the problem started occurring until you find the culprit. We’ve had this happen recently when an enterprise client was getting “blocked resource” alerts in Google Search Console, but everything looked fine in the robots.txt. A little detective work with the Wayback Machine and we found that the robots.txt had been changed, and reverted back without documentation.
  • Validate Analytics Code Placement and Use - The Wayback Machine indexes the source code for pages as well, so you can view and retrieve old code from previous pages. This is good for looking at past analytics code placement and use on a site if you’ve been noticing some unusual numbers in your analytics account. Depending on how the site was coded, this process could be used for event tracking as well.

Tuesday, 7 February 2017

Footprinting via Cached Pages



First of all, to know how to use cached data from search engines in hacking, we need to know what cached data is. Cached data is data stored temporarily in memory with the aim of making web pages load faster.

Take an example: you might have noticed that when you open a website for the first time in your browser (not in incognito mode), it takes some time to load. Close the page and revisit it, and it loads faster than the first time. This is because when you visit a page on the internet, your browser captures the page data and stores it in its cache. On a revisit, parts of the page are loaded from the cache (offline), which is why it takes less time.

Search engines do something similar. When their crawlers visit a webpage, they store a copy of the page in the cache, which makes it faster to serve results for the queries we search.

Google Cached Pages :

Google takes a snapshot of each page it examines and caches (stores) that version as a back-up. The cached version is what Google uses to judge if a page is a good match for your query.

Practically every search result includes a Cached link. Clicking that link takes you to Google's cached version (the old copy captured by the search engine) of the page instead of the current version. This is useful when the original page is unavailable because of:
  • Internet congestion
  • A down, overloaded, or just slow website
  • The owner having recently removed the page from the Web
You can usually access a page’s cached version faster than the page itself, since Google’s servers are typically much faster than most web servers.


  • When Google displays the cached page, a header at the top serves as a reminder that what you see isn’t necessarily the most recent version of the page.
  • The Cached link will be omitted for sites whose owners have requested that Google remove the cached version or not cache their content, as well as any sites Google hasn’t indexed.
  • If the original page contains more than 101 kilobytes of text, the cached version of the page will consist of the first 101 kbytes (120 kbytes for pdf files).

Why to use Google Cached Pages?

Google Cache can be used for various purposes few of which are as follows:
  • To see the contents of the page (if the original site is temporarily down).
  • To see the content of the dynamically generated page (if the original page has been updated since the cache and no longer contains the information you need. Thus, if Google returns a link to a page that appears to have little to do with your actual query, or if you can’t find the information you’re seeking on the current version of the page, take a look at the cached version).
  • To access websites blocked in your country.
  • To more quickly access the information of a slow loading page. 
  • To avoid registrations/subscriptions or advertisements on a webpage.

How to find Cached Webpages? 

  1. Just go to the Google Web Search box and add the keyword cache: in front of the URL you would like to see, e.g. cache:gtu.ac.in
  2. There is a “Cached” link in each Google Web Search result, except for web pages that do not allow Google Web Search to cache or snapshot them. If the “Cached” link is there, just click it. (See the snapshot provided above; click on it to view full size.)
As Google Webmaster Tools Help notes under ‘Remove a page or site from Google’s search results’, a web page will not be available in Google’s cache if it is not visible to the search engine crawler.
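The cache: trick can also be reached as a direct URL: Google has historically served cached copies from webcache.googleusercontent.com (this endpoint may change or be retired, so treat it as an illustration). A minimal helper:

```python
from urllib.parse import quote

def cache_url(page: str) -> str:
    """Build a direct link to Google's cached copy of a page."""
    return ("https://webcache.googleusercontent.com/search?q=cache:"
            + quote(page, safe=":/"))

print(cache_url("gtu.ac.in"))
print(cache_url("http://example.com/news"))
```

Opening the generated link in a browser shows the cached snapshot directly, skipping the search-results step.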

Footprinting and Reconnaissance - Footprinting through Search Engines


Attackers can use search engines to extract information about the target, such as technology platforms, employee details, login pages, intranet portals, etc. This helps them in performing social engineering and other types of attacks.

Take an example: you want to create a phishing page to hack someone's Facebook account. For this, you need to identify the target and contact him/her directly or indirectly, for instance via Facebook Messenger. You also have to visit the Facebook login page to create a similar duplicate page. In these steps, you collected information via a search engine.

Apart from live results, search engine caches and internet archives may also provide sensitive information that has been removed from the World Wide Web.

This technique is mainly used when you want details about a server that the administrator removed a few days ago, or the details of a website as it appeared on a particular past date.


Saturday, 4 February 2017

Footprinting and Reconnaissance- Introduction




I described the hacking phases in my previous article; that was just basic information on the stages. Now we are going to discuss the full tutorials in detail, along with the tools required to perform these steps. So, let’s start.

Footprinting is the process of collecting as much information as possible about the target network, in order to identify various ways to intrude into (enter without invitation) an organisation’s network system.

Footprinting is the first step of any attack on information systems: the attacker gathers publicly available sensitive information, using which he/she performs social engineering, system and network attacks, etc.

Advantages of footprinting :


  • Know Security Posture : It allows attackers to get an idea of the external condition of security of the target organization.
  • Reduce Focus Area : It reduces attacker’s focus area to specific range of IP address, networks, domain names, etc.
  • Identify Vulnerabilities : It allows attackers to identify defects in the target systems in order to select appropriate exploits.
  • Draw Network Map : It allows attackers to draw an outline of the target organisation’s network infrastructure and the path (sequence of IPs) to reach the target server, in order to learn more about the actual environment they are going to break into.

Objectives of Footprinting :-


  1. Collect Network Information :
    • Domain name
    • IP addresses of reachable systems
    • Rogue/Private websites
    • TCP and UDP services running
    • Network protocols
    • VPN points
  2. Collect System Information :
    • Remote system type
    • User and group names
    • Routing tables
    • System architecture
    • System names
    • Passwords
  3. Collect Organisation's Information : 
    • Employee details
    • Organisation’s website
    • Location details
    • Address and phone numbers
    • Comments in HTML source code
    • Security Policies implemented
    • Other organisations connected
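Much of the network information above (domain names, registrant details) comes from Whois lookups, and the Whois protocol itself (RFC 3912) is trivially simple: a query is just the domain name plus CRLF sent over TCP port 43, answered in plain text. A minimal sketch (whois.iana.org is IANA's public server; real lookups often require following referrals to registrar servers):

```python
import socket

def build_query(domain: str) -> bytes:
    """An RFC 3912 whois request is just the query string followed by CRLF."""
    return (domain + "\r\n").encode("ascii")

def whois_query(domain: str, server: str = "whois.iana.org") -> str:
    """Send the query on TCP port 43 and read the reply until the server closes."""
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall(build_query(domain))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

# Usage (requires network access):
#   print(whois_query("example.com"))
```

The reply is free-form text whose layout varies by registry, which is why production tools layer parsing and referral-following on top of this bare protocol.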
