Task: DVWA SETUP
Setting up DVWA
Grab a copy of DVWA using Docker
docker run --rm -it -p 80:80 vulnerables/web-dvwa
This runs it in the foreground and serves it on localhost:80, so open a web browser to see the site.
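If you want to check the container is actually up before opening a browser, a quick curl from another terminal should do it (this assumes the default port mapping from the command above):
curl -s -o /dev/null -w "%{http_code}\n" http://localhost/login.php
A 200 here means the web server is responding; you still need to set up the database in the next step.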
Default creds are:
- username: admin
- password: password
You will need to log in and create/reset the database.
Setup complete!!
Recon
Recon, or reconnaissance (I had to look up that spelling), is the observation of a region or locale to ascertain strategic features. In a pentest or hack, this means observing the target network and probing it for strategic weaknesses. Any time you use a tool you are sending data to that network, which can alert the defenders to your presence and often to your intentions, so the initial stages of recon involve what is called "passive recon" before we move onto "active recon".
Passive Recon
As part of passive recon, we do open source intelligence gathering and try to find out as much about a target as possible.
Depending on your definition, passive reconnaissance may require not interacting with the site in any way (for example, using Google's cache).
However, we can class "normal browsing" as a passive activity.
If you lot do open source intelligence on this task, you can find lots of walkthroughs and material to help you hack this target.
I would recommend that you take the time to do that outside of the session, but for now focus on the other types of passive recon. One of the most effective ways to do passive recon is to read... yep... just read the site. The things we are looking for are:
- Version information, what's running and what is the version number
- Site links; like how extensive the site is, do they have contact emails, or maybe an admin login page?
- The url; is it php or html? What file extensions are used?
We can dig a little deeper too:
- Scripts; what JavaScript is running on the site? Does it have session cookies?
- Comments; in the HTML or in the JavaScript, do they tell you anything about the way the site was developed or how it could be vulnerable?
- robots.txt
The reason this is considered passive recon is that up to this point we are doing nothing more than what a normal user might do on any website, while leaving nothing particularly discernible on the target network/machine, and so avoiding rule based detection methods.
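As a rough sketch, here is what some of that "just reading" can look like from the command line; these are still only the requests a normal browser would make, and the URLs assume the DVWA instance from the setup above:
# Response headers often leak the web server and PHP versions
curl -sI http://localhost/login.php
# robots.txt lists paths the developer would rather crawlers didn't index
curl -s http://localhost/robots.txt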
Getting Started: Eyeballing the Site
Our first stage in recon is to RTFM!!
- Read the pages and see if there is any information right in front of you.
- What do we see when we look at the page?? version info??
- What about common extensions like /robots.txt ?? /cgi-bin ?? /index.php ??
Things we collect:
- version information (for vulnerability checking)
- service information, what services are running so we know what attacks might work
- Entry boxes, can we enter information that could be mangled in some way
- Javascript, what scripts are running on the pages
- Comments, sometimes comments are left in the html and the javascript that will let you know there is a vulnerability there
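A quick, hedged example of pulling scripts and comments out of a page's source with curl and grep (login.php is just the DVWA login page from the setup above, and the patterns are deliberately crude):
curl -s http://localhost/login.php -o login.html
# What scripts does the page pull in?
grep -iE '<script|src=' login.html
# Any comments left behind? (only catches single-line HTML comments)
grep -oE '<!--.*-->' login.html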
Recon using Tools
This is essentially the use of tools to attack a network. We've looked at nmap as a network probing tool that definitely leaves a trace on the target machine/network, but there are some other tools that we can also use on websites to automate something called "directory busting" (brute-forcing likely paths; not to be confused with the directory traversal attack, despite the similar name). Directory busting is generally considered an attack of sorts, but it still falls under the recon category because it doesn't directly exploit a target system. The tools we'll use for this are dirbuster and gobuster.
Scanning turns up more because developers often leave things in their www directory that get served but never linked from the site itself; this is why we use tools.
Dirbuster and Gobuster
Both of these tools do essentially the same thing, except that dirbuster was designed as a GUI application while gobuster was designed predominantly as a command line tool. They take a target URL and try to guess what paths might extend off of it using a dictionary list of commonly used paths. They request each guessed path and listen for the HTTP response code from the server. If they receive something in the 200 range they know the path exists and report it as a hit. As you can imagine, we are flooding the web server with a lot of requests by doing this, so it really will create a lot of network traffic, logs and noise to alert a SOC team that someone is running a scan.
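To make the idea concrete, here is a minimal sketch of what these tools are doing under the hood: loop over a wordlist, request each candidate path and check the response code. It assumes a wordlist file called words.txt and the local DVWA target; the real tools do the same thing multi-threaded and much faster:
while read -r word; do
  code=$(curl -s -o /dev/null -w "%{http_code}" "http://localhost/$word")
  # Treat 2xx (and 3xx redirects) as "this path exists"
  case "$code" in 2*|3*) echo "$code  /$word" ;; esac
done < words.txt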
Important
Don't use these tools on public-facing websites unless you have permission.
NOTE FOR THE LABS
These commands seem to have difficulty finding anything in the labs. I would give a much smaller wordlist a go so they don't run forever.
You can grab a new wordlist from
https://raw.githubusercontent.com/danielmiessler/SecLists/master/Discovery/Web-Content/common.txt
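If you don't have it locally already, curl will pull that list down for you (-O keeps the remote filename, so it saves as common.txt):
curl -sO https://raw.githubusercontent.com/danielmiessler/SecLists/master/Discovery/Web-Content/common.txt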
Dirbuster Example
dirbuster -H -t 40 -l <your wordlist> -u http://localhost -vvv
- -H : tells the program to run headlessly (leave the GUI for someone else)
- -t : the number of threads to use (default 10)
- -l : (that's a dash lowercase L), the wordlist location
- -u : the target of the attack, in our case localhost
- -x : extensions to test too
- -vvv : only one -v is needed; makes it verbose, because otherwise dirbuster gives you no indication it's doing anything
Gobuster Example
gobuster dir -u 127.0.0.1 -w <your wordlist> -t 40 -x .php,.txt,.html
Note
Older versions of Gobuster (found in Ubuntu) have slightly different syntax. Try
gobuster -u 127.0.0.1 -w <your wordlist> -t 40 -x .php,.txt,.html
- -u : target of the attack
- -w : wordlist location
- -t : threads
- -x : file extensions to test
- -v : verbosity
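If you want to keep the results for the next step rather than scraping them off the terminal, Gobuster can also write its findings to a file with -o; for example (results.txt is just an example filename):
gobuster dir -u 127.0.0.1 -w <your wordlist> -t 40 -x .php,.txt,.html -o results.txt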
Task
Try running a scan using the tools with the commands above. What do you find?
Once the scan is complete, go to each of the paths to have a closer look at what you've found.
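One way to take that closer look without clicking through everything by hand is a quick loop over the paths you found; this assumes you've saved them one per line (each starting with /) in a file called paths.txt, which is just an assumption for the sketch:
while read -r path; do
  echo "== $path =="
  # Grab the first few lines of each page to see what it is
  curl -s "http://localhost$path" | head -n 5
done < paths.txt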
Which of the tools do you prefer? Look for a tutorial to see what other information you can find using it.