
Tools for dealing with requests

As we discovered in the previous step, the main way we communicate via HTTP is through request objects.

There are several ways of generating request objects:

  • GET requests are trivial, as we can modify the parameters directly in the URL (through the browser, or with cURL / wget)
  • POST requests are a little harder, as the data must be sent inside the request body. This means that we need some way of generating a request with our data in it.

Generating request parameters

When adding data to requests, each parameter takes the form parameter=value, with the list of parameters separated by & symbols. Special characters are percent-encoded (URL encoded). If you are manually converting lots of parameters this can be a pain to do by hand, so I like to use Python's urllib library. You can do this using the following:

>>> from urllib.parse import urlencode
>>> urlencode({"username": "dang", "password": "foobar"})
'username=dang&password=foobar'

Or as a command line one-liner

$ python3 -c "from urllib.parse import urlencode; print(urlencode({'username': 'dang', 'password': 'foobar'}))"
username=dang&password=foobar
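To see the special-character handling in action, here is a short Python 3 sketch; the example values are my own:

```python
from urllib.parse import urlencode, quote

# urlencode handles the separators and the percent-encoding in one go,
# including special characters such as spaces and ampersands.
params = {"username": "dang", "comment": "pwned & proud!"}
print(urlencode(params))   # username=dang&comment=pwned+%26+proud%21

# quote encodes a single value; spaces become %20 rather than +.
print(quote("pwned & proud!"))   # pwned%20%26%20proud%21
```

Note that the & inside the value is encoded as %26, so the server can tell it apart from the & that separates parameters.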

Mangling requests in a browser

We can use the browser's inspector tool to help us modify requests. This gives us a quick and dirty way of changing the information that is being sent to the server.

Making an HTTP request from the command line

As the requests are well specified, we can talk to the server in its own language and get information from it. Let's do this with a socket-based connection; in this case we are using netcat, but we could do something similar with sockets in Python or another language.

The following code uses netcat to make an HTTP request to http://cueh.coventry.ac.uk

$ nc -v cueh.coventry.ac.uk 80
Connection to cueh.coventry.ac.uk 80 port [tcp/http] succeeded!
GET / HTTP/1.1
Host: cueh.coventry.ac.uk

HTTP/1.1 200 OK
Date: Wed, 01 Mar 2017 13:03:43 GMT
Server: Apache/2.4.7 (Ubuntu)
Last-Modified: Tue, 10 May 2016 15:48:20 GMT
ETag: "2446-5327edab5ac1c"
Accept-Ranges: bytes
Content-Length: 9286
Vary: Accept-Encoding
Content-Type: text/html

.... SNIP

Note

If you are following along and trying the commands, you will need to type the request lines (GET and Host) yourself, and you may need to hit Enter a couple of times after the Host line to send the blank line that ends the headers.

We can break this down to map across to the HTTP request parameters described above:

  • $ nc -v cueh.coventry.ac.uk 80
    • Connect to the server (port 80) using netcat
  • Connection to cueh.coventry.ac.uk 80 port [tcp/http] succeeded!
    • Response from the server
  • GET / HTTP/1.1
    • The Request Method GET
    • The Resource /
    • The HTTP Version HTTP/1.1
  • Host: cueh.coventry.ac.uk
    • The host (or site) we are connecting to
  • HTTP/1.1 200 OK
    • A Response from the server containing the file we have requested.

So we have used a command line tool to request a website outside of a browser. While I love netcat for its versatility, it's not the most user-friendly way of doing this. There are other, dedicated tools for grabbing data from websites.
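As mentioned above, we could hold the same conversation with a raw socket instead of netcat. A minimal Python sketch of that idea is below; the timeout and the Connection: close header are my own additions (closing the connection lets us read until the server hangs up):

```python
import socket

def fetch(host, port=80, resource="/"):
    """Send a hand-built HTTP/1.1 GET over a raw TCP socket and return the reply.

    The request is the same text we typed into netcat: CRLF line endings,
    and a blank line to terminate the headers.
    """
    request = (
        "GET {} HTTP/1.1\r\n"
        "Host: {}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).format(resource, host).encode("ascii")

    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(request)
        reply = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:   # server closed the connection
                break
            reply += chunk
    return reply.decode("utf-8", errors="replace")

# The status line comes back first, e.g. "HTTP/1.1 200 OK":
# print(fetch("cueh.coventry.ac.uk").splitlines()[0])
```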

cURL

cURL is a command line tool for transferring data using URLs. It allows us to grab data from the web without needing a browser, and gives us scope for automation. cURL gives us some nice options for sending data to a server. Unfortunately, as the response is the raw data returned by the server, the output can be difficult to read (unless you like parsing HTML in your head). However, the raw format can help us with automation, as we can write scripts to extract relevant information using text manipulation tools such as grep.

Some examples of using cURL to get data from a website are shown below. We will make use of httpbin for the requests; this is a testing service that lets users check what an HTTP request is sending to the server. For each of these examples, you may want to consider how the request headers map across to those discussed in step 1.3

A basic GET request using cURL

In this example we perform a simple GET request. The URL http://httpbin.org/get will respond with the parameters that have been sent as part of the request.

$ curl http://httpbin.org/get
{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Host": "httpbin.org",
    "User-Agent": "curl/7.64.0"
  },
  "origin": "194.66.32.17, 194.66.32.17",
  "url": "https://httpbin.org/get"
}

GET request with some data

In this example, we add some parameters to the GET request, to simulate sending some data to the server as part of the request. The parameters we send are:

  • name: foo
  • passtext: bar

In our first approach we create the encoded URL by hand, but we could make use of the Python snippet shown above to help generate it. Note that we quote the URL so the shell does not treat the & as a background operator.

$ curl "http://httpbin.org/get?name=foo&passtext=bar"
{
  "args": {
    "name": "foo",
    "passtext": "bar"
  },
  "headers": {
    "Accept": "*/*",
    "Host": "httpbin.org",
    "User-Agent": "curl/7.64.0"
  },
  "origin": "194.66.32.17, 194.66.32.17",
  "url": "https://httpbin.org/get?name=foo&passtext=bar"
}
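As a sanity check, urllib can also decode a query string back into its parameters, so we can round-trip the encoding from the example above:

```python
from urllib.parse import urlencode, parse_qs

# Encode the parameters, then decode them again.
query = urlencode({"name": "foo", "passtext": "bar"})
print(query)             # name=foo&passtext=bar
print(parse_qs(query))   # {'name': ['foo'], 'passtext': ['bar']}
```

parse_qs returns a list for each parameter, because a query string may legitimately repeat the same parameter name.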

Using cURL to make POST requests

The major difference between HTTP GET and POST requests is where the data is transmitted: with POST requests, the data goes in the request body rather than in the URL.

With cURL we include data in the request body using the --data-urlencode option, which also URL-encodes the values for us. We also tell cURL we are making a POST request by including the -X POST flag.

We make a POST request with the same data to http://httpbin.org/post, and can see the parameters we have specified included in the reply.

$ curl -X POST --data-urlencode "name=foo" --data-urlencode "passtext=bar" http://httpbin.org/post
{
  "args": {},
  "data": "",
  "files": {},
  "form": {
    "name": "foo",
    "passtext": "bar"
  },
  "headers": {
    "Accept": "*/*",
    "Content-Length": "21",
    "Content-Type": "application/x-www-form-urlencoded",
    "Host": "httpbin.org",
    "User-Agent": "curl/7.64.0"
  },
  "json": null,
  "origin": "194.66.32.17, 194.66.32.17",
  "url": "https://httpbin.org/post"
}
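Note the Content-Length: 21 header in the reply: a POST body is just the same parameter=value&... string we would otherwise have appended to the URL, as we can confirm with urllib:

```python
from urllib.parse import urlencode

# The POST body is the same encoded string we would have put in the URL.
body = urlencode({"name": "foo", "passtext": "bar"})
print(body)        # name=foo&passtext=bar
print(len(body))   # 21 -- matching the Content-Length header above
```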

We can also use cURL to make other types of request (such as HEAD or OPTIONS), and I would recommend practising making different types of request with it. While there are lots of great cURL cheat sheets available on the web, you can't beat having your own cheat sheet for a tool.

Python Requests Library

cURL is a great tool for grabbing web data from the command line, and it has the advantage of being available on most Linux systems. However, it can be a bit clunky when you are trying to write scripts that deal with web data.

The python-requests library (http://docs.python-requests.org/en/master/) allows us to make HTTP requests from Python. This lets us write programs that automate tasks on the web, saving time and effort when we need to repeat a task.

Tip

Requests is an amazing tool for dealing with any online content; it's useful during pentests, and I know from personal experience it's invaluable during CTF competitions. Being able to automate flag capture from web services, and reuse old exploits I have put together, has saved me hours.

Making GET requests

The code below shows an interactive Python session making a GET request to httpbin.org

>>> import requests
>>> r = requests.get("http://httpbin.org/")  # Send the request
>>> r.status_code   # Check the status code
200
>>> print(r.text)   # Print the returned document
<!DOCTYPE html>
<html lang="en">

<head>
    <meta charset="UTF-8">
    <title>httpbin.org</title>
.... <SNIP> ....

We can also send parameters with our GET request using the params argument. Here we send a GET request with the parameters:

  • username: foo
  • passtext: bar
>>> import requests
>>> # Payload we are sending to the server
>>> payload = {"username": "foo", "passtext": "bar"}
>>> # Make the request itself
>>> r = requests.get("http://httpbin.org/get", params=payload)
>>> print (r.text)
{
  "args": {
    "passtext": "bar",
    "username": "foo"
  },
  "headers": {
    "Accept": "*/*",
    "Accept-Encoding": "gzip, deflate",
    "Host": "httpbin.org",
    "User-Agent": "python-requests/2.21.0"
  },
  "origin": "194.66.32.17, 194.66.32.17",
  "url": "https://httpbin.org/get?username=foo&passtext=bar"
}
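Since httpbin replies with JSON, we don't have to read r.text by eye: requests can decode it for us with r.json(). A sketch using a canned reply of the same shape as the output above (in a live session the dictionary would come straight from r.json()):

```python
import json

# A canned reply with the same shape as the output above; in a live
# session we would write: reply = r.json()
sample = """{
  "args": {"passtext": "bar", "username": "foo"},
  "url": "https://httpbin.org/get?username=foo&passtext=bar"
}"""
reply = json.loads(sample)

print(reply["args"]["username"])   # foo
print(reply["url"])                # https://httpbin.org/get?username=foo&passtext=bar
```

This is what makes scripting against services like httpbin so convenient: we pull fields out of a dictionary instead of grepping raw text.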

Making POST requests

POST requests are made in a similar way, but we use the post function.

Tip

Also note the change in argument name from params to data; this trips me up quite often, as I forget to change it.

The following code demonstrates a POST with the python requests library:

>>> # Payload we are sending to the server
>>> payload = {"username": "foo", "passtext": "bar"}
>>> # POST request this time
>>> r = requests.post("http://httpbin.org/post", data=payload)

>>> print(r.text)
{
  "args": {},
  "data": "",
  "files": {},
  "form": {
    "passtext": "bar",
    "username": "foo"
  },
  "headers": {
    "Accept": "*/*",
    "Accept-Encoding": "gzip, deflate",
    "Content-Length": "25",
    "Content-Type": "application/x-www-form-urlencoded",
    "Host": "httpbin.org",
    "User-Agent": "python-requests/2.21.0"
  },
  "json": null,
  "origin": "194.66.32.17, 194.66.32.17",
  "url": "https://httpbin.org/post"
}
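Putting this together, the kind of automation mentioned in the tip earlier might look something like the sketch below. The target URL, field names, and success marker are all hypothetical; you would substitute your own lab values.

```python
def try_password(url, username, password):
    """POST one username/password pair and return the response."""
    import requests  # deferred import: only needed when we actually hit the network
    return requests.post(url, data={"username": username, "passtext": password})

def find_password(url, username, candidates, success_marker="Welcome"):
    """Try each candidate password; return the first that looks successful."""
    for password in candidates:
        response = try_password(url, username, password)
        if success_marker in response.text:
            return password
    return None

# Hypothetical usage against a lab target:
# find_password("http://target.lab/login", "admin", ["letmein", "foobar"])
```

A wordlist and a loop replace hours of manual form filling, which is exactly the sort of repetition requests is good at removing.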

Video: Making Requests with Python Requests

Other Tools

Other tools exist for generating and/or modifying requests, such as Burp Suite (an industry standard tool) or in-browser extensions such as Tamper Chrome (a Chrome extension I am quite keen on).

Tools you like to use #newtools

What tools do you know about, like, or use, for generating and/or modifying requests? Conduct some research, and use your own experience if you have any, to talk to each other about why you like (or dislike) these. You might like to start by looking at the ones listed under 'further reading'.

Using Postman

Postman gives us another way to play around with the data we send to the server.

Using Burp Suite

Task

While I love using requests for automation, Burp Suite is a pretty useful tool for dealing with HTTP. Using the inspector tool, we can see the data that gets forwarded as part of the request.

We will use Burp for the lab tasks, but it's worth having a read here.

Further reading
